Computer Vision and Image Understanding, Volume 243, 2024
- Shuang Liang, Zhiqi Yan, Chi Xie, Hongming Zhu, Jiewen Wang: Scribble-based complementary graph reasoning network for weakly supervised salient object detection. 103977
- Matteo Dunnhofer, Christian Micheloni: Visual tracking in camera-switching outdoor sport videos: Benchmark and baselines for skiing. 103978
- Kaiyu Guo, Brian C. Lovell: Domain-aware triplet loss in domain generalization. 103979
- Yoli Shavit, Ron Ferens, Yosi Keller: Learning single and multi-scene camera pose regression with transformer encoders. 103982
- Lu Zhou, Yingying Chen, Jinqiao Wang: SlowFastFormer for 3D human pose estimation. 103992
- Wei Huo, Ke Wang, Jun Tang, Nian Wang, Dong Liang: GaitSCM: Causal representation learning for gait recognition. 103995
- Zhanfei Chen, Xuelong Si, Dan Wu, Fengnian Tian, Zhenxing Zheng, Renfu Li: A novel camera calibration method based on known rotations and translations. 103996
- Ed Pizzi, Giorgos Kordopatis-Zilos, Hiral Patel, Gheorghe Postelnicu, Sugosh Nagavara Ravindra, Akshay Gupta, Symeon Papadopoulos, Giorgos Tolias, Matthijs Douze: The 2023 video similarity dataset and challenge. 103997
- Guilin Yao, Anming Sun: Multi-guided-based image matting via boundary detection. 103998
- Federico Figari Tomenotti, Nicoletta Noceti, Francesca Odone: Head pose estimation with uncertainty and an application to dyadic interaction detection. 103999
- Yugang Liao, Jun-qing Li, Shuwei Wei, Xiumei Xiao: Evolutionary Search via channel attention based parameter inheritance and stochastic uniform sampled training. 104000
- Xiaoyu Geng, Qiang Guo, Shuaixiong Hui, Ming Yang, Caiming Zhang: Tensor robust PCA with nonconvex and nonlocal regularization. 104007