Tim K. Marks
2020 – today
- 2024
- [c42] Rahul Dey, Bernhard Egger, Vishnu Naresh Boddeti, Ye Wang, Tim K. Marks: CoLa-SDF: Controllable Latent StyleSDF for Disentangled 3D Face Generation. CVPR Workshops 2024: 2852-2861
- [c41] Haomiao Ni, Bernhard Egger, Suhas Lohit, Anoop Cherian, Ye Wang, Toshiaki Koike-Akino, Sharon X. Huang, Tim K. Marks: TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models. CVPR 2024: 9015-9025
- [c40] Zeyuan Yang, Jiageng Lin, Peihao Chen, Anoop Cherian, Tim K. Marks, Jonathan Le Roux, Chuang Gan: RILA: Reflective and Imaginative Language Agent for Zero-Shot Semantic Audio-Visual Navigation. CVPR 2024: 16251-16261
- [i18] Haomiao Ni, Bernhard Egger, Suhas Lohit, Anoop Cherian, Ye Wang, Toshiaki Koike-Akino, Sharon X. Huang, Tim K. Marks: TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models. CoRR abs/2404.16306 (2024)
- [i17] Jie Yin, Andrew Luo, Yilun Du, Anoop Cherian, Tim K. Marks, Jonathan Le Roux, Chuang Gan: Disentangled Acoustic Fields For Multimodal Physical Scene Understanding. CoRR abs/2407.11333 (2024)
- 2023
- [c39] Nithin Gopalakrishnan Nair, Anoop Cherian, Suhas Lohit, Ye Wang, Toshiaki Koike-Akino, Vishal M. Patel, Tim K. Marks: Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis. ICCV 2023: 20793-20803
- [c38] Vineet R. Shenoy, Tim K. Marks, Hassan Mansour, Suhas Lohit: Unrolled iPPG: Video Heart Rate Estimation via Unrolling Proximal Gradient Descent. ICIP 2023: 2715-2719
- [c37] Kei Ota, Hsiao-Yu Tung, Kevin A. Smith, Anoop Cherian, Tim K. Marks, Alan Sullivan, Asako Kanezaki, Joshua B. Tenenbaum: H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding Object Articulations from Interactions. ICRA 2023: 7272-7278
- [c36] Anoop Cherian, Siddarth Jain, Tim K. Marks, Alan Sullivan: Discriminative 3D Shape Modeling for Few-Shot Instance Segmentation. ICRA 2023: 9296-9302
- [d1] Safa C. Medin, Bernhard Egger, Anoop Cherian, Ye Wang, Joshua B. Tenenbaum, Xiaoming Liu, Tim K. Marks: MOST-GAN Pre-trained Model. Zenodo, 2023
- [i16] Nithin Gopalakrishnan Nair, Anoop Cherian, Suhas Lohit, Ye Wang, Toshiaki Koike-Akino, Vishal M. Patel, Tim K. Marks: Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis. CoRR abs/2310.00224 (2023)
- 2022
- [j6] Ewa Magdalena Nowara, Tim K. Marks, Hassan Mansour, Ashok Veeraraghavan: Near-Infrared Imaging Photoplethysmography During Driving. IEEE Trans. Intell. Transp. Syst. 23(4): 3589-3600 (2022)
- [c35] Anoop Cherian, Chiori Hori, Tim K. Marks, Jonathan Le Roux: (2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering. AAAI 2022: 444-453
- [c34] Safa C. Medin, Bernhard Egger, Anoop Cherian, Ye Wang, Joshua B. Tenenbaum, Xiaoming Liu, Tim K. Marks: MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation. AAAI 2022: 1962-1971
- [c33] Ankit P. Shah, Shijie Geng, Peng Gao, Anoop Cherian, Takaaki Hori, Tim K. Marks, Jonathan Le Roux, Chiori Hori: Audio-Visual Scene-Aware Dialog and Reasoning Using Audio-Visual Transformers with Joint Student-Teacher Learning. ICASSP 2022: 7732-7736
- [i15] Anoop Cherian, Chiori Hori, Tim K. Marks, Jonathan Le Roux: (2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering. CoRR abs/2202.09277 (2022)
- [i14] Kei Ota, Hsiao-Yu Tung, Kevin A. Smith, Anoop Cherian, Tim K. Marks, Alan Sullivan, Asako Kanezaki, Joshua B. Tenenbaum: H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding Object Articulations from Interactions. CoRR abs/2210.12521 (2022)
- 2021
- [j5] Seokhwan Kim, Michel Galley, R. Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, Minlie Huang, Luis A. Lastras, Jonathan K. Kummerfeld, Walter S. Lasecki, Chiori Hori, Anoop Cherian, Tim K. Marks, Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta: Overview of the Eighth Dialog System Technology Challenge: DSTC8. IEEE ACM Trans. Audio Speech Lang. Process. 29: 2529-2540 (2021)
- [c32] Anoop Cherian, Goncalo Dias Pais, Siddarth Jain, Tim K. Marks, Alan Sullivan: InSeGAN: A Generative Approach to Segmenting Identical Instances in Depth Images. ICCV 2021: 10003-10012
- [c31] Armand Comas, Tim K. Marks, Hassan Mansour, Suhas Lohit, Yechi Ma, Xiaoming Liu: Turnip: Time-Series U-Net With Recurrence For NIR Imaging PPG. ICIP 2021: 309-313
- [i13] Anoop Cherian, Goncalo Dias Pais, Siddarth Jain, Tim K. Marks, Alan Sullivan: InSeGAN: A Generative Approach to Segmenting Identical Instances in Depth Images. CoRR abs/2108.13865 (2021)
- [i12] Ankit P. Shah, Shijie Geng, Peng Gao, Anoop Cherian, Takaaki Hori, Tim K. Marks, Jonathan Le Roux, Chiori Hori: Audio-Visual Scene-Aware Dialog and Reasoning using Audio-Visual Transformers with Joint Student-Teacher Learning. CoRR abs/2110.06894 (2021)
- [i11] Safa C. Medin, Bernhard Egger, Anoop Cherian, Ye Wang, Joshua B. Tenenbaum, Xiaoming Liu, Tim K. Marks: MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation. CoRR abs/2111.01048 (2021)
- 2020
- [j4] Luis Fernando D'Haro, Koichiro Yoshino, Chiori Hori, Tim K. Marks, Lazaros Polymenakos, Jonathan K. Kummerfeld, Michel Galley, Xiang Gao: Overview of the seventh Dialog System Technology Challenge: DSTC7. Comput. Speech Lang. 62: 101068 (2020)
- [c30] Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Ye Wang, Michael Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng: LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood. CVPR 2020: 8233-8243
- [c29] Anoop Cherian, Jue Wang, Chiori Hori, Tim K. Marks: Spatio-Temporal Ranked-Attention Networks for Video Captioning. WACV 2020: 1606-1615
- [c28] Rui Huang, Wenju Xu, Teng-Yok Lee, Anoop Cherian, Ye Wang, Tim K. Marks: FX-GAN: Self-Supervised GAN Learning via Feature Exchange. WACV 2020: 3183-3191
- [i10] Anoop Cherian, Jue Wang, Chiori Hori, Tim K. Marks: Spatio-Temporal Ranked-Attention Networks for Video Captioning. CoRR abs/2001.06127 (2020)
- [i9] Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Ye Wang, Michael J. Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng: LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood. CoRR abs/2004.02980 (2020)
2010 – 2019
- 2019
- [c27] Huda AlAmri, Vincent Cartillier, Abhishek Das, Jue Wang, Anoop Cherian, Irfan Essa, Dhruv Batra, Tim K. Marks, Chiori Hori, Peter Anderson, Stefan Lee, Devi Parikh: Audio Visual Scene-Aware Dialog. CVPR 2019: 7558-7567
- [c26] Chiori Hori, Huda AlAmri, Jue Wang, Gordon Wichern, Takaaki Hori, Anoop Cherian, Tim K. Marks, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, Irfan Essa, Dhruv Batra, Devi Parikh: End-to-end Audio Visual Scene-aware Dialog Using Multimodal Attention-based Video Features. ICASSP 2019: 2352-2356
- [c25] Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Chen Feng, Xiaoming Liu: UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss. ICCV Workshops 2019: 778-782
- [c24] Yuanwei Wu, Tim K. Marks, Anoop Cherian, Siheng Chen, Chen Feng, Guanghui Wang, Alan Sullivan: Unsupervised Joint 3D Object Model Learning and 6D Pose Estimation for Depth-Based Instance Segmentation. ICCV Workshops 2019: 2777-2786
- [c23] Chiori Hori, Anoop Cherian, Tim K. Marks, Takaaki Hori: Joint Student-Teacher Learning for Audio-Visual Scene-Aware Dialog. INTERSPEECH 2019: 1886-1890
- [i8] Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D'Haro, Lazaros Polymenakos, R. Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, Dhruv Batra: Dialog System Technology Challenge 7. CoRR abs/1901.03461 (2019)
- [i7] Huda AlAmri, Vincent Cartillier, Abhishek Das, Jue Wang, Stefan Lee, Peter Anderson, Irfan Essa, Devi Parikh, Dhruv Batra, Anoop Cherian, Tim K. Marks, Chiori Hori: Audio-Visual Scene-Aware Dialog. CoRR abs/1901.09107 (2019)
- [i6] Seokhwan Kim, Michel Galley, R. Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, Minlie Huang, Luis A. Lastras, Jonathan K. Kummerfeld, Walter S. Lasecki, Chiori Hori, Anoop Cherian, Tim K. Marks, Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta: The Eighth Dialog System Technology Challenge. CoRR abs/1911.06394 (2019)
- 2018
- [c22] Ewa Magdalena Nowara, Tim K. Marks, Hassan Mansour, Ashok Veeraraghavan: SparsePPG: Towards Driver Monitoring Using Camera-Based Vital Signs Estimation in Near-Infrared. CVPR Workshops 2018: 1272-1281
- [c21] Chiori Hori, Takaaki Hori, Gordon Wichern, Jue Wang, Teng-Yok Lee, Anoop Cherian, Tim K. Marks: Multimodal Attention for Fusion of Audio and Spatiotemporal Features for Video Description. CVPR Workshops 2018: 2528-2531
- [i5] Varun Manjunatha, Srikumar Ramalingam, Tim K. Marks, Larry S. Davis: Class Subset Selection for Transfer Learning using Submodularity. CoRR abs/1804.00060 (2018)
- [i4] Huda AlAmri, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, Jue Wang, Irfan Essa, Dhruv Batra, Devi Parikh, Anoop Cherian, Tim K. Marks, Chiori Hori: Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7. CoRR abs/1806.00525 (2018)
- [i3] Chiori Hori, Huda AlAmri, Jue Wang, Gordon Wichern, Takaaki Hori, Anoop Cherian, Tim K. Marks, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, Irfan Essa, Dhruv Batra, Devi Parikh: End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features. CoRR abs/1806.08409 (2018)
- 2017
- [c20] Chiori Hori, Takaaki Hori, Tim K. Marks, John R. Hershey: Early and late integration of audio features for automatic video description. ASRU 2017: 430-436
- [c19] Chiori Hori, Takaaki Hori, Teng-Yok Lee, Ziming Zhang, Bret Harsham, John R. Hershey, Tim K. Marks, Kazuhiro Sumi: Attention-Based Multimodal Fusion for Video Description. ICCV 2017: 4203-4212
- [i2] Chiori Hori, Takaaki Hori, Teng-Yok Lee, Kazuhiro Sumi, John R. Hershey, Tim K. Marks: Attention-Based Multimodal Fusion for Video Description. CoRR abs/1701.03126 (2017)
- 2016
- [c18] Bharat Singh, Tim K. Marks, Michael J. Jones, Oncel Tuzel, Ming Shao: A Multi-stream Bi-directional Recurrent Neural Network for Fine-Grained Action Detection. CVPR 2016: 1961-1970
- [c17] Oncel Tuzel, Tim K. Marks, Salil Tambe: Robust Face Alignment Using a Mixture of Invariant Experts. ECCV (5) 2016: 825-841
- [c16] Toshiaki Koike-Akino, Ruhi Mahajan, Tim K. Marks, Ye Wang, Shinji Watanabe, Oncel Tuzel, Philip V. Orlik: High-accuracy user identification using EEG biometrics. EMBC 2016: 854-858
- 2015
- [c15] Ejaz Ahmed, Michael J. Jones, Tim K. Marks: An improved deep learning architecture for person re-identification. CVPR 2015: 3908-3916
- [c14] Chavdar Papazov, Tim K. Marks, Michael J. Jones: Real-time 3D head pose and facial landmark estimation from depth images using triangular surface patch features. CVPR 2015: 4722-4730
- [i1] Oncel Tuzel, Salil Tambe, Tim K. Marks: Robust Face Alignment Using a Mixture of Invariant Experts. CoRR abs/1511.04404 (2015)
- 2014
- [c13] Suren Kumar, Tim K. Marks, Michael J. Jones: Improving Person Tracking Using an Inexpensive Thermal Infrared Sensor. CVPR Workshops 2014: 217-224
- [c12] Hao Tang, Shinji Watanabe, Tim K. Marks, John R. Hershey: Log-linear dialog manager. ICASSP 2014: 4092-4096
- [c11] Shinji Watanabe, John R. Hershey, Tim K. Marks, Youichi Fujii, Yusuke Koji: Cost-level integration of statistical and rule-based dialog managers. INTERSPEECH 2014: 323-327
- [c10] Jie Ni, Tim K. Marks, Oncel Tuzel, Fatih Porikli: Detecting 3D geometric boundaries of indoor scenes under varying lighting. WACV 2014: 1-8
- 2012
- [j3] Ming-Yu Liu, Oncel Tuzel, Ashok Veeraraghavan, Yuichi Taguchi, Tim K. Marks, Rama Chellappa: Fast object localization and pose estimation in heavy clutter for robotic bin picking. Int. J. Robotics Res. 31(8): 951-973 (2012)
- 2011
- [c9] Akshay Asthana, Michael J. Jones, Tim K. Marks, Kinh H. Tieu, Roland Goecke: Pose Normalization via Learned 2D Warping for Fully Automatic Face Recognition. BMVC 2011: 1-11
- [c8] Akshay Asthana, Tim K. Marks, Michael J. Jones, Kinh H. Tieu, M. V. Rohith: Fully automatic pose-invariant face recognition via 3D pose normalization. ICCV 2011: 937-944
- [c7] Yuichi Taguchi, Tim K. Marks, John R. Hershey: Entropy-based motion selection for touch-based registration using Rao-Blackwellized particle filtering. IROS 2011: 4690-4697
- 2010
- [j2] Tim K. Marks, John R. Hershey, Javier R. Movellan: Tracking Motion, Deformation, and Texture Using Conditionally Gaussian Processes. IEEE Trans. Pattern Anal. Mach. Intell. 32(2): 348-363 (2010)
- [c6] Ritwik Kumar, Michael J. Jones, Tim K. Marks: Morphable Reflectance Fields for enhancing face recognition. CVPR 2010: 2606-2613
- [c5] Srikumar Ramalingam, Yuichi Taguchi, Tim K. Marks, Oncel Tuzel: P2Pi: A Minimal Solution for Registration of 3D Points to 3D Planes. ECCV (5) 2010: 436-449
- [c4] Yuichi Taguchi, Tim K. Marks, Haruhisa Okuda: Rao-Blackwellized particle filtering for probing-based 6-DOF localization in robotic assembly. ICRA 2010: 2610-2617
2000 – 2009
- 2009
- [j1] Tim K. Marks, Andrew Howard, Max Bajracharya, Garrison W. Cottrell, Larry H. Matthies: Gamma-SLAM: Visual SLAM in unstructured environments using variance grid maps. J. Field Robotics 26(1): 26-51 (2009)
- 2008
- [c3] Tim K. Marks, Andrew Howard, Max Bajracharya, Garrison W. Cottrell, Larry H. Matthies: Gamma-SLAM: Using stereo vision and variance grid maps for SLAM in unstructured environments. ICRA 2008: 3717-3724
- 2006
- [b1] Tim K. Marks: Facing uncertainty: 3D face tracking and learning with generative models. University of California, San Diego, USA, 2006
- 2004
- [c2] Tim K. Marks, John R. Hershey, J. Cooper Roddey, Javier R. Movellan: 3D Tracking of Morphable Objects Using Conditionally Gaussian Nonlinear Filters. CVPR Workshops 2004: 190
- [c1] Tim K. Marks, John R. Hershey, J. Cooper Roddey, Javier R. Movellan: Joint Tracking of Pose, Expression, and Texture using Conditionally Gaussian Filters. NIPS 2004: 889-896