Mar 6, 2013 · Abstract: We propose a new method to refine the result of video annotation by exploiting the semantic and visual context of video.
This is very useful for boosting video annotation performance, because the semantic context is learned from labels given by people and thus reflects human intention. In ...
A new method to refine the result of video annotation by exploiting the semantic and visual context of video, using conditional random fields with ...
Jian Yi, Yuxin Peng, Jianguo Xiao: Exploiting Semantic and Visual Context for Effective Video Annotation. IEEE Trans. Multim. 15(6): 1400-1414 (2013).
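The snippets above only hint at how conditional random fields are applied. As a rough, hypothetical illustration of combining semantic and visual context to refine initial per-frame annotation scores, the sketch below runs a simplified mean-field-style update over a fully connected pairwise model. The function name, parameters, and the specific potentials are assumptions for illustration and do not reproduce the formulation in the cited paper.

```python
import numpy as np

def refine_annotations(unary_scores, frame_features, label_cooccurrence,
                       n_iters=10, pairwise_weight=1.0, bandwidth=1.0):
    """Hypothetical mean-field-style refinement of per-frame annotation scores.

    unary_scores       : (n_frames, n_labels) initial detector scores in [0, 1]
    frame_features     : (n_frames, feat_dim) visual features per frame
    label_cooccurrence : (n_labels, n_labels) semantic co-occurrence statistics
    """
    # Visual context: Gaussian affinity between frame features.
    sq_dists = ((frame_features[:, None, :] - frame_features[None, :, :]) ** 2).sum(-1)
    affinity = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    np.fill_diagonal(affinity, 0.0)

    # Semantic context: turn co-occurrence counts into a label compatibility
    # matrix (frequent co-occurrence -> lower penalty for joint assignment).
    compat = -label_cooccurrence / (label_cooccurrence.max() + 1e-9)

    # Unary potentials from the initial scores.
    unary = -np.log(np.clip(unary_scores, 1e-6, 1.0))

    # Mean-field iterations over the fully connected pairwise model.
    q = np.exp(-unary)
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        message = affinity @ q        # aggregate neighbouring frames' beliefs (visual context)
        pairwise = message @ compat   # weight them by label compatibility (semantic context)
        q = np.exp(-(unary + pairwise_weight * pairwise))
        q /= q.sum(axis=1, keepdims=True)
    return q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.random((20, 5))          # 20 frames, 5 candidate labels
    feats = rng.normal(size=(20, 8))      # toy visual features
    cooc = rng.integers(0, 50, (5, 5))    # toy co-occurrence counts
    print(refine_annotations(scores, feats, cooc).round(3))
```

In this sketch the visual affinity smooths labels across visually similar frames while the co-occurrence-based compatibility encourages labels that tend to appear together; both terms are placeholders for whatever semantic and visual context models the actual method learns.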
• NYU is a video dataset of 74 annotated frames with 11 semantic labels, captured from a hand-held camera. The initial scene parsing maps are generated from a deep ...
By interacting with these distinct visual semantics, the audio-visual collaboration is strengthened and the deep model can develop more comprehensive scene ...
In this paper we demonstrate that employing visual world semantics can support the automation of visual tasks better than learning visual cues directly from ...
Abstract. The rapid proliferation of video recording devices has led to an explosion of content, driving an ever-increasing interest towards the ...
ABSTRACT: Image annotation is an active field of research that serves as a precursor to video annotation. With the increase in ...