Long-term recurrent convolutional networks for visual recognition and description

J Donahue, L Anne Hendricks, et al. - Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015 - openaccess.thecvf.com
Abstract
Models composed of deep convolutional network layers have dominated recent image interpretation tasks; we investigate whether models that are also compositional, or "deep", temporally are effective on tasks involving visual sequences or label sequences. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning that is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image-to-sentence generation problems, and video narration challenges. In contrast to current models, which assume a fixed spatio-temporal receptive field or perform simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (i.e., video frames) to variable-length outputs (i.e., natural language text) and can model complex temporal dynamics, yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to state-of-the-art visual convnet models and can be jointly trained, updating temporal dynamics and convolutional perceptual representations simultaneously. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
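To make the "doubly deep" idea concrete, below is a minimal sketch of an LRCN-style model in PyTorch: a convnet extracts per-frame features, a recurrent (LSTM) layer models temporal dynamics, and both live in one module so gradients flow end to end. The backbone (ResNet-18), hidden size, and class count here are illustrative assumptions, not the paper's configuration (the original work used AlexNet/CaffeNet-era convnets and Caffe).

import torch
import torch.nn as nn
import torchvision.models as models

class LRCN(nn.Module):
    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)     # assumed stand-in backbone
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                  # keep per-frame visual features
        self.cnn = backbone
        # Temporal "layer": LSTM state updates with nonlinearities,
        # enabling long-term dependencies over the frame sequence.
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width); time can vary per clip
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))       # (b*t, feat_dim), spatial depth
        feats = feats.view(b, t, -1)                 # restore the time axis
        outputs, _ = self.lstm(feats)                # temporal depth
        return self.classifier(outputs)              # per-step label scores

model = LRCN(num_classes=101)                        # e.g., an activity-label vocabulary
clip = torch.randn(2, 16, 3, 224, 224)               # two 16-frame clips
logits = model(clip)                                 # shape (2, 16, 101)

Because the convnet and the LSTM sit in a single module, a standard training loop updates the convolutional perceptual representation and the temporal dynamics jointly, which is the end-to-end property the abstract contrasts with separately defined and optimized recognition and generation models.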