DOI: 10.1145/3434074.3447278
Extended abstract

Towards Visual Dialogue for Human-Robot Interaction

Published: 08 March 2021

Abstract

The goal of the EU H2020-ICT funded SPRING project is to develop a socially pertinent robot to carry out tasks in a gerontological healthcare unit. In this context, the ability to perceive its environment and to hold coherent and relevant conversations about the surrounding world is critical. In this paper, we describe current progress towards developing the integrated visual and conversational capabilities a robot needs to operate in such environments. Concretely, we introduce an architecture for conversing about objects and other entities present in an environment. The work described in this paper has applications that extend well beyond healthcare and can be applied to any robot that must interact with its visual and spatial environment in order to perform its duties.
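The abstract itself contains no code, but its core step, grounding a user's question about the scene in the output of an object detector, can be illustrated with a minimal, self-contained Python sketch. All names here (`Detection`, `answer_presence_question`, the confidence threshold) are hypothetical stand-ins for the project's actual components, not the SPRING implementation:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """One object hypothesis produced by the robot's vision module."""
    label: str
    confidence: float


def answer_presence_question(detections: List[Detection], topic: str,
                             threshold: float = 0.5) -> str:
    """Ground a 'do you see an X?' question in the current detections.

    Keeps only detections whose label matches the topic and whose
    confidence clears the threshold, then phrases a natural answer.
    """
    matches = [d for d in detections
               if d.label == topic and d.confidence >= threshold]
    if not matches:
        return f"No, I cannot see a {topic} right now."
    count = len(matches)
    noun = topic if count == 1 else topic + "s"
    return f"Yes, I can see {count} {noun}."
```

For example, with two confident chair detections in view, `answer_presence_question(scene, "chair")` returns "Yes, I can see 2 chairs." A real system would draw the detections from a segmentation model such as those cited in the paper and route the question through the dialogue manager; this sketch only shows the grounding step.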

Supplementary Material

MP4 File (hrivd1012vf.mp4)
Supplemental video



Published In

HRI '21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
March 2021, 756 pages
ISBN: 9781450382908
DOI: 10.1145/3434074
General Chairs: Cindy Bethel, Ana Paiva
Program Chairs: Elizabeth Broadbent, David Feil-Seifer, Daniel Szafir

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. conversational systems
  2. robotics
  3. visual dialogue


Conference

HRI '21
Overall Acceptance Rate: 192 of 519 submissions, 37%

