
Technologies for Social Augmentations in User-Embodied Virtual Reality

Published: 12 November 2019

Abstract

Technologies for Virtual, Mixed, and Augmented Reality (VR, MR, and AR) make it possible to artificially augment social interactions and thus to go beyond what is possible in real life. Motivations for using social augmentations are manifold: for example, to synthesize behavior when sensory input is missing, to provide additional affordances in shared environments, or to support the inclusion and training of individuals with social communication disorders. We review and categorize augmentation approaches and propose a software architecture based on four data layers. Three further components handle the status analysis, the modification, and the blending of behaviors. We present a prototype (injectX) that supports behavior tracking (body motion, eye gaze, and facial expressions of the lower face), status analysis, decision-making, augmentation, and behavior blending in immersive interactions. Along with a critical reflection, we consider further technical and ethical aspects.
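
To make the described pipeline concrete, the following minimal Python sketch mirrors the flow the abstract outlines: tracked behavior passes through status analysis, a modification step (decision-making and behavior synthesis), and a blending step before it drives the avatar. All class names, fields, and thresholds here are hypothetical illustrations, not the paper's actual identifiers; a single facial action unit stands in for full body, gaze, and face tracking.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class BehaviorFrame:
        """One time step of tracked nonverbal behavior (hypothetical fields)."""
        gaze_dir: Vec3 = (0.0, 0.0, 1.0)                            # unit gaze vector
        facial_aus: Dict[str, float] = field(default_factory=dict)  # action unit -> intensity in [0, 1]

    def lerp(a: float, b: float, t: float) -> float:
        return a + (b - a) * t

    class StatusAnalysis:
        """Component 1: derive an interaction status from tracked behavior."""
        def analyze(self, frame: BehaviorFrame) -> Dict[str, bool]:
            # Example status: is the user smiling (AU12 = lip corner puller)?
            return {"smiling": frame.facial_aus.get("AU12", 0.0) > 0.2}

    class Modification:
        """Component 2: decide on and synthesize an augmented target behavior."""
        def modify(self, frame: BehaviorFrame, status: Dict[str, bool]) -> BehaviorFrame:
            target = BehaviorFrame(frame.gaze_dir, dict(frame.facial_aus))
            if status["smiling"]:
                # Amplify the tracked smile as one simple social augmentation.
                target.facial_aus["AU12"] = min(1.0, target.facial_aus["AU12"] * 1.5)
            return target

    class Blending:
        """Component 3: blend tracked and synthesized behavior for rendering."""
        def blend(self, tracked: BehaviorFrame, synthesized: BehaviorFrame,
                  weight: float = 0.5) -> BehaviorFrame:
            out = BehaviorFrame(tracked.gaze_dir, {})
            for au in set(tracked.facial_aus) | set(synthesized.facial_aus):
                out.facial_aus[au] = lerp(tracked.facial_aus.get(au, 0.0),
                                          synthesized.facial_aus.get(au, 0.0), weight)
            return out

    # Per-frame flow: track -> analyze -> modify -> blend -> render on the avatar.
    tracked = BehaviorFrame(facial_aus={"AU12": 0.4})
    status = StatusAnalysis().analyze(tracked)
    augmented = Modification().modify(tracked, status)
    blended = Blending().blend(tracked, augmented, weight=0.8)
    print(blended.facial_aus)  # {'AU12': 0.56}

A real system would run these stages continuously for every rendered frame and blend full skeletal, gaze, and facial state rather than a single action unit; the linear interpolation used here is only one of several plausible blending strategies.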



Published In

VRST '19: Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology
November 2019
498 pages
ISBN: 9781450370011
DOI: 10.1145/3359996
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. artificial intelligence
  2. augmented social interaction
  3. virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

VRST '19: 25th ACM Symposium on Virtual Reality Software and Technology
November 12-15, 2019
Parramatta, NSW, Australia

Acceptance Rates

Overall Acceptance Rate 66 of 254 submissions, 26%

