DOI: 10.1145/3313831.3376702
CHI Conference Proceedings | Research Article

Acoustic Transparency and the Changing Soundscape of Auditory Mixed Reality

Published: 23 April 2020

Abstract

Auditory headsets capable of actively or passively intermixing real and virtual sounds are, in part, acoustically transparent. This paper explores the consequences of acoustic transparency, both for the perception of virtual audio content against a real-world auditory backdrop and, more broadly, for facilitating a wearable, personal, private, always-available soundspace. We experimentally compare passively acoustically transparent and active noise-cancelling orientation-tracked auditory headsets across a range of content types, both indoors and outdoors for validity. Our results show differences in presence, realness and externalization for select content types. Via interviews and a survey, we discuss attitudes toward acoustic transparency (e.g. its being perceived as safer) and the potential shifts in audio usage that adoption might precipitate, and reflect on how such headsets and experiences fit within the area of Mixed Reality.

Supplementary Material

MP4 File (paper573vf.mp4): Supplemental video
MP4 File (paper573pv.mp4): Preview video
MP4 File (a573-mcgill-presentation.mp4): Presentation video



Published In

CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020, 10688 pages
ISBN: 9781450367080
DOI: 10.1145/3313831

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. acoustic transparency
    2. audio
    3. mixed reality


    Funding Sources

    • European Research Council (ERC) Horizon 2020 (ViAjeRo)

Conference

CHI '20
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

