Research-article

Commercialization of multimodal systems

Published: 01 July 2019

References

[1]
S. Al Moubayed, J. Beskow, G. Skantze, and B. Granström. 2012. Furhat: a back-projected human-like robot head for multiparty human-machine interaction. Cognitive Behavioural Systems. Lecture Notes in Computer Science. Springer. 633
[2]
M. Allodi, A. Broggi, D. Giaquinto, M. Patander, and A. Prioletti. 2016. Machine learning in tracking associations with stereo vision and lidar observations for an autonomous vehicle. In IEEE Intelligent Vehicles Symposium (IV), 2016, pp. 648--653. 625, 626, 777
[3]
E. Alpaydin. 2018. Classifying multimodal data. In S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, and A. Krüger, editors, Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. Morgan & Claypool Publishers, San Rafael, CA. 636
[4]
A. Aviles-Rivero, S. M. Alsaleh, J. Philbeck, S. P. Raventos, N. Younes, J. K. Hahn, and A. Casals. August 2018. Sensory substitution for force feedback recovery: a perception experimental study. ACM Transactions on Applied Perception (TAP), 15(3). 629
[5]
J. Barnett, R. Akolkar, R. J. Auburn, M. Bodell, D. C. Burnett, J. Carter, S. McGlashan, T. Lager, M. Helbing, R. Hosn, T. V. Raman, K. Reifenrath, and N. Rosenthal. 2015. State Chart XML (SCXML): State machine notation for control abstraction. http://www.w3.org/TR/scxml/ 640
[6]
M. J. Barry, P. M. Gallagher, J. S. Skinner, and F. J. Fowler Jr. February 2012. Adverse Effects of Robotic-Assisted Laparoscopic Versus Open Retropubic Radical Prostatectomy Among a Nationwide Random Sample of Medicare-Age Men. Journal of Clinical Oncology, 30(5): 513--518. 628
[7]
B. T. Bethea, A. M. Okamura, M. Kitagawa, T. P. Fitton, S. M. Cattaneo, V. L. Gott, W. A. Baumgartner, and D. D. Yuh. June 2004. Application of Haptic Feedback to Robotic Surgery. Journal of Laparoendoscopic & Advanced Surgical Techniques, 14(3): 191--195. 629
[8]
L. Bodenhagen, S. D. Suvei, W. K. Juel, E. Brander, and N. Krüger. 2019. Robot technology for future welfare: meeting upcoming societal challenges---an outlook with offset in the development in Scandinavia. Health Technology. 635
[9]
D. Bohus, S. Andrist, and M. Jalobeanu. November 2017. Rapid development of multimodal interactive systems: a demonstration of platform for situated intelligence. ICMI 2017 Proceedings of the 19th ACM International Conference on Multimodal Interaction. 641
[10]
J. Cacace, A. Finzi, and V. Lippiello. 2016. Multimodal interaction with multiple co-located drones in search and rescue mission. In Proceedings of IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
[11]
S. Calderwood, K. McAreavey, W. Liu, and J. Hong. 2017. Context-dependent combination of sensor information in Dempster-Shafer theory for BDI. Knowledge and Information Systems, 51: 259--285. 624, 767
[12]
D. Cameron, A. Millings, S. Fernando, E. Collins, R. Moore, A. Sharkey, V. Evers, and T. Prescott. April 2015. The effects of robot facial emotional expressions and gender on child-robot interaction in a field study. 4th International Symposium on New Frontiers in Human-Robot Interaction. 632
[13]
R. O. Chavez-Garcia and O. Aycard. 2016. Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems, 17(2): 525--534. 626
[14]
J. Choi, S. Ulbrich, B. Lichte, and M. Maurer. 2013. Multi-target tracking using a 3d-lidar sensor for autonomous vehicles. 16th International IEEE Conference on Intelligent Transportation Systems-(ITSC), pp. 881--88. 626
[15]
K. Dautenhahn, C. L. Nehaniv, M. L. Walters, B. Robins, H. Kose-Bagci, N. Assif Mirza, and M. Blow. 2009. KASPAR-a minimally expressive humanoid robot for human-robot interaction research. Applied Bionics and Biomechanics, 6(3--4): 369--397. 635
[16]
L. A. Dickstein-Fischer, D. E. Crone-Todd, I. M. Chapman, A. T. Fathima, and G. S. Fischer. 2018. Socially assistive robots: current status and future prospects for autism interventions. Innovation and Entrepreneurship in Health, 5: 15--25. 635
[17]
J. J. Diehl, L. M. Schmitt, M. Villano, and C. R. Crowell. 2012. The clinical use of robots for individuals with autism spectrum disorders: a critical review. Research in Autism Spectrum Disorders, 6(1): 249--262. 635
[18]
P. Ekman and W. V. Friesen. 1978. Facial action coding system. A Technique for Measuring Facial Movement, Consulting Psychologists Press. Palo Alto. 643
[19]
J. Elfring, R. Appeldoorn, S. van den Dries and M. Kwakkernaat. 2016. Effective world modeling: Multisensor data fusion methodology for automated driving. Sensors, 16: 1668. 627
[20]
N. Enayati, E. De Momi, and G. Ferrigno. February 2016. Haptics in robot-assisted surgery: challenges and benefits. IEEE Reviews in Biomedical Engineering. 629
[21]
J. Fierrez, A. Morales, R. Vera-Rodriguez, and D. Camacho. November 2018. Multiple classifiers in biometrics. Part 1: Fundamentals and review. Information Fusion 44. 637
[22]
Food and Drug Administration. 2019. Implanted brain-computer interface (BCI) devices for patients with paralysis or amputation---Non-clinical testing and clinical considerations. US Food and Drug Administration.
[23]
P. Fosse. November 2018. Deep dive into Tesla's autopilot & self-driving architecture vs. Lidar-based systems. Clean Technica. https://cleantechnica.com/2018/11/04/deep-dive-into-teslas-autopilot-self-driving-architecture-vs-lidar-based-systems/ 627
[24]
H. Franco, J. Zheng, J. Butzberger, F. Cesari, M. Frandsen, J. Arnold, V. R. R. Gadde, A. Stolcke, and V. Abrash. March 2002. DynaSpeak: SRI's scalable speech recognizer for embedded and mobile systems. HLT '02 Proceedings of the Second International Conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc. San Francisco, CA. 623
[25]
T. L. Frey, C. Aguilar, K. Engebretson, D. Faulk, and L. G. Lenning. 2018. F-35 information fusion. 2018 Aviation Technology, Integration, and Operations Conference. Atlanta, GA. 623
[26]
T. Gulrez. 2014. Role of Haptic Interfaces in Robot-Assisted Minimally Invasive Surgery. International Journal of Swarm Intelligence and Evolutionary Computing, 3(2). 629
[27]
R. Haenni. July 2004. Shedding new light on Zadeh's criticism of Dempster's rule of combination. 7th International Conference on Information Fusion, Stockholm, Sweden. 624, 767
[28]
M. Johnston. 2009. Building multimodal applications with EMMA. International Conference on Multimodal Interfaces. Cambridge, MA. 640
[29]
G. Jones, N. Berthouze, R. Bielski, and S. Julier. 2010. Towards a situated, multimodal interface for multiple UAV control. IEEE International Conference on Robotics and Automation (ICRA), pp. 1739--1744.
[30]
S. J. Julier and J. K. Uhlmann. 2004. Unscented filtering and nonlinear estimation. In Proceedings of the IEEE, 92(3): 401--422. 625, 626, 777
[31]
E. Kaiser, A. Olwal, D. McGee, H. Benko, A. Corradini, X. Li, P. Cohen, and S. Feiner. 2003. Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality. Proceedings of the 5th International Conference on Multimodal Interfaces, pp. 12--19. ACM Press. 645
[32]
R. E. Kalman. 1960. A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82(1): 35--45. 625, 777
[33]
S. Kane, K. McGurgan, M. Voshell, C. Monnier, S. German, and A. Ost. 2017. A multimodal interface for natural operator teaming with autonomous robots (MINOTAUR). In J. Chen editor, Advances in Human Factors in Robots and Unmanned Systems, pp. 99--108. Springer.
[34]
R. Kasper and S. Schmidt. September 2008. Sensor-data-fusion for an autonomous vehicle using a Kalman-filter. 6th International Symposium on Intelligent Systems and Informatics, pp. 26--27. Subotica, Serbia.
[35]
J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas. 1998. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3): 226--239. 636
[36]
J. Kocić, N. Jovičić, and V. Drndarević. November 2018. Sensors and sensor fusion in autonomous vehicles. 6th Telecommunications Forum TELFOR.
[37]
K. Laenen, S. Zoghbi, and M.-F. Moens. 2018. Web search of fashion items with multimodal querying. In Proceedings Of the 18th International Conference on Web Search and Data Mining, pp. 342--350. ACM Press. 644
[38]
M. A. Lebedev and M. A. L. Nicolelis. 2017. Brain-machine interfaces: From basic science to neuroprostheses and neurorehabilitation. Physiological Reviews, 97(2): 767--837. American Physiological Society. 631
[39]
G. Lemons, K. Carrington, T. Frey, and J. Ledyard. June 2018. F-35 mission systems design, development and verification. AIAA AVIATION Forum, 2018 Aviation Technology, Integration, and Operations Conference, pp. 25--29. Atlanta, GA. 623
[40]
T. S. Lendvay, B. Hannaford, and R. M. Satava. 2013. Future of robotic surgery. The Cancer Journal, 19(2): 109--119. 628
[41]
M. Liang, B. Yang, S. Wang, and R. Urtasun. 2018. Deep continuous fusion for multi-sensor 3D object detection. Proceedings of European Conference on Computer Vision (ECCV). 626
[42]
D. A. Maerz, L. N. Beck, A. J. Sim, and D. M. Gainsburg. 2017. Complications of robotic-assisted laparoscopic surgery distant from the surgical site. British Journal of Anaesthesia, 118(4): 492--503. 628
[43]
T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino. 2002. Impact of artificial "Gummy" fingers on fingerprint systems. Proceedings of SPIE Vol. 4677, Optical Security and Counterfeit Deterrence Techniques. 637
[44]
I. Maza, F. Caballero, R. Molina, N. Pena, and A. Ollero. 2010. Multimodal interface technologies for UAV ground control stations: A comparative analysis. Journal of Intelligent and Robotic Systems, 57(1--4): 371--391.
[45]
C. McCool, S. Marcel, A. Hadid, M. Pietikäinen, P. Matejka, J. Cernocky, N. Poh, J. Kittler, A. Larcher, C. Levy, D. Matrouf, J.-F. Bonastre, P. Tresadern, and T. Cootes. 2012. Bi-modal person recognition on a mobile phone: using mobile phone data. IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 635--640. 637
[46]
G. Meccariello, F. Faedi, S. AlGhamdi, F. Montevecchi, E. Firinu, C. Zanotti, D. Cavaliere, R. Gunelli, M. Taurchini, A. Amadori, and C. Vicini. 2016. An experimental study about haptic feedback in robotic surgery: may visual feedback substitute tactile feedback? Journal of Robotic Surgery, 10(1): 57--61. 628
[47]
A. Melamed, D. J. Margul, L. Chen, N. L. Keating, M. G. Del Carmen, J. Yang, B. L. Seagle, A. Alexander, E. L. Barber, L. W. Rice, J. D. Wright, M. Kocherginsky, S. Shahabi, and J. A. Rauh-Hain. 2018. Survival after Minimally Invasive Radical Hysterectomy for Early-Stage Cervical Cancer. New England Journal of Medicine, 379(20): 1905--1914. Epub. 629
[48]
R. Mola. 2017. Super Helmet: F-35 pilots get X-ray vision and other magic powers. Air and Space Magazine, September, 2017. https://www.airspacemag.com/military-aviation/super-helmet-180964342/. 623
[49]
C. Monnier. May 2017. Developing a multimodal UGV robot control interface. Aerospace and Defense Technology.
[50]
R. Murphy. 1998. Dempster-Shafer theory for sensor fusion in autonomous mobile robots. IEEE Transactions on Robotics and Automation, 14(2): 197--206. 624, 767
[51]
A. M. Okamura. 2009. Haptic feedback in robot-assisted minimally invasive surgery. Current Opinions in Urology, 19(1): 102--107. 629
[52]
C. Pacchierotti, L. Meli, F. Chinello, M. Malvezzi, and D. Prattichizzo. 2015. Cutaneous haptic feedback to ensure the stability of robotic teleoperation systems. The International Journal of Robotics Research, 34(14): 1773--1787. 628
[53]
D. Pfeiffer and U. Franke. 2010. Efficient representation of traffic scenes by means of dynamic stixels. IEEE Intelligent Vehicles Symposium (IV), pp. 217--224. 625, 626, 777
[54]
S. Place, D. Blanch-Hartigan, C. Rubin, C. Gorrostieta, C. Mead, J. Kane, B. P. Marx, J. Feast, T. Deckersbach, A. Pentland, A. Nierenberg, and A. Azarbayejani. 2017. Behavioral indicators on a mobile sensing platform predict clinically validated psychiatric symptoms of mood and anxiety disorders. Journal of Medical Internet Research, 19(3): e75. Published online 2017 Mar 16. 651
[55]
A. Ross and A. K. Jain. 2007. Fusion techniques in multibiometric systems. In R. I. Hammoud, B. R. Abidi, M. A. Abidi, editors, Face Biometrics for Personal Identification. Signals and Communication Technology. Springer, Berlin. 636
[56]
A. Ross, K. Nandakumar, and A. Jain. 2006. Handbook of Multibiometrics. Springer, Berlin Heidelberg. 636
[57]
A. Ross and N. Poh. 2009. Multibiometric systems: overview, case studies, and open issues. In M. Tistarelli, S. Z. Li, R. Chellappa, editors, Handbook of Remote Biometrics for Surveillance and Security, pp. 273--292. Springer. 636, 637
[58]
J. P. Ruurda, I. A. M. J. Broeders, B. Pulles, F. M. Kappelhof, and C. Van der Werken. 2004. Manual robot assisted endoscopic suturing: time-action analysis in an experimental model. Surgical Endoscopy and Other Interventional Techniques, 18(8): 1249--1252. 628
[59]
U. R. Sanchez and J. Kittler. 2006. Fusion of talking face biometric modalities for personal identity verification. IEEE International Conference on Acoustics, Speech and Signal Processing, Volume 5, V-V. 636, 637
[60]
B. Schoettle. 2017. Sensor Fusion: A comparison of sensing capabilities of human drivers and highly automated vehicles. Report SWOT-2017-12. University of Michigan, Sustainable Worldwide Transportation. 626, 627
[61]
K. Sentz and S. Ferson. April 2002. Combination of Evidence in Dempster-Shafer Theory. Sandia Report Sand2002-0835. 624, 767
[62]
S. Song, Z. Xiang, and J. Liu. 2015. Object tracking with 3d lidar via multi-task sparse learning. IEEE International Conference on Mechatronics and Automation (ICMA), pp. 2603--2608. 626
[63]
J. Stewart. August 2018. Why Tesla's Autopilot Can't See a Stopped Firetruck. Wired. https://www.wired.com/story/tesla-autopilot-why-crash-radar/ 627
[64]
A. Tangel and A. Pasztor. March 2019. Boeing to Make Key Change in 737 MAX Cockpit Software. Wall Street Journal. 623
[65]
S. Tulyakov, S. Jaeger, V. Govindaraju, and D. Doermann. 2008. Review of classifier combination methods. In S. Marinai and H. Fujisawa, editors, Machine Learning in Document Analysis and Recognition, pp. 361--386. 637
[66]
K. Uludağ and A. Roebroeck. November 2014. General overview on the merits of multimodal neuroimaging data fusion. NeuroImage, 102(Part 1): 3--10. 621
[67]
US FDA. February 2019. Implanted Brain-Computer Interface (BCI) devices for patients with paralysis or amputation---non-clinical testing and clinical considerations draft guidance for industry and Food and Drug Administration Staff. United States Food and Drug Administration. https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM631786.pdf 631
[68]
O. A. J. Van der Meijden and M. P. Schijven. 2009. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: a current review. Surgical Endoscopy, pp. 1180--1190. 628
[69]
N. Vogler, S. Heuke, T. W. Bocklitz, M. Schmitt, and J. Popp. July 2015. Multimodal Imaging Spectroscopy of Tissue. Annual Review of Analytical Chemistry, 8: 359--387. 621
[70]
W. Wahlster, E. Andre, B. Bandyopadhyay, W. Graf, and T. Rist. 1991. WIP: The coordinated generation of multimodal presentations from a common representation. In O. Stock., J. Slack, and A. Ortony, editors, Computational Theories of Communication and their Application, Springer, Berlin. 640
[71]
W. Wahlster, editor. 2006. SmartKom: Foundations of Multimodal Dialogue Systems. Springer, Berlin. 640
[72]
T. Wang, B. Pan, Y. Fu, S. Wang, and Y. Ai. 2017. Design of a new haptic device and experiments in minimally invasive surgical robot. Computer Assisted Surgery, 22: 240--250. Taylor & Francis. 629
[73]
B. Weber and S. Schneider. 2014. The effects of force feedback on surgical task performance: a meta-analytical integration. In International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, 150--157. Springer. 628
[74]
C. R. Wottawa, B. Genovese, B. N. Nowroozi, S. D. Hart, J. W. Bisley, W. S. Grundfest, and E. P. Dutson. 2016. Evaluating tactile feedback in robotic surgery for potential clinical application using an animal model. Surgical Endoscopy, 30(8): 3198--3209. 628
[75]
M. Wu and J. Shu. 2018. Multimodal molecular imaging: current status and future directions. Contrast Media & Molecular Imaging. 621
[76]
L. A. Zadeh. 1986. A simple view of the Dempster-Shafer theory of evidence and its implication for the rule of combination. AI Magazine, 7(2): 85--90. 624, 767


Published In

The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions
July 2019
813 pages
ISBN: 9781970001754
DOI: 10.1145/3233795

Publisher

Association for Computing Machinery and Morgan & Claypool
