Chris Harrison (computer scientist)

| Chris Harrison | |
| --- | --- |
| Born | 1984, London, United Kingdom |
| Citizenship | United States, United Kingdom |
| Alma mater | New York University (B.A., M.S.), Carnegie Mellon University (Ph.D.) |
| Known for | OmniTouch, Skinput |
| Awards | Packard Fellow,[1] Sloan Fellow,[2] World Economic Forum Young Scientist,[3] Forbes 30 Under 30 Scientist,[4] TR35 Award,[5] Qualcomm Innovation Fellowship[6] |
| Scientific career | |
| Fields | Human–computer interaction, wearable computing |
| Institutions | Carnegie Mellon University |
| Thesis | The Human Body as an Interactive Computing Platform (2013) |
| Doctoral advisor | Scott Hudson |
| Website | chrisharrison.net |
Chris Harrison is a British-born American computer scientist and entrepreneur working in the fields of human–computer interaction, machine learning and sensor-driven interactive systems. He is a professor at Carnegie Mellon University[7] and director of the Future Interfaces Group[8] within the Human–Computer Interaction Institute. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is also the CTO and co-founder of Qeexo,[9] a machine learning and interaction technology startup.
Harrison has authored more than 80 peer-reviewed papers and his work appears in more than 40 books.[10] For his contributions to human–computer interaction, Harrison was named a top 35 innovator under 35 by MIT Technology Review (2012),[5] a top 30 scientist under 30 by Forbes (2012),[4] one of six innovators to watch by Smithsonian (2013),[11] and a top Young Scientist by the World Economic Forum (2014).[3] Over the course of his career, Harrison has been awarded fellowships by the Packard Foundation, Sloan Foundation, Google, Qualcomm and Microsoft Research. He currently holds the A. Nico Habermann Chair in Computer Science. NYU, Harrison's undergraduate alma mater, named him its 2014 Distinguished Young Alumnus,[12] and his lab won a Fast Company Innovation by Design Award for its work on EM-Sense.[13]
Biography
Harrison was born in 1984 in London, United Kingdom, and emigrated with his family to New York City in the United States at a young age. He actively participated in ACM programming competitions and engaged in a variety of crafts. He also took an interest in slinging and was contacted by the BBC about this hobby for an ancient weapons documentary. Consequently, Harrison created and launched slinging.org on March 20, 2003 as an online forum for sling enthusiasts; it is currently the largest website on the subject, with over 200,000 forum posts.[14] Harrison obtained United States citizenship on May 13, 2002.[15]
Harrison obtained both a B.A. (2002–2005) and an M.S. (2006) in Computer Science from the Courant Institute of Mathematical Sciences at New York University. His master's thesis was advised by Dr. Dennis Shasha, with whom he worked on a relational file system built around the concept of temporal context. New York University honored Harrison as its 2014 Distinguished Young Alumnus.
During his master's studies, Harrison worked at IBM Research – Almaden on an early personal assistant application called Enki under Mark Dean, then the director of the lab. After completing his master's degree, Harrison worked at AT&T Labs, developing one of the first asynchronous social video platforms, dubbed CollaboraTV, with features now common in modern systems. Encouraged by colleagues, Harrison joined the Ph.D. program in Human–Computer Interaction at Carnegie Mellon University in 2007, completing his dissertation on "The Human Body as an Interactive Computing Platform" in 2013 under the supervision of Dr. Scott Hudson.
From 2009 to 2012, Harrison was the Editor-in-Chief of ACM's Crossroads magazine, which he relaunched as XRDS, the flagship magazine for the over 30,000 student members of the ACM. Harrison has spun out several technologies from CMU and co-founded the machine learning startup Qeexo in 2012, which provides specialized machine-learning engines for mobile and embedded platforms, with a focus on interactive technologies.[16] In 2019, the company won a CES Innovation Award for its EarSense solution,[17] which was used in the bezel-less Oppo Find X, replacing the need for a physical proximity sensor with a virtual, machine-learning-powered solution. As of 2017, the company's software was used on more than 100 million devices.[18]
In 2013, Harrison joined the faculty of Carnegie Mellon University, founding the Future Interfaces Group within the Human–Computer Interaction Institute.
Research
Harrison broadly investigates novel sensing and interface technologies, especially those "that empower people to interact with small devices in big ways". He is best known for his research into ubiquitous and wearable computing, in which computation escapes the confines of today's small, rectangular screens and spills interactivity out onto everyday surfaces such as walls, countertops and furniture.[19] This research thread dates back to 2008, starting with Scratch Input, which appropriated walls and tables as ad hoc input surfaces. Insights from this work, especially the vibroacoustic propagation of touch inputs, led to Skinput, developed while Harrison was interning at Microsoft Research.[20][21] Skinput was the first on-body system to demonstrate touch input and coordinated projected graphics without the need to instrument the hands. This research was followed shortly after by OmniTouch, also at Microsoft Research.
More recently, Harrison has conducted research at the Future Interfaces Group[22] within the Human–Computer Interaction Institute at CMU. In 2016, Harrison and the group presented three projects at the ACM Symposium on User Interface Software and Technology (UIST): ViBand, which uses a high-speed mode of a smartwatch's accelerometer to acquire and interpret acoustic signals at 4,000 samples per second;[23] a real-time hand gesture sensor based on electrical impedance sensing;[24] and AuraSense, which uses electric field sensing to recognize hand gestures made on the skin directly around a smartwatch.[25] In 2017, Robert Xiao, an HCII PhD student advised by Harrison and Scott Hudson, created Desktopography,[26] an interactive multi-touch interface projected onto a desktop surface. Inspired by the Xerox PARC DigitalDesk, one of the first digitally augmented desks, Desktopography explores the possibilities of virtual–physical interaction and how best to create a user-friendly interface that can adapt to the various, frequently moved objects commonly found on a desk.
Other activities
Harrison co-developed and co-wrote Crash Course Computer Science, a PBS Digital Studios-funded educational series hosted on YouTube, with his partner, Amy Ogan. This project was initiated following a discussion between Harrison and John Green at the World Economic Forum in 2016, where both were guest speakers.[citation needed]
Along with Robert Xiao and Scott Hudson, colleagues at CMU, he developed Lumitrack, a motion tracking technology which is currently used in video game controllers and in the film industry.[27]
Harrison was also one of the Program Committee Chairs for the 2017 ACM Symposium on User Interface Software and Technology (UIST).[28]
Harrison is also an amateur digital artist and sculptor. His artworks have appeared in over 40 books and more than a dozen international galleries. Notable among these were showings at the Triennale di Milano in Milan, Italy (2014), and the Biennale Internationale Design in Saint-Étienne, France (2010).
References
- ^ "Harrison, Chris". The David and Lucile Packard Foundation.
- ^ "Alfred P. Sloan Research Fellowships 2018" (PDF). Sloan Research Fellowship.
- ^ a b "Chris Harrison". World Economic Forum.
- ^ a b "30 Under 30 - Science & Innovation". Forbes.
- ^ a b "Innovator under 35: Chris Harrison, 28". MIT Technology Review.
- ^ "HCII PhD Students Win Qualcomm Innovation Fellowship". Carnegie Mellon School of Computer Science.
- ^ "Chris Harrison". Human–Computer Interaction Institute.
- ^ "About Us". Future Interfaces Group.
- ^ "About Us". Qeexo.
- ^ "Most prolific authors in computer science". DBLP.
- ^ "Six Innovators to Watch in 2013". Smithsonian.
- ^ "Chris Harrison (CAS '05, CIMS '06), The 2014 Distinguished Young Alumnus Award". video.alumni.nyu.edu. Retrieved 2019-08-18.
- ^ "EM-Sense". Fast Company.
- ^ "Slinging.org Forum - Index". www.slinging.org. Retrieved 2018-02-07.
- ^ "Chris Harrison | Log". www.chrisharrison.net. Retrieved 2018-02-07.
- ^ "Qeexo – Machine Learning with Sensor Data | Qeexo - Lightweight Machine Learning for Sensor Data". Retrieved 2019-08-18.
- ^ "CES Innovation Awards > 2019 > Software and Mobile Apps". CES Innovation Awards.
- ^ Lee, Sang Won (2017-12-08). "We just passed 100M unit sales!!! So proud to be a part of this team! Well done #Qeexo". @esangwon. Retrieved 2019-08-18.
- ^ Harrison, Chris (19 February 2016). "Reimagining everyday devices as information-delivery systems". YouTube. World Economic Forum.
- ^ "Skinput: Appropriating the Body as an Input Surface". Microsoft Research Computational User Experiences Group.
- ^ "Desney Tan, Chris Harrison on Interacting with Impossibly Small Devices". YouTube. Microsoft Research. 15 July 2014.
- ^ "About Us". Future Interfaces Group. Retrieved 2018-02-02.
- ^ ACM SIGCHI (2016-10-04), ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers, retrieved 2018-02-08
- ^ ACM SIGCHI (2016-10-04), Advancing Hand Gesture Recognition with High Resolution Electrical Impedance Tomography, retrieved 2018-02-08
- ^ ACM SIGCHI (2016-10-04), AuraSense: Enabling Expressive Around-Smartwatch Interactions with Electric Field Sensing, retrieved 2018-02-08
- ^ Xiao, Robert; Hudson, Scott; Harrison, Chris (2017-06-30). "Supporting Responsive Cohabitation Between Virtual Interfaces and Physical Objects on Everyday Surfaces". Proceedings of the ACM on Human–Computer Interaction. 1 (EICS): 12. doi:10.1145/3095814. S2CID 11463668.
- ^ "Press Release: Carnegie Mellon-Disney Motion Tracking Technology Is Extremely Precise and Inexpensive With Minimal Lag". Carnegie Mellon University. www.cmu.edu. Retrieved 2019-10-23.
- ^ "UIST 2017: 30th ACM User Interface Software and Technology Symposium". uist.acm.org. 22 October 2017. Retrieved 2018-02-08.