Human–computer interaction
[[File:Computer monitor screen image simulated.jpg|alt=A close-up photograph of a computer monitor.|thumb|A computer monitor provides a visual interface between the machine and the user.]]
'''Human–computer interaction''' ('''HCI''') is research in the design and the use of [[Computing|computer technology]], which focuses on the [[Interface (computing)|interface]]s between people ([[user (computing)|users]]) and [[computer]]s. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "'''human–computer interface'''".
 
As a field of research, human–computer interaction is situated at the intersection of [[computer science]], [[behavioural sciences|behavioral sciences]], [[design]], [[media studies]], and [[Outline of human–computer interaction#Related fields|several other fields of study]]. The term was popularized by [[Stuart K. Card]], [[Allen Newell]], and [[Thomas P. Moran]] in their 1983 book, ''The Psychology of Human–Computer Interaction.'' The first known use was in 1975 by Carlisle.<ref name="Evaluating the impact of office automation on top management communication"/> The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.<ref>{{cite book|last1=Suchman|first1=Lucy|title=Plans and Situated Action. The Problem of Human-Machine Communication|date=1987|publisher=Cambridge University Press|location=New York, Cambridge|url=https://books.google.com/books?id=AJ_eBJtHxmsC&q=suchman+situated+action&pg=PR7|access-date=7 March 2015|isbn=9780521337397}}</ref><ref name=":0">{{cite book|last1=Dourish|first1=Paul|title=Where the Action Is: The Foundations of Embodied Interaction|date=2001|publisher=MIT Press|location=Cambridge, MA|url=https://books.google.com/books?id=DCIy2zxrCqcC&q=Dourish+where+the+action+is&pg=PR7|isbn=9780262541787}}</ref>
==Introduction==
{{More citations needed section|date=May 2021}}
Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termed ''human–machine interaction'' (HMI), ''man–machine interaction'' (MMI), or ''computer–human interaction'' (CHI). Desktop applications, internet browsers, handheld computers, and computer kiosks make use of the prevalent [[graphical user interface]]s (GUI) of today.<ref name="ACM SIGCHI">{{cite web|last1=Hewett|last2=Baecker|last3=Card|last4=Carey|last5=Gasen|last6=Mantei|last7=Perlman|last8=Strong|last9=Verplank|title=ACM SIGCHI Curricula for Human–Computer Interaction|url=http://old.sigchi.org/cdg/cdg2.html#2_1|publisher=ACM SIGCHI|access-date=15 July 2014|archive-url=https://web.archive.org/web/20140817165957/http://old.sigchi.org/cdg/cdg2.html#2_1|archive-date=17 August 2014|url-status=dead}}</ref> [[Voice user interface]]s (VUI) are used for [[speech recognition]] and synthesizing systems, and the emerging [[multimodal interaction|multimodal]] and graphical user interfaces allow humans to engage with [[Embodied agent|embodied character agents]] in a way that cannot be achieved with other interface paradigms. The growth of the human–computer interaction field has led to an increase in the quality of interaction and has opened up many new areas of research. Rather than designing conventional interfaces, these research branches focus on the concepts of [[multimodality]]<ref>{{Diff|en:Multimodality|diff=|oldid=876504380|label=en:Multimodality, oldid 876504380}}{{CN|date=November 2023}}</ref> over unimodality, [[Adaptive autonomy|intelligent adaptive interfaces]] over command/action-based ones, and active over passive interfaces.<ref>{{Cite journal |last1=Gurcan |first1=Fatih |last2=Cagiltay |first2=Nergiz Ercil |last3=Cagiltay |first3=Kursat |date=2021-02-07 |title=Mapping Human–Computer Interaction Research Themes and Trends from Its Existence to Today: A Topic Modeling-Based Review of past 60 Years |url=https://doi.org/10.1080/10447318.2020.1819668 |journal=International Journal of Human–Computer Interaction |volume=37 |issue=3 |pages=267–280 |doi=10.1080/10447318.2020.1819668 |s2cid=224998668 |issn=1044-7318}}</ref>
 
The [[Association for Computing Machinery]] (ACM) defines human–computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them".<ref name="ACM SIGCHI"/> A key aspect of HCI is user satisfaction, also referred to as End-User Computing Satisfaction. The ACM definition goes on to say:
 
* '''Visual-based''': Visual-based human–computer interaction is probably the most widespread area of HCI research.
* '''Audio-based''': Audio-based interaction between a computer and a human is another important area of HCI. It deals with information acquired from different audio signals.
* '''''Task environment''''': The conditions and goals set upon the user.
* '''''Machine environment''''': The environment that the computer is connected to, e.g., a laptop in a college student's dorm room.
* '''''Areas of the interface''''': Non-overlapping areas involve the processes related to humans and computers themselves, while the overlapping areas only involve the processes related to their interaction.
* '''''Input flow''''': The flow of information begins in the task environment, when the user has tasks that require using their computer.
* '''''Output''''': The flow of information that originates in the machine environment.
* '''''Feedback''''': Loops through the interface that evaluate, moderate, and confirm processes as they pass from the human through the interface to the computer and back (see the sketch after this list).
* '''''Fit''''': This matches the computer design, the user, and the task to optimize the human resources needed to accomplish the task.
** '''Visual-based HCI''':
**# Facial Expression Analysis: This area focuses on visually recognizing and analyzing emotions through facial expressions.
**# Body Movement Tracking (Large-scale): Researchers in this area concentrate on tracking and analyzing large-scale body movements.
**# Gesture Recognition: Gesture recognition involves identifying and interpreting gestures made by users, often used for direct interaction with computers in command and action scenarios.
**# Gaze Detection (Eye Movement Tracking): Gaze detection involves tracking the movement of a user's eyes and is primarily used to better understand the user's attention, intent, or focus in context-sensitive situations.
**: While the specific goals of each area vary based on the application, they collectively contribute to enhancing human–computer interaction. Notably, visual approaches have been explored as alternatives or aids to other types of interaction, such as audio- and sensor-based methods. For example, lip reading or lip-movement tracking has proven useful in correcting speech recognition errors.
** '''Audio-based HCI''': Audio-based interaction is a crucial area of HCI focused on processing information acquired through various audio signals. While the nature of audio signals may be less diverse than that of visual signals, the information they provide can be highly reliable, valuable, and sometimes uniquely informative. The research areas within this domain include:
**# Speech Recognition: This area centers on the recognition and interpretation of spoken language.
**# Speaker Recognition: Researchers in this area concentrate on identifying and distinguishing different speakers.
**# Auditory Emotion Analysis: Efforts have been made to incorporate human emotions into intelligent human-computer interaction by analyzing emotional cues in audio signals.
**# Human-Made Noise/Sign Detections: This involves recognizing typical human auditory signs like sighs, gasps, laughs, cries, etc., which contribute to emotion analysis and the design of more intelligent HCI systems.
**# Musical Interaction: A relatively new area in HCI, it involves generating and interacting with music, with applications in the art industry. This field is studied in both audio- and visual-based HCI systems.
** '''Sensor-based HCI''': This category encompasses a diverse range of areas with broad applications, all of which involve the use of physical sensors to facilitate interaction between users and machines. These sensors range from basic to highly sophisticated. The specific areas include:
**# Pen-Based Interaction: Particularly relevant in mobile devices, focusing on pen gestures and handwriting recognition.
**# Mouse & Keyboard: Well-established input devices commonly used in computing.
**# Joysticks: Another established input device for interactive control, commonly used in gaming and simulations.
**# Motion Tracking Sensors and Digitizers: Cutting-edge technology that has revolutionized industries like film, animation, art, and gaming. These sensors, in forms like wearable cloth or joint sensors, enable more immersive interactions between computers and reality.
**# Haptic Sensors: Particularly significant in applications related to robotics and virtual reality, providing feedback based on touch. They play a crucial role in enhancing sensitivity and awareness in humanoid robots, as well as in medical surgery applications.
**# Pressure Sensors: Also important in robotics, virtual reality, and medical applications, providing information based on pressure exerted on a surface.
**# Taste/Smell Sensors: Although less popular compared to other areas, research has been conducted in the field of sensors for taste and smell. These sensors vary in their level of maturity, with some being well-established and others representing cutting-edge technologies.
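The information-flow terms above (''input flow'', ''output'', and ''feedback'') can be illustrated with a minimal command-line sketch. It is only an illustration: the function names (<code>handle_command</code>, <code>interaction_loop</code>) and the tiny command vocabulary are hypothetical and not part of any standard interface toolkit.

<syntaxhighlight lang="python">
# A minimal sketch of the interaction loop: the user's command flows from the
# task environment through the interface, the machine environment produces a
# response, and the printed feedback lets the user evaluate and adjust the
# next command. All names here are illustrative.

def handle_command(command: str) -> str:
    """Machine environment: turn a user command into a response."""
    known = {
        "time": "It is 12:00.",  # placeholder response
        "help": "Known commands: time, help, quit.",
    }
    return known.get(command, f"Unrecognized command: {command!r}")


def interaction_loop() -> None:
    """Interface: mediates input flow, output, and feedback."""
    while True:
        command = input("> ").strip().lower()  # input flow (human -> machine)
        if command == "quit":
            break
        response = handle_command(command)     # processing in the machine environment
        print(response)                        # output (machine -> human)
        # Feedback: the visible response closes the loop, letting the user
        # confirm the result or issue a corrected command.


if __name__ == "__main__":
    interaction_loop()
</syntaxhighlight>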
 
==Goals for computers==
* Methods for determining whether the user is human or a computer (a minimal sketch of one such check follows this list).
* Models and theories of human–computer use as well as conceptual frameworks for the design of computer interfaces, such as [[cognitivism (psychology)|cognitivist]] user models, [[Activity Theory]], or [[ethnomethodology|ethnomethodological]] accounts of human–computer use.<ref>{{cite journal|last1=Rogers|first1=Yvonne|title=HCI Theory: Classical, Modern, and Contemporary|journal=Synthesis Lectures on Human-Centered Informatics|date=2012|doi=10.2200/S00418ED1V01Y201205HCI014|volume=5|issue=2|pages=1–129}}</ref>
* Perspectives that critically reflect upon the values that underlie computational design, computer use, and HCI research practice.<ref>{{Cite book|last1=Sengers|first1=Phoebe|author1-link= Phoebe Sengers |last2=Boehner|first2=Kirsten|last3=David|first3=Shay|last4=Joseph|first4=Kaye|title=Proceedings of the 4th decennial conference on Critical computing: Between sense and sensibility |chapter=Reflective design |volume=5|pages=49–58|doi=10.1145/1094562.1094569|year=2005|isbn=978-1595932037|s2cid=9029682|url=https://www.semanticscholar.org/paper/06c5e804c0f0292231d2d7be407bf3f1ac01c0d3}}</ref>
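The item above on telling human users apart from automated ones can be illustrated with a minimal challenge–response sketch. Real systems use challenges that are hard for programs to solve (for example, distorted-image CAPTCHAs); the arithmetic question below is only a hypothetical placeholder that shows the shape of the protocol.

<syntaxhighlight lang="python">
# Minimal sketch of a challenge-response ("is the user human?") check.
# The arithmetic challenge stands in for the harder challenges that real
# systems rely on; names and messages here are illustrative only.
import random


def make_challenge() -> tuple[str, int]:
    """Generate a question together with its expected answer."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", a + b


def answered_correctly(answer: str, expected: int) -> bool:
    """Accept the user only if the challenge was solved."""
    try:
        return int(answer.strip()) == expected
    except ValueError:
        return False


if __name__ == "__main__":
    question, expected = make_challenge()
    reply = input(question + " ")
    print("Verified." if answered_correctly(reply, expected) else "Verification failed.")
</syntaxhighlight>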
 
Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a [[post-cognitivist]] perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.
 
Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.
 
==Design==
Christopher Wickens et al. defined 13 principles of display design in their book ''An Introduction to Human Factors Engineering''.<ref name="introduction"/>
 
These principles of human perception and information processing can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved by applying these principles.
 
Certain principles may not apply to every display or situation. Some principles may also appear to conflict, and there is no simple rule stating that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design.<ref name="guidelines"/>
====Perceptual principles====
{{Unreferenced section|date=May 2021}}
''1. Make displays legible (or audible)''. A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed are not discernible, the operator cannot use them effectively.
 
''2. Avoid absolute judgment limits''. Do not ask the user to determine the level of a variable based on a single sensory variable (e.g., color, size, loudness), as such sensory variables can take many possible levels.
 
''3. Top-down processing''. Signals are likely to be perceived and interpreted in accordance with what is expected based on a user's experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to ensure that it is understood correctly.
 
''4. Redundancy gain''. If a signal is presented more than once, it is more likely to be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g., color and shape, voice and print, etc.), as redundancy does not imply repetition. A traffic light is a good example of redundancy, as color and position are redundant (a minimal illustration of redundant coding follows these principles).
 
''5. Similarity causes confusion: Use distinguishable elements''. Signals that appear to be similar will likely be confused. The degree of similarity is determined by the ratio of similar features to different features. For example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed, and dissimilar features should be highlighted.
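As a minimal illustration of the ''redundancy gain'' principle (no. 4 above), the sketch below encodes each status message in two redundant physical forms, a color and a distinct text label, so that either cue alone identifies the status. The status names and ANSI color codes are illustrative assumptions rather than part of any particular display toolkit.

<syntaxhighlight lang="python">
# Redundancy gain: each status is signalled by a color AND a text label,
# so a user who misses one cue (e.g., because of color blindness or glare)
# can still rely on the other. The styles below are illustrative only.

ANSI = {"red": "\033[31m", "yellow": "\033[33m", "green": "\033[32m", "reset": "\033[0m"}

STATUS_STYLES = {
    "error":   ("red",    "[ERROR]"),
    "warning": ("yellow", "[WARN]"),
    "ok":      ("green",  "[OK]"),
}


def render_status(status: str, message: str) -> str:
    """Render a message with redundant color-plus-label coding."""
    color, label = STATUS_STYLES[status]
    return f"{ANSI[color]}{label}{ANSI['reset']} {message}"


if __name__ == "__main__":
    print(render_status("error", "Disk is full"))
    print(render_status("warning", "Battery low"))
    print(render_status("ok", "All checks passed"))
</syntaxhighlight>

Because the coding is redundant rather than merely repeated, removing either cue still leaves the status identifiable, which is the point of the principle.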
 
====Mental model principles====
 
* ''[[Ubiquitous computing]] and communication''. Computers are expected to communicate through high-speed local networks, nationally over wide-area networks, and portably via infrared, ultrasonic, cellular, and other technologies. Data and computational services will be portably accessible from many if not most locations to which a user travels.
* ''High-functionality systems''. Systems can have large numbers of functions associated with them. There are so many systems that most users, technical or non-technical, do not have time to learn them in the traditional way (e.g., through thick user manuals).
* ''The mass availability of computer graphics''. Computer graphics capabilities such as image processing, graphics transformations, rendering, and interactive animation become widespread as inexpensive chips become available for inclusion in general workstations and mobile devices.
* ''Mixed media''. Commercial systems can handle images, voice, sounds, video, text, and formatted data. These are exchangeable over communication links among users. The separate consumer electronics fields (e.g., stereo sets, DVD players, televisions) and computers are beginning to merge. Computer and print fields are expected to cross-assimilate.
* ASSETS: ACM International Conference on Computers and [[Accessibility]]
* CSCW: ACM conference on [[Computer Supported Cooperative Work]]
* CC: Aarhus decennial conference on Critical Computing
* CUI: ACM conference on [[Conversational user interface|Conversational User Interfaces]]
* DIS: ACM conference on Designing Interactive Systems
* [[User experience design]]
* {{Portal-inline|size=tiny|Human–computer interaction}}
* [[Human City Interaction]]
 
==Footnotes==
<ref name="engineering">Green, Paul (2008). Iterative Design. Lecture presented in Industrial and Operations Engineering 436 (Human Factors in Computer Systems, University of Michigan, Ann Arbor, MI, February 4, 2008.</ref>
 
<ref name="Evaluating the impact of office automation on top management communication">{{Cite book|last=Carlisle|first=James H.|workseries=Proceedings of the June 7–10, 1976, National Computer Conference and Exposition|pages=611–616|doi=10.1145/1499799.1499885|date=June 1976|quote=Use of 'human–computer interaction' appears in references|title=Proceedings of the June 7-10, 1976, national computer conference and exposition on - AFIPS '76|chapter=Evaluating the impact of office automation on top management communication|s2cid=18471644|chapter-url=https://www.semanticscholar.org/paper/7a864fc9cfbb01306cb2a75ceef1ed246727f1f0}}<!--|access-date=10 September 2012--></ref>
 
<ref name="value sensitive design">Friedman, B., Kahn Jr, P. H., Borning, A., & Kahn, P. H. (2006). Value Sensitive Design and information systems. Human–Computer Interaction and Management Information Systems: Foundations. ME Sharpe, New York, 348–372.</ref>
 
<ref name="three mile island">{{Cite web| url=http://www.threemileisland.org/downloads/188.pdf| title=Report of the President's Commission on the Accident at Three Miles Island| date=2019-03-14| access-date=2011-08-17| archive-url=https://web.archive.org/web/20110409064628/http://www.threemileisland.org/downloads/188.pdf| archive-date=2011-04-09| url-status=deadusurped}}</ref>
 
<ref name="What is Cognitive Ergonomics?">{{cite web |author=Ergoweb |url=http://www.ergoweb.com/news/detail.cfm?id=352 |title=What is Cognitive Ergonomics? |publisher=Ergoweb.com |access-date=August 29, 2011 |archive-url=https://web.archive.org/web/20110928150026/http://www.ergoweb.com/news/detail.cfm?id=352 |archive-date=September 28, 2011 |url-status=dead }}</ref>
* {{cite journal |last1= Carroll |first1= John M. |year= 2010 |title= Conceptualizing a possible discipline of human–computer interaction |journal= Interacting with Computers |volume= 22 |issue= 1 |pages= 3–12 |doi= 10.1016/j.intcom.2009.11.008}}
* Candeias, S. and A. Veiga, ''The dialogue between man and machine: the role of language theory and technology'', in Sandra M. Aluísio & Stella E. O. Tagnin, ''New Language Technologies and Linguistic Research: A Two-Way Road'', ch. 11. Cambridge Scholars Publishing. ({{ISBN|978-1-4438-5377-4}})
<references responsive="0" />
 
;Social science and HCI
* {{cite journal |last1= Nass |first1= Clifford |last2= Fogg |first2= B. J. |last3= Moon |first3= Youngme |year= 1996 |title= Can computers be teammates? |journal= International Journal of Human-Computer Studies|volume= 45 |issue= 6 |pages= 669–678 |doi=10.1006/ijhc.1996.0073|doi-access= free }}
* {{cite journal |last1= Nass |first1= Clifford |last2= Moon |first2= Youngme |year= 2000 |title= Machines and mindlessness: Social responses to computers |url= https://semanticscholar.org/paper/56ccf17dced2d3bb73f66a18afa20caf5a429c21|journal= Journal of Social Issues |volume= 56 |issue= 1 |pages= 81–103 |doi= 10.1111/0022-4537.00153|s2cid= 15851410 }}
* {{cite journal |last1= Posard |first1= Marek N |year= 2014 |title= Status processes in human–computer interactions: Does gender matter? |journal= Computers in Human Behavior |volume= 37 |pages= 189–195 |doi=10.1016/j.chb.2014.04.025}}
* {{cite journal |last1= Posard |first1= Marek N. |last2= Rinderknecht |first2= R. Gordon |year= 2015 |title= Do people like working with computers more than human beings? |journal= Computers in Human Behavior |volume= 51 |pages= 232–238 |doi=10.1016/j.chb.2015.04.057|doi-access= free }}
*[http://hcibib.org/hci-sites/organizations HCI Webliography]
 
{{Evolutionary psychology}}
{{Digital media use and mental health}}
{{Authority control}}