- research-article, December 2024
Understanding Trust and Reliance Development in AI Advice: Assessing Model Accuracy, Model Explanations, and Experiences from Previous Interactions
- research-article, December 2024 (Just Accepted)
AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy
ACM Transactions on Interactive Intelligent Systems (TIIS), Just Accepted. https://doi.org/10.1145/3707649
Large language models (LLMs) match and sometimes exceed human performance in many domains. This study explores the potential of LLMs to augment human judgment in a forecasting task. We evaluate the effect on human forecasters of two LLM assistants: one ...
- research-article, April 2024
Entity Footprinting: Modeling Contextual User States via Digital Activity Monitoring
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 14, Issue 2, Article No.: 9, Pages 1–27. https://doi.org/10.1145/3643893
Our digital life consists of activities that are organized around tasks and exhibit different user states in the digital contexts around these activities. Previous works have shown that digital activity monitoring can be used to predict entities that ...
- research-article, January 2024
Simulation-based Optimization of User Interfaces for Quality-assuring Machine Learning Model Predictions
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 14, Issue 1, Article No.: 1, Pages 1–32. https://doi.org/10.1145/3594552
Quality-sensitive applications of machine learning (ML) require quality assurance (QA) by humans before the predictions of an ML model can be deployed. QA for ML (QA4ML) interfaces require users to view a large amount of data and perform many interactions ...
- research-article, December 2023
Meaningful Explanation Effect on User’s Trust in an AI Medical System: Designing Explanations for Non-Expert Users
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 30, Pages 1–39. https://doi.org/10.1145/3631614
Whereas most research in AI system explanation for healthcare applications looks at developing algorithmic explanations targeted at AI experts or medical professionals, the question we raise is: How do we build meaningful explanations for laypeople? And ...
- research-article, December 2023
Explainable Activity Recognition in Videos using Deep Learning and Tractable Probabilistic Models
- Chiradeep Roy,
- Mahsan Nourani,
- Shivvrat Arya,
- Mahesh Shanbhag,
- Tahrima Rahman,
- Eric D. Ragan,
- Nicholas Ruozzi,
- Vibhav Gogate
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 29, Pages 1–32. https://doi.org/10.1145/3626961
We consider the following video activity recognition (VAR) task: given a video, infer the set of activities being performed in the video and assign each frame to an activity. Although VAR can be solved accurately using existing deep learning techniques, ...
- research-article, December 2023
LIMEADE: From AI Explanations to Advice Taking
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 24, Pages 1–29. https://doi.org/10.1145/3589345
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for ...
- research-article, December 2023
How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 25, Pages 1–34. https://doi.org/10.1145/3588594
When interacting with artificial intelligence (AI) in the medical domain, users frequently face automated information processing, which can remain opaque to them. For example, users with diabetes may interact daily with automated insulin delivery (AID). ...
- research-article, December 2023
Effects of AI and Logic-Style Explanations on Users’ Decisions Under Different Levels of Uncertainty
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 22, Pages 1–42. https://doi.org/10.1145/3588320
Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, although previous work evaluates the users’ understanding of explanations, factors influencing the decision support are largely overlooked in ...
- research-article, December 2023
Co-design of Human-centered, Explainable AI for Clinical Decision Support
- Cecilia Panigutti,
- Andrea Beretta,
- Daniele Fadda,
- Fosca Giannotti,
- Dino Pedreschi,
- Alan Perotti,
- Salvatore Rinzivillo
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 21, Pages 1–35. https://doi.org/10.1145/3587271
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models and the way such explanations are presented to users, i.e., the explanation user interface. Despite its ...
- research-article, December 2023
Directive Explanations for Actionable Explainability in Machine Learning Applications
- Ronal Singh,
- Tim Miller,
- Henrietta Lyons,
- Liz Sonenberg,
- Eduardo Velloso,
- Frank Vetere,
- Piers Howe,
- Paul Dourish
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 4, Article No.: 23, Pages 1–26. https://doi.org/10.1145/3579363
In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the ...
- research-article, September 2023
Conversational Context-sensitive Ad Generation with a Few Core-Queries
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 3, Article No.: 15, Pages 1–37. https://doi.org/10.1145/3588578
When people are talking together in front of digital signage, advertisements that are aware of the context of the dialogue are the most effective. However, it has been challenging for computer systems to retrieve the appropriate advertisement from ...
- research-article, September 2023
The Impact of Intelligent Pedagogical Agents’ Interventions on Student Behavior and Performance in Open-Ended Game Design Environments
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 3, Article No.: 11, Pages 1–29. https://doi.org/10.1145/3578523
Research has shown that free-form Game-Design (GD) environments can be very effective in fostering Computational Thinking (CT) skills at a young age. However, some students may still need guidance during the learning process due to the highly open-...
- research-article, May 2023
Combining the Projective Consciousness Model and Virtual Humans for Immersive Psychological Research: A Proof-of-concept Simulating a ToM Assessment
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 2, Article No.: 8, Pages 1–31. https://doi.org/10.1145/3583886
Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. One of the challenges is to understand and model the role of consciousness and, in particular, its subjective perspective as an ...
- research-article, April 2023
Explaining Recommendations through Conversations: Dialog Model and the Effects of Interface Type and Degree of Interactivity
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 2, Article No.: 6, Pages 1–47. https://doi.org/10.1145/3579541
Explaining system-generated recommendations based on user reviews can foster users’ understanding and assessment of the recommended items and the recommender system (RS) as a whole. While up to now explanations have mostly been static, shown in a single ...
- research-article, March 2023
The Influence of Personality Traits on User Interaction with Recommendation Interfaces
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 13, Issue 1, Article No.: 3, Pages 1–39. https://doi.org/10.1145/3558772
Users’ personality traits can actively shape their behavior when they interact with a computer interface. However, in the area of recommender systems (RS), although personality-based RS has been extensively studied, most works focus on ...
- research-article, December 2022
On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications
- Mahsan Nourani,
- Chiradeep Roy,
- Jeremy E. Block,
- Donald R. Honeycutt,
- Tahrima Rahman,
- Eric D. Ragan,
- Vibhav Gogate
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 12, Issue 4, Article No.: 28, Pages 1–29. https://doi.org/10.1145/3531066
While eXplainable Artificial Intelligence (XAI) approaches aim to improve human-AI collaborative decision-making by improving model transparency and mental model formation, experiential factors associated with human users can cause challenges in ways ...