A Simulation-Based Evaluation Framework for Interactive AI Systems and Its Application

Authors

  • Maeda F. Hanafi, IBM Research AI
  • Yannis Katsis, IBM Research AI
  • Martín Santillán Cooper, IBM
  • Yunyao Li, IBM Research AI

DOI:

https://doi.org/10.1609/aaai.v36i11.21541

Keywords:

Human-in-the-loop, Interactive Machine Learning, Interactive AI, Evaluation, User Simulation

Abstract

Interactive AI (IAI) systems are increasingly popular as the human-centered AI design paradigm gains strong traction. However, evaluating IAI systems, a key step in building them, is particularly challenging, as their output depends heavily on the user actions performed. Developers often have to rely on limited and mostly qualitative data from ad-hoc user testing to assess and improve their systems. In this paper, we present InteractEva, a systematic evaluation framework for IAI systems. We also describe how we have applied InteractEva to evaluate a commercial IAI system, leading to both quality improvements and better data-driven design decisions.
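
The full InteractEva design is described in the paper itself; as a rough illustration of the simulation-based idea only, the sketch below shows how simulated user-action sequences could be replayed against an IAI system while tracking output quality after each action. All names here (IAISystem, Action, evaluate, quality_metric) are hypothetical placeholders for exposition, not the authors' API.

    # Minimal conceptual sketch of simulation-based evaluation of an
    # interactive AI (IAI) system. All names are hypothetical placeholders,
    # not the InteractEva implementation.

    from dataclasses import dataclass
    from typing import Callable, List, Sequence


    @dataclass
    class Action:
        """A single simulated user action (e.g., a label, correction, or query)."""
        kind: str
        payload: dict


    class IAISystem:
        """Stand-in interface for the interactive AI system under evaluation."""

        def update(self, action: Action) -> None:
            ...  # the system incorporates the simulated user action

        def output(self) -> object:
            ...  # the system's current result (e.g., predictions or rules)


    def evaluate(
        system_factory: Callable[[], IAISystem],
        simulated_sessions: Sequence[Sequence[Action]],
        quality_metric: Callable[[object], float],
    ) -> List[List[float]]:
        """Replay each simulated user session against a fresh system instance
        and record output quality after every action."""
        curves = []
        for session in simulated_sessions:
            system = system_factory()
            scores = []
            for action in session:
                system.update(action)               # apply the simulated action
                scores.append(quality_metric(system.output()))
            curves.append(scores)                   # quality-vs-interaction curve
        return curves

A concrete run would plug in the actual system under test and a task-specific quality metric, yielding quality-versus-interaction curves that can be compared across candidate designs instead of relying on ad-hoc user testing alone.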

Published

2022-06-28

How to Cite

Hanafi, M. F., Katsis, Y., Santillán Cooper, M., & Li, Y. (2022). A Simulation-Based Evaluation Framework for Interactive AI Systems and Its Application. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12658-12664. https://doi.org/10.1609/aaai.v36i11.21541

Issue

Vol. 36 No. 11 (2022)

Section

IAAI Technical Track on Innovative Tools for Enabling AI Application