
What Do You Mean Exactly?: Analyzing Clarification Questions in CQA

Published: 07 March 2017

Abstract

Search as a dialogue is an emerging paradigm fueled by the proliferation of mobile devices and by technological advances, e.g., in speech recognition and natural language processing. Such an interface allows search systems to engage in a dialogue with users aimed at fulfilling their information needs. One key capability required to make such search dialogues effective is proactively asking clarification questions (CLARQ) when a user's intent is unclear, which helps the system provide more useful responses. With this in mind, we explore the dialogues between users on a community question answering (CQA) website as a rich repository of information-seeking interactions. In particular, we study the clarification questions asked by CQA users in two different domains, analyzing user behavior and the types of clarification questions asked. Our results suggest that the types of CLARQ are very diverse, while the questions themselves tend to be specific and require both domain-specific and general knowledge. However, focusing on popular CLARQ types and domains can be fruitful. As a first step towards automatic generation of clarification questions, we explore the problem of predicting the specific subject of a clarification question. Our findings can inform future improvements of intelligent dialogue search and question answering systems.
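The final task the abstract mentions, predicting the specific subject of a clarification question, can be framed as text classification over the original question. A minimal sketch under that framing, using invented toy data and subject labels (none of it from the paper's actual CQA corpus), might score candidate subjects by bag-of-words overlap with previously seen questions:

```python
from collections import Counter, defaultdict

# Toy training pairs: (user question, subject its clarification question asked about).
# All examples are hypothetical illustrations, not data from the paper.
TRAIN = [
    ("my laptop won't turn on", "power source"),
    ("laptop screen stays black after boot", "external display"),
    ("which camera should I buy", "budget"),
    ("best lens for portraits", "camera model"),
    ("battery drains fast on my phone", "phone model"),
]

def train(pairs):
    """Build a per-subject unigram count model (bag of words)."""
    counts = defaultdict(Counter)
    for text, label in pairs:
        counts[label].update(text.lower().split())
    return counts

def predict_subject(model, question):
    """Pick the subject whose training vocabulary best overlaps the question."""
    words = question.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
print(predict_subject(model, "my laptop will not turn on"))  # → power source
```

A real system would need far richer features (the abstract notes that CLARQ are specific and require both domain and general knowledge), but the sketch shows the basic shape of the prediction problem.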



Published In

CHIIR '17: Proceedings of the 2017 Conference on Human Information Interaction and Retrieval
March 2017
454 pages
ISBN:9781450346771
DOI:10.1145/3020165
  • Conference Chairs: Ragnar Nordlie, Nils Pharo
  • Program Chairs: Luanne Freund, Birger Larsen, Dan Russell
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. community qa
  2. conversational answer retrieval
  3. cqa
  4. dialog systems
  5. interactive qa
  6. qa
  7. question answering

Qualifiers

  • Short-paper

Conference

CHIIR '17

Acceptance Rates

CHIIR '17 Paper Acceptance Rate: 10 of 48 submissions, 21%.
Overall Acceptance Rate: 55 of 163 submissions, 34%.
