DOI: 10.1145/2499178.2499201

Query-Performance Prediction Using Minimal Relevance Feedback

Published: 29 September 2013

Abstract

There has been much work on devising query-performance prediction approaches that estimate search effectiveness without relevance judgments (i.e., with zero feedback). Specifically, post-retrieval predictors analyze the result list of top-retrieved documents. Departing from the zero-feedback approach, in this paper we show that relevance feedback for even a very few top-ranked documents can be exploited to dramatically improve prediction quality. Specifically, applying state-of-the-art zero-feedback predictors to only a few relevant documents, rather than to the entire result list as originally designed, substantially improves prediction quality. This novel form of prediction is based on quantifying properties of relevant documents that attest to query performance. We also show that integrating prediction based on relevant documents with zero-feedback prediction is highly effective; in particular, it compares favorably with state-of-the-art direct estimates of retrieval effectiveness that utilize the same minimal feedback.
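
To make the idea concrete, below is a minimal sketch, not the authors' implementation, of one way a zero-feedback predictor can be restricted to feedback documents: a Clarity-style score (in the spirit of Cronen-Townsend et al.'s Clarity predictor, one well-known zero-feedback predictor) computed over only the few top-ranked documents judged relevant, rather than over the entire result list. All names here (clarity_over_relevant, feedback_docs, corpus_term_counts) are illustrative assumptions.

    import math
    from collections import Counter

    def clarity_over_relevant(feedback_docs, corpus_term_counts, corpus_size):
        """Clarity-style predictor over a few judged-relevant documents.

        feedback_docs: tokenized documents (lists of terms) judged relevant.
        corpus_term_counts: term -> collection frequency.
        corpus_size: total number of term occurrences in the collection.
        """
        # Maximum-likelihood language model over the feedback documents.
        fb_counts = Counter()
        for doc in feedback_docs:
            fb_counts.update(doc)
        fb_len = sum(fb_counts.values())
        if fb_len == 0:
            return 0.0

        # KL divergence between the feedback-document model and the
        # collection model; a sharper (higher-divergence) model is taken
        # as evidence of better query performance.
        score = 0.0
        for term, count in fb_counts.items():
            p_fb = count / fb_len                                 # P(t | relevant docs)
            p_bg = corpus_term_counts.get(term, 0) / corpus_size  # P(t | collection)
            if p_bg > 0:
                score += p_fb * math.log(p_fb / p_bg, 2)
        return score

    # Example: prediction from just two judged-relevant documents.
    relevant = [["neural", "retrieval", "ranking"], ["retrieval", "evaluation"]]
    collection = {"neural": 50, "retrieval": 200, "ranking": 80,
                  "evaluation": 120, "the": 10000}
    print(clarity_over_relevant(relevant, collection, corpus_size=10450))

Under this sketch, with zero feedback the same divergence would instead be computed over all top-retrieved documents; restricting it to the few documents judged relevant is what the minimal-feedback approach exploits.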

    Published In

    ICTIR '13: Proceedings of the 2013 Conference on the Theory of Information Retrieval
    September 2013
    148 pages
ISBN: 9781450321075
DOI: 10.1145/2499178

    Sponsors

• Findwise AB
• Google Inc.
• Spinque
• University of Copenhagen
• LARM Audio Research Archive
• Royal School of Library and Information Science
• Yahoo! Labs

    Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tag

    1. query-performance prediction

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICTIR '13

    Acceptance Rates

ICTIR '13 paper acceptance rate: 11 of 51 submissions, 22%
Overall acceptance rate: 235 of 527 submissions, 45%
