
Query-performance prediction: setting the expectations straight

Published: 03 July 2014

Abstract

The query-performance prediction task has been described as estimating retrieval effectiveness in the absence of relevance judgments. The expectations throughout the years were that improved prediction techniques would translate to improved retrieval approaches. However, this has not yet happened. Herein we provide an in-depth analysis of why this is the case. To this end, we formalize the prediction task in the most general probabilistic terms. Using this formalism we draw novel connections between tasks --- and methods used to address these tasks --- in federated search, fusion-based retrieval, and query-performance prediction. Furthermore, using formal arguments we show that the ability to estimate the probability of effective retrieval with no relevance judgments (i.e., to predict performance) implies knowledge of how to perform effective retrieval. We also explain why the expectation that using previously proposed query-performance predictors would help to improve retrieval effectiveness was not realized. This is due to a misalignment with the actual goal for which these predictors were devised: ranking queries based on the presumed effectiveness of using them for retrieval over a corpus with a specific retrieval method. Focusing on this specific prediction task, namely query ranking by presumed effectiveness, we present a novel learning-to-rank-based approach that uses Markov Random Fields. The resultant prediction quality substantially transcends that of state-of-the-art predictors.
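The specific prediction task the abstract highlights, namely ranking queries by their presumed retrieval effectiveness, can be illustrated with a minimal sketch. The predictor below is a common post-retrieval heuristic (the standard deviation of the top-k retrieval scores, where flat score curves suggest a hard query); the data, the helper names, and the evaluation via Kendall's tau are all hypothetical illustrations, not the paper's actual MRF-based learning-to-rank method.

```python
# Hypothetical sketch: rank queries by a simple post-retrieval predictor
# (std. dev. of top-k retrieval scores), then check agreement between the
# predicted ranking and true effectiveness (AP) via Kendall's tau.
import statistics


def score_std_predictor(scores, k=5):
    """Predicted effectiveness: population std. dev. of the top-k scores.

    Intuition: a sharp drop-off in scores suggests the query clearly
    separates relevant from non-relevant documents; flat scores suggest
    a hard query.
    """
    top = sorted(scores, reverse=True)[:k]
    return statistics.pstdev(top)


def kendall_tau(xs, ys):
    """Kendall's tau-a between two paired lists (no tie correction)."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)


# Toy data: per-query retrieval scores and held-out true AP values.
retrieval_scores = {
    "q1": [9.1, 8.7, 5.2, 3.3, 2.9, 1.0],  # sharp drop -> predicted easy
    "q2": [4.0, 3.9, 3.8, 3.7, 3.6, 3.5],  # flat scores -> predicted hard
    "q3": [7.5, 6.0, 4.1, 2.2, 1.8, 1.1],
}
true_ap = {"q1": 0.62, "q2": 0.11, "q3": 0.40}

queries = sorted(retrieval_scores)
predicted = [score_std_predictor(retrieval_scores[q]) for q in queries]
actual = [true_ap[q] for q in queries]
print(kendall_tau(predicted, actual))  # 1.0: predicted query ranking matches the true one here
```

Note that the predictor is evaluated by rank correlation over a set of queries, not by absolute accuracy per query; this mirrors the abstract's point that these predictors were devised to rank queries by presumed effectiveness, not to certify the effectiveness of any single retrieval.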




Published In

SIGIR '14: Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval
July 2014, 1330 pages
ISBN: 9781450322577
DOI: 10.1145/2600428

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. learning-to-rank
    2. query-performance prediction

    Qualifiers

    • Research-article


Acceptance Rates

SIGIR '14 Paper Acceptance Rate: 82 of 387 submissions (21%)
Overall Acceptance Rate: 792 of 3,983 submissions (20%)
