DOI: 10.1145/3331184.3331279

On Topic Difficulty in IR Evaluation: The Effect of Systems, Corpora, and System Components

Published: 18 July 2019

Abstract

In a test collection setting, topic difficulty can be defined as the average effectiveness of a set of systems for a topic. In this paper we study how topic difficulty is affected by: (i) the set of retrieval systems; (ii) the underlying document corpus; and (iii) the system components. By generalizing methods recently proposed for system component factor analysis, we perform a comprehensive analysis of topic difficulty and the relative effects of systems, corpora, and component interactions. Our findings show that corpora have the most significant effect on topic difficulty.
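
To make this definition concrete, the following minimal Python sketch (not taken from the paper; the systems, topics, and effectiveness scores are invented placeholders) computes topic difficulty as the mean effectiveness of a set of systems on each topic:

    import numpy as np

    # Hypothetical per-topic effectiveness scores (e.g., Average Precision)
    # for three systems over four topics. The numbers are invented for
    # illustration, not taken from the paper.
    effectiveness = np.array([
        [0.42, 0.10, 0.31, 0.58],  # system A
        [0.55, 0.08, 0.40, 0.61],  # system B
        [0.38, 0.12, 0.27, 0.49],  # system C
    ])

    # Topic difficulty, per the definition above: the average effectiveness
    # of the set of systems on each topic. Lower means indicate harder topics.
    topic_difficulty = effectiveness.mean(axis=0)

    for topic, score in enumerate(topic_difficulty, start=1):
        print(f"topic {topic}: mean effectiveness = {score:.3f}")

Computing this vector separately for different system sets or different corpora, and comparing the results, is the kind of comparison the paper carries out at scale.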

Published In

SIGIR '19: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2019, 1512 pages
ISBN: 9781450361729
DOI: 10.1145/3331184

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. information retrieval evaluation
    2. system component analysis
    3. test collection design

    Qualifiers

    • Short-paper

Conference

SIGIR '19

Acceptance Rates

SIGIR '19 paper acceptance rate: 84 of 426 submissions (20%)
Overall acceptance rate: 792 of 3,983 submissions (20%)
