DOI: 10.1145/3121050.3121095

Merge-Tie-Judge: Low-Cost Preference Judgments with Ties

Published: 01 October 2017

Abstract

Preference judgments have been demonstrated to yield more accurate labels than graded judgments, and they forgo the need to define relevance grades upfront. These benefits, however, come at the cost of requiring a larger number of judgments. By exploiting the transitivity of preferences, prior research successfully reduced the overall number of preference judgments required to O(N log N) for N documents, which is still prohibitive in practice. In this work, we reduce the number of judgments further by allowing for ties and exploiting the fact that tied documents naturally form clusters. Our novel judgment mechanism, Merge-Tie-Judge, exploits this "clustering effect" by automatically inferring preferences between documents from different clusters. Experiments on relevance judgments from the TREC Web Track show that the proposed mechanism requires fewer judgments.
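The abstract's core idea — ties collapse documents into clusters, and a preference judged once between two clusters can be inferred for all of their members — can be sketched with a merge-sort-style judging loop. The following is a minimal illustration under stated assumptions, not the paper's actual Merge-Tie-Judge mechanism: the `GRADES` oracle (a stand-in for a human assessor), the union-find clustering, and the `known`-preference memo are all invented for this sketch.

```python
# Sketch of tie-aware preference judging (NOT the paper's exact algorithm):
# documents are ranked with merge sort, tied documents collapse into
# clusters via union-find, and any preference already recorded between
# two clusters is reused instead of asking the assessor again.

# Hypothetical relevance grades standing in for a human assessor:
# equal grades mean "tied", a higher grade means "preferred".
GRADES = {"d1": 3, "d2": 1, "d3": 3, "d4": 0,
          "d5": 1, "d6": 2, "d7": 0, "d8": 2}

parent = {d: d for d in GRADES}   # union-find forest over documents
known = {}                        # (root_a, root_b) -> -1 (a wins) / 1 (b wins)
judgment_count = 0                # judgments actually requested from the oracle


def find(d):
    """Representative of d's tie cluster (with path compression)."""
    while parent[d] != d:
        parent[d] = parent[parent[d]]
        d = parent[d]
    return d


def union(ra, rb):
    """Merge cluster rb into ra and re-key rb's recorded preferences."""
    parent[rb] = ra
    for (x, y), v in list(known.items()):
        if rb in (x, y):
            known.pop((x, y))
            known[(ra if x == rb else x, ra if y == rb else y)] = v


def judge(a, b):
    """-1 if a is preferred, 1 if b is, 0 on a tie; consults the
    'assessor' only when the answer cannot be inferred."""
    global judgment_count
    ra, rb = find(a), find(b)
    if ra == rb:                  # same tie cluster: tie inferred for free
        return 0
    if (ra, rb) in known:         # cross-cluster preference already recorded
        return known[(ra, rb)]
    judgment_count += 1           # a real judgment is needed
    if GRADES[a] == GRADES[b]:
        union(ra, rb)             # a tie merges the two clusters
        return 0
    r = -1 if GRADES[a] > GRADES[b] else 1
    known[(ra, rb)], known[(rb, ra)] = r, -r
    return r


def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if judge(left[i], right[j]) <= 0:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]


def merge_sort(docs):
    if len(docs) <= 1:
        return list(docs)
    mid = len(docs) // 2
    return merge(merge_sort(docs[:mid]), merge_sort(docs[mid:]))


ranking = merge_sort(list(GRADES))
print(ranking, judgment_count)
```

In this toy run, several of the pairwise questions raised during the sort are answered without consulting the assessor, because the answer follows from an earlier tie (same cluster) or from a preference already recorded between the two clusters — the saving the abstract attributes to the clustering effect.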



Published In

ICTIR '17: Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval
October 2017, 348 pages
ISBN: 9781450344906
DOI: 10.1145/3121050


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. low-cost evaluation
  2. preference judgments
  3. ties

Qualifiers

  • Short-paper

Conference

ICTIR '17

Acceptance Rates

ICTIR '17 paper acceptance rate: 27 of 54 submissions (50%)
Overall acceptance rate: 235 of 527 submissions (45%)
