
Decision tree and instance-based learning for label ranking

Published: 14 June 2009

Abstract

The label ranking problem consists of learning a model that maps instances to total orders over a finite set of predefined labels. This paper introduces new methods for label ranking that complement and improve upon existing approaches. More specifically, we propose extensions of two methods that have been used extensively for classification and regression so far, namely instance-based learning and decision tree induction. The unifying element of the two methods is a procedure for locally estimating predictive probability models for label rankings.
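To make the instance-based idea concrete, here is a minimal sketch of a nearest-neighbor label ranker. It is an assumption-laden simplification: the paper locally estimates a predictive probability model over rankings (e.g., a Mallows model), whereas this sketch merely aggregates the neighbors' rankings with a Borda-style average of rank positions. The function name, data layout, and toy data are all illustrative, not from the paper.

```python
import numpy as np

def knn_label_rank(X_train, rankings, x, k=3):
    """Predict a label ranking for instance x from its k nearest neighbors.

    rankings[i][j] = rank position of label j for training instance i
    (0 = most preferred). Aggregation here is a simple Borda-style mean of
    rank positions; the paper instead fits a local probability model.
    """
    # Euclidean distances from x to every training instance.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training instances.
    nn = np.argsort(dists)[:k]
    # Average rank position per label over the neighborhood:
    # lower mean position = more preferred.
    borda = rankings[nn].mean(axis=0)
    # Return labels ordered from most to least preferred.
    return np.argsort(borda)

# Toy example: 3 labels, 4 one-dimensional instances in two clusters.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
R = np.array([[0, 1, 2],
              [1, 2, 0],
              [2, 1, 0],
              [2, 0, 1]])
print(knn_label_rank(X, R, np.array([0.05]), k=2))  # → [0 2 1]
```

A query near the first cluster inherits a consensus of that cluster's rankings; replacing the Borda average with a locally fitted Mallows model is what yields the probabilistic predictions the abstract refers to.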


Published In

ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning
June 2009
1331 pages
ISBN:9781605585161
DOI:10.1145/1553374

Sponsors

  • NSF
  • Microsoft Research
  • MITACS

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate 140 of 548 submissions, 26%
