DOI: 10.1145/2872427.2883070

Using Hierarchical Skills for Optimized Task Assignment in Knowledge-Intensive Crowdsourcing

Published: 11 April 2016

Abstract

Besides simple human intelligence tasks such as image labeling, crowdsourcing platforms increasingly offer tasks that require very specific skills, especially in participatory science projects. In this context, there is a need to reason about the skills required for a task and the skills available in the crowd, in order to increase the quality of the results. Most existing solutions rely on unstructured tags to model skills (a vector of skills). In this paper we propose to model tasks and participants finely using a skill tree, that is, a taxonomy of skills equipped with a similarity distance between skills. This skill model makes it possible to map participants to tasks in a way that exploits the natural hierarchy among skills. We illustrate the effectiveness of our model and algorithms through extensive experiments with synthetic and real data sets.
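
The idea of a skill taxonomy with a distance between skills, used to map participants to tasks, can be pictured with a small sketch. The Python snippet below is only an illustration under assumed representations (a taxonomy stored as parent pointers, a distance counted in edges through the lowest common ancestor, and a greedy one-to-one mapping); the class and function names are ours and do not come from the paper's actual algorithms.

    # Illustrative sketch, not the paper's algorithm: a skill taxonomy with a
    # depth-based distance, and a greedy mapping of participants to tasks that
    # prefers the participant whose skill is closest in the tree.

    class SkillTree:
        def __init__(self, edges):
            # edges: list of (parent, child) pairs describing the taxonomy
            self.parent = {}
            for p, c in edges:
                self.parent[c] = p
                self.parent.setdefault(p, None)   # roots point to None

        def _ancestors(self, skill):
            # Chain from the skill up to the root, including the skill itself.
            chain = []
            while skill is not None:
                chain.append(skill)
                skill = self.parent[skill]
            return chain

        def distance(self, a, b):
            # Number of edges between two skills via their lowest common ancestor.
            anc_a = self._ancestors(a)
            anc_b = self._ancestors(b)
            anc_b_set = set(anc_b)
            for i, node in enumerate(anc_a):
                if node in anc_b_set:             # lowest common ancestor found
                    return i + anc_b.index(node)
            raise ValueError("skills are not in the same tree")


    def greedy_assign(tasks, participants, tree):
        # Assign each task the still-free participant whose skill is closest
        # in the tree to the task's required skill.
        assignment, free = {}, set(participants)
        for task, required_skill in tasks.items():
            best = min(free, key=lambda p: tree.distance(required_skill, participants[p]))
            assignment[task] = best
            free.remove(best)
        return assignment


    if __name__ == "__main__":
        tree = SkillTree([
            ("science", "biology"), ("science", "astronomy"),
            ("biology", "ornithology"), ("biology", "botany"),
        ])
        tasks = {"t1": "ornithology", "t2": "astronomy"}
        workers = {"alice": "botany", "bob": "astronomy"}
        print(greedy_assign(tasks, workers, tree))  # {'t1': 'alice', 't2': 'bob'}

In this toy example, no participant holds the exact skill "ornithology", but the tree distance ranks "botany" (a sibling under "biology") closer than "astronomy", so the botanist is preferred; a flat tag-based model would treat both mismatches as equally bad.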


Published In

WWW '16: Proceedings of the 25th International Conference on World Wide Web
April 2016
1482 pages
ISBN: 978-1-4503-4143-1

Sponsors

  • IW3C2: International World Wide Web Conference Committee


Publisher

International World Wide Web Conferences Steering Committee

Republic and Canton of Geneva, Switzerland


Author Tags

  1. crowdsourcing
  2. skill modeling
  3. task mapping

Qualifiers

  • Research-article

Conference

WWW '16: 25th International World Wide Web Conference
Sponsor: IW3C2
April 11 - 15, 2016
Montréal, Québec, Canada

Acceptance Rates

WWW '16 Paper Acceptance Rate 115 of 727 submissions, 16%;
Overall Acceptance Rate 1,899 of 8,196 submissions, 23%
