
Crowdsourcing-Based Web Accessibility Evaluation with Golden Maximum Likelihood Inference

Published: 01 November 2018

Abstract

Web accessibility evaluation examines how well websites comply with accessibility guidelines, which help people with disabilities perceive, navigate, and contribute to the Web. This demanding task usually requires manual assessment by experts with years of training and experience. However, too few experts are available for the growing number of evaluation projects, and non-experts often disagree about the presence of accessibility barriers. To address these issues, we introduce a crowdsourcing system with a novel truth inference algorithm that derives reliable and accurate assessments from the conflicting opinions of evaluators. An extensive evaluation on 23,901 complex tasks assessed by 50 people with and without disabilities shows that our approach outperforms state-of-the-art approaches. In addition, we conducted surveys to identify the barriers that people with disabilities most frequently face in their daily lives and the difficulty of accessing Web pages when they encounter these barriers. The frequencies and severities of barriers correlate with their derived importance in our evaluation project.
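The paper's own Golden Maximum Likelihood Inference algorithm is not reproduced on this page, but the general idea behind truth inference with "golden" (known-answer) tasks can be sketched with a classical Dawid-Skene-style EM procedure: worker reliabilities and task labels are estimated jointly, while tasks with known answers are clamped so that they anchor the reliability estimates. Everything below, including the function name, data layout, and clamping scheme, is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def em_truth_inference(answers, n_tasks, n_workers, golden=None, n_iter=50):
    """Infer true binary labels from noisy worker answers via EM
    (Dawid-Skene style).  `answers` is a list of (task, worker, label)
    triples with label in {0, 1}; `golden` maps a subset of task ids to
    their known true labels, which are clamped throughout inference."""
    # Initialize the posterior over each task's true label by vote counts.
    counts = np.zeros((n_tasks, 2))
    for t, w, l in answers:
        counts[t, l] += 1
    tot = counts.sum(axis=1, keepdims=True)
    post = np.where(tot > 0, counts / np.maximum(tot, 1), 0.5)

    def clamp(p):
        # Golden tasks keep a point-mass posterior on their known label.
        if golden:
            for t, l in golden.items():
                p[t] = 0.0
                p[t, l] = 1.0
        return p

    post = clamp(post)
    for _ in range(n_iter):
        # M-step: estimate each worker's confusion matrix
        # pi[w, true_label, given_label], with additive smoothing.
        pi = np.full((n_workers, 2, 2), 1e-2)
        for t, w, l in answers:
            pi[w, :, l] += post[t]
        pi /= pi.sum(axis=2, keepdims=True)
        # E-step: recompute label posteriors from worker reliabilities.
        log_post = np.zeros((n_tasks, 2))
        for t, w, l in answers:
            log_post[t] += np.log(pi[w, :, l])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        post = clamp(post)
    return post.argmax(axis=1)
```

In this sketch an adversarial worker who systematically flips labels is identified by a confusion matrix with low diagonal mass, so the E-step learns to invert that worker's votes; golden tasks speed this up by anchoring the reliability estimates to ground truth.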

Supplementary Material

ZIP File (cscw163.zip)




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 2, Issue CSCW (November 2018), 4104 pages.
EISSN: 2573-0142
DOI: 10.1145/3290265

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. collaborative work
2. crowdsourcing
3. disability
4. evaluation system
5. user experience
6. web accessibility

Qualifiers

• Research-article

Funding Sources

• Zhejiang Provincial Natural Science Foundation of China
• National Key Technology R&D Program of China
• National Natural Science Foundation of China
