DOI: 10.1145/3623809.3623856

Effects of Automated Misinformation Warning Labels on the Intents to Like, Comment and Share Posts

Published: 04 December 2023

Abstract

Because professional fact-checking is difficult to scale on social media, algorithmic techniques have been considered. However, it is uncertain how the public may react to labels produced by automated fact-checkers. In this study, we examine automated warning labels derived from the misinformation detection literature and investigate their effects on three forms of post engagement. Focusing on political posts, we also consider how partisanship affects engagement. In a two-phase within-subjects experiment with 200 participants, we found that generic warning labels suppressed the intents to comment on and share posts, but not the intent to like them. Furthermore, when different reasons for the labels were provided, their effects on post engagement were inconsistent, suggesting that the reasons could instead have undesirably motivated engagement. Partisanship effects were observed across the labels, with higher engagement for politically congruent posts. We discuss the implications for the design and use of automated warning labels.

    Published In

    HAI '23: Proceedings of the 11th International Conference on Human-Agent Interaction
    December 2023
    506 pages
    ISBN:9798400708244
    DOI:10.1145/3623809

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. automated fact-checking
    2. misinformation
    3. partisanship
    4. social media engagement
    5. warning labels

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    HAI '23

    Acceptance Rates

    Overall Acceptance Rate 121 of 404 submissions, 30%
