Masking Actor Information Leads to Fairer Political Claims Detection

Erenay Dayanik, Sebastian Padó

Abstract
A central concern in Computational Social Sciences (CSS) is fairness: where the role of NLP is to scale up text analysis to large corpora, the quality of automatic analyses should be as independent as possible of textual properties. We analyze the performance of a state-of-the-art neural model on the task of political claims detection (i.e., the identification of forward-looking statements made by political actors) and identify a strong frequency bias: claims made by frequent actors are recognized better. We propose two simple debiasing methods that mask proper names and pronouns during model training, thus removing personal information bias. We find that (a) these methods significantly decrease frequency bias while keeping the overall performance stable, and (b) the resulting models improve when evaluated in an out-of-domain setting.
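
To illustrate the debiasing idea, the sketch below shows one way such actor masking could be implemented as a preprocessing step. This is not the authors' code: it assumes spaCy's en_core_web_sm pipeline and a generic [MASK] placeholder purely as illustrative choices, and it replaces every proper-name (PROPN) and pronoun (PRON) token before the text reaches the claims detection model.

import spacy

# Hypothetical preprocessing step; the paper's actual pipeline may differ.
nlp = spacy.load("en_core_web_sm")

def mask_actors(text: str, mask_token: str = "[MASK]") -> str:
    """Replace proper names and pronouns with a placeholder so the
    model cannot associate claims with specific (frequent) actors."""
    doc = nlp(text)
    pieces = []
    for tok in doc:
        # Universal POS tags: PROPN = proper noun, PRON = pronoun.
        pieces.append(mask_token if tok.pos_ in {"PROPN", "PRON"} else tok.text)
        pieces.append(tok.whitespace_)  # preserve original spacing
    return "".join(pieces)

print(mask_actors("Merkel announced that she would tighten asylum rules."))
# -> [MASK] announced that [MASK] would tighten asylum rules.

Masking at the token level keeps sentence structure intact, so the classifier can still exploit syntactic and lexical claim cues while the actor's identity is hidden.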
Anthology ID:
2020.acl-main.404
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4385–4391
URL:
https://aclanthology.org/2020.acl-main.404
DOI:
10.18653/v1/2020.acl-main.404
Cite (ACL):
Erenay Dayanik and Sebastian Padó. 2020. Masking Actor Information Leads to Fairer Political Claims Detection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4385–4391, Online. Association for Computational Linguistics.
Cite (Informal):
Masking Actor Information Leads to Fairer Political Claims Detection (Dayanik & Padó, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.404.pdf
Video:
http://slideslive.com/38928894