Oct 14, 2022 · We provide a novel debiasing algorithm by adjusting the predictive model's belief to (1) ignore the sensitive information if it is not useful for the task, and (2) use sensitive information minimally as necessary for the prediction, while incurring a penalty for its use.
Recent work on reducing bias in NLP models usually focuses on protecting or isolating information related to a sensitive attribute (like gender or race).
However, in an ideal situation, a model should use only the necessary amount of information, irrespective of bias, to achieve acceptable task performance.
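As a rough illustration of the idea in the snippets above (a minimal sketch, not the paper's actual formulation), the objective can be written as the ordinary task loss plus a penalty that grows only with the sensitive information the model actually relies on, so unused sensitive attributes are simply ignored. The names bias_aware_loss and bias_attribution below are illustrative assumptions, not the authors' API.

import torch
import torch.nn.functional as F

def bias_aware_loss(task_logits, labels, bias_attribution, penalty_weight=0.1):
    """Task loss plus a penalty on the model's reliance on sensitive tokens.

    task_logits:      (batch, num_classes) predictions for the original task
    labels:           (batch,) gold labels
    bias_attribution: (batch, seq_len) per-token scores estimating how much
                      each token carries sensitive information AND is used
                      by the model (e.g., attention mass on gendered words)
    """
    task_loss = F.cross_entropy(task_logits, labels)
    # The penalty is zero when sensitive tokens receive no attribution,
    # so the model is free to ignore them; it pays only for what it uses.
    exposure_penalty = bias_attribution.sum(dim=-1).mean()
    return task_loss + penalty_weight * exposure_penalty

# Toy usage with random tensors
logits = torch.randn(4, 2)
labels = torch.randint(0, 2, (4,))
attribution = torch.rand(4, 16) * 0.05
print(bias_aware_loss(logits, labels, attribution))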
Feb 21, 2024 · Controlling bias exposure for fair interpretable predictions. In Findings of the Association for Computational Linguistics: EMNLP 2022.
An interpretable debiasing algorithm produces a rationale along with the prediction for the original task, exposing the amount of bias or sensitive information the model uses.
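A hedged sketch of that "expose the bias in the rationale" idea, under assumed inputs (per-token importance scores and a hypothetical sensitive-token list; none of these names come from the paper): report both a top-k rationale and the share of importance that falls on sensitive tokens.

from typing import List, Tuple

SENSITIVE_TOKENS = {"he", "she", "his", "her", "mr", "mrs"}  # illustrative only

def rationale_with_bias_exposure(tokens: List[str],
                                 importances: List[float],
                                 top_k: int = 3) -> Tuple[List[str], float]:
    """Return the top-k rationale tokens and the fraction of total
    importance assigned to sensitive tokens (the 'bias exposure')."""
    ranked = sorted(zip(tokens, importances), key=lambda t: t[1], reverse=True)
    rationale = [tok for tok, _ in ranked[:top_k]]
    total = sum(importances) or 1.0
    exposure = sum(w for tok, w in zip(tokens, importances)
                   if tok.lower() in SENSITIVE_TOKENS) / total
    return rationale, exposure

tokens = ["she", "is", "an", "excellent", "surgeon"]
importances = [0.30, 0.05, 0.05, 0.25, 0.35]
print(rationale_with_bias_exposure(tokens, importances))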
InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions · Controlling Bias Exposure for Fair Interpretable Predictions
Sep 4, 2024 · Debiasing methods in NLP models traditionally focus on isolating information related to a sensitive attribute (like gender or race).
Controlling bias exposure for fair interpretable predictions. Z He, Y Wang, J McAuley, BP Majumder. Findings of EMNLP, 2022.
Her research primarily focuses on understanding different types of biases, stereotypes, and toxicity inherent in current NLP systems and mitigating them with ...
We show there exist algorithmic predictors that can detect novel but accurate language cues in many cases where humans fail to detect deception, opening up ...