Nov 2, 2023 · Abstract: NLP models are used in a variety of critical social computing tasks, such as detecting sexist, racist, or otherwise hateful content.
This work explores ways to automatically generate counterfactually augmented data (CAD) using Polyjuice, ChatGPT, and Flan-T5, and evaluates its usefulness for improving model robustness compared to manually generated CAD.
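As a rough illustration of how such a pipeline works, the sketch below builds a minimal-edit instruction prompt for an LLM-based counterfactual generator. The prompt wording and label names here are hypothetical stand-ins, not the prompts used in the paper; Polyjuice in particular works differently, using control codes (e.g. negation, lexical swaps) rather than free-form instructions.

```python
# Hypothetical sketch of LLM-based counterfactual data augmentation (CAD):
# given a labeled text, ask an instruction-tuned model for a minimal edit
# that flips the label. Prompt wording is illustrative only.

LABELS = {"hateful", "non-hateful"}

def flip_label(label: str) -> str:
    """Return the opposite class for a binary harmful-language task."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    return "non-hateful" if label == "hateful" else "hateful"

def cad_prompt(text: str, label: str) -> str:
    """Build an instruction prompt asking for a minimal counterfactual edit."""
    target = flip_label(label)
    return (
        f"The following text is labeled '{label}'.\n"
        f"Rewrite it with as few word changes as possible so that it "
        f"becomes '{target}', keeping the rest of the meaning intact.\n\n"
        f"Text: {text}\nRewritten text:"
    )

# The resulting prompt string would then be sent to an instruction-tuned
# model such as ChatGPT or Flan-T5, and the model's rewrite paired with the
# flipped label as a new training example.
prompt = cad_prompt("Those people are awful.", "hateful")
```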
People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection. Mattia Samory and co-authors. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023).