We propose Chunk-LEvel Multi-reference Evaluation (CLEME), designed to evaluate GEC systems in the multi-reference evaluation setting.
TL;DR: A novel metric designed to debias multi-reference evaluation for grammatical error correction (GEC), supporting two distinct evaluation hypotheses.
Our proposed CLEME approach consistently and substantially outperforms existing reference-based GEC metrics on multiple reference sets in both ...
We employ CLEME and ChERRANT to evaluate the correction performance. Both are edit-based metrics that output P/R/F0.5 scores, and they have been proven ...
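For context on the scores these edit-based metrics report, the sketch below shows how precision, recall, and F0.5 are typically derived from true-positive, false-positive, and false-negative edit counts; the counts used here are illustrative placeholders, not values from the paper.

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5) -> tuple[float, float, float]:
    """Compute precision, recall, and F_beta from edit counts.

    With beta = 0.5, precision is weighted more heavily than recall,
    the convention used by edit-based GEC metrics.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

# Illustrative counts only: 30 correct edits, 10 spurious edits, 20 missed edits.
p, r, f05 = f_beta(tp=30, fp=10, fn=20)
print(f"P={p:.3f} R={r:.3f} F0.5={f05:.3f}")
```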
(1) We propose CLEME, a reference-based metric that evaluates GEC systems at the chunk level, aiming to provide unbiased F0.5 scores for GEC ...
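The toy sketch below illustrates the general idea of chunk-level scoring: source, hypothesis, and reference sentences are segmented into aligned chunks, and each chunk contributes a true positive, false positive, or false negative depending on whether the hypothesis's change matches the reference. The segmentation, function name, and counting rules here are hypothetical simplifications, not CLEME's actual algorithm.

```python
def score_chunks(chunks):
    """Count TP/FP/FN over pre-aligned (source, hypothesis, reference) chunks.

    A chunk counts as 'corrected' if it differs from its source span.
    Simplified illustration only: a mismatched correction is counted as a
    single false positive here, whereas real metrics may handle it differently.
    """
    tp = fp = fn = 0
    for src, hyp, ref in chunks:
        hyp_changed = hyp != src
        ref_changed = ref != src
        if hyp_changed and hyp == ref:
            tp += 1  # correction matches the reference
        elif hyp_changed:
            fp += 1  # unnecessary or wrong correction
        elif ref_changed:
            fn += 1  # needed correction that the hypothesis missed
    return tp, fp, fn

# Hypothetical aligned chunks for one sentence.
chunks = [
    ("He go", "He goes", "He goes"),          # TP: matches the reference correction
    ("to school", "to school", "to school"),  # unchanged everywhere
    ("every days", "every days", "every day"),  # FN: missed correction
]
print(score_chunks(chunks))  # (1, 0, 1)
```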
The paper focuses on improving the interpretability of Grammatical Error Correction (GEC) metrics, an aspect that has received little attention in previous studies.
Ye, J., Li, Y., Zhou, Q., Li, Y., Ma, S., Zheng, H.-T., Shen, Y. CLEME: Debiasing Multi-reference Evaluation for Grammatical Error Correction. arXiv preprint ...