Impact of Training Instance Selection on Domain-Specific Entity Extraction using BERT

Eileen Salhofer, Xing Lan Liu, Roman Kern


Abstract
State-of-the-art performance on entity extraction tasks is achieved by supervised learning, specifically by fine-tuning pretrained language models such as BERT. As a result, annotating application-specific data is the first step in many use cases. However, no practical guidelines are available for annotation requirements. This work supports practitioners by empirically answering the frequently asked questions: (1) how many training samples to annotate? (2) which examples to annotate? We found that BERT achieves up to 80% F1 when fine-tuned on only 70 training examples, particularly in the biomedical domain. The key features for guiding the selection of high-performing training instances are identified to be pseudo-perplexity and sentence length. The best training dataset constructed using our proposed selection strategy achieves an F1 score equivalent to that of a random selection with twice the sample size. Requiring only a small amount of training data implies cheaper implementations and opens the door to a wider range of applications.
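The abstract names pseudo-perplexity and sentence length as the key features for choosing training instances. As a minimal illustrative sketch, not the paper's actual method, the routine below ranks candidate sentences by pseudo-perplexity normalized by sentence length and keeps the top n; the combined scoring rule and the preference for low normalized pseudo-perplexity are assumptions for illustration only.

```python
def select_instances(sentences, pseudo_ppls, n):
    """Pick n training instances from candidates.

    sentences   -- list of candidate sentence strings
    pseudo_ppls -- pseudo-perplexity of each sentence under a masked LM
                   (assumed precomputed; lower = more typical text)
    n           -- number of instances to select

    Illustrative assumption: rank by pseudo-perplexity divided by
    token count, so longer, more typical sentences are preferred.
    """
    def score(pair):
        sent, ppl = pair
        length = max(len(sent.split()), 1)  # crude whitespace tokenization
        return ppl / length

    ranked = sorted(zip(sentences, pseudo_ppls), key=score)
    return [sent for sent, _ in ranked[:n]]


# Toy usage with made-up pseudo-perplexity values:
candidates = ["a b c", "a", "a b c d e f"]
ppls = [6.0, 10.0, 6.0]
selected = select_instances(candidates, ppls, 2)
```

In practice, the pseudo-perplexity of each sentence would be computed by masking each token in turn with a masked language model (e.g. BERT) and averaging the resulting negative log-likelihoods.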
Anthology ID:
2022.naacl-srw.11
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Month:
July
Year:
2022
Address:
Hybrid: Seattle, Washington + Online
Editors:
Daphne Ippolito, Liunian Harold Li, Maria Leonor Pacheco, Danqi Chen, Nianwen Xue
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
83–88
URL:
https://aclanthology.org/2022.naacl-srw.11
DOI:
10.18653/v1/2022.naacl-srw.11
Cite (ACL):
Eileen Salhofer, Xing Lan Liu, and Roman Kern. 2022. Impact of Training Instance Selection on Domain-Specific Entity Extraction using BERT. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 83–88, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
Cite (Informal):
Impact of Training Instance Selection on Domain-Specific Entity Extraction using BERT (Salhofer et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-srw.11.pdf
Video:
https://aclanthology.org/2022.naacl-srw.11.mp4
Code:
tugraz-isds/kd
Data:
BC5CDR, CoNLL 2003