Revisiting self-training for few-shot learning of language model

Y Chen, Y Zhang, C Zhang, G Lee, R Cheng… - arXiv preprint arXiv …, 2021 - arxiv.org
arXiv preprint arXiv:2110.01256, 2021 - arxiv.org
As unlabeled data carry rich task-relevant information, they have proven useful for few-shot learning of language models. The question is how to make effective use of such data. In this work, we revisit the self-training technique for language model fine-tuning and present a state-of-the-art prompt-based few-shot learner, SFLM. Given two views of a text sample obtained via weak and strong augmentation, SFLM generates a pseudo label on the weakly augmented version. The model is then fine-tuned to predict the same pseudo label on the strongly augmented version. This simple approach is shown to outperform other state-of-the-art supervised and semi-supervised counterparts on six sentence classification and six sentence-pair classification benchmark tasks. In addition, SFLM relies on only a few in-domain unlabeled samples. We conduct a comprehensive analysis to demonstrate the robustness of our proposed approach under various settings, including augmentation techniques, model scale, and few-shot knowledge transfer across tasks.
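The abstract describes a consistency-based self-training objective: a pseudo label produced on the weakly augmented view supervises the prediction on the strongly augmented view. The sketch below illustrates that idea in a minimal, hedged form; it is not the authors' implementation, and the `model`, input tensors, and confidence `threshold` are assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def self_training_loss(model, weak_inputs, strong_inputs, threshold=0.95):
    """Sketch of a pseudo-labeling consistency loss on one unlabeled batch.

    `model` is assumed to map a batch of inputs to classification logits;
    `weak_inputs` and `strong_inputs` are two augmented views of the same
    unlabeled examples (hypothetical tensors). The confidence threshold is
    an assumption, not a value taken from the paper.
    """
    with torch.no_grad():
        # Pseudo label from the weakly augmented view.
        weak_probs = F.softmax(model(weak_inputs), dim=-1)
        confidence, pseudo_labels = weak_probs.max(dim=-1)
        # Keep only confident pseudo labels.
        mask = (confidence >= threshold).float()

    # Encourage the same prediction on the strongly augmented view.
    strong_logits = model(strong_inputs)
    per_example = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```

In practice this term would be added to the supervised loss on the few labeled examples, with the masking step preventing low-confidence pseudo labels from dominating early in fine-tuning.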