Selection bias explorations and debias methods for natural language sentence matching datasets
arXiv preprint arXiv:1905.06221, 2019
Natural Language Sentence Matching (NLSM) has gained substantial attention from both academia and industry, and rich public datasets have contributed greatly to this progress. However, biased datasets can hurt the generalization performance of trained models and yield untrustworthy evaluation results. For many NLSM datasets, the providers select certain sentence pairs for inclusion, and this sampling procedure can easily introduce unintended patterns, i.e., selection bias. One example is the QuoraQP dataset, in which some content-independent naive features are unreasonably predictive. Such features reflect the selection bias and are termed leakage features. In this paper, we investigate the problem of selection bias on six NLSM datasets and find that four of them are significantly biased. We further propose a training and evaluation framework to alleviate the bias. Experimental results on QuoraQP suggest that the proposed framework can improve the generalization ability of trained models and give more trustworthy evaluation results for real-world adoption.
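
To make the notion of leakage features concrete, below is a minimal sketch of a bias probe in Python. It is not the paper's exact procedure: the two content-independent features used here (per-question frequency and common neighbors in the question co-occurrence graph) are assumptions modeled on the "magic features" popularized during the Kaggle Quora Question Pairs competition, and the file name quora_train.csv is a placeholder.

```python
# Sketch: probe a sentence-pair dataset for selection bias by measuring how
# predictive content-independent "leakage" features are. This is NOT the
# paper's exact protocol; the two features below are illustrative assumptions
# modeled on the well-known Kaggle Quora Question Pairs "magic features".
from collections import defaultdict

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# QuoraQP-style file with columns: question1, question2, is_duplicate
df = pd.read_csv("quora_train.csv").dropna(subset=["question1", "question2"])

# Feature 1: how often each question text appears anywhere in the dataset.
freq = defaultdict(int)
for q in pd.concat([df["question1"], df["question2"]]):
    freq[q] += 1

# Feature 2: number of common neighbors in the question co-occurrence graph,
# i.e. questions that were paired with both q1 and q2 somewhere in the data.
neighbors = defaultdict(set)
for q1, q2 in zip(df["question1"], df["question2"]):
    neighbors[q1].add(q2)
    neighbors[q2].add(q1)

X = pd.DataFrame({
    "freq_q1": [freq[q] for q in df["question1"]],
    "freq_q2": [freq[q] for q in df["question2"]],
    "common_neighbors": [
        len(neighbors[q1] & neighbors[q2])
        for q1, q2 in zip(df["question1"], df["question2"])
    ],
})
y = df["is_duplicate"]

# A classifier that never sees the sentence content. If its accuracy is far
# above the majority-class baseline, the sampling procedure has leaked label
# information into the dataset's structure, i.e., selection bias.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
baseline = max(y.mean(), 1 - y.mean())
print(f"leakage-only accuracy: {acc:.3f}  (majority baseline: {baseline:.3f})")
```

If such a content-blind classifier beats the majority baseline by a wide margin, models trained on the dataset may latch onto these structural artifacts rather than sentence semantics, which is the generalization risk the abstract describes.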