Christopher Lin
2022
Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics
Hyundong Cho | Chinnadhurai Sankar | Christopher Lin | Kaushik Ram Sadagopan | Shahin Shayandeh | Asli Celikyilmaz | Jonathan May | Ahmad Beirami
Findings of the Association for Computational Linguistics: EMNLP 2022
Recent works that revealed the vulnerability of dialogue state tracking (DST) models to distributional shifts have made holistic comparisons on robustness and qualitative analyses increasingly important for understanding their relative performance. We present our findings from standardized and comprehensive DST diagnoses, which have previously been sparse and uncoordinated, using our toolkit, CheckDST, a collection of robustness tests and failure mode analytics. We discover that different classes of DST models have clear strengths and weaknesses: generation models are more promising for handling language variety, while span-based classification models are more robust to unseen entities. Prompted by this discovery, we also compare checkpoints from the same model and find that the standard practice of selecting checkpoints by validation loss/accuracy is prone to overfitting, and that each model class has distinct patterns of failure. Lastly, we demonstrate how our diagnoses motivate a pre-finetuning procedure with non-dialogue data that offers comprehensive improvements to generation models by alleviating the impact of distributional shifts through transfer learning.
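To make the kind of robustness test described here concrete, below is a minimal sketch of an invariance-style check: it measures how often a model predicts the same dialogue state for an input and a perturbed version of it (a paraphrase, a speech disfluency, or a named-entity swap). This is an illustration only, not CheckDST's actual API; `predict_state` is a hypothetical stand-in for any DST model that maps a dialogue history to a set of (slot, value) pairs.

```python
# Sketch of an invariance-style robustness check in the spirit of CheckDST.
# `predict_state` is a hypothetical hook, NOT the toolkit's real interface.

from typing import Callable, FrozenSet, List, Tuple

# A dialogue state as a set of (slot, value) pairs,
# e.g. frozenset({("hotel-area", "north"), ("hotel-stars", "4")})
DialogueState = FrozenSet[Tuple[str, str]]

def consistency(
    predict_state: Callable[[str], DialogueState],
    pairs: List[Tuple[str, str]],
) -> float:
    """Fraction of (original, perturbed) inputs that yield identical states.

    A robust model should predict the same state for both versions of each
    input, since the perturbation preserves the user's intent.
    """
    if not pairs:
        return 0.0
    matches = sum(
        predict_state(original) == predict_state(perturbed)
        for original, perturbed in pairs
    )
    return matches / len(pairs)
```

Given the findings above, one would expect a generation model to score higher on paraphrase pairs and a span-based classifier to score higher on entity-swap pairs.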
2021
Constrained Language Models Yield Few-Shot Semantic Parsers
Richard Shin | Christopher Lin | Sam Thomson | Charles Chen | Subhro Roy | Emmanouil Antonios Platanios | Adam Pauls | Dan Klein | Jason Eisner | Benjamin Van Durme
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. Our results demonstrate that with only a small amount of data and very little code to convert into English-like representations, our blueprint for rapidly bootstrapping semantic parsers leads to surprisingly effective performance on multiple community tasks, greatly exceeding baseline methods also trained on the same limited data.
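The core mechanism is constraining the language model so it can only emit strings of the controlled sublanguage. Below is a minimal sketch of that idea, assuming the sublanguage is represented as a word-level prefix trie over hand-written canonical utterances; `lm_score` is a hypothetical hook that returns a language model's score for the next word given a prefix, not any particular library's API.

```python
# Sketch of decoding constrained to a controlled sublanguage: at each step,
# the language model may only choose among words the trie (the grammar)
# permits after the current prefix.

from typing import Callable, Dict, List, Sequence

def build_trie(canonical: Sequence[str]) -> Dict:
    """Prefix trie over tokenized canonical utterances; None marks an end."""
    root: Dict = {}
    for utterance in canonical:
        node = root
        for token in utterance.split():
            node = node.setdefault(token, {})
        node[None] = {}  # end-of-utterance marker
    return root

def constrained_decode(
    lm_score: Callable[[List[str], str], float],  # hypothetical LM hook
    trie: Dict,
) -> List[str]:
    """Greedily pick the LM's highest-scoring word among those the
    sublanguage allows, stopping at the first complete canonical utterance."""
    out: List[str] = []
    node = trie
    while None not in node:
        allowed = [token for token in node if token is not None]
        best = max(allowed, key=lambda token: lm_score(out, token))
        out.append(best)
        node = node[best]
    return out
```

The decoded canonical utterance (e.g., "create an event called lunch") can then be mapped to the target meaning representation by a deterministic grammar, which is what keeps the amount of task-specific code small.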