Narrowing the Gap between Supervised and Unsupervised Sentence Representation Learning with Large Language Model

Authors

  • Mingxin Li, SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China
  • Richong Zhang, SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China; Zhongguancun Laboratory, Beijing, China
  • Zhijie Nie, SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China; Shen Yuan Honors College, Beihang University, Beijing, China
  • Yongyi Mao, School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada

DOI:

https://doi.org/10.1609/aaai.v38i12.29263

Keywords:

ML: Representation Learning, NLP: Sentence-level Semantics, Textual Inference, etc.

Abstract

Sentence Representation Learning (SRL) is a fundamental task in Natural Language Processing (NLP), with Contrastive Learning of Sentence Embeddings (CSE) being the mainstream technique due to its superior performance. An intriguing phenomenon in CSE is the significant performance gap between supervised and unsupervised methods, whose only difference lies in the training data. Previous works attribute this performance gap to differences in two representation properties (alignment and uniformity). However, since alignment and uniformity only measure the results, they fail to answer "What aspects of the training data contribute to the performance gap?" and "How can the performance gap be narrowed?" In this paper, we conduct empirical experiments to answer these "What" and "How" questions. We first answer the "What" question by thoroughly comparing the behavior of supervised and unsupervised CSE during their respective training processes. From the comparison, we identify the similarity pattern as a key factor in the performance gap, and introduce a metric, called Relative Fitting Difficulty (RFD), to measure the complexity of the similarity pattern. Then, based on the insights gained from the "What" question, we tackle the "How" question by increasing the pattern complexity of the training data. We achieve this by leveraging the In-Context Learning (ICL) capability of a Large Language Model (LLM) to generate data that simulates complex patterns. By utilizing the hierarchical patterns in the LLM-generated data, we effectively narrow the gap between supervised and unsupervised CSE. We release our code and appendix at https://github.com/BDBC-KG-NLP/NGCSE.
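The abstract only summarizes the ICL-based data generation step. The sketch below illustrates one possible shape of such generation: prompting an LLM with a few-shot example so that it produces an entailment (positive) and a contradiction (hard negative) for an unlabeled premise, yielding NLI-style triplets of the kind used in supervised CSE. The prompt wording, the function names `build_prompt` and `generate_triplet`, and the abstract `llm` callable are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch (not the paper's implementation): use few-shot ICL
# prompting to turn an unlabeled sentence into a (premise, positive, negative)
# triplet suitable for contrastive training.
from typing import Callable, Tuple

# One in-context demonstration showing the expected output format.
FEW_SHOT = """\
Premise: A man is playing a guitar on stage.
Entailment: A person is performing music.
Contradiction: The stage is completely empty.
"""

def build_prompt(premise: str) -> str:
    """Compose an ICL prompt asking for one entailed and one contradicting sentence."""
    return (
        "For the premise, write one sentence it entails and one it contradicts.\n\n"
        f"{FEW_SHOT}\n"
        f"Premise: {premise}\n"
    )

def generate_triplet(premise: str, llm: Callable[[str], str]) -> Tuple[str, str, str]:
    """`llm` is any text-completion function; parsing assumes the few-shot format above."""
    reply = llm(build_prompt(premise))
    positive, negative = "", ""
    for line in reply.splitlines():
        if line.startswith("Entailment:"):
            positive = line.split(":", 1)[1].strip()
        elif line.startswith("Contradiction:"):
            negative = line.split(":", 1)[1].strip()
    return premise, positive, negative
```

Triplets produced this way could then feed a standard contrastive objective, with the entailment as the positive pair and the contradiction as a hard negative.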

Published

2024-03-24

How to Cite

Li, M., Zhang, R., Nie, Z., & Mao, Y. (2024). Narrowing the Gap between Supervised and Unsupervised Sentence Representation Learning with Large Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13590-13599. https://doi.org/10.1609/aaai.v38i12.29263

Section

AAAI Technical Track on Machine Learning III