In this paper, we propose to learn hash codes from BERT embeddings after observing their tremendous successes on downstream tasks.
To remedy the issues observed when the BERT embeddings are used directly, a new unsupervised hashing paradigm is further proposed based on the mutual information (MI) maximization principle. Specifically, the approach maximizes the mutual information between the embedding of the [CLS] token and the embedding of each token in the document. Experimental results on three benchmark datasets demonstrate that, by effectively refining the BERT embeddings via the MI maximization principle, the proposed method is able to generate hash codes that outperform existing ones.
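The released implementation is in PyTorch. As a rough illustration of the MI maximization idea described above, the following is a minimal sketch, not the paper's exact objective: it uses an assumed InfoNCE-style lower bound on the mutual information between the [CLS] embedding and the token embeddings of the same document, with tokens from other documents in the batch serving as negatives; the function name, temperature, and batching scheme are all hypothetical choices.

```python
# Minimal sketch (assumed): an InfoNCE-style lower bound on the mutual
# information between the [CLS] embedding and the token embeddings of the
# same document. Tokens of other documents in the batch act as negatives.
import torch
import torch.nn.functional as F


def infonce_cls_token_mi(cls_emb: torch.Tensor,
                         tok_emb: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """cls_emb: (B, D) [CLS] embeddings; tok_emb: (B, T, D) token embeddings."""
    B, T, D = tok_emb.shape
    cls_emb = F.normalize(cls_emb, dim=-1)             # (B, D)
    tok_emb = F.normalize(tok_emb, dim=-1)             # (B, T, D)

    # Similarity of every [CLS] vector with every token of every document:
    # scores[i, j, t] = <cls_i, tok_{j,t}> / temperature
    scores = torch.einsum('id,jtd->ijt', cls_emb, tok_emb) / temperature
    logits = scores.reshape(B, B * T)                   # (B, B*T)

    # For document i, its own tokens are positives; average the loss over them.
    loss = 0.0
    for t in range(T):
        target = torch.arange(B, device=logits.device) * T + t
        loss = loss + F.cross_entropy(logits, target)
    return loss / T


# Usage with random tensors standing in for BERT outputs:
cls = torch.randn(4, 768)       # [CLS] embeddings for a batch of 4 documents
toks = torch.randn(4, 32, 768)  # 32 token embeddings per document
print(infonce_cls_token_mi(cls, toks))
```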
A PyTorch implementation of "Refining BERT Embeddings for Document Hashing via Mutual Information Maximization" (EMNLP 2021) has been released, together with the source code of the baselines used in the paper (MIT license).
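For completeness, here is a small sketch of the hashing step itself, assumed rather than taken from the released repositories: refined document embeddings are binarized into fixed-length codes and documents are ranked by Hamming distance. The random projection, median thresholding, and the helper names `to_hash_codes` and `hamming_rank` are illustrative choices, not the procedure proposed in the paper.

```python
# Minimal sketch (assumed): turn refined document embeddings into K-bit hash
# codes by thresholding each projected dimension at its corpus median, then
# rank documents by Hamming distance to a query code.
import torch


def to_hash_codes(embeddings: torch.Tensor, bits: int = 64) -> torch.Tensor:
    """embeddings: (N, D) refined embeddings -> (N, bits) codes in {0, 1}."""
    torch.manual_seed(0)                                # fixed, hypothetical projection
    proj = torch.randn(embeddings.size(1), bits)
    projected = embeddings @ proj                       # (N, bits)
    thresholds = projected.median(dim=0).values         # per-bit median threshold
    return (projected > thresholds).to(torch.uint8)


def hamming_rank(query_code: torch.Tensor, db_codes: torch.Tensor) -> torch.Tensor:
    """Return database indices sorted by Hamming distance to the query code."""
    distances = (query_code.unsqueeze(0) ^ db_codes).sum(dim=1)
    return torch.argsort(distances)


docs = torch.randn(100, 768)              # stand-in for refined BERT embeddings
codes = to_hash_codes(docs, bits=64)
print(hamming_rank(codes[0], codes)[:5])  # nearest documents to document 0
```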