Only 5% Attention Is All You Need: Efficient Long-range Document-level Neural Machine Translation. Zihan Liu, Zewei Sun, Shanbo Cheng, Shujian Huang, Mingxuan Wang. arXiv, Sep 25, 2023.
Experimental results show that the method achieves approximately 95% sparsity (only 5% of tokens attended) and saves roughly 93% of the attention computation.
The method maintains translation performance while gaining a 20% speedup by introducing an extra selection layer, based on lightweight attention, that selects a small fraction of context tokens to attend to, as sketched below.
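The following is a minimal PyTorch sketch of that selection idea under stated assumptions, not the authors' implementation: a single linear scoring layer ranks document-context tokens, roughly the top 5% are kept, and standard multi-head attention runs only over the kept tokens. The class name SelectiveAttention, the keep_ratio parameter, and the hard top-k selection are illustrative assumptions; the paper's actual architecture would also need a trainable (differentiable) selection mechanism.

```python
# Illustrative sketch only: a lightweight selector keeps ~5% of context
# tokens, and full attention is computed over just the selected tokens.
import torch
import torch.nn as nn

class SelectiveAttention(nn.Module):  # hypothetical name
    def __init__(self, d_model: int, keep_ratio: float = 0.05):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.score = nn.Linear(d_model, 1)  # lightweight token scorer
        self.attn = nn.MultiheadAttention(d_model, num_heads=8,
                                          batch_first=True)

    def forward(self, query: torch.Tensor, context: torch.Tensor):
        # query:   (batch, q_len, d_model) current-sentence states
        # context: (batch, ctx_len, d_model) long document context
        ctx_len = context.size(1)
        k = max(1, int(ctx_len * self.keep_ratio))  # ~5% of tokens

        scores = self.score(context).squeeze(-1)    # (batch, ctx_len)
        # Hard top-k is not differentiable; shown here for clarity only.
        top_idx = scores.topk(k, dim=-1).indices    # (batch, k)

        # Gather the selected context tokens.
        idx = top_idx.unsqueeze(-1).expand(-1, -1, context.size(-1))
        selected = context.gather(1, idx)           # (batch, k, d_model)

        # Full attention, but only over the ~5% selected tokens.
        out, _ = self.attn(query, selected, selected)
        return out

# Usage: attend from 32 query tokens into a 2000-token document context.
layer = SelectiveAttention(d_model=512)
q = torch.randn(2, 32, 512)
ctx = torch.randn(2, 2000, 512)
print(layer(q, ctx).shape)  # torch.Size([2, 32, 512])
```

Because attention cost scales with the number of attended tokens, attending to only 100 of 2000 context tokens is what yields the large savings on the attention module that the paper reports.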
The title plays on Attention Is All You Need (Jun 12, 2017), which proposed the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
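For reference, the core scaled dot-product attention from that paper, over queries $Q$, keys $K$, and values $V$ with key dimension $d_k$, is

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,$$

and its cost grows with the number of key/value tokens, which is exactly what the 5% selection above reduces.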