2nd DeepLo@EMNLP-IJCNLP 2019: Hong Kong, China
- Colin Cherry, Greg Durrett, George F. Foster, Reza Haffari, Shahram Khadivi, Nanyun Peng, Xiang Ren, Swabha Swayamdipta (eds.): Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP, DeepLo@EMNLP-IJCNLP 2019, Hong Kong, China, November 3, 2019. Association for Computational Linguistics 2019, ISBN 978-1-950737-78-9
- Varun Kumar, Hadrien Glaude, Cyprien de Lichy, William Campbell: A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification. 1-10
- Gil Rocha, Henrique Lopes Cardoso: A Comparative Analysis of Unsupervised Language Adaptation Methods. 11-21
- Felipe de Souza Salvatore, Marcelo Finger, Roberto Hirata Jr.: A logical-based corpus for cross-lingual evaluation. 22-30
- Jeroen Van Hautte, Guy Emerson, Marek Rei: Bad Form: Comparing Context-Based and Form-Based Few-Shot Learning in Distributional Semantic Models. 31-39
- Seth Ebner, Felicity Wang, Benjamin Van Durme: Bag-of-Words Transfer: Non-Contextual Techniques for Multi-Task Learning. 40-46
- Jasdeep Singh, Bryan McCann, Richard Socher, Caiming Xiong: BERT is Not an Interlingua and the Bias of Tokenization. 47-55
- Xiaoman Pan, Thamme Gowda, Heng Ji, Jonathan May, Scott Miller: Cross-lingual Joint Entity and Word Embedding to Improve Entity Linking and Parallel Sentence Mining. 56-66
- Yannis Papanikolaou, Ian Roberts, Andrea Pierleoni: Deep Bidirectional Transformers for Relation Extraction without Supervision. 67-75
- Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, Bing Xiang: Domain Adaptation with BERT-based Domain Classification and Data Selection. 76-83
- Xiangkai Zeng, Sarthak Garg, Rajen Chatterjee, Udhyakumar Nallasamy, Matthias Paulik: Empirical Evaluation of Active Learning Techniques for Neural MT. 84-93
- Avik Ray, Yilin Shen, Hongxia Jin: Fast Domain Adaptation of Semantic Parsers via Paraphrase Attention. 94-103
- Marcel Bollmann, Natalia Korchagina, Anders Søgaard: Few-Shot and Zero-Shot Learning for Historical Text Normalization. 104-114
- Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig, Gautam Shroff: From Monolingual to Multilingual FAQ Assistant using Multilingual Co-training. 115-123
- Luke Melas-Kyriazi, George Han, Celine Liang: Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings. 124-131
- He He, Sheng Zha, Haohan Wang: Unlearn Dataset Bias in Natural Language Inference by Fitting the Residual. 132-142
- Jeremy Wohlwend, Ethan R. Elenberg, Sam Altschul, Shawn Henry, Tao Lei: Metric Learning for Dynamic Text Classification. 143-152
- Shrey Desai, Hongyuan Zhan, Ahmed Aly: Evaluating Lottery Tickets Under Distributional Shifts. 153-162
- James Barry, Joachim Wagner, Jennifer Foster: Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study. 163-174
- Tianqi Wang, Naoya Inoue, Hiroki Ouchi, Tomoya Mizumoto, Kentaro Inui: Inject Rubrics into Short Answer Grading System. 175-182
- Somnath Basu Roy Chowdhury, K. M. Annervaz, Ambedkar Dukkipati: Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing. 183-191
- James Route, Steven Hillis, Isak Czeresnia Etinger, Han Zhang, Alan W. Black: Multimodal, Multilingual Grapheme-to-Phoneme Conversion for Low-Resource Languages. 192-201
- Raphael Tang, Yao Lu, Jimmy Lin: Natural Language Generation for Effective Knowledge Distillation. 202-208
- Katharina Kann, Anhad Mohananey, Samuel R. Bowman, Kyunghyun Cho: Neural Unsupervised Parsing Beyond English. 209-218
- Anirudh Joshi, Timothy Baldwin, Richard O. Sinnott, Cécile Paris: Reevaluating Argument Component Extraction in Low Resource Settings. 219-224
- Farhad Nooralahzadeh, Jan Tore Lønning, Lilja Øvrelid: Reinforcement-based denoising of distantly supervised NER with partial annotation. 225-233
- Suma Reddy Duggenpudi, Subrahamanyam Varma, Radhika Mamidi: Samvaadhana: A Telugu Dialogue System in Hospital Domain. 234-242
- Shuyan Zhou, Shruti Rijhwani, Graham Neubig: Towards Zero-resource Cross-lingual Entity Linking. 243-252
- Johannes Bjerva, Katharina Kann, Isabelle Augenstein: Transductive Auxiliary Task Self-Training for Neural Multi-Task Models. 253-258
- Lingjun Zhao, Rabih Zbib, Zhuolin Jiang, Damianos G. Karakos, Zhongqiang Huang: Weakly Supervised Attentional Model for Low Resource Ad-hoc Cross-lingual Information Retrieval. 259-264
- Mostafa Abdou, Cezar Sas, Rahul Aralikatte, Isabelle Augenstein, Anders Søgaard: X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension. 265-274
- Kevin Blissett, Heng Ji: Zero-Shot Cross-lingual Name Retrieval for Low-Resource Languages. 275-280
- Ke M. Tran, Arianna Bisazza: Zero-shot Dependency Parsing with Pre-trained Multilingual Sentence Representations. 281-288