VS@HLT-NAACL 2015: Denver, Colorado, USA
- Phil Blunsom, Shay B. Cohen, Paramveer S. Dhillon, Percy Liang: Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, VS@NAACL-HLT 2015, June 5, 2015, Denver, Colorado, USA. The Association for Computational Linguistics 2015, ISBN 978-1-941643-46-4
- Oren Melamud, Omer Levy, Ido Dagan: A Simple Word Embedding Model for Lexical Substitution. 1-7
- Vivek Kumar Rangarajan Sridhar: Unsupervised Text Normalization Using Distributed Representations of Words and Phrases. 8-16
- Ana Zelaia, Olatz Arregi, Basilio Sierra: A Multi-classifier Approach to support Coreference Resolution in a Vector Space Model. 17-24
- Mikael Kågebäck, Fredrik D. Johansson, Richard Johansson, Devdatt P. Dubhashi: Neural context embeddings for automatic discovery of word senses. 25-32
- Chenglong Ma, Weiqun Xu, Peijia Li, Yonghong Yan: Distributional Representations of Words for Short Text Classification. 33-38
- Thien Huu Nguyen, Ralph Grishman: Relation Extraction: Perspective from Convolutional Neural Networks. 39-48
- Jay Urbain, Glenn Bushee, George Kowalski: Distributional Semantic Concept Models for Entity Relation Discovery. 49-55
- Erick Rocha Fonseca, Sandra M. Aluísio: A Deep Architecture for Non-Projective Dependency Parsing. 56-61
- Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, Hongwei Hao: Short Text Clustering via Convolutional Neural Networks. 62-69
- Marco Del Tredici, Núria Bel: A Word-Embedding-based Sense Index for Regular Polysemy Representation. 70-78
- Karl Stratos, Michael Collins: Simple Semi-Supervised POS Tagging. 79-87
- Hieu Pham, Thang Luong, Christopher D. Manning: Learning Distributed Representations for Multilingual Text Sequences. 88-94
- Justin Garten, Kenji Sagae, Volkan Ustun, Morteza Dehghani: Combining Distributed Vector Representations for Words. 95-101
- Mohit Bansal: Dependency Link Embeddings: Continuous Representations of Syntactic Substructures. 102-108
- Giuseppe Attardi: DeepNL: a Deep Learning NLP pipeline. 109-115
- Abdulaziz Alghunaim, Mitra Mohtarami, Scott Cyphers, James R. Glass: A Vector Space Approach for Aspect Based Sentiment Analysis. 116-122
- Melanie Tosik, Carsten Lygteskov Hansen, Gerard Goossen, Mihai Rotaru: Word Embeddings vs Word Types for Sequence Labeling: the Curious Case of CV Parsing. 123-128
- Garrett Nicolai, Colin Cherry, Grzegorz Kondrak: Morpho-syntactic Regularities in Continuous Word Representations: A multilingual study. 129-134
- Sameer Singh, Tim Rocktäschel, Sebastian Riedel: Towards Combined Matrix and Tensor Factorization for Universal Schema Relation Extraction. 135-142
- Joo-Kyung Kim, Marie-Catherine de Marneffe, Eric Fosler-Lussier: Neural word embeddings with multiplicative feature interactions for tensor-based compositions. 143-150
- Thang Luong, Hieu Pham, Christopher D. Manning: Bilingual Word Representations with Monolingual Quality in Mind. 151-159
- Mahesh Joshi, Ethan Hart, Mirko Vogel, Jean-David Ruvini: Distributed Word Representations Improve NER for e-Commerce. 160-167
- Jiaqiang Chen, Gerard de Melo: Semantic Information Extraction for Improved Word Embeddings. 168-175
- Ayah Zirikly, Mona T. Diab: Named Entity Recognition for Arabic Social Media. 176-185
- John M. Conroy, Sashka Davis: Vector Space Models for Scientific Document Summarization. 186-191
- Vivek Kumar Rangarajan Sridhar: Unsupervised Topic Modeling for Short Texts Using Distributed Representations of Words. 192-200
- Ji Liu, Diana Inkpen: Estimating User Location in Social Media with Stacked Denoising Auto-encoders. 201-210