Improving Vector Space Word Representations Via Kernel Canonical Correlation Analysis

Abstract

Cross-lingual word embeddings represent the vocabularies of two or more languages in a single continuous vector space and are widely used in various natural language processing tasks. A state-of-the-art way to generate cross-lingual word embeddings is to learn a linear mapping, under the assumption that the vector representations of similar words in different languages are related by a linear relationship. However, this assumption does not always hold, especially for substantially different languages. We therefore propose to use kernel canonical correlation analysis (KCCA) to capture a non-linear relationship between the word embeddings of two languages. By extensively evaluating the learned word embeddings on three tasks (word similarity, cross-lingual dictionary induction, and cross-lingual document classification) across five language pairs, we demonstrate that our proposed approach achieves substantially better performance than previous linear methods on all three tasks, especially for language pairs with substantial typological differences.
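
The abstract does not give implementation details, but the core idea (replacing the linear mapping with kernel CCA) can be illustrated with a minimal sketch. The code below is not the paper's implementation: it assumes an RBF kernel and the simplified regularized KCCA eigenproblem of Hardoon et al. (2004); the kernel bandwidth `gamma`, regularizer `reg`, and the toy "bilingual" data are all illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def center_kernel(K):
    # Double-center the kernel matrix (zero-mean features in kernel space).
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(Kx, Ky, reg=1e-2, n_components=2):
    """Regularized kernel CCA (simplified Hardoon et al. formulation).

    Solves (Kx + reg*I)^{-1} Ky (Ky + reg*I)^{-1} Kx a = rho^2 a
    for the dual weights a of the first view, then recovers the
    dual weights b of the second view and the correlations rho.
    """
    n = Kx.shape[0]
    I = np.eye(n)
    A = np.linalg.solve(Kx + reg * I, Ky)   # (Kx + reg*I)^{-1} Ky
    B = np.linalg.solve(Ky + reg * I, Kx)   # (Ky + reg*I)^{-1} Kx
    vals, vecs = np.linalg.eig(A @ B)       # leading eigenvalues are rho^2
    order = np.argsort(-vals.real)[:n_components]
    alpha = vecs[:, order].real
    rho = np.sqrt(np.clip(vals[order].real, 0.0, 1.0))
    beta = (B @ alpha) / np.maximum(rho, 1e-12)  # second-view dual weights
    return alpha, beta, rho

# Toy setup: Y is a *non-linear* transform of X, so no linear map aligns
# the two views exactly, but KCCA can still find correlated directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                # "source-language" embeddings
Y = np.tanh(X @ rng.normal(size=(5, 5)))     # non-linearly related "target"
Kx = center_kernel(rbf_kernel(X))
Ky = center_kernel(rbf_kernel(Y))
alpha, beta, rho = kcca(Kx, Ky)
Zx, Zy = Kx @ alpha, Ky @ beta               # paired points in a shared space
print("canonical correlations:", np.round(rho, 3))
```

After fitting, `Zx` and `Zy` place the two vocabularies in one shared space, which is the setting the paper evaluates via word similarity, dictionary induction, and document classification; projecting unseen words additionally requires kernel evaluations against the training pairs, which this sketch omits.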

Publication
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP, SCI-Q4, JCR-Q4)
Xuefeng Bai
Ph.D. candidate

My research interests include semantics, dialogue, and generation.