On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning

Abstract

Cross-lingual word embeddings are vector representations of words in different languages where words with similar meanings are represented by similar vectors, regardless of the language. Recent methods that construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision, which usually comes in the form of bilingual dictionaries. However, evaluation has focused on a particular controlled scenario, and there is little evidence of how current state-of-the-art systems would fare with noisy text or with language pairs exhibiting major linguistic differences. In this paper we present an extensive evaluation of multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions cast doubt on the view that high-quality cross-lingual embeddings can always be learned without much supervision.
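The alignment approach the abstract refers to is often instantiated as an orthogonal Procrustes mapping: given monolingual vectors for word pairs from a bilingual dictionary, an orthogonal matrix is fitted that maps the source space onto the target space. The sketch below is an illustration of that general technique, not the specific systems evaluated in the paper; the function name and toy data are our own.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal map W minimizing ||X W - Y||_F (Procrustes solution).

    X: (n, d) source-language vectors for dictionary word pairs
    Y: (n, d) target-language vectors for the same pairs
    """
    # Closed-form solution via SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: if the target space is an exact rotation of the source
# space, Procrustes recovers that rotation.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))          # fake "source" embeddings
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal map
Y = X @ Q                             # fake "target" embeddings
W = procrustes_align(X, Y)
print(np.allclose(W, Q))  # True
```

In practice the dictionary pairs are noisy and the spaces are only approximately isometric, which is precisely the setting whose robustness the paper examines.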

Publication
Proceedings of the 12th Language Resources and Evaluation Conference
Jose Camacho-Collados
Professor & UKRI Future Leaders Fellow
Luis Espinosa-Anke
Senior Lecturer
Steven Schockaert
Professor