Static word embeddings such as GloVe represent words as relatively low-dimensional vectors that are memory- and compute-efficient, but they are insensitive to the different senses of a word. Sense embedding learning methods, on the other hand, learn multi-prototype embeddings, associating each sense of a word with its own vector. In this talk, I will present our method for learning sense embeddings without training from scratch. I will then examine the properties of sense embeddings, including the social biases they encode and the relationship between the frequency and the L2 norm of sense embeddings. Finally, I will introduce our proposed method for learning dynamic embeddings.
Invited Speaker: Yi Zhou (Cardiff University - COMSC)