Continuous-Space Methods for Natural Language Processing
In the last few years, significant advances in natural language processing (NLP) have been achieved with continuous-space methods based on deep learning and word embeddings. In the NLP group at Uppsala University, one researcher and three graduate students are working on such methods. Applying continuous-space methods to large-scale data sets requires GPU computing resources, which the UPPMAX HPC facility in Uppsala cannot currently provide. During 2016, we have used the Erik cluster at LUNARC, but that cluster will be shut down in December 2016. This project will cater to the Uppsala NLP group's need for GPU-accelerated computing.

In particular, the allocated resources will be used for the following purposes:

- Neural discourse models in statistical machine translation (Christian Hardmeier): Using deep learning to improve the translation of linguistic elements such as pronouns, in the context of both neural and phrase-based statistical machine translation.
- Word embeddings for NLP (Ali Basirat): Exploring techniques for generating and using continuous-space word embeddings with methods such as matrix factorisation, auto-encoding and restricted Boltzmann machines.
- Character-level embeddings for sequence learning tasks (Yan Shao): Applying character-level embeddings in deep neural networks to various sequence learning tasks such as named entity recognition, character segmentation, part-of-speech tagging and chunking.
- Continuous semantic representations for sense classification (Jimmy Callin): Improving sense classification in shallow discourse parsing by using continuous-space semantic representations obtained from deep neural network architectures.
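To illustrate the matrix-factorisation approach to word embeddings mentioned above, the following is a minimal sketch (not the group's actual pipeline, and using a toy corpus invented for illustration): a word-context co-occurrence matrix is built from raw text and factorised with truncated SVD, yielding a low-dimensional continuous vector per word.

```python
import numpy as np

# Toy corpus for illustration only; real experiments use large-scale data.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

# Build the vocabulary and a symmetric co-occurrence matrix (window size 2).
vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if i != j:
                cooc[index[w], index[sent[j]]] += 1.0

# Truncated SVD: keep the top-k singular directions as k-dimensional embeddings.
k = 3
U, S, _ = np.linalg.svd(cooc, full_matrices=False)
embeddings = U[:, :k] * S[:k]  # one k-dimensional row vector per word

def similarity(w1, w2):
    """Cosine similarity between two word vectors."""
    a, b = embeddings[index[w1]], embeddings[index[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Words occurring in similar contexts (here, "cat" and "dog") end up with nearby vectors; the same idea scales to large corpora, where GPU acceleration becomes essential.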