LEGO is an Inria associate team between MAGNET (Inria) and Fei Sha’s group (USC). It was created in January 2016 for 3 years, and renewed for 3 more years starting in 2019. LEGO is part of the Inria@SiliconValley program.
LEGO lies at the intersection of Machine Learning and Natural Language Processing (NLP). Its goal is to address the following challenges: what are the right representations for structured data, how can they be learned automatically in a robust and transferable way, and how can they be applied to complex, structured prediction tasks in NLP? Recent years have seen growing interest in learning continuous vectorial embeddings, which can be trained together with the prediction model in an end-to-end fashion, as in recent sequence-to-sequence neural models. However, such embeddings are unsuitable for low-resource languages, as they require massive amounts of training data. They are also prone to overfitting, which makes them brittle and sensitive both to bias present in the original text and to confounding factors such as author attributes. LEGO relies on the complementary expertise of the two partners in areas such as representation learning, structured prediction, graph-based learning, multi-task/transfer learning and statistical NLP to offer a novel alternative to existing techniques.
Specifically, we investigate the following research directions:
- optimize the embeddings based on annotations so as to minimize structured prediction errors,
- generate embeddings from rich language contexts represented as graphs,
- learn transferable representations across languages and domains, in particular for structured prediction problems on low-resource languages,
- optimize the representations to make them robust to bias and adversarial examples.
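To make the first direction concrete, here is a minimal, self-contained sketch (not one of LEGO's actual models) of embeddings trained end-to-end against prediction error: a toy part-of-speech tagger in which the cross-entropy gradient flows through a linear classifier back into the word embedding table, so the embedding space is shaped directly by the annotations. The vocabulary, tag set and corpus are invented for illustration.

```python
import numpy as np

# Toy illustration: word embeddings learned jointly with a linear
# tag classifier, so prediction errors on annotated data directly
# reshape the embedding space.
rng = np.random.default_rng(0)

vocab = {"the": 0, "cat": 1, "sat": 2, "dog": 3, "ran": 4}
tags = {"DET": 0, "NOUN": 1, "VERB": 2}
# Tiny supervised corpus: (word index, tag index) pairs.
data = [(0, 0), (1, 1), (2, 2), (0, 0), (3, 1), (4, 2)]

dim = 4
E = rng.normal(scale=0.1, size=(len(vocab), dim))   # embedding table
W = rng.normal(scale=0.1, size=(dim, len(tags)))    # linear classifier

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for epoch in range(200):
    for w, t in data:
        p = softmax(E[w] @ W)        # predicted tag distribution
        grad_logits = p.copy()
        grad_logits[t] -= 1.0        # d(cross-entropy)/d(logits)
        # Gradient flows past the classifier into the embedding:
        E[w] -= lr * (W @ grad_logits)
        W -= lr * np.outer(E[w], grad_logits)

# After training, each word is tagged with its annotated category.
preds = {w: int(np.argmax(softmax(E[i] @ W))) for w, i in vocab.items()}
```

In a structured prediction setting the per-word classifier would be replaced by, e.g., a parser or sequence model, but the principle is the same: the embeddings are optimized for the downstream loss rather than fixed in advance.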
We intend to demonstrate the usefulness of the proposed methods on several NLP tasks, including multilingual dependency parsing, coreference resolution, discourse parsing, machine translation, question answering and text summarization.