Magnet seminars are usually held in room B21 on Thursdays, 11am. Check below for upcoming seminars and potential changes of schedule/location. You may also import the Magnet seminars public feed into your favorite calendar app. For further information, contact Aurélien.
Thu, February 15, 2018
Where? Inria B21
Natural language processing
Traditionally, natural language processing (NLP) relied on generative models with task-specific, manually engineered features. Recently, deep learning approaches have provided state-of-the-art results in various fields such as computer vision, speech processing, and natural language processing. The central idea behind these approaches is to learn features and models simultaneously, in an end-to-end manner, making as few assumptions as possible. In NLP, word embeddings, which map words in a vocabulary to a continuous low-dimensional vector space, have proven very effective for a large variety of tasks while requiring almost no a priori linguistic assumptions.
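To make the embedding idea concrete, here is a minimal sketch with a hypothetical three-dimensional embedding table for a toy vocabulary (real embeddings such as word2vec or GloVe are learned from large corpora and typically have 100-300 dimensions); the vectors and words below are invented for illustration only:

```python
import math

# Hypothetical toy embeddings; real ones are learned, not hand-set.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words end up close together in the space.
print(cosine(embeddings["king"], embeddings["queen"]) >
      cosine(embeddings["king"], embeddings["apple"]))  # True
```

This closeness in the vector space is what lets downstream models generalize across related words without hand-written linguistic features.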
In this talk, I will present results on learning representations of segments of sentences using deep neural network models. In particular, I will show how syntactic structures, such as constituency and dependency trees, can be leveraged to solve NLP tasks that involve complex sentence-level relationships. I will first introduce the key concepts of deep learning for NLP. I will then focus on two empirical studies: syntactic parsing using recursive neural networks (RNNs), and relation extraction from text using tree-LSTMs over dependency structures.
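The core mechanism shared by recursive networks and tree-LSTMs is composing child vectors bottom-up along the tree. The sketch below illustrates only that idea on a binary constituency tree, with tiny hand-set weights; it is not the actual models from the talk, and the words, weights, and dimensions are illustrative assumptions (a real model learns the parameters by backpropagation through the tree):

```python
import math

DIM = 2

# Hypothetical toy weight matrix mapping the concatenated children
# (2 * DIM values) back down to a DIM-sized parent vector.
W = [[0.5, -0.3, 0.2, 0.1],
     [0.1,  0.4, -0.2, 0.3]]

def compose(left, right):
    """One recursive-network step: h = tanh(W [left; right])."""
    x = left + right  # concatenate the two child vectors
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

def encode(tree, embeddings):
    """Bottom-up encoding of a binary tree: leaves are word strings,
    internal nodes are (left_subtree, right_subtree) pairs."""
    if isinstance(tree, str):
        return embeddings[tree]
    left, right = tree
    return compose(encode(left, embeddings), encode(right, embeddings))

emb = {"the": [0.1, 0.2], "cat": [0.7, -0.3], "sleeps": [-0.2, 0.6]}
# ((the cat) sleeps): the whole sentence is encoded into a single
# vector of the same dimension as a word embedding.
sentence_vec = encode((("the", "cat"), "sleeps"), emb)
print(len(sentence_vec))  # 2
```

A tree-LSTM replaces the single tanh composition with LSTM-style gates at each node, which helps information flow through deep trees; the bottom-up traversal is the same.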
Thursday, February 15, 2018, 11:00 to 12:00