Seminars

Magnet seminars are usually held in room B21 on Thursdays at 11 am. Check below for upcoming seminars and possible changes of schedule or location. You may also import the Magnet seminars public feed into your favorite calendar app. For further information, contact Aurélien.

Upcoming seminars

Thu, November 22, 2018, 11:00 am to 12:00 pm
Where: Inria B21
Institutional tag: Inria MAGNET
Thematic tag(s): Machine learning, Natural language processing

Machine learning-based experiments in the social sciences have opened new doors and enabled new insights. They allow us to exploit data samples of unprecedented size. Prime among these methods are neural networks, whose flexibility and power have allowed us to explore new and exciting directions in computational social science. One of the most powerful features of these models is their ability to learn representations of the input that best explain the data (representation learning). In NLP, this has brought us word and document embeddings, which capture much of the complexity of language.
However, neural networks involve many parameters, therefore require large data sets, and are often hard to interpret. While they have the power to find complex explanations, they also run the risk of overfitting and of finding wrong explanations. Representation learning, in turn, is usually restricted to the data at hand: it only learns distinctions reflected within the data and ignores external knowledge. That is, while word embeddings can learn syntactic and semantic distinctions, they ignore pragmatic and demographic dimensions of language that are not present in the text.
In this talk, I will present some recent work on retrofitting, i.e., adjusting learned representations to reflect certain structural facts. I show how this can be used to infuse the models with external knowledge (for example, the geographic distribution and extent of words, but also the class membership of instances in classification tasks). This allows us to focus the models on certain aspects and to gain useful insights. It also helps increase discrimination in classification tasks, thereby improving performance.
I conclude with an outlook for future applications.
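
To make the idea of retrofitting more concrete, here is a minimal sketch in the spirit of the classic retrofitting update of Faruqui et al. (2015): each word vector is pulled towards its neighbours in an external lexicon while staying close to its original embedding. This is an illustration of the general idea only, not necessarily the method presented in the talk; the function name, toy lexicon, and weights below are illustrative assumptions.

import numpy as np

def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, n_iter=10):
    # vectors: dict mapping word -> np.ndarray (original embeddings)
    # lexicon: dict mapping word -> list of related words (external knowledge)
    new_vectors = {w: v.copy() for w, v in vectors.items()}
    for _ in range(n_iter):
        for word, neighbours in lexicon.items():
            neighbours = [n for n in neighbours if n in new_vectors]
            if word not in new_vectors or not neighbours:
                continue
            # Weighted combination of the original vector (weight alpha) and the
            # current vectors of the word's lexicon neighbours (weight beta each).
            total = alpha * vectors[word] + beta * sum(new_vectors[n] for n in neighbours)
            new_vectors[word] = total / (alpha + beta * len(neighbours))
    return new_vectors

# Toy usage: a two-word synonym lexicon pulls "movie" and "film" together.
embeddings = {"movie": np.array([1.0, 0.0]), "film": np.array([0.0, 1.0])}
retrofitted = retrofit(embeddings, {"movie": ["film"], "film": ["movie"]})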


Dates: Thursday, November 22, 2018 - 11:00 to 12:00
Location: Inria B21
Speaker(s): Dirk Hovy

Previous seminars (over the past year)