Task representation learning

Speaker: Damien Siléo

Date and place: February 2, 2022, at 10:00 – Videoconference

Abstract: Current machine learning systems are typically tuned for a desired task through annotated examples. This approach is brittle and/or costly: neural networks remain hard to interpret, and specifying their behavior through examples leads to problems such as unfairness and shallow reasoning. Task embeddings are continuous representations of tasks that can steer a machine learning system toward performing a given task. I will discuss the potential of task embeddings for data-efficient, robust, and interpretable AI. I will describe a task representation learning architecture and experiments on few-shot learning and model interpretability, then present new large-scale collections of datasets that can enable more general AI systems.
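To make the idea of steering a model with a task embedding concrete, here is a minimal sketch, not the speaker's architecture: a single shared network whose behavior is switched by concatenating a per-task vector to the input. All names, dimensions, and the two example tasks are illustrative assumptions.

```python
# Illustrative sketch only: a shared model conditioned on a task embedding.
import numpy as np

rng = np.random.default_rng(0)
DIM_INPUT, DIM_TASK, DIM_HIDDEN = 8, 4, 16

# Shared weights, reused for every task (random here; learned in practice).
W1 = rng.normal(size=(DIM_INPUT + DIM_TASK, DIM_HIDDEN))
W2 = rng.normal(size=(DIM_HIDDEN, 1))

# One continuous vector per task; selecting a vector selects a behavior.
# The task names below are hypothetical examples.
task_embeddings = {
    "sentiment": rng.normal(size=DIM_TASK),
    "entailment": rng.normal(size=DIM_TASK),
}

def predict(x, task):
    """Concatenate the input with the task embedding, run the shared net."""
    z = np.concatenate([x, task_embeddings[task]])
    h = np.tanh(z @ W1)
    return float(h @ W2)

x = rng.normal(size=DIM_INPUT)
# The same input yields different outputs depending on the task vector.
print(predict(x, "sentiment"), predict(x, "entailment"))
```

Because the task vector lives in a continuous space, new tasks can in principle be represented without retraining the shared weights, which is what makes this framing attractive for few-shot settings.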