July 4th, 2018
“Explainable, Trustable and Empathic Artificial Intelligence: from Formal Argumentation to Argumentation for Humans”
Argumentation is the process by which arguments are constructed, compared, evaluated in some respect, and judged in order to establish whether any of them are warranted. The idea of argumentation as the process of creating arguments for and against competing claims has long been a subject of interest to philosophers and lawyers. In recent years, however, interest in the subject has grown from formal and technical perspectives in Computer Science, and argumentation technologies are now widely used in practical applications. The field of artificial argumentation plays an important role in Artificial Intelligence research. The reason for this is the recognition that if we are to develop robust intelligent machines able to act in mixed human-machine teams, then it is imperative that they can handle incomplete and inconsistent information in a way that somehow emulates how humans tackle such a complex task.
During my presentation, I will focus on different problems that I believe stand in the way of reaching this ambitious goal. I start from the observation that, in their deliberation process, humans use argumentation either internally, by evaluating arguments and counterarguments, or externally, by entering into a debate where arguments are exchanged. I will thus present my main contributions towards the development of argumentation-enhanced intelligent machines: (i) modeling and reasoning about socio-cognitive components such as trust, using computational models of argument that are able to deal with incomplete and conflicting information, (ii) mining argument structures in natural language text to detect, e.g., potential fallacies, recurrent patterns, and inner strength, and (iii) analyzing and understanding the role of emotions in real-world argumentative situations (e.g., debates), so as to inject such information into computational models of argument and better handle incomplete and inconsistent information when emotions play a role.
Members of the jury:
- Fabien Gandon, Director of Research, Inria, Univ. Côte d’Azur, France
- Leila Amgoud, Director of Research, IRIT Toulouse, France – reviewer
- Simon Parsons, Professor, King’s College London, UK – reviewer
- Bernardo Magnini, Research Director, FBK Trento, Italy – reviewer
- Simone Teufel, Professor, University of Cambridge, UK