Return to Seminars

PAST SEMINARS (from January 2013)

************************************************************************************************************

December 2, 2021, 4:00 PM: Martin Jedwabny (GraphIK)

– Title: Probabilistic rule induction for transparent CBR under uncertainty

– Abstract: Case-based reasoning (CBR) systems leverage past experiences to make decisions. Recently, the AI community has taken an interest in making CBR systems explainable. Logic-based frameworks make answers straightforward to explain. However, they struggle in the face of conflicting information, unlike probabilistic techniques. We show how probabilistic inductive logic programming (PILP) can be applied in CBR systems to make transparent decisions combining logic and probabilities. Then, we demonstrate how our approach can be applied in scenarios presenting uncertainty.

************************************************************************************************************

November 18, 2021, 4 PM: Rachid Yahiaoui (INRAe)

Title: Coby – a semantic annotation pipeline

Abstract:

Studying how ecosystems function requires multidisciplinary approaches and involves many research teams. The data produced are generally heterogeneous and poorly standardised, if standardised at all. In this context, developing semantic interoperability becomes a major challenge for data sharing and reuse.

To address this challenge, the AnaEE research infrastructure (Analysis and Experimentation on Ecosystems) has devoted substantial resources to improving the use of semantic approaches and fostering their adoption within the scientific community.

In this context, two software tools (called pipelines) have been developed:

1/ The first one is an automated chain of open-source tools dedicated, on the one hand, to producing semantic data from various heterogeneous data sources (relational databases, CSV files, etc.) and, on the other hand, to simplifying the semantic annotation process: a graph-based modelling process relying on shared vocabularies (thesauri and ontologies), which can require a significant investment and cost.

2/ The second pipeline (not covered in this presentation) is devoted to exploiting the semantic data produced by the first one, through the generation and publication of the data in various standardised formats (GeoDCAT, NetCDF, etc.).

This presentation reports, as feedback from experience, on the work carried out on the first pipeline (annotation and production of semantic data): the first part will be devoted to the choice of Semantic Web technologies, to the tools developed specifically for the project, and to those adopted, such as the graph-oriented database Blazegraph (which makes it possible to cross, combine and mine data using the SPARQL query language); the second part will give an overview of the semantic annotation process.

************************************************************************************************************

November 17, 2021, 9:30 AM: David Carral (GraphIK)

– Title: Decidable Fragments of Horn First-Order Logic

– Abstract: In this seminar, we present recent results about Existential Rules, an undecidable Horn fragment of first-order logic that is widely used in ontological modelling. Namely, we discuss the main approaches devised to regain decidability as well as efficient implementations to solve complex reasoning tasks.

************************************************************************************************************

November 15, 2021, 3:30 PM: Nofar Carmeli (Valda team at Inria Paris)

– Title: The Fine-Grained Complexity of Answering Database Queries
– Abstract: We wish to identify the queries that can be solved with close to optimal time guarantees over relational databases. Computing all query answers requires at least linear time before the first answer (to read the input and determine the answer’s existence), and then we must allow enough time to print all answers (which may be many). Thus, we aspire to achieve linear preprocessing time and constant or logarithmic time per answer.
A known dichotomy classifies Conjunctive Queries into those that admit such enumeration and those that do not: the main difficulty of query answering is joining tables, which can be done efficiently if and only if the join query is acyclic. However, the join query usually does not appear in a vacuum; for example, it may be part of a larger query, or it may be applied to a database with dependencies. We show how to use this context for more efficient computation and study how the complexity changes in these settings. Next, we aspire for an even more powerful solution for query answering: a structure that simulates an array containing the query answers. Such a structure can be used for example to enumerate all answers in a statistically meaningful order or to efficiently compute a boxplot of query answers. We call this simulation random access and study for which queries random access can be achieved with near-optimal guarantees.
Our results are accompanied by conditional lower bounds showing that our algorithms can be applied to all tractable queries in some cases. Among our results, we show that a union of tractable Conjunctive Queries may be intractable w.r.t. random access, but a union of intractable Conjunctive Queries may be tractable w.r.t. enumeration.
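
To make these guarantees concrete, here is a small illustrative sketch (not taken from the talk) of linear-time preprocessing followed by constant-delay enumeration for one simple acyclic join, Q(x, y, z) :- R(x, y), S(y, z); the relations and data below are invented for the example.

from collections import defaultdict

def preprocess(R, S):
    # Linear pass: index S on the join attribute and semi-join-reduce R,
    # so every surviving R-tuple is guaranteed to produce at least one answer.
    s_index = defaultdict(list)
    for y, z in S:
        s_index[y].append(z)
    reduced_R = [(x, y) for (x, y) in R if y in s_index]   # R semi-joined with S
    return reduced_R, s_index

def answers(reduced_R, s_index):
    # Constant work between two consecutive outputs: every inner iteration
    # emits an answer, there are no dead ends left after preprocessing.
    for x, y in reduced_R:
        for z in s_index[y]:
            yield (x, y, z)

R = [(1, "a"), (2, "b"), (3, "d")]          # tuples of R(x, y)
S = [("a", 10), ("a", 11), ("b", 20)]       # tuples of S(y, z)
for t in answers(*preprocess(R, S)):
    print(t)                                # (1, 'a', 10), (1, 'a', 11), (2, 'b', 20)

The semi-join reduction is what removes the dead ends, so the nested loops never stall between two consecutive answers.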

************************************************************************************************************

Presenter: Clément Sipieter

Title: Presentation of Anabasis’s technologies and of the problems faced by its customers

Abstract: Anabasis, based in Boulogne-Billancourt, is an innovative SME in the field of knowledge engineering. Anabasis has developed “Karnyx”, a software suite for modelling a business domain by defining the domain’s concepts, the relations between concepts, and the domain rules. Karnyx supports the editing, execution and querying of such a business model and offers a visual rendering through a dedicated web interface. Anabasis will present the Karnyx suite and its modelling language “Kalamar”, as well as the problems of making heterogeneous systems consistent and of improving data quality that its customers face.

************************************************************************************************************

October 21, 2021, 5 PM: Federico Ulliana, Florent Tornil (GraphIK)

Title: “Advances in Graal v2: the new forward chaining module and the SQL stores”

Abstract: We will present the latest developments carried out on Graal V2. In particular, we will show the work on the new forward chaining module and on the SQL stores. The latest version of the module, developed by G. Pérution in Graal V1.3, has been integrated and adapted to the current architecture; in this process, the drivers for the SQL stores have also been ported.

************************************************************************************************************

Jean-François will give a talk at our next meeting.

– Title: A Python interface for Graal

– Presenter: Jean-François Baget

– Abstract: During his internship in the GraphIK team, Tom Salembien developed a Python library that makes it possible to use Graal without writing a single line of Java code. In this talk, we will present the py4J tool used to carry out this project and the programming choices that let the Python developer believe they are manipulating Python objects rather than Java objects, and we will finish with a few problems that remain to be solved before this program can be released as distributable software (which could be the subject of future internships). A minimal py4J sketch is given after this announcement.

– Date and time: 30/9/21 (Thursday) at 16:00
– Place: Room 3.124
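
As a rough, hedged illustration of the py4J pattern mentioned above, the sketch below wraps a Java object obtained through a py4J gateway so that Python code never sees Java types; the gateway entry point is assumed to have been registered by a (hypothetical) Java-side wrapper, and nothing here reflects the actual Graal bindings.

from py4j.java_gateway import JavaGateway

gateway = JavaGateway()   # connects to a Java-side GatewayServer started beforehand

class Facade:
    """Hides a Java object behind a plain-looking Python object."""
    def __init__(self, java_object):
        self._java_object = java_object

    def __getattr__(self, name):
        # Attribute lookups and method calls are forwarded to the JVM by py4J.
        return getattr(self._java_object, name)

# gateway.entry_point is the object registered on the Java side when the
# GatewayServer was created; what it exposes here is purely hypothetical.
kb = Facade(gateway.entry_point)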

************************************************************************************************************

September 16, 4:00 PM, Room 3.124. Maël and Lucas will present the work they carried out during their internships in our team.

Join us on Zoom Meeting https://umontpellier-fr.zoom.us/j/3045906887

Presentation 1

– Presenter: Maël Abily

– Title: Implementing the Core Chase for Horn-ALCH

– Abstract : The computation of universal models can be used to solve query entailment: a query υ follows from some first-order logic (FOL) theory T if and only if some universal model of T entails υ. Arguably, the best algorithm to compute universal models is the core chase, which is guaranteed to terminate on an input theory T if T admits a finite universal model at all. Unfortunately, due to the complexity of this theoretical procedure, no practical implementations have been developed yet. In this proposal, I discuss an approach to develop an implementation of the core chase for the Description Logic ALCH, which is a decidable fragment of FOL widely used in the context of ontological reasoning.
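
As background for the abstract above, the following brute-force sketch (not the algorithm discussed in the talk) checks whether a finite set of atoms is a core, i.e. admits no homomorphism onto a proper subset of itself; the atom encoding and the convention that nulls start with an underscore are assumptions made for the example.

from itertools import product

def is_core(atoms):
    # atoms: set of (predicate, args) tuples; nulls are strings starting with "_",
    # everything else is a constant and must be mapped to itself.
    atoms = set(atoms)
    terms = {t for _, args in atoms for t in args}
    nulls = sorted(t for t in terms if t.startswith("_"))
    for image in product(sorted(terms), repeat=len(nulls)):
        h = dict(zip(nulls, image))                         # candidate endomorphism
        mapped = {(p, tuple(h.get(t, t) for t in args)) for p, args in atoms}
        if mapped < atoms:                                  # proper retraction found
            return False
    return True

# {p(a, _x), p(a, b)} is not a core: mapping the null _x to b folds it onto {p(a, b)}.
print(is_core({("p", ("a", "_x")), ("p", ("a", "b"))}))     # False
print(is_core({("p", ("a", "b"))}))                         # True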

***********************************************

Presentation 2

– Presenter: Lucas Larroque

– Title: Normalization Procedures for Existential Rules that Preserve Chase Termination

– Abstract: Recent work on existential rules considers rule sets with certain properties and provides procedures to enforce those properties. A normalization procedure is a function that transforms any rule set into another one that has the same logical consequences and satisfies the desired property. The chase is an algorithm that, from a knowledge base, produces a universal model. Although it is sound and complete, it may not terminate. The impact of these normalization procedures on chase termination has not been studied: given a terminating rule set, does the chase terminate on the normalized rule set? We answer this question for several chase variants, namely the oblivious chase, the semi-oblivious chase, the restricted chase, the Datalog-first restricted chase and the equivalent chase, and for several normalization procedures: one ensuring that rules have a single piece in the head (a piece being the minimal unit of information contained in a rule), and two others ensuring that rules have only one atom in the head. To do so, we introduce the notion of universal-conservative extension to characterize normalization procedures that preserve the termination of the equivalent chase, and we use different proof techniques adapted to each chase variant.

************************************************************************************************************

September 2, 2021, 1:30 PM, Bat 5, 3.124:

Title: Parallelisable Existential Rules: a Story of Pieces

Authors: Maxime Buron, Marie-Laure Mugnier, Michaël Thomazo

Abstract: In this paper, we consider existential rules, an expressive formalism well suited to the representation of ontological knowledge and data-to-ontology mappings in the context of ontology-based data integration. The chase is a fundamental tool to do reasoning with existential rules as it computes all the facts entailed by the rules from a database instance. We introduce parallelisable sets of existential rules, for which the chase can be computed in a single breadth-first step from any instance. The question we investigate is the characterization of such rule sets. We show that parallelisable rule sets are exactly those rule sets both bounded for the chase and belonging to a novel class of rules, called pieceful. The pieceful class includes in particular frontier-guarded existential rules and (plain) datalog. We also give another characterization of parallelisable rule sets in terms of rule composition based on rewriting.

************************************************************************************************************

Friday, February 26, 2 PM-4 PM (BBB link to be confirmed)

Title: Materializing Knowledge Bases via Trigger Graphs

by David Carral (joint work with Efthymia Tsamoura, Enrico Malizia, Jacopo Urbani; accepted at VLDB 2021)

Abstract: The chase is a well-established family of algorithms used to materialize Knowledge Bases (KBs),  like Knowledge Graphs (KGs), to tackle important tasks like query answering under dependencies or data cleaning. A general problem of chase algorithms is that they might perform redundant computations. To counter this problem, we introduce the notion of Trigger Graphs (TGs), which guide the execution of the rules avoiding redundant computations.
We present the results of an extensive theoretical and empirical study that seeks to answer when and how TGs can be computed and what are the benefits of TGs when applied over real-world KBs. Our results include introducing algorithms that compute (minimal) TGs.
We implemented our approach in a new engine, and our experiments show that it can be significantly more efficient than the chase, enabling us to materialize KBs with 17B facts in less than 40 min on commodity machines.

************************************************************************************************************

July 10, 2020, 10:30 AM, Guillaume Kihli-Perution (GraphIK)
Title: Optimising the materialisation of knowledge bases
Abstract:
This talk presents the results obtained during my end-of-Master’s internship. We are interested in knowledge bases whose ontological part is described by existential rules. The chase is a forward chaining procedure (also called materialisation) which consists in iteratively applying the rules to the fact base to produce new facts until a fixpoint is reached. A fundamental property of the chase is that it computes a canonical model of the knowledge base which is sufficient to compute the answers to a conjunctive query: the answers to such a query over the result of the chase are exactly the answers to that query over the knowledge base. However, the chase may not terminate, and it may also produce redundancies. The more redundancies are avoided, the more often the chase terminates, but detecting redundancies comes at a cost. Many chase variants have been proposed in the literature, corresponding to different trade-offs between the power of redundancy detection and the cost of computing the chase. In this talk, I will present a framework common to all these variants, describing the chase by means of operators to be combined, as well as experimental results obtained with this framework.
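
As a rough illustration of the forward chaining procedure described above, here is a minimal, hedged sketch of an oblivious-style chase over existential rules; the atom encoding (uppercase strings as variables), the trigger bookkeeping and the step bound are assumptions made for the example, and this is in no way Graal’s implementation.

from itertools import count

fresh = count()                          # source of fresh nulls

def homomorphisms(body, facts):
    # Enumerate substitutions mapping the body atoms into the fact base.
    # Convention: uppercase strings are variables, everything else is a constant.
    def extend(atoms, sub):
        if not atoms:
            yield dict(sub)
            return
        pred, args = atoms[0]
        for fpred, fargs in facts:
            if fpred != pred or len(fargs) != len(args):
                continue
            new, ok = dict(sub), True
            for a, t in zip(args, fargs):
                if a.isupper():
                    if new.setdefault(a, t) != t:
                        ok = False
                        break
                elif a != t:
                    ok = False
                    break
            if ok:
                yield from extend(atoms[1:], new)
    yield from extend(list(body), {})

def chase(facts, rules, max_steps=100):
    facts, fired = set(facts), set()
    for _ in range(max_steps):
        new_facts = set()
        for i, (body, head) in enumerate(rules):
            for sub in homomorphisms(body, facts):
                trigger = (i, tuple(sorted(sub.items())))
                if trigger in fired:           # each trigger is applied only once
                    continue
                fired.add(trigger)
                local = dict(sub)
                for pred, args in head:
                    for a in args:             # unbound head variables are existential
                        if a.isupper() and a not in local:
                            local[a] = f"_n{next(fresh)}"
                    new_facts.add((pred, tuple(local.get(a, a) for a in args)))
        new_facts -= facts
        if not new_facts:
            return facts                       # fixpoint reached: a universal model
        facts |= new_facts
    return facts                               # bound reached: the chase may not stop

# hasParent(X, Y) -> person(X), person(Y)
rules = [([("hasParent", ("X", "Y"))], [("person", ("X",)), ("person", ("Y",))])]
print(chase({("hasParent", ("alice", "bob"))}, rules))

This oblivious-style variant applies every trigger exactly once and never checks whether the head is already satisfied; the variants compared in the talk differ precisely in how much redundancy detection they add at that point, and at what cost.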

************************************************************************************************************

May 6, 2020, 2 PM: Jean-François BAGET (GraphIk)

Stable models course, lecture IV: on the programme, “the semantics of skolemisation” and a few tentative steps on the trail of the odious wumpus.

************************************************************************************************************

April 28, 2020, 2:00 PM: Jean-François BAGET (GraphIK)

Third lecture (modelling examples)

************************************************************************************************************

April 21, 2020, 10:00 AM: Jean-François BAGET (GraphIk)

“Stable Models II: ASPERIX strikes back” (provisional title), live at https://meet.jit.si/GraphIK

************************************************************************************************************

April 14, 2020, 2:00 PM: Jean-François BAGET (GraphIK)

Introduction to stable models.

This lecture series is intended for an audience familiar with existential rules, and presents stable model semantics as a solution for handling negation as failure. The course will show how reasoning must evolve to take this semantics into account, will suggest some research directions for the corresponding algorithms, and will present some program structures for modelling common problems.

************************************************************************************************************

Second ANR CQFD Project Meeting
March 2-3, 2020, Inria Paris, 2 rue Simone Iff, 9:30 AM-5 PM
Contact: Federico Ulliana (ulliana@lirmm.fr)

Talks:

Existential Rules: a Study Through Chase Termination, FO-Rewritability
and Boundedness – M.-L. Mugnier (GraphIK)

A Single Approach to Decide Chase Termination on Linear Existential
Rules – M. Thomazo (Valda)

Mixed-World Reasoning with Existential Rules under Active-Domain
Semantics – M. Bienvenu (LaBRI)

Ontology-Based RDF Integration of Heterogeneous Data – M. Buron

Graal: What’s Next? – J.-F. Baget (GraphIK)

The Complexity of Answering Rooted Counting Queries in the Presence of
an Ontology – Q. Maniere (LaBRI – Valda)

On the Optimization of Recursive Relational Queries: Application to
Graph Queries – L. Jachiet (Télécom Paris)

Provenance for EL: First Results – C. Bourgaux (Valda)

Existential Rules and Transitivity: Open Questions – M. Thomazo (Valda)

The SIDES platform – M.-C. Rousset (LIG)

************************************************************************************************************

GraphIK annual seminar: 27/02/2020, Hôtel Océania Métropole, 2 rue du Clos René, Montpellier

Contact: Marie-Laure Mugnier (mugnier@lirmm.fr)

9 AM-5 PM

List of talks:

Querying data mediated by existential rules: results and perspectives (Marie-Laure Mugnier)

Argumentation based on logical rules: results and perspectives (Madalina Croitoru)

Presentation of the ongoing PhD theses (Martin Jedwabny, Elie Najm, Olivier Rodriguez)

MyChoice: a multi-criteria/multi-actor argumentation tool (Rallou Thomopoulos)

Graal: What’s next? (Jean-François Baget)

************************************************************************************************************

February 3, 2020, 2:00 PM: Jean-François Baget (GraphIK)

Public presentation by Jean-François Baget (GraphIK team), Seminar room, LIRMM, Bat 4

Title: A platform for reasoning on knowledge bases: what is Graal still missing?

Abstract:

The Graal platform has been developed since 2014 within the GraphIK team. It makes it possible to reason on knowledge bases composed of fact bases and existential rules, implements the algorithms we work on, and is the basic building block of our applied projects. In this talk, we will discuss three directions in which the platform could evolve, justify their interest, and highlight the theoretical and practical difficulties of their implementation.

************************************************************************************************************

December 12, 2019, 2:00 PM, Room 3/124, Bat 5: Carlos SAEZ (Universitat Politècnica de València)

Title: Probabilistic methods for multi-source and temporal biomedical data quality and variability assessment: a review and case studies
Abstract: Biomedical data repositories are becoming bigger both in terms of sample size and number of variables. Two significant reasons behind this are the widespread adoption of data-sharing initiatives and technological infrastructures, and the continuous and systematic population of those repositories over long periods of time. However, these two situations can also introduce potential confounding factors in data which may hinder their reuse, such as for population research or machine learning. Concretely, differences in protocols, populations, or even unexpected biases or errors, either caused by systems or humans, can lead to undesired heterogeneity in data among their sources or over time. This multi-source and temporal variability of data can be reflected in their statistical distributions, representing a Data Quality (DQ) issue which must be addressed for a reliable data reuse.
In this talk we will first review proposed methods for multi-source and temporal DQ assessment. These include a set of metrics to measure and visualize the data concordance among multiple sources (e.g., the Global Probabilistic Deviation and the Source Probabilistic Outlyingness), and an exploratory methodology to describe the variability of data over time (e.g., Information Geometric Temporal (IGT) plots). In the second part, we will describe the application of these methods to a selection of case studies, including the Public Health Mortality Registry of the Region of Valencia, Spain, the US National Hospital Discharge Survey, and a pilot project by the Spanish Ministry of Health, Social Services and Equality aimed at a standardized and DQ-assessed maternal-child care integrated data repository for research and monitoring of best practices. We will describe the generic usage of these methods using the Open Source R package EHRtemporalVariability developed by the lab.
Speaker: Carlos Saez, PhD in Technologies for Health and Well-Being (2016), MSc in Artificial Intelligence, Pattern Recognition and Digital Imaging (2009), is a postdoctoral researcher at the Biomedical Data Science Lab of the ITACA Institute of the Universitat Politècnica de València (UPV), Spain. His current research addresses data quality (DQ) and variability assessment of biomedical data, focusing especially on the automated characterization of big data variability over time and among multiple sources (e.g., hospitals, devices, professionals). Two of his first-author publications were selected among the best published worldwide by the International Medical Informatics Association in the fields of Secondary Use of Patient Data (2016) and Health Information Systems (2013). He is a visiting research fellow at the Department of Biomedical Informatics, Harvard Medical School, US, and the Center for Research in Health Technologies and Information Sciences (CINTESIS), University of Porto, Portugal.

************************************************************************************************************

June 7, 2019, 2:00 PM, Room 03/124, Bat5 : Paolo Papotti (EURECOM)

Title: Rule Discovery in RDF Knowledge-Bases

Abstract: In this talk we present RuDiK, a system for the discovery of declarative rules over RDF knowledge bases (KBs). RuDiK discovers rules that express positive relationships between entities, such as “if two persons have the same parent, they are siblings”, and negative rules, i.e., patterns that identify contradictions in the data, such as “if two persons are married, one cannot be the child of the other”. While the former class infers new facts in the KB, the latter class is crucial for other tasks, such as detecting erroneous triples in data cleaning, or the creation of negative examples to bootstrap learning algorithms. RuDiK discovers rules with a more expressive rule language than previous approaches, and its mining is robust to existing errors and incompleteness in the KB. We report experiments over real-world KBs to show that RuDiK outperforms previous proposals in terms of efficiency and that it discovers more effective rules for data curation. Finally, we discuss how automatically discovered rules can support other applications, such as computational fact checking.

http://www.eurecom.fr/~papotti/

********************************************************************************************************************

March 22, 2019, 10:45 AM: Cyril Savelief (ComputableFacts) – Salle 3.152, Bat 5.

Presentation of ComputableFacts’ application needs and of their use of existential rules (via the Graal tool)

********************************************************************************************************************

CQFD – Kick-Off Meeting – Paris Télécom (46 Rue Barrault, 75013) – 09:20 AM, Room C017

Summary: Ontology-Based Data Access (OBDA) is a novel paradigm for data access and integration that makes it easier to query legacy databases by exploiting domain knowledge provided by ontologies. In the OBDA approach, ontologies enable a unified view of information over a collection of datasets, by enriching, generalizing, and relating the vocabularies of different data sources. Ontologies also allow one to query incomplete databases, by relying on inference mechanisms including the creation of new values to represent unknown information. The goal of the CQFD project is to develop an OBDA framework applicable on heterogeneous data models and deployable over federated polystore systems.

This two-day kick-off meeting will present the project and the related challenges in more detail. The talks by members of the project will be organized in six sessions.

Monday: federation and heterogeneous data, use cases on data integration and temporal data, temporal reasoning.

Tuesday: JSON data and expressive queries, reasoning with provenance information, presentation of software tools.

Contact: Federico Ulliana (ulliana@lirmm.fr)

********************************************************************************************************************

January 10, 2019, 09:00 AM, GraphIK tools presentation:

Presentation of software tools developed in the team:

  • Voting module, by Martin Jedwabny
  • DOCAMEX tool, by Clément Sipieter
  • The new COGUI and its links with Graal, by Alain Gutierrez

********************************************************************************************************************

December 21, 2018, 10:00 am, Room 03.152, Bat 5: David Carral (TU Dresden)
Title: Reasoning over Existential Rules with Acyclicity Notions
Abstract: The chase is a sound and complete (albeit non-terminating) algorithm for conjunctive query answering over ontologies of existential rules. On the theoretical side, we develop sufficient conditions to guarantee its termination (i.e., acyclicity notions), and study several restrictions that furthermore ensure its polynomiality.
On the practical side, we empirically study the generality of these conditions and we extend the Datalog engine VLog to develop an efficient implementation of the chase. Furthermore, we conduct an extensive evaluation, and show that VLog can compete with the state of the art, regarding runtime, scalability, and memory efficiency.
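
To give a concrete flavour of such acyclicity notions, the sketch below checks weak acyclicity, one of the simplest members of this family (the talk covers more general notions); the rule encoding with uppercase strings as variables is an assumption made for the example.

def weakly_acyclic(rules):
    # rules: list of (body, head); atoms are (predicate, args) tuples and
    # uppercase strings are variables. Positions are (predicate, argument index).
    normal, special = set(), set()
    for body, head in rules:
        body_vars = {a for _, args in body for a in args if a.isupper()}
        for bpred, bargs in body:
            for i, var in enumerate(bargs):
                if not var.isupper():
                    continue
                src = (bpred, i)
                for hpred, hargs in head:
                    for j, ha in enumerate(hargs):
                        if ha == var:
                            normal.add((src, (hpred, j)))    # the value is copied
                        elif ha.isupper() and ha not in body_vars:
                            special.add((src, (hpred, j)))   # a fresh null is created
    edges = normal | special

    def reaches(start, goal):
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(v for (u, v) in edges if u == node)
        return False

    # Weakly acyclic iff no special edge lies on a cycle of the position graph.
    return not any(reaches(v, u) for (u, v) in special)

# person(X) -> hasParent(X, Y), person(Y): not weakly acyclic, because the fresh
# null created in position (person, 0) can itself trigger the creation of new nulls.
r = [([("person", ("X",))], [("hasParent", ("X", "Y")), ("person", ("Y",))])]
print(weakly_acyclic(r))    # False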

********************************************************************************************************************

December 17, 2018, 9:00, GraphIK Annual Seminar:

Argumentation hypergraphs, by Bruno Yun

Boundedness and k-boundedness for existential rules, by Stathis Delivorias

Existential logic for defeasible reasoning (ELDR), by Raouf Hecham

Querying key-value stores with ontological rules, by Olivier Rodriguez

Issues in the integration of data equipped with ontologies, by Patrice Buche

Querying a knowledge base of mechanical assemblies defined by a 3D CAD representation, by Federico Ulliana

Modelling biological systems, by Michel Leclère

********************************************************************************************************************

Thursday, November 8, 2018, 3:15 PM, Building 5, Room 3.152

Temporal reasoning with OWL Time and Allen’s interval algebra
Jean-François Baget (GraphIK)

********************************************************************************************************************

Thursday, October 25, 2018, 2 PM-4 PM, Building 5, Room 3.152

Ontology-Based Data Access with RDFS as pivot language: work in progress
Marie-Laure Mugnier (GraphIK)
Joint work with Maxime Buron, François Goasdoué, Ioana Manolescu (Inria CEDAR team, Saclay)

********************************************************************************************************************

Tuesday, July 24, 2018, 10 AM, Building 5, Room 3.152

Presentations of summer internship work

10:00-10:30 Saturation with existential rules in Postgres, Virtuoso and Graal
Mathieu Dodard and Renaud Colin (GraphIK)

10:30-11:00 An algorithm for computing the core of a set of atoms in Graal
Guillaume Perution-Khili (GraphIK)

********************************************************************************************************************

PhD defence, on Monday, July 9th 2018, at 09:30, Building 5 – Room 03.124: Abdelraouf Hecham (GraphIK)

The presentation will be in English.

Jury:
Guillermo Ricardo SIMARI, Professor, Universidad Nacional del Sur, Buenos Aires, Argentina (reviewer)
Nicolas MAUDET, Professor, Université Pierre et Marie Curie, Paris, France (reviewer)
Francesca TONI, Professor, Imperial College London, London, United Kingdom (examiner)
Odile PAPINI, Professor, Université Aix-Marseille, Marseille, France (examiner)
Madalina CROITORU, MC HDR, Université de Montpellier, Montpellier, France (thesis supervisor)
Pierre BISQUERT, CR2, INRA, Montpellier, France (thesis co-supervisor)

Title: “Defeasible Reasoning for Existential Rules”

Abstract:
Knowledge representation and reasoning on the Semantic Web has recently focused, due to practical rationale, on the subset of first order logic called existential rules.
Reasoning amounts to answering queries over a data layer of factual knowledge and an ontological layer of deductive rules and negative constraints. These negative constraints express contradictions that may arise either from the data layer (erroneous facts that lead to inconsistency) or from the ontological layer (defeasible rules that lead to incoherence). Classical query answering in presence of contradictions is trivial since from falsehood everything follows (ex falso quodlibet).
In this thesis we investigate reasoning with existential rules in presence of conflicting information and introduce defeasible existential rules reasoning. We provide three main salient results as follows. First we show that classical defeasible reasoning techniques need to be revisited for existential rules and study their theoretical and implementation related challenges. Second, we provide a new combinatorial structure that allows for diverse variants of defeasible reasoning to be captured together and study its expressivity and versatility. Third we evaluate our work with respect to the state of the art in inconsistency handling in existential rules and investigate the human appeal of such reasoning techniques.

********************************************************************************************************************

June 28, 2018, 2:00 pm, Room 03/124, Building 5: Arthur Boixel (GraphIK)

Title (Arthur): Deliberation towards single peaked preferences: first thoughts

Abstract: In this internship we lay down first ideas regarding the use of argumentative deliberation for social choice. More precisely, we are interested in avoiding certain known unfavorable properties of the user preferences that are used as voting input. We focus on the single-peaked preference structure and formalise a deliberation protocol that helps bring preference profiles closer to such a structure. We study the theoretical properties of this protocol and demonstrate its desirable behavior empirically.

********************************************************************************************************************

June 28, 2018, 2:00 pm, Room 03/124, Building 5: Cesar Prouté (GraphIK)

Title: Distance based repair filtering for inconsistent knowledge bases

Abstract: The number of repairs of an inconsistent knowledge base can be exponential with respect to the number of facts. Furthermore, reasoning with repair-based semantics can be computationally expensive. During this internship we focused on the following research question: “What is a *representative* subset of repairs?” In order to answer this question we defined a syntactic distance measure between repairs, following the intuition that it can allow us to filter out repairs that are “close” to each other. We analysed this measure from a semantic point of view and provided a purely semantic notion of optimal subsets of repairs.

********************************************************************************************************************

March 21, 2018, 10:00 am: Olivier Corby (WIMMICS, Inria Sophia Antipolis)
Title: LDScript: a Linked Data Script Language (ISWC 2017 paper)

Abstract: In addition to the existing standards dedicated to representation or querying, Semantic Web programmers could really benefit from a dedicated programming language enabling them to directly define functions on RDF terms, RDF graphs or SPARQL results. This is especially the case, for instance, when defining SPARQL extension functions. The ability to capitalize complex SPARQL filter expressions into extension functions, or to define and reuse dedicated aggregates, are real cases where a dedicated language can support modularity and maintenance of the code. Other families of use cases include the definition of functional properties associated to RDF resources or the definition of procedural attachments as functions assigned to RDFS or OWL classes, with the selection of the function to be applied to a resource depending on the type of the resource. To address these needs we define LDScript, a Linked Data script language on top of the SPARQL filter expression language. We provide the formal grammar of the syntax and the Natural Semantics inference rules of the semantics of the language. We also provide a benchmark and perform an evaluation using real test bases from W3C with different implementations and approaches, comparing, in particular, script interpretation and Java compilation.

********************************************************************************************************************

March 8, 2018, 10:00 AM: AbdelRaouf Hecham (GraphIK) – Room 03.152, Bat 5

Title: Combining Apples and Oranges: A Flexible Representation for Defeasible Logics and Repair Semantics.
Abstract: We propose Statement Graphs (SG), a new logical formalism for defeasible reasoning and repair semantics based on argumentation. Using a flexible labeling function, SGs can capture the variants of defeasible reasoning (ambiguity blocking or propagating, with or without team defeat, and circular reasoning) and certain repair semantics (IAR, ICAR, and possibly ICR and AR). We also evaluate our approach with respect to human reasoning and propose a working first-order defeasible reasoning tool that, compared to the state of the art, has richer expressivity at no added computational cost.

********************************************************************************************************************

February 15, 2018, 10:00 AM: Bruno Yun – Room 3.152 (Building 5)

Title: A Structural Benchmark for Logical AFs.


Abstract:
This paper proposes a practically-oriented benchmark suite for computational argumentation. We instantiate abstract argumentation frameworks with existential rules, a language widely used in Semantic Web applications, and provide a generator of such instantiated graphs. We analyse the performance of argumentation solvers on these benchmarks.

********************************************************************************************************************

February 1, 10 AM, Room 3.152 (videoconference with Grenoble): Alain Gutierrez, “Presentation of the new version of Cogui”.

********************************************************************************************************************

Thursday, January 18, 10 AM, Bat 5, Room 1.124: Angela Bonifati and Ugo Comignani

* Functional Dependencies Unleashed for Scalable Data Exchange (presented by Angela Bonifati)

We address the problem of efficiently evaluating target functional dependencies (fds) in the Data Exchange (DE) process. Target fds naturally occur in many DE scenarios, including the ones in Life Sciences in which multiple source relations need to be structured under a constrained target schema. However, despite their wide use, target fds’ evaluation is still a bottleneck in the state-of-the-art DE engines. Systems relying on an all-SQL approach typically do not support target fds unless additional information is provided. Alternatively, DE engines that do include these dependencies typically pay the price of a significant drop in performance and scalability. In this paper, we present a novel chase-based algorithm that can efficiently handle arbitrary fds on the target. Our approach essentially relies on exploiting the interactions between source-to-target (s-t) tuple-generating dependencies (tgds) and target fds. This allows us to tame the size of the intermediate chase results, by playing on a careful ordering of chase steps interleaving fds and (chosen) tgds. As a direct consequence, we importantly diminish the fd application scope, often a central cause of the dramatic overhead induced by target fds. Moreover, reasoning on dependency interaction further leads us to interesting parallelization opportunities, yielding additional scalability gains. We provide a proof-of-concept implementation of our chase-based algorithm and an experimental study aimed at gauging its scalability and efficiency. Finally, we empirically compare with the latest DE engines, and show that our algorithm outperforms them.

* Interactive Mapping Specification with Exemplar Tuples (presented by Ugo Comignani)

While schema mapping specification is a cumbersome task for data curation specialists, it becomes unfeasible for non-expert users, who are unacquainted with the semantics and languages of the involved transformations. In this paper, we present an interactive framework for schema mapping specification suited for non-expert users. The underlying key intuition is to leverage a few exemplar tuples to infer the underlying mappings and iterate the inference process via simple user interactions under the form of boolean queries on the validity of the initial exemplar tuples. The approaches available so far are mainly assuming pairs of complete universal data examples, which can be solely provided by data curation experts, or are limited to poorly expressive mappings. We present several exploration strategies of the space of all possible mappings that satisfy arbitrary user exemplar tuples. Along the exploration, we challenge the user to retain the mappings that fit the user’s requirements at best and to dynamically prune the exploration space, thus reducing the number of user interactions. We prove that after the refinement process, the obtained mappings are correct. We present an extensive experimental analysis devoted to measure the feasibility of our interactive mapping strategies and the inherent quality of the obtained mappings.

********************************************************************************************************************

Thursday, September 21, 9:30 AM, Bat 5, Room 3.156

Federico Ulliana will present a paper by Markus Kroetzsch entitled “Ontologies for Knowledge Graphs?” (invited talk at DL 2017, see https://project.inria.fr/dl2017/) and connect it to issues and techniques studied in the team.

Abstract (of Markus Kroetzsch DL 2017 invited talk): Modern knowledge representation, and DL in particular, promises many advantages for information management, based on an unambiguous, implementation-independent semantics for which a range of reasoning services is available. These strengths align well with the needs of an ever growing information industry that embraces declarative knowledge modelling as one of the pillars of their businesses and operations.
Today, giants like Google, Facebook, and Wikimedia consciously deploy ontological models, and store information in graph-like data structures that are more similar to DL ABoxes than to traditional relational databases. Many smaller organisations follow, and “knowledge graphs” appear in numerous places. One would therefore expect to see a steep increase in the practical adoption of logic-based KR. In practice, however, this is rarely observed. I will discuss the current state of affairs and ask which technical and pragmatic issues might contribute to the surprisingly limited impact of DLs in this area. Wikidata, the knowledge graph of Wikipedia, will serve as a concrete illustration for a lack of formal KR in an application where other knowledge engineering and (semantic) data management technologies have been adopted with great enthusiasm. Focusing on this particular example, I will motivate some requirements arising in such systems, and present initial ideas for addressing them in various logics.

********************************************************************************************************************

18-21 July, 2017, 30th International Description Logic Workshop (DL 2017)
The DL workshop is the major annual event of the Description Logic research community. DL 2017 is organized by GraphIK (general chairs: Meghyn Bienvenu and Marie-Laure Mugnier) and will be held at the Faculty of Medicine. https://project.inria.fr/dl2017/

********************************************************************************************************************

July 17, 2017, 1:45 PM, Pagoda Workshop – Faculté de Médecine, Council room (Salle du conseil), 13:45-18:30

This mini-workshop gathers participants to the Pagoda ANR project (Practical algorithms for ontology-based data access) and invited international collaborators.

Programme

13:45 – 16:15
– Overview of the Pagoda project (Meghyn Bienvenu, GraphIK)
– Inconsistency handling (Camille Bourgaux, TU Dresden)
– Succinctness of rewritings (Roman Kontchakov, Birkbeck, University of London)
– Optimal query rewriting (Stanislav Kikot, Birkbeck, University of London)

16:15 – 17:00 Coffee, tea & discussion

17:00 – 18:30
– Linking open-world knowledge bases using nonmonotonic rules (Mantas Simkus, TU Vienna)
– Dichotomies in ontology-mediated querying with the guarded fragment (Frank Wolter, University of Liverpool)

********************************************************************************************************************

June 25, 2017, 9:30 am: Clément Sipieter (GraphIK) – Bat 5, Room 3.152

Title: Presentation of the new version of Graal

Details: I will present the new version of Graal, our platform dedicated to existential rules, as well as the documentation on the website. Bring your laptop to install Graal and try the examples.

********************************************************************************************************************

Thursday, June 22, 9:30 AM, Bat 5, Room 1.124: presentations of internship work (2)

* Boundedness and linear existential rules, Benjamin Boisson, Stathis Delivorias and Marin Julien

* Implementation of logical functions on genetic sequences, Guillaume Kihli

********************************************************************************************************************

Wednesday, June 21, 9:30 AM, Bat 5, Room 3.124: presentations of internship work (1)

* Stratifiability of a set of existential rules with default negation, Arthur Boixel

* Refinement of the dependency graph of existential rules, Thibault Bondetti

********************************************************************************************************************

Friday, May 19, 2017, 2:00 PM, Seminar room, LIRMM

AI Seminar, Thomas Schiex (INRA Toulouse)

A stroll through optimisation on probabilistic graphical models and cost function networks, with applications in bioinformatics

Title: Optimisation and Counting in Graphical Models

“Graphical models” (GMs) define a family of mathematical modeling frameworks that have been independently explored as deterministic models (Constraint and Cost Function Networks – CN/CFNs) and stochastic models (e.g. Markov Random Fields – MRFs, factor graphs).

In this talk, I’d like to show that while dynamic and linear programming underlie both local consistency filtering in CN/CFNs and message passing in MRFs, the specific focus on optimized guaranteed methods for deterministic GMs translates into a technology that is directly useful for stochastic models.

This is true both for decision NP-complete optimization (Maximum a Posteriori or MAP-MRF problem) and for #P-complete counting (computing Z, the normalizing constant aka the Partition Function). This will be illustrated on recent results obtained in Computational Structural Biology in the context of protein design.

********************************************************************************************************************

Thursday, April 20, 2017, 10:30 AM, Bat 5, Room 1.156 – Nikos Karanikolas (GraphIK, IATE)

Title: Computational aspects of social choice theory.

This talk will focus on computational problems arising from the theory of social choice and especially voting theory.

A central problem of voting theory is the computation of the winner of an election when we have as input the preferences of the voters. The literature offers many voting rules according to which the winner of the election can be computed. Voting theory is a seminal subject in Computational Social Choice (CSC), with applications in society. First, we will define the voting problem and some voting rules, and give results about the approximation of these rules. The voting rules we primarily focus on are those proposed by Dodgson and Young, which are designed to find an alternative closest to being a Condorcet winner. According to the Condorcet criterion, the winner of the election should be the candidate that a majority of the voters prefers to every other candidate. Then, we will point out the approximability of these rules with respect to satisfying socially desirable properties. The last issue we will mention is the interaction of InfoVis and CSC, and how this approach can help people understand the structure of preferences and the notions of voting theory.
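
As a small, hedged illustration of the Condorcet criterion mentioned above (not of the Dodgson or Young rules themselves), the following sketch computes a Condorcet winner from ranked ballots when one exists; the ballot encoding is an assumption made for the example.

def condorcet_winner(ballots):
    # ballots: list of rankings (best candidate first), all over the same candidates.
    candidates = set(ballots[0])

    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b
        wins = sum(1 for ranking in ballots if ranking.index(a) < ranking.index(b))
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None        # no Condorcet winner (e.g. a majority cycle)

ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(ballots))      # "a": preferred to b and to c by a majority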

********************************************************************************************************************

March 24, 2017, 2 PM, Seminar room (Building 4)

Speaker: Amedeo Napoli (DR CNRS – LORIA, Nancy)

Title: An Exploratory Knowledge Discovery Process based on Formal Concept Analysis

Abstract: Knowledge discovery (KD) in complex datasets can be considered from several points of view, e.g. data, knowledge, and problem solving. KD is applied to datasets and has a direct impact on the design of knowledge bases and problem solving. Dually, principles supporting knowledge representation can be reused in knowledge discovery, as in declarative data mining, for making KD more exploratory, iterative and interactive. Accordingly, one of our main objectives in KD is to design an exploratory KD approach supporting interactions between data and domain knowledge, discovery and representation, and between the analyst and the KD process. In this presentation, we discuss the process of Exploratory Knowledge Discovery based on Formal Concept Analysis (FCA). FCA starts with a binary context and outputs a concept lattice, which can be visualized, navigated and interpreted by human agents, and which can also be processed by software agents. FCA can be extended to pattern structures for dealing with complex data such as Linked Data (RDF). We will present a methodology based on FCA and pattern structures for mining definitions in RDF data. Such data are classified into a concept lattice allowing data exploration and the discovery of implications, used to automatically detect missing information and to complete RDF data. Contrasting the preceding data-directed KD process, we will present a goal-directed KD process, where the search for patterns of high interest is guided by the analyst preferences and domain constraints. Finally, we will conclude by discussing the benefits of FCA and pattern structures for exploratory knowledge discovery.
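
To make the step from a binary context to formal concepts concrete, here is a brute-force, illustrative sketch of the two FCA derivation operators and of concept enumeration on a toy context; the toy data are invented for the example, and real FCA tools use far more efficient algorithms than this exhaustive search.

from itertools import chain, combinations

context = {                     # binary context: object -> set of attributes it has
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = set().union(*context.values())

def intent(objects):            # attributes shared by all given objects
    return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

def extent(attrs):              # objects having all given attributes
    return {o for o, a in context.items() if attrs <= a}

# A pair (E, I) is a formal concept when extent(I) == E and intent(E) == I;
# applying extent then intent to any attribute set always yields such a pair.
concepts = set()
for attrs in chain.from_iterable(combinations(attributes, k) for k in range(len(attributes) + 1)):
    E = extent(set(attrs))
    I = intent(E)
    concepts.add((frozenset(E), frozenset(I)))

for E, I in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(E), sorted(I))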

********************************************************************************************************************

Thursday, March 9, 2017, 10 AM, Bat 5, Room 3.152

Title: Are ranking semantics sensitive to the notion of core?

By Bruno Yun, joint work with Madalina Croitoru and Pierre Bisquert

Abstract: In this paper, we study the impact of two notions of core on the output of ranking semantics in logical argumentation frameworks. We consider the existential rules fragment, a language widely used in Semantic Web and Ontology-Based Data Access applications. Using burden semantics as an example, we show how some ranking semantics yield different outputs on the argumentation graph and its cores. We extend existing results in the literature regarding core equivalences on logical argumentation frameworks and propose the first formal characterisation of core-induced modification for a class of ranking semantics satisfying given postulates.

********************************************************************************************************************

Wednesday, March 8, 2017, 11 AM, Bat 5, Room 3.152

Title: Ranking Arguments With Compensation-Based Semantics

By Srdjan Vesic (CRIL), joint work with Leila Amgoud, Jonathan Ben-Naim and Dragan Doder.

Abstract: In almost all existing semantics in argumentation, a strong attack has a lethal effect on its target, that a set of several weak attacks may not have. This work investigates the case where several weak attacks may compensate one strong attack. It defines a broad class of ranking semantics, called alpha-BBS, which satisfy compensation. alpha-BBS assign a burden number to each argument and order the arguments with respect to those numbers. We study formal properties of alpha-BBS, implement an algorithm that calculates the ranking, and perform experiments that show that the approach computes the ranking very quickly.

Moreover, an approximation of the ranking can be provided at any time.

********************************************************************************************************************

February 24, 2017, 2 PM, Seminar room (Building 4)

Speaker: Jérôme Lang (DR at Lamsade, Université Paris-Dauphine)

Title: About Knowledge Representation and Computational Social Choice

Abstract: Social choice theory studies the aggregation of individual preferences towards a collective choice. Computational social choice emerged in the late 1980s, and mostly uses computational paradigms and techniques to provide a better analysis of social choice mechanisms (especially in the fields of voting and of fair division of resources), and to construct new ones. Among the subfields of artificial intelligence that are concerned by this interaction, knowledge representation plays an important role (other subfields being machine learning, reasoning with uncertainty, search, and constraint programming). The reasons for which KR plays an important role include: representing preferences and reasoning about them; computing collective decisions with incomplete knowledge of agents’ preferences; the role of knowledge in strategic behavior; and using logic for automated theorem proving in social choice. The talk will give an overview of part of these topics, and will try to identify challenges for KR researchers interested in computational social choice.

********************************************************************************************************************

February 2, 2017, 10 AM: Michel Leclère (GraphIK)

Bat 5, Room 3.152. Scientific review of the Qualinca project: quality of links in the SUDOC database (ABES).

********************************************************************************************************************

December 1st, 2016, GraphIK annual seminar + invited talk by Pierre Bourhis (LIFL, Inria Links) on query answering with hidden relations.

9:30 AM-6 PM, La Gazette Café.

********************************************************************************************************************

November 25, 2016, 2 pm, Seminar Room, Building 4: PhD defence of Swan Rocher (GraphIK)

Title: Querying Existential Rule Knowledge Bases: Decidability and Complexity

Abstract: In this thesis we investigate the issue of querying knowledge bases composed of data and general background knowledge, called an ontology. Ontological knowledge can be represented under different formalisms and we consider here a fragment of first-order logic called existential rules (also known as tuple-generating dependencies and Datalog+/-). The fundamental entailment problem at the core of this thesis asks if a conjunctive query is entailed by an existential rule knowledge base. General existential rules are highly expressive, however at the cost of undecidability. Various restrictions on sets of rules have been proposed to regain the decidability of the entailment problem. Our specific contribution is two-fold. First, we propose a new tool that allows to unify and extend most of the known existential rule classes that rely on acyclicity conditions to tame infinite forward chaining, without increasing the complexity of the acyclicity recognition. Second, we study the compatibility of known decidable rule classes with a frequently required modeling construct, namely transitivity of binary relations. We help clarifying the picture of negative and positive results on this question, and provide a technique to safely combine transitivity with one of the simplest, yet useful, decidable rule classes, namely linear rules.

********************************************************************************************************************

November 25, 2016, 10:15 am, Bat 4, Seminar Room:

Title: The Curse of Finiteness: Undecidability of Database-Inspired Reasoning Problems in Very Expressive Description Logics

Recently, the field of knowledge representation is drawing a lot of inspiration from database theory. In particular, in the area of description logics and ontology languages, interest has shifted from satisfiability checking to query answering, with various query notions adopted from databases, like (unions of) conjunctive queries or different kinds of path queries. Likewise, the finite model semantics is being established as a viable and interesting alternative to the traditional semantics based on unrestricted models. In this talk, we discuss diverse database-inspired reasoning problems for very expressive description logics (all featuring the worrisome triad of inverses, counting, and nominals) which have in common that role paths of unbounded length can be described (in the knowledge base or the query), leading to a certain non-locality of the reasoning problem. We show that for all the cases considered, undecidability can be established by very similar means. Most notably, we show undecidability of finite entailment of unions of conjunctive queries for a fragment of SHOIQ (the logic underlying the OWL DL ontology language), and undecidability of finite entailment of conjunctive queries for a fragment of SROIQ (the logical basis of the more recent and popular OWL 2 DL standard).

********************************************************************************************************************

November 10, 2016, 10:45 am, Room 02/022, Building 5, Saint Priest – Stanislav Kikot (Birkbeck College, University of London)

Title: The Complexity of Ontology-Based Data Access with OWL 2 QL and Bounded Treewidth Queries

Abstract: Our concern is the overhead of answering OWL 2 QL ontology-mediated queries (OMQs) in ontology-based data access compared to evaluating their underlying tree-shaped and bounded treewidth conjunctive queries (CQs). We will demonstrate that OMQs with bounded-depth ontologies have nonrecursive datalog (NDL) rewritings that can be constructed and evaluated in LOGCFL for combined complexity, even in NL if their CQs are tree-shaped with a bounded number of leaves, and so incur no overhead in complexity-theoretic terms. For OMQs with arbitrary ontologies and bounded-leaf CQs,  NDL-rewritings are constructed and evaluated in LOGCFL. We will also present experimental results suggesting  feasibility and scalability of our rewritings compared to standard NDL-rewritings. On the negative side, we prove that answering OMQs with tree-shaped CQs is not fixed-parameter tractable if the ontology depth or  the number of leaves in the CQs is regarded as the parameter, and that answering OMQs with a fixed ontology (of infinite depth)  is NP-complete for tree-shaped and LOGCFL for bounded-leaf CQs.

********************************************************************************************************************

November 10, 2016, 9:30 am, Room 02/022, Building 5, Saint Priest – Elena Botoeva (Free University of Bozen-Bolzano)

Title: Query Inseparability of Description Logic Knowledge Bases and TBoxes.

Abstract: Deciding inseparability of description logic knowledge bases (KBs) with respect to conjunctive queries is fundamental for many KB engineering and maintenance tasks including versioning, module extraction, knowledge exchange and forgetting.   We study the combined and data complexity of this inseparability problem for ALC and for fragments of Horn-ALCHI, including the description logics underpinning OWL 2 QL and OWL 2 EL.  In the Horn case, we give a uniform game-theoretic characterisation of KB conjunctive query inseparability, and devise a number of combined complexity results ranging from P- to ExpTime- to 2ExpTime-completeness. In the ALC case, we also consider unions of conjunctive queries; we provide model-theoretic criteria of KB query inseparability, and prove that this problem is undecidable for conjunctive queries, but 2ExpTime-complete for unions of conjunctive queries. We also report some known results on the problem of whether two TBoxes give the same answers to all queries in a given vocabulary over all ABoxes.

 ********************************************************************************************************************

PhD thesis defense – Abdallah Arioua, October 17, 2016, 2:00 PM, St Priest campus, building 5, room bat5-3/124

Title:  Formalizing and Studying Dialectical Explanations in Inconsistent Knowledge Bases

Abstract:

Knowledge bases are deductive databases where the machinery of logic is used to represent domain-specific and general-purpose knowledge over existing data. In the existential rules framework, a knowledge base is composed of two layers: the data layer, which represents the factual knowledge, and the ontological layer, which incorporates rules of deduction and negative constraints. The main reasoning service in such a framework is answering queries over the data layer by means of the ontological layer. As in classical logic, contradictions trivialize query answering since everything follows from a contradiction (ex falso quodlibet).

Recently, inconsistency-tolerant approaches have been proposed to cope with this problem in the existential rules framework. They deploy repairing strategies on the knowledge base to restore consistency and overcome the problem of trivialization. However, these approaches are sometimes unintelligible and not straightforward for the end-user, as they implement complex repairing strategies. This can jeopardize the trust relation between the user and the knowledge-based system. In this thesis we answer the research question: “How do we make query answering intelligible to the end-user in the presence of inconsistency?”. To answer the question, we consider the general framework of argumentation and we propose three types of explanations: (1) One-shot Argument-based Explanations, (2) Meta-level Dialectical Explanations, and (3) Object-level Dialectical Explanations.

The first one is a set of arguments in favor of or against the query in question. The other two take the form of a dialogue between the user and the reasoner about the entailment of a given query. We study these explanations in the framework of logic-based argumentation and dialectics, and we study their properties and their impact on users.
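
To give a rough idea of the kind of repairing strategies these explanations are meant to make intelligible, here is a minimal, self-contained Python sketch of repair-based query answering under the AR semantics (the facts, the conflict and the predicate names are invented for illustration; this is not the algorithm studied in the thesis):

    from itertools import combinations

    facts = {"status(paper1,accepted)", "status(paper1,rejected)", "author(paper1,ann)"}
    # Negative constraint: a paper cannot be both accepted and rejected.
    conflicts = [{"status(paper1,accepted)", "status(paper1,rejected)"}]

    def consistent(subset):
        return not any(c <= subset for c in conflicts)

    def repairs():
        # Repairs = maximal consistent subsets of the fact base (naive enumeration).
        candidates = [set(s) for r in range(len(facts), -1, -1)
                      for s in combinations(facts, r) if consistent(set(s))]
        return [s for s in candidates if not any(s < t for t in candidates)]

    def ar_entails(atom):
        # AR semantics: the atom must hold in every repair.
        return all(atom in r for r in repairs())

    print(ar_entails("author(paper1,ann)"))        # True: not involved in any conflict
    print(ar_entails("status(paper1,accepted)"))   # False: missing from one repair

An explanation facility such as the ones described above would then justify, for instance, why the second answer is rejected, by exhibiting arguments built from the conflicting facts.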

********************************************************************************************************************

Friday, October 7, 2016, 2:00 PM – BAT5: JP Nougier room (2/022)

PhD thesis defense: Namrata Patel

Title: Preference Handling in Decision-Making Problems

Abstract: Intelligent ‘services’ are increasingly used on e-commerce platforms to provide assistance to customers. In this context, preferences have gained rapid interest for their utility in solving problems related to decision making. Research on preferences in artificial intelligence (AI) has shed light on various ways of tackling this problem, ranging from the acquisition of preferences to their formal representation and eventually their proper manipulation.

Following a recent trend of stepping back and looking at decision-support systems from the user’s point of view, i.e. designing them on the basis of psychological, linguistic and personal considerations, we take up the task of developing an “intelligent” tool which uses comparative preference statements for personalised decision support. We tackle and contribute to different branches of research on preferences in AI: (1) their acquisition, (2) their formal representation and (3) their implementation.

We first address a bottleneck in preference acquisition by proposing a method of acquiring user preferences, expressed in natural language (NL), which favours their formal representation and further manipulation. We then focus on the theoretical aspects of handling comparative preference statements for decision support. We finally describe our tool for product recommendation that uses: (1) a review-based analysis to generate a product database, (2) an interactive preference elicitation unit to guide users to express their preferences, and (3) a reasoning engine that manipulates comparative preference statements to generate a preference-based ordering on outcomes as recommendations.

*********************************************************************************************************************

June 20, 2016, Bat 5, 3.124, 9:30 AM

Several talks in the context of PAGODA project’s meeting

9h30 – 9h45 Introduction, Meghyn Bienvenu (GraphIK)

9h45 – 10h15 “Query-driven repairing of inconsistent DL-Lite knowledge bases”, Camille Bourgaux (LRI)

10h15 – 10h45 “Deductive RDF triplestores: a declarative framework for data integration and analysis”, Marie-Christine Rousset (LIG)

11h – 11h30 “On Bounded Positive Existential Rules”, Marie-Laure Mugnier (GraphIK)

********************************************************************************************************************

June 13-17, 2016, SupAgro Campus, Journées du pré-GDR IA et JIAF

The Coconut and GraphIK teams host the pré-GDR IA days, as well as JFPC (Journées francophones de programmation par contraintes) and JIAF (Journées de l’intelligence artificielle fondamentale).

http://www.gdria.fr/journees/

https://www.supagro.fr/jfpc_jiaf_2016/

********************************************************************************************************************

June 10, 2016, 2:00 PM, room BAT5-3/124

Leila Amgoud (DR-CNRS at IRIT, Toulouse)

Title: Argumentation-based defeasible reasoning

Abstract: Argumentation is an alternative approach for defeasible reasoning. It is based on the idea of justifying plausible conclusions by “strong” arguments. Starting from a knowledge base encoded in a logical language, an argumentation system defines arguments and attacks between them using the consequence operator associated with the language. Finally, it uses a semantics for evaluating the arguments. The plausible conclusions to be drawn from the knowledge base are those supported by “good” arguments.

In this talk, we present two families of such systems: the family using extension semantics and the one using ranking semantics. We discuss the outcomes of both families and compare them. We then compare the argumentation approach with other well-known approaches for defeasible reasoning, namely default logic.
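
As a concrete illustration of the extension-based family (a toy example with an invented attack graph, not taken from the talk), the following Python sketch computes the grounded extension of an abstract argumentation framework:

    # Toy abstract argumentation framework: arguments and an attack relation.
    arguments = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

    def attackers(x):
        return {y for (y, z) in attacks if z == x}

    def defended(s):
        # Arguments all of whose attackers are themselves attacked by some member of s.
        return {x for x in arguments
                if all(attackers(y) & s for y in attackers(x))}

    def grounded_extension():
        # Least fixed point of the characteristic function.
        s = set()
        while True:
            t = defended(s)
            if t == s:
                return s
            s = t

    print(sorted(grounded_extension()))   # ['a', 'c']: a is unattacked and defends c

The plausible conclusions are then those supported by arguments inside the computed extension; ranking semantics instead assign each argument a degree of acceptability.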

********************************************************************************************************************

May 20, 2016, 10:00 AM, Room 3/124, Bat 5 – Alain Gutierrez (GraphIK Team, Inria)

Abstract: I am trying to port Cogui to a new RCP-style (Rich Client Platform) software architecture. I will present the NetBeans platform, and in particular the benefits Cogui could draw from it:

  • A more open architecture based on the notion of plugins
  • Easier deployment and automatic updates
  • An improved user interface and new textual and graphical editors
  • A non-blocking, better-controlled script execution environment
  • etc.

Bringing the code together through loose coupling of software components makes it possible to envisage a cleaner separation between the representation and reasoning models and the code that implements them. Cogui could finally play its role of integrator to the full and become the first draft of a genuine knowledge-engineering platform.

****************************************************************

 March 18, 2016, 09:30 AM, Room 3/124: Abdelraouf Hecham (Graphik Team Inria)

Title: Extending games with a purpose for building profile-aware associations

Abstract: Associative networks have long been used as a way to provide intelligent machines with a working memory, and have been applied in various domains such as Natural Language Processing or customer association analysis. While offering numerous practical advantages, existing Games With a Purpose (GWAPs) for eliciting associative networks cannot be employed in certain domains (for example, customer association analysis) due to the lack of profile-based filtering. We extend the state of the art and ask the following research question: “How does considering players’ profile information when constructing a semantic network with a game with a purpose change the outcome of the game (i.e. the resulting associative network)?”. In order to answer this question, we present the KAT (Knowledge AcquisiTion) game, which extends the state of the art by considering agent profiling. We formalise the game, implement it and carry out a pilot study that validates the above-mentioned research hypothesis.

********************************************************************************************************************

March 11, 2016, 10:00 AM, seminar room, Bt 4, Alexis Tsoukias (DR-CNRS at LAMSADE, Paris Dauphine)

Title: What is a Decision Problem?

The talk introduces a general framework through which any type of decision problem should be characterisable. The aim of the framework is to show that only a finite number of primitive characteristics describe a decision problem, thus reducing the number of archetypes to a finite number of combinations. In the talk we present some of these characteristics in some detail. We also provide a number of simple examples in order to show how this framework helps in categorising current decision problems, in reformulating decision problems and in constructing formal arguments for the decision-aiding process.

********************************************************************************************************************

February 5, 2016, 09:30 AM, Room 3/124, Building 5: Michael Thomazo (OAK Team Inria)

Title: The combined approach: classification, limitations, and the case of linear rules

Abstract: We will be interested in querying data in the presence of ontologies. Various techniques combining materialization (modifying the data according to the ontology) and rewriting (modifying the query according to the ontology) have been proposed in the literature. After recalling the formalization of polynomial combined approaches, we will propose a classification of these approaches (“data-oblivious” and “schema-agnostic”), highlighting opportunities for improvement. We will conclude with the study of certain linear rules.
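
As a reminder of the two ingredients that combined approaches mix, here is a deliberately simplified Python sketch (hypothetical unary predicates and a single atomic rule, far below the expressivity discussed in the talk) contrasting materialization of the data with rewriting of the query:

    facts = {("Student", "alice"), ("Person", "bob")}
    rules = [("Student", "Person")]    # Student(x) -> Person(x)

    def materialize(facts, rules):
        # Saturation: apply the rules to the data until nothing new is derived.
        saturated = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                for pred, const in list(saturated):
                    if pred == body and (head, const) not in saturated:
                        saturated.add((head, const))
                        changed = True
        return saturated

    def rewrite(query_pred, rules):
        # Rewriting: expand the query with every predicate that entails it.
        preds = {query_pred}
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head in preds and body not in preds:
                    preds.add(body)
                    changed = True
        return preds

    print(materialize(facts, rules))   # adds ("Person", "alice")
    print(rewrite("Person", rules))    # {"Person", "Student"}

Polynomial combined approaches use both steps together while keeping each of them polynomial.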

*******************************************************************************************************************

January 8, 2016, 9:30 AM – 12:30 PM, Room 3/156: Working group on Ontology-Based Data Access: study of several papers on the combined approach (3rd session)

 *******************************************************************************************************************

December 18, 2015, 9:30 AM – 12:30 PM: Working group on Ontology-Based Data Access: study of several papers on the combined approach (2nd session)

********************************************************************************************************************

December 11, 2015, 2:00 PM, IA Seminar, P. Perny (LIP6, UPMC)

Title: Incremental elicitation of weights for multicriteria or collective decision making

Abstract: In this talk, we present new incremental preference elicitation methods for multicriteria or collective decision making. More precisely, the talk will focus on the active learning of weights in aggregation functions. We are interested in elicitation protocols which aim, at each step of the decision process, to identify the most informative possible question in order to reduce the uncertainty about the value of the potential solutions (regret minimization) and thus to speed up the determination of a necessarily optimal or near-optimal solution.

********************************************************************************************************************

December 11, 2015, 9:30 AM – 12:30 PM: Working group on Ontology-Based Data Access: study of several papers on the combined approach

********************************************************************************************************************

November 2, 2015, 10:15 AM: Several talks in the context of the PAGODA meeting

10h15 – 10h30: PhD thesis topic presentation, Modular reuse of ontologies with existential rules by Stathis Delivorias

10h30 – 11h15: Query-based explanation and repairing of inconsistent DL- Lite KB by Camille Bourgaux

16h30-17h15: Ontology-based data access with key-value stores by Federico Ulliana

********************************************************************************************************************

October 23, 2015, LIRMM, Bat 5, Room 3/152 – 10:00 AM, Antoine Amarilli (LTCI Lab, Télécom ParisTech)

Title : Combining Existential Rules and Description Logics

Query answering under existential rules — implications with existential quantifiers in the head — is known to be decidable when imposing restrictions on the rule bodies such as frontier-guardedness [Baget, Leclère, Mugnier, 2010; Baget, Leclère, Mugnier, Salvat, 2011]. Query answering is also decidable for description logics [Baader, 2003], which further allow disjunction and functionality constraints (asserting that certain relations are functions); however, they are focused on ER-type schemas, where relations have arity two. This work investigates how to get the best of both worlds: having decidable existential rules on arbitrary-arity relations, while allowing rich description logics, including functionality constraints, on arity-two relations. We first show negative results on combining such decidable languages. Second, we introduce an expressive set of existential rules (frontier-one rules with a certain restriction) which can be combined with powerful constraints on arity-two relations (e.g. GC2, ALCQIb) while retaining decidable query answering. Further, we provide conditions under which functionality constraints can be added on the higher-arity relations.

********************************************************************************************************************

September 25, 2015, 10:00 AM: LIRMM, Bat. 5, 3.124, several talks in the context of the Qualinca project meeting

10h15-12h: SudoQual (ABES, LIRMM-INRIA):

– Method (approaches to referential quality, expertise model, enrichment and linking system), by Michel Leclère (GraphIK)

– Prototype (software architecture, demo), by Alain Gutierrez (GraphIK)

– Evaluation (evaluation of the expertise model, tuning of the linking heuristic), by Aline Le Provost (ABES)

– Specifications of the demonstrator, by Yann Nicolas (ABES)

12h-12h30: New quantitative and qualitative data-linking results obtained with ProbFR on data from INA, DBpedia and MusicBrainz (Marie-Christine Rousset, LIG)

12h30-13h: Data-linking visualization tool developed at INA (Steffen Lalande, INA)

14h30-15h: Argumentation and linking (Abdallah Arioua, GraphIK)

********************************************************************************************************************

September 18, 2015: GraphIK annual seminar – (9h-17h30, Aquarium, Odysseum)


Several talks during this day:

1) Theme “Ontology-based data access” (OBDA)

* Talk: “mappings” in the OBDA paradigm (Meghyn)

* Talk: Presentation of @Web and ongoing work on SPARQL integrity constraints (Patrice)

2) Theme “Reasoning with imperfect knowledge and decision support”

* Talk: Explanation dialogues in inconsistency-tolerant query answering (Abdallah)

* Talk: Capitalizing on expertise: types of rules, application partnerships (Jérôme and Patrice)

* OBDA talk: Existential rules and transitivity (Swan)

3) Theme “Quality and interoperability of large bibliographic databases”

* Talk: Evaluating link quality in bibliographic databases: methodology and formal framework, demo (Michel and Alain)

********************************************************************************************************************

July 2, 2015, 2:30 PM, Bt 5, Room 3/156 – Stathis Delivorias (Dresden University)

He will present his Master’s (M2) work carried out in Dresden: Grounded Circumscription in Description Logics

https://ddll.inf.tu-dresden.de/web/Computational_Logic/en


********************************************************************************************************************

June 24, 2015, 09:30 – 10:30, BAT5-3/124 – Florence Bannay (IRIT)

Title: Towards a dual process cognitive model for argument evaluation.

Abstract: In this presentation we are interested in the computational and formal analysis of the persuasive impact that an argument can produce on a human agent.

We propose a dual-process cognitive computational model based on the highly influential work of Kahneman and investigate its reasoning mechanisms in the context of argument evaluation. This formal model is a first attempt to take greater account of human reasoning and a first step towards a better understanding of persuasion processes as well as human argumentative strategies, which is crucial in the collective decision-making domain.

The work presented is based on a collaboration with Pierre Bisquert (GraphIK, INRA) and Madalina Croitoru (GraphIK, LIRMM).

*******************************************************************************************************************

June 19, 2015, 9h30-16h, Bat 5, Room 3/156: WIMMICS-GraphIK software day

Provisional programme

09h15: welcome coffee

09h30-11h: Marie-Laure Mugnier, Clément Sipieter (GraphIK)

– existential rules: algorithmic techniques

– presentation of Graal and associated tools

11h: break

11h15-12h45: Olivier Corby, Catherine Faron (Wimmics)

– presentation of Corese and of the SPARQL Template Transformation Language

– SPARQL rules

13h-14h: lunch

14h-14h30: David Genest (LERIA)

– a SPARQL-like query language for conceptual graphs and its implementation in Cogitant

14h30-15h: Alain Gutierrez (GraphIK)

– what’s new in Cogui

15h-16h: brainstorming on the interoperability of our tools

********************************************************************************************************************

June 12, 2015, 09:30 – 10:30 BAT5-3/124 – (Abdallah, Swan, Madalina)

Title: Explanation Dialogues on Erroneous SameAs Using Argumentation Theory

Abstract: Assuring the quality of resources and semantic links in the LOD is becoming a very important issue, due to the gigantic growth of the LOD graph in recent years. Systems that aim at consuming LOD data, as well as Semantic Web frameworks for linking or integration, must rely both on the quality of the facts that describe a resource in a dataset and on the semantic links that inter-connect resources between different datasets. Now, more than ever, it is necessary to provide support to experts in the validation of facts in the LOD.

In this paper, we present a novel idea which applies a specific argumentation semantics to explain inconsistencies found in sameAs statements to the experts, thus supporting successive recovery actions. We demonstrate the expressive power of the proposed solution by providing significant explanatory examples.

The work presented is based on a collaboration with Nathalie Pernelle and Laura Papaleo (LRI).

********************************************************************************************************************

May 29, 2015, 10:00 – 11:00 BAT5-2/156 (Christian RETORE, TEXTE)

Title: Hilbert’s epsilon operator, or how generic elements make it possible to quantify without quantifiers.

Abstract: I will give a short historical introduction to the workshop we are organizing in Montpellier from June 10 to 12: Hilbert’s Epsilon and Tau in Logic, Informatics and Linguistics, https://sites.google.com/site/epsilon2015workshop/

In his programme for the logical foundations of mathematics, Hilbert (with Ackermann and Bernays), combining a tradition coming from formal proofs with another stemming from the generic elements of the philosophy of language, proposed to formulate quantification by means of two dual operators that build a term from a formula (two subnectors): – epsilon x. F(x), a term that has property F iff at least one individual has property F; – tau x. F(x), a term that has property F iff every individual has property F; – epsilon x. F(x) = tau x. ~F(x) (so one of the two suffices, usually epsilon). The deduction rules are simple and natural. They lead to a strict but conservative extension of first-order logic. This is the setting of the first correct proof of Herbrand’s theorem and of a few mathematical results (quantifier elimination). Later, various incorrect results on completeness, normalization, etc. were published. Today these questions remain open, but epsilon and tau are enjoying a strong revival of interest, whether for computer-assisted theorem proving (HOL/Isabelle) or for the semantic analysis of natural language. We will mainly show how the epsilon subnector better matches the structure of propositions expressed in natural language and, time permitting, how it allows us to revisit Aristotle’s square of opposition.

Some references:
David Hilbert, Paul Bernays. Grundlagen der Mathematik, Bd. 2, Springer (1939); French translation by F. Gaillard, E. Guillaume and M. Guillaume, L’Harmattan, 2001.
Jeremy Avigad, Richard Zach (2008). The Epsilon Calculus. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/epsilon-calculus/
Hartley Slater (2005). Epsilon Calculi. The Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/ep-calc/
Christian Retoré. Typed Hilbert Epsilon Operators and the Semantics of Determiner Phrases (Invited Lecture). In G. Morrill, R. Muskens, R. Osswald and F. Richter (eds), 19th Conference on Formal Grammar, LNCS 8612, Springer, 2014, pp. 15-33. http://dx.doi.org/10.1007/978-3-662-44121-3_2

********************************************************************************************************************

May 22, 2015, 14h-16h15, Bat 5, 2/124 : Pole IA applications and tools,

* 14h : Coconut team

“Constraint Programming for Precision Agriculture”, covering: – Wine blending (Nyseos contract) – Technician scheduling for monitoring water stress in vineyards (Fruition Sciences contract) – Selective harvesting (in collaboration with agronomists from SupAgro)

* 14h30 : Smile team

ANR “SIFR: Semantic Indexing of French Biomedical Data Resources” (Clément)

* 15h15 : GraphIK team

Three demonstrations of: -Graal (a KRR platform for existential rules) -Qualinca (ANR project with ABES and INA) -EcoBioCap (EU FP7 project on biodegradable packaging)

* 15h45 : Texte team

ANR “Parsing and synthesis with abstract categorial grammars. From lexicon to discourse” (Christina R.)

********************************************************************************************************************

May 21, 2015, 10:00 – 12:00 Bat 5 – Room3/124, Federico Ulliana (GraphIK)

Title: Selectivity estimation : applications to RDF view-selection and to query-rewriting for RDF and DL-Lite

Abstract: Selectivity estimation is the problem of estimating the size of complex query expressions containing multiple joins and selections, over a given database instance. This is a critical operation for relational systems, which continuously have to pick the “best” evaluation strategy for queries from an exponential space of theoretical solutions. Recent work by our colleagues within the PAGODA project suggested that selectivity estimation can also be helpful when one has to deal with rules and reasoning. I will present a family of practical algorithms for reasoning on RDF data [1,2] and DL-Lite [3] whose computational choices are driven by selectivity estimation.
[1] View selection in Semantic Web databases. François Goasdoué, Konstantinos Karanasos, Julien Leblay, Ioana Manolescu. VLDB 12
[2] Optimizing Reformulation-based Query Answering in RDF. Damian Bursztyn, François Goasdoué, Ioana Manolescu. EDBT 14
[3] Efficient Query Answering in DL-Lite through FOL Reformulation. Damian Bursztyn, François Goasdoué, Ioana Manolescu. Submitted
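
For readers unfamiliar with the underlying idea, here is a back-of-the-envelope Python sketch of textbook selectivity estimation under the usual attribute-independence assumption (the statistics and numbers are invented; the algorithms of [1-3] are of course far more involved):

    stats = {
        "triples": 1_000_000,          # total number of triples in the store
        "distinct_predicates": 500,
        "distinct_subjects": 80_000,
    }

    def estimate_pattern(bound_predicate=True):
        # Estimated cardinality of one triple pattern such as ?s :p ?o.
        selectivity = 1.0 / stats["distinct_predicates"] if bound_predicate else 1.0
        return stats["triples"] * selectivity

    def estimate_join(card_left, card_right, distinct_join_values):
        # Classic equi-join estimate: |L| * |R| / number of distinct join values.
        return card_left * card_right / distinct_join_values

    p1 = estimate_pattern()    # about 2,000 triples per bound predicate
    p2 = estimate_pattern()
    print(estimate_join(p1, p2, stats["distinct_subjects"]))   # about 50 expected matches

Such estimates are what allow a reformulation-based engine to decide, for instance, which parts of a rewritten query are worth materializing as views.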

********************************************************************************************************************

April 3, 2015, 10:00 AM, Room BAT5-3/124 – Elise Bonzon (University Paris Descartes)

Title: Multiparty argumentative debates

Abstract: Debating agents often have different areas of expertise and conflicting opinions on the subjects under discussion. They are faced with the problem of deciding how to contribute to the current state of the debate in order to satisfy their personal goals. In this talk, we will present some multiparty argumentative debate protocols, and we will study a number of strategies based on the concept of minimal changes to the current state of the debate.

Bio: http://www.math-info.univ-paris5.fr/~bonzon/

********************************************************************************************************************

March 6, 2015, 9:30 AM, Room Bat 5, 3.152 – Laura Papaleo (LRI)

Title: On the reasoning and validation of SameAs statements in RDF data

Abstract: In recent years, thanks to the standardization of Semantic Web technologies, we have been experiencing an unprecedented production of data, published online as Linked Data.

In this context, when a typed link is instantiated between two different resources referring to the same real world entity, the usage of owl:sameAs is generally predominant.

However, recent research discussions have shown issues in the use of owl:sameAs.

Problems arise both when sameAs is erroneously discovered automatically by a data-linking tool and when users declare it while meaning something less ‘strict’ than the semantics defined by OWL.

In this seminar, this issue will be further discussed and a method for logically detecting invalid sameAs statements under specific circumstances will be presented, reporting experimental results performed on OAEI datasets. The research activity presented is joint work of Laura Papaleo, Nathalie Pernelle and Fatiha Sais from the University of Paris-Sud, conducted within the QUALINCA project.

********************************************************************************************************************

March 2, 2015, Room Bat 5, 3.152 – Several talks in the context of the PAGODA project meeting:

9h45 – 10h15: Optimizing reformulation-based query answering in RDF, by François Goasdoué (IRISA) – Joint work with Damian Bursztyn and Ioana Manolescu, to appear at EDBT 2015

10h15 – 10h45: Existential rules and transitivity, Jean-Francois Baget (GraphIK) – Joint work with Meghyn Bienvenu, Marie-Laure Mugnier and Swan Rocher

11h – 11h30: Succinctness and complexity of query rewriting in DL-Lite, Meghyn Bienvenu (LRI) – Joint work with Stanislav Kikot and Vladimir V. Podolskii

16h-16h30: Explaining inconsistency-tolerant query answering, Camille Bourgaux (LRI) – Joint work with Meghyn Bienvenu

********************************************************************************************************************

January 23, 2015, 2:00 PM, Bat 4, Seminar Room, Paolo Viappiani (LIP6, France)

Title: Preference Elicitation in Recommender Systems

Abstract: Many applications, such as decision support systems and recommender systems, need to reason about the user’s preferences. However, acquiring the preferences of the user is a challenging task for a number of reasons. First, elicitation of user preferences is usually expensive (w.r.t. time, cognitive effort, etc.). Second, many decision problems have large outcome or decision spaces. Third, users are inherently “noisy” and inconsistent. Since classical elicitation paradigms from decision theory are impractical for modern software applications, a number of approaches for effectively eliciting preferences have recently been proposed in the artificial intelligence and machine learning communities. This talk, after an introduction to recommender systems, will review the most prominent techniques for preference elicitation (also called preference learning in the machine learning community), including learning from ranked data. We will put a particular emphasis on interactive methods that aim at asking the most informative questions in order to make good (or even optimal) recommendations with sparse knowledge of the user’s utility function.

********************************************************************************************************************

January 23, 2015, 11:00, Bt5 room 3/152 Alain Gutierrez (Graphik):

Title: Text construction under constraints

Teaser: This talk will present the implementation of a module that produces text by randomly selecting fragments of text. The tools used include an innovative and original use of indexation methods.

********************************************************************************************************************

January 16, 2015, 2:00 PM, Pole IA Seminar: Nadjib Lazaar (Coconut) and Federico Ulliana (GraphIK), Bat 5, 3.152

14h-14h30: Constraint programs: from acquisition to validation (Nadjib Lazaar)

Abstract: Constraint programs are declarative programs written in high-level constraint languages such as OPL (Optimization Programming Language), CHOCO, GECODE or ZINC. These programs are more and more used in the design of safety-critical or business-critical systems, with applications to, for example, high-frequency trading, air-traffic control, car manufacturing or hardware verification. However, i) writing a constraint program requires a fair amount of expertise and, ii) like any other computer program, constraint programs must be thoroughly tested and corrected before being used in operational mode. My research interests are mainly related to these two questions. In my presentation, I will give an overview of previous, ongoing and future work on constraint acquisition and constraint validation.

14h45-15h15: Modularization of deductive RDF triplestores (Federico Ulliana)

Abstract: Reusing modules extracted from well-established knowledge bases makes it possible to capitalize on the efforts made in recent years by many institutions, organizations, and domain experts to publish quality information and make it available online. Today, this practice is crucial to ensure a coherent development of the Semantic Web.

I will present a novel approach to the extraction of modules of data and knowledge from RDF Triplestores augmented with safe inference rules, à la Datalog. Dealing with a recursive rule language poses challenging issues for defining the module semantics, and also makes module extraction algorithmically unsolvable in some cases. I will present a set of module extraction algorithms compliant with the novel semantics, and experimental results showing the succinctness of the modules extracted with this approach.

********************************************************************************************************************

January 16, 2015, 9:30 AM: Room Bat 5, 3.152, Namrata Patel (GraphIK)

Title: From NL Preference Expressions to Comparative Preference Statements: A Preliminary Study in Eliciting Preferences for Customised Decision Support

Joint work with Souhila Kaci and Violaine Prince

Abstract: Intelligent `services’ are increasingly used on e-commerce platforms to provide assistance to customers. Numerous preference elicitation methods developed in the literature are now employed for this purpose. However, it is commonly known that there is a real bottleneck in preference handling as concerns the elicitation of preferences because it does not cater to the wide range of preference representation languages available. Thus, as a first step in developing a decision-support tool using an AI based on such languages, this paper describes a preliminary study conducted to address this issue. We propose a method of eliciting real-time user preferences expressed in natural language (NL) which can be formally represented using comparative preference statements complying with different semantics, and provide a proof of concept to demonstrate its feasibility. Since we develop NL resources to detect preference semantics, we also make a comparative study with existing resources to underline the peculiarities of our model.

********************************************************************************************************************

December 12, 2014, 9h30, Room 3.152 (bat 5)

Half Day on Reasoning with Inconsistency:

– Inconsistency tolerant Semantics by Swan Rocher (GraphIK)

– Explanatory Dialogues in Consistent Query Answering by Abdallah Arioua and Madalina Croitoru (GraphIK)

********************************************************************************************************************

November 21, 2014, 2:00 PM, LIRMM seminar room (building 4), PhD defense, Léa Guizol (GraphIK)

Title : “Partitioning semantics for entity resolution and link repairs in bibliographic knowledge bases” (in French)

Abstract: We propose a qualitative entity resolution approach for repairing links in a bibliographic knowledge base. Our research question is: how can we detect and repair erroneous links in a bibliographic knowledge base using qualitative methods? The proposed approach is decomposed into two major parts. The first contribution consists of a partitioning semantics using symbolic criteria, used in order to detect erroneous links. The second consists of a repair algorithm which restores link quality. We implemented our approach and proposed a qualitative and quantitative evaluation of the partitioning semantics, as well as proving certain properties of the repair algorithms.

********************************************************************************************************************

November 7, 2014, 2:00 PM, Habilitation à Diriger des Recherches (HDR) defense – Madalina Croitoru

Title: “Reasoning about Knowledge as Graphs: Practical Artificial Intelligence Applications”

Abstract: During my PhD and throughout my postdoctoral and lectureship positions, I have investigated graph-based knowledge representation and reasoning languages applied to different Artificial Intelligence fields. Syntactically, the objects I am interested in are graphs. The graphs are endowed with semantics that are sound and complete with respect to the syntactic graph operations. I will detail work done in the field of Conceptual Graphs, network-flow representations for preferences (coalition formation, combinatorial auctions) and argumentation graphs. I will also present our work on inconsistency-tolerant reasoning and some practical applications (agronomy, bibliographic data) that have benefitted from the formalisms described above.

Résumé: Pendant mon doctorat, puis tout au long de mes expériences de post-doctorant et Maître de conférences, j’ai étudié les représentations à base de graphes et les langages de raisonnement appliqués à différents domaines de l’Intelligence Artificielle. D’un point de vue syntaxique, les objets auxquels je m’intéresse sont des graphes. Ces graphes sont dotés d’une sémantique correcte et complète vis-à-vis des opérations syntaxiques correspondantes sur les graphes.

Je détaillerai les travaux menés dans le domaine des Graphes Conceptuels, des réseaux de flux pour la représentation des préférences (formation de coalitions, enchères combinatoires) et des graphes d’argumentation. Je présenterai également nos travaux dans le cadre du raisonnement en présence d’inconsistances et décrirai les applications concrètes (agronomie, données bibliographiques) qui ont bénéficié des formalismes précédemment exposés.

********************************************************************************************************************

November 7, 2014, 9:30, Salle 3.152 (Bat 5, Campus St Priest):

An update and hands-on session on our software: Cogui, Graal, Hyenna. Bring your own machine!

********************************************************************************************************************

October 24, 2014, 1:45 PM, PhD defense of Mélanie König, LIRMM seminar room (building 4)

Titre : “Interrogation de grandes bases de connaissances : algorithmes de réécriture de requêtes conjonctives en présence de règles existentielles”

RESUME : La problématique d’interrogation d’une base de données en présence d’une ontologie (OBQA pour “Ontology-based Query Answering”) consiste à prendre en compte des connaissances générales, typiquement une ontologie de domaine, lors de l’évaluation d’une requête. Dans le cadre de cette thèse, ces connaissances sont représentées par des formules de la logique du premier ordre appelées “règles existentielles”. Les règles existentielles sont aussi connues sous le nom de règles Datalog+/- ou “tuple-generating dependencies”. Nous considérons une approche couramment utilisée, qui consiste à réécrire la requête en exploitant les règles de façon à se ramener à un problème classique d’interrogation d’une base de données. Nous définissons un cadre théorique d’étude des algorithmes de réécriture d’une requête conjonctive en une union de requêtes conjonctives, accompagné d’un algorithme de réécriture générique, prenant en paramètre un opérateur de réécriture. Nous proposons ensuite plusieurs opérateurs de réécriture et  développons différentes optimisations, que nous évaluons sur des benchmarks du domaine.

ABSTRACT: The issue of querying a knowledge base, also called Ontology-based Query Answering (OBQA), consists of taking into account general knowledge, typically a domain ontology, when evaluating queries. In this thesis, ontological knowledge is represented by first-order logical formulas, called existential rules. Existential rules are also known as Datalog+/- and tuple-generating dependencies. We adopt a well-known approach, which consists of rewriting the query with the rules so as to reduce the problem to a classical database query answering problem. We define a theoretical framework for studying algorithms that rewrite a conjunctive query into a union of conjunctive queries, as well as a generic rewriting algorithm that takes a rewriting operator as a parameter. We then propose several rewriting operators and develop various optimisations, which we evaluate on benchmarks of the domain.
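
The abstract above describes a rewriting procedure parameterised by an operator; the following Python sketch shows the general shape of such a breadth-first rewriting loop on a toy string-based encoding of conjunctive queries (the operator below only handles atomic rules written as (body, head) pairs and is purely illustrative, not one of the operators proposed in the thesis):

    def rewrite(query, rules, one_step):
        # Return the union of conjunctive queries obtained by closing {query} under one_step.
        result, frontier = {query}, {query}
        while frontier:
            new = set()
            for q in frontier:
                for q2 in one_step(q, rules):       # the rewriting operator
                    if q2 not in result:
                        new.add(q2)
            result |= new
            frontier = new
        return result

    def toy_operator(query, rules):
        # Replace one atom's predicate using an atomic rule body -> head.
        out = set()
        atoms = query.split(" & ")
        for i, atom in enumerate(atoms):
            pred = atom.split("(")[0]
            for body, head in rules:
                if head == pred:
                    out.add(" & ".join(atoms[:i] + [atom.replace(pred, body, 1)] + atoms[i+1:]))
        return out

    print(rewrite("Person(x) & worksIn(x,y)", [("Student", "Person")], toy_operator))

Such a loop terminates only when the rule set admits a finite set of rewritings (up to the redundancy pruning performed by the operator), which is the property rewriting-based approaches rely on.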

********************************************************************************************************************

October 19, 2014, 9:30 AM, room 1.4, building 4: morning session on existential rules

– Introduction to existential rules (Ontology-based Query Answering with Existential Rules: Basic Notions), Marie-Laure Mugnier
– Answer Set Programming and existential rules with stable negation, Jean-François Baget

********************************************************************************************************************

September 29, 2014 – Pole IA seminar – 2:00 PM, room E.324, Bt 4: Jérôme Lang (LAMSADE)

Title: An introduction to computational social choice

Social choice theory deals with the design and analysis of methods for collective decision making; it is an important branch of mathematical economics. Typical examples of collective decision making include: electing political representatives; everyday votes (for instance, a group of friends deciding on a restaurant); fair division of resources (for instance, splitting goods between ex-spouses in a divorce settlement, or assigning classes and time slots in a high school); reaching a consensus on a verdict in a jury. Until recently, social choice theorists paid little attention to algorithmic questions. This is where computer science, and more specifically artificial intelligence, comes into play. Over the last ten years or so, a research field at the meeting point of social choice and computer science, called “computational social choice”, has thus been developing. Two research directions can be distinguished. The first imports concepts and procedures from social choice theory to solve problems arising in computer science applications, notably aggregation procedures for ranking web pages. The second (and the more important) uses notions and methods from computer science to solve or rethink complex collective decision-making problems. The talk will survey some of the key topics of computational social choice, in particular the exact or approximate computation of complex voting rules, voting over combinatorial domains, computational barriers to strategic behaviour, and the design of interaction protocols for collective decision making.

********************************************************************************************************************

June 20th, 2014, 11:00 AM, Seminar Room, Lirmm, Slawek Staworko (INRIA Lille)

Title: Prioritized Repairing and Consistent Query Answering in Relational Databases

Abstract: I will present my work on incorporating preferences on how to resolve conflicts into the framework of consistent query answers (CQA) for relational databases. The standard framework defines consistent answers to a query essentially by evaluating the query in every possible repair of the given (inconsistent) database. A repair is a minimally-different consistent database, which corresponds to a particular choice of resolution of all conflicts. However, in many scenarios, we may have a preference on how certain conflicts should be resolved. For instance, when conflicts arise from integrating a number of consistent independent databases, we may have a measure of confidence in each database and wish to use it to influence the conflict resolution process. This should ideally lead to a smaller number of more refined repairs, and consequently, more informative consistent answers to the query. The framework we propose allows the preferences on resolving conflicts to be expressed as a priority, a binary relation on conflicting tuples indicating the preferred resolution of every conflict. The main purpose of the priority relation is to define a preference relation on repairs, ideally allowing us to identify the most preferred repairs. This task is far from trivial because: 1) lifting a relation on objects to a relation on sets of objects is a (well-studied) problem with a number of possible solutions, and 2) the conflict resolution process is a convoluted task: resolving one conflict in one particular way may influence the number of possible ways other conflicts can be resolved. We formalise the problem of lifting priorities on conflicts to a preference relation on repairs and identify a number of desirable properties that a suitable solution should satisfy. We then propose a number of solutions that vary in the strength of preference enforcement in the conflict resolution, and identify relationships among the proposed solutions. Finally, we analyse the computational implications of extending the framework of CQA with preferences.
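
As a minimal, purely illustrative Python sketch of the idea (two invented conflicting tuples and a hand-written priority, nothing like the full framework of the talk), a priority on conflicting tuples can be used to prune the set of repairs considered for consistent query answering:

    from itertools import combinations

    facts = {"salary(ann,100)", "salary(ann,200)"}
    conflicts = [{"salary(ann,100)", "salary(ann,200)"}]
    # Priority: the source of the second tuple is trusted more than the first.
    prefers = {("salary(ann,200)", "salary(ann,100)")}

    def consistent(s):
        return not any(c <= s for c in conflicts)

    def repairs():
        # Maximal consistent subsets of the fact base (naive enumeration).
        subsets = [set(c) for r in range(len(facts), -1, -1)
                   for c in combinations(facts, r) if consistent(set(c))]
        return [s for s in subsets if not any(s < t for t in subsets)]

    def preferred(repair):
        # Keep a repair only if no discarded fact is strictly preferred to a kept fact.
        dropped = facts - repair
        return not any((d, k) in prefers for d in dropped for k in repair)

    best = [r for r in repairs() if preferred(r)]
    print(best)   # only the repair keeping salary(ann,200) survives

Consistent answers would then be computed over this reduced set of repairs, which is why they become more informative than in the preference-free framework.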

********************************************************************************************************************

April 4, 2014, 10:30 AM, E323, LIRMM, Federico Ulliana (INRIA Grenoble)

Title : Deductive RDF Triplestores : domain-specific applications and bounded-size module extraction

Ontologies have become a powerful tool for experts aiming to transcribe domain-specific knowledge, so as to make it understandable and processable by machines as well. This is witnessed, for instance, by the number of ontologies covering life-science domains that have been published on the Semantic Web in recent years. In this talk, I will present some practical and theoretical aspects of ontology development motivated by the My Corporis Fabrica system (MyCF): an ontology-based tool blending anatomy with 3D simulation, which has been developed by our team. The ontology at the core of MyCF is expressed as an RDF triplestore equipped with safe Datalog rules, which we call a Deductive RDF Triplestore. This formalism extends RDF(S) towards some OWL constraints and domain-specific rules, which are crucial for developing domain-specific applications. I will briefly introduce the MyCF system and its ontology in light of Deductive RDF Triplestores. Then, I will present a modularity study for this formalism, showing how to build novel Semantic Web applications by extracting and personalizing bounded-size fragments of pre-existing ontologies, whilst ensuring the logical robustness of the applications.

********************************************************************************************************************

February 21, 2014, 2:00 PM – Pole IA seminar – LIRMM seminar room, Lakhdar Saïs (CRIL, France)

Title: Declarative approaches to data mining

Abstract: This talk gives an overview of our contributions to data mining and, more generally, to the cross-fertilization between data mining, constraint programming and propositional satisfiability (http://www.cril.univ-artois.fr/decMining/). This work was supported by the ANR DAG project, “Declarative approaches for the enumeration of interesting patterns” (http://liris.cnrs.fr/dag/).

********************************************************************************************************************

February 14, 2014, 9:30 AM, Room E324, LIRMM, Pierre Bisquert:

Topic: argumentation

Title: A study of change in argumentation.

Argumentation, in the artificial intelligence sense, is a formalism for reasoning from incomplete and/or contradictory information and for modelling an exchange of arguments between several agents. An argumentation system generally consists of a set of arguments interacting with one another, from which one or more coherent points of view can be extracted.
In this thesis, we work within abstract argumentation, in which arguments are handled as abstract entities whose meaning is unknown and in which the interactions represent conflicts. This allows us to focus on the particular issue of dynamics in abstract argumentation systems, i.e. the changes that can affect such systems, notably in the course of a dialogue. We begin by justifying the interest of such a formal framework, and then address the how and the why of change in abstract argumentation. The how is approached by establishing a list of the modifications an argumentation system can undergo and by studying under which conditions they can occur. The why is addressed by introducing the notion of a goal motivating a change and the choice of the best change to make in order to satisfy a goal, taking into account constraints on the agent to be convinced. Finally, we make our study concrete by proposing a software tool implementing the introduced notions, and we study its performance.

********************************************************************************************************************

February 7, 2014, 3:00 PM, LIRMM E323, Aymeric Ledorze (LERIA)

Topic: cognitive maps

Title: Validation, synthesis and parameterization of cognitive maps

Abstract: Cognitive maps are a graphical model for visually representing influences between concepts, each influence carrying a value that quantifies it. This model is used for decision support. To this end, it provides an operation, called propagated influence, that computes the global influence of any concept of the map on any other.

Our first contribution is to study the semantics that can be given to a cognitive map by comparing this model with other knowledge representation models, notably Bayesian networks. From this study, we define the model of probabilistic cognitive maps, in which influence values are akin to probabilities.

Our second contribution is to build a single map by synthesizing a set of maps, each built by a different designer. To this end, we use a taxonomy of concepts that expresses the links between the concepts used in the different maps, and a preference order over the designers to handle contradictory information.

Our third contribution is to assess the quality of a cognitive map by proposing various quality criteria in order to validate it. These criteria notably make it possible to ensure that the propagated influence of one concept on another is not ambiguous.

********************************************************************************************************************

January 28, 2014, 10:00 AM, LIRMM E323, Florent Domenach (Nicosia, Cyprus): (joint MAREL/GraphIK seminar)

Topic: formal concept analysis, applied to the analysis of semantic annotations.

Formal Concept Analysis (FCA) is a method for deriving a concept hierarchy from a collection of objects and their properties. We will first show how FCA can be used to structure data, in the context of an ongoing study on the analysis of annotations of biomedical documents, by incorporating expert knowledge. This knowledge is represented by ontologies, which are incorporated using pattern structures. We will then propose an approach based on various similarity measures to reduce the potentially exponential number of concepts.

Florent Domenach is an associate professor at the University of Nicosia. After a PhD at Paris 1 Panthéon-Sorbonne, he did a postdoc at the University of Tsukuba, Japan. His research interests include formal concept analysis, cryptomorphisms of closure systems, and axiomatic consensus problems.

********************************************************************************************************************

January 24, 2014, Annual Team Seminar

********************************************************************************************************************

Camille Pradel (IRIT): Tuesday, January 21, 2014, 10:00 AM, LIRMM E324

Title: From a high-level language to graph queries for querying the Semantic Web

Abstract: Graph models are good candidates for representing knowledge on the Web, where everything is graphs: from the graph of machines connected via the Internet to Tim Berners-Lee’s “Giant Global Graph”, via RDF triples and ontologies.

In this context, the crucial problem of ontological query answering is the following: does a knowledge base composed of a terminological part and an assertional part entail the query; in other words, is there an answer to the question? In recent years, description logics have been proposed in which the expressivity of the ontology is restricted so as to make query answering tractable (the DL-Lite and EL families). OWL 2 restricts OWL-DL in this direction, building on these families.

We work in the context of graph-based formalisms for representing (RDF, RDFS and OWL) and querying (SPARQL) knowledge. While graph-based query languages are presented by their promoters as natural and intuitive, users do not think of their queries in terms of graphs. Users want simple languages, close to natural language, or even limited to keywords. We propose a generic way to transform a query expressed in natural language into a query expressed in the graph language SPARQL, using query patterns. The beginning of this work coincides with the current W3C efforts to prepare a new version of RDF, as well as with the standardization process of SPARQL 1.1, which handles entailment in queries.

********************************************************************************************************************

January 13, 2014, 2:00 PM, PhD defense of Bruno Paiva Da Silva, Seminar Room

Title: Data Access over Large Semi-Structured Databases

RESUME : Accès aux données en présence d’ontologies est un problème qui vise à répondre à des requêtes conjonctives en utilisant des inférences rendues possibles par une ontologie. Les deux grandes familles de langages utilisées pour coder une telle ontologie sont les logiques de description et les langages à base de règles. L’émergence de très grandes bases de connaissances, souvent peu structurées, complexifie aujourd’hui ce problème, d’autant plus que les données peuvent être stockées sous de nombreux formats. Nous avons ainsi développé ALASKA, une architecture logicielle générique dédiée à ce problème. ALASKA permet la manipulation (insertion, interrogation) des données indépendamment du système de stockage utilisé, et peut donc être vu comme la couche abstraite requise pour notre problème. Nous avons utilisé ALASKA pour tester l’efficacité de différents systèmes de stockages (bases de données relationnelles, bases de graphes, triple stores), que ce soit quant à la rapidité de l’insertion de nouvelles connaissances dans une base ou quant à l’efficacité des opérations élémentaires requises par les systèmes de requêtage.

ABSTRACT: Ontology-Based Data Access is a problem aiming at answering conjunctive queries over facts enriched by ontological information. There are today two main families of languages for encoding such ontological content: description logics and rule-based languages. The emergence of very large, often unstructured, knowledge bases has added a further challenge to the problem. In this work, we study the elementary operations needed to set up the storage and querying foundations of a rule-based reasoning system. The study of different storage solutions led us to develop ALASKA, a generic, logic-based architecture regrouping different storage methods and enabling one to write reasoning programs generically. This thesis features the design and construction of this architecture, and its use to assess the efficiency of storage methods for the key operations of RBDA: storage on disk and entailment computation.
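
As a very rough illustration of the storage-abstraction idea behind such an architecture (the interface and class names below are invented for this sketch and are not ALASKA's actual API), reasoning code can be written once against an abstract fact store and reused over several back-ends:

    from abc import ABC, abstractmethod

    class FactStore(ABC):
        @abstractmethod
        def add(self, subject, predicate, obj): ...
        @abstractmethod
        def match(self, subject=None, predicate=None, obj=None): ...

    class InMemoryStore(FactStore):
        def __init__(self):
            self.triples = []
        def add(self, subject, predicate, obj):
            self.triples.append((subject, predicate, obj))
        def match(self, subject=None, predicate=None, obj=None):
            # Return every stored triple compatible with the given partial pattern.
            return [t for t in self.triples
                    if (subject is None or t[0] == subject)
                    and (predicate is None or t[1] == predicate)
                    and (obj is None or t[2] == obj)]

    # A relational or graph-database back-end would implement the same two methods,
    # so forward chaining and query evaluation need not know where the facts live.
    store = InMemoryStore()
    store.add("alice", "worksIn", "graphik")
    print(store.match(predicate="worksIn"))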

********************************************************************************************************************

December 13, 2013, 10:00 AM, Room E323, Maxime Lefrançois (INRIA Sophia Antipolis)

Speaker : Maxime Lefrançois (ATER, Wimmics, Inria Sophia-Antipolis)

Title : Representation of Linguistic Predicates and Formal Lexicographic Definitions with the Unit Graphs KR Formalism

Abstract: At first sight, the Semantic Web formalisms and the Conceptual Graphs formalism both seem interesting for the representation of linguistic knowledge. Yet, when it comes to Meaning-Text Theory (MTT) and its highly descriptive lexicon, both KR formalisms appear to be inadequate. In fact, linguistic predicates, the basic pieces of linguistic knowledge, can be considered as concepts, but can also have a so-called actantial structure with obligatory or optional actant slots. During my thesis, we drew our inspiration from Conceptual Graphs to introduce the new so-called Unit Graphs KR framework, which allows for the representation, manipulation, querying of, and reasoning over linguistic predicates and formal lexicographic definitions. I will present the rationale, fundamental concepts and current outcome of the Unit Graphs framework.

Reference: Lefrançois, M., Gandon, F., Rationale, Concepts, and Current Outcome of the Unit Graphs Framework, RANLP, 2013

*******************************************************************************************************************

December 5, 2013, 2:15 PM: Rallou Thomopoulos HDR defense, Amphitheatre 206, Montpellier SupAgro / INRA campus, 2 place Viala

In experimental sciences such as food science, data play an essential role, since domain theories are founded on experimental data and on their exploitation and analysis. However, the state of the art shows that the available experimental data are often partial, scattered across various media, or lacking an established underlying mathematical model.

Another source of information is also available: expert knowledge, which is, however, not always formalized in written form. Expert knowledge may express different points of view, which can conflict when they pursue diverging objectives. A major challenge is therefore to integrate these data and this knowledge, and to develop methods for using them in decision support. This presentation describes a set of complementary strategies and methods defined and developed to address this problem as a whole.

********************************************************************************************************************

November 28, 2013, 10:00, LIRMM seminar Room

Title : Theory and practice of ontology-based data access with OWL 2 QL

Speakers : Roman Kontchakov and Michael Zakharyaschev (Birkbeck College, London)

Abstract : Ontology-based data access (OBDA) is widely regarded as a key ingredient for the new generation of information systems. In the OBDA paradigm, an ontology defines a high-level global schema of the data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries and ontologies into the vocabulary of the data sources and then delegates the actual query evaluation to a suitable query answering system, such as a relational database management system, an RDF store, or a datalog engine.
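As a minimal illustration of this rewriting step (a textbook-style example of ours, not taken from the talk): given the OWL 2 QL axiom

    Professor ⊑ Teacher

and the user query q(x) ← Teacher(x), the rewriting is the union of queries

    q(x) ← Teacher(x)
    q(x) ← Professor(x)

which no longer refers to the ontology and can be handed directly to the underlying database, e.g. as a SQL UNION over the Teacher and Professor tables.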

This talk will focus on OBDA with the ontology language OWL 2 QL, one of three profiles of the W3C standard Web Ontology Language OWL 2, and relational databases. It will cover the following topics:

– the succinctness problem for conjunctive query rewriting over OWL 2 QL ontologies;

– different architectures of OBDA systems;

– practical OBDA with Ontop (http://ontop.inf.unibz.it).

********************************************************************************************************************

October 24, 2013, 3:00 PM: Michael Thomazo PhD defense, LIRMM Seminar Room

Title : Conjunctive Query Answering Under Existential Rules – Decidability, Complexity, and Algorithms

Abstract: Ontology-based data access (OBDA) aims at enriching query answering by taking general background knowledge into account when evaluating queries. This background knowledge is represented by means of an ontology, which is expressed in this thesis by a very expressive class of first-order formulas called existential rules (also known as tuple-generating dependencies and Datalog+/-). The high expressivity of this formalism results in the undecidability of query answering, and numerous decidable classes (that is, restrictions on the sets of existential rules) have been proposed in the literature. The contribution of this thesis is two-fold: first, we propose a unified view of a large part of these classes, together with a complexity analysis and a worst-case optimal algorithm for the introduced generic class. Second, we consider the popular approach of query rewriting, and propose a generic algorithm that overcomes trivial causes of combinatorial explosion which make classical approaches inapplicable.
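For readers unfamiliar with the formalism, a small example of an existential rule (our example, not one from the thesis):

    ∀x (Human(x) → ∃y (parentOf(y, x) ∧ Human(y)))

Applying such rules forward (the chase) keeps introducing new, unnamed individuals; this ability to invent values is what may prevent the chase from terminating and is at the root of the undecidability mentioned above, which the decidable classes studied in the thesis restrict in various ways.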

*******************************************************************************************************************

October 24, 2013, 11:30, LIRMM seminar Room

Title : Robust Constraint Satisfaction and Local Hidden Variables in Quantum Mechanics by Georg Gottlob (Univ. Oxford)

Abstract : Motivated by considerations in quantum mechanics, we introduce the class of robust constraint satisfaction problems in which the question is whether every partial assignment of a certain length can be extended to a solution, provided the partial assignment does not violate any of the constraints of the given instance.
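A brute-force Python sketch of this robustness question, with an encoding and names of our own choosing (purely illustrative, not the constructions from the paper):

    from itertools import combinations, product

    def consistent(assignment, constraints):
        # A constraint is a pair (scope, allowed): scope is a tuple of variables and
        # allowed is a set of value tuples. A constraint is only checked once its
        # whole scope is assigned, so a partial assignment never violates it otherwise.
        for scope, allowed in constraints:
            if all(v in assignment for v in scope):
                if tuple(assignment[v] for v in scope) not in allowed:
                    return False
        return True

    def robustly_satisfiable(variables, domain, constraints, k):
        # True iff every consistent partial assignment on k variables extends
        # to a consistent total assignment.
        def extends(partial):
            free = [v for v in variables if v not in partial]
            for values in product(domain, repeat=len(free)):
                total = {**partial, **dict(zip(free, values))}
                if consistent(total, constraints):
                    return True
            return False
        for chosen in combinations(variables, k):
            for values in product(domain, repeat=k):
                partial = dict(zip(chosen, values))
                if consistent(partial, constraints) and not extends(partial):
                    return False
        return True

    # A triangle with two colours: the partial assignment {0: 'r'} is consistent
    # but cannot be extended to a proper 2-colouring, so the answer is False.
    neq = {('r', 'g'), ('g', 'r')}
    triangle = [((0, 1), neq), ((1, 2), neq), ((0, 2), neq)]
    print(robustly_satisfiable([0, 1, 2], ['r', 'g'], triangle, 1))   # False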

We explore the complexity of specific robust colorability and robust satisfiability problems, and show that they are NP-complete. We then use these results to establish the computational intractability of detecting local hidden-variable models in quantum mechanics.

A paper on these results was published at IJCAI 2013. We also report on a more recent result concerning the complexity of detecting hidden variables in possibilistic models with two experimenters and Boolean outcomes. In fact, this problem is not only tractable, but actually first-order definable (FO), and thus decidable in AC^0. Joint work with Samson Abramsky and Phokion Kolaitis.

*******************************************************************************************************************

October 24, 2013, 10:15 AM : Carsten Lutz, Univ. of Bremen, Lirmm Seminar Room

Title : Ontology-Based Data Access : A Study Through Disjunctive Datalog, CSP, and MMSNP by Carsten Lutz (Univ. of Bremen)

Abstract : Ontology-based data access is concerned with querying incomplete data sources in the presence of domain-specific knowledge provided by an ontology. A central notion in this setting is that of an ontology-mediated query, which is a database query coupled with an ontology.

In this talk, we consider several classes of ontology-mediated queries where the database queries are given as some form of conjunctive query and the ontologies are formulated in description logics or other relevant fragments of first-order logic, such as the guarded fragment and the unary-negation fragment. The aim is to characterize ontology-mediated queries in terms of more conventional database query languages, to study their descriptive complexity, and to derive relevant new results for ontology-based data access. Specifically, we show that ontology-mediated queries have the same expressive power as natural fragments of disjunctive datalog and establish close connections to constraint satisfaction problems (CSP) and their logical generalization, MMSNP formulas. These connections are then exploited to obtain new results regarding first-order rewritability, datalog rewritability, and the query containment problem for ontology-mediated queries, and to study P/NP dichotomies for them.
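A standard illustration of the CSP connection (a textbook-style example, not necessarily the one used in the talk): couple the ontology consisting of the single axiom

    Vertex ⊑ Red ⊔ Green

with the Boolean union of conjunctive queries asking for a monochromatic edge,

    ∃x∃y (edge(x,y) ∧ Red(x) ∧ Red(y))   ∨   ∃x∃y (edge(x,y) ∧ Green(x) ∧ Green(y)).

Over a database listing the vertices and edges of a graph, the certain answer to this ontology-mediated query is "yes" exactly when the graph is not 2-colourable, i.e. evaluating it amounts to the complement of a classical constraint satisfaction problem.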

********************************************************************************************************************

September 13, 2013: Qualinca meeting at LIRMM (http://www.lirmm.fr/qualinca/), Seminar room

********************************************************************************************************************

June 7, 2013, Fabien (and Jean-François) will talk to us about the extension of Answer Set Programming to existential rules (ongoing work within the ASPIQ project).

********************************************************************************************************************

May 30, 2013, Michel Leclère will present current work on key discovery in the Web of Data (joint work, Qualinca project)

********************************************************************************************************************

May 29, 2013, Georg Gottlob, University of Oxford

May 29, 11 AM, LIRMM seminar room

Title: The Hypergraph Transversal Problem: Applications, Complexity, and Tractable Cases

Abstract : Generating the minimal transversals of a hypergraph, or, equivalently, dualizing a monotone DNF, is an important problem with many applications in Computer Science. In this talk I will address this problem and its decisional variant, the recognition of the transversal hypergraph of another hypergraph. I will survey some results on problems known to be related to computing the transversal hypergraph, focusing on problems in propositional logic, databases, and data mining. I will then address the computational complexity of recognizing the transversal hypergraph. While the tractability and exact complexity of this problem have been open for over 30 years, it is known that the decision problem can be solved in quasipolynomial time, and in polynomial time with limited nondeterminism. Regarding space complexity, it was recently shown that the problem is in quadratic logspace. I will also discuss large classes of instances on which the problem is known to be tractable.
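As a concrete (and deliberately naive) illustration of the problem, here is a brute-force Python enumeration of minimal transversals; the function name and encoding are ours, and it is of course exponential, unlike the algorithms discussed in the talk.

    from itertools import combinations

    def minimal_transversals(vertices, hyperedges):
        # Enumerate candidate vertex sets by increasing size; keep those that
        # intersect every hyperedge and contain no previously found transversal.
        transversals = []
        for k in range(len(vertices) + 1):
            for cand in combinations(vertices, k):
                s = set(cand)
                if all(s & e for e in hyperedges) and not any(t <= s for t in transversals):
                    transversals.append(s)
        return transversals

    # The hypergraph {{a,b}, {b,c}} has minimal transversals {b} and {a,c};
    # equivalently, dualizing the monotone DNF (a ∧ b) ∨ (b ∧ c) yields b ∨ (a ∧ c).
    print(minimal_transversals(['a', 'b', 'c'], [{'a', 'b'}, {'b', 'c'}]))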

Speaker Bio: Georg Gottlob is a Professor of Informatics at Oxford University, a Fellow of St John’s College, Oxford, and an Adjunct Professor at TU Wien. His interests include data extraction, database theory, graph decomposition techniques, AI, knowledge representation, logic and complexity. Gottlob has received the Wittgenstein Award from the Austrian National Science Fund, is an ACM Fellow, an ECCAI Fellow, a Fellow of the Royal Society, and a member of the Austrian Academy of Sciences, the German National Academy of Sciences, and the Academia Europaea. He chaired the Program Committees of IJCAI 2003 and ACM PODS 2000. He is the main founder of Lixto, a company that provides tools and services for web data extraction. Gottlob was recently awarded an ERC Advanced Investigator’s Grant for the project “DIADEM: Domain-centric Intelligent Automated Data Extraction Methodology”.

********************************************************************************************************************

May 17, 2013: Meghyn Bienvenu (LRI, Univ. Paris Sud) and Jean-François Baget (GraphIK) will present a paper by Despoina Magka, Markus Krötzsch, and Ian Horrocks: “Nonmonotonic Existential Rules for Non-Tree-Shaped Ontological Modelling” (IJCAI 2013)

********************************************************************************************************************

April 17, 2013, Room 1.4, 9:30 AM, LIRMM

Meghyn will present on the board one of her IJCAI papers: “first-order rewritability in Horn description logics”

********************************************************************************************************************

April 8, 2013, 3 o’clock, LIRMM E324

The Impact of Disjunction on Query Answering Under Guarded-based Existential Rules :

We study the complexity of conjunctive query answering under (weakly-)(frontier-)guarded disjunctive existential rules, i.e., existential rules extended with disjunction. Our main result states that conjunctive query answering under a fixed set of disjunctive inclusion dependencies (IDs) is 2EXPTIME-hard. This quite surprising result, together with a 2EXPTIME upper bound for weakly-frontier-guarded disjunctive rules obtained by exploiting recent results on guarded negation first-order logic, gives us a complete picture of the computational complexity of our problem. We also consider a natural subclass of disjunctive IDs, namely frontier-one rules (only one variable is propagated), for which the combined complexity decreases to EXPTIME. Finally, we show that frontier-guarded rules, combined with negative constraints, are strictly more expressive than DL-Lite^H_bool, one of the most expressive languages of the DL-Lite family. We also show that query answering under DL-Lite^H_bool is 2EXPTIME-complete in combined complexity.
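For illustration (our own example, not one from the paper): the rule

    Person(x) → Employee(x) ∨ ∃y worksFor(x, y)

is a disjunctive inclusion dependency: a single body atom and a disjunction of atoms in the head, possibly with existentially quantified variables. It is moreover frontier-one, since x is the only variable shared between the body and the head, i.e. the only variable that is propagated.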

********************************************************************************************************************

April 3, 2013, 2 PM, E.324: Christophe Lecoutre — CRIL
Efficient algorithms for solving constraint problems (Algorithmes efficaces pour la résolution de problèmes sous contraintes)
Abstract : http://www.lirmm.fr/xml/fr/0259-08.html

********************************************************************************************************************

March 25, 2013, 10 AM, Seminar room, IA Group Seminar (pôle IA)
Wojtek Jamroga — Université du Luxembourg
Some Funny Complexity Results for Judgment Aggregation
Abstract: http://www.lirmm.fr/xml/fr/0259-08.html

********************************************************************************************************************

March 22, 2013, IA Group Seminar (pôle IA)
Richard Booth — Université du Luxembourg
Quantifying disagreement in argument-based reasoning
Abstract: http://www.lirmm.fr/xml/fr/0259-08.html

********************************************************************************************************************

March 15, 2013, 2:30 PM, Seminar room, IA Group Seminar (pôle IA): Felip Manyà — CSIC Barcelona

Encoding and Solving Optimization Problems with MinSAT and MaxSAT
Abstract: http://www.lirmm.fr/xml/fr/0259-08.html

********************************************************************************************************************

February 22, 2013, 9:30 AM, E324: What Can Argumentation Do For Inconsistent Ontology Query Answering? by Madalina Croitoru

Abstract: The area of inconsistent ontological knowledge base query answering studies the problem of inferring from an inconsistent ontology. To deal with such a situation, different semantics have been defined in the literature (e.g. AR, IAR, ICR). Argumentation theory can also be used to draw conclusions under inconsistency. Given a set of arguments and attacks between them, one applies a particular semantics (e.g. stable, preferred, grounded) to calculate the sets of accepted arguments and conclusions. However, it is not clear what the similarities and differences are between the semantics used in ontological knowledge base query answering and those from argumentation theory.

This talk provides the answer to that question. Namely, we prove that: (1) sceptical acceptance under stable and preferred semantics corresponds to ICR semantics; (2) universal acceptance under stable and preferred semantics corresponds to AR semantics; (3) acceptance under grounded semantics corresponds to IAR semantics. We also prove that the argumentation framework we define satisfies the rationality postulates (e.g. consistency, closure).
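To see how the three ontology-based semantics differ on a concrete case, consider a small standard example (ours, not necessarily the one used in the talk): take the ontology {P ⊑ R, Q ⊑ R, P ⊓ Q ⊑ ⊥} and the inconsistent fact base {P(a), Q(a)}. Its repairs (maximal consistent subsets of the facts) are {P(a)} and {Q(a)}. The query R(a) is entailed under AR semantics, since it holds in every repair; it is also entailed under ICR semantics, since R(a) belongs to the closure of each repair; but it is not entailed under IAR semantics, since the intersection of the repairs is empty.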

********************************************************************************************************************

February 15, 2013: Qualinca meeting at LIRMM http://www.lirmm.fr/qualinca/

********************************************************************************************************************

February, 8, 2013: Mini workshop Coconut/GraphIK
Ask Jean-François Baget or Rémi Coletta for the program and the venue

********************************************************************************************************************

January, 18, 2013: “Compact rewritings for existential rules” by Michael Thomazo
10:30 AM, E223

 
