Links' Seminars and Public Events |
2022 | |
---|---|
Fri 25th Feb 11:00 am 12:00 pm | Seminar Nico |
Fri 28th Jan 11:00 am 12:00 pm | Alexandre Vigny (visio) Title: Separator logic, expressive power and algorithmic applications Abstract: First-order logic (FO) can express many algorithmic problems on graphs, but fails to express whether two vertices are connected. We define a new logic (separator logic) by enriching FO with connectivity predicates conn_k(x, y, z_1, ..., z_k) that hold true in a graph if there exists a path between x and y after deletion of z_1, ..., z_k. In this talk I will first present a study of the expressive power of this new logic. I will then present algorithmic results for this logic on graph classes that exclude a topological minor. These results were obtained in collaboration with Michał Pilipczuk, Nicole Schirrmacher, Sebastian Siebertz, and Szymon Toruńczyk. |
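As a small illustration of the predicates above (my own example, not taken from the abstract), separator logic can already state vertex-separation properties that plain FO cannot:

```latex
% Hypothetical example formula, assuming the semantics of conn_k given above:
% "x and y are connected, but deleting the single vertex z disconnects them",
% i.e. z is a cut vertex separating x from y.
\varphi(x,y,z) \;=\; \mathrm{conn}_0(x,y) \,\wedge\, \neg\,\mathrm{conn}_1(x,y,z)
```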
Fri 21st Jan 11:00 am 12:00 pm | Aurélien Lemay in Seminar |
2021 | |
Fri 10th Dec 11:00 am 12:00 pm | Seminar Sebastien Tavenas Title: Superpolynomial lower bounds for constant-depth circuits Abstract: Any multivariate polynomial P(X_1,...,X_n) can be written as a sum of monomials, i.e., a sum of products of variables and constants of the field. The natural size of such an expression is the number of monomials. But what happens if we add a new level of complexity by considering expressions of the form: sum of products of sums (of variables and constants)? Now it becomes less clear how to show that a given polynomial has no small expression. In this talk we will solve exactly this problem. More precisely, we prove that some explicit polynomials do not have polynomial-size "sum of products of sums" (SPS) representations. We can also obtain similar results for SPSP, SPSPS, etc., that is, for all constant-depth expressions. |
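As a reminder of why the extra layer of sums is powerful (my own standard example, not taken from the talk), a depth-3 expression can be exponentially smaller than the sum-of-monomials representation:

```latex
% The polynomial below expands into 2^n monomials, yet it is a product of n sums,
% hence in particular it has a small sum-of-products-of-sums (SPS) expression.
P_n(x_1,\dots,x_n,y_1,\dots,y_n) \;=\; \prod_{i=1}^{n} (x_i + y_i)
```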
Thu 25th Nov 2:00 pm 3:00 pm | Nofar Carmeli in Links' Seminar |
Fri 29th Oct 11:00 am 12:00 pm | Seminar Antoine Amarilli |
Fri 22nd Oct 11:00 am 12:00 pm | Mikaël Monet in Links' Seminar |
Fri 15th Oct 11:00 am 12:00 pm | Claire Soyez-Martin in Links' seminar |
Fri 17th Sep 11:00 am 12:00 pm | Seminar Corentin Barloy Title: Stackless Processing of Streamed Trees Abstract: Processing tree-structured data in the streaming model is a challenge: capturing regular properties of streamed trees by means of a stack is costly in memory, but falling back to finite-state automata drastically limits the computational power. We propose an intermediate stackless model based on register automata equipped with a single counter, used to maintain the current depth in the tree. We explore the power of this model to validate and query streamed trees. Our main result is an effective characterization of regular path queries (RPQs) that can be evaluated stacklessly—with and without registers. In particular, we confirm the conjectured characterization of tree languages defined by DTDs that are recognizable without registers, by Segoufin and Vianu (2002), in the special case of tree languages defined by means of an RPQ. Link: paperman.name/data/pub.....0.pdf Lille-Salle |
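A minimal sketch of the single-counter idea (my own illustration, not the paper's construction): while scanning a stream of opening and closing tags, the only unbounded information kept is the current depth.

```python
# Illustration only: a "stackless" streaming pass over a well-nested tag stream.
# Instead of a stack, we keep a finite state plus a single integer counter
# holding the current depth, in the spirit of the model described above.

def max_depth_and_balance(stream):
    """Scan a stream of '(' / ')' events (opening/closing tags).

    Returns (max_depth, balanced) using O(1) registers: no stack is kept."""
    depth = 0          # the single counter: current depth in the tree
    max_depth = 0
    for event in stream:
        if event == "(":            # opening tag
            depth += 1
            max_depth = max(max_depth, depth)
        elif event == ")":          # closing tag
            depth -= 1
            if depth < 0:           # more closing than opening tags so far
                return max_depth, False
    return max_depth, depth == 0

if __name__ == "__main__":
    print(max_depth_and_balance("(()(()))"))   # (3, True)
```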
Fri 10th Sep 10:00 am 11:00 am | Seminar by Patrick Baillot Title: Type-based complexity analysis in a parallel process calculus Abstract: Some type systems have been designed to analyse statically the time complexity of functional languages. A natural question is whether this approach can be extended to parallel languages. We address this problem for the Pi-calculus, a paradigmatic calculus for parallel and concurrent computation. In Pi-calculus, processes communicate through channels that can carry values and channel names. We will define notions of sequential and parallel complexity for Pi-calculus, and present a type system that provides an upper bound on the time complexity of processes. This is based on joint work with Alexis Ghyselen (ESOP 2021). Based on: link.springer.com/chap.....9-3_3 |
Fri 9th Jul all day | Seminar - Antonio AL SERHALI Title: Integrating Schema-Based Cleaning into Automata Determinization Abstract: Schema-based cleaning for automata on trees or nested words was proposed recently to compute smaller deterministic automata for regular path queries on data trees. The idea is to remove all rules and states, from an automaton for the query, that are not needed to recognize any tree recognized by a given schema automaton. Unfortunately, however, deterministic automata for nested words may still grow large for automata for XPath queries, so that the much smaller schema-cleaned version cannot always be computed in practice. We therefore propose a new schema-based determinization algorithm that integrates schema-based cleaning directly. We prove that schema-based determinization always produces the same deterministic automaton as schema-based cleaning after standard determinization. Nevertheless, the worst-case complexity is considerably lower for schema-based determinization. Experiments confirm the relevance of this result in practice. |
Fri 4th Jun 10:00 am 12:30 pm | Seminar Pierre Ohlmann Zoom link: univ-lille-fr.zoom.us/j/95419000064 Title: Lower bounds for arithmetic circuits via the Hankel matrix Abstract: We study the complexity of representing polynomials by arithmetic circuits in both the commutative and the non-commutative settings. To analyse circuits we count their number of parse trees, which describe the non-associative computations realised by the circuit. In the non-commutative setting a circuit computing a polynomial of degree d has at most 2^{O(d)} parse trees. Previous superpolynomial lower bounds were known for circuits with up to 2^{d^{1/3-ε}} parse trees, for any ε>0. Our main result is to reduce the gap by showing a superpolynomial lower bound for circuits with just a small defect in the exponent for the total number of parse trees, that is 2^{d^{1-ε}}, for any ε>0. In the commutative setting a circuit computing a polynomial of degree d has at most 2^{O(d log d)} parse trees. We show a superpolynomial lower bound for circuits with up to 2^{d^{1/3-ε}} parse trees, for any ε>0. When d is polylogarithmic in n, we push this further to up to 2^{d^{1-ε}} parse trees. While these two main results hold in the associative setting, our approach goes through a precise understanding of the more restricted setting where multiplication is not associative, meaning that we distinguish the polynomials (xy)z and x(yz). Our first and main conceptual result is a characterization result: we show that the size of the smallest circuit computing a given non-associative polynomial is exactly the rank of a matrix constructed from the polynomial and called the Hankel matrix. This result applies to the class of all circuits in both commutative and non-commutative settings, and can be seen as an extension of the seminal result of Nisan giving a similar characterization for non-commutative algebraic branching programs. Our key technical contribution is to provide generic lower bound theorems based on analyzing and decomposing the Hankel matrix, from which we derive the results mentioned above. The study of the Hankel matrix also provides a unifying approach for proving lower bounds for polynomials in the (classical) associative setting. We demonstrate this by giving alternative proofs of recent lower bounds as corollaries of our generic lower bound results. |
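For context, here is my paraphrase of the classical non-commutative precursor mentioned at the end of the abstract (Nisan's characterization for algebraic branching programs); the talk's Hankel matrix plays an analogous role for non-associative circuits:

```latex
% For a homogeneous non-commutative polynomial f of degree d and 0 <= k <= d,
% let M_k(f) be the matrix with rows indexed by words u of length k and columns
% indexed by words v of length d-k over the variables, defined by
\big(M_k(f)\big)_{u,v} \;=\; \text{coefficient of the word } uv \text{ in } f .
% Nisan's theorem: the smallest algebraic branching program computing f has size
\sum_{k=0}^{d} \operatorname{rank}\, M_k(f) .
```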
Fri 28th May 10:00 am 11:00 am | Seminar Anastasia Dimou Title: Knowledge graph generation and validation |
Fri 21st May 10:00 am 12:00 pm | Seminar Dimitrios Myrisiotis Title: One-Tape Turing Machine and Branching Program Lower Bounds for MCSP Abstract: eccc.weizmann.ac.il/report/2020/103/ Speaker's webpage: dimyrisiotis.github.io/ zoom |
Fri 7th May 10:00 am 12:00 pm | Seminar Nicole Schweikardt Title: Spanner Evaluation over SLP-Compressed Documents Abstract: We consider the problem of evaluating regular spanners over compressed documents, i.e., we wish to solve evaluation tasks directly on the compressed data, without decompression. As compressed forms of the documents we use straight-line programs (SLPs) -- a lossless compression scheme for textual data widely used in different areas of theoretical computer science and particularly well-suited for algorithmics on compressed data. In terms of data complexity, our results are as follows. For a regular spanner M and an SLP S that represents a document D, we can solve the tasks of model checking and of checking non-emptiness in time O(size(S)). Computing the set M(D) of all span-tuples extracted from D can be done in time O(size(S) size(M(D))), and enumeration of M(D) can be done with linear preprocessing O(size(S)) and a delay of O(depth(S)), where depth(S) is the depth of S's derivation tree. Note that size(S) can be exponentially smaller than the document's size |D|; and, due to known balancing results for SLPs, we can always assume that depth(S) = O(log(|D|)) independent of D's compressibility. Hence, our enumeration algorithm has a delay logarithmic in the size of the non-compressed data and a preprocessing time that is at best (i.e., in the case of highly compressible documents) also logarithmic, but at worst still linear. Therefore, in a big-data perspective, our enumeration algorithm for SLP-compressed documents may nevertheless beat the known linear preprocessing and constant delay algorithms for non-compressed documents. [This is joint work with Markus Schmid, to be presented at PODS'21.] Link to the paper: arxiv.org/pdf/2101.10890.pdf for the paper at least Link to the ACM video: TBA |
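To illustrate the compression model (a toy sketch of my own, not code from the paper), an SLP is a context-free grammar deriving exactly one document; the classic doubling grammar shows why size(S) can be exponentially smaller than |D|:

```python
# Illustration only: a straight-line program (SLP) is a grammar producing exactly
# one string. The example below has n rules but derives a document of length
# 2^(n-1), which is why size(S) can be exponentially smaller than |D|.

def expand(slp, start):
    """Expand an SLP given as {nonterminal: tuple_of_rhs_symbols}."""
    rhs = slp.get(start)
    if rhs is None:                      # terminal symbol
        return start
    return "".join(expand(slp, sym) for sym in rhs)

def doubling_slp(n):
    """X1 -> 'a', X_{i+1} -> X_i X_i ; derives a^(2^(n-1)) with n rules."""
    slp = {"X1": ("a",)}
    for i in range(2, n + 1):
        slp[f"X{i}"] = (f"X{i-1}", f"X{i-1}")
    return slp

if __name__ == "__main__":
    s = doubling_slp(5)
    print(len(expand(s, "X5")))          # 16 = 2^4, derived from only 5 rules
```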
Fri 30th Apr 10:00 am 12:00 pm | Presentation of NetworkDisk I will present my project with Bruno: NetworkDisk. Abstract and Title: TBA link to the project: TBA |
Fri 9th Apr 10:00 am 12:00 pm | Seminar Pascal Weil Title: Algorithmic problems in the theory of infinite groups Abstract: In spite of the very general title, we will talk only about problems on subgroups of infinite groups, and in fact, only on subgroups of free groups. The results and methods I will present have been obtained over the past 40 years and are due to many researchers. I will start by setting the landscape, including for those who forgot what the free group is --- and we will see that we are dealing here, from the algorithmic point of view, with a variant of combinatorics on words. I will then present the tool that is central to most efficient algorithms on subgroups of free groups: the representation of each finitely generated subgroup by a labeled rooted graph (shall we say… an automaton?) which is unique and easily computable when a tuple of generators of the subgroup under consideration is given. This graph is called the Stallings graph. The game consists, then, in translating algorithmic problems on subgroups into algorithmic problems on Stallings graphs, and in solving these problems as efficiently as possible. We will discuss in particular the following problems (clearly: just the beginning of this long list). - The generalized word problem: given k+1 elements of the free group (these are words), does the last one belong to the subgroup generated by the k first ones? - The index problem: given a tuple of elements of the free group, does the subgroup they generate have finite index? - The basis problem: given a tuple of elements of the free group, find the rank and a basis of the subgroup they generate. - The intersection problem: given two tuples of elements of the free group, compute the intersection of the subgroups they generate (compute a basis of this intersection). - The conjugacy problem: given two tuples of elements of the free group, are the subgroups they generate equal? conjugate? - And many other problems (keywords: Whitehead minimality, free factors, malnormality, closure under radicals, closure in the sense of the pro-p topology, etc…) |
Fri 26th Mar 10:00 am 11:00 am | Seminar Anne Etien Title: Managing structural and behavioral evolution in relational databases: Application of Software Engineering techniques. Abstract: Relational databases play a central role in many information systems. Their schemas usually contain structural and behavioral entity descriptions. However, as any piece of software, they must continuously evolve to adapt to new requirements of a world in constant change. From an evolution point of view, problems are twofold: (1) relational database management systems do not allow inconsistencies, i.e., no entity can reference a non-existing entity; (2) stored procedure bodies are not described by metadata, i.e., DBMSs such as PostgreSQL consider stored procedure bodies as plain text and references to entities are unknown. As a consequence, evaluating the impact of an evolution of the database schema is a difficult task. In this seminar, we present a semi-automatic approach based on recommendations (a sort of nested code transformations). Recommendations are proposed to architects, who select the ones fitting their needs. Selected recommendations are then analysed and compiled to generate an SQL script respecting the constraints imposed by the RDBMS. To support recommendations, we designed a meta-model for relational databases easing computation of change impact. We performed an experiment to validate the approach by reproducing a real evolution on a database. The results of our experiment show that our approach is able to reproduce exactly a manual modification in 75% less time. Zoom link: univ-lille-fr.zoom.us/j/95419000064 |
Fri 19th Mar 10:00 am 12:00 pm | Seminar Paolo Ferragina Title: Theory and practice of learning-based compressed data structures Presenter: Giorgio Vinciguerra Abstract: We revisit two fundamental and ubiquitous problems in data structure design: predecessor search and rank/select primitives. We show that real data present a peculiar kind of regularity based on geometric considerations. We name it “approximate linearity”. We thus expand the horizon of compressed data structures by presenting two solutions for the problems above that discover, or “learn”, in a principled algorithmic way, these approximate linearities. We provide a walkthrough of these new theoretical achievements, also with a focus on open-source libraries and their experimental improvements. We conclude by discussing the plethora of research opportunities that these new learning-based approaches to data structure design open up. Zoom link: univ-lille-fr.zoom.us/j/95419000064 |
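A toy sketch of the "approximate linearity" idea (my own illustration, not the actual data structures from the talk): fit one linear model mapping keys to ranks, record its maximum error, and answer predecessor queries with a search confined to a small correction window.

```python
# Toy illustration: exploit approximate linearity of sorted keys for predecessor
# search. One linear model rank ~ slope*key + intercept is fitted, its maximum
# rank error eps is recorded, and queries do a binary search restricted to a
# window of width O(eps) around the predicted rank.

import bisect

class LearnedPredecessor:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        x0, x1 = self.keys[0], self.keys[-1]
        self.slope = (n - 1) / (x1 - x0) if x1 != x0 else 0.0
        self.intercept = -self.slope * x0
        # maximum prediction error over the stored keys
        self.eps = max(abs(self._predict(k) - i) for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def predecessor(self, q):
        """Largest key <= q, or None if q is smaller than every key."""
        p = self._predict(q)
        lo = max(0, p - self.eps - 1)
        hi = min(len(self.keys), p + self.eps + 2)
        i = bisect.bisect_right(self.keys, q, lo, hi)  # search only the window
        return self.keys[i - 1] if i > 0 else None

if __name__ == "__main__":
    lp = LearnedPredecessor(range(0, 1000, 7))   # nearly linear data
    print(lp.eps, lp.predecessor(500))           # eps is 0 here; prints 0 497
```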
Fri 12th Mar 10:00 am 12:00 pm | Seminar: Antonio AL SERHALI Title: Can Earliest Query Answering on Nested Streams be achieved in Combined Linear Time? |
Fri 19th Feb 10:00 am 11:00 am | Seminar: Bernardo Subercaseaux Title: Foundations of Languages for Interpretability. Abstract: The area of interpretability in Machine Learning aims for the design of algorithms that we humans can understand and trust. One of the fundamental questions of interpretability is: given a classifier M, and an input vector x, why did M classify x as M(x)? In order to approximate an answer to this "why" question, many concrete queries, metrics and scores have emerged as proxies, and their complexity has been studied over different classes of models. Many of these analyses are ad hoc, but they tend to agree on the fact that these queries and scores are hard to compute over Neural Networks, but easy to compute over Decision Trees. It is thus natural to think of a more general approach, like a query language in which users could write an arbitrary number of different queries, and that would allow for a generalized study of the complexity of interpreting different ML models. Our work proposes foundations for such a language, tying it to First-Order Logic, as a way to have a clear understanding of its expressiveness and complexity. We manage to define a minimalistic structure over FO that allows expressing many natural interpretability queries over models, and we show that evaluating such queries can be done efficiently for Decision Trees, in data complexity. Zoom link: univ-lille-fr.zoom.us/j/95419000064 |
Fri 12th Feb 10:00 am 12:00 pm | Seminar: Florent Capelli Title: Regularizing the delay of enumeration algorithms Zoom link: univ-lille-fr.zoom.us/j/95419000064 Abstract: Enumeration algorithms are algorithms whose goal is to output the set of all solutions to a given problem. There exist different measures for the quality of such algorithms, whose relevance depends on what the user wants to do with the solution set. If the goal of the user is to explore some solutions or to transform the solutions as they are outputted with a stream-like algorithm, a relevant measure of the complexity of an enumeration algorithm is the delay between the output of two distinct solutions. Following this line of thought, significant efforts have been made by the community to design polynomial-delay algorithms, that is, algorithms whose delay between the output of two new solutions is polynomial in the size of the input. While this measure is interesting, it is not always completely necessary to have a bound on the delay; it is enough to ask for a guarantee that running the algorithm for O(t poly(n)) steps will result in the output of at least t solutions. Of course, by storing each solution seen and outputting them regularly, one can simulate a polynomial delay, but if the number of solutions is large, it may result in a blow-up in the space used by the enumerator. In this talk, we will present a new technique that allows one to transform such an algorithm into a polynomial-delay algorithm using polynomial space. This is joint work with Yann Strozecki. |
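A minimal sketch of the naive buffering idea described in the abstract (not the talk's contribution, which achieves the same regular delay in polynomial space): assuming an enumerator that, after k·C computation steps, has always emitted at least k solutions, releasing one buffered solution every C steps yields a fixed delay.

```python
# Illustration only: turn an amortized guarantee into a regular delay by buffering.
# The enumerator is modelled as a generator yielding "step" ticks and ("sol", x)
# outputs; we assume that after k*C ticks it has always emitted >= k solutions.

def regularize(enumerator, C):
    """Re-emit solutions with a regular delay of exactly C computation steps."""
    buffer, ticks = [], 0
    for event in enumerator:
        if event == "step":
            ticks += 1
            if ticks % C == 0:        # one release every C steps
                yield buffer.pop()    # non-empty thanks to the amortized guarantee
        else:                         # event is ("sol", x)
            buffer.append(event[1])
    while buffer:                     # flush the remaining stored solutions
        yield buffer.pop()

if __name__ == "__main__":
    def demo():                       # solutions found early, then a long dry spell
        for i in range(3):
            yield ("sol", i)
            yield ("sol", i + 10)
        for _ in range(10):
            yield "step"
    print(list(regularize(demo(), 3)))   # six solutions, released at a fixed pace
```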
Fri 15th Jan 10:00 am 12:00 pm | Seminar by Kim Nguyễn Title: The BOLDR project Abstract: In this presentation, I will give an account of the BOLDR project and perspectives in the field of language-integrated queries. Several classes of solutions allow programming languages to express queries: specific APIs such as JDBC, Object-Relational Mappings (ORMs) such as Hibernate, and language-integrated query frameworks such as Microsoft's LINQ. However, most of these solutions do not allow for efficient cross-database queries, and none allow the use of complex application logic from the programming language in queries. We study the design of a new language-integrated query framework called BOLDR that allows the evaluation in databases of queries written in general-purpose programming languages containing application logic, and targeting several databases following different data models. In this framework, application queries are translated to an intermediate representation. Then, they are typed with a type system extensible by databases in order to detect which database language each subexpression should be translated to. This type system also allows us to detect a class of errors before execution. Next, they are rewritten in order to avoid query avalanches and make the most out of database optimizations. Finally, queries are sent for evaluation to the corresponding databases and the results are converted back to the application. Our experiments show that the techniques we implemented are applicable to real-world database applications, successfully handling a variety of language-integrated queries with good performance. This talk will give an overview of what has been achieved so far (mainly in the context of Julien Lopez's PhD thesis) and will glimpse at preliminary work that is being done in the context of a collaboration with Oracle Labs. |
Fri 8th Jan 10:45 am 12:30 pm | Seminar: Lê Thành Dũng (Tito) Nguyễn Title: The planar geometry of first-order string transductions (joint work with Pierre Pradic) Abstract: hal.archives-ouvertes......ument We propose a new machine model recognizing star-free languages, with a geometric flavor. Our starting point is the characterization of regular languages using two-way automata (2DFA). The idea is to take seriously the visual representations found throughout the literature of the behavior of a 2DFA on a word; by putting a total order on the set of states, one can formally define what it means for such a behavior to be planar, in a sense analogous to the planarity of combinatorial maps. Star-free languages are then exactly the languages recognized by "planar 2DFA". We also show that the corresponding planar transducer model characterizes the class of first-order transductions (a.k.a. aperiodic regular functions). If time allows, the talk will briefly discuss the connections of this work with the non-commutative lambda-calculus (cf. our recent paper Aperiodicity in a non-commutative logic, ICALP'20). |
2020 | |
Thu 17th Dec 2:00 pm 4:00 pm | Nofar Carmeli Speaker: Nofar Carmeli (nofar.carme.li/) Zoom link: univ-lille-fr.zoom.us/j/95419000064 Title: The Complexity of Answering Unions of Conjunctive Queries. Abstract: We discuss the fine-grained complexity of enumerating the answers to a query over a relational database. With the ideal guarantees, linear time is required before the first answer to read the input and determine its existence, and then we need to print the answers one by one. Consequently, we wish to identify the queries that can be solved with linear preprocessing time and constant or logarithmic delay between answers. A known dichotomy classifies CQs into those that admit such enumeration and those that do not. The computationally expensive component of query answering is joining tables, which can be done efficiently if and only if the join query is acyclic. However, the join query usually does not appear in a vacuum; for example, it may be part of a larger query, or it may be applied to a database with dependencies. We inspect how the complexity changes in these settings and chart the borders of tractability within. In addition, we consider the task of enumerating query answers with a uniformly random order, and we propose to do so using an efficient random-access structure for representing the set of answers. We also prove conditional lower bounds showing that our algorithms capture all tractable queries in some cases. Among our results, we show that a union of tractable conjunctive queries may be intractable w.r.t. random access; on the other hand, a union of intractable conjunctive queries may be tractable w.r.t. enumeration. |
Fri 11th Dec 10:00 am 11:30 am | Alexandre Vigny Title: Elimination Distance to Bounded Degree on Planar Graphs Link to the zoominar: univ-lille-fr.zoom.us/j/95419000064 Abstract: What does it mean for a graph to almost be planar? Or to almost have bounded degree? On such simple graph classes, some difficult algorithmic problems become tractable. Ideally, one would like to use (or adapt) existing algorithms for graphs that are "almost" in such a simple class. In this talk, I will discuss the notion of elimination distance to a class C, a notion introduced by Bulian and Dawar (2016). The goals of the talk are: 1) Define this notion, and discuss why it is relevant by presenting some existing results. 2) Show that we can compute the elimination distance of a given planar graph to the class of graphs of degree at most d, i.e., answer the question: "Is this graph close to a graph of bounded degree?" The second part is the result of a collaboration with Alexandre Lindermayer and Sebastian Siebertz. |
Fri 4th Dec 10:00 am 11:00 am | Seminar: Pierre Pradic Title: Extracting nested relational queries from implicit definitions Abstract: arxiv.org/pdf/2005.06503.pdf In this talk, I will present results obtained jointly with Michael Benedikt establishing a connection between the Nested Relational Calculus (NRC) and sets implicitly definable using Δ₀ formulas. Call a formula φ(I,O) an implicit definition of the relation O(x,...) in terms of I(y,...) if O is functionally determined by I: for every I, O, O', if both φ(I,O) and φ(I,O') hold, then we have O ≡ O'. When φ is first-order and I and O are relations over base sorts, then Beth's definability theorem states that there is a first-order formula ψ(I,x,...) corresponding to O whenever φ(I,O) holds. Further, this explicit definition ψ can effectively be computed from a sequent calculus proof witnessing that φ is functional. This allows for practical use of implicit definitions in the context of database programming, as there is a well-established link between fragments of explicitly FO-definable relations and relational calculi. NRC is a conservative extension of relational calculi from database theory with limited powerset types in addition to tupling and anonymous base types. NRC expressions thus not only encompass flat relations over primitive datatypes like SQL but also nested collections, while remaining useful in practice. We extend the above correspondence between first-order logic and flat relational queries to NRC and implicit definitions using set-theoretical Δ₀ formulas over (typed) nested collections. Our proof of the equivalence goes through a notion of Δ₀-interpretation and a generalization of Beth definability for multi-sorted structures. This proof is non-constructive and thus does not yield any useful algorithm for converting implicit definitions into NRC terms. Using an approach more closely related to proof-theoretic interpolation, we give a constructive proof of the result restricted to intuitionistic provability, i.e., when the input functionality proof π of φ(I,O) is carried out in intuitionistic logic. Further, if π is cut-free, this can be done efficiently. Whether or not there exists a polynomial-time procedure working with classical proofs of functionality is still an open problem. I will focus on the effective result for the talk, and if time allows, discuss the difficulties with extending it to classical logic. I will not assume any background in either database or model theory. |
Fri 27th Nov 10:00 am 11:30 am | Seminar: Charles Paperman Title: Stackless processing of streamed trees Abstract: In this talk, I will first present the state of the art of efficient implementations of streaming text algorithms on modern architectures. Then I will present some recent results on the extraction of information from streamed structured documents without stack overhead. For more info: paperman.name/data/pub.....d.pdf |
Fri 13th Nov 10:00 am 12:00 pm | Seminar: Mikaël Monet Title: The Complexity of Counting Problems over Incomplete Databases Abstract: In this presentation, I will talk about various counting problems that naturally arise in the context of query evaluation over incomplete databases. Incomplete databases are relational databases that can contain unknown values in the form of labeled nulls. We will assume that the domains of these unknown values are finite and, for a Boolean query $q$, we will consider the following two problems: given as input an incomplete database $D$, (a) return the number of completions of $D$ that satisfy $q$; or (b) return the number of valuations of the nulls of $D$ yielding a completion that satisfies $q$. We will study the computational complexity of these problems when $q$ is a self-join-free conjunctive query, and study the impact on the complexity of the following two restrictions: (1) every null occurs at most once in $D$ (what is called *Codd tables*); and (2) the domain of each null is the same. Roughly speaking, we will see that counting completions is much harder than counting valuations, and that both (1) and (2) can reduce the complexity of our problems. I will also talk about the approximability of these problems and prove that, while counting valuations can efficiently be approximated, in most cases counting completions cannot. On our way, we will encounter the counting complexity classes #P, Span-P and Span-L. The presentation will be based on joint work with Marcelo Arenas and Pablo Barcelo; see arxiv.org/abs/1912.11064 |
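A toy brute-force illustration of the two counts (my own example, not from the paper): on a tiny Codd table, the number of satisfying valuations and the number of satisfying completions already differ.

```python
# Toy example: R contains two nulls n1, n2, both with domain {'a','b'}; S = {'a'}
# is complete. The Boolean query q asks whether some value occurs in both R and S
# (a self-join-free CQ). Different valuations may yield the same completion, so
# the two counts defined above need not coincide.

from itertools import product

nulls = ["n1", "n2"]
domain = {"n1": {"a", "b"}, "n2": {"a", "b"}}
R = ["n1", "n2"]                  # each entry is a null placeholder
S = {"a"}                         # complete relation

def q_holds(r_completion):
    return any(v in S for v in r_completion)

sat_valuations = 0
sat_completions = set()
for values in product(*(sorted(domain[n]) for n in nulls)):
    valuation = dict(zip(nulls, values))
    completion = frozenset(valuation[t] for t in R)   # sets collapse duplicates
    if q_holds(completion):
        sat_valuations += 1
        sat_completions.add(completion)

print(sat_valuations)        # 3 valuations make q true...
print(len(sat_completions))  # ...but only 2 distinct completions do
```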
Fri 16th Oct 11:00 am 12:00 pm | Seminar: Aurélien Lemay Title: ShEx Learning from Typed Graphs Abstract: In knowledge graphs, schemas are becoming a new asset to describe the organization of data. The new world-leading format ShEx is becoming a de facto standard in the industry; it allows defining flexible and powerful schemas. In this context, the inference of schemas can become a solution to provide ShEx expressions that describe already existing data. Typically, the inference starts from untyped graphs. However, this task appears to be more complex than expected in general, and is possible only for subclasses of ShEx. The inference of schemas from typed graphs gives a baseline for those algorithms. Its comprehension allows us to better understand the underlying difficulties of the task, and it already presents unexpected difficulties. We present an algorithm that infers ShEx-defined schemas from fully typed graphs. We also present some encountered difficulties, as well as the limitations of the approach. |
Fri 24th Jul 2:30 pm 4:30 pm | Momar Sakho, PhD defense |
Wed 8th Jan 1:30 pm 3:30 pm | Introduction to argumentation theory Salle Agora 1, Bâtiment ESPRIT |
2019 | |
Thu 19th Dec 11:00 am 1:30 pm | PhD defense of L. Gallois, amphitheatre, Bâtiment B, Inria |
Fri 13th Dec 11:45 am 1:00 pm | 1. On Parsing Gpath (Jérémy and Antonio) 2. On Nested Regular Expression (Joachim) |
Fri 13th Dec 10:30 am 11:45 am | Rehearsal by Lily for the team "Lille-Salle B31" |
Tue 24th Sep 10:00 am 11:00 am | Stijn Vansummeren Title: General Dynamic Yannakakis: Conjunctive Queries with Theta Joins Under Updates Abstract: The ability to efficiently analyze changing data is a key requirement of many real-time analytics applications like Stream Processing, Complex Event Recognition, Business Intelligence, and Machine Learning. Traditional approaches to this problem are based either on the materialization of subresults (to avoid their recomputation) or on the recomputation of subresults (to avoid the space overhead of materialization). Both techniques have recently been shown suboptimal: instead of fully materializing results and subresults, one can maintain a data structure that supports efficient maintenance under updates and can quickly enumerate the full query output, as well as the changes produced under single updates. In our work we are concerned with designing a practical family of algorithms for dynamic query evaluation based on this idea, and for queries featuring both equi-joins and inequality joins, as well as certain forms of aggregation. Our main insight is that, for acyclic conjunctive queries, such algorithms can naturally be obtained by modifying Yannakakis' seminal algorithm for processing acyclic joins in the static setting. In this talk I present the main ideas behind this modification, offset it against the traditional ways of doing incremental view maintenance, and discuss recent extensions such as dealing with general theta-joins. Amphitheater of INRIA Building B. |
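For context, a compact sketch of the static Yannakakis-style semi-join passes that the dynamic algorithm builds on (my own simplified illustration for a small path join, not the general algorithm from the talk):

```python
# Illustration only: Yannakakis-style processing of the acyclic path join
# R(a,b) JOIN S(b,c) JOIN T(c,d) on toy data.

R = [(1, 2), (1, 3), (4, 5)]
S = [(2, 7), (3, 7), (9, 9)]
T = [(7, 0)]

def semijoin(left, right, lpos, rpos):
    """Keep the tuples of `left` whose lpos-th value appears at rpos in `right`."""
    keys = {t[rpos] for t in right}
    return [t for t in left if t[lpos] in keys]

# Bottom-up then top-down semi-join passes remove all dangling tuples...
S1 = semijoin(S, T, 1, 0)          # S semijoin T on c
R1 = semijoin(R, S1, 1, 0)         # R semijoin S on b
S2 = semijoin(S1, R1, 0, 1)        # S semijoin R on b (top-down pass)
T1 = semijoin(T, S2, 0, 1)         # T semijoin S on c

# ...so the final join no longer produces dangling intermediate tuples.
out = [(a, b, c, d) for (a, b) in R1 for (b2, c) in S2 if b == b2
                    for (c2, d) in T1 if c == c2]
print(out)   # [(1, 2, 7, 0), (1, 3, 7, 0)]
```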
Tue 25th Jun 11:30 am 5:30 pm | Happy Hours Inria Lille |
Tue 25th Jun 10:30 am 11:30 am | Seminar Véronique Benzaken and Évelyne Contejean They will present a tool that takes as input an SQL query and its compilation by Postgres into an execution plan, and proves (in Coq) that the initial query is equivalent to the execution plan. Lille-Salle B21 |
Fri 21st Jun 11:00 am 12:00 pm | Charles |
Fri 24th May 11:00 am 12:00 pm | Seminar Sławek |
Fri 10th May 11:00 am 12:00 pm | Seminar Iovka |
Fri 12th Apr 11:00 am 12:30 pm | Alexandre Vigny in Links Seminar |
Fri 5th Apr 11:00 am 12:00 pm | Semyon Grigorev in Links' seminar |
Fri 5th Apr 11:00 am 12:30 pm | Talk of Semyon Grigorev Title: Parsing techniques for context-free path querying Abstract: Context-free path querying (CFPQ) is a case of language-constrained path querying: a way to specify constraints on paths in a graph in terms of formal languages. In CFPQ, the language is restricted to be context-free. Classical parsing techniques and algorithms, such as generalized LR and LL parsing, or parser combinators, can be used for CFPQ. Results of adapting different parsing techniques to CFPQ will be presented. B31 |
Fri 22nd Mar 10:00 am 11:30 am | Seminar LINKS by Aurelien Lemay "Tutorial: Grammatical Inference" |
Fri 8th Mar 11:00 am 12:00 pm | Seminar Momar Title: Regular Matching and Inclusion on Compressed Tree Patterns with Context Variables
Abstract: We study the complexity of regular matching and inclusion for compressed tree patterns extended by context variables. The addition of context variables to tree patterns permits us to properly capture compressed string patterns but also compressed patterns for unranked trees with tree and hedge variables. Regular inclusion for the latter is relevant to certain query answering on XML streams with references. |
Fri 15th Feb 11:00 am 12:00 pm | Seminar [Florent] |
Wed 13th Feb 1:30 pm 2:30 pm | 30mn de science : Florent Capelli on Knowledge Compilation Inria salle Plénière (Bâtiment A) |
Fri 1st Feb 11:00 am 12:30 pm | Bruno Guillon in Links' seminar Title: Finding paths in large graphs Abstract: When dealing with large graphs, classical algorithms for finding paths such as Dijkstra's algorithm are unsuitable, because they require too many disk accesses. To avoid this while keeping a data structure of size quasi-linear in the size of the graph, we propose to guide the path search with a distance oracle, obtained from a topological embedding of the graph. I will present fresh experimental research on this topic, in which we obtain graph embeddings using learning algorithms from natural language processing. On some graphs, such as the graph of publications from DBLP, our topologically-guided path search allows us to visit only a small portion of the graph, on average. This is joint work with Charles Paperman. B21 Room |
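A toy sketch of embedding-guided path search (my own illustration, not the actual system from the talk): expand first the neighbors that the embedding-based oracle believes are closest to the target, so that a good oracle lets the search visit only a small part of the graph.

```python
# Illustration only: greedy best-first path search guided by a distance oracle
# derived from node embeddings.

import heapq, math

def guided_path(graph, emb, source, target):
    """graph: {node: [neighbors]}, emb: {node: coordinate tuple} used as oracle."""
    def oracle(u):                      # estimated distance to the target
        return math.dist(emb[u], emb[target])

    frontier = [(oracle(source), source, [source])]
    seen = {source}
    while frontier:
        _, node, path = heapq.heappop(frontier)   # expand the most promising node
        if node == target:
            return path
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (oracle(nxt), nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    g = {1: [2, 3], 2: [4], 3: [4], 4: []}
    e = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.0, 2.0), 4: (2.0, 0.0)}
    print(guided_path(g, e, 1, 4))      # [1, 2, 4]
```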
2018 | |
Fri 23rd Nov 11:00 am 12:30 pm | Filip Mazowiecki in Links' seminar Title: Containment for Probabilistic automata. Abstract: This is an ICALP 2018 paper. We analyze when the model of probabilistic automata has decidable properties, when restricting the ambiguity. The notion of ambiguity is usually used in weighted automata or transducers, but we follow a recent paper by Fijalkow, Riveros and Worrell, which introduced this approach. We do not solve everything but our decidability results rely unexpectedly on Schanuel's conjecture and we provide some geometric intuition behind the hardness of the problem. |
Fri 16th Nov 11:00 am 12:30 pm | Aurelien Lemay's Habilitation defense IRCICA |
Thu 15th Nov 4:30 pm 5:30 pm | Andreas Maletti in Aurélien Lemay's prehabilitation seminar Lille-Salle B21 |
Thu 15th Nov 3:30 pm 4:30 pm | Henning Fernau in Aurélien Lemay's prehabilitation seminar: Lille-Salle B21 |
Fri 9th Nov 11:00 am 12:30 pm | Talk of Bruno Guillon Abstract: The time complexity of 1-limited automata is investigated from a descriptional complexity viewpoint. Though the model recognizes regular languages only, it may use quadratic time in the input length. We show that, with a polynomial increase in size and preserving determinism, each 1-limited automaton can be transformed into a linear-time equivalent one. We also obtain polynomial transformations into related models, including weight-reducing Hennie machines (i.e., one-tape Turing machines syntactically forced to operate in linear time), and we show exponential gaps for converse transformations in the deterministic case. |
Fri 26th Oct 11:00 am 12:30 pm | Momar Sakho in Links seminar "Lieu : Lille, Salle : A12" |
Thu 18th Oct 4:00 pm 5:00 pm | Talk of Mikael Monet Title: Combined Complexity of Probabilistic Query Evaluation Abstract: Query evaluation over probabilistic databases (probabilistic query evaluation, or PQE) is known to be intractable in many cases, even in data complexity, i.e., when the query is fixed. Although some restrictions of the queries and instances have been proposed to lower the complexity, these known tractable cases usually do not apply to combined complexity, i.e., when the query is not fixed. This talk gives an overview of my PhD research, which investigates which queries and instances ensure the tractability of PQE in combined complexity. I will first present our work on PQE of conjunctive queries on binary signatures, which can be rephrased as a probabilistic graph homomorphism problem. We restrict the query and instance graphs to be trees and show the impact on the combined complexity of diverse features such as edge labels, branching, or connectedness. This is joint work with Antoine Amarilli and Pierre Senellart and was presented at PODS'2017. Second, we will explore the combined complexity of evaluating queries on treelike databases, i.e., databases whose treewidth is bounded by a constant. We introduce a class of queries (named 'CFG-Datalog') which generalizes many known query languages that are tractable in this context. Specifically, we show that the (non-probabilistic) evaluation of CFG-Datalog on treelike databases can be solved with complexity linear in the product of the instance size and of the query size. In the process, we introduce a new representation of the provenance of a query on a database, based on cyclic Boolean circuits. This is joint work with Antoine Amarilli, Pierre Bourhis, and Pierre Senellart, and was presented at ICDT'2017. Last, we will move to the field of knowledge compilation and present our work that connects various notions of width for Boolean circuits. We show that circuits of bounded treewidth can be efficiently compiled into structured deterministic decomposable normal forms (d-SDNNFs), which in particular allows efficient probability computation. We show the implications of this result for PQE of CFG-Datalog on treelike databases. We also prove general lower bounds on knowledge compilation formalisms, which imply lower bounds for provenance computation. This is joint work with Antoine Amarilli and Pierre Senellart and was presented at ICDT'2018. "Lieu : Lille, Salle : B21" |
Fri 28th Sep 10:15 am 11:45 am | José Lozano Links seminar |
Fri 21st Sep 10:30 am 12:00 pm | Fabian Reiter in Links' Seminar: Descriptive distributed complexity This talk connects two classical areas of theoretical computer science: descriptive complexity and distributed computing. The former is a branch of computational complexity theory that characterizes complexity classes in terms of equivalent logical formalisms. The latter studies algorithms that run in networks of interconnected processors. Although an active field of research since the late 1970s, distributed computing is still lacking the analogue of a complexity theory. One reason for this may be the large number of distinct models of distributed computation, which make it rather difficult to develop a unified formal framework. In my talk, I will outline how the descriptive approach, i.e., connections to logic, could be helpful in this regard. Salle B21 |
Fri 7th Sep 11:00 am 12:30 pm | Rustam Azimov in Links Seminar: "Context-Free Path Querying by Matrix Multiplication" |
Fri 25th May 10:00 am 11:30 am | Nicolas Crosetti in Links' Seminar: Dependency weighted aggregation Lille B21 |
Fri 27th Apr 10:30 am 12:30 pm | Yann Strozecki in Links' Seminar: Methods in enumeration In enumeration we are interested in generating a set of solutions, while bounding the time needed to generate one solution. We will first present the complexity measures used in this context, simple theoretical results and a few open questions. We then introduce classical problems in this area such as the enumeration of: trees, models of a DNF, models of an FO or MSO formula, the maximal cliques of a graph, circuits of a matroid... We use them to illustrate the algorithmic toolbox of enumeration (Gray codes, backtrack search, reverse search, saturation...). Lille B21 |
Wed 25th Apr 2:15 pm 3:45 pm | Nicolas (internship) Jan's office |
Fri 20th Apr 2:15 pm 3:45 pm | Nicolas (internship) Jan's office |
Fri 13th Apr 2:15 pm 3:45 pm | Nicolas (internship) Jan's office |
Fri 13th Apr 10:00 am 12:00 pm | Iovka Boneva and Jérémie Dusart in Links' Seminar: Shape Expressions Schemas 2.0 : Semantics and Implementation We will present the semantics of the ShEx language, its implementation in java, and future directions of research. Salle B21 |
Fri 6th Apr 2:15 pm 3:45 pm | Nicolas (internship) Jan's office |
Fri 30th Mar 2:15 pm 3:45 pm | Nicolas (internship) Jan's office |
Fri 23rd Mar 10:00 am 11:30 am | Paul Gallot: High-Order Tree Transducers Paul will present the paper by Sylvain, Aurélien and Paul, submitted to LICS 2018, on the topic of higher-order tree transducers. |
Wed 21st Mar 2:00 pm 3:15 pm | Delta rehearsal |
Fri 16th Mar 10:00 am 11:30 am | Luc Dartois in Links' Seminar: A Logic for Word Transductions with Synthesis In this talk I present a logic, called LT, to express properties of transductions, i.e. binary relations from input to output (finite) words. I argue that LT is a suitable candidate as a specification language for verification of non-reactive systems, extending the successful approach of verifying synchronous systems via Mealy Machines and MSO. In LT, the input/output dependencies are modelled via an origin function which associates to any position of the output word the input position from which it originates. LT is well-suited to express relations (which are not necessarily functional), and can express all regular functional transductions, i.e. transductions definable for instance by deterministic two-way transducers. Despite its high expressive power, LT has decidable satisfiability problems. The main contribution is a synthesis result: it is always possible to synthesize a regular function which satisfies the specification. Finally, I make explicit a correspondence between transductions and data words. As a side result, we obtain a new decidable logic for data words. Inria Lille |
Fri 9th Mar 10:00 am 11:00 am | Benjamin Bergougnoux: Counting minimal transversals of hypergraphs A transversal of a hypergraph H is a subset of vertices that intersects all the hyper-edges of H. The enumeration and the counting of the minimal transversals have a lot of applications in many domains (graph theory, AI, data mining, etc.). Counting problems are generally harder than their associated decision problems. For example, finding a minimal transversal is doable in polynomial time, but counting them is #P-complete (the equivalent of NP-complete for counting problems). We have proved that we can count the minimal transversals of any beta-acyclic hypergraph in polynomial time. Our result is based on a recursive decomposition of beta-acyclic hypergraphs due to Florent Capelli and on the introduction of a new notion that generalizes minimal transversals. A lot of exciting open questions live in the neighborhood of our result. In particular, our algorithm is able to count the minimum dominating sets of a strongly chordal graph, but counting minimum dominating sets is #P-complete on split graphs. Is this the beginning of a complete characterization of the complexity of counting minimal dominating sets in dense graphs? Salle B21 |
Fri 16th Feb 10:30 am 11:30 am | Victor Marsault: Formal semantics of the query-language Cypher Cypher is a query-language for property-graphs. It was originally designed and implemented as part of the Neo4j graph database, and it is currently used by several commercial database products and researchers. The semantics of Cypher queries is currently described using natural language and, as a result, it is often not well defined. This work is part of a project to define a full denotational semantics of Cypher queries. The talk will first present the main features of Cypher through examples, including the core mechanism: graph pattern-matching, and then will describe the formal semantics in its current state. Salle B21 - INRIA Institut National Recherche Informatique Automatique; 40 Avenue Halley, 59650 Villeneuve d'Ascq, France |
Wed 31st Jan 5:30 pm 7:00 pm | Seminar by Stéphane Huot, "Human-computer interaction: past perfect and simple future... or the other way around" Long before the advent of personal computers, the Internet and smartphones, Human-Computer Interaction (HCI) was already a central concern in some of the visions that helped shape modern computing, whether personal or professional. Yet the design and study of interactions is still often considered secondary in system design, with priority frequently given to developing functionalities rather than to the means of using them. This situation has gradually improved, notably with the advent of touch devices (smartphones and tablets) and entertainment devices (game consoles), for which the argument of ease of use has dethroned that of intrinsic power. This has obviously helped popularize and democratize access to technology. But one consequence, in our view, is a relative impoverishment of the possibilities offered by these technologies, which are paradoxically more powerful than ever. By hiding complexity rather than helping to master it, and by maintaining the myth that with these devices it is easy for anyone to do a lot without effort, the tendency is to sacrifice the potential of the computing tool and the performance of its users for quick familiarization, without enabling more advanced, more efficient, and perhaps more gratifying use. This balance between ease of use and power of the tool is a difficult trade-off to find, and it is, in our view, one of the challenges and a major difficulty of HCI: observing and understanding the sensorimotor, psychomotor, cognitive, social and technological phenomena at play in the interaction between people and systems, in order to improve this interaction, to guide its design, and to empower users. The ultimate goal is to enable them to achieve what would be impossible for them without the tool, even if this requires a real learning effort on their part. In this seminar, we will begin by presenting what Human-Computer Interaction is as a research field, with its objectives, its methods and its practices. Then, through a brief history of computing seen through the lens of interaction, we will discuss some of today's innovations that stem from the visions of the field's pioneers, considering in particular this simplicity/power trade-off. We will also see, through examples and counter-examples from our current digital environments as well as recent research work, that these visions still carry many present and future challenges for HCI. In particular, we will conclude by discussing the need to adopt a user- and interaction-centered approach at a time of major scientific, technological and societal challenges of computing, such as the design of autonomous systems or the automatic processing and exploitation of data. Keywords: Machines and languages, Algorithms, Human-Computer Interaction (HCI). Lilliad |
Wed 31st Jan 4:00 pm 5:30 pm | External lecture by Gérard Berry, Collège de France The Inria Lille - Nord Europe research centre welcomes Gérard Berry, of the Collège de France, for his lecture on digital photography. The lecture will be followed by a seminar by Stéphane Huot, head of the Mjolnir team. The event will end with a cocktail during which Isabelle Herlin, director of the Inria Lille - Nord Europe research centre, will present her New Year's wishes. Date: 31/01/2018 Venue: Lilliad, Campus Université Lille - sciences et technologies - 2 avenue Jean Perrin, Villeneuve d'Ascq. Programme: 3:30 pm: Welcome; 3:45 pm: Introduction by Isabelle Herlin; 4:00 - 5:30 pm: Lecture by Gérard Berry, "Digital photography, a perfect example of the power of computer science"; 5:30 - 6:30 pm: Seminar by Stéphane Huot, "Human-computer interaction: past perfect and simple future... or the other way around"; 6:30 - 6:45 pm: Questions to both speakers; 7:00 - 8:30 pm: Cocktail. Lecture by Gérard Berry. Short bio: Gérard Berry is a computer scientist, professor at the Collège de France, where he holds the chair of Algorithms, Machines and Languages. Abstract: The digital camera is an excellent example of the current evolution of cyber-physical systems, that is, systems tightly coupling mechanics, physics, electronics and software. It is also a marvellous example, accessible to everyone, of the power of the methods of computer science compared to those of physics and mechanics alone. The lecture will present the range of algorithms embedded in modern cameras and in post-production software, then discuss the major impact they have on the design of cameras and lenses, which is being completely overturned at the moment, and the impact they have on professional and amateur photographers. Film photography, which is quite old, progressed only slowly during the 20th century: slow improvement of films and papers, introduction of automatic exposure computed analogically from photoelectric cells, rangefinder or reflex viewing; all of this took decades. In contrast, from the commercialization of the first digital camera in 1990 onwards, digital photography evolved extremely fast. By 2003 there were already decent semi-professional cameras and, as early as 2009, high-quality reflex cameras at an affordable price. Nowadays there is a whole range of cameras of various sizes, all capable of delivering high-quality images. Even telephones have become very good photo and video cameras, mainly thanks to the algorithms they implement. Since they can do many other things, for instance immediately send images over the Internet, they are replacing the old small compact cameras and serving as the single piece of equipment for occasional photographers, and for everyone in countries where film photography was unaffordable for the inhabitants. The logic of digital photography has thus become very different from that of film, which does not prevent the latter from keeping the favour of some artists. What made this revolution possible, and why did it go so fast? There are three main reasons: the design by physicists and the industrial manufacturing in large volumes of high-quality sensors; the considerable increase in power and the decrease in energy consumption of embedded computers, thanks to the famous Moore's law; and finally, above all, the continuous improvement of photography algorithms, which in fact play a more important role than the sensors. In the last fifteen years we have gained at least 4 stops of sensitivity, three quarters of which thanks to algorithms. Even cameras with relatively small sensors can take very high-quality photos at 3200 ISO, which was completely impossible with film. The lecture will first detail the sequence of subtle algorithmic transformations that develop images from the raw sensor data, leading to the final image while managing light, sharpness and noise as well as possible. It will then study the algorithms dedicated to the automatic correction of the various optical defects of lenses; it will show that the power of these algorithms means that lenses will no longer be designed as before: their design now fully integrates physics and algorithmics, providing optics of better quality that are less bulky, lighter and cheaper. It will stress the growing importance of new processing techniques based on merging successive shots to improve quality along various dimensions (light, noise, depth of field, etc.), in particular for telephones. It will show why basing new cameras directly on algorithms is changing the very heart of their design more and more, so that many other surprising novelties may appear. Similar evolutions are likewise transforming medical and astronomical imaging. Finally, the lecture will underline the importance of the new algorithms aimed at improving the ergonomics of shooting, which make the technical life of the photographer much easier in almost every respect: well-designed human-computer interaction, stabilization of the sensor and the lens to remove motion blur, sophisticated management of light and focus, numerous shooting aids in the now electronic viewfinder, and direct connection with computers and telephones. Seminar by Stéphane Huot, "Human-computer interaction: past perfect and simple future... or the other way around" (abstract above, in the 5:30 pm entry). Lilliad |
Wed 31st Jan 3:30 pm 8:30 pm | The Inria Lille - Nord Europe center welcomes Gérard Berry, of the Collège de France, for his course on digital photography. The course will be followed by a seminar by Stéphane Huot, head of the Mjolnir team. The event will end with a cocktail reception during which Isabelle Herlin, director of the Inria Lille - Nord Europe research center, will present her New Year's wishes. Program: 3:30 pm: Welcome; 3:45 pm: Introduction by Isabelle Herlin; 4:00 - 5:30 pm: Course by Gérard Berry, "La photographie numérique, un parfait exemple de la puissance de l'informatique"; 5:30 - 6:30 pm: Seminar by Stéphane Huot, "Interaction humain-machine : passé composé et futur simple... ou l'inverse"; 6:30 - 6:45 pm: Questions to the two speakers; 7:00 - 8:30 pm: Cocktail. Lilliad |
Fri 19th Jan 10:00 am 12:00 pm | Sylvain Salvati: "On magic set rewriting for Datalog" This talk is meant as an introduction to Datalog program transformations. In particular, I will present the transformation known as "supplementary magic set rewriting", which produces Datalog programs whose semi-naive evaluation behaves similarly to the SLD-resolution-based evaluation of the original programs. I will show the algorithm and program runs on examples drawn from grammatical analysis problems. Lille B21 |
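A minimal sketch of the magic-set idea on the textbook ancestor program (the relation, constants, and the plain bottom-up fixpoint are illustrative assumptions; this shows ordinary magic sets, not the supplementary variant or the SLD comparison from the talk): the rewriting restricts bottom-up evaluation to facts reachable from the query binding ancestor("alice", X).

```python
PARENT = {("alice", "bob"), ("bob", "carol"), ("dave", "erin"), ("erin", "frank")}

def bottom_up_ancestor(parent):
    # anc(X,Y) :- parent(X,Y).
    # anc(X,Y) :- parent(X,Z), anc(Z,Y).
    anc = set(parent)
    while True:
        new = {(x, y) for (x, z) in parent for (z2, y) in anc if z2 == z} - anc
        if not new:
            return anc
        anc |= new

def magic_ancestor(parent, seed):
    # magic(seed).                          (binding taken from the query)
    # magic(Z) :- magic(X), parent(X, Z).   (push the binding down)
    # anc(X,Y) :- magic(X), parent(X, Y).
    # anc(X,Y) :- magic(X), parent(X, Z), anc(Z, Y).
    magic = {seed}
    while True:
        new = {z for x in magic for (x2, z) in parent if x2 == x} - magic
        if not new:
            break
        magic |= new
    anc = {(x, y) for (x, y) in parent if x in magic}
    while True:
        new = {(x, y) for (x, z) in parent if x in magic
                      for (z2, y) in anc if z2 == z} - anc
        if not new:
            return anc
        anc |= new

full = bottom_up_ancestor(PARENT)
focused = magic_ancestor(PARENT, "alice")
print(sorted(t for t in full if t[0] == "alice"))   # answers to ancestor("alice", X)
print(sorted(focused))                               # only facts relevant to that query
```

Running it shows that the rewritten program derives only the facts relevant to the query, while the plain program also derives the unrelated dave/erin/frank facts.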
2017 | |
Fri 10th Nov 10:00 am 11:00 am | Momar Sakho: "Complexity of Certain Query Answering on Hyperstreams" A hyperstream is a sequence of streams with references to others. We study the complexity of computing certain answers for queries defined by automata and evaluated on hyperstreams of words. We show that the problem is PSPACE-complete for deterministic query automata, but that it can be solved in PTime for linear hyperstreams even with factorization. Salle B21 |
Fri 3rd Nov 10:30 am 12:00 pm | Joanna Ochremiak, Paris 7: "Proof complexity of constraint satisfaction problems" Many natural computational problems, such as satisfiability and systems of equations, can be expressed in a unified way as constraint satisfaction problems (CSPs). In this talk I will show that the usual reductions preserving the complexity of the constraint satisfaction problem preserve also its proof complexity. As an application, I will present two gap theorems, which say that CSPs that admit small size refutations in some classical proof systems are exactly the constraint satisfaction problems which can be solved by Datalog. This is joint work with Albert Atserias. B21 |
Fri 13th Oct 11:00 am 1:00 pm | Dimitri Gallois: On parallel rewriting B21 |
Fri 29th Sep 10:00 am 12:00 pm | Nicolas Bacquey: "An algorithm for deciding the equivalence of tree transducers" As an extension of word transformations, tree transformations have numerous applications in computer science: XSLT transformations, Unix package installation and removal, database queries... Likewise, there are many formal models to describe these transformations. However, proving formal properties of these models is often difficult, or even undecidable. In this talk, I will be interested in one of the simplest models for tree transformations, namely deterministic top-down tree transducers (DTOP). It has been known for a while that the equivalence problem of DTOPs can be solved via an earliest-normal-form comparison algorithm that runs in 2EXPTIME. However, when applying this algorithm to practical cases, it seemed that the worst case was not bound to happen often, if ever. I will present a new algorithm for the problem, based on the search for counterexamples via the expansion and unification of a set of rules over states of DTOPs. The most interesting feature of this algorithm is that it runs in exponential time, thus proving that the equivalence problem of DTOPs is in fact EXPTIME-complete. Lille B31 |
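A toy illustration of the counterexample-searching viewpoint (bounded exhaustive search over small input trees only; this is not the expansion/unification algorithm of the talk, and the two transducers below are hypothetical examples):

```python
# Trees over {f/2, a/0, b/0}; a DTOP is given by rules (state, symbol) -> template,
# where ("call", state, i) in a template means "run `state` on the i-th child".

def trees(depth):
    if depth == 0:
        return [("a",), ("b",)]
    smaller = trees(depth - 1)
    return smaller + [("f", l, r) for l in smaller for r in smaller]

T1 = {("q", "a"): ("a",), ("q", "b"): ("b",),
      ("q", "f"): ("f", ("call", "q", 1), ("call", "q", 2))}   # identity
T2 = {("q", "a"): ("a",), ("q", "b"): ("b",),
      ("q", "f"): ("f", ("call", "q", 2), ("call", "q", 1))}   # swaps children

def run(rules, state, tree):
    template = rules[(state, tree[0])]
    def instantiate(t):
        if isinstance(t, tuple) and t and t[0] == "call":
            return run(rules, t[1], tree[t[2]])
        if isinstance(t, tuple) and t[0] == "f":
            return ("f",) + tuple(instantiate(c) for c in t[1:])
        return t
    return instantiate(template)

def counterexample(rules1, rules2, max_depth=3):
    for t in trees(max_depth):
        if run(rules1, "q", t) != run(rules2, "q", t):
            return t
    return None   # no counterexample up to max_depth (not a proof of equivalence!)

print(counterexample(T1, T2))   # prints ('f', ('a',), ('b',))
```

The point of the talk's algorithm is precisely to avoid such blind enumeration and to bound the search symbolically.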
Thu 6th Jul all day | ANR Headwork: General Meeting Rennes |
Fri 16th Jun all day | 09h15-09h45 Welcome coffee 09h45-10h30 Michel de Rougemont: Approximate integration of streaming graph edges 10h30-11h15 Florent Capelli: Understanding the complexity of #SAT using knowledge compilation 11h15-11h45 Yann Strozecki: Enumerating maximal solutions of saturation problems 12h00 Lunch 14h00 Open discussion 16h00 End Inria Lille |
Thu 15th Jun all day | 09h15-09h45 Welcome coffee 09h45-10h30 Pierre Bourhis: Introduction of circuits from database queries 10h30-11h15 Jens Keppeler: Answering FO+MOD queries under updates on bounded degree databases 11h15-12h00 Antoine Amarilli: Enumeration of valuations of circuits 12h00-13h30 Lunch + coffee 13h30-14h30 Jan Ramon: Questions around AI 14h30-15h15 Ahmet Kara: Covers of Query Results 15h15-15h45 Break 15h45-16h30 Alexandre Vigny: Constant delay enumeration for FO queries over databases with local bounded expansion 20h00 Dinner at Le Palermo Inria Lille |
Fri 9th Jun 10:30 am 12:30 pm | Valentin Montmirail: "A Recursive Shortcut for CEGAR: Application to the Modal Logic K Satisfiability Problem" Counter-Example-Guided Abstraction Refinement (CEGAR) has been very successful in model checking. Since then, it has been applied to many different problems. In particular, it has proved to be a highly successful practical approach for solving the PSPACE-complete QBF problem. In this paper, we propose a new CEGAR-like approach for tackling PSPACE-complete problems that we call RECAR (Recursive Explore and Check Abstraction Refinement). We show that this generic approach is sound and complete. Then we propose a specific implementation of the RECAR approach to solve the modal logic K satisfiability problem. We implemented both the CEGAR and RECAR approaches for the modal logic K satisfiability problem within the solver MoSaiC. We compared those approaches experimentally to the state-of-the-art solvers for that problem. The RECAR approach outperforms the CEGAR one for that problem and also compares favorably against the state-of-the-art on the benchmarks considered. "Lille-Salle B21" |
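A toy sketch of the generic CEGAR loop on propositional satisfiability (illustration only; this is neither RECAR nor MoSaiC's encoding of modal logic K): the abstraction is a growing subset of clauses, i.e. an over-approximation of the set of models, refined with a clause that the spurious model violates.

```python
from itertools import product

def solve(clauses, n_vars):
    """Brute-force SAT on a small CNF; returns a model (dict var -> bool) or None."""
    for bits in product([False, True], repeat=n_vars):
        model = dict(enumerate(bits, start=1))
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            return model
    return None

def cegar_sat(clauses, n_vars):
    abstraction = []                      # start from the empty abstraction
    while True:
        model = solve(abstraction, n_vars)
        if model is None:
            return None                   # abstraction UNSAT => full formula UNSAT
        violated = [c for c in clauses
                    if not any(model[abs(l)] == (l > 0) for l in c)]
        if not violated:
            return model                  # the model satisfies the full formula
        abstraction.append(violated[0])   # refine with a clause the model violates

# (x1 or x2) and (not x1) and (not x2) is unsatisfiable
print(cegar_sat([[1, 2], [-1], [-2]], 2))   # prints None
```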
Tue 6th Jun to Fri 9th Jun all day | Visit of Jean-Marc Talbot, Université de Marseille |
Fri 2nd Jun all day | Visit of Floris Geerts, University of Antwerp |
Fri 21st Apr all day | Visit of Florent Capelli, London University |
Fri 24th Mar all day | Visit of Charles Paperman, Université Paris 7 www.liafa.univ-paris-diderot.fr/~paperman/ Inria, 40 Avenue Halley, 59650 Villeneuve d'Ascq, France |
Wed 15th Mar 10:30 am 12:00 pm | Emmanuel Filiot, Université Libre de Bruxelles: "Automata, Logic and Algebra for Word Transductions" This talk will survey old and recent results about word transductions, i.e. functions mapping (finite) words to words. Connections between automata models (transducers), logic and algebra will be presented. Starting with rational functions, defined by (one-way) finite transducers, and the canonical model of bimachines introduced by Reutenauer and Schützenberger, the talk will also target the more expressive class of functions defined by two-way transducers and their equivalent MSO-based formalism. "Lille-Salle B21" |
Wed 15th Mar all day | Visit of Emmanuel Filiot, Université Libre de Bruxelles |
Wed 1st Feb 11:00 am 12:30 pm | Pierre Bourhis: The Chase Inria Lille |
Fri 20th Jan 10:30 am 12:30 pm | Pierre Bourhis: "Tree Automata for Reasoning in Databases and Artificial Intelligence" In database management, one of the principal tasks is to optimize queries so that they can be evaluated efficiently. This is particularly the case for recursive queries, whose evaluation may require crawling the entire database. One of the main questions is therefore to minimize queries, in order to avoid evaluating useless parts of them. The core theoretical question behind this line of work is the inclusion of one query in another. Interestingly, this question is related to an important question in AI: answering a query when the data is incomplete but rules are given to derive new information. This problem is called certain query answering. Although both problems are undecidable in general, in both contexts there are decidable fragments based on guardedness; decidability follows from the fact that there exist witnesses of the problems that have bounded tree-width and whose tree encodings form regular languages. Furthermore, the queries can be translated into MSO. In both contexts, Courcelle's theorem then implies decidability. I will present different results on translating the relevant classes of logical formulas into tree automata, in order to obtain tight bounds for the inclusion of recursive queries and for certain query answering. Inria Lille |
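For readers unfamiliar with the decidability schema alluded to above, here is the standard background statement it relies on (a summary of textbook material, not a statement from the talk):

```latex
% Standard background (not from the talk): MSO satisfiability restricted to
% structures of treewidth at most k is decidable, because width-k structures
% can be encoded as trees, on which MSO-definable properties are recognized
% by tree automata (Courcelle).
\[
  \exists\, \mathfrak{A}\;\bigl(\mathrm{tw}(\mathfrak{A}) \le k \ \wedge\ \mathfrak{A} \models \varphi\bigr)
  \quad\text{is decidable: build } A_{\varphi,k} \text{ with }
  L(A_{\varphi,k}) \neq \emptyset \iff \varphi \text{ has a model of treewidth} \le k.
\]
```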
Wed 11th Jan 2:15 pm 3:25 pm | Michael vanden Boom, Oxford University : Decidable fixpoint logics Fixpoint logics can express dynamic, recursive properties, but often fail to have decidable satisfiability. A notable exception to this is the family of well-behaved "guarded" fixpoint logics, which subsume a variety of query languages and integrity constraints of interest in databases and knowledge representation. In this talk, I will survey some recent results about these logics. Lille B21 |
Mon 9th Jan to Fri 13th Jan all day | Visit of Michael vanden Boom, Oxford University |
2016 | |
Fri 9th Dec all day | Kickoff Headwork Paris MNHN |
Fri 18th Nov 10:30 am 12:00 pm | Florent Capelli Links Seminar "Lille-Salle B21" |
Fri 18th Nov all day | Florent Capelli visit |
Tue 8th Nov 2:30 pm 4:30 pm | Seminar Links by Helmut Seidl: "Equivalence of Deterministic Top-Down Tree-to-String Transducers is Decidable" Abstract: We show that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long-standing open problem in formal language theory. We also present efficient algorithms for subclasses: polynomial time for total transducers with unary output alphabet (over a given top-down regular domain language), and co-randomized polynomial time for linear transducers; these results are obtained using techniques from multi-linear algebra. For our main result, we prove that equivalence can be certified by means of inductive invariants using polynomial ideals. This allows us to construct two semi-algorithms, one searching for a proof of equivalence, one for a witness of non-equivalence. "Lille-Salle B31" |
Mon 7th Nov 2:00 pm 4:00 pm | PhD defense Adrien Boiret |
Fri 4th Nov all day | colis general meeting Paris |
Thu 27th Oct 10:00 am 6:00 pm | Links day |
Thu 27th Oct all day | links day |
Thu 20th Oct 2:00 pm 4:00 pm | Seminar Links by Vincent Hugot: "Top-Down Transducers for Data Trees" Abstract: Tree transducers have a wide range of application domains, ranging from compiler construction, program analysis, and computational linguistics to semi-structured databases and file system transformations. A task common to these domains is to specify and verify transformations of data trees, i.e., trees whose nodes are labeled by data values from an infinite domain. Most existing classes of tree transducers and their formal studies, however, are restricted to trees over finite signatures without data. In this paper, we lift the most prominent class of top-down tree transducers to data trees, such that its good properties are preserved. In particular, we show that top-down transducers for data trees have a decidable equivalence problem, without imposing any linearity restriction as in previous approaches based on symbolic top-down tree transducers. "Lille-Salle B21" |
Thu 13th Oct 2:00 pm 5:30 pm | Project committee |
Thu 13th Oct 2:00 pm 3:00 pm | Seminar Christof Löding "Lille-Salle B21" |
Thu 13th Oct to Fri 14th Oct all day | Visit of Christof Löding |
Fri 30th Sep all day | Arrival of Jose Lozano |
Thu 29th Sep 2:00 pm 4:00 pm | Seminar Links by Aurélien Lemay "Lille-Salle B21" |
Tue 27th Sep all day | IRCICA celebrates its 10th anniversary, Lille |
Fri 9th Sep 2:00 pm 4:00 pm | Momar Sakho "Lille-Salle B21" |
Wed 7th Sep 11:00 am 12:00 pm | jason demagoj |
Wed 31st Aug 10:00 am 1:00 pm | Links Seminar by Domagoj Vrgoč: "Querying Graphs with Data" "Lille-Salle B21" |
Thu 28th Jul all day | Visit of Serge Abiteboul and Victor Vianu |
Mon 11th Jul to Tue 12th Jul all day | Aggreg meeting Marseille |
Mon 27th Jun all day | Colis ANR project: general meeting Inria Paris, Salle 119 "Ada Lovelace" |
Fri 24th Jun 2:00 pm 4:00 pm | Fatima Belkouch: On the HyperCube algorithm for conjunctive queries Abstract: We consider the problem of computing a conjunctive query on a large database in a parallel setting with p servers. Unlike traditional query processing, the complexity is no longer dominated by the number of disk accesses. Typically, a query is evaluated by a sufficiently large number of servers such that the entire data can be kept in the main memory of these servers. The dominant cost becomes that of communicating data and synchronizing among the servers. I will present some interesting results from [1, 2, 3, 4] dealing with the communication complexity of massively parallel computation of a query. The computation is performed in "rounds". First, I will present the Massively Parallel Communication (MPC) model, used to analyze the tradeoff between the number of rounds and the amount of communication required in a massively parallel computing environment. Then I will present the HyperCube (HC) algorithm, which computes a full conjunctive query q in one round, and I will discuss its communication complexity [2]. The main result is the optimal load O(m / p^{1/τ}), where τ is the fractional vertex cover number of the hypergraph of q and m is the input data size. References: [1] Parallel Evaluation of Conjunctive Queries. Paris Koutris, Dan Suciu. PODS 2011. [2] Communication Steps for Parallel Query Processing. Paul Beame, Paris Koutris, Dan Suciu. PODS 2013. [3] Skew in Parallel Query Processing. Paul Beame, Paris Koutris, Dan Suciu. PODS 2014. [4] Worst-Case Optimal Algorithms for Parallel Query Processing. Paris Koutris, Paul Beame, Dan Suciu. ICDT 2016. "Lille-Salle B11" |
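A minimal sketch of the HyperCube/Shares routing on the triangle query Q(x,y,z) :- R(x,y), S(y,z), T(z,x), using a 2 x 2 x 2 grid of servers (the shares, hash function, and data are simplistic placeholders, not the optimal shares analyzed in the cited papers):

```python
import itertools

P1 = P2 = P3 = 2                      # one share per variable x, y, z

def h(v, buckets):                    # stand-in hash; any hash function works
    return hash(v) % buckets

def route(R, S, T):
    servers = {coord: ([], [], []) for coord in
               itertools.product(range(P1), range(P2), range(P3))}
    for (x, y) in R:                  # R knows x and y: replicate along the z axis
        for k in range(P3):
            servers[(h(x, P1), h(y, P2), k)][0].append((x, y))
    for (y, z) in S:                  # S knows y and z: replicate along the x axis
        for i in range(P1):
            servers[(i, h(y, P2), h(z, P3))][1].append((y, z))
    for (z, x) in T:                  # T knows z and x: replicate along the y axis
        for j in range(P2):
            servers[(h(x, P1), j, h(z, P3))][2].append((z, x))
    return servers

def local_join(Rl, Sl, Tl):           # each server joins only its local fragments
    return {(x, y, z) for (x, y) in Rl for (y2, z) in Sl if y2 == y
            for (z2, x2) in Tl if z2 == z and x2 == x}

edges = [(1, 2), (2, 3), (3, 1), (1, 4)]
servers = route(edges, edges, edges)
answers = set().union(*(local_join(*parts) for parts in servers.values()))
print(answers)   # each triangle is found at the single server its vertices hash to
```

The one-round guarantee comes from the fact that every potential answer (x, y, z) is fully assembled at server (h(x), h(y), h(z)); the load analysis in [2] is about choosing the shares p1, p2, p3 optimally.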
Thu 23rd Jun 2:00 pm 3:30 pm | Victor Vianu in Polaris Auditorium IRCICA |
Thu 23rd Jun all day | victor vianu visit |
Mon 20th Jun to Wed 22nd Jun all day | Journées scientifiques Inria, Rennes |
Fri 17th Jun 9:00 am 12:30 pm | PhD Thesis Defense by Tom Sebastian: Evaluation of XPath Queries on XML streams with Networks of Early Nested Word Automata Abstract: The challenge that we tackle in this thesis is the problem of how to answer XPath queries on XML streams with low latency, full coverage, high time efficiency, and low memory costs. We first propose to approximate earliest query answering for navigational XPath queries by compilation to early nested word automata. It turns out that this leads to almost optimal latency and memory consumption. Second, we contribute a formal semantics of XPath 3.0. It is obtained by mapping XPath to the new query language λXP that we introduce. We then show how to compile λXP queries to networks of early nested word automata, and develop streaming algorithms for the latter. Thereby we obtain a streaming algorithm that indeed covers all of XPath 3.0. Third, we develop an algorithm for projecting XML streams with respect to the query defined by an early nested word automaton. Thereby we are able to make our streaming algorithms highly time efficient. We have implemented all our algorithms with the objective to obtain an industrially applicable streaming tool. It turns out that our algorithms outperform all previous approaches in time efficiency, coverage, and latency. |
Thu 16th Jun 2:00 pm 4:00 pm | Nicolas Bacquey Links seminar: Introduction to uniform periodical computation : leader election on periodical cellular automata "Lille-Salle B21" |
Thu 16th Jun 10:00 am 12:00 pm | Hubie Chen, Seminar and Visit "Lille-Salle B21" |
Fri 22nd Apr 10:00 am 11:30 am | Assemblée générale Inria Lille |
Fri 1st Apr all day | Laurent d'Orazio (cancelled) |
Fri 25th Mar all day | Datacert ANR project: general meeting Lyon |
Fri 18th Mar 10:30 am 12:00 pm | Charles Paperman: "Streaming and circuit complexity" Abstract: In this talk, I will present a connection between the streaming complexity and the circuit complexity of regular languages, through a notion of streaming by block. This result provides tight constructions of boolean circuits computing an automaton, thanks to classical and recent results on the circuit complexity of regular languages. I will apply this framework to the streaming schema validation of XML documents. Inria Lille |
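A small sketch of what processing a regular language "by blocks" can look like (an illustration under my own assumptions, not the construction from the talk): each block of the input is summarized by its state-transformation function, and the summaries are composed associatively, which is exactly the shape a shallow boolean circuit can exploit.

```python
from functools import reduce

# DFA for "even number of 'a'" over {'a', 'b'}; states 0 (even) and 1 (odd).
STATES = (0, 1)
def delta(q, c):
    return 1 - q if c == "a" else q

def summary(block):
    """State-transformation function of a block, as a tuple indexed by states."""
    return tuple(reduce(delta, block, q) for q in STATES)

def compose(f, g):                    # apply f first, then g
    return tuple(g[f[q]] for q in STATES)

def accepts(word, block_size=4):
    blocks = [word[i:i + block_size] for i in range(0, len(word), block_size)]
    total = reduce(compose, map(summary, blocks), tuple(STATES))   # identity map
    return total[0] == 0              # start in state 0, accept in state 0

print(accepts("abababba"))            # 4 'a's -> True
print(accepts("ab"))                  # 1 'a'  -> False
```

Because compose is associative, the per-block summaries can be combined by a balanced tree of constant-size gadgets rather than sequentially, which is the intuition behind turning an automaton into a small-depth circuit.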
Fri 18th Mar all day | Visit of Charles Paperman, Université Paris 7 Inria Lille |
Fri 11th Mar 10:30 am 12:00 pm | Seminar Links by Sylvain Salvati: Behavioral verification of higher-order programs Abstract: Higher-order constructions are making their way into mainstream programming languages like Java, C++, Python, Rust... These constructions bring new challenges to the verification of programs, as they make their control flow more complex. In this talk, I will present how methods coming from denotational semantics can prove decidable the verification of certain properties of higher-order programs. These properties are expressed by means of finite-state automata over the possibly infinite execution trees generated by the programs, and can capture safety properties but also liveness and fairness properties. |
Fri 11th Mar all day | Sylvain Salvati: visit and Talk |
Wed 9th Mar 1:30 pm 2:00 pm | Christian Duriez, "30 minutes de science", Inria Lille |
Fri 4th Mar all day | Colis ANR project: general meeting Inria Lille, Salle B21 |
Thu 3rd Mar all day | Kim Nguyen: visit for discussion with Links' members (no talk) Université Paris Sud www.lri.fr/~kn/ B218 |
Fri 19th Feb 11:00 am 3:00 pm | CNRS, Université Lens |
Thu 21st Jan 11:00 am 1:00 pm | Seminar by Vincent Penelle: "Rewriting higher-order stack trees" Higher-order pushdown systems and ground tree rewriting systems can be seen as extensions of suffix word rewriting systems. Both classes generate infinite graphs with interesting logical properties. Indeed, the satisfaction of any formula written in monadic second-order logic (respectively, first-order logic with reachability predicates) can be decided on such a graph. The purpose of this talk is to propose a common extension of both higher-order stack operations and ground tree rewriting. We introduce a model of higher-order ground tree rewriting over trees labelled by higher-order stacks (henceforth called stack trees), which syntactically coincides with ordinary ground tree rewriting at order 1 and with the dynamics of higher-order pushdown automata over unary trees. The infinite graphs generated by this class have a decidable first-order logic with reachability. Formally, an order-n stack tree is a tree labelled by order-(n-1) stacks. Operations of ground stack tree rewriting are represented by a certain class of connected DAGs labelled by basic operations over stack trees, describing the relative application positions of the basic operations appearing on them. Applying a DAG to a stack tree t intuitively amounts to pasting its input vertices onto some leaves of t and simplifying the obtained structure, applying the basic operations labelling the edges of the DAG to the leaves they are appended to, until either a new stack tree is obtained or the process fails, in which case the application of the DAG to t at the chosen position is deemed impossible. This model is a common extension of those of higher-order stack operations presented by Carayol and of ground tree transducers presented by Dauchet and Tison. As a further result, we can define a notion of recognisable sets of operations through a generalisation. The proof that the graphs generated by a ground stack tree rewriting system have a decidable first-order theory with reachability is inspired by the technique of finite set interpretations presented by Colcombet and Löding. "Lille-Salle B21" |
Thu 14th Jan all day | Visit of Pierre Senellart |
Tue 12th Jan to Thu 14th Jan all day | Visit of Antoine Amarilli |
2015 | |
Mon 14th Dec 2:00 pm 4:00 pm | Slawek Staworko's HDR defense: "Symbolic Inference Methods for Databases" M2 building, meeting room |
Fri 20th Nov 10:30 am 12:30 pm | Seminar Links by Stéphane Demri: "Separation Logic and Friends" Abstract: Separation logic is used as an assertion language for Hoare-style proof systems about programs with pointers, and there is an ongoing quest for understanding its complexity and expressive power. There are also a lot of activities to develop verification methods with decision procedures for fragments of practical use. Actually, there exist many variants for separation logic that can be viewed as fragments of second-order logic, as well as variants of modal or temporal logics in which models can be updated dynamically. In this talk, after introducing first principles on separation logic, issues related to decidability, computational complexity and expressive power are discussed. We provide several tight relationships with second-order logics, interval temporal logics or data logics, depending on the variants of the logic and on the syntactic resources available. "Lille-Salle B21" |
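As standard background for readers new to the logic (textbook material, not part of the abstract): the separating conjunction splits the heap into disjoint parts, which is what makes local reasoning and the frame rule sound.

```latex
% Textbook separation-logic background (illustration, not from the talk).
% The heap described by  x |-> 1 * y |-> 2  splits into two disjoint cells,
% so updating the cell at x cannot affect the cell at y:
\[
  \{\, x \mapsto 1 \,\ast\, y \mapsto 2 \,\}\ \ [x] := 7\ \ \{\, x \mapsto 7 \,\ast\, y \mapsto 2 \,\}
\]
% The frame rule, which lets a local specification be reused in a larger heap:
\[
  \frac{\{P\}\ C\ \{Q\}}{\{P \ast R\}\ C\ \{Q \ast R\}}
  \qquad (\text{provided } C \text{ does not modify the free variables of } R)
\]
```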
Fri 13th Nov 10:30 am 12:00 pm | Seminar Links by Iovka Boneva: "Shape Expressions Schemas" Abstract: Shape Expressions Schemas is an expressive schema and constraint language for RDF data. I am going to define the language, illustrate it with examples, then give a validation algorithm and talk about ongoing work. "Lille-Salle B21" |
Thu 29th Oct 10:00 am 12:00 pm | Seminar Links by Antoine Amarilli "Lille-Salle A11" |
Fri 9th Oct 10:30 am 12:30 pm | Seminar Links: Adrien Boiret "Lille-Salle B21" |
Thu 1st Oct 10:30 am 12:30 pm | Seminar Links by Eric Prud'hommeaux: Shape Expressions: (finally) a schema language for RDF graph structure The initial architects envisioned RDF as a knowledge representation language, freeing users from syntactic limitations and revolutionizing the way information was exchanged. While inference and description logics are applied to RDF, its foundation of simple assertions composed of global, unambiguous identifiers has many more mundane and practical applications. Distributed contributions to large (web-scale) data graphs demand an adaptation of tree- and stream-based validation techniques to operate over a graph. Shape Expressions performs an ordered traversal of RDF graphs to (1) validate structural constraints and (2) perform generative semantic actions. "Lille-Salle B21" |
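A toy illustration of shape-based graph validation (a hypothetical Person shape over a hand-made triple set; this is not ShEx syntax nor its validation algorithm): a Person must have exactly one name, and every knows edge must lead to another Person.

```python
TRIPLES = {
    ("alice", "name", "Alice"),
    ("alice", "knows", "bob"),
    ("bob", "name", "Bob"),
    ("carol", "knows", "alice"),          # carol has no name: fails the shape
}

def conforms_person(node, triples, seen=None):
    """Check the hypothetical Person shape by traversing the graph from `node`."""
    seen = set() if seen is None else seen
    if node in seen:                      # assume conformance on cycles
        return True
    seen.add(node)
    names = [o for (s, p, o) in triples if s == node and p == "name"]
    friends = [o for (s, p, o) in triples if s == node and p == "knows"]
    return len(names) == 1 and all(
        conforms_person(f, triples, seen) for f in friends)

for node in ("alice", "bob", "carol"):
    print(node, conforms_person(node, TRIPLES))   # alice True, bob True, carol False
```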
Fri 11th Sep 12:00 pm 1:00 pm | Florent Capelli |