Dec 03

Zenith Seminar: Christophe Godin “Can we Manipulate Tree-forms like Numbers?” 7 dec. 2017

Can we manipulate tree-forms like numbers?
Christophe Godin, Inria

Thursday 7 December at 14h30

Seminar room, Bat. 4

Abstract: Tree-forms are ubiquitous in nature, and recent observation technologies make it increasingly easy to capture their details, as well as the dynamics of their development, in 3 dimensions with unprecedented accuracy. These massive and complex structural data raise new conceptual and computational issues related to their analysis and to the quantification of their variability. Mathematical and computational techniques that usually apply successfully to traditional scalar or vectorial datasets fail on such structural objects: How can we define the average form of a set of tree-forms? How can we compare and classify tree-forms? Can we solve optimization problems in tree-form spaces efficiently? How can we approximate tree-forms? Can their intrinsic exponential computational curse be circumvented? In this talk, I will present recent work carried out with my colleague Romain Azais to approach these questions from a new perspective, in which tree-forms show properties similar to those of numbers or real functions: they can be decomposed, approximated, averaged, and transformed into dual spaces where specific computations can be carried out more efficiently. I will discuss how these first results can be applied to the analysis and simulation of tree-forms in developmental biology.

Permanent link to this article:

Nov 09

Zenith seminar: Ji Liu “Efficient Uncertainty Analysis of Very Big Seismic Simulation Data” 6 dec. 2017

Efficient Uncertainty Analysis of Very Big Seismic Simulation Data
Ji Liu
Zenith Postdoc
Wednesday 6 December at 11h
Room: 02/124, Bat 5
In recent years, big simulation data is commonly generated from specific models in different application domains (astronomy, bioinformatics, social networks, etc.). In general, the simulation data corresponds to meshes that represent, for instance, a seismic soil area. It is very important to analyze the uncertainty of the simulation data in order to safely identify geological or seismic phenomena, e.g. seismic faults. To analyze the uncertainty, a Probability Density Function (PDF) of each point in the mesh is computed and analyzed. However, this may be very time consuming (from several hours to months) using a baseline approach based on parallel processing frameworks such as Spark. In this work, we propose new solutions to efficiently compute and analyze the uncertainty of very big simulation data using Spark. Our solutions use an original distributed architecture design. We propose three general approaches: data aggregation, machine learning prediction and fast processing. We validate our approaches through extensive experiments using big data ranging from hundreds of GB to several TB. The experimental results show that our approach scales up very well and reduces the execution time by a factor of 33 (down to the order of seconds or minutes) compared with a baseline approach.
This work is part of the HPC4E European project; it is joint work with LNCC, Brazil, co-authored with N. Moreno, E. Pacitti, F. Porto and P. Valduriez.
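As an illustrative aside, the per-point PDF computation described in the talk can be sketched on a single machine with NumPy histograms. This is not the distributed Spark solution of the work; the mesh size, number of runs, and bin count below are made-up toy inputs:

```python
import numpy as np

def pointwise_pdfs(runs, bins=16):
    """Estimate an empirical PDF for each mesh point.

    runs: array of shape (n_runs, n_points) -- one simulated field per run.
    Returns (pdfs, edges): pdfs has shape (n_points, bins) and each row
    integrates to 1 over the shared bin edges.
    """
    runs = np.asarray(runs, dtype=float)
    edges = np.linspace(runs.min(), runs.max(), bins + 1)
    pdfs = np.empty((runs.shape[1], bins))
    for p in range(runs.shape[1]):
        hist, _ = np.histogram(runs[:, p], bins=edges, density=True)
        pdfs[p] = hist
    return pdfs, edges

# Toy example: 200 simulation runs over a 5-point "mesh".
rng = np.random.default_rng(0)
runs = rng.normal(loc=np.arange(5), scale=1.0, size=(200, 5))
pdfs, edges = pointwise_pdfs(runs)
widths = np.diff(edges)
print((pdfs * widths).sum(axis=1))  # each per-point PDF integrates to ~1
```

The expensive part at scale is exactly this per-point loop over terabytes of mesh data, which is what the talk's distributed architecture addresses.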


Nov 08

IBC & SciDISC seminar: Marta Mattoso “Human-in-the-loop to Fine-tune Data in Real Time” 14 dec. 2017

Human-in-the-loop to Fine-tune Data in Real Time
Marta Mattoso
COPPE/UFRJ, Rio de Janeiro

14 December 2017, 11h

Room 1/124, Bat.5

In long-lasting exploratory executions, one often needs to fine-tune several parameters of complex computational models, because they may significantly impact performance. Listing all possible combinations of parameters and exhaustively trying them all is nearly impossible, even on high-performance computers. Because of the exploratory nature of these computations, it is hard to determine before the execution which parameters and which values will work best to validate the initial hypothesis, even for the most experienced users. For this reason, after the initial setup, the user starts the computation and fine-tunes specific parameters based on online analysis of intermediate data. In this talk we present the challenges in supporting the user with data analysis to monitor, evaluate and adjust executions in real time. One problem in these executions is that, after some hours, users can lose track of what has been tuned at early execution stages if the adaptations are not properly registered. We discuss using techniques from provenance data management and human-in-the-loop approaches to address the problem of adapting and tracking online parameter fine-tuning in several applications.
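A minimal sketch of the "register the adaptations" idea, assuming nothing about the actual provenance system behind the talk; the `TuningLog` class, parameter names and values below are all hypothetical:

```python
import time

class TuningLog:
    """Toy provenance log: record every online parameter adaptation so the
    user can later reconstruct what was tuned, when, and from what value."""

    def __init__(self, params):
        self.params = dict(params)
        self.history = []  # (timestamp, name, old_value, new_value)

    def tune(self, name, value):
        """Apply an adaptation and keep its provenance record."""
        self.history.append((time.time(), name, self.params[name], value))
        self.params[name] = value

    def audit(self, name):
        """All recorded adaptations of one parameter, oldest first."""
        return [(old, new) for _, n, old, new in self.history if n == name]

log = TuningLog({"learning_rate": 0.1, "mesh_resolution": 64})
log.tune("learning_rate", 0.05)  # adjusted after inspecting intermediate data
log.tune("learning_rate", 0.01)
print(log.audit("learning_rate"))  # [(0.1, 0.05), (0.05, 0.01)]
```

Even this trivial log answers the "what did I change hours ago?" question that the abstract identifies; real provenance systems additionally capture the intermediate data that motivated each change.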


Nov 03

Post-doc position: Similarity Search in Large Scale Time Series

Title: Similarity Search in Large Scale Time Series

We are seeking a postdoctoral fellow in time series analytics, in collaboration with Safran.


Nowadays, sensor technology is improving, and the number of sensors used for collecting information from different environments is increasing, e.g., from critical systems such as airplane engines. This massive use of sensors results in the production of large-scale data, usually in the form of time series. With such complex and massive sets of time series, fast and accurate similarity search is key to many data mining tasks such as shapelet extraction, motif discovery, classification and clustering.

This postdoc position is proposed in the context of a collaboration between the Inria Zenith team and Safran, a multinational company specialized in aircraft and rocket engines. We are interested in correlation detection over multi-dimensional time series, e.g. those generated by engine check tests. For instance, given a time slice (generated using a set of input parameters) of a very large time series, the objective is to quickly detect the time slice that is most similar to it, and thereby find the input parameter values that generate similar outputs.
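A brute-force sketch of such a query under z-normalized Euclidean distance, the usual measure in this literature; the scalable indexing is the point of the project, so this naive scan is for illustration only and the data is synthetic:

```python
import numpy as np

def znorm(x):
    """Z-normalize so that similarity is invariant to offset and scale."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def best_match(series, query):
    """Index and distance of the subsequence of `series` most similar to
    `query` under z-normalized Euclidean distance (exhaustive scan)."""
    m = len(query)
    q = znorm(np.asarray(query, dtype=float))
    best_i, best_d = -1, np.inf
    for i in range(len(series) - m + 1):
        d = np.linalg.norm(znorm(np.asarray(series[i:i + m], dtype=float)) - q)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

rng = np.random.default_rng(0)
series = rng.normal(size=500)
query = series[200:232]          # plant a known slice as the query
i, d = best_match(series, query)
print(i, d)                      # the planted slice is recovered at index 200
```

The scan is O(n·m) per query; index structures such as iSAX-family trees exist precisely to avoid this full pass over very large series.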

One of the distinguishing features of our underlying application is the huge volume of data to be analyzed. To deal with such datasets, we intend to develop scalable solutions that take advantage of parallel frameworks (such as MapReduce, Spark or Flink) and allow us to build efficient parallel data mining systems on ordinary machines. We will capitalize on our recent projects, in which we developed parallel solutions for indexing and analyzing very large datasets, e.g. [YAMP2017, SAM2017, SAM2015, AHMP2015].

One possibility for scalable correlation detection in this project is to build on related work, including the matrix profile index [YZUB+2016], over time series generated by thousands of sensors. One of the tasks in this project will be to develop distributed solutions for constructing and exploiting such indexes over large-scale time series coming from massively distributed sensors.
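The matrix profile itself can be illustrated with a naive O(n²) computation that includes the trivial-match exclusion zone; the cited algorithms scale far better, so this is purely didactic and the planted-motif series is invented:

```python
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def naive_matrix_profile(ts, m):
    """Brute-force matrix profile: for each length-m subsequence, the
    z-normalized Euclidean distance to its nearest non-overlapping match."""
    n = len(ts) - m + 1
    subs = [znorm(ts[i:i + m]) for i in range(n)]
    profile = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if abs(i - j) >= m:  # exclusion zone rules out trivial matches
                profile[i] = min(profile[i],
                                 np.linalg.norm(subs[i] - subs[j]))
    return profile

# Plant the same motif at both ends; the profile drops to ~0 there.
motif = np.sin(np.linspace(0, 2 * np.pi, 32))
ts = np.concatenate([motif, np.random.default_rng(1).normal(size=40), motif])
mp = naive_matrix_profile(ts, m=32)
print(mp[0], mp[72])
```

Low profile values flag motifs and high values flag discords, which is why a single index supports motif discovery, discord detection and shapelet-style tasks at once.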

[YAMP2017] Djamel-Edine Yagoubi, Reza Akbarinia, Florent Masseglia, Themis Palpanas. DPiSAX: Massively Distributed Partitioned iSAX. IEEE International Conference on Data Mining (ICDM),  2017.

[SAM2017] Saber Salah, Reza Akbarinia, Florent Masseglia. Data placement in massively distributed environments for fast parallel mining of frequent itemsets. Knowledge and Information Systems (KAIS), 53(1), 207-237, 2017.

[SAM2015] Saber Salah, Reza Akbarinia, Florent Masseglia, Fast Parallel Mining of Maximally Informative k-Itemsets in Big Data. IEEE International Conference on Data Mining (ICDM), 2015.

[AHMP2015] Tristan Allard, Georges Hébrail, Florent Masseglia, Esther Pacitti. Chiaroscuro: Transparency and Privacy for Massive Personal Time-Series Clustering.  ACM Conference on Management of Data (SIGMOD), pp. 779-794, 2015.

[YZUB+2016] C-C M. Yeh, Y. Zhu, L. Ulanova, N. Begum, Y. Ding, H. Anh Dau, D. Furtado Silva, A. Mueen, E. Keogh. Matrix Profile I: All Pairs Similarity Joins for Time Series: A Unifying View that Includes Motifs, Discords and Shapelets. IEEE International Conference on Data Mining (ICDM), 2016.


This work will be done in the context of a collaboration between the Inria Zenith team and Safran. The Zenith project-team, headed by Patrick Valduriez, aims to propose new solutions related to scientific data management. Our research topics include the management and analysis of massive and complex data, such as uncertain data, in highly distributed environments. Our team is located in Montpellier, a very lively city in the south of France.

Safran is a multinational company specialized in aircraft and rocket engines and aerospace component manufacturing.

Skills and profiles

Strong background in data mining

Strong skills in parallel data processing with Spark

A Ph.D. in computer science or mathematics

Duration, Salary and Location

Duration: 12 months

Annual Gross Salary: up to 42K€ depending on your experience.

Starting date: flexible but ideally as soon as possible.

This work will be done mainly in Montpellier, with regular visits to the Safran team in Paris.


Contacts: Florent Masseglia and Reza Akbarinia


Jun 21

PhD Position: Distributed Management of Scientific Workflows for High-Throughput Plant Phenotyping

Directors: Esther Pacitti (Zenith team, University of Montpellier and Inria, LIRMM), François Tardieu (UMR LEPSE, INRA) and Christophe Pradal (CIRAD)


Funding: #DigitAg-Inria (Inria PhD contract, net salary/month: 1600€)

Keywords: Scientific Workflow, Distributed Computing, Cloud & Grid Computing, Phenotyping, Computer Vision, Reproducibility

Skills: We are looking for candidates strongly motivated by challenging research topics in a multi-disciplinary environment. The applicant should have a good background in computer science, including distributed computing, databases and computer vision. Basic knowledge of scientific workflows would be a plus. For software development, the C, Python or Java languages are preferred.


This work is part of a new project on scientific workflows for plant phenotyping using cloud and grid computing, in the context of the Digital Agriculture Convergence Lab (#DigitAg) and in collaboration with the PIA Phenome project. This PhD will be directed both by computer scientists (E. Pacitti, C. Pradal) and by a biologist (F. Tardieu), who will provide the data and the use cases relevant to plant phenotyping.

In the context of climate change and the improvement of crop performance, plant scientists study traits of interest in order to discover their natural genetic variations and identify their genetic controls. One important category is morphological traits, which determine the 3D plant architecture [8]. This geometric information is useful to compute in-silico light interception and radiation-use efficiency (RUE), which are essential components for understanding the genetic controls of biomass production and yield [9].

During the last decade, high-throughput phenotyping platforms have been designed to acquire quantitative data that help in understanding plant responses to environmental conditions and the genetic control of these responses. Plant phenotyping consists in observing the physical and biochemical traits of plant genotypes in response to environmental conditions. Recently, projects such as the Phenome project have started to use high-throughput platforms to observe the dynamic growth of a large number of plants under different conditions, both in the field and on platforms. These widely instrumented platforms produce huge datasets (images of thousands of plants, data collected by various sensors, etc.) that keep increasing with complex in-silico experiments. For example, the seven facilities of Phenome produce 150 to 200 terabytes of data per year. These data are heterogeneous (images, time courses), multiscale (from the organ to the field) and come from different sites. Farmers and breeders who use sensors from precision agriculture are now able to capture huge amounts of diverse data (e.g. images). Thus, the major problems become the automatic analysis of these massive datasets and the reproducibility of the in-silico experiments.

We define a scientific workflow as a pipeline to analyze experiments in an efficient and reproducible way, allowing scientists to express multi-step computational tasks (e.g. upload input files, preprocess the data, run various analyses and simulations, aggregate the results, etc.). OpenAlea [6] is a scientific workflow system that provides methods and software for plant modeling at different scales. It has been in constant use since 2004 by the plant research community: the system has been downloaded 670 000 times and the web site has 10 000 unique visitors a month according to the OpenAlea web repository.

Within the framework of Phenome, we are developing Phenomenal, a software package in OpenAlea dedicated to the analysis of phenotyping data in connection with ecophysiological models [9,10]. Phenomenal provides fully automatic workflows dedicated to the 3D reconstruction, segmentation and tracking of plant organs. OpenAlea radiative models are used to estimate the light-use efficiency and the in-silico crop performance in a large range of contexts. As an illustration, Figure 1 shows the Phenomenal workflow that automatically reconstructs the 3D shoot architecture of plants from multi-view images acquired with the PhenoArch platform. This workflow has been tested on various annual and perennial plants such as maize, cotton, sorghum and young apple trees.

Executing such complex scientific workflows on huge datasets may take a lot of time. Thus, we have started to design an infrastructure, called InfraPhenoGrid, to distribute the computation of workflows using the EGI/France Grilles computing facilities [1]. EGI provides access to a grid with multiple sites around the world, each with one or more clusters. This environment is well suited for data-intensive science, with different research groups collaborating at different sites. In this context, the goal is to address two critical issues in the management of plant phenotyping experiments: (i) scheduling distributed computation and (ii) allowing reuse and reproducibility of experiments [1,2].

Thesis subject

The proposed PhD thesis consists in scheduling the Phenomenal workflow on distributed resources and providing proofs of concept.

Scheduling distributed computation.

We shall adopt an algebraic approach, which is well suited for the optimization and parallelization of data-intensive scientific workflows [3]. The scheduling problem resembles scientific workflow execution in a multisite cloud [4,5]. The objective of the thesis is to go further and propose workflow parallelization, dynamic task allocation and data placement techniques that work with heterogeneous sites, as in EGI. To exchange and share intermediate data, we plan to use iRODS, an open-source data management system that federates distributed and heterogeneous data resources into a single logical file system [7]. In this context, the challenge is to deal with both task allocation and data placement among the different sites, while taking into account their heterogeneity, for instance different transfer capabilities and cost models.
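A toy sketch of joint task allocation and data placement under a simple sequential-per-site cost model; the site names, speeds and bandwidths below are invented for illustration, not taken from EGI:

```python
def greedy_schedule(tasks, sites):
    """Greedy list scheduling over heterogeneous sites.

    tasks: list of (name, work, input_site, input_size), in submission order.
    sites: dict name -> {"speed": work units/s, "bw": {other_site: MB/s}}.
    Returns {task_name: (chosen_site, estimated_finish_time)}, assuming each
    site runs its assigned tasks sequentially and inputs may need transfer.
    """
    ready_at = {s: 0.0 for s in sites}  # when each site becomes free
    plan = {}
    for name, work, in_site, size in tasks:
        best = None
        for s, cfg in sites.items():
            transfer = 0.0 if s == in_site else size / cfg["bw"][in_site]
            finish = ready_at[s] + transfer + work / cfg["speed"]
            if best is None or finish < best[1]:
                best = (s, finish)
        plan[name] = best
        ready_at[best[0]] = best[1]
    return plan

sites = {
    "site_a": {"speed": 10.0, "bw": {"site_b": 5.0}},
    "site_b": {"speed": 4.0, "bw": {"site_a": 5.0}},
}
tasks = [("reconstruct", 100.0, "site_a", 50.0),
         ("segment", 40.0, "site_a", 50.0)]
print(greedy_schedule(tasks, sites))
```

Even this greedy heuristic shows the core trade-off named above: a faster remote site may lose to a slower local one once data transfer cost is charged.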

Allowing reuse and reproducibility of experiments

Modern scientific workflow systems are now equipped with modules that offer assistance for this, notably provenance modules, which are able to trace the parameter settings chosen at runtime and the datasets used as input of (or produced by) each workflow task. However, enabling workflow reproducibility and reuse depends on providing users with the means to interact with provenance information. The originality of the thesis lies in considering tools that are popular among data scientists, namely interactive notebooks (like RStudio or Jupyter), as a means for users to interact with provenance information directly extracted from workflow runs. Challenges are numerous and include providing users with simplified (sequential), yet correct (in terms of the data dependencies involved) provenance information, hiding the complexity of highly parallel executions.

The approaches proposed in this PhD will be implemented in OpenAlea. Image data from controlled and field phenotyping experiments will be provided by the Phenome project. The grid and cloud infrastructure for the experiments will be France Grilles (European Grid Infrastructure, EGI).


[1] C. Pradal, S. Artzet, J. Chopard, D. Dupuis, C. Fournier, M. Mielewczik, V. Nègre, P. Neveu, D. Parigot, P. Valduriez, S. Cohen-Boulakia: InfraPhenoGrid: A scientific workflow infrastructure for plant phenomics on the Grid. Future Generation Comp. Syst. 67: 341-353 (2017).

[2] S. Cohen-Boulakia, K. Belhajjame, O. Collin, J. Chopard, C. Froidevaux, A. Gaignard, K. Hinsen, P. Larmande, Y. Le Bras, F. Lemoine, F. Mareuil, H. Ménager, C. Pradal, C. Blanchet: Scientific workflows for computational reproducibility in the life sciences: Status, challenges and opportunities. Future Generation Computer Systems. doi: 10.1016/j.future.2017.01.012 (2017).

[3] E. Ogasawara, J. F. Dias, D. de Oliveira, F. Porto, P. Valduriez, M. Mattoso. An Algebraic Approach for Data-centric Scientific Workflows. In Proceedings of the VLDB Endowment (PVLDB), 4(12): 1328-1339 (2011).

[4] J. Liu, E. Pacitti, P. Valduriez, M. Mattoso: A Survey of Data-Intensive Scientific Workflow Management. J. Grid Comput. 13(4): 457-493(2015).

[5] J. Liu, E. Pacitti, P. Valduriez, D. de Oliveira, M. Mattoso: Multi-objective scheduling of Scientific Workflows in multisite clouds. Future Generation Computer Systems, 63: 76-95 (2016)

[6] C. Pradal, C. Fournier, P. Valduriez, S. Cohen-Boulakia: OpenAlea: scientific workflows combining data analysis and simulation. SSDBM: 11:1-11:6 (2015).

[7] A. Rajasekar, R. Moore, C. Y. Hou, C. A. Lee, R. Marciano, A. de Torcy et al. iRODS Primer: integrated rule-oriented data system. Synthesis Lectures on Information Concepts, Retrieval, and Services, 2(1), 1-143. (2010).

[8] M. Balduzzi, B. M. Binder, A. Bucksch, C. Chang, L. Hong, A. Lyer-Pascuzzi, C. Pradal, E. Sparks. Reshaping plant biology: Qualitative and quantitative descriptors for plant morphology. Frontiers in Plant Science 8:117 (2017).

[9] L. Cabrera‐Bosquet, C. Fournier, N. Brichet, C. Welcker, B. Suard, F. Tardieu. High‐throughput estimation of incident light, light interception and radiation‐use efficiency of thousands of plants in a phenotyping platform. New Phytologist, 212(1), 269-281 (2016).

[10] S. Artzet, N. Brichet, L. Cabrera, T. W. Chen, J. Chopard, M. Mielewczik, C. Fournier, C. Pradal. Image workflows for high throughput phenotyping platforms. BMVA technical meeting: Plants in Computer Vision, London, United Kingdom (2016).


May 15

IBC seminar: Dennis Shasha “Reducing Errors by Refusing to Guess (Occasionally)” 1 June 2017

IBC Seminar
Thursday 1 June 2017, 15h, room 1/124, bat. 5 Campus Saint Priest, Montpellier

Reducing Errors by Refusing to Guess (Occasionally)

Dennis Shasha

New York University
We propose a meta-algorithm to reduce the error rate of state-of-the-art machine learning algorithms by refusing to make predictions in certain cases, even when the underlying algorithms suggest predictions. Intuitively, our new Conjugate Prediction approach estimates the likelihood that a prediction will be in error, and when that likelihood is high, the approach refuses to go along with that prediction. Unlike other approaches, we can probabilistically guarantee an error rate on the predictions we do make (denoted the decisive predictions). Empirically, on seven diverse data sets from genomics, ecology, image recognition and gaming, our method can probabilistically guarantee to reduce the error rate to 1/4 of that of the state-of-the-art machine learning algorithm, at a cost of between 11% and 58% refusals. Competing state-of-the-art methods refuse at roughly twice our rate (sometimes refusing all suggested predictions).
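This is not Shasha's Conjugate Prediction algorithm, but the reject-option idea it builds on can be sketched with a confidence threshold calibrated on held-out data; all probabilities and labels below are toy values:

```python
import numpy as np

def selective_predict(test_probs, target_error, cal_probs, cal_labels):
    """Reject-option sketch: pick the smallest confidence threshold whose
    error on accepted calibration points is <= target_error, then abstain
    (output -1) on test points whose confidence falls below it."""
    conf = cal_probs.max(axis=1)
    pred = cal_probs.argmax(axis=1)
    threshold = 1.0 + 1e-9  # refuse everything if no threshold qualifies
    for t in np.unique(conf):  # candidate thresholds, ascending
        keep = conf >= t
        if (pred[keep] != cal_labels[keep]).mean() <= target_error:
            threshold = t
            break
    out = test_probs.argmax(axis=1)
    out[test_probs.max(axis=1) < threshold] = -1
    return out

# Calibration set: the low-confidence point is the one the model gets wrong.
cal_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.55, 0.45], [0.4, 0.6]])
cal_labels = np.array([0, 0, 1, 1])
test_probs = np.array([[0.95, 0.05], [0.5, 0.5]])
out = selective_predict(test_probs, 0.0, cal_probs, cal_labels)
print(out)  # the confident point is predicted, the uncertain one refused
```

The trade-off in the abstract (lower error at the cost of more refusals) corresponds directly to raising this threshold; the talk's contribution is making the error guarantee probabilistic rather than heuristic.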


Apr 26

IBC seminar: Fabio Porto “Simulation Data Management” 1 June 2017

IBC Seminar
Thursday 1 June 2017, 14h, room 1/124, bat. 5 Campus Saint Priest, Montpellier

Simulation Data Management

Fabio Porto

LNCC, Rio de Janeiro, Brazil

Numerical simulation has attracted the interest of different areas, from engineering and biology to astronomy. By using simulations, scientists can analyse the behaviour of hard-to-observe phenomena and practitioners can test techniques before engaging in dangerous actions, such as brain surgery. Simulations are CPU-intensive applications, normally running on supercomputers such as the Santos Dumont machine at LNCC. They also produce a huge amount of data, distributed over hundreds of files and including different structures, such as a domain discretisation mesh, field values and domain topology information. These data must be integrated and structured in a way that scientists can easily and efficiently interpret the simulation outcome, using analytical queries or scientific visualization tools. In this talk we will present the main challenges involved in managing simulation data and highlight recent results in this area.


Feb 15

IBC seminar: Tamer Özsu “Approaches to RDF Data Management and SPARQL Query Processing” 9 March 2017

IBC Seminar
Thursday 9 March 2017, 14h, room 3/124, bat. 5 Campus Saint Priest, Montpellier

Approaches to RDF Data Management and SPARQL Query Processing
M. Tamer Özsu
University of Waterloo, Canada
Resource Description Framework (RDF) was originally proposed by the World Wide Web Consortium (W3C) for modeling Web objects as part of developing the “semantic web”. It has found uses in other areas such as the management of biological data (e.g., UniProt) and web data integration (through Linked Open Data). W3C has also proposed SPARQL as the query language for accessing RDF data repositories. Given the growing size of RDF data sets, and the existence of a declarative query language, the topic is ripe for the application of state-of-the-art data management techniques. In this talk, I will discuss the various approaches to RDF data management and SPARQL query processing, including relational techniques, graph approaches, distributed RDF data management, and Linked Open Data querying, using examples from a number of domains including biological data sets, and popular movie databases.
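The core of SPARQL evaluation, basic graph pattern matching, can be sketched in a few lines over an in-memory triple list; the triples echo the UniProt example from the abstract but are illustrative only:

```python
# Minimal basic-graph-pattern matching in the spirit of SPARQL evaluation.
# Variables start with "?"; constants must match triple components exactly.
triples = [
    ("uniprot:P04637", "rdf:type", "up:Protein"),
    ("uniprot:P04637", "up:organism", "taxon:9606"),
    ("taxon:9606", "up:scientificName", "Homo sapiens"),
]

def match(pattern, binding, triples):
    """Yield extensions of `binding` that satisfy one triple pattern."""
    for t in triples:
        b = dict(binding)
        ok = True
        for p, v in zip(pattern, t):
            if p.startswith("?"):
                if b.setdefault(p, v) != v:  # conflict with earlier binding
                    ok = False
                    break
            elif p != v:
                ok = False
                break
        if ok:
            yield b

def bgp(patterns, triples):
    """Join the solutions of all patterns, nested-loop style."""
    bindings = [{}]
    for pat in patterns:
        bindings = [b2 for b in bindings for b2 in match(pat, b, triples)]
    return bindings

query = [("?p", "rdf:type", "up:Protein"),
         ("?p", "up:organism", "?t"),
         ("?t", "up:scientificName", "?name")]
print(bgp(query, triples))
```

Relational and graph-based RDF systems differ mainly in how they index the triples and order these joins, which is precisely the design space the talk surveys.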

M. Tamer Özsu is Professor of Computer Science at the David R. Cheriton School of Computer Science of the University of Waterloo. His research is in data management focusing on large-scale data distribution and management of non-traditional data. He is a Fellow of the Royal Society of Canada, of the Association for Computing Machinery (ACM), and of the Institute of Electrical and Electronics Engineers (IEEE). He is an elected member of the Science Academy of Turkey, and member of Sigma Xi and American Association for the Advancement of Science (AAAS).


Jan 23

Zenith seminar: Fabio Porto “Database System Support of Simulation Data” 27 jan. 2017

Zenith seminar, Friday 27 January, 10h30-12h, room 2/124, Bat 5, Campus Saint Priest

Database System Support of Simulation Data
Fabio Porto
LNCC, Rio de Janeiro, Brazil

Supported by increasingly efficient HPC infrastructure, numerical simulations are rapidly expanding to fields such as oil and gas, medicine and meteorology. As simulations become more precise and cover longer periods of time, they may produce files with terabytes of data that need to be efficiently analyzed. In this work, we investigate techniques for managing such data using an array DBMS. We take advantage of multidimensional arrays, which nicely model the dimensions and variables used in numerical simulations. However, a naive approach to mapping simulation data files may lead to sparse arrays, impacting query response time, in particular when the simulation uses irregular meshes to model its physical domain. We propose efficient techniques to map coordinate values in numerical simulations to evenly distributed cells in array chunks, with the use of equi-depth histograms and space-filling curves. We implemented our techniques in SciDB and, through experiments over real-world data, compared them with two other approaches: a row-store and a column-store DBMS. The results indicate that multidimensional arrays and column-stores are much faster than a traditional row-store system for queries over a large amount of simulation data. They also help identify the scenarios where array DBMSs are most efficient, and those where they are outperformed by column-stores.
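The mapping idea, equi-depth bins to even out skewed coordinates plus a space-filling (Morton/Z-order) key to cluster nearby cells, can be sketched as follows; the skewed 2-D data is made up and this is not the SciDB implementation:

```python
import numpy as np

def equi_depth_edges(values, nbins):
    """Interior bin edges such that each bin gets roughly equal occupancy."""
    qs = np.linspace(0, 1, nbins + 1)[1:-1]
    return np.quantile(values, qs)

def morton2(ix, iy, bits=8):
    """Interleave the bits of two bin indices into a Z-order (Morton) key,
    so that cells close in 2-D tend to be close in the 1-D key order."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

# Skewed coordinates: equal-width bins would be badly unbalanced here,
# but equi-depth bins give each cell a similar number of points.
rng = np.random.default_rng(2)
x, y = rng.exponential(size=1000), rng.exponential(size=1000)
ex, ey = equi_depth_edges(x, 4), equi_depth_edges(y, 4)
ix, iy = np.searchsorted(ex, x), np.searchsorted(ey, y)
keys = [morton2(int(a), int(b)) for a, b in zip(ix, iy)]
counts = np.bincount(ix, minlength=4)
print(counts)  # the 4 x-bins each hold ~250 of the 1000 points
```

Balanced bins keep array chunks dense, and the Morton key decides which chunk a cell lands in, which is the essence of the talk's anti-sparsity mapping.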


Dec 03

Zenith seminar: Pierre Bourhis “A Formal Study of Collaborative Access Control in Distributed Datalog” 2 dec. 2016

Zenith seminar: Friday 2 Dec., 10h30, room 3/124, Bat. 5

A Formal Study of Collaborative Access Control in Distributed Datalog
Pierre Bourhis
CNRS et Inria Lille

We formalize and study a declaratively specified collaborative access control mechanism for data dissemination in a distributed environment. Data dissemination is specified using distributed datalog. Access control is also defined by datalog-style rules, at the relation level for extensional relations, and at the tuple level for intensional ones, based on the derivation of tuples. The model also includes a mechanism for “declassifying” data that allows circumventing overly restrictive access control. We consider the complexity of determining whether a peer is allowed to access a given fact, and address the problem of achieving the goal of disseminating certain information under some access control policy. We also investigate the problem of information leakage, which occurs when a peer is able to infer facts to which the peer is not allowed access by the policy. Finally, we consider access control extended to facts equipped with provenance information, motivated by the many applications where such information is required. We provide semantics for access control with provenance, and establish the complexity of determining whether a peer may access a given fact together with its provenance. This work is motivated by the access control of the Webdamlog system, whose core features it formalizes.
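A toy rendering of tuple-level access derived from provenance (not the formal Webdamlog semantics): a derived tuple is readable by a peer only if some derivation of it uses only facts that peer may read. Peers, relations and ACLs below are invented:

```python
from itertools import product

# Extensional facts with relation-level ACLs: which peers may read each edge.
edges = {("a", "b"): {"p1", "p2"}, ("b", "c"): {"p1"}}

def reachable_with_acl(edges):
    """Derive path(x, z) by naive transitive-closure fixpoint; each derived
    tuple carries the union over its derivations of the intersection of the
    ACLs of the facts used (a simple why-provenance-based access rule)."""
    path = {t: set(acl) for t, acl in edges.items()}
    changed = True
    while changed:
        changed = False
        for (x, y1), (y2, z) in product(list(path), list(path)):
            if y1 == y2:
                acl = path[(x, y1)] & path[(y2, z)]
                if (x, z) not in path or not acl <= path[(x, z)]:
                    path[(x, z)] = path.get((x, z), set()) | acl
                    changed = True
    return path

path = reachable_with_acl(edges)
print(path[("a", "c")])  # only p1 may read both edges behind path(a, c)
```

Peer p2 can read edge(a, b) but not the derived path(a, c), because every derivation of that tuple passes through edge(b, c); this is the intuition behind deciding access from the derivation of tuples.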
