IBC seminar: Dennis Shasha “Reducing Errors by Refusing to Guess (Occasionally)” 1 June 2018

IBC seminar, organized by Zenith
Friday, June 1, 2018, 2 pm
Salle des séminaires, Bat. 4, LIRMM

SafePredict: Reducing Errors by Refusing to Guess (Occasionally)
Dennis Shasha
Courant Institute, New York University

We propose a meta-algorithm that reduces the error rate of state-of-the-art machine learning algorithms by occasionally refusing to make a prediction, even when the underlying algorithm suggests one. Intuitively, our SafePredict approach estimates the likelihood that a prediction will be in error and, when that likelihood is high, refuses to go along with it. Unlike other approaches, ours can probabilistically guarantee an error rate on the predictions it does make (the decisive predictions). Empirically, on seven diverse data sets from genomics, ecology, image recognition, and gaming, our method can probabilistically guarantee to reduce the error rate to 1/4 of that of the state-of-the-art machine learning algorithm, at a cost of between 11% and 58% refusals. Competing state-of-the-art methods refuse at roughly twice our rate (sometimes refusing all suggested predictions).
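The refusal mechanism described in the abstract can be illustrated with a toy confidence-threshold wrapper. This is a deliberately simplified sketch, not the SafePredict algorithm itself (whose refusal decision carries a probabilistic guarantee); all names and numbers below are hypothetical:

```python
# Toy illustration of prediction-with-refusal (NOT the SafePredict
# algorithm itself, which provides probabilistic error guarantees):
# abstain whenever the base model's confidence is below a threshold,
# and measure error only on the decisive (non-refused) predictions.

def refusing_predict(scored_preds, threshold):
    """scored_preds: (confidence, label) pairs. None marks a refusal."""
    decisions = [p if c >= threshold else None for c, p in scored_preds]
    refusal_rate = sum(d is None for d in decisions) / len(decisions)
    return decisions, refusal_rate

def decisive_error_rate(decisions, labels):
    pairs = [(d, y) for d, y in zip(decisions, labels) if d is not None]
    return sum(d != y for d, y in pairs) / len(pairs) if pairs else 0.0

# Hypothetical (confidence, predicted label) pairs and true labels.
scored = [(0.95, 1), (0.60, 0), (0.90, 0), (0.55, 1), (0.99, 1)]
labels = [1, 1, 0, 0, 1]

decisions, refusal_rate = refusing_predict(scored, threshold=0.8)
print(refusal_rate)                            # 0.4
print(decisive_error_rate(decisions, labels))  # 0.0 (vs 0.4 with no refusals)
```

In this toy run, the two low-confidence predictions are exactly the two that would have been wrong, so refusing 40% of the time drives the decisive error rate to zero.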

Short bio

Dennis Shasha is the Julius Silver Professor of computer science at the Courant Institute of New York University and an Associate Director of NYU Wireless. He works on meta-algorithms for machine learning that achieve guaranteed correctness rates; with biologists on pattern discovery for network inference; with computational chemists on algorithms for protein design; with physicists and financial specialists on algorithms for time series; on clocked computation for DNA computing; and on computational reproducibility. Other areas of interest include database tuning as well as tree and graph matching. Because he likes to type, he has written six books of puzzles about a mathematical detective named Dr. Ecco, a biography of great computer scientists, and a book about the future of computing. He has also written five technical books about database tuning, biological pattern recognition, time series, DNA computing, resampling statistics, and causal inference in molecular networks. He has co-authored over eighty journal papers, seventy conference papers, and twenty-five patents. He has written puzzle columns for various publications, including Scientific American, Dr. Dobb’s Journal, and the Communications of the ACM. He is a fellow of the ACM and an INRIA International Chair.

Permanent link to this article: https://team.inria.fr/zenith/ibc-seminar-dennis-shasha-reducing-errors-by-refusing-to-guess-occasionally-1-june-2018/

PhD/postdoc positions in Machine Learning and Big Data

Zenith is offering a PhD position and a postdoc position on machine learning and big data, with Antoine Liutkus and Patrick Valduriez as advisors.

The successful candidates will work with us at the Inria offices in Montpellier on learning parametric models for big data, with applications to audio data analysis and processing.

Main research themes:
. Parallelization, distributed computing
. Probabilistic models, inference, sketching
. Deep learning
. Audio processing

The programme is very selective and a good publication track record is required. International candidates are strongly encouraged to apply, since the funding promotes mobility.

Details:
. PhD position
. Postdoc position


Permanent link to this article: https://team.inria.fr/zenith/phdpost-positions-in-machine-learning-and-big-data/

Journée Droit de l’Internet: la blockchain, Montpellier, Friday, March 2, 2018

Zenith is taking part in the Journée Droit de l’Internet: la blockchain (a workshop on Internet law and blockchain) at the Faculté de Droit et de Science Politique, Montpellier, on Friday, March 2, 2018.

Programme


Permanent link to this article: https://team.inria.fr/zenith/journee-droit-de-linternet-la-blockchain-montpellier-vendredi-2-mars-2018/

IBC seminar: Themis Palpanas “Data Series Management: Fulfilling the Need for Big Sequence Analytics” 19 jan. 2018

IBC seminar, organized by Zenith
Monday, March 19, 2018, 11 am
Room 1/124, Bat. 5

Data Series Management: Fulfilling the Need for Big Sequence Analytics
Themis Palpanas
IUF et Université Paris Descartes

There is an increasingly pressing need, in several applications across diverse domains, for techniques able to index and mine very large collections of sequences, or data series. Examples of such applications come from a multitude of social and scientific domains, including biology, where high-throughput sequencing is generating massive sequence collections. It is not unusual for these applications to involve hundreds of millions to billions of data series, which are often not analyzed in full detail due to their sheer size. However, no existing data management solution (such as relational databases, column stores, array databases, or time series management systems) offers native support for sequences and the corresponding operators necessary for complex analytics.
In this talk, we argue for the need to study the theory and foundations of managing big data sequences, and to build corresponding systems that will enable scalable management and analysis of very large sequence collections. We describe recent efforts in designing techniques for indexing and mining truly massive collections of data series that will enable scientists to easily analyze their data. We discuss novel techniques that adaptively create data series indexes, allowing users to correctly answer queries before the indexing task is finished. Finally, we present our vision for the future of big sequence management research, including promising directions in terms of storage, distributed processing, and query benchmarks.
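To give a flavor of the indexing techniques mentioned above: many data series indexes (e.g. the iSAX family) rest on SAX symbolization, which z-normalizes a series, averages it piecewise (PAA), and maps each average to a symbol via equiprobable Gaussian breakpoints. A minimal sketch follows (simplified; real iSAX uses variable-cardinality symbols per segment, and this version assumes a non-constant series and a length divisible by the segment count):

```python
import math

# Sketch of SAX symbolization, a building block of iSAX-style indexes.
BREAKPOINTS = [-0.6745, 0.0, 0.6745]  # equiprobable N(0,1) regions, 4 symbols

def sax_word(series, n_segments):
    mean = sum(series) / len(series)
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series))
    z = [(x - mean) / std for x in series]                 # z-normalize
    seg = len(z) // n_segments                             # PAA segment length
    paa = [sum(z[i*seg:(i+1)*seg]) / seg for i in range(n_segments)]
    return "".join("abcd"[sum(v > b for b in BREAKPOINTS)] for v in paa)

print(sax_word([0, 0, 0, 0, 10, 10, 10, 10], 2))  # "ad"
```

Two series with the same word land in the same index bucket, which is what makes similarity search over billions of series tractable.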

Short bio
Themis Palpanas is Senior Member of the Institut Universitaire de France (IUF), a distinction that recognizes excellence across all academic disciplines, and professor of computer science at the Paris Descartes University (France), where he is director of diNo, the data management group. He received the BS degree from the National Technical University of Athens, Greece, and the MSc and PhD degrees from the University of Toronto, Canada. He has previously held positions at the University of Trento, and at IBM T.J. Watson Research Center, and visited Microsoft Research, and the IBM Almaden Research Center.
His interests include problems related to data science (big data analytics and machine learning applications). He is the author of nine US patents, three of which have been implemented in world-leading commercial data management products. He is the recipient of three Best Paper awards, and the IBM Shared University Research (SUR) Award.
He is currently serving on the VLDB Endowment Board of Trustees, as Editor-in-Chief of the BDR Journal, as Associate Editor for VLDB 2019 and for the TKDE and IDA journals, as well as on the Editorial Advisory Board of the IS journal and the Editorial Board of the TLDKS Journal. He has served as General Chair for VLDB 2013, Associate Editor for VLDB 2017, Workshop Chair for EDBT 2016, ADBIS 2013, and ADBIS 2014, General Chair for the PDA@IOT International Workshop (in conjunction with VLDB 2014), and General Chair for the Event Processing Symposium 2009.

Permanent link to this article: https://team.inria.fr/zenith/ibc-seminar-themis-palpanas-data-series-management-fulfilling-the-need-for-big-sequence-analytics-19-jan-2018/

Journée d’étude Méthode, Intégrité Scientifique & Données, February 16, 2018, Montpellier

Zenith is taking part in the Journée d’étude Méthode, Intégrité Scientifique & Données (a workshop on methods, scientific integrity, and data), Friday, February 16, 2018, MSH SUD, Site Saint Charles 2, Montpellier.


Permanent link to this article: https://team.inria.fr/zenith/zenith-participe-a-la-journee-detude-methode-integrite-scientifique-donnees-vendredi-16-fevrier-2018-msh-sud-site-saint-charles-2-montpellier/

Zenith Seminar: Vitor Silva “A methodology for capturing and analyzing dataflow paths in computational simulations” 31 jan. 2018

Wednesday, January 31, 11 am, Room 2/124

A methodology for capturing and analyzing dataflow paths in computational simulations
Vitor Silva, COPPE/UFRJ, Rio de Janeiro

Large-scale scientific applications are based on the execution of complex computational models in a specific field of science. They commonly generate and store huge volumes of scientific data in data sources, which can be raw data files or in-memory data structures. In this context, domain specialists often need to analyze part of these data to validate their scientific hypotheses. Besides analyzing single data sources, they also need to relate scientific data across different data sources and to perform analyses during the execution of the application, since a run may take days or weeks even in high-performance computing environments. A solution is therefore needed that enables scientific and provenance data extraction (for dataflow monitoring) and supports online dataflow analysis. For this exploratory scientific data analysis scenario, we propose a methodology for capturing and analyzing dataflow paths from scientific applications, based on modeling the dataflow, the scientific data, and the queries.
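The provenance-capture idea can be sketched minimally: each task records the data it consumed and produced, so a dataflow path can later be traced from an output back through the tasks that generated it. This is a toy illustration under assumed names, not the talk's methodology:

```python
# Minimal flavor of provenance capture for dataflow analysis (illustrative
# names, not the actual methodology): each task logs which data items it
# consumed and produced, so the dataflow path behind any output can be
# traced back and queried, even while the application is still running.

provenance = []  # (task_name, inputs, outputs) triples, in execution order

def run_task(name, fn, inputs):
    outputs = fn(inputs)
    provenance.append((name, list(inputs), list(outputs)))
    return outputs

def lineage(item):
    """Names of the tasks, in order, on the dataflow path producing item."""
    tasks, frontier = [], {item}
    for name, ins, outs in reversed(provenance):
        if frontier & set(outs):
            tasks.append(name)
            frontier |= set(ins)
    return list(reversed(tasks))

filtered = run_task("filter", lambda xs: [x for x in xs if x > 0], [-1, 2, 3])
total = run_task("sum", lambda xs: [sum(xs)], filtered)
print(lineage(total[0]))  # ['filter', 'sum']
```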

Permanent link to this article: https://team.inria.fr/zenith/zenith-seminar-vitor-silva-a-methodology-for-capturing-and-analyzing-dataflow-paths-in-computational-simulations-31-jan-2018/

Zenith Seminar: Christophe Godin “Can we Manipulate Tree-forms like Numbers?” 7 dec. 2017

Can we manipulate tree-forms like numbers?
Christophe Godin, Inria

Thursday 7 December at 14h30

Salle des séminaires, Bat. 4

Abstract: Tree-forms are ubiquitous in nature, and recent observation technologies make it increasingly easy to capture their details, as well as the dynamics of their development, in three dimensions with unprecedented accuracy. These massive and complex structural data raise new conceptual and computational issues related to their analysis and to the quantification of their variability. Mathematical and computational techniques that successfully apply to traditional scalar or vectorial datasets fail on such structural objects: How do we define the average form of a set of tree-forms? How do we compare and classify tree-forms? Can we solve optimization problems in tree-form spaces efficiently? How do we approximate tree-forms? Can their intrinsic exponential computational curse be circumvented? In this talk, I will present recent work with my colleague Romain Azaïs that approaches these questions from a new perspective, in which tree-forms exhibit properties similar to those of numbers or real functions: they can be decomposed, approximated, averaged, and transformed into dual spaces where specific computations can be carried out more efficiently. I will discuss how these first results can be applied to the analysis and simulation of tree-forms in developmental biology.
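One way to get intuition for "decomposing" tree-forms is to hash subtrees so that repeated shapes are counted only once, effectively compressing a tree into a DAG of its distinct subtree shapes. The toy sketch below is only loosely inspired by this line of work, not the talk's actual constructions:

```python
# Toy illustration: counting the distinct (unordered) subtree shapes of a
# tree given as nested tuples. Highly self-similar trees have few distinct
# shapes, so they compress well into a DAG of shared subtrees.

def canonical(tree):
    """Canonical form of an unordered tree: sort children recursively."""
    return tuple(sorted(canonical(c) for c in tree))

def distinct_subtrees(tree, seen=None):
    seen = set() if seen is None else seen
    seen.add(canonical(tree))
    for child in tree:
        distinct_subtrees(child, seen)
    return len(seen)

# A perfectly self-similar tree: 4 leaves, but only 3 distinct shapes.
leaf = ()
t = ((leaf, leaf), (leaf, leaf))
print(distinct_subtrees(t))  # 3: the leaf, (leaf, leaf), and the root
```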

Permanent link to this article: https://team.inria.fr/zenith/zenith-seminar-christophe-godin-can-we-manipulate-tree-forms-like-numbers-7-dec-2017/

Zenith seminar: Ji Liu “Efficient Uncertainty Analysis of Very Big Seismic Simulation Data” 6 dec. 2017

Efficient Uncertainty Analysis of Very Big Seismic Simulation Data
Ji Liu
Zenith Postdoc
Wednesday 6 December at 11h
Room: 02/124, Bat 5
Abstract:
In recent years, big simulation data has commonly been generated from specific models in different application domains (astronomy, bioinformatics, social networks, etc.). In general, the simulation data corresponds to meshes that represent, for instance, a seismic soil area. It is very important to analyze the uncertainty of the simulation data in order to safely identify geological or seismic phenomena, e.g. seismic faults. To analyze the uncertainty, a Probability Density Function (PDF) of each point in the mesh is computed. However, this may be very time consuming (from several hours to months) with a baseline approach based on parallel processing frameworks such as Spark. In this work, we propose new solutions to efficiently compute and analyze the uncertainty of very big simulation data using Spark. Our solutions use an original distributed architecture design. We propose three general approaches: data aggregation, machine learning prediction, and fast processing. We validate our approaches through extensive experiments using big data ranging from hundreds of GB to several TB. The experimental results show that our approach scales very well and reduces the execution time by a factor of 33 (to the order of seconds or minutes) compared with the baseline approach.
This work is part of the HPC4E European project, joint work with LNCC, Brazil, co-authored with N. Moreno, E. Pacitti, F. Porto, and P. Valduriez.
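The per-point computation at the heart of this analysis, an empirical PDF estimated from one mesh point's simulation samples, can be sketched as follows (in the actual system this runs in parallel with Spark over all mesh points; the function name and value range are illustrative):

```python
# Sketch of the per-point uncertainty computation: an empirical PDF
# (normalized histogram) of the simulation samples at one mesh point.
# Assumes samples lie in [lo, hi); the resulting density integrates to 1.

def empirical_pdf(samples, n_bins, lo, hi):
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for s in samples:
        i = min(int((s - lo) / width), n_bins - 1)  # clamp hi into last bin
        counts[i] += 1
    return [c / (len(samples) * width) for c in counts]

pdf = empirical_pdf([0.1, 0.2, 0.2, 0.8], n_bins=2, lo=0.0, hi=1.0)
print(pdf)  # [1.5, 0.5]; 1.5*0.5 + 0.5*0.5 = 1, as a density should
```

Repeating this independently for hundreds of millions of mesh points is exactly the embarrassingly parallel workload that the talk's Spark-based approaches accelerate.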

Permanent link to this article: https://team.inria.fr/zenith/zenith-seminar-ji-liu-efficient-uncertainty-analysis-of-very-big-seismic-simulation-data-6-dec-2017/

IBC & SciDISC seminar: Marta Mattoso “Human-in-the-loop to Fine-tune Data in Real Time” 14 dec. 2017

Human-in-the-loop to Fine-tune Data in Real Time
Marta Mattoso
COPPE/UFRJ, Rio de Janeiro

December 14, 2017, 11 am

Room 1/124, Bat.5

In long-lasting exploratory executions, users often need to fine-tune several parameters of complex computational models, because these parameters may significantly impact performance. Listing all possible combinations of parameters and trying them exhaustively is nearly impossible, even on high-performance computers. Because of the exploratory nature of these computations, it is hard to determine before execution which parameters and which values will work best to validate the initial hypothesis, even for the most experienced users. For this reason, after the initial setup, the user starts the computation and fine-tunes specific parameters based on online analysis of intermediate data. In this talk, we present the challenges in supporting the user with data analysis to monitor, evaluate, and adjust executions in real time. One problem in these executions is that, after some hours, users can lose track of what was tuned at early execution stages if the adaptations are not properly registered. We discuss using techniques from provenance data management and human-in-the-loop approaches to address the problem of adapting and tracking online parameter fine-tuning in several applications.

Permanent link to this article: https://team.inria.fr/zenith/marta-2017/

Post-doc position: Similarity Search in Large Scale Time Series

Title: Similarity Search in Large Scale Time Series

We are seeking a postdoctoral fellow in time series analytics, in collaboration with Safran ( https://www.safran-group.com/ ).

Topic

Nowadays, sensor technology is improving, and the number of sensors used for collecting information from different environments is increasing, e.g. from critical systems such as airplane engines. This massive use of sensors results in the production of large-scale data, usually in the form of time series. With such complex and massive sets of time series, fast and accurate similarity search is key to performing many data mining tasks, such as shapelet extraction, motif discovery, classification, or clustering.

This postdoc position is proposed in the context of a collaboration between the Inria Zenith team and Safran (a multinational company specialized in aircraft and rocket engines). We are interested in correlation detection over multidimensional time series, e.g. those generated by engine check tests. For instance, given a time slice (generated using a set of input parameters) of a very large time series, the objective is to quickly detect the time slice most similar to it, and thereby find the input parameter values that generate similar outputs.

One of the distinguishing features of our underlying application is the huge volume of data to be analyzed. To deal with such datasets, we intend to develop scalable solutions that take advantage of parallel frameworks (such as MapReduce, Spark, or Flink), which allow us to build efficient parallel data mining systems on commodity machines. We will capitalize on our recent projects, in which we developed parallel solutions for indexing and analyzing very large datasets, e.g. [YAMP2017, SAM2017, SAM2015, AHMP2015].

One possibility for scalable correlation detection in this project is to build on related work, including the matrix profile index [YZUB+2016], over time series generated by thousands of sensors. One of the tasks, in the context of this project, will be to develop distributed solutions for constructing and exploiting such indexes over large-scale time series coming from massively distributed sensors.
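For intuition, the matrix profile of [YZUB+2016] records, for each subsequence of a series, the distance to its nearest-neighbor subsequence elsewhere in the series. Below is a naive quadratic sketch only; the real algorithms use z-normalized distances, an exclusion zone around each window, and much faster computation schemes:

```python
import math

# Naive matrix profile sketch: for each length-m window, the Euclidean
# distance to its nearest other window. Low values reveal motifs (repeated
# patterns); high values reveal discords (anomalies). Simplified: no
# z-normalization and no exclusion zone for trivially overlapping matches.

def matrix_profile(series, m):
    windows = [series[i:i + m] for i in range(len(series) - m + 1)]
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(dist(w, v) for j, v in enumerate(windows) if j != i)
            for i, w in enumerate(windows)]

profile = matrix_profile([0, 1, 0, 1, 0, 5], m=2)
print(profile)  # [0.0, 0.0, 0.0, 0.0, 4.0] -- the last window is a discord
```

Distributing this computation over massive series, e.g. by partitioning the set of windows across workers, is the kind of task the position targets.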

[YAMP2017] Djamel-Edine Yagoubi, Reza Akbarinia, Florent Masseglia, Themis Palpanas. DPiSAX: Massively Distributed Partitioned iSAX. IEEE International Conference on Data Mining (ICDM),  2017.

[SAM2017] Saber Salah, Reza Akbarinia, Florent Masseglia. Data placement in massively distributed environments for fast parallel mining of frequent itemsets. Knowledge and Information Systems (KAIS), 53(1), 207-237, 2017.

[SAM2015] Saber Salah, Reza Akbarinia, Florent Masseglia, Fast Parallel Mining of Maximally Informative k-Itemsets in Big Data. IEEE International Conference on Data Mining (ICDM), 2015.

[AHMP2015] Tristan Allard, Georges Hébrail, Florent Masseglia, Esther Pacitti. Chiaroscuro: Transparency and Privacy for Massive Personal Time-Series Clustering.  ACM Conference on Management of Data (SIGMOD), pp. 779-794, 2015.

[YZUB+2016] C-C M. Yeh, Y. Zhu, L. Ulanova, N. Begum, Y. Ding, H. Anh Dau, D. Furtado Silva, A. Mueen, E. Keogh. Matrix Profile I: All Pairs Similarity Joins for Time Series: A Unifying View that Includes Motifs, Discords and Shapelets. IEEE International Conference on Data Mining (ICDM), 2016.

Environment

This work will be done in the context of a collaboration between the Inria Zenith team and Safran. The Zenith project-team ( https://team.inria.fr/zenith/ ), headed by Patrick Valduriez, aims to propose new solutions related to scientific data and activities. Our research topics cover the management and analysis of massive and complex data, such as uncertain data, in highly distributed environments. Our team is located in Montpellier, a very lively city in the south of France.

Safran ( https://www.safran-group.com/ ; https://en.wikipedia.org/wiki/Safran ) is a multinational company specialized in aircraft and rocket engines and aerospace component manufacturing.

Skills and profiles

Strong background in data mining

Strong skills in parallel data processing with Spark

A Ph.D. in computer science or mathematics

Duration, Salary and Location

Duration: 12 months

Annual Gross Salary: up to 42K€ depending on your experience.

Starting date: flexible but ideally as soon as possible.

This work will be done mainly in Montpellier, with regular visits to the Safran team in Paris.

Contact

Florent Masseglia (florent.masseglia@inria.fr) and Reza Akbarinia (reza.akbarinia@inria.fr).

Permanent link to this article: https://team.inria.fr/zenith/similarity-search-in-large-scale-time-series/