Post-doc position: A/B testing guided clustering

Amadeus and the Zenith team of Inria are seeking a postdoctoral fellow in A/B testing, clustering and time series analytics.

Title: A/B testing guided clustering


The post-doc position is part of a new partnership between Amadeus and Inria. It is linked to Amadeus’ development of intelligent, evolving flight recommendation search for online travel agencies (OTAs). The general principle is to choose recommendations by optimizing several criteria simultaneously (price, trip duration, number of stops, etc.). Each flight recommendation is associated with a score defined as a linear combination of criteria and weights; the weights therefore define how important each criterion is. To adapt the importance of the criteria to the profile of the user, user queries are segmented by means of unsupervised classification (clustering). Weight values are then optimized independently on each segment by maximizing the estimated reservation probability of the returned flight recommendations. Thus, a set of weights is associated with each user profile, called a segment. Large volumes of data are used during the weight creation process, especially during the segmentation phase. The ability of the flight recommendation search system to increase the conversion rate is evaluated using A/B test campaigns.
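As a toy illustration of the scoring scheme described above, the linear combination of criteria values and segment-specific weights could be sketched as follows. All names, weights, and values here are hypothetical, chosen for illustration only, not Amadeus data or code:

```python
def score(criteria: dict, weights: dict) -> float:
    """Score a flight recommendation as a linear combination of its
    (normalized) criteria values and the segment's weights."""
    return sum(weights[name] * value for name, value in criteria.items())

# Hypothetical weights for a "business traveler" segment, where trip
# duration matters more than price.
weights_business = {"price": 0.2, "duration": 0.6, "stops": 0.2}

# One candidate flight, criteria normalized to [0, 1].
flight = {"price": 0.8, "duration": 0.3, "stops": 0.0}

print(score(flight, weights_business))  # 0.2*0.8 + 0.6*0.3 + 0.2*0.0 = 0.34
```

Optimizing the weights per segment then amounts to searching, for each segment, the weight vector that maximizes the estimated reservation probability of the flights this score ranks highest.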

The expected work in this postdoc position comprises two complementary topics:
1. optimizing the planning of A/B test campaigns,
2. developing incremental methods for adapting the flight search segmentation from the results of A/B tests.

The objective of the first topic is to improve the use of A/B tests so as to draw conclusions as quickly and as safely as possible, and to quantify, at each stage, the uncertainty about the results of the A/B test.
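One standard way to quantify that uncertainty at each stage of a running campaign, shown here only as a hedged illustration and not as the method used at Amadeus, is a normal-approximation confidence interval on the difference in conversion rates between the two variants:

```python
import math

def conversion_rate_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% normal-approximation confidence interval for the difference
    in conversion rates between variants A and B (B minus A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Illustrative numbers: 4.0% vs 4.6% conversion after 10,000 queries each.
lo, hi = conversion_rate_ci(400, 10000, 460, 10000)
# If the whole interval lies above (or below) 0, the campaign can
# arguably be stopped early; if it straddles 0, more data is needed.
```

Sequential testing complicates this picture (repeatedly peeking at such intervals inflates the false-positive rate), which is precisely the kind of issue the first topic would address.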

The second topic is directly related to the first, since it uses the A/B test results obtained on each segment to improve the segmentation. The initial idea is to develop an incremental clustering algorithm in which phases of search segmentation and A/B testing alternate.

About Amadeus
Amadeus builds the critical solutions that help airlines and airports, hotels and railways, search engines, travel agencies, tour operators and other travel players to run their operations and improve the travel experience, billions of times a year, all over the world.

About Zenith
The Zenith project-team, headed by Patrick Valduriez, aims to propose new solutions related to scientific data and activities. Our research topics incorporate the management and analysis of massive and complex data, such as uncertain data, in highly distributed environments.

Skills and profile:

– Background in data mining / data analytics
– A Ph.D. in computer science or mathematics

Environment, salary, duration:

The postdoc will be supervised jointly by Amadeus and Inria, and located in the Amadeus facilities in Sophia Antipolis.

Salary: up to 3,300 euros net/month, depending on experience.
Duration: 1 Year
Starting date: flexible but ideally as soon as possible.


Nicolas Maillot
Florent Masseglia


Permanent link to this article:

Zenith seminar: Eduardo Ogasawara “Comparing Motif Discovery Techniques with Sequence Mining in the Context of Space-Time Series”, 26 nov. 2018

Zenith seminar: 26/11/18, 14h – BAT5-02.249

Comparing Motif Discovery Techniques with Sequence Mining in the Context of Space-Time Series

Eduardo Ogasawara

CEFET/RJ, Rio de Janeiro

Abstract: A relevant problem explored in the time series analysis community is finding patterns. Patterns are subsequences of a time series that are related to some special property or behavior. A pattern that occurs a significant number of times in a time series is called a motif. Discovering motifs in time series data has been widely explored, and many time series techniques have been developed to tackle this problem. However, various important time series phenomena present different behaviors when observed at different points in space (for example, series collected by sensors and IoT devices) and are better modeled as spatial-time series, in which each time series is associated with a position in space. When it comes to spatial-time series, the literature review reveals an open gap. Under such scenarios, motifs might not be discovered when we analyze each time series individually, yet they may be frequent if we consider different spatial-time series over some time interval. Finding patterns that are frequent in a constrained space and time, i.e., finding spatial-time motifs, may enable us to comprehend how a phenomenon occurs with respect to space and time. Meanwhile, the database/data mining community studies the problem of discovering spatiotemporal sequential patterns, which appears in a broad range of applications. Many initiatives find sequences constrained by space and time, which can shed light on how to tackle spatial-time motif discovery. We present these different techniques, along with potential challenges and solutions arising from these two communities in the context of spatial-time series motif discovery.
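As a minimal sketch of the single-series motif idea discussed in the abstract (a toy up/down discretization with frequency counting, not the spatial-time techniques presented in the talk):

```python
from collections import Counter

def find_motifs(series, w=3, min_count=2):
    """Naive motif discovery sketch: discretize a time series into
    up/down/flat symbols, then count sliding windows of length w.
    Windows occurring at least min_count times are reported as motifs."""
    symbols = "".join(
        "u" if b > a else "d" if b < a else "f"
        for a, b in zip(series, series[1:])
    )
    windows = Counter(symbols[i:i + w] for i in range(len(symbols) - w + 1))
    return {pattern: c for pattern, c in windows.items() if c >= min_count}

# A repeating up-up-down shape appears three times in this toy series.
ts = [1, 2, 3, 2, 3, 4, 3, 4, 5, 4]
print(find_motifs(ts))  # {'uud': 3, 'udu': 2, 'duu': 2}
```

The spatial-time setting discussed in the talk is exactly where this per-series view breaks down: a pattern may be rare in each individual series yet frequent across nearby series in a time interval.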


Short bio: Eduardo Ogasawara has been a Professor in the Computer Science Department of the Federal Center for Technological Education of Rio de Janeiro (CEFET/RJ) since 2010. He holds a D.Sc. in Systems Engineering and Computer Science from COPPE/UFRJ. His background is in databases, and his primary interest is data science. He is currently interested in data preprocessing, prediction, and pattern discovery in spatial-time series, as well as data-driven parallel and distributed processing. He is a member of the IEEE, ACM, INNS, and SBC. He led the creation of the Post-Graduate Program in Computer Science (PPCIC) of CEFET/RJ, approved by CAPES in 2016, and currently heads PPCIC.


Zenith seminar: Nicolas Anciaux “Personal Data Management Systems using Trusted Execution Environments” 21 nov. 2018

Zenith seminar
Wed. 21 nov. 2018, 10h30
Bat. 5, room 1.124
Personal Data Management Systems using Trusted Execution Environments
Nicolas Anciaux
Inria Saclay & UVSQ

Abstract: Thanks to smart disclosure initiatives and new regulations like the GDPR, Personal Data Management Systems (PDMS) are emerging. The PDMS paradigm empowers each individual with their complete digital environment. On the bright side, this opens the way to novel value-added services when crossing multiple sources of data of a given person, or crossing the data of multiple people. Yet this paradigm shift towards user empowerment raises fundamental questions with regard to the appropriateness, for laymen users, of the functionalities and the data management and protection techniques offered by existing solutions. This presentation (1) compares PDMS alternatives in terms of functionalities and threat models, (2) derives a general set of functionality and security requirements that any PDMS should consider, (3) proposes a preliminary design, building upon Trusted Execution Environments, for an extensive and secure PDMS reference architecture, and (4) identifies a set of challenges in implementing such a PDMS.


IBC and Zenith Seminar: Daniel de Oliveira “Parameter and Data Recommendation in Scientific Workflows based on Provenance”, 5 June 2018

IBC seminar (WP5): 5/6/2018, room 1.124, 14h

Organized by Zenith

Parameter and Data Recommendation in Scientific Workflows based on Provenance
Daniel de Oliveira

Fluminense Federal University
Rio de Janeiro, Brazil

Abstract: A growing number of data- and compute-intensive experiments have been modeled as scientific workflows in recent years. Such experiments are commonly executed several times, varying parameters and input data files, since comparison plays an important role in scientific research. As the complexity of the experiments and the volume of input and intermediate data increase, scientists have to spend much time defining parameter values and data files to be processed in such experiments. This talk discusses the problem of identifying suitable parameter values and data files for an experiment and then recommending them to the scientist. We present a novel method for making such recommendations, based on data captured from previous executions of the workflow and on machine learning algorithms. Our experiments show that the recommended data files and parameters help scientists execute workflows successfully.
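The general idea of recommending parameter values from the provenance of past executions could be sketched as follows. This is a deliberately simplified illustration with made-up field names and a toy similarity metric, not the authors' method:

```python
from collections import Counter

def recommend_parameters(provenance, target_input, k=3):
    """Recommend parameter values for a new input by looking at the k
    most similar successful past executions in a provenance log.
    Similarity here is a toy metric: absolute difference in input size."""
    successful = [run for run in provenance if run["status"] == "ok"]
    nearest = sorted(
        successful,
        key=lambda run: abs(run["input_size"] - target_input["input_size"]),
    )[:k]
    # Majority vote per parameter among the nearest runs.
    params = {}
    for name in nearest[0]["params"]:
        votes = Counter(run["params"][name] for run in nearest)
        params[name] = votes.most_common(1)[0][0]
    return params

# Hypothetical provenance log of past workflow executions.
log = [
    {"status": "ok", "input_size": 100, "params": {"threshold": 0.5}},
    {"status": "ok", "input_size": 120, "params": {"threshold": 0.5}},
    {"status": "failed", "input_size": 110, "params": {"threshold": 0.9}},
    {"status": "ok", "input_size": 5000, "params": {"threshold": 0.1}},
]
print(recommend_parameters(log, {"input_size": 115}))  # {'threshold': 0.5}
```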


Zenith seminar: Patrick Valduriez “Blockchain 2.0: opportunities and risks” 19 oct 2018

Zenith seminar: Friday 19 October 2018, 11h

Blockchain 2.0: opportunities and risks
Patrick Valduriez
Zenith, Inria & LIRMM

Popularized by bitcoin and other digital currencies, the blockchain has the potential to revolutionize our economic and social systems.  Blockchain was invented for bitcoin to solve the double spending problem of previous digital currencies without the need of a trusted, central authority. The original blockchain is a public, distributed ledger that can record and share transactions among a number of computers in a secure and permanent way. It is a complex distributed database infrastructure, combining several technologies such as P2P, data replication, consensus protocols and cryptography.

The term Blockchain 2.0 refers to new applications of the blockchain that go beyond transactions and enable the exchange of assets without powerful intermediaries. Examples of applications are smart contracts, persistent digital ids, intellectual property rights, blogging, voting, reputation, etc. Blockchain 2.0 could dramatically cut transaction costs by automating operations and removing intermediaries. It could allow people to monetize their own information, and creators of intellectual property to be properly compensated. The potential impact on society is also huge, as excluded people could join the global economy, e.g. by having digital bank accounts for free.

In this talk, I will introduce Blockchain 2.0 technologies and applications, and discuss the opportunities and risks. In developing countries, for instance, the lack of existing infrastructure and regulation may be a chance to embrace the blockchain revolution and leapfrog traditional solutions. But there are also risks, related to regulation, security, privacy, or integration with existing practice, which must be well understood and addressed.


Zenith seminar: Mathieu Fontaine “Alpha-stable process for signal processing” 20 sept 2018

Zenith seminar: Thursday 20 September 2018, 11h
BAT5-01.124, Campus Saint Priest

Alpha-stable process for signal processing
Mathieu Fontaine
Zenith, Inria & LIRMM

The scientific topic of sound source separation (SSS) aims at decomposing audio signals into their constitutive components, e.g. separating the main singer’s voice from the background music or the background noise. In the case of very old and degraded historical recordings, SSS strongly extends classical denoising methods by being able to account for complex signal or noise patterns and to achieve efficient separation where traditional approaches fail.
Alpha-stable processes raise important mathematical challenges while offering efficient filtering applications and computational efficiency. This presentation studies these models from a theoretical point of view, with the aim of extending them in several directions: audio source localization, theoretical research in multichannel scenarios, and the restoration of old historical recordings.


IBC seminar: Dennis Shasha “Reducing Errors by Refusing to Guess (Occasionally)” 1 June 2018.

IBC seminar, organized by Zenith
Friday 1 June 2018, 14h
Seminar room, Bat. 4, LIRMM

SafePredict: Reducing Errors by Refusing to Guess (Occasionally)
Dennis Shasha
Courant Institute, New York University

We propose a meta-algorithm to reduce the error rate of state-of-the-art machine learning algorithms by refusing to make predictions in certain cases, even when the underlying algorithms suggest predictions. Intuitively, our SafePredict approach estimates the likelihood that a prediction will be in error and, when that likelihood is high, refuses to go along with that prediction. Unlike other approaches, we can probabilistically guarantee an error rate on the predictions we do make (the “decisive predictions”). Empirically, on seven diverse data sets from genomics, ecology, image recognition, and gaming, our method can probabilistically guarantee to reduce the error rate to 1/4 of that of the state-of-the-art machine learning algorithm, at a cost of between 11% and 58% refusals. Competing state-of-the-art methods refuse at roughly twice our rate (sometimes refusing all suggested predictions).
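The refuse-when-uncertain idea can be illustrated with a minimal sketch. SafePredict itself obtains its probabilistic guarantees through a more sophisticated mechanism; this toy threshold rule only conveys the intuition:

```python
def decisive_predictions(probs, threshold=0.8):
    """Keep a prediction only when the classifier's top-class probability
    reaches a threshold; otherwise refuse to guess (None = refusal).
    A crude stand-in for SafePredict's error-likelihood estimate."""
    decisions = []
    for p in probs:
        label = max(p, key=p.get)
        decisions.append(label if p[label] >= threshold else None)
    return decisions

# Hypothetical per-class probabilities from some underlying classifier.
preds = [
    {"cat": 0.95, "dog": 0.05},  # confident -> keep
    {"cat": 0.55, "dog": 0.45},  # uncertain -> refuse
    {"cat": 0.10, "dog": 0.90},  # confident -> keep
]
print(decisive_predictions(preds))  # ['cat', None, 'dog']
```

Raising the threshold trades a higher refusal rate for a lower error rate on the decisive predictions, which is the trade-off the abstract quantifies.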

Short bio

Dennis Shasha is the Julius Silver Professor of computer science at the Courant Institute of New York University and an Associate Director of NYU Wireless. He works on meta-algorithms for machine learning that achieve guaranteed correctness rates; with biologists on pattern discovery for network inference; with computational chemists on algorithms for protein design; with physicists and financial people on algorithms for time series; on clocked computation for DNA computing; and on computational reproducibility. Other areas of interest include database tuning as well as tree and graph matching. Because he likes to type, he has written six books of puzzles about a mathematical detective named Dr. Ecco, a biography of great computer scientists, and a book about the future of computing. He has also written five technical books about database tuning, biological pattern recognition, time series, DNA computing, resampling statistics, and causal inference in molecular networks. He has co-authored over eighty journal papers, seventy conference papers, and twenty-five patents. He has written the puzzle column for various publications, including Scientific American, Dr. Dobb’s Journal, and the Communications of the ACM. He is a fellow of the ACM and an INRIA International Chair.


PhD/postdoc positions in Machine Learning and Big Data

Zenith is proposing a PhD position and a postdoc position on machine learning and big data, with Antoine Liutkus and Patrick Valduriez as advisors.

The successful candidates will work with us at the Inria offices in Montpellier on learning parametric models from big data, with applications to audio data analysis and processing.

Main research themes:
. Parallelization, distributed computing
. Probabilistic models, inference, sketching
. Deep learning
. Audio processing

The programme is very selective and a strong publication record is required. Foreigners are strongly encouraged to apply, since the funding promotes mobility.

. PhD position
. Postdoc position



Journée Droit de l’Internet: la blockchain, Montpellier, Friday 2 March 2018

Zenith is taking part in the Journée Droit de l’Internet: la blockchain, at the Faculty of Law and Political Science, Montpellier, Friday 2 March 2018.





IBC seminar: Themis Palpanas “Data Series Management: Fulfilling the Need for Big Sequence Analytics” 19 jan. 2018

IBC seminar, organized by Zenith
Monday 19 March 2018, 11h
Room 1/124, Bat. 5

Data Series Management: Fulfilling the Need for Big Sequence Analytics
Themis Palpanas
IUF and Université Paris Descartes

There is an increasingly pressing need, in several applications across diverse domains, for techniques able to index and mine very large collections of sequences, or data series. Examples of such applications come from a multitude of social and scientific domains, including biology, where high-throughput sequencing is generating massive sequence collections. It is not unusual for these applications to involve numbers of data series on the order of hundreds of millions to billions, which are often not analyzed in their full detail due to their sheer size. However, no existing data management solution (such as relational databases, column stores, array databases, or time series management systems) offers native support for sequences and the corresponding operators necessary for complex analytics.
In this talk, we argue for the need to study the theory and foundations of big sequence management, and to build corresponding systems that enable scalable management and analysis of very large sequence collections. We describe recent efforts in designing techniques for indexing and mining truly massive collections of data series that enable scientists to easily analyze their data. We discuss novel techniques that adaptively create data series indexes, allowing users to correctly answer queries before the indexing task is finished. Finally, we present our vision for the future of big sequence management research, including promising directions in terms of storage, distributed processing, and query benchmarks.
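Data series indexes typically build on short, lower-dimensional summaries of each series. As background to the talk (a standard textbook step, not a description of the speaker's systems), the Piecewise Aggregate Approximation (PAA) used as the first stage of SAX-style summarization can be sketched as:

```python
def paa(series, segments):
    """Piecewise Aggregate Approximation: replace each of `segments`
    roughly equal-length chunks of the series by its mean, yielding a
    short, indexable summary of the full-resolution series."""
    n = len(series)
    return [
        sum(series[i * n // segments:(i + 1) * n // segments])
        / (((i + 1) * n // segments) - (i * n // segments))
        for i in range(segments)
    ]

# An 8-point series summarized by 4 segment means.
print(paa([1, 2, 3, 4, 5, 6, 7, 8], 4))  # [1.5, 3.5, 5.5, 7.5]
```

Indexing such summaries instead of raw series is what makes collections of hundreds of millions of series tractable; adaptive indexes refine these summaries while queries are already being answered.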

Short bio
Themis Palpanas is Senior Member of the Institut Universitaire de France (IUF), a distinction that recognizes excellence across all academic disciplines, and professor of computer science at the Paris Descartes University (France), where he is director of diNo, the data management group. He received the BS degree from the National Technical University of Athens, Greece, and the MSc and PhD degrees from the University of Toronto, Canada. He has previously held positions at the University of Trento, and at IBM T.J. Watson Research Center, and visited Microsoft Research, and the IBM Almaden Research Center.
His interests include problems related to data science (big data analytics and machine learning applications). He is the author of nine US patents, three of which have been implemented in world-leading commercial data management products. He is the recipient of three Best Paper awards, and the IBM Shared University Research (SUR) Award.
He is currently serving on the VLDB Endowment Board of Trustees, as Editor in Chief of the BDR Journal, Associate Editor for VLDB 2019, Associate Editor of the TKDE and IDA journals, and on the Editorial Advisory Board of the IS journal and the Editorial Board of the TLDKS Journal. He has served as General Chair for VLDB 2013, Associate Editor for VLDB 2017, Workshop Chair for EDBT 2016, ADBIS 2013 and ADBIS 2014, General Chair for the PDA@IOT International Workshop (in conjunction with VLDB 2014), and General Chair for the Event Processing Symposium 2009.
