Deep learning architectures and training methods by Loris Felardos (Inria + IBPC)
2:00 pm – 3:00 pm November 29, 2018
The past few years have seen a dramatic increase in the performance of deep learning architectures applied to fields ranging from computer vision and speech recognition to bioinformatics and drug design. This presentation consists of three parts. Part 1 is a gentle introduction to the basic ideas that are crucial for training deep neural networks (such as logistic regression, SGD, and other optimization methods). Part 2 focuses on the most common building blocks (convolutions, attention layers, and skip connections) of practical neural architectures such as recurrent neural networks, generative models, and the more recent graph convolutional networks. Finally, part 3 emphasizes the importance of carefully designed loss functions across a range of different training methods (be it supervised, semi-supervised, or unsupervised learning).
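As an illustration of the Part 1 basics, here is a minimal sketch (not from the talk itself) of logistic regression trained with plain SGD on a toy one-dimensional dataset; all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_sgd(X, y, lr=0.1, epochs=100, seed=0):
    """Fit weights w and bias b by stochastic gradient descent on the log-loss."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):   # one sample per update: "stochastic"
            p = sigmoid(X[i] @ w + b)       # predicted probability of class 1
            grad = p - y[i]                 # d(log-loss)/d(logit)
            w -= lr * grad * X[i]
            b -= lr * grad
    return w, b

# Toy data: points below 0 labeled 0, above 0 labeled 1.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
w, b = train_logistic_sgd(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

The same gradient-step pattern, scaled up and composed over many layers, is what the optimization methods discussed in the talk refine.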
Learning with minimal information in continuous games, by Mario Bravo (Univ. de Chile)
2:00 pm – 3:00 pm December 13, 2018
In this talk we introduce a learning process for games with continuous action sets. The procedure is payoff-based and thus requires no sophistication from players and no knowledge of the game. We show that, despite such limited information, players converge to Nash equilibrium in large classes of games (possibly with a continuum of equilibria). In particular, convergence to stable Nash equilibria is guaranteed in all games with strategic complements as well as in concave games. Time permitting, we will also discuss convergence results for locally ordinal potential games and games with isolated equilibria.
This is joint work with Sebastian Bervoets and Mathieu Faure.
Reservation Strategies for Stochastic Jobs by Guillaume Aupy (Inria Bordeaux)
2:00 pm – 3:00 pm December 18, 2018
We are interested in scheduling stochastic jobs on a reservation-based platform. Specifically, we consider jobs whose execution time follows a known probability distribution. The platform is reservation-based, meaning that the user has to request fixed-length time slots. The cost depends on both the request duration and the actual execution time of the job. A reservation strategy is a sequence of increasing-length reservations, which are paid for until one of them allows the job to successfully complete. The goal is to minimize the total expected cost of the strategy. I will present different scheduling strategies and properties of an optimal solution.
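The cost model below is an assumption for illustration, not the speaker's exact formulation: each reservation of length t is charged alpha*t up front plus beta*min(t, X) for the time actually used, and the reservations in the sequence are paid one after another until one fits the job.

```python
def expected_cost(reservations, dist, alpha=1.0, beta=0.0):
    """Expected cost of a sequence of increasing reservation lengths.

    dist: list of (execution_time, probability) pairs (assumed discrete here);
    reservations: increasing slot lengths, the last one covering every outcome.
    """
    assert abs(sum(p for _, p in dist) - 1.0) < 1e-9
    assert reservations[-1] >= max(x for x, _ in dist), "last slot must fit every outcome"
    total = 0.0
    for x, p in dist:
        cost = 0.0
        for t in reservations:
            cost += alpha * t + beta * min(t, x)  # pay for the slot (and usage)
            if t >= x:                            # job completes inside this slot
                break
        total += p * cost
    return total

# Execution time is 1 with prob. 0.7, else 4: a short first slot pays off.
dist = [(1, 0.7), (4, 0.3)]
c_single = expected_cost([4], dist)     # one big reservation: cost 4.0
c_double = expected_cost([1, 4], dist)  # try 1 first, then 4: cost 2.2
```

Under this toy model, requesting a short slot first lowers the expected cost from 4.0 to 2.2, which is exactly the trade-off an optimal reservation sequence exploits.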
Best-of-two-worlds analysis of online search, by Christoph Dürr (LIP6)
2:00 pm – 3:00 pm January 10, 2019
In search problems, a mobile searcher seeks to locate a target that hides in some unknown position of the environment. Such problems are typically considered to be of an online nature, in that the input is unknown to the searcher, and the performance of a search strategy is usually analyzed by means of the standard framework of the competitive ratio, which compares the cost incurred by the searcher to that of an optimal strategy that knows the location of the target. However, one can argue that even for simple search problems, competitive analysis fails to distinguish between strategies which, intuitively, should have different performance in practice.
Motivated by the above, in this work we introduce and study measures supplementary to competitive analysis in the context of search problems. In particular, we focus on the well-known problem of linear search, informally known as the cow-path problem, for which there is an infinite number of strategies that achieve an optimal competitive ratio equal to 9. We propose a measure that reflects the rate at which the line is being explored by the searcher, and which can be seen as an extension of the bijective ratio over an uncountable set of requests. Using this measure we show that a natural strategy that explores the line aggressively is optimal among all 9-competitive strategies. This provides, in particular, a strict separation from the competitively optimal doubling strategy, which is much more conservative in terms of exploration. We also provide evidence that this aggressiveness is requisite for optimality, by showing that any optimal strategy must mimic the aggressive strategy in its first few explorations.
Joint work with Spyros Angelopoulos and Shendan Jin.
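For context, the 9-competitive doubling strategy mentioned above can be sketched as follows (an illustrative implementation, not the paper's code): the searcher alternates directions, sweeping distances 1, 2, 4, 8, ... from the origin, so its total travel is at most 9 times the target's distance.

```python
def doubling_search_cost(target):
    """Total distance traveled by the doubling strategy until the target
    (at signed position `target`, with |target| >= 1) is reached."""
    assert abs(target) >= 1
    traveled = 0.0
    step, direction = 1.0, 1            # go right first; the choice is arbitrary
    while True:
        reach = step * direction
        if (target > 0 and direction == 1 and reach >= target) or \
           (target < 0 and direction == -1 and reach <= target):
            return traveled + abs(target)   # target found during this sweep
        traveled += 2 * step                # sweep out to `reach` and back
        step *= 2
        direction = -direction

# The cost/|d| ratio stays below 9 for any sampled target position.
worst = max(doubling_search_cost(d) / abs(d)
            for d in [1.001, -1.5, 2.1, -4.01, 100.5, -257.0])
```

The worst cases are targets sitting just beyond a turning point; as that distance grows, the ratio approaches, but never exceeds, 9.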
Realistic simulation of the execution of applications deployed on large distributed systems with a focus on improving file management, by Anchen Chai (INSA Lyon)
2:00 pm – 3:00 pm January 24, 2019
Simulation is a powerful tool to study distributed systems. It allows researchers to evaluate different scenarios in a reproducible manner, which is hardly possible in real experiments. However, the realism of simulations is rarely investigated in the literature, leading to questionable accuracy of the simulated metrics. In this context, the main aim of our work is to improve the realism of simulations with a focus on file transfers in a large distributed production system, namely the EGI federated e-Infrastructure. Based on the findings obtained from realistic simulations, we can then propose reliable recommendations to improve file management in the Virtual Imaging Platform (VIP).
In order to realistically reproduce certain behaviors of the real system in simulation, we need to obtain an inside view of it. Therefore, we collect and analyze a set of execution traces of one particular application executed on EGI via VIP. The realism of simulations is investigated with respect to two main aspects in this thesis: the simulator and the platform model.
Based on the knowledge obtained from the traces, we design and implement a simulator that provides a simulated environment as close as possible to the real execution conditions for file transfers on EGI. A complete description of a realistic platform model is also built by leveraging the information registered in the traces. The accuracy of our platform model is evaluated by comparing the simulation results against the ground truth of real transfers. Our proposed model is shown to largely outperform the state-of-the-art model at reproducing the real-life variability of file transfers on EGI.
Finally, we cross-evaluate different file replication strategies by simulation, using both an enhanced state-of-the-art model and our platform model built from traces. Simulation results highlight that instantiating the two models leads to qualitatively different replication decisions, even though they reflect a similar hierarchical network topology. Last but not least, we find that replicating files to sites hosting a large number of executed jobs is a reliable recommendation to improve file management in VIP. In addition, adopting our proposed dynamic replication strategy can further reduce the duration of file transfers, except in extreme cases (very poorly connected sites) that only our proposed platform model is able to capture.