Calendar

Events in December 2017–January 2018

November 30, 2017 (1 event)

Convergence of no-regret algorithms, Amélie Heliou (Polaris)



No-regret algorithms are often used in repeated games in which the players have little information about the game they are playing. These algorithms guarantee that each player's regret is sublinear. The time average of the strategies chosen by following a no-regret algorithm converges to the set of correlated equilibria. However, this says nothing about the convergence of the sequence of strategies itself.
We studied the question: does the sequence of strategies produced by a no-regret algorithm converge to a Nash equilibrium?
In this talk, I will present a no-regret algorithm called Hedge, a variant of the exponential-weights algorithm. In particular, I will discuss the convergence of the strategy sequences produced by Hedge under two kinds of information available to the players.

Bâtiment IMAG (442)
Saint-Martin-d'Hères, 38400
France
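
The exponential-weights rule behind Hedge is simple enough to sketch. The following Python snippet is a minimal illustration only (full-information feedback and a fixed learning rate `eta`; the function name and parameters are ours, not the speaker's):

```python
import math

def hedge(payoff_vectors, eta=0.1):
    """Hedge (exponential weights) with full-information feedback.

    payoff_vectors: one payoff vector per round, one entry per action.
    Returns the sequence of mixed strategies played.
    """
    n_actions = len(payoff_vectors[0])
    scores = [0.0] * n_actions           # cumulative payoff of each action
    strategies = []
    for payoffs in payoff_vectors:
        # Play a mixed strategy with weights exp(eta * cumulative payoff).
        weights = [math.exp(eta * s) for s in scores]
        total = sum(weights)
        strategies.append([w / total for w in weights])
        # Full information: every action's payoff is observed and accumulated.
        for a, u in enumerate(payoffs):
            scores[a] += u
    return strategies
```

When one action is uniformly better, the strategy sequence itself concentrates on it; the talk's question is when this kind of pointwise convergence also holds in multi-player games.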

December

December 7, 2017 (1 event)

Keynote


December 14, 2017 (1 event)

Learning efficient Nash equilibria in distributed systems, by Bary Pradelski (ETH Zurich)



Learning efficient Nash equilibria in distributed systems

with H. Peyton Young

An individual’s learning rule is completely uncoupled if it does not depend directly on the actions or payoffs of anyone else. We propose a variant of log linear learning that is completely uncoupled and that selects an efficient (welfare-maximizing) pure Nash equilibrium in all generic n-person games that possess at least one pure Nash equilibrium. In games that do not have such an equilibrium, there is a simple formula that expresses the long-run probability of the various disequilibrium states in terms of two factors: i) the sum of payoffs over all agents, and ii) the maximum payoff gain that results from a unilateral deviation by some agent. This welfare/stability trade-off criterion provides a novel framework for analyzing the selection of disequilibrium as well as equilibrium states in n-person games.

Bâtiment IMAG (306)
Saint-Martin-d'Hères, 38400
France
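
For context, the snippet below sketches *standard* log-linear learning on a symmetric 2x2 coordination game; it is not the completely uncoupled variant the talk proposes (which avoids observing the opponent's action), and all names are ours. In this example the efficient equilibrium happens to be the selected one at high noise parameter `beta`:

```python
import math
import random

def log_linear_learning(payoffs, beta=5.0, steps=20000, seed=0):
    """Standard log-linear learning on a symmetric two-player game.

    payoffs[a][b] is the revising player's payoff when it plays a and the
    other player plays b. Each step, one player revises and picks action a
    with probability proportional to exp(beta * payoff(a)).
    Returns the empirical distribution over joint action profiles.
    """
    rng = random.Random(seed)
    n = len(payoffs)
    actions = [0, 0]
    counts = {}
    for _ in range(steps):
        i = rng.randrange(2)                      # the revising player
        other = actions[1 - i]
        weights = [math.exp(beta * payoffs[a][other]) for a in range(n)]
        r = rng.random() * sum(weights)
        for a, w in enumerate(weights):
            r -= w
            if r <= 0:
                actions[i] = a
                break
        profile = tuple(actions)
        counts[profile] = counts.get(profile, 0) + 1
    return {p: c / steps for p, c in counts.items()}
```

With `payoffs = [[2, 0], [0, 1]]`, both coordination profiles (0, 0) and (1, 1) are pure Nash equilibria; for large `beta` the process spends almost all its time at the welfare-maximizing one, (0, 0).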
December 18, 2017 (1 event)

Autotuning MPI Collectives using Performance Guidelines, Sascha Hunold



MPI collective operations provide a standardized interface for performing data movements within a group of processes. The efficiency of collective communication operations depends on the actual algorithm, its implementation, and the specific communication problem (type of communication, message size, and number of processes). Many MPI libraries provide numerous algorithms for specific collective operations. The strategy for selecting an efficient algorithm is often predefined (hard-coded) in MPI libraries, but some of them, such as Open MPI, allow users to change the algorithm manually. Finding the best algorithm for each case is a hard problem, and several approaches to tune these algorithmic parameters have been proposed. We use an orthogonal approach to the parameter-tuning of MPI collectives: instead of testing individual algorithmic choices provided by an MPI library, we compare the latency of a specific MPI collective operation to the latency of semantically equivalent functions, which we call the mock-up implementations. The structure of the mock-up implementations is defined by self-consistent performance guidelines. The advantage of this approach is that tuning using mock-up implementations is always possible, whether or not an MPI library allows users to select a specific algorithm at run time. We implement this concept in a library called PGMPITuneLib, which is layered between the user code and the actual MPI implementation. This library selects the best-performing algorithmic pattern of an MPI collective by intercepting MPI calls and redirecting them to our mock-up implementations. Experimental results show that PGMPITuneLib can significantly reduce the latency of MPI collectives and, equally important, that it can help identify the tuning potential of MPI libraries.
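
The selection idea, timing semantically equivalent implementations and keeping the fastest, can be illustrated without MPI. The toy Python sketch below substitutes plain functions for collectives; it is not PGMPITuneLib's API, just the autotuning pattern the abstract describes:

```python
import time

def pick_fastest(variants, payload, repeats=20):
    """Pick the fastest among semantically equivalent implementations.

    variants: list of (name, function) pairs that must all return the same
    result for `payload`; only their latency differs. This mirrors, in
    miniature, timing a collective against its mock-up implementations.
    """
    reference = variants[0][1](payload)
    best_name, best_time = None, float("inf")
    for name, fn in variants:
        assert fn(payload) == reference, f"{name} is not equivalent"
        start = time.perf_counter()
        for _ in range(repeats):
            fn(payload)
        elapsed = (time.perf_counter() - start) / repeats
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name

# Two stand-ins for "the library algorithm" and "a mock-up": both compute
# the same reduction, with very different latency.
def builtin_sum(xs):
    return sum(xs)

def python_loop_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total
```

On a large input, `pick_fastest` returns the name of the faster variant while the equivalence check guarantees both compute the same result, which is exactly the precondition the performance guidelines rely on.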

December 21, 2017 (1 event)

TAPIOCA: a topology-aware data aggregation library for parallel I/O, François Tessier, Argonne



TAPIOCA: a topology-aware data aggregation library for parallel I/O

The growing computational power of supercomputers comes with a considerable cost in data movement. Moreover, most scientific simulations have substantial read and write needs on parallel file systems. Many software solutions have been developed to contain the bottleneck caused by I/O. A well-known strategy in the world of collective I/O operations is to select a subset of the application's processes to aggregate contiguous pieces of data before performing the reads and writes. In this talk, I will present TAPIOCA, an MPI library implementing an optimized, topology-aware data aggregation algorithm. I will show the substantial read and write performance gains we obtained on two supercomputers at Argonne National Laboratory. Finally, I will discuss our ongoing work in TAPIOCA to take advantage of the new memory and storage tiers available on current and upcoming systems (MCDRAM, node-local SSDs, ...).

Bâtiment IMAG
Saint-Martin-d'Hères, 38400
France
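
The aggregation strategy described above (a subset of processes collects contiguous pieces before writing) can be sketched in plain Python. This is a toy model, not TAPIOCA's actual implementation; in particular, the round-robin aggregator placement here merely stands in for TAPIOCA's topology-aware placement:

```python
def coalesce(pieces):
    """Merge byte pieces at adjacent file offsets into contiguous writes."""
    merged = []
    for offset, data in sorted(pieces):
        if merged and merged[-1][0] + len(merged[-1][1]) == offset:
            prev_offset, prev_data = merged[-1]
            merged[-1] = (prev_offset, prev_data + data)
        else:
            merged.append((offset, data))
    return merged

def aggregate_writes(process_data, n_aggregators):
    """Toy two-phase I/O aggregation.

    process_data: rank -> list of (file_offset, bytes) pieces.
    Phase 1 routes each piece to an aggregator (round-robin by rank here;
    a topology-aware scheme would pick aggregators close to both the data
    and the storage). Phase 2 coalesces each aggregator's pieces so the
    file system sees fewer, larger requests.
    Returns one list of contiguous writes per aggregator.
    """
    buckets = [[] for _ in range(n_aggregators)]
    for rank, pieces in sorted(process_data.items()):
        buckets[rank % n_aggregators].extend(pieces)
    return [coalesce(bucket) for bucket in buckets]
```

For example, three small pieces at offsets 0, 2, and 4 routed to a single aggregator come out as one six-byte contiguous write.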

January

January 11, 2018 (1 event)

Keynote



January 18, 2018 (1 event)

Predicting the Energy-Consumption of MPI Applications at Scale Using Only a Single Node, by Christian Heinrich (Polaris)



Monitoring and assessing the energy efficiency of supercomputers and data centers is crucial in order to limit and reduce their energy consumption. Applications from the domain of High Performance Computing (HPC), such as MPI applications, account for a significant fraction of the overall energy consumed by HPC centers. Simulation is a popular approach for studying the behavior of these applications in a variety of scenarios, and it is therefore advantageous to be able to study their energy consumption in a cost-efficient, controllable, and also reproducible simulation environment. Alas, simulators supporting HPC applications commonly lack the capability of predicting the energy consumption, particularly when target platforms consist of multi-core nodes. In this work, we aim to accurately predict the energy consumption of MPI applications via simulation. Firstly, we introduce the models required for meaningful simulations: the computation model, the communication model, and the energy model of the target platform. Secondly, we demonstrate that by carefully calibrating these models on a single node, the predicted energy consumption of HPC applications at a larger scale is very close (within a few percent) to real experiments. We further show how to integrate such models into the SimGrid simulation toolkit. In order to obtain good execution time predictions on multi-core architectures, we also establish that it is vital to correctly account for memory effects in simulation. The proposed simulator is validated through an extensive set of experiments with well-known HPC benchmarks. Lastly, we show that the simulator can be used to study applications at scale, which allows researchers to save both time and resources compared to real experiments.

Bâtiment IMAG (442)
Saint-Martin-d'Hères, 38400
France
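
One common way to realize the "calibrate on one node, predict at scale" step is a linear power model fitted to single-node measurements. The sketch below is our illustration of that general idea, not SimGrid's actual energy model; the model form and all names are assumptions:

```python
def calibrate_power(samples):
    """Least-squares fit of a linear node power model P(u) = p_idle + p_dyn * u
    from (utilization, watts) pairs measured on a single node."""
    n = len(samples)
    su = sum(u for u, _ in samples)
    sp = sum(p for _, p in samples)
    suu = sum(u * u for u, _ in samples)
    sup = sum(u * p for u, p in samples)
    p_dyn = (n * sup - su * sp) / (n * suu - su * su)
    p_idle = (sp - p_dyn * su) / n
    return p_idle, p_dyn

def predict_energy(p_idle, p_dyn, node_loads, duration):
    """Predicted energy in joules for a run of `duration` seconds, where
    node_loads[i] is node i's average utilization in [0, 1]."""
    return sum((p_idle + p_dyn * u) * duration for u in node_loads)
```

Once `p_idle` and `p_dyn` are calibrated from one node, the same coefficients are applied to every simulated node, which is what makes multi-node energy predictions possible without multi-node measurements.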

February

February 1, 2018 (1 event)

Keynote



