SEMINARS

Events in August–September 2017

September 7, 2017 (1 event)

Keynote LIG

September 14, 2017 (1 event)

Computing with coins (by Jean-Marc Vincent)

The heads-and-tails random process has been, since the beginnings of the art of computation, a fundamental part of computer science. This talk will explore, at a very basic level, several ideas about the computation of numbers, the evaluation of quantities, checking techniques, and recommendation evaluation. Through small examples, we will try to establish links across the ages between probabilistic and algorithmic thinking.

Bâtiment IMAG (442)
Saint-Martin-d'Hères, 38400
France
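
As a small illustration of the kind of link the talk alludes to (this example is not taken from the talk itself), the Python sketch below builds approximately uniform random numbers out of fair coin flips and uses them to estimate pi by the Monte Carlo method; the number of flips per value and the point count are arbitrary choices.

    import random

    def coin():
        """One fair coin flip: returns 0 or 1."""
        return random.randint(0, 1)

    def uniform_from_coins(bits=32):
        """Build an approximately uniform number in [0, 1) from coin flips."""
        x, scale = 0.0, 0.5
        for _ in range(bits):
            x += coin() * scale
            scale /= 2
        return x

    def estimate_pi(n_points=20_000):
        """Monte Carlo estimate of pi: fraction of random points in the unit quarter-disc."""
        inside = sum(1 for _ in range(n_points)
                     if uniform_from_coins() ** 2 + uniform_from_coins() ** 2 <= 1.0)
        return 4.0 * inside / n_points

    print(estimate_pi())  # typically prints a value close to 3.14
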
September 21, 2017 (1 event)

Kleinberg's Grid Unchained (by Fabien Mathieu, Nokia)

One of the key features of small worlds is the ability to route messages in only a few hops using purely local knowledge of the topology. In 2000, Kleinberg proposed a model based on an augmented grid that asymptotically exhibits this property.

In this paper, we revisit the original model from a simulation-based perspective. Our approach is fueled by a new algorithm that can draw an augmenting link in Õ(1).

The resulting speed gain enables detailed numerical evaluations. We show, for example, that in practice the augmentation scheme proposed by Kleinberg is more robust than the asymptotic behavior predicts, even for very large finite grids. We also propose tighter bounds on the performance of Kleinberg's routing algorithm. Finally, we show that, fed with realistic parameters, the model gives results in line with real-life experiments.

Bâtiment IMAG (442)
Saint-Martin-d'Hères, 38400
France
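
For readers unfamiliar with the model, here is a minimal Python sketch of Kleinberg's augmented grid and of greedy routing, written for a small torus variant: every node receives one long-range link drawn with probability proportional to distance^(-2), and a message is forwarded to whichever neighbour (local or long-range) is closest to the target. The link sampling below is the naive O(n²) construction, not the Õ(1) algorithm of the talk, and all parameters are illustrative.

    import random

    def grid_dist(u, v, n):
        """Manhattan distance on an n x n torus."""
        dx, dy = abs(u[0] - v[0]), abs(u[1] - v[1])
        return min(dx, n - dx) + min(dy, n - dy)

    def draw_long_range_link(u, n, r=2):
        """Naive sampling: pick v != u with probability proportional to dist(u, v)^(-r)."""
        others = [(i, j) for i in range(n) for j in range(n) if (i, j) != u]
        weights = [grid_dist(u, v, n) ** (-r) for v in others]
        return random.choices(others, weights=weights, k=1)[0]

    def greedy_route(src, dst, n, shortcuts):
        """Forward greedily to the neighbour (grid or shortcut) closest to dst; count hops."""
        hops, cur = 0, src
        while cur != dst:
            x, y = cur
            neighbours = [((x + 1) % n, y), ((x - 1) % n, y),
                          (x, (y + 1) % n), (x, (y - 1) % n), shortcuts[cur]]
            cur = min(neighbours, key=lambda v: grid_dist(v, dst, n))
            hops += 1
        return hops

    n = 20
    shortcuts = {(i, j): draw_long_range_link((i, j), n) for i in range(n) for j in range(n)}
    print("greedy route length:", greedy_route((0, 0), (n // 2, n // 2), n, shortcuts))
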
September 28, 2017 (1 event)

On-line speed scaling minimizing expected energy consumption for real-time tasks (by Stephan Plassart, Polaris)

We present a Markov Decision Process (MDP) approach to compute the optimal on-line speed-scaling policy that minimizes the energy consumption of a processor executing a finite or infinite set of jobs with real-time constraints. The policy is computed off-line but used on-line. We provide several qualitative properties of the optimal policy: monotonicity with respect to the job parameters, and a comparison with on-line deterministic algorithms. Numerical experiments show that our approach performs well when compared with off-line optimal solutions and outperforms on-line solutions that are oblivious to statistical information about the jobs.

Bâtiment IMAG (442)
Saint-Martin-d'Hères, 38400
France
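
The sketch below is not the model of the talk, only a toy finite-horizon MDP in the same spirit: pending work arrives at random, the processor picks a speed at every step, energy grows as the cube of the speed, and a backlog bound stands in for the real-time constraint. The policy is computed off-line by value iteration and then used on-line as a table lookup; the horizon, arrival law, and power model are assumptions made for the example.

    import random

    # Toy speed-scaling MDP (illustration only, not the model of the talk).
    # State: pending work b in {0, ..., B}; exceeding B stands in for a missed deadline.
    # Action: speed s (work units processed this step); energy cost s**3.
    H = 20                                 # horizon (time steps)
    B = 6                                  # maximum backlog allowed
    SPEEDS = [0, 1, 2, 3]
    ARRIVALS = {0: 0.5, 1: 0.3, 2: 0.2}    # job size -> probability, each step
    INF = float("inf")

    # Off-line phase: backward value iteration. V[t][b] = minimal expected energy from step t.
    V = [[0.0] * (B + 1) for _ in range(H + 1)]
    policy = [[0] * (B + 1) for _ in range(H)]
    for t in range(H - 1, -1, -1):
        for b in range(B + 1):
            best, best_s = INF, 0
            for s in SPEEDS:
                rem = max(b - s, 0)
                exp_future = sum(p * (V[t + 1][rem + a] if rem + a <= B else INF)
                                 for a, p in ARRIVALS.items())
                if s ** 3 + exp_future < best:
                    best, best_s = s ** 3 + exp_future, s
            V[t][b], policy[t][b] = best, best_s

    # On-line phase: the precomputed policy is just a table lookup.
    b, energy = 0, 0.0
    for t in range(H):
        s = policy[t][b]
        energy += s ** 3
        b = max(b - s, 0) + random.choices(list(ARRIVALS), weights=list(ARRIVALS.values()))[0]
    print("total energy:", energy)
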
  • April 3, 2024 @ Bâtiment IMAG (442) -- [Seminar] Victor Boone

    Who: Victor Boone

    When: Wednesday, April 3, 14:00-15:00

    Where: 447

    What: Learning MDPs with Extended Bellman Operators

    More: Efficiently learning Markov Decision Processes (MDPs) is difficult. When facing an unknown environment, where is the right balance between repeating actions that have proven efficient in the past (exploiting your knowledge) and testing alternatives that may actually be better than what you currently believe (exploring the environment)? A well-known way to bypass this dilemma is the "optimism in the face of uncertainty" principle: think of the score of an action as the largest score that is statistically plausible.

    The exploration-exploitation dilemma then becomes the problem of tuning optimism. In this talk, I will explain how optimism in MDPs can all be rephrased using a single operator, embedding all the uncertainty about your environment within a single MDP. This is a story about "extended Bellman operators" and "extended MDPs", and about how one can achieve minimax-optimal regret using this machinery. (A toy sketch of optimistic value iteration, illustrating only the optimism principle, appears after this list.)

  • April 11, 2024 @ Bâtiment IMAG (442) -- [Seminar] Charles Arnal

    Who: Charles Arnal

    When: Thursday, April 11, 14:00-15:00

    Where: 442

    What: Mode Estimation with Partial Feedback

    More: The combination of lightly supervised pre-training and online fine-tuning has played a key role in recent AI developments. These new learning pipelines call for new theoretical frameworks. In this paper, we formalize core aspects of weakly supervised and active learning with a simple problem: the estimation of the mode of a distribution using partial feedback. We show how entropy coding allows for optimal information acquisition from partial feedback, develop coarse sufficient statistics for mode identification, and adapt bandit algorithms to our new setting. Finally, we combine those contributions into a statistically and computationally efficient solution to our problem.

  • April 30, 2024 @ Bâtiment IMAG (442) -- [Seminar] Rémi Castera

    Correlation of Rankings in Matching Markets
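
As a rough illustration of the optimism principle mentioned in Victor Boone's April 3 abstract (and not of the extended Bellman operators of the talk, which embed all the uncertainty of the environment, transitions included), the Python sketch below runs finite-horizon value iteration on an MDP whose empirical rewards are inflated by an upper-confidence bonus, so that each action is scored by the largest value that is statistically plausible. The MDP, visit counts, and bonus formula are all made up for the example.

    import numpy as np

    def optimistic_values(P, R_hat, counts, horizon, delta=0.05):
        """Finite-horizon value iteration with a UCB bonus on estimated rewards.

        P:       transition probabilities, shape (S, A, S)  (assumed known here)
        R_hat:   empirical mean rewards,   shape (S, A)
        counts:  visit counts per (s, a),  shape (S, A)
        Returns optimistic Q-values for the first step.
        """
        S, A, _ = P.shape
        bonus = np.sqrt(np.log(2 * S * A / delta) / (2 * np.maximum(counts, 1)))
        R_opt = np.minimum(R_hat + bonus, 1.0)    # optimistic rewards, capped at r_max = 1
        V = np.zeros(S)
        for _ in range(horizon):                  # backward Bellman recursion
            Q = R_opt + P @ V                     # shape (S, A): optimistic Bellman update
            V = Q.max(axis=1)
        return Q

    rng = np.random.default_rng(0)
    S, A = 4, 2
    P = rng.dirichlet(np.ones(S), size=(S, A))    # random (known) transitions
    R_hat = rng.uniform(size=(S, A))              # fake empirical rewards
    counts = rng.integers(1, 50, size=(S, A))     # fake visit counts
    Q = optimistic_values(P, R_hat, counts, horizon=10)
    print("optimistic first-step policy:", Q.argmax(axis=1))
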
