Events in October–November 2024
- There are no events scheduled during these dates. Earlier events from 2024 are listed below.
- March 26, 2024 @ Bâtiment IMAG (442) -- [Seminar] Romain Cravic
Who: Romain Cravic
When: Tuesday, March 26, 14:00-15:00
Where: IMAG 406
What: Solving partially observable games: the CFR algorithm and Monte-Carlo variants, part two
More: In this two-part seminar, I will present the family of CFR (CounterFactual Regret minimization) algorithms applied to extensive-form games with incomplete information. CFR was used in 2015 by researchers at the University of Alberta to solve a "realistic" version of poker (heads-up limit poker). In the first part, we will see how to model incomplete information in two-player zero-sum games and how to define strategies in this model, before analyzing in detail the CFR algorithm, which computes an approximation of a Nash equilibrium of the game. Going further in the second part, we will study the "Monte-Carlo" variants of CFR, which are indispensable when looking for good strategies in more ambitious games.
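For readers who want to experiment, here is a minimal sketch (my own illustration, not material from the talk) of regret matching in self-play on rock-paper-scissors; regret matching is the local update that CFR runs at every information set, and the average strategy of self-play converges to a Nash equilibrium.

```python
import numpy as np

# Hedged illustration: regret matching in self-play on rock-paper-scissors.
# The game, iteration count, and all names are illustrative choices.
PAYOFF = np.array([[ 0, -1,  1],   # row player's payoff: rock
                   [ 1,  0, -1],   # paper
                   [-1,  1,  0]])  # scissors

def strategy_from_regrets(regrets):
    # Play each action in proportion to its positive cumulative regret.
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(len(regrets), 1.0 / len(regrets))

regrets = np.zeros((2, 3))
strategy_sums = np.zeros((2, 3))
for _ in range(20000):
    s0 = strategy_from_regrets(regrets[0])
    s1 = strategy_from_regrets(regrets[1])
    strategy_sums[0] += s0
    strategy_sums[1] += s1
    u0 = PAYOFF @ s1         # row player's value for each pure action
    u1 = -(PAYOFF.T @ s0)    # column player's value (zero-sum game)
    regrets[0] += u0 - s0 @ u0   # instantaneous regret of each action
    regrets[1] += u1 - s1 @ u1

# The *average* strategy converges to the Nash equilibrium (1/3, 1/3, 1/3).
print(strategy_sums[0] / strategy_sums[0].sum())
```

Full CFR runs this same update at every information set of the extensive-form game, weighting action values by counterfactual reach probabilities; the Monte-Carlo variants covered in the second part replace the exact expectations with sampled trajectories.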
- April 3, 2024 @ Bâtiment IMAG (442) -- [Seminar] Victor Boone
Who: Victor Boone
When: Wednesday, April 3, 14:00-15:00
Where: 447
What: Learning MDPs with Extended Bellman Operators
More: Efficiently learning Markov Decision Processes (MDPs) is difficult. When facing an unknown environment, where is the right balance between repeating actions that have proven efficient in the past (exploitation of your knowledge) and testing alternatives that may turn out to be better than what you currently believe (exploration of the environment)? A well-known way to bypass this dilemma is the "optimism-in-the-face-of-uncertainty" principle: think of the score of an action as the largest value that is statistically plausible.
The exploration-exploitation dilemma then becomes the problem of tuning optimism. In this talk, I will explain how optimism in MDPs can all be rephrased using a single operator, embedding all the uncertainty about your environment within a single MDP. This is a story about "extended Bellman operators" and "extended MDPs", and about how one can achieve minimax optimal regret using this machinery.
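As a rough illustration of the operator viewpoint, here is a sketch of one extended Bellman update in a discounted setting (the talk concerns average-reward regret, so this is a simplification); the L1-ball confidence sets and all names are assumptions in the spirit of UCRL2-style constructions, not the speaker's code.

```python
import numpy as np

def optimistic_transition(p_hat, conf_radius, values):
    # Inner maximization of the extended Bellman operator: over all
    # transition vectors within an L1 ball of radius conf_radius around
    # the empirical estimate p_hat, pick the one maximizing p @ values.
    order = np.argsort(values)[::-1]          # states from best to worst
    p = p_hat.copy()
    best = order[0]
    p[best] = min(1.0, p_hat[best] + conf_radius / 2.0)
    for s in order[::-1]:                     # shave mass off the worst states
        excess = p.sum() - 1.0
        if excess <= 1e-12:
            break
        if s != best:
            p[s] -= min(p[s], excess)
    return p

def extended_bellman_update(values, rewards, p_hat, radii, gamma=0.95):
    # One sweep of (T~V)(s) = max_a [ r(s,a) + gamma * max_{p in C(s,a)} p @ V ],
    # where C(s,a) is the confidence set: all uncertainty folded into one operator.
    n_states, n_actions = rewards.shape
    return np.array([
        max(rewards[s, a]
            + gamma * optimistic_transition(p_hat[s, a], radii[s, a], values) @ values
            for a in range(n_actions))
        for s in range(n_states)
    ])
```

Iterating this operator to a fixed point yields the value function of the best statistically plausible MDP in the confidence region; acting greedily with respect to it is one way to implement the optimism principle from the abstract.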
- April 11, 2024 @ Bâtiment IMAG (442) -- [Seminar] Charles Arnal
Who: Charles Arnal
When: Thursday, April 11, 14:00-15:00
Where: 442
What: Mode Estimation with Partial Feedback
More: The combination of lightly supervised pre-training and online fine-tuning has played a key role in recent AI developments. These new learning pipelines call for new theoretical frameworks. In this paper, we formalize core aspects of weakly supervised and active learning with a simple problem: the estimation of the mode of a distribution using partial feedback. We show how entropy coding allows for optimal information acquisition from partial feedback, develop coarse sufficient statistics for mode identification, and adapt bandit algorithms to our new setting. Finally, we combine those contributions into a statistically and computationally efficient solution to our problem.
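The bandit connection in the abstract can be made concrete with a toy sketch (my own reading, not the paper's algorithm): if "partial feedback" is taken to mean that each round reveals only whether a fresh draw equals a chosen candidate, mode estimation becomes Bernoulli best-arm identification, solvable by successive elimination.

```python
import math
import random

def mode_via_successive_elimination(draw, support, delta=0.05):
    # Toy reduction (assumption, not the paper's method): querying candidate x
    # yields the Bernoulli(p_x) observation 1{draw() == x}, so the mode is the
    # best arm. Keep every candidate whose Hoeffding confidence interval still
    # overlaps the empirical leader's. Assumes a unique mode.
    counts = {x: 0 for x in support}
    active = list(support)
    t = 0
    while len(active) > 1:
        t += 1
        for x in active:
            counts[x] += (draw() == x)
        radius = math.sqrt(math.log(4 * len(counts) * t * t / delta) / (2 * t))
        means = {x: counts[x] / t for x in active}
        leader = max(means.values())
        active = [x for x in active if means[x] + radius >= leader - radius]
    return active[0]

# Example: the mode of (0.5, 0.2, 0.2, 0.1) over {0, 1, 2, 3} is 0.
dist = [0.5, 0.2, 0.2, 0.1]
print(mode_via_successive_elimination(
    lambda: random.choices(range(4), weights=dist)[0], range(4)))
```

The entropy-coding idea from the abstract presumably goes further, querying adaptively chosen sets rather than singletons so that each binary observation carries close to one bit of information.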
- April 30, 2024 @ Bâtiment IMAG (442) -- [Seminar] Rémi Castera
Who: Rémi Castera
What: Correlation of Rankings in Matching Markets