Events in January–February 2023
February 7, 2023 (1 event)

Seminar Polaris-tt: Learning in finite-horizon MDP with UCB (Romain Cravic) – Most of you probably know Markov Decision Processes (MDPs). They are very useful for handling situations where an agent interacts with an environment that may involve randomness. Concretely, at each time step the MDP has a current state and the agent chooses an action: this state-action pair induces a (random) reward and a (random) state transition. If the probability distributions of rewards and transitions are known, designing optimal behaviors for the agent is easy, at least in theory. What about the case where these distributions are unknown at the early stage of the process? How can optimal behaviors be LEARNED efficiently? A popular way to handle this issue is the optimism paradigm, inspired by the UCB algorithms designed for stochastic bandit problems. In this talk, I will present the main ideas of two possible approaches, the UCRL algorithm and the optimistic Q-learning algorithm, which use optimism to perform well in finite-horizon MDPs. Bâtiment IMAG (406), Saint-Martin-d'Hères, 38400 France
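The optimism paradigm described in the abstract can be sketched in a few lines. The following is a minimal illustration of optimistic Q-learning with a UCB-style exploration bonus, not the speaker's exact algorithm; the toy MDP, the bonus constant `c`, and the episode count are all assumptions made purely for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-horizon MDP (hypothetical, for illustration): 2 states, 2 actions, horizon 3.
H, S, A = 3, 2, 2
# P[s, a] = distribution over next states; R[s, a] = mean of a Bernoulli reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[0.1, 0.6],
              [0.3, 0.9]])

K = 2000                            # number of episodes (an assumption)
c = 0.5                             # bonus scaling constant (a tunable assumption)
Q = np.full((H, S, A), float(H))    # optimistic initialization: Q starts at its max value
V = np.zeros((H + 1, S))            # V[H] = 0 is the terminal value
N = np.zeros((H, S, A))             # visit counts per (step, state, action)

for k in range(K):
    s = 0
    for h in range(H):
        a = int(np.argmax(Q[h, s]))             # act greedily w.r.t. the optimistic Q
        r = float(rng.random() < R[s, a])       # draw a Bernoulli reward
        s_next = int(rng.choice(S, p=P[s, a]))  # draw the state transition
        N[h, s, a] += 1
        t = N[h, s, a]
        alpha = (H + 1) / (H + t)               # step-size schedule used in the analysis
        # UCB exploration bonus: large when (h, s, a) has few visits, shrinking as 1/sqrt(t)
        bonus = c * np.sqrt(H**3 * np.log(S * A * H * K) / t)
        Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + V[h + 1, s_next] + bonus)
        V[h, s] = min(H, Q[h, s].max())         # clip the value estimate at its max
        s = s_next

# Greedy action per state at step 0 after learning (should tend toward the
# high-reward action as episodes accumulate).
print(np.argmax(Q[0], axis=1))
```

The key design point is that exploration is driven entirely by the bonus: the agent always acts greedily, but under-visited state-action pairs keep inflated Q-values, so the greedy policy is drawn toward them until their uncertainty shrinks.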
February 21, 2023 (1 event)
Seminar Polaris-tt: Decomposition of Normal Form Games – Harmonic, Potential, and Non-Strategic Games (Davide Legacci) – In this talk, we will explore the concept of normal form games and their decomposition into non-strategic, harmonic, and potential games. We will begin by introducing the response graph of a game, a visual representation of the strategies available to each player and their corresponding utilities. What dictates the strategic interaction among players is the difference between utilities, rather than the utilities themselves. We will introduce an object that captures this behavior, called the deviation flow of the game, and use it to define non-strategic, harmonic, and potential games. Finally, we will discuss the properties of these components. Bâtiment IMAG (442)
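The potential/harmonic distinction the abstract draws can be made concrete with a small check. The sketch below verifies the exact-potential condition (every unilateral deviation changes a player's utility by exactly the change in a single function φ) on a hypothetical 2x2 coordination game, and shows the same candidate potential failing for Matching Pennies, the standard example of a harmonic game. The payoff tables and names are illustrative assumptions, not material from the talk.

```python
# Hypothetical 2x2 coordination game (illustrative choice, not from the talk):
# both players get 2 if they match on action 0, 1 if they match on action 1,
# and 0 otherwise.
u = {
    # (a1, a2): (payoff to player 1, payoff to player 2)
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

# Candidate exact potential: a game is an exact potential game if some phi satisfies
#   u_i(b_i, a_-i) - u_i(a_i, a_-i) = phi(b_i, a_-i) - phi(a_i, a_-i)
# for every player i and every unilateral deviation a_i -> b_i.
phi = {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def is_potential(u, phi):
    """Check the exact-potential condition over all unilateral deviations."""
    for (a1, a2) in u:
        for b1 in (0, 1):  # player 1 deviates a1 -> b1
            if u[(b1, a2)][0] - u[(a1, a2)][0] != phi[(b1, a2)] - phi[(a1, a2)]:
                return False
        for b2 in (0, 1):  # player 2 deviates a2 -> b2
            if u[(a1, b2)][1] - u[(a1, a2)][1] != phi[(a1, b2)] - phi[(a1, a2)]:
                return False
    return True

print(is_potential(u, phi))   # the coordination game matches this potential

# Matching Pennies, by contrast, is a purely harmonic game: utility differences
# around the cycle of unilateral deviations do not cancel, so no exact potential
# exists; in particular this candidate phi fails.
mp = {(0, 0): (1, -1), (0, 1): (-1, 1), (1, 0): (-1, 1), (1, 1): (1, -1)}
print(is_potential(mp, phi))
```

This mirrors the idea in the abstract that what matters is the *difference* between utilities under deviations, not the utilities themselves: the potential component is exactly the part of the deviation flow that such a φ can account for.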