SEMINARS

  • April 3, 2024 @ Bâtiment IMAG (442) -- [Seminar] Victor Boone

    Who: Victor Boone

    When: Wednesday, April 3, 14:00-15:00

    Where: 447

    What: Learning MDPs with Extended Bellman Operators

    More: Efficiently learning Markov Decision Processes (MDPs) is difficult. When facing an unknown environment, where should one draw the line between repeating actions that have proven effective in the past (exploitation of your knowledge) and testing alternatives that may turn out to be better than what you currently believe (exploration of the environment)? A well-known way to bypass this dilemma is the "optimism-in-the-face-of-uncertainty" principle: think of the score of an action as the largest score that is statistically plausible.

    The exploration-exploitation dilemma then becomes the problem of tuning optimism. In this talk, I will explain how optimism in MDPs can all be rephrased using a single operator, embedding all the uncertainty about your environment within a single MDP. This is a story about "extended Bellman operators" and "extended MDPs", and about how one can achieve minimax optimal regret with this machinery.
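
    A minimal sketch of how the "largest statistically plausible score" idea can be turned into an operator (an illustration in the style of UCRL2's extended value iteration, not necessarily the speaker's construction): each backup picks, for every state-action pair, the transition vector inside an L1 confidence ball around the empirical estimate that maximizes the backed-up value. The inputs p_hat, radius, and rewards are assumed names.

    ```python
    import numpy as np

    def optimistic_backup(p_hat, radius, rewards, v):
        """One application of an extended Bellman operator (UCRL2-style).

        For each (s, a), choose the transition vector within an L1 ball
        of radius `radius[s, a]` around the empirical estimate
        `p_hat[s, a]` that maximizes the expected next-state value, so
        the score of each action is the largest statistically plausible.
        """
        n_states, n_actions, _ = p_hat.shape
        order = np.argsort(-v)            # states, most valuable first
        best = order[0]
        q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                p = p_hat[s, a].copy()
                # Shift as much mass as the confidence ball allows onto
                # the most valuable state...
                p[best] = min(1.0, p[best] + radius[s, a] / 2.0)
                # ...then take the surplus back from the least valuable.
                surplus = p.sum() - 1.0
                for s2 in order[::-1]:
                    take = min(p[s2], surplus)
                    p[s2] -= take
                    surplus -= take
                    if surplus <= 0.0:
                        break
                q[s, a] = rewards[s, a] + p @ v
        return q.max(axis=1)
    ```

    Iterating v = optimistic_backup(p_hat, radius, rewards, v) amounts to value iteration on the optimistic "extended MDP"; the confidence radii shrink as more transitions are observed, so optimism fades with data.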

  • April 11, 2024 @ Bâtiment IMAG (442) -- [Seminar] Charles Arnal

    Who: Charles Arnal

    When: Thursday, April 11, 14:00-15:00

    Where: 442

    What: Mode Estimation with Partial Feedback

    More: The combination of lightly supervised pre-training and online fine-tuning has played a key role in recent AI developments. These new learning pipelines call for new theoretical frameworks. In this paper, we formalize core aspects of weakly supervised and active learning with a simple problem: the estimation of the mode of a distribution using partial feedback. We show how entropy coding allows for optimal information acquisition from partial feedback, develop coarse sufficient statistics for mode identification, and adapt bandit algorithms to our new setting. Finally, we combine those contributions into a statistically and computationally efficient solution to our problem.
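
    One toy way to instantiate this setting (an illustration under assumed rules, not the paper's algorithm): each round a hidden sample is drawn from the unknown distribution, and the only feedback is the answer to binary membership queries of the form "did the sample land in this set?". Descending a Huffman tree built from the running empirical counts identifies each sample with roughly the entropy of the distribution in queries on average, which is where entropy coding enters; the mode estimate is the empirical argmax. The helper names below (huffman_tree, identify, estimate_mode) are hypothetical.

    ```python
    import heapq
    import random
    from collections import Counter

    def huffman_tree(counts):
        """Build a Huffman tree from item counts; internal nodes are pairs."""
        heap = [(c, i, item) for i, (item, c) in enumerate(counts.items())]
        heapq.heapify(heap)
        uid = len(heap)                  # tie-breaker so nodes never compare
        while len(heap) > 1:
            c1, _, n1 = heapq.heappop(heap)
            c2, _, n2 = heapq.heappop(heap)
            heapq.heappush(heap, (c1 + c2, uid, (n1, n2)))
            uid += 1
        return heap[0][2]

    def leaves(node):
        """Set of items under a node."""
        if isinstance(node, tuple):
            return leaves(node[0]) | leaves(node[1])
        return {node}

    def identify(node, member):
        """Recover the hidden sample using only membership queries."""
        while isinstance(node, tuple):
            left, right = node
            node = left if member(leaves(left)) else right
        return node

    def estimate_mode(items, draw, n_rounds=2000):
        counts = Counter({x: 1 for x in items})   # uniform pseudo-counts
        for _ in range(n_rounds):
            x = draw()                 # hidden; accessed only via queries
            tree = huffman_tree(counts)  # short query paths for likely items
            counts[identify(tree, lambda s: x in s)] += 1
        return counts.most_common(1)[0][0]

    # Example: mode of a skewed distribution over six items.
    items = list(range(6))
    probs = [0.4, 0.2, 0.15, 0.1, 0.1, 0.05]
    print(estimate_mode(items, lambda: random.choices(items, probs)[0]))
    ```

    Rebuilding the tree each round keeps the sketch simple; frequent items end up near the root, so identifying them costs few questions, which is the information-acquisition gain the abstract alludes to.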

  • April 30, 2024 @ Bâtiment IMAG (442) -- [Seminar] Rémi Castera

    Who: Rémi Castera

    When: Tuesday, April 30

    Where: 442

    What: Correlation of Rankings in Matching Markets
