SEMINARS

Events in April 2024

  • April 3, 2024 @ Bâtiment IMAG (442) -- [Seminar] Victor Boone

    Who: Victor Boone

    When: Wednesday, April 3, 14:00-15:00

    Where: 447

    What: Learning MDPs with Extended Bellman Operators

    More: Efficiently learning Markov Decision Processes (MDPs) is difficult. When facing an unknown environment, where should one draw the line between repeating actions that have proven effective in the past (exploitation of your knowledge) and testing alternatives that may actually be better than what you currently believe (exploration of the environment)? A well-known way to bypass this dilemma is the "optimism in the face of uncertainty" principle: treat the score of an action as the largest value that is statistically plausible.

    The exploration-exploitation dilemma then becomes the problem of tuning optimism. In this talk, I will explain how optimism in MDPs can be rephrased entirely in terms of a single operator, embedding all the uncertainty about your environment within a single MDP. This is a story about "extended Bellman operators" and "extended MDPs", and about how one can achieve minimax optimal regret using this machinery. A small illustrative sketch follows below.
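
    To give a concrete flavor of this idea, here is a minimal sketch (not the speaker's exact construction) of one optimistic backup in the spirit of extended value iteration: for a fixed state-action pair, the operator maximizes the expected next-state value over an assumed L1 confidence ball of plausible transition vectors, as in UCRL2-style algorithms. The function name, the confidence-set shape, and the radius are illustrative assumptions.

        import numpy as np

        def optimistic_backup(u, p_hat, radius):
            """One (state, action) application of an extended Bellman
            operator: maximize p . u over the plausible transition set
            {p : ||p - p_hat||_1 <= radius} (illustrative sketch)."""
            order = np.argsort(-u)            # next states, best value first
            p = p_hat.astype(float).copy()
            best = order[0]
            # Put as much extra mass as the radius allows on the most
            # valuable next state ...
            p[best] = min(1.0, p_hat[best] + radius / 2.0)
            # ... then take the surplus away from the least valuable ones.
            surplus = p.sum() - 1.0
            for s in order[::-1]:
                if surplus <= 0:
                    break
                cut = min(p[s], surplus)
                p[s] -= cut
                surplus -= cut
            return float(p @ u)               # optimistic value of the action

        # Example: 3 next states, empirical transitions p_hat, value estimates u.
        u = np.array([1.0, 0.5, 0.0])
        p_hat = np.array([0.2, 0.3, 0.5])
        print(optimistic_backup(u, p_hat, radius=0.4))   # 0.55 > p_hat @ u = 0.35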

  • April 11, 2024 @ Bâtiment IMAG (442) -- [Seminar] Charles Arnal

    Who: Charles Arnal

    When: Thursday, April 11, 14:00-15:00

    Where: 442

    What: Mode Estimation with Partial Feedback

    More: The combination of lightly supervised pre-training and online fine-tuning has played a key role in recent AI developments. These new learning pipelines call for new theoretical frameworks. In this work, we formalize core aspects of weakly supervised and active learning with a simple problem: estimating the mode of a distribution using partial feedback. We show how entropy coding allows for optimal information acquisition from partial feedback, develop coarse sufficient statistics for mode identification, and adapt bandit algorithms to our new setting. Finally, we combine those contributions into a statistically and computationally efficient solution to our problem. A toy illustration of the setting appears below.
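
    As a toy illustration (not the speaker's algorithm), the sketch below estimates a mode when each draw X can only be probed through yes/no membership queries of the form "is X in S?". Building a Huffman code from the running empirical counts makes frequent outcomes cheap to identify, which is the entropy-coding intuition mentioned in the abstract; the function names, the smoothing, and the query budget are all assumptions.

        import heapq
        from collections import Counter

        def huffman_tree(counts):
            """Binary tree (nested pairs) over the support; frequent
            values end up close to the root."""
            heap = [(c, i, v) for i, (v, c) in enumerate(counts.items())]
            heapq.heapify(heap)
            tiebreak = len(heap)
            while len(heap) > 1:
                c1, _, a = heapq.heappop(heap)
                c2, _, b = heapq.heappop(heap)
                heapq.heappush(heap, (c1 + c2, tiebreak, (a, b)))
                tiebreak += 1
            return heap[0][2]

        def leaves(node):
            if not isinstance(node, tuple):
                return {node}
            return leaves(node[0]) | leaves(node[1])

        def identify(x, tree):
            """Find the hidden draw x using yes/no membership queries
            only; returns (value, number of queries spent)."""
            q = 0
            while isinstance(tree, tuple):
                q += 1                               # one query per level
                tree = tree[0] if x in leaves(tree[0]) else tree[1]
            return tree, q

        def estimate_mode(draws, support, budget):
            counts = Counter({v: 1 for v in support})   # additive smoothing
            spent = 0
            for x in draws:
                v, q = identify(x, huffman_tree(counts))
                spent += q
                if spent > budget:
                    break
                counts[v] += 1
            return counts.most_common(1)[0][0]

        # Example: the mode 'a' is found with few queries per draw.
        print(estimate_mode("aababcaacab", support="abc", budget=40))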

  • April 30, 2024 @ Bâtiment IMAG (442) -- [Seminar] Rémi Castera

    Who: Rémi Castera

    What: Correlation of Rankings in Matching Markets
