September 12, 2019 (1 event)

Partially Observable Markov Decision Processes with Finite Memory, by Bruno Ziliotto (CNRS, Paris); joint work with Krishnendu Chatterjee and Raimundo Saona (IST Austria). Bâtiment IMAG (442).

A Partially Observable Markov Decision Process (POMDP) is a discrete-time repeated decision problem where at each …
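The abstract is cut off in the source above. For concreteness, here is a minimal sketch of the object the talk studies: the belief-state (Bayes) update in a toy POMDP. Everything in it, the two-state model, the matrices `T` and `O`, and the helper `belief_update`, is a hypothetical illustration chosen only to make the definition concrete; none of it is material from the talk.

```python
import numpy as np

# Hypothetical 2-state POMDP: hidden states s in {0, 1}, actions a in {0, 1},
# observations o in {0, 1}.
# T[a][s, s'] : probability of moving from s to s' under action a.
# O[a][s', o] : probability of observing o in new state s' after action a.
T = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.3, 0.7]])]   # action 1
O = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # action 0
     np.array([[0.6, 0.4], [0.1, 0.9]])]   # action 1

def belief_update(b, a, o):
    """Bayes update of the belief over hidden states after taking action a
    and observing o: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_next = O[a][:, o] * (T[a].T @ b)
    return b_next / b_next.sum()

b = np.array([0.5, 0.5])          # uniform initial belief
for a, o in [(0, 1), (1, 0), (0, 0)]:
    b = belief_update(b, a, o)
    print(f"after action {a}, observation {o}: belief = {b.round(3)}")
```

The belief vector is the decision-maker's only usable state: since the true state is hidden, policies (with or without finite memory) act on such summaries of the observation history.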
September 19, 2019 (1 event)

Some theory on Bayesian neural networks, by Julyan Arbel (Mistis, Grenoble). Bâtiment IMAG (442).

In this talk, we first present seminal works at the basis of the theory of Bayesian neural networks. These include Radford Neal's result from the 1990s on the connection between Gaussian processes and wide neural networks, and recent extensions of this result to deep neural networks. In the second part, we focus on understanding priors in Bayesian neural networks at the unit level. More specifically, we investigate deep Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities. We establish that the induced prior distribution on the units, before and after activation, becomes increasingly heavy-tailed with the depth of the layer.
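The heavy-tail claim is easy to probe empirically. Below is a minimal Monte Carlo sketch (my own illustration, not code from the talk): sample weights from a Gaussian prior, push a fixed input through a deep ReLU network, and track the excess kurtosis of a pre-activation unit at each layer. The widths, depth, scaling, and sample count are all assumptions chosen for illustration.

```python
import numpy as np

# Assumptions: width 100, depth 10, He-scaled Gaussian weight priors
# N(0, 2/fan_in), one fixed random input, 5000 draws from the prior.
rng = np.random.default_rng(0)
width, depth, n_samples = 100, 10, 5_000
x = rng.normal(size=width)           # fixed input, reused across prior draws

units = np.zeros((depth, n_samples))
for i in range(n_samples):
    h = x
    for l in range(depth):
        W = rng.normal(scale=np.sqrt(2.0 / h.size), size=(width, h.size))
        pre = W @ h                  # pre-activation units at layer l
        units[l, i] = pre[0]         # track one unit's prior distribution
        h = np.maximum(pre, 0.0)     # ReLU

# Excess kurtosis is 0 for a Gaussian and grows if the prior on units
# becomes heavier-tailed with depth, as the talk establishes.
for l in range(depth):
    u = units[l]
    k = ((u - u.mean()) ** 4).mean() / u.var() ** 2 - 3.0
    print(f"layer {l + 1}: excess kurtosis ≈ {k:.2f}")
```

At the first layer the unit is a Gaussian linear combination of a fixed input, so its excess kurtosis is near zero; deeper units are mixtures over the random hidden activations, which is where the heavier tails come from.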