Cross-validation failure: small sample sizes lead to large error bars

Gael will give a talk at the next Neurospin unsupervised decoding meeting on 21/02/2017, 9h45-11h00, room 2032

Abstract: Recently, I have become convinced that cross-validation on a hundred or
fewer samples is not a reliable measure of predictive accuracy. In
addition, the techniques generally used to estimate its error bars or to
test for significant prediction are severely optimistic.
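As a rough illustration of the second point (a sketch of my own, with made-up simulation parameters, not material from the talk): the error bar typically reported, the standard deviation of the fold scores divided by the square root of the number of folds, treats the folds as independent, which they are not, since they share training data. Comparing it with the actual spread of the cross-validated score across independent draws of a 100-sample dataset shows how optimistic it is.

```python
# Hypothetical sketch (parameters are my own): the naive fold-based
# error bar of 5-fold cross-validation vs. the actual variability of
# the cross-validated score across independent 100-sample datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# A large "population" from which small studies are drawn.
X_pop, y_pop = make_classification(n_samples=20000, n_features=50,
                                   random_state=0)

cv_means, naive_ses = [], []
for _ in range(200):
    # One "study": 100 samples drawn from the population.
    idx = rng.choice(len(y_pop), size=100, replace=False)
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X_pop[idx], y_pop[idx], cv=5)
    cv_means.append(scores.mean())
    # Error bar typically reported: std of fold scores / sqrt(n_folds).
    naive_ses.append(scores.std() / np.sqrt(len(scores)))

print("naive per-study error bar: %.3f" % np.mean(naive_ses))
print("actual std across studies: %.3f" % np.std(cv_means))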

I would like to present very simple evidence of this unreliability,
which is intrinsic to the sample sizes that we work with. It is a
simple sampling-noise problem that cannot be alleviated without increasing
the number of samples.
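To make the sampling-noise argument concrete (a minimal sketch of my own, assuming a classifier whose true accuracy is fixed at 75%): measuring that accuracy on 100 held-out samples is a binomial draw, so its standard error is sqrt(0.75 × 0.25 / 100) ≈ 0.043, i.e. a 95% interval close to ±9 accuracy points, however the folds are arranged.

```python
# Minimal sketch (assumptions mine): even for a classifier of known,
# fixed 75% accuracy, measuring it on 100 test samples is a binomial
# draw, and the resulting error bars are large.
import numpy as np

rng = np.random.default_rng(0)
true_accuracy, n_samples = 0.75, 100

# 10,000 repetitions of "count correct predictions on 100 samples".
measured = rng.binomial(n_samples, true_accuracy, size=10_000) / n_samples

print("mean measured accuracy: %.3f" % measured.mean())
print("std (sampling noise):   %.3f" % measured.std())
print("95%% interval: [%.2f, %.2f]"
      % tuple(np.percentile(measured, [2.5, 97.5])))
```

No amount of clever fold arrangement removes this noise; only more samples shrink it.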

I want to have a discussion about what this means for the field and how
we should address the problem. I would also like to invite critical thinking
about aspects of the practice that I might have overlooked and that could
make it more robust.

I encourage many people to come, so that we can convince ourselves
whether or not there is a problem with the way we often work. Indeed, such
a problem would be troublesome for methods development as well as for
neuroscience research.

PhD defense of Andrés

Andrés successfully defended his PhD thesis “Ensembles of models in fMRI: stable learning in large-scale settings” on January 20th, in front of a committee comprising:

  • Martin Lindquist
  • Florent Krzakala
  • Christophe Phillips
  • Erwan le Pennec
  • Bertrand Thirion

Congratulations!