MEG/EEG Data Analysis
Last modified on: Fri, 12 Feb 10
From a signal processing point of view, detecting and extracting meaningful information from the measurements is a difficult task, because of the low signal-to-noise ratio and the presence of ongoing cerebral activity (the notion of a “noiseless signal” does not exist). Averaging across repetitions of the same experiment is often performed, leading to Averaged Evoked Response Potentials. Since the seminal work of Lehmann et al. on microstates, much effort has been devoted in the community to analyzing single-trial measurements, and to segmenting continuous stretches of data into pieces within which the signals share similar properties.
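To illustrate why averaging helps, the sketch below (illustrative Python with made-up amplitudes and trial counts, not actual M/EEG data) buries a fixed 1 µV response in much stronger simulated background activity; averaging N trials reduces the residual noise standard deviation roughly by a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-channel experiment: a fixed evoked response buried
# in independent ongoing activity, repeated over n_trials repetitions.
n_trials, n_samples = 100, 200
t = np.linspace(0.0, 0.5, n_samples)
evoked = 1e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)           # 1 uV Gaussian peak
noise = 5e-6 * rng.standard_normal((n_trials, n_samples))  # 5 uV background

trials = evoked + noise          # each trial: signal + ongoing activity
average = trials.mean(axis=0)    # averaged evoked response

# SNR of a single trial vs. the average: averaging 100 trials shrinks
# the noise standard deviation by about a factor of 10.
single_snr = np.abs(evoked).max() / noise[0].std()
avg_snr = np.abs(evoked).max() / (average - evoked).std()
```

The same arithmetic is also the limitation: the averaged response only retains activity that is time-locked across trials, which is precisely what motivates the single-trial methods mentioned above.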
Statistical methods must be adapted to the multidimensional nature of the data and to the heterogeneity of its dimensions (time, 3D space, trials, conditions, subjects)\footcite{miwakeichi-martinez-montes-etal:04}. Blind Source Separation techniques have been applied to M/EEG in order to separate the data into independent components, which may then be easier to interpret. Though well suited to artefact elimination, these methods rarely prove effective at revealing activities of interest.
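A minimal sketch of the blind source separation idea: two synthetic non-Gaussian sources are linearly mixed, then recovered by a from-scratch FastICA-style iteration after whitening. The sources, mixing matrix, and nonlinearity are all illustrative choices; real M/EEG pipelines use library implementations and many more channels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "sources" (a sine and a square wave), mixed into two
# "sensor" channels: the instantaneous mixing model X = A S assumed by BSS.
n = 2000
t = np.linspace(0, 8, n)
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))])
A = np.array([[1.0, 0.6], [0.5, 1.0]])   # unknown mixing matrix
X = A @ S

# Whiten: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# FastICA with a tanh nonlinearity, deflation scheme: each unit maximizes
# non-Gaussianity of w^T Z, orthogonal to the previously found units.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        u = Z.T @ w
        g, gp = np.tanh(u), 1.0 - np.tanh(u) ** 2
        w_new = (Z * g).mean(axis=1) - gp.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # deflation: orthogonalize
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

S_hat = W @ Z   # recovered components, up to permutation, sign and scale
```

The permutation/sign/scale ambiguity visible in the last line is exactly why interpreting the resulting components (artefact vs. activity of interest) remains a manual or heuristic step.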
Modelling the head and solving the forward problem
The holy grail of M/EEG research is to solve the inverse source reconstruction problem: to unveil, from surface recordings, the cortical regions responsible for the measured activity, along with their associated time courses. The related inverse problems are ill-posed, and their solutions depend strongly on the precision of the models relating the sources of electrical activity to the sensors.
There are three main ingredients to such models: sources, head tissues with their appropriate conductivity, and sensors.
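To make these ingredients concrete, the sketch below evaluates the simplest possible EEG forward model: the potential of a single current dipole in an infinite homogeneous conductor, V(r) = q·(r − r0) / (4πσ‖r − r0‖³). The conductivity value, sensor ring, and dipole moment are all made up for illustration; realistic head models add nested tissue compartments (brain, skull, scalp) with their own conductivities.

```python
import numpy as np

SIGMA = 0.33  # assumed conductivity, S/m (illustrative value)

def dipole_potential(sensors, r0, q, sigma=SIGMA):
    """Potential at each sensor for a dipole of moment q (A*m) at r0,
    in an infinite homogeneous conductor."""
    d = sensors - r0                      # (n_sensors, 3) displacement
    dist = np.linalg.norm(d, axis=1)
    return (d @ q) / (4 * np.pi * sigma * dist ** 3)

# "Sensors" on a ring of radius 0.1 m around a central dipole along x.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sensors = 0.1 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(8)])
r0 = np.zeros(3)
q = np.array([1e-8, 0.0, 0.0])

v = dipole_potential(sensors, r0, q)
# The potential is linear in q: stacking such responses for many source
# locations yields the lead-field (gain) matrix of distributed models.
```

Linearity in the source amplitudes is what makes the distributed inverse problem a (very underdetermined) linear system.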
Solving the inverse problem and analyzing the results
Source recovery from sensor measurements is an ill-posed inverse problem: formally, it is unstable, and in the distributed source case, non-unique. Constraints, or regularization, are necessary in order to guarantee a unique and stable solution. Choosing the proper type of regularization and constraints is the subject of intense research in the M/EEG community. The statistical assessment of the quality of solutions is also a crucial point, because, in functional neuroimaging in general, “ground truth” is difficult to obtain. Solutions using statistical parametric maps, Bayesian learning or permutation tests have been proposed.
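One classical regularized solution is the Tikhonov (minimum-norm) estimate: with lead field G and measurements m, take j = Gᵀ(GGᵀ + λI)⁻¹m. The sketch below (illustrative matrix sizes, a random stand-in for the lead field, and a hand-picked λ) shows how the λI term turns an underdetermined system, with far more sources than sensors, into one with a unique and stable solution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy distributed-source setup: many more sources than sensors, so
# G j = m alone has infinitely many solutions.
n_sensors, n_sources = 32, 500
G = rng.standard_normal((n_sensors, n_sources))   # stand-in lead field

j_true = np.zeros(n_sources)
j_true[[40, 300]] = [1.0, -0.7]                   # two active sources
m = G @ j_true + 0.01 * rng.standard_normal(n_sensors)

# Tikhonov / minimum-norm estimate; lam trades data fit against ||j||
# and is tuned (e.g. by cross-validation) in practice.
lam = 1.0
j_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), m)

# The estimate explains the data while keeping the smallest norm among
# near-solutions; without lam * I the choice would be non-unique.
residual = np.linalg.norm(G @ j_hat - m) / np.linalg.norm(m)
```

Note that the minimum-norm criterion is only one possible constraint: it yields a stable but spatially smeared estimate, which is why the choice of regularizer, and the statistical validation of the result, remain open research questions.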