MEG/EEG Data Analysis

Last modified on: Fri, 12 Feb 10

From a signal processing point of view, detecting and extracting meaningful information from the measurements is difficult because of the low signal-to-noise ratio and the presence of ongoing cerebral activity (the notion of a "noiseless signal" does not exist). Averaging across repetitions of the same experiment is therefore often performed, yielding averaged evoked response potentials. Since the seminal work of Lehmann et al. on microstates, much effort has been devoted in the community to analyzing single-trial measurements, or to segmenting continuous recordings into pieces within which the signals share similar properties.
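As a minimal sketch of why averaging helps, the snippet below (Python with NumPy; the waveform, noise level, and trial count are invented for illustration) simulates repeated trials of an evoked response buried in independent ongoing activity, then averages them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 trials of a 1 s epoch sampled at 250 Hz: a fixed evoked
# waveform buried in independent "ongoing activity" noise on each trial.
n_trials, sfreq = 100, 250
t = np.arange(sfreq) / sfreq
evoked = 2e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.02**2))  # peak at 300 ms
trials = evoked + 5e-6 * rng.standard_normal((n_trials, t.size))

# Averaging across trials attenuates uncorrelated background activity by
# roughly sqrt(n_trials), revealing the evoked response.
erp = trials.mean(axis=0)

print("single-trial noise std:", trials[0].std())
print("residual noise std after averaging:", (erp - evoked).std())
```

With 100 trials the residual noise is reduced by about a factor of 10, which is why averaged evoked potentials are usable even when single trials are not.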

Statistical methods must be adapted to the multidimensional nature of the data and to the heterogeneity of its dimensions (time, 3D space, trials, conditions, subjects) (Miwakeichi et al., 2004). Blind Source Separation techniques have been applied to M/EEG in order to separate the data into independent components, which may then be easier to interpret. Though well suited to artefact elimination, these methods rarely prove effective in revealing activities of interest.
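A hedged illustration of the Blind Source Separation idea, assuming scikit-learn's FastICA is available; the three sources, the mixing matrix, and the choice of which component to discard are all invented for this toy example:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)

# Hypothetical mixture: a rhythmic "neural" signal, a blink-like artefact,
# and line noise, linearly mixed onto three "sensors".
sources = np.c_[np.sin(10 * t),                       # alpha-like rhythm
                np.maximum(0, np.sin(1.5 * t)) ** 8,  # blink-like transients
                np.sign(np.sin(50 * t))]              # line-noise square wave
mixing = rng.standard_normal((3, 3))
sensors = sources @ mixing.T                          # (n_samples, n_sensors)

# FastICA estimates statistically independent components; an artefactual
# component can then be zeroed out and the data re-mixed to sensor space.
ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
components = ica.fit_transform(sensors)
components[:, 0] = 0                     # drop one component (identified by
cleaned = ica.inverse_transform(components)  # inspection in practice)
```

In real M/EEG pipelines the artefactual components are identified by their topographies and time courses (blinks, cardiac activity, line noise) before removal.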

Modelling the head and solving the forward problem

The holy grail of M/EEG research is to solve the inverse source reconstruction problem: unveiling from surface recordings the cortical regions responsible for the measured activity, along with their associated time courses. The related inverse problems are ill-posed, and their solutions depend strongly on the precision of the models relating the sources of electrical activity to the sensors.

There are three main ingredients to such models: sources, head tissues with their appropriate conductivity, and sensors.

Isolated dipoles were introduced by Scherg and von Cramon to model brain activity; later, Dale and Sereno proposed a distributed source model, on the cortical mantle segmented from MRI. Conductivity models are generally considered piecewise constant and isotropic, or may incorporate anisotropy to handle the skull or the white matter.
Appropriate computational methods are required for solving the M/EEG forward problem: either surface-based Boundary Element Methods (BEM) or volume-based Finite Element or Finite Difference Methods. Until recently, the state of the art in BEM was a double-layer formulation, with an accuracy improvement provided by the Isolated Skull Approach. We have proposed a new, symmetric BEM which improves over the state of the art in terms of accuracy.
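To convey the flavour of the source-to-sensor mapping that BEM and FEM generalize to realistic head geometries, the sketch below evaluates the simplest analytic forward model: the potential of a current dipole in an infinite homogeneous conductor. The dipole moment, positions, and conductivity value are illustrative, not taken from any real model:

```python
import numpy as np

def dipole_potential(r, r0, q, sigma=0.33):
    """Potential (V) of a current dipole q (A*m) at r0 in an infinite
    homogeneous conductor of conductivity sigma (S/m):
        V(r) = q . (r - r0) / (4 * pi * sigma * |r - r0|^3)
    Realistic head models replace this closed form with BEM/FEM solutions
    for piecewise-constant (or anisotropic) conductivity."""
    d = np.atleast_2d(r) - r0
    dist3 = np.linalg.norm(d, axis=1) ** 3
    return (d @ q) / (4 * np.pi * sigma * dist3)

# Hypothetical example: a 10 nA*m radial dipole 7 cm from the head centre,
# sampled at two "electrode" positions (metres).
r0 = np.array([0.0, 0.0, 0.07])
q = np.array([0.0, 0.0, 10e-9])
electrodes = np.array([[0.0, 0.0, 0.10], [0.05, 0.0, 0.09]])
print(dipole_potential(electrodes, r0, q))  # potentials in volts
```

Even this crude model reproduces the microvolt-scale amplitudes and the rapid fall-off with distance that make the forward modelling accuracy discussed above so critical.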
Finite Element Methods (FEM) are also being studied for M/EEG because of their ability to account for anisotropic media. The cumbersome meshing procedure associated with the FEM should be alleviated by our recent development of the Implicit Mesh FEM.

Solving the inverse problem and analyzing the results

Source recovery from sensor measurements is an ill-posed inverse problem: formally, it is unstable and, in the distributed source case, non-unique. Constraints, or regularization, are necessary to guarantee a unique and stable solution. Choosing the proper type of regularization and constraints is the subject of intense research in the M/EEG community. The statistical assessment of the quality of solutions is also crucial, because in functional neuroimaging "ground truth" is generally difficult to obtain. Solutions using statistical parametric maps, Bayesian learning, or permutation tests have been proposed.
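As one concrete instance of regularization, a minimum-norm (Tikhonov, L2) estimate can be sketched for a toy under-determined problem. The gain matrix, source configuration, noise level, and regularization parameter below are all arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: 20 sensors, 500 candidate sources. G has a large null space,
# so the inverse problem is non-unique without regularization.
n_sensors, n_sources = 20, 500
G = rng.standard_normal((n_sensors, n_sources))   # gain ("lead field") matrix
x_true = np.zeros(n_sources)
x_true[[40, 300]] = [1.0, -0.7]                   # two active sources
m = G @ x_true + 0.05 * rng.standard_normal(n_sensors)

# Minimum-norm estimate:
#   x_hat = G^T (G G^T + lam * I)^{-1} m
# lam trades data fit against solution energy; choosing it well (e.g. by
# cross-validation or Bayesian evidence) is itself a research question.
lam = 0.1
x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), m)

print("relative residual:",
      np.linalg.norm(G @ x_hat - m) / np.linalg.norm(m))
```

The estimate fits the measurements well but spreads energy over many sources, which is precisely why the choice of constraints (sparsity, spatial smoothness, cortical orientation) remains an active topic.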