Papers by Jérémie Mattout

Journal of Neural Engineering, 2019
Objective. Going adaptive is a major challenge for the field of Brain-Computer Interfaces (BCI). This entails a machine that optimally articulates inference about the user's intentions and its own actions. Adaptation can operate over several dimensions, which calls for a generic and flexible framework. Approach. We appeal to one of the most comprehensive computational approaches to (adaptive) brain function: the Active Inference (AI) framework. It entails an explicit (probabilistic) model of the user that the machine interacts with, here engaged in a P300-spelling task. This takes the form of a discrete input-output state-space model establishing the link between the machine's (i) observations (a P300 or Error Potential, for instance), (ii) representations (of the user's intentions to spell or pause), and (iii) actions (to flash, spell or switch off the application). Main results. Using simulations with real EEG data from 18 subjects, results demonstrate the ability of AI to yield a significant increase in bit rate (17%) over state-of-the-art approaches such as dynamic stopping. Significance. Thanks to its flexibility, this single model makes it possible to implement optimal (dynamic) stopping, but also optimal flashing (i.e. active sampling), automated error correction, and switching off when the user no longer looks at the screen. Importantly, this approach enables the machine to flexibly arbitrate between all these possible actions. We demonstrate AI as a unifying and generic framework for implementing flexible interaction in a given BCI context.
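The core belief-updating step behind dynamic stopping in a P300 speller can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `bayes_update`, the detection probabilities `p_hit`/`p_false`, and the 6x6 grid are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): maintain a posterior
# over candidate targets and refine it after each flash, given the binary
# output of a (hypothetical) P300 classifier.

def bayes_update(belief, flashed, detected, p_hit=0.7, p_false=0.2):
    """One Bayesian update of the target posterior after a flash.

    belief   : current probabilities over the n candidate items
    flashed  : indices of the items that were just flashed
    detected : whether the P300 classifier reported a response
    p_hit    : assumed P(detection | target flashed)      -- illustrative value
    p_false  : assumed P(detection | target not flashed)  -- illustrative value
    """
    n = len(belief)
    in_flash = np.isin(np.arange(n), flashed)
    like = np.where(in_flash,
                    p_hit if detected else 1.0 - p_hit,
                    p_false if detected else 1.0 - p_false)
    post = like * belief
    return post / post.sum()

# Uniform prior over a 6x6 grid; flash item 0 and observe a detection.
belief = np.full(36, 1.0 / 36)
belief = bayes_update(belief, [0], detected=True)
print(round(belief[0], 4))   # 0.7 / (0.7 + 35 * 0.2) -> 0.0909
```

Dynamic stopping then amounts to repeating such updates until `belief.max()` exceeds a confidence threshold, at which point the machine spells the most probable item; the paper's Active Inference scheme additionally chooses *which* items to flash so as to resolve uncertainty fastest.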

Decision-Making in a Changing World: A Study in Autism Spectrum Disorders
Journal of Autism and Developmental Disorders, 2014
Learning to deal with the unexpected is essential for adapting to a social, and therefore often unpredictable, environment. Fourteen adults with autism spectrum disorders (ASD) and 15 controls underwent a decision-making task aimed at investigating the influence of either a social or a non-social environment, and its interaction with either a stable (constant probabilities) or an unstable (changing probabilities) context, on their performance. Participants with ASD had difficulty accessing the underlying statistical rules in the unstable context, a deficit especially pronounced in the social environment. These results suggest that the difficulties people with ASD encounter in their social life might be caused by impaired processing of social cues and by the unpredictability associated with the social world.

Journal of Neural Engineering, 2011
A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables direct communication between humans and computers through the decoding of brain activity. As such, event-related potentials (ERPs) like the P300 can be obtained with an oddball paradigm whose targets are selected by the user. This paper deals with methods to reduce the required set of EEG sensors in the P300 speller application. A reduced number of sensors yields more comfort for the user, decreases installation time, may substantially reduce the financial cost of the BCI setup, and may reduce the power consumption of wireless EEG caps. Our new approach to selecting relevant sensors is based on backward elimination using a cost function based on the signal to signal-plus-noise ratio, after spatial filtering. We show that this cost function selects sensor subsets that provide a better speller recognition rate during the test sessions than subsets selected on the basis of classification accuracy. We validate our selection strategy on data from 20 healthy subjects.
H. Cecotti, B. Rivet, M. Congedo, C. Jutten (GIPSA-lab, CNRS UMR 5216, Grenoble Universities, F-38402 Saint Martin d'Heres, France); O. Bertrand, E. Maby, J. Mattout (INSERM U821, F-69500 Lyon, France; Institut Federatif des Neurosciences, Lyon, France)
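The greedy backward-elimination scheme described in the abstract can be sketched generically. This is a hedged illustration: the paper's SSNR-based cost is replaced here by an arbitrary caller-supplied `score` callable, and all names are assumptions for the example.

```python
# Hedged sketch of greedy backward elimination for sensor selection:
# repeatedly drop the sensor whose removal degrades the cost function the
# least, until the desired subset size is reached. `score` stands in for the
# paper's SSNR-based cost (computed after spatial filtering).

def backward_elimination(n_sensors, score, keep=8):
    subset = list(range(n_sensors))
    while len(subset) > keep:
        # Try removing each remaining sensor; keep the best-scoring subset.
        subset = max(
            ([s for s in subset if s != drop] for drop in subset),
            key=score,
        )
    return subset

# Toy score: here, higher-indexed sensors are simply "better".
score = lambda subset: sum(subset)
print(backward_elimination(4, score, keep=2))   # [2, 3]
```

The point of the abstract is precisely that *which* `score` you plug in matters: scoring subsets by SSNR after spatial filtering selected better sensors than scoring them by raw classification accuracy.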

NeuroImage, 2014
There are now a number of non-invasive methods to image human brain function in vivo. However, the accuracy of these images remains unknown and can currently only be estimated through the use of invasive recordings to generate a functional ground truth. Neuronal activity follows grey matter structure, and accurate estimates of neuronal activity will have stronger support from accurate generative models of anatomy. Here we introduce a general framework that, for the first time, enables the spatial distortion of a functional brain image to be estimated empirically. We use a spherical harmonic decomposition to modulate each cortical hemisphere from its original form towards progressively simpler structures, ending in an ellipsoid. Functional estimates that are not supported by the simpler cortical structures have less inherent spatial distortion. This method allows us to compare directly between magnetoencephalography (MEG) source reconstructions based upon different assumption sets, without recourse to functional ground truth.

NeuroImage, 2007
We address some key issues entailed by population inference about responses evoked in distributed brain systems using magnetoencephalography (MEG). In particular, we look at model selection issues at the within-subject level and feature selection issues at the between-subject level, using responses evoked by intact and scrambled faces around 170 ms (M170). We compared the face validity of subject-specific forward models and their summary statistics in terms of how estimated responses reproduced over subjects. At the within-subject level, we focused on the use of multiple constraints, or priors, for inverting distributed source models. We used restricted maximum likelihood (ReML) estimates of prior covariance components (in both sensor and source space) and show that their relative importance is conserved over subjects. At the between-subject level, we used standard anatomical normalization methods to create posterior probability maps that furnish inference about regionally specific population responses. We used these to compare different summary statistics, namely: (i) whether to test for differences between condition-specific source estimates, or to test the source estimate of differences between conditions; and (ii) whether to accommodate differences in source orientation by using signed or unsigned (absolute) estimates of source activity.

Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex
Brain, 2013
Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices, as well as decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain.
The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide the first evidence of altered functioning of the auditory cortices during pitch perception and memory in congenital amusia. They further support the hypothesis that in neurodevelopmental disorders impacting high-level functions (here, musical abilities), abnormalities in cerebral processing can be observed in early brain responses.
First results on the GEM operated at low gas pressures
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 1998
We report on the properties of the Gaseous Electron Multiplier (GEM) operated at 10–40 Torr of isobutane and methane. We found stable operation at gains of a few thousand, fast response, and effective photon-feedback reduction. The transmission of single electrons through the GEM apertures was studied. Ion-induced feedback from a wire chamber following the GEM was found to limit the total two-stage multiplication at high GEM gains. Stable double-GEM operation with reduced ion feedback was demonstrated. Some applications are discussed.

Neuroimage, 2006
In Amblard et al. [Biomagnetic source detection by maximum entropy and graphical models, IEEE Trans. Biomed. Eng. 55 (3), 427–442], the authors introduced the maximum entropy on the mean (MEM) as a methodological framework for solving the magnetoencephalography (MEG) inverse problem. The main component of the MEM is a reference probability density that enables one to include all kinds of prior information on the source intensity distribution to be estimated. This reference law also encompasses the definition of a model. We consider a distributed source model together with a clustering hypothesis that assumes functionally coherent dipoles. The reference probability distribution is defined as a prior parceling of the cortical surface. In this paper, we present a data-driven approach for parceling the cortex into regions that are functionally coherent. Based on the recently developed multivariate source prelocalization (MSP) principle [Mattout, J., Pelegrini-Issac, M., Garnero, L., Benali, H., 2005. Multivariate source prelocalization (MSP): Use of functionally informed basis functions for better conditioning the MEG inverse problem. NeuroImage 26 (2), 356–373], the data-driven clustering (DDC) of the dipoles provides an efficient parceling of the sources as well as an estimate of the parameters of the initial reference probability distribution. On simulated MEG data, the DDC is shown to further improve the MEM inverse approach, as evaluated with two different iterative algorithms using classical error metrics as well as ROC (receiver operating characteristic) curve analysis. The MEM solution is also compared to a LORETA-like inverse approach. The data-driven clustering allows one to take full advantage of the MEM formalism.
Its main strengths lie in the flexible probabilistic way of introducing priors and in the notion of spatially coherent regions of activation. The latter reduces the dimensionality of the problem and, in so doing, narrows the gap between the two main types of inverse methods: the popular dipolar approaches and the distributed ones.
This paper presents an original multivariate approach for the group analysis of functional magnetic resonance imaging (fMRI) experiments. The proposed hierarchical method avoids the use of any spatial normalization. Rather, it relies on the analysis of a particular set of time series whose variations are common to all subjects. This common set of time series is extracted from the fMRI data of all subjects considered simultaneously, using a generalized fixed-effect model. Then, a multivariate regression model is applied for analyzing these time series and estimating activation maps associated with each subject. The method is illustrated using real fMRI data.

IEEE Transactions on Biomedical Engineering, 2006
Characterizing the cortical activity sources of electroencephalography (EEG)/magnetoencephalography (MEG) data is a critical issue, since it requires solving an ill-posed inverse problem that does not admit a unique solution. Two main, different and complementary source models have emerged: equivalent current dipoles (ECD) and distributed linear (DL) models. While ECD models remain highly popular since they provide an easy way to interpret the solutions, DL models (also referred to as imaging techniques) are known to be more realistic and flexible. In this paper, we show how these two representations of the brain's electromagnetic activity can be cast into a common general framework, yielding an optimal description and estimation of the EEG sources. From this extended source mixing model, we derive a hybrid approach whose key aspect is the separation between the temporal and spatial characteristics of brain activity, which allows the number of DL model parameters to be dramatically reduced. Furthermore, the spatial profile of the sources, as a temporally invariant map, is estimated using the entire time window of data, significantly enhancing the information available about the spatial aspect of the EEG inverse problem. A Bayesian framework is introduced to incorporate distinct temporal and spatial constraints on the solution and to estimate both the parameters and hyperparameters of the model. Using simulated EEG data, the proposed inverse approach is evaluated and compared with standard distributed methods using both classical criteria and ROC curves.
Data-Driven Cortex Parcelling: A Regularization Tool for the EEG/MEG Inverse Problem
Recent inverse approaches based on the distributed source model require the use of functionally coherent cortical regions. In this note, we present an EEG/MEG data-driven method for parceling the cortical surface into a set of connected and functionally coherent components. We consider the realistic three-dimensional geometry of the cortical sheet and define functional coherence criteria that rely upon the recently proposed multivariate source prelocalization (MSP). We also describe an automatic way of estimating the optimal parcelling hyperparameter, given the EEG/MEG measurements. This new approach leads to a restricted and functionally meaningful description of the inverse solution space, which might be further exploited for constraining the source reconstruction process itself.

IEEE Transactions on Signal Processing, 2005
Characterizing the cortical activity from electro- and magneto-encephalography (EEG/MEG) data requires solving an ill-posed inverse problem that does not admit a unique solution. As a consequence, the use of functional neuroimaging, for instance functional Magnetic Resonance Imaging (fMRI), constitutes an appealing way of constraining the solution. However, the match between bioelectric and metabolic activities is desirable but not assured. Therefore, the introduction of spatial priors derived from other functional modalities into the EEG/MEG inverse problem should be considered with caution. In this paper, we propose a Bayesian characterization of the relevance of fMRI-derived prior information with regard to the EEG/MEG data. This is done by quantifying the adequacy of this prior to the data, compared with that obtained using a noninformative prior instead. This quantitative comparison, using the so-called Bayes factor, allows us to decide whether or not the informative prior should be included in the inverse solution. We validate our approach using extensive simulations, where fMRI-derived priors are built as perturbed versions of the simulated EEG sources. Moreover, we show how this inference framework can be generalized to optimize the way the informative prior is incorporated.
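The Bayes-factor logic of the abstract can be illustrated on the simplest possible case. This is a minimal numeric sketch, not the paper's model: a single Gaussian observation with a Gaussian prior, for which the marginal likelihood has a closed form, so the Bayes factor reduces to a ratio of two Gaussian densities. All parameter values are assumptions chosen for illustration.

```python
import math

# Minimal illustration of a Bayes factor between two priors for one datum
# y ~ N(theta, s2), with theta ~ N(m, v) a priori. The marginal likelihood is
# then N(y; m, v + s2), so the Bayes factor is a ratio of Gaussian densities.

def gaussian_pdf(y, mean, var):
    return math.exp(-0.5 * (y - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def bayes_factor(y, s2, prior_informative, prior_flat):
    """BF > 1 means the data favour the informative prior; priors are (m, v)."""
    m1, v1 = prior_informative
    m0, v0 = prior_flat
    return gaussian_pdf(y, m1, v1 + s2) / gaussian_pdf(y, m0, v0 + s2)

# An "fMRI-like" prior centred near the observation beats a vague one...
print(bayes_factor(y=1.0, s2=1.0,
                   prior_informative=(1.2, 0.5),
                   prior_flat=(0.0, 100.0)) > 1)     # True
# ...while a prior that contradicts the data loses to the vague one,
# which is the paper's criterion for discarding a misleading fMRI prior.
print(bayes_factor(y=10.0, s2=1.0,
                   prior_informative=(0.0, 0.5),
                   prior_flat=(0.0, 100.0)) > 1)     # False
```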

Neuroimage, 2005
Spatially characterizing and quantifying the brain's electromagnetic response using MEG/EEG data remains a critical issue, since it requires solving an ill-posed inverse problem that does not admit a unique solution. To overcome this lack of uniqueness, inverse methods have to introduce prior information about the solution. Most existing approaches are directly based upon extrinsic anatomical and functional priors, and usually attempt to simultaneously localize and quantify brain activity. By contrast, this paper deals with a preprocessing tool which aims at better conditioning the source reconstruction process by relying only upon intrinsic knowledge (a forward model and the MEG/EEG data itself) and focusing on the key issue of localization. Based on a discrete and realistic anatomical description of the cortex, we first define functionally Informed Basis Functions (fIBF) that are subject-specific. We then propose a multivariate method which exploits these fIBF to calculate a probability-like coefficient of activation associated with each dipolar source of the model. This estimated distribution of activation coefficients may then be used as an intrinsic functional prior, either by taking these quantities into account in a subsequent inverse method, or by thresholding the set of probabilities in order to reduce the dimension of the solution space. These two ways of constraining the source reconstruction process may naturally be coupled. We describe the proposed Multivariate Source Prelocalization (MSP) approach and illustrate its performance on both simulated and real MEG data. Finally, the better conditioning induced by the MSP process in a classical regularization scheme is extensively and quantitatively evaluated.

Adaptive training session for a P300 speller brain–computer interface
Journal of Physiology-Paris
With a brain-computer interface (BCI), it is nowadays possible to achieve a direct pathway between the brain and computers through the analysis of particular brain activities. The detection of event-related potentials (ERPs), like the P300 in the oddball paradigm exploited in the P300 speller, provides a way to create BCIs by assigning each detected ERP to a command. Due to the noise present in the electroencephalographic signal, the detection of an ERP and its different components requires efficient signal processing and machine learning techniques. As a consequence, a calibration session is needed for training the models, which can be a drawback if it lasts too long. Although the model depends on the subject, the goal is to provide a model for P300 detection that is reliable over time. In this study, we propose a new method to evaluate the optimal number of symbols (i.e. the number of ERPs that shall be detected with a determined target probability) that should be spelt during the calibration process. The goal is to provide a usable system with a minimum calibration duration, and one that can automatically switch between the training and online sessions. The method allows the number of training symbols to be adjusted adaptively to each subject. It was evaluated on data recorded from 20 healthy subjects. This procedure drastically reduces the calibration session: eight symbols spelt during the training session yield an initialized system with an average accuracy of 80% after five epochs.
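The adaptive-calibration idea above can be sketched as a simple stopping rule. This is a hedged illustration: the accuracy estimator here is a stand-in (a fixed toy learning curve), not the paper's cross-validation procedure, and `calibration_length` is a name invented for the example.

```python
# Hedged sketch of adaptive calibration: after each spelt training symbol,
# re-estimate classifier accuracy and stop calibrating once a target level is
# reached, at which point the system can switch to the online session.

def calibration_length(estimate_accuracy, target=0.8, max_symbols=30):
    """Number of training symbols needed to reach `target` estimated accuracy."""
    for n in range(1, max_symbols + 1):
        if estimate_accuracy(n) >= target:
            return n
    return max_symbols   # fall back to the full calibration session

# Toy learning curve: accuracy saturates as training symbols accumulate.
curve = lambda n: 1.0 - 0.5 * (0.8 ** n)
print(calibration_length(curve, target=0.8))   # 5
```

With this toy curve the rule stops after five symbols; the paper's point is that the stopping index is subject-specific, so each user trains only as long as their own data require.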

Brain Topography
A challenge in designing a Brain-Computer Interface (BCI) is the choice of channels, i.e. the most relevant sensors. Although a setup with many sensors can be more efficient for the detection of an Event-Related Potential (ERP) like the P300, it is relevant to consider only a small number of sensors for a commercial or clinical BCI application. Indeed, a reduced number of sensors can naturally increase user comfort by reducing the time required to install the EEG (electroencephalogram) cap, and can decrease the price of the device. In this study, the influence of spatial filtering during the process of sensor selection is addressed by comparing three approaches: two of them maximize the Signal to Signal-plus-Noise Ratio (SSNR) for the different sensor subsets, while the third maximizes the difference between the averaged P300 and non-P300 waveforms. We show that the locations of the most relevant sensor subsets for the detection of the P300 are highly dependent on the use of spatial filtering. Applied to data from 20 healthy subjects, this study shows that subsets obtained by suppressing sensors according to their individual SSNR are less efficient than subsets obtained by suppressing sensors according to their contribution once the selected sensors are combined to enhance the signal. In other words, it highlights the difference between estimating the P300 projection on the scalp and evaluating the most efficient sensor subsets for a P300-BCI. Finally, this study explores the issue of channel commonality across subjects. The results support the conclusion that using spatial filters during the sensor selection procedure allows better sensors to be selected for a visual P300 Brain-Computer Interface.
IRBM, 2011
Inserm U1028, Brain Dynamics and Cognition team, Lyon Neuroscience Research Centre, Centre Hospitalier Le Vinatier, Bâtiment 452, 95 boulevard Pinel

Journal of Cognitive Neuroscience, 2009
Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory-visual integration occurs during the early stages of perception, as in adults. The mismatch response was similar in timing and topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domains. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore how complex and structured the human cortical organization that sustains communication is from the first weeks of life.

Neuroimage, 2008
This paper describes an application of hierarchical or empirical Bayes to the distributed source reconstruction problem in electro- and magnetoencephalography (EEG and MEG). The key contribution is the automatic selection of multiple cortical sources with compact spatial support, specified in terms of empirical priors. This obviates the need to use priors with a specific form (e.g., smoothness or minimum norm) or with spatial structure (e.g., priors based on depth constraints or functional magnetic resonance imaging results). Furthermore, the inversion scheme allows for a sparse solution for distributed sources, of the sort enforced by equivalent current dipole (ECD) models. This means the approach automatically selects either a sparse or a distributed model, depending on the data. The scheme is compared with conventional applications of Bayesian solutions to quantify the improvement in performance.