A beginner's guide to analyzing MEG data using MNE-Python
If you have questions or want to start a discussion, please use the Disqus comment system below the post. If you wish to contribute or correct the contents (more than welcome!), please open an issue or create a pull request.
Be aware that this is my learning journey. The materials will change as my knowledge of MEG and MNE-Python grows. If something looks wrong, it could be wrong. Please let me know if something looks confusing!
- `scripts`: Python scripts for processing
- `viz`: Python scripts and Jupyter notebooks for visualization
- `README.md`: instructions and some notes (Chinese translation available)
Usually, DICOM files (e.g., xxxx.IMG from Siemens MRI scanners) are provided after MRI scans. Before running the FreeSurfer anatomical reconstruction, convert the DICOM files into NIfTI format.
Use any software you like. I currently (2020) use dcm2niix, included in MRIcroGL. For older datasets (e.g., data collected in 2016), I used dcm2nii from MRIcron 2MAY2016. Mysteriously, dcm2nii works fine on old data, while dcm2niix adds additional bytes to the NIfTI files and FreeSurfer is sometimes unable to process them.
Though `mri_convert` in FreeSurfer can convert DICOM to NIfTI, the output files are sometimes problematic and FreeSurfer fails to run the reconstruction on them.
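If you prefer to script this step, a minimal sketch (directory and output names are placeholders; dcm2niix must be on your PATH) calling dcm2niix from Python might look like:

```python
import subprocess

# Convert a folder of DICOM files to compressed NIfTI with dcm2niix.
# "dicom_in" and "nifti_out" are placeholder directories; "-z y" gzips the output.
subprocess.run(
    ["dcm2niix", "-z", "y", "-f", "sample", "-o", "nifti_out", "dicom_in"],
    check=True,
)
```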
Use the command line:
recon-all -all -s sample -i sample.nii.gz
Or use Python scripts to call FreeSurfer.
If the reconstruction looks incorrect, you may need to manually adjust the results or the parameters used in the reconstruction. Refer to "My watershed BEM meshes look incorrect" for advice.
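For example, a minimal sketch (subject name and input file are placeholders; FreeSurfer and SUBJECTS_DIR must already be set up) that runs recon-all from Python:

```python
import subprocess

# Run the FreeSurfer anatomical reconstruction for one subject.
subprocess.run(
    ["recon-all", "-all", "-s", "sample", "-i", "sample.nii.gz"],
    check=True,
)
```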
- `0_fetch_dataset.py` to get the dataset
- `01_anatomical_construction.py`
- The reconstruction takes the converted MRI files (`.nii.gz`) as input.
- `recon-all` takes hours (~7 hours on AMD Ryzen 5 3600) to complete!
- Use `mne.parallel.parallel_func` to loop through all the subjects.

There are two ways to create BEM surfaces:
- `mne.bem.make_flash_bem`: requires additional processing to prepare fast low-angle shot (FLASH) images; use `mne.bem.convert_flash_mris` and `mne.bem.make_flash_bem`.
- `mne.bem.make_watershed_bem`: creates BEM surfaces using the FreeSurfer watershed algorithm (could be less accurate).

After making the BEM surfaces, use them to create the BEM model, compute the BEM solution, and set up the source space.
Usually, ico4 (2562 sources per hemisphere) is precise enough and efficient for source reconstruction, but ico5 (10242 sources per hemisphere) can be used for extra precision. This demo uses ico4 (`bem_ico = 4` in `config.py`).

- Existing output files are not overwritten (`overwrite=False`).

02_setup_source_space.py
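A rough sketch of the watershed route, the BEM model/solution, and the source space setup (subject name, `subjects_dir`, and output file names are placeholders):

```python
import mne

subject, subjects_dir = "sample", "subjects"  # placeholders

# Create BEM surfaces with the FreeSurfer watershed algorithm.
mne.bem.make_watershed_bem(subject=subject, subjects_dir=subjects_dir, overwrite=False)

# A single-layer (inner skull) conductor model is usually sufficient for MEG.
model = mne.make_bem_model(subject=subject, ico=4, conductivity=(0.3,),
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
mne.write_bem_solution(f"{subject}-bem-sol.fif", bem)

# Set up the source space with ico4 spacing (2562 sources per hemisphere).
src = mne.setup_source_space(subject, spacing="ico4", subjects_dir=subjects_dir)
mne.write_source_spaces(f"{subject}-ico4-src.fif", src)
```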
If you are from NTU (National Taiwan University) like me, the data are assumed to have been Maxwell-filtered using SSS (or tSSS) with MaxFilter provided by Elekta. If cHPI recordings are available, MaxFilter can also perform movement compensation.
This tutorial utilizes the sample dataset and uses MNE to Maxwell-filter the raw data.
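A minimal sketch of Maxwell filtering with MNE (file names are placeholders; fine-calibration and cross-talk files are optional):

```python
import mne

raw = mne.io.read_raw_fif("sample_audvis_raw.fif", preload=True)

raw_sss = mne.preprocessing.maxwell_filter(
    raw,
    calibration=None,  # path to the fine-calibration file, if available
    cross_talk=None,   # path to the cross-talk file, if available
    st_duration=None,  # set to e.g. 10 (seconds) to enable tSSS
)
raw_sss.save("sample_audvis_sss_raw.fif", overwrite=True)
```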
Run `0_fetch_dataset.py` to get the sample dataset if you have not already done so.

The structure of your study directory should look like:
.
├── MEG
│ └── sample
├── MRI
│ └── sample
├── results
├── scripts
├── subjects
└── viz
Before anything else, take a look at the data to decide whether they are worth processing. Also, annotate bad data segments so that bad epochs can be dropped later.
03-1_annotate.py
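A minimal sketch (file names are placeholders) of interactive inspection and saving the annotations:

```python
import mne

raw = mne.io.read_raw_fif("sample_audvis_sss_raw.fif")

# Open the interactive browser; press 'a' to enter annotation mode and
# mark bad spans (e.g., "BAD_segment"), then close the window.
raw.plot(block=True)

# Save the annotations so they can be reloaded before epoching.
raw.annotations.save("sample-annot.fif")
```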
Most event-related brain signals are below 40 Hz. A 40 Hz low-pass filter therefore does not affect most brain signals of interest while attenuating the powerline frequency (60 Hz in Taiwan, 50 Hz in some countries) and all HPI coil frequencies (above 100 Hz). For ICA-based artifact repair, the EOG channels are further high-pass filtered at 1 Hz to remove slow drifts.
03-2_filter.py
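A minimal sketch of the filtering step described above (file names are placeholders):

```python
import mne

raw = mne.io.read_raw_fif("sample_audvis_sss_raw.fif", preload=True)

# Low-pass everything at 40 Hz.
raw.filter(l_freq=None, h_freq=40)

# Further high-pass the EOG channels at 1 Hz to remove slow drifts before ICA.
raw.filter(l_freq=1, h_freq=None, picks="eog")

raw.save("sample_filt_raw.fif", overwrite=True)
```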
ICA is a blind source separation technique that maximizes the statistical independence between components. Because of their peaky distributions, eye-blink and heartbeat artifacts can be easily removed using ICA. Because ICA can be affected by high-amplitude artifacts, autoreject (global) is used on the raw data to determine the rejection threshold before fitting ICA to the data.
04_ica.py
04-2_viz_ica.py
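A minimal sketch (file names and ICA settings are assumptions; requires the `autoreject` package) of estimating a global rejection threshold and then fitting ICA:

```python
import mne
from autoreject import get_rejection_threshold
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("sample_filt_raw.fif", preload=True)

# autoreject (global) works on epochs, so cut the continuous data into
# fixed-length segments and estimate the rejection threshold from them.
events_tmp = mne.make_fixed_length_events(raw, duration=1.0)
epochs_tmp = mne.Epochs(raw, events_tmp, tmin=0.0, tmax=1.0,
                        baseline=None, preload=True)
reject = get_rejection_threshold(epochs_tmp, ch_types=["mag", "grad"])

# Fit ICA on the raw data, ignoring segments exceeding the threshold.
ica = ICA(n_components=0.99, method="fastica", random_state=97)
ica.fit(raw, reject=reject)
ica.save("sample-ica.fif")
```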
Extract events using `mne.find_events`
and create epochs based on the events. Bad epochs are dropped according to the annotation file. ECG and EOG artifacts are detected at this stage, and the corresponding ICA components are excluded to prevent the noise from spreading to all signals. The CTPS method is used to detect ECG-related ICs, with the threshold computed automatically (implemented in MNE v0.21). The maximum number of excluded ICs is limited to 3 for ECG (QRS complex) and 3 for EOG. When creating epochs, it is common practice to apply baseline correction so that any constant offsets in the baseline are removed. After creating the epochs, autoreject (local)
is used to drop bad epochs and interpolate bad channels. Note that running autoreject
is not required but is recommended to denoise the data. Refer to the autoreject paper when in doubt.
- `l_freq=10` and `h_freq=20` give better results than the default (`l_freq=8`, `h_freq=16`).
- autoreject (local) utilizes machine learning techniques to clean the signals and can be resource-consuming.

05_epochs.py
05-2_viz_artifact_epochs.py
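A minimal sketch of the epoching stage (event IDs, time window, and file names are placeholders based on the sample dataset; requires the `autoreject` package):

```python
import mne
from autoreject import AutoReject

raw = mne.io.read_raw_fif("sample_filt_raw.fif", preload=True)
ica = mne.preprocessing.read_ica("sample-ica.fif")

events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"auditory/left": 1, "visual/left": 3}

# Detect artifact-related ICs: CTPS with automatic threshold for ECG
# (the 10-20 Hz band-pass follows the note above; the default is 8-16 Hz).
ecg_inds, _ = ica.find_bads_ecg(raw, method="ctps", threshold="auto",
                                l_freq=10, h_freq=20)
eog_inds, _ = ica.find_bads_eog(raw)
ica.exclude = ecg_inds[:3] + eog_inds[:3]  # at most 3 ICs per artifact type

# Baseline-corrected epochs; spans annotated as BAD_* are dropped automatically.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                    baseline=(None, 0), preload=True)
ica.apply(epochs)

# autoreject (local): drop bad epochs and interpolate bad channels.
ar = AutoReject(random_state=42)
epochs_clean = ar.fit_transform(epochs)
epochs_clean.save("sample-epo.fif", overwrite=True)
```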
The epochs are averaged within each condition to create evoked responses for each subject.
06_evoked.py
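A minimal sketch (conditions and file names are placeholders):

```python
import mne

# Average the cleaned epochs within each condition for one subject.
epochs = mne.read_epochs("sample-epo.fif")
evokeds = [epochs[cond].average() for cond in ("auditory/left", "visual/left")]
mne.write_evokeds("sample-ave.fif", evokeds)
```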
Because inverse solvers usually assume a Gaussian noise distribution and M/EEG signals are highly spatially correlated, the data require a whitening step. To whiten the data, one must provide an estimate of the spatial noise covariance matrix. Empty-room recordings (for MEG) or the pre-stimulus period can be used to compute it.
Here, the pre-stimulus period (baseline) is used.
07_covariance.py
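A minimal sketch (file names and the estimator list are assumptions):

```python
import mne

# Estimate the noise covariance from the pre-stimulus baseline (tmax=0).
epochs = mne.read_epochs("sample-epo.fif")
noise_cov = mne.compute_covariance(epochs, tmax=0.0,
                                   method=["shrunk", "empirical"], rank=None)
mne.write_cov("sample-cov.fif", noise_cov)
```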
The evoked responses are averaged across subjects to obtain group averages at the sensor level.
10_group_average_sensor.py
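A minimal sketch (subject names and file names are placeholders):

```python
import mne

# Grand-average one condition's evoked responses across subjects.
subjects = ["sub01", "sub02", "sub03"]  # placeholder subject list
evokeds = [mne.read_evokeds(f"{subj}-ave.fif", condition="auditory/left")
           for subj in subjects]
grand_ave = mne.grand_average(evokeds)
```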
The recommended way to use the coregistration GUI is through the command line with:
mne coreg
or use `mne.gui.coregistration` to initiate the GUI.
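For example (paths are placeholders):

```python
import mne

# Open the coregistration GUI from Python.
mne.gui.coregistration(subject="sample", subjects_dir="subjects",
                       inst="MEG/sample/sample_audvis_raw.fif")
```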
- `11_setup_head_for_coreg.py` and `mne make_scalp_surfaces`

Compute the forward solution for the MEG data.
- Use `sample_audvis_raw-trans.fif` from the sample dataset for practice, or create it manually.

12_forward_solution.py
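A minimal sketch (file names are placeholders) of computing the forward solution from the measurement info, the coregistration trans file, the source space, and the BEM solution:

```python
import mne

info = mne.io.read_info("sample_audvis_raw.fif")
src = mne.read_source_spaces("sample-ico4-src.fif")
bem = mne.read_bem_solution("sample-bem-sol.fif")

fwd = mne.make_forward_solution(info, trans="sample_audvis_raw-trans.fif",
                                src=src, bem=bem, meg=True, eeg=False)
mne.write_forward_solution("sample-fwd.fif", fwd, overwrite=True)
```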
Compute and apply a dSPM inverse solution for each evoked data set.
13_inverse_solution.py
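A minimal sketch (file names and regularization values are common defaults, not necessarily this tutorial's exact settings):

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("sample-ave.fif", condition="auditory/left")
noise_cov = mne.read_cov("sample-cov.fif")
fwd = mne.read_forward_solution("sample-fwd.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 3.0 ** 2,  # lambda2 = 1 / SNR**2
                    method="dSPM")
stc.save("sample_auditory_left-dSPM")
```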
To analyze data at the group level, data from all subjects need to be transformed to a common source space. This procedure is called morphing by MNE.
Here, the data are morphed to the standard FreeSurfer average subject `fsaverage`.
14_morph_source.py
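A minimal sketch (subject names and paths are placeholders):

```python
import mne

# Morph one subject's source estimate to the fsaverage source space.
stc = mne.read_source_estimate("sample_auditory_left-dSPM")
morph = mne.compute_source_morph(stc, subject_from="sample",
                                 subject_to="fsaverage", subjects_dir="subjects")
stc_fsaverage = morph.apply(stc)
stc_fsaverage.save("sample_auditory_left-dSPM-fsaverage")
```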
After morphing, the source estimates are averaged across subjects to obtain group responses at the source level.
15_group_average_source.py
15-2_viz_group_average.py
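A minimal sketch (subject list and file names are placeholders) of averaging the morphed source estimates:

```python
import numpy as np
import mne

subjects = ["sub01", "sub02", "sub03"]  # placeholder subject list
stcs = [mne.read_source_estimate(f"{subj}_auditory_left-dSPM-fsaverage")
        for subj in subjects]

# Average the data arrays and wrap them in a new SourceEstimate on fsaverage.
data = np.mean([stc.data for stc in stcs], axis=0)
stc_ave = mne.SourceEstimate(data, vertices=stcs[0].vertices,
                             tmin=stcs[0].tmin, tstep=stcs[0].tstep,
                             subject="fsaverage")
```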
Test if the evoked responses are significantly different between conditions across subjects. The multiple comparisons problem is addressed with a cluster-level permutation test across space and time. In this demo, the evoked responses elicited by left auditory stimuli and by left visual stimuli are compared.
The cluster results are saved to HDF5 (*.h5) for future use (e.g., visualization) and are further visualized via `mne.stats.summarize_clusters_stc`.
16_statistics.py
16-2_viz_statistics.py
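A minimal sketch (paths, array shapes, and thresholds are placeholders; recent MNE versions call the argument `adjacency`, older versions use `connectivity`) of the spatio-temporal cluster permutation test:

```python
import numpy as np
import mne
from mne.stats import spatio_temporal_cluster_1samp_test, summarize_clusters_stc

# Spatial adjacency of the fsaverage source space.
src = mne.read_source_spaces("subjects/fsaverage/bem/fsaverage-ico-5-src.fif")
adjacency = mne.spatial_src_adjacency(src)

# X: per-subject difference (auditory/left minus visual/left) in fsaverage space,
# shape (n_subjects, n_times, n_vertices). Random data here as a placeholder.
n_subjects, n_times, n_vertices = 10, 50, 20484
X = np.random.randn(n_subjects, n_times, n_vertices)

t_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
    X, adjacency=adjacency, n_permutations=1024, n_jobs=1)

# Summarize significant clusters as a SourceEstimate for visualization.
stc_clusters = summarize_clusters_stc(
    (t_obs, clusters, cluster_pv, H0), p_thresh=0.05, tstep=0.001,
    vertices=[s["vertno"] for s in src], subject="fsaverage")
```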
Congratulations and good luck!