Motion Correction in MRI Using Deep Learning

The following abstract was presented as part of London Health Research Day 2018.

Research Areas: Medical biophysics, engineering and imaging; Detection, screening and diagnosis of health and disease
First Author: Patricia Johnson
Supervisor(s): M. Drangova

Introduction:
Subject motion in MRI remains an unsolved problem; motion during image acquisition can cause artefacts that severely degrade image quality. In the clinic, an image degraded by motion artefacts is often reacquired; this provides a source from which a large number of motion-degraded images, along with their respective re-scans, could be collected. These image pairs could be used to train a neural network to learn the mapping between an image with motion artefacts and a high-quality, artefact-free image. Inspired by previous work demonstrating MR image reconstruction with machine learning [1,2], our objective is to train a neural network to perform motion-corrected image reconstruction on image data with simulated motion artefacts. We simulate motion in previously acquired brain images and use the resulting image pairs (corrupted + original) to train a deep neural network (DNN).

We hypothesized that a deep neural network could be trained to perform motion-corrected MR image reconstruction given the motion-corrupted data in Fourier space (k-space).

Materials and Methods:
Data were obtained from an open-source neuro MRI data set [3] comprising T2*-weighted magnitude and phase images for 53 patients, each with 128 non-overlapping image slices; the data set thereby provides thousands of unique 2D complex-valued images. Each 2D image from this data set was Fourier transformed to simulate the acquired k-space data. To simulate rigid motion, k-space lines were rotated and phase shifted, reproducing the k-space inconsistencies that would occur if the subject were moving their head. The motion profiles were parameterized by the time, magnitude, and direction of motion and were randomly generated, with constraints to keep the motion within the realm of realistic head motion. A unique 3D motion profile was applied to each image. The DNN was developed and trained using the TensorFlow library [4]. The network consists of a densely connected layer followed by 4 convolutional layers. The input to the network has 5 channels; each channel contains the data from one k-space slice. The network training set consisted of 2,048 image pairs; 64 pairs were reserved for validation and testing. The network was trained for 4 hours using the SHARCNET computing network.
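The rigid-motion simulation described above can be illustrated for the simplest case, an in-plane translation, which by the Fourier shift theorem multiplies the affected k-space lines by a linear phase ramp. This is a minimal NumPy sketch; the function name and arguments are illustrative, and the abstract's full simulation also includes rotations and randomized 3D motion profiles.

```python
import numpy as np

def simulate_translation_artefact(image, corrupted_rows, dy_pixels):
    """Corrupt selected k-space lines with the phase ramp produced by an
    in-plane translation of dy_pixels along the phase-encode direction,
    as if the subject moved during acquisition of those lines.
    (Illustrative sketch only; rotations are handled separately.)"""
    ny = image.shape[0]
    # Simulate the acquired k-space data by Fourier transforming the image
    kspace = np.fft.fftshift(np.fft.fft2(image)).copy()
    # Shift theorem: a translation by dy multiplies k-space by exp(-2*pi*i*ky*dy)
    ky = np.fft.fftshift(np.fft.fftfreq(ny))  # cycles/pixel, matching row order
    ramp = np.exp(-2j * np.pi * ky * dy_pixels)
    kspace[corrupted_rows, :] *= ramp[corrupted_rows, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```

If every line is corrupted the result is simply a shifted image; corrupting only a subset of lines creates the inconsistent k-space, and hence the ghosting artefacts, that the network is trained to remove.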

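The architecture described above (a densely connected layer followed by 4 convolutional layers, with a 5-channel k-space input) might be sketched in TensorFlow/Keras as follows. The layer widths, kernel sizes, activations, and single-channel image-domain output are assumptions for illustration, not details taken from the abstract.

```python
import tensorflow as tf

def build_model(n=64, channels=5):
    """Sketch of a dense-then-convolutional network: the dense layer maps
    the flattened multi-channel k-space input toward the image domain,
    and 4 convolutional layers refine the result.
    All sizes and activations are assumed, not from the abstract."""
    inputs = tf.keras.Input(shape=(n, n, channels))
    x = tf.keras.layers.Flatten()(inputs)
    # Densely connected layer acting on the k-space data
    x = tf.keras.layers.Dense(n * n, activation="tanh")(x)
    x = tf.keras.layers.Reshape((n, n, 1))(x)
    # 4 convolutional layers, the last producing the output image
    for _ in range(3):
        x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(1, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)
```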
Results:
The images predicted by the DNN, from motion-corrupted k-space, have improved image quality compared to the motion-corrupted images. The mean absolute error (MAE) between the motion-corrupted and ground-truth images was 32% of the image mean value, while the MAE between the DNN-predicted and ground-truth images was only 11%.
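The error metric reported above, MAE expressed as a percentage of the image mean, can be computed as follows (the function name is illustrative):

```python
import numpy as np

def mae_percent(predicted, truth):
    """Mean absolute error as a percentage of the ground-truth image's
    mean intensity, the normalization used in the reported results."""
    return 100.0 * np.mean(np.abs(predicted - truth)) / np.mean(truth)
```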

Discussion and Conclusions: 
Motion-corrected image reconstruction was successfully achieved on brain images with simulated motion artefacts. This work represents the first time machine learning has been used to perform motion correction of MR images. Improving the consistency of the network's performance is the focus of ongoing work.

References:
[1] Zhu B, et al. Image reconstruction by domain transform manifold learning. 2017.
[2] Hammernik K, et al. Learning a variational network for reconstruction of accelerated MRI data. 2017.
[3] Forstmann BU, et al. Multi-modal ultra-high resolution structural 7-Tesla MRI data repository. 2014.
[4] Abadi M, et al. TensorFlow. 2015.