Depending on the application, multiple imaging modalities are available for diagnosis in the clinical routine. As a result, repositories of patient scans often contain mixed modalities. This poses a challenge for image analysis methods, which require special modifications to handle multiple modalities, and is especially critical for deep learning-based methods, which require large amounts of data. A typical example in this context is follow-up imaging in acute ischemic stroke patients, an important step in detecting potential complications arising from the evolution of a lesion. In this study, we address the mixed-modality issue by translating unpaired images between two of the most relevant follow-up stroke modalities, namely non-contrast computed tomography (NCCT) and fluid-attenuated inversion recovery (FLAIR) MRI. For the translation, we employ the widely used cycle-consistent generative adversarial network (CycleGAN). To preserve stroke lesions after translation, we implemented and tested two regularizing modifications: (1) manual segmentations of the stroke lesions serve as an attention channel when training the discriminator networks, and (2) an additional gradient-consistency loss preserves the structural morphology. For the evaluation of the proposed method, 238 NCCT and 244 FLAIR scans from acute ischemic stroke patients were available. Our method shows a considerable improvement over the original CycleGAN: it is capable of translating images between NCCT and FLAIR while preserving the stroke lesion's shape, location, and modality-specific intensity (average Kullback-Leibler divergence improved from 2,365 to 396). The proposed method has the potential to increase the amount of data available for existing and future applications while conserving original patient features and ground-truth labels.
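To make the two key quantities concrete, the sketch below illustrates, under our own simplifying assumptions, (a) a gradient-consistency term that penalizes differences between the spatial gradients of a source image and its translation, and (b) a Kullback-Leibler divergence between intensity histograms of the kind used for evaluation. The function names, the use of finite-difference gradients, the histogram binning, and the epsilon smoothing are illustrative choices, not the exact formulation used in the study.

```python
import numpy as np

def gradient_consistency_loss(source: np.ndarray, translated: np.ndarray) -> float:
    """Mean squared difference between finite-difference image gradients.

    A low value means the translated image keeps the structural morphology
    (edges, lesion boundaries) of the source, even if intensities change.
    """
    gy_s, gx_s = np.gradient(source.astype(np.float64))
    gy_t, gx_t = np.gradient(translated.astype(np.float64))
    return float(np.mean((gy_s - gy_t) ** 2 + (gx_s - gx_t) ** 2))

def kl_divergence(p_img: np.ndarray, q_img: np.ndarray,
                  bins: int = 64, eps: float = 1e-8) -> float:
    """KL divergence between the intensity histograms of two images.

    Both histograms share the same bin edges; eps avoids log(0).
    """
    lo = float(min(p_img.min(), q_img.min()))
    hi = float(max(p_img.max(), q_img.max()))
    p, _ = np.histogram(p_img, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_img, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```

A constant intensity shift leaves the gradient-consistency term near zero (structure is preserved) while still producing a positive KL divergence, which is why the two measures capture complementary aspects of the translation.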