CyTran: A cycle-consistent transformer with multi-level consistency for non-contrast to contrast CT translation
2023 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 538, article id 126211. Article in journal (Refereed), Published
Abstract [en]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans and the other way around. Solving this task has two important applications: (i) to automatically generate contrast CT scans for patients for whom injecting contrast substance is not an option, and (ii) to enhance the alignment between contrast and non-contrast CT by reducing the differences induced by the contrast substance before registration. Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran. Our neural model can be trained on unpaired images, due to the integration of a multi-level cycle-consistency loss. Aside from the standard cycle-consistency loss applied at the image level, we propose to apply additional cycle-consistency losses between intermediate feature representations, which forces the model to be cycle-consistent at multiple representation levels, leading to superior results. To deal with high-resolution images, we design a hybrid architecture based on convolutional and multi-head attention layers. In addition, we introduce a novel data set, Coltea-Lung-CT-100W, containing 100 3D triphasic lung CT scans (with a total of 37,290 images) collected from 100 female patients (there is one examination per patient). Each scan contains three phases (non-contrast, early portal venous, and late arterial), allowing us to perform experiments comparing our novel approach with state-of-the-art methods for image style transfer. Our empirical results show that CyTran outperforms all competing methods. Moreover, we show that CyTran can be employed as a preliminary step to improve a state-of-the-art medical image alignment method. We release our novel model and data set as open source at: https://github.com/ristea/cycletransformer. Our qualitative and subjective human evaluations reveal that CyTran is the only approach that does not introduce visual artifacts during the translation process.
We believe this is a key advantage in our application domain, where medical images need to precisely represent the scanned body parts. (c) 2023 Elsevier B.V. All rights reserved.
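The multi-level cycle-consistency idea described in the abstract can be illustrated with a short, framework-agnostic sketch: the reconstruction x → G_AB(x) → G_BA(G_AB(x)) is penalized both at the image level and at each intermediate feature level. This is only an illustrative NumPy sketch under assumed interfaces; the names `g_ab`, `g_ba`, `features`, and `level_weights` are hypothetical and not taken from the authors' released code.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two arrays.
    return float(np.mean(np.abs(a - b)))

def multilevel_cycle_loss(x, g_ab, g_ba, features, level_weights):
    """Sketch of a multi-level cycle-consistency loss.

    g_ab, g_ba: hypothetical generator callables mapping between the
        two domains (e.g., non-contrast <-> contrast CT).
    features: hypothetical callable returning a list of intermediate
        feature maps for an input.
    level_weights: one weight per feature level.
    """
    x_cycle = g_ba(g_ab(x))      # translate, then translate back
    loss = l1(x, x_cycle)        # standard image-level cycle loss
    # Additional cycle-consistency terms between intermediate features.
    for w, f_x, f_c in zip(level_weights, features(x), features(x_cycle)):
        loss += w * l1(f_x, f_c)
    return loss
```

With perfectly inverse generators the loss is zero at every level; during training, minimizing the feature-level terms pushes the reconstruction to match the input not only pixel-wise but also in intermediate representations.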
Place, publisher, year, edition, pages
Elsevier, 2023. Vol. 538, article id 126211
Keywords [en]
Transformers; Generative adversarial transformers; Deep learning; Cycle-consistency; Image translation; Image registration; Computed tomography; Triphasic lung CT
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-193955
DOI: 10.1016/j.neucom.2023.03.072
ISI: 000981177000001
OAI: oai:DiVA.org:liu-193955
DiVA, id: diva2:1758145
Note
Funding Agencies|project ELO-Hyp; Alexander von Humboldt Foundation; Stiftung Mercator; [24/2020]
2023-05-22