Publications (10 of 77)
Akbar, M. U., Wang, W. & Eklund, A. (2025). Beware of diffusion models for synthesizing medical images - A comparison with GANs in terms of memorizing brain MRI and chest x-ray images. Machine Learning: Science and Technology
2025 (English). In: Machine Learning: Science and Technology, E-ISSN 2632-2153. Article in journal (Refereed). Published
Abstract [en]

Diffusion models were initially developed for text-to-image generation and are now being used to generate high-quality synthetic images. Like GANs before them, diffusion models have shown impressive results on various evaluation metrics. However, commonly used metrics such as FID and IS are not suitable for determining whether diffusion models simply reproduce the training images. Here we train StyleGAN and a diffusion model, using BRATS20, BRATS21 and a chest x-ray pneumonia dataset, to synthesize brain MRI and chest x-ray images, and measure the correlation between the synthetic images and all training images. Our results show that diffusion models are more likely to memorize the training images than StyleGAN, especially for small datasets and when using 2D slices from 3D volumes. Researchers should be careful when using diffusion models (and to some extent GANs) for medical imaging if the final goal is to share the synthetic images.
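
The memorization check described in this abstract boils down to comparing each synthetic image against every training image. Below is a minimal sketch of that idea, assuming images stored as NumPy arrays and using maximum Pearson correlation as the similarity measure; the function name and the 0.95 flagging threshold are illustrative, not the authors' exact procedure.

```python
import numpy as np

def max_train_correlation(synthetic, training):
    """For each synthetic image, return the highest Pearson correlation
    against any training image (arrays: n_images x height x width).
    A value close to 1 suggests the image may be a memorized copy."""
    def zscore(x):
        # Flatten each image to a vector and standardize it
        x = x.reshape(len(x), -1).astype(np.float64)
        x -= x.mean(axis=1, keepdims=True)
        x /= x.std(axis=1, keepdims=True) + 1e-12
        return x

    s, t = zscore(synthetic), zscore(training)
    # (n_synth x n_train) matrix of Pearson correlations
    corr = s @ t.T / s.shape[1]
    return corr.max(axis=1)

# Toy usage: flag synthetic images whose best match exceeds 0.95
rng = np.random.default_rng(0)
synth, train = rng.random((5, 64, 64)), rng.random((100, 64, 64))
flags = max_train_correlation(synth, train) > 0.95
```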

Keywords
Synthetic images, GANs, diffusion models, memorization
National Category
Radiology, Nuclear Medicine and Medical Imaging; Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-210499 (URN); 10.1088/2632-2153/ad9a3a (DOI); 001408876900001 (); 2-s2.0-85217039477 (Scopus ID)
Funder
Åke Wiberg Foundation, M22-0088; Vinnova, 2021-01954
Note

Funding agencies: ITEA/VINNOVA project ASSIST [2021-01954]; LiU Cancer and the Åke Wiberg Foundation

Available from: 2024-12-16 Created: 2024-12-16 Last updated: 2025-03-03
Batool, H., Mukhtar, A., Khawaja, S. G., Alghamdi, N. S., Khan, A. M., Qayyum, A., . . . Eklund, A. (2025). Knowledge Distillation and Transformer Based Framework for Automatic Spine CT Report Generation. IEEE Access, 1-1
2025 (English). In: IEEE Access, E-ISSN 2169-3536, p. 1-1. Article in journal (Refereed). Published
Abstract [en]

Spine Computed Tomography (SCT) is essential for identifying fractures, tumors and degenerative spine diseases, assisting medical practitioners in formulating an accurate diagnosis and treatment. One of the core elements of SCT is reporting. The effectiveness of spine reporting is often limited by challenges such as inadequate infrastructure and a lack of experts. Automated SCT analysis has the potential to revolutionize spinal healthcare and improve patient outcomes. To this end, we propose a framework for spine report generation that uses a transformer architecture, trained on textual reports alongside visual features extracted from the sagittal slices of the SCT volume. A foundation model is used to perform Knowledge Distillation (KD) alongside an encoder to ensure optimal performance. The proposed framework is evaluated on a public dataset (VerSe20). The incorporation of KD improved both the BERT and BLEU-1 scores on the dataset, from 0.7486 to 0.7522 and from 0.6361 to 0.7291, respectively. Additionally, the proposed framework is evaluated using four different types of reports: original radiologist reports, reports without spine-level annotations, rephrased reports, and reports generated by ChatGPT-4o (ChatGPT). The evaluation without spine-level annotations demonstrates superior performance across most metrics, achieving the highest BLEU-1 and ROUGE-L scores, 0.9293 and 0.9297 respectively. In contrast, the other report types achieved moderate scores across all metrics. Finally, experienced radiologists assessed the spine reports and gave high ratings to the original reports across all three criteria (completeness, conciseness and correctness), compared to the generated reports. This study’s findings suggest that omitting spine-level annotations can improve the quality of text generation.
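
The abstract does not specify the exact distillation loss, so the sketch below is an assumption: the classic soft-target knowledge distillation formulation (Hinton et al., 2015) in PyTorch, with illustrative temperature and weighting hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Soft-target knowledge distillation: blend cross-entropy on the
    ground-truth labels with KL divergence between temperature-softened
    teacher and student output distributions."""
    hard = F.cross_entropy(student_logits, targets)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients, per the original paper
    return alpha * hard + (1 - alpha) * soft

# Toy usage: 8 samples, 5-class output
s, t = torch.randn(8, 5), torch.randn(8, 5)
loss = distillation_loss(s, t, torch.randint(0, 5, (8,)))
```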

Keywords
Spine Report Generation, Knowledge Distillation, Foundation Model, ChatGPT
National Category
Radiology and Medical Imaging
Identifiers
urn:nbn:se:liu:diva-211967 (URN); 10.1109/access.2025.3546131 (DOI); 001446493800034 (); 2-s2.0-105001061916 (Scopus ID)
Note

Funding agencies: Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia [PNURSP2025R40]

Available from: 2025-03-01 Created: 2025-03-01 Last updated: 2025-04-08
Ordinola, A., Abramian, D., Herberthson, M., Eklund, A. & Özarslan, E. (2025). Super-resolution mapping of anisotropic tissue structure with diffusion MRI and deep learning. Scientific Reports, 15(1), Article ID 6580.
2025 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 15, no 1, article id 6580. Article in journal (Refereed). Published
Abstract [en]

Diffusion magnetic resonance imaging (diffusion MRI) is widely employed to probe the diffusive motion of water molecules within tissue. Numerous diseases and processes affecting the central nervous system can be detected and monitored via diffusion MRI thanks to its sensitivity to microstructural alterations in tissue. This has prompted interest in quantitative mapping of microstructural parameters, such as the fiber orientation distribution function (fODF), which is instrumental for noninvasively mapping the underlying axonal fiber tracts in white matter through a procedure known as tractography. However, such applications demand repeated acquisitions of MRI volumes with varied experimental parameters, leading to long acquisition times and/or limited spatial resolution. In this work, we present a deep-learning-based approach for increasing the spatial resolution of diffusion MRI data in the form of fODFs obtained through constrained spherical deconvolution. The proposed approach is evaluated on high-quality data from the Human Connectome Project, and is shown to generate upsampled results with a greater correspondence to ground-truth high-resolution data than can be achieved with ordinary spline interpolation methods. Furthermore, we employ a measure based on the earth mover’s distance to assess the accuracy of the upsampled fODFs. At low signal-to-noise ratios, our super-resolution method provides more accurate estimates of the fODF compared to data collected with an 8 times smaller voxel volume.
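
The deep network itself is beyond the scope of an abstract, but the spline-interpolation baseline it is compared against is easy to sketch. Below is a minimal version, assuming fODFs are stored as a volume of spherical-harmonic coefficients from constrained spherical deconvolution; the function name and shapes are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_fodf_spline(sh_coeffs, factor=2, order=3):
    """Spline-interpolation baseline: upsample a volume of spherical-
    harmonic fODF coefficients, shape (X, Y, Z, n_coeffs), by
    interpolating each coefficient volume independently."""
    return np.stack(
        [zoom(sh_coeffs[..., k], factor, order=order)
         for k in range(sh_coeffs.shape[-1])],
        axis=-1,
    )

# Toy example: 8x8x8 volume with 45 SH coefficients -> 16x16x16
hi_res = upsample_fodf_spline(np.random.rand(8, 8, 8, 45))
```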

Keywords
Diffusion MRI, super resolution, deep learning, brain, white matter
National Category
Radiology and Medical Imaging; Medical Imaging
Identifiers
urn:nbn:se:liu:diva-211968 (URN); 10.1038/s41598-025-90972-7 (DOI); 001433275500049 (); 39994322 (PubMedID); 2-s2.0-85218687239 (Scopus ID)
Funder
Linköpings universitet; Vinnova, 2021-01954
Note

Funding agencies: Linköping University [2021-01954]; ITEA/VINNOVA project ASSIST (Automation)

Available from: 2025-03-01 Created: 2025-03-01 Last updated: 2025-03-20
Trenti, C., Boito, D., Hammaréus, F., Eklund, A., Swahn, E., Jonasson, L., . . . Dyverfeldt, P. (2024). Abnormal Patterns of Wall Shear Stress in Aortic Dilation Revealed by Permutation Tests. Journal of Cardiovascular Magnetic Resonance, 26, Article ID 100612.
2024 (English). In: Journal of Cardiovascular Magnetic Resonance, ISSN 1097-6647, E-ISSN 1532-429X, Vol. 26, article id 100612. Article in journal, Meeting abstract (Refereed). Published
Abstract [en]

Four-dimensional flow (4D Flow) CMR affords comprehensive 3D maps of advanced hemodynamic parameters such as wall shear stress (WSS). However, the evaluation of these data is often restricted to spatial averages in large regions of interest, such as the ascending aorta. Recent studies have explored ways of analyzing local intercohort WSS differences by using basic statistical tests with a p-value threshold of 0.05 for determining significance, thus not accounting for the large number of comparisons made when exploring differences at multiple locations across the ascending aorta surface.

Permutation tests, frequently used in brain MRI, permit statistical analysis on a local level while controlling for the family-wise error rate by constructing the null hypothesis distribution based on the maximum statistic over the voxels at each permutation. We sought to use permutation tests to identify local regions of abnormal WSS in the ascending aorta in patients with aortic dilation.
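
A max-statistic permutation test of the kind described here can be sketched in a few lines. The version below uses a simple difference of group means as the test statistic (the authors' statistic may differ) and returns family-wise-error-corrected p-values, one per surface point.

```python
import numpy as np

def maxstat_permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test with family-wise error control via
    the maximum statistic. group_a: (n_a, n_points), group_b:
    (n_b, n_points) arrays of WSS values sampled at the same points.
    Returns FWE-corrected p-values, one per point."""
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    n_a = len(group_a)

    def stat(x):
        # Simple mean-difference statistic; the authors' choice may differ
        return x[:n_a].mean(axis=0) - x[n_a:].mean(axis=0)

    observed = np.abs(stat(data))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = data[rng.permutation(len(data))]
        # Keep only the maximum |statistic| across all points: this is
        # what controls the family-wise error rate over many comparisons
        max_null[i] = np.abs(stat(shuffled)).max()

    # Corrected p-value: fraction of permutation maxima >= observed value
    return (max_null[None, :] >= observed[:, None]).mean(axis=1)

# Toy usage: two cohorts, 500 surface points
a, b = np.random.randn(20, 500), np.random.randn(22, 500) + 0.5
p_corrected = maxstat_permutation_test(a, b, n_perm=1000)
```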

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Aortic Dilation; Wall Shear Stress; magnetic resonance imaging
National Category
Radiology, Nuclear Medicine and Medical Imaging; Cardiology and Cardiovascular Disease; Medical Imaging
Identifiers
urn:nbn:se:liu:diva-207855 (URN); 10.1016/j.jocmr.2024.100612 (DOI)
Available from: 2024-09-26 Created: 2024-09-26 Last updated: 2025-04-22. Bibliographically approved
Nguyen, H. H., Le, D. T., Shore-Lorenti, C., Chen, C., Schilcher, J., Eklund, A., . . . Ebeling, P. R. (2024). AFFnet - a deep convolutional neural network for the detection of atypical femur fractures from anterior-posterior radiographs. Bone, Article ID 117215.
2024 (English). In: Bone, ISSN 8756-3282, E-ISSN 1873-2763, article id 117215. Article in journal (Refereed). Published
Abstract [en]

Despite well-defined criteria for the radiographic diagnosis of atypical femur fractures (AFFs), missed and delayed diagnosis is common. AFF diagnostic software could provide timely AFF detection to prevent progression of incomplete AFFs or development of contralateral AFFs. In this study, we investigated the ability of an artificial intelligence (AI)-based application, using deep learning models (DLMs), particularly convolutional neural networks (CNNs), to detect AFFs from femoral radiographs. A labelled Australian dataset of pre-operative complete AFF (cAFF), incomplete AFF (iAFF), typical femoral shaft fracture (TFF), and non-fractured femoral (NFF) X-ray images in anterior-posterior view was used for training (N = 213, 49, 394, 1359, respectively). An AFFnet model was developed using a pretrained (ImageNet dataset) ResNet-50 backbone and a novel Box Attention Guide (BAG) module that guides the model's scanning patterns to enhance its learning. All images were used to train and internally test the model using a 5-fold cross-validation approach, and the model was further validated on an external dataset. External validation was conducted on a Swedish dataset comprising 733 TFF and 290 AFF images. Precision, sensitivity, specificity, F1-score and AUC were measured and compared between AFFnet and a global approach with ResNet-50. Excellent diagnostic performance was recorded for both models (all AUC > 0.97); however, AFFnet made fewer prediction errors and showed improved sensitivity, F1-score and precision compared to ResNet-50 in both internal and external testing. Sensitivity in the detection of iAFF was higher for AFFnet than for ResNet-50 (82% vs 56%). In conclusion, AFFnet achieved excellent diagnostic performance on internal and external validation, superior to a pre-existing model. Accurate AI-based AFF diagnostic software has the potential to improve AFF diagnosis, reduce radiologist error, and allow urgent intervention, thus improving patient outcomes.
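
The novel Box Attention Guide module cannot be reconstructed from the abstract, but the ResNet-50 comparison model is a standard transfer-learning setup. A minimal sketch, assuming a recent torchvision (0.13 or later) and the four classes listed above:

```python
import torch.nn as nn
from torchvision import models

def build_resnet50_classifier(n_classes=4):
    """Global ResNet-50 approach that AFFnet is compared against:
    ImageNet-pretrained backbone with its final layer replaced to
    predict the four classes (cAFF, iAFF, TFF, NFF)."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model
```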

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Atypical femur fracture, Screening, Osteoporosis, Radiology, Antiresorptive
National Category
Orthopaedics; Radiology, Nuclear Medicine and Medical Imaging; Medical Imaging
Identifiers
urn:nbn:se:liu:diva-206057 (URN); 10.1016/j.bone.2024.117215 (DOI); 001284934900001 (); 39074569 (PubMedID)
Funder
Knut and Alice Wallenberg Foundation
Note

Funding agencies: National Health & Medical Research Council [1143364]

Available from: 2024-07-30 Created: 2024-07-30 Last updated: 2025-03-04
Akbar, M. U., Larsson, M., Blystad, I. & Eklund, A. (2024). Brain tumor segmentation using synthetic MR images - A comparison of GANs and diffusion models. Scientific Data, 11(1), Article ID 259.
2024 (English). In: Scientific Data, E-ISSN 2052-4463, Vol. 11, no 1, article id 259. Article in journal (Refereed). Published
Abstract [en]

Large annotated datasets are required for training deep learning models, but in medical imaging data sharing is often complicated by ethics, anonymization and data protection legislation. Generative AI models, such as generative adversarial networks (GANs) and diffusion models, can today produce very realistic synthetic images and can potentially facilitate data sharing. However, in order to share synthetic medical images, it must first be demonstrated that they can be used for training different networks with acceptable performance. Here, we therefore comprehensively evaluate four GANs (progressive GAN, StyleGAN 1–3) and a diffusion model for the task of brain tumor segmentation (using two segmentation networks, U-Net and a Swin transformer). Our results show that segmentation networks trained on synthetic images reach Dice scores that are 80%–90% of the Dice scores obtained when training with real images, but that memorization of the training images can be a problem for diffusion models if the original dataset is too small. Our conclusion is that sharing synthetic medical images is a viable alternative to sharing real images, but further work is required. The trained generative models and the generated synthetic images are shared on the AIDA data hub.
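
The Dice score used to compare training on synthetic versus real images is a standard overlap metric; a minimal NumPy sketch for binary masks (function name illustrative):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A network trained on synthetic images would be judged by how close its
# Dice scores come to those of a network trained on real images.
```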

Place, publisher, year, edition, pages
Nature Publishing Group, 2024
Keywords
Deep learning, brain tumor, magnetic resonance imaging, synthetic images, generative adversarial networks, diffusion models
National Category
Radiology, Nuclear Medicine and Medical Imaging; Medical Imaging
Identifiers
urn:nbn:se:liu:diva-201435 (URN); 10.1038/s41597-024-03073-x (DOI); 001177063000006 (); 38424097 (PubMedID); 2-s2.0-85186294143 (Scopus ID)
Funder
Vinnova, 2021-01954; Vinnova, 2021-01420; Åke Wiberg Foundation, M22-0088
Note

Funding agencies: ITEA/VINNOVA project ASSIST [2021-01420]; LiU Cancer; VINNOVA AIDA [M22-0088]; Åke Wiberg Foundation; Wallenberg Center for Molecular Medicine as an associated clinical fellow; [2021-01954]

Available from: 2024-03-09 Created: 2024-03-09 Last updated: 2025-02-09. Bibliographically approved
Spyretos, C., Tampu, I. E., Khalili, N., Pardo Ladino, J. M., Nyman, P., Blystad, I., . . . Haj-Hosseini, N. (2024). Early fusion of H&E and IHC histology images for pediatric brain tumor classification. In: Francesco Ciompi, Nadieh Khalili, Linda Studer, Milda Poceviciute, Amjad Khan, Mitko Veta, Yiping Jiao, Neda Haj-Hosseini, Hao Chen, Shan Raza, Fayyaz Minhas, Inti Zlobec, Nikolay Burlutskiy, Veronica Vilaplana, Biagio Brattoli, Henning Muller, Manfredo Atzori (Eds.), Proceedings of Machine Learning Research. Paper presented at the 27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Computational Pathology with Multimodal Data (COMPAYL) Workshop (pp. 192-202). Marrakesh: PMLR, 254
2024 (English). In: Proceedings of Machine Learning Research / [ed] Francesco Ciompi, Nadieh Khalili, Linda Studer, Milda Poceviciute, Amjad Khan, Mitko Veta, Yiping Jiao, Neda Haj-Hosseini, Hao Chen, Shan Raza, Fayyaz Minhas, Inti Zlobec, Nikolay Burlutskiy, Veronica Vilaplana, Biagio Brattoli, Henning Muller, Manfredo Atzori. Marrakesh: PMLR, 2024, Vol. 254, p. 192-202. Conference paper, Published paper (Refereed)
Abstract [en]

This study explores the application of computational pathology to analyze pediatric brain tumors utilizing hematoxylin and eosin (H&E) and immunohistochemistry (IHC) whole slide images (WSIs). Experiments were conducted on H&E images for predicting tumor diagnosis, and on fusing them with unregistered IHC images to investigate potential improvements. Patch features were extracted using UNI, a vision transformer (ViT) model trained on H&E data, and whole slide classification was achieved using the attention-based multiple instance learning CLAM framework. In astrocytoma tumor classification, early fusion of H&E and IHC significantly improved the differentiation between tumor grades (balanced accuracy: 0.82 ± 0.05 vs. 0.84 ± 0.05). In the multiclass classification, H&E images alone had a balanced accuracy of 0.79 ± 0.03, with no improvement obtained when fused with IHC. The findings highlight the potential of using multi-stain fusion to advance the diagnosis of pediatric brain tumors; however, further fusion methods should be investigated.
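
The abstract does not spell out how early fusion of unregistered stains is implemented; one plausible reading, given the attention-based multiple instance learning setup, is that the patch-feature bags from both stains are pooled into a single bag before MIL attention. The sketch below encodes that assumption and is not necessarily the authors' exact method.

```python
import numpy as np

def early_fusion_bag(he_features, ihc_features):
    """Pool patch-feature bags from both stains into one MIL bag.
    he_features: (n_he_patches, d), ihc_features: (n_ihc_patches, d),
    both extracted with the same encoder (UNI in the paper). The MIL
    attention then weighs patches from both modalities jointly.
    NOTE: this pooling is an assumption, not the authors' verified code."""
    return np.concatenate([he_features, ihc_features], axis=0)
```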

Place, publisher, year, edition, pages
Marrakesh: PMLR, 2024
Series
Proceedings of the MICCAI Workshop on Computational Pathology, ISSN 2640-3498 ; 254
Keywords
pediatric brain tumour, immunohistochemistry (IHC), computational pathology, early fusion, foundation model, cancer
National Category
Medical Imaging; Cancer and Oncology; Pediatrics
Identifiers
urn:nbn:se:liu:diva-208716 (URN)
Conference
27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Computational Pathology with Multimodal Data (COMPAYL) Workshop
Funder
Swedish Childhood Cancer Foundation, MT2021-0011, MT2022-0013; Vinnova, AIDA (2022-2222); Linköpings universitet, Joanna Cocozza 2022; Linköpings universitet, Cancer Strength Area
Available from: 2024-10-21 Created: 2024-10-21 Last updated: 2025-03-03
Gustafsson, C. J., Löfstedt, T., Åkesson, M., Rogowski, V., Akbar, M. U., Hellander, A., . . . Eklund, A. (2024). Federated training of segmentation models for radiation therapy treatment planning. Paper presented at ESTRO. Radiotherapy and Oncology, 194, S4819-S4822
2024 (English). In: Radiotherapy and Oncology, ISSN 0167-8140, E-ISSN 1879-0887, Vol. 194, p. S4819-S4822. Article in journal, Meeting abstract (Refereed). Published
Abstract [en]

Radiotherapy treatment planning takes substantial time, several hours per patient, as it involves manual segmentation of the tumor and organs at risk. Segmentation networks can be trained to perform the segmentations automatically, but typically require large annotated datasets for training. Sharing of sensitive data between hospitals, to create a larger dataset, is often difficult due to ethics and GDPR. Here we therefore demonstrate that federated learning is a solution to this problem, as only the segmentation model is sent between each hospital and a global server. We export and preprocess brain tumor images from the oncology departments in Linköping and Lund, and use federated learning to train a global segmentation model using two different frameworks.
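
The abstract does not name the aggregation rule, but the standard choice for this setup is FedAvg-style weighted averaging of model parameters at the global server. A minimal sketch under that assumption, with models represented as lists of NumPy arrays:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: the global server combines model
    parameters from each hospital, weighted by local dataset size,
    so only models (never patient images) leave the hospitals."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Toy example: two hospitals, models with two parameter arrays each
h1 = [np.ones((2, 2)), np.zeros(3)]
h2 = [np.zeros((2, 2)), np.ones(3)]
global_model = federated_average([h1, h2], client_sizes=[100, 300])
```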

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Radiotherapy, deep learning, federated learning
National Category
Medical Imaging; Cancer and Oncology
Identifiers
urn:nbn:se:liu:diva-207369 (URN); 10.1016/s0167-8140(24)01903-0 (DOI)
Conference
ESTRO
Funder
Vinnova, 2021-01954
Available from: 2024-09-07 Created: 2024-09-07 Last updated: 2025-02-09
Schilcher, J., Nilsson, A., Andlid, O. & Eklund, A. (2024). Fusion of electronic health records and radiographic images for a multimodal deep learning prediction model of atypical femur fractures. Computers in Biology and Medicine, 168, Article ID 107704.
2024 (English). In: Computers in Biology and Medicine, ISSN 0010-4825, E-ISSN 1879-0534, Vol. 168, article id 107704. Article in journal (Refereed). Published
Abstract [en]

Atypical femur fractures (AFF) represent a very rare type of fracture that can be difficult to discriminate radiologically from normal femur fractures (NFF). AFFs are associated with drugs that are administered to prevent osteoporosis-related fragility fractures, which are highly prevalent in the elderly population. Given that these fractures are rare and the radiologic changes are subtle, currently only 7% of AFFs are correctly identified, which hinders adequate treatment for most patients with AFF. Deep learning models could be trained to automatically classify a fracture as AFF or NFF, thereby assisting radiologists in detecting these rare fractures. Historically, for this classification task, only imaging data have been used, with convolutional neural networks (CNNs) or vision transformers applied to radiographs. However, to mimic situations in which all available data are used to arrive at a diagnosis, we adopted a deep learning approach based on the integration of image data and tabular data (from electronic health records) for 159 patients with AFF and 914 patients with NFF. We hypothesized that the combined data, compiled from all the radiology departments of 72 hospitals in Sweden and the Swedish National Patient Register, would improve classification accuracy compared to using only one modality. At the patient level, the area under the ROC curve (AUC) increased from 0.966 to 0.987 when using the integrated set of imaging data and seven pre-selected variables, compared to only using imaging data. More importantly, the sensitivity increased from 0.796 to 0.903. We found a greater impact of data fusion when only a randomly selected subset of available images was used to make the image and tabular data more balanced for each patient. The AUC then increased from 0.949 to 0.984, and the sensitivity increased from 0.727 to 0.849.

These AUC improvements are not large, mainly because of the already excellent performance of the CNN (AUC of 0.966) when only images are used. However, the improvement is clinically highly relevant considering the importance of accuracy in medical diagnostics. We expect an even greater effect when imaging data from a clinical workflow, comprising a more diverse set of diagnostic images, are used.
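
The image-plus-tabular fusion described here is commonly implemented by concatenating a CNN embedding with an encoding of the tabular variables before a shared classifier head. The PyTorch sketch below follows that pattern; layer sizes are illustrative, and only the count of seven pre-selected tabular variables comes from the abstract.

```python
import torch
import torch.nn as nn

class ImageTabularFusion(nn.Module):
    """Concatenate a CNN image embedding with an encoding of the tabular
    EHR variables, then classify AFF vs. NFF. Layer sizes illustrative."""
    def __init__(self, image_encoder, image_dim=2048, n_tabular=7):
        super().__init__()
        # image_encoder: e.g. a CNN backbone without its classification
        # head, outputting (batch, image_dim)
        self.image_encoder = image_encoder
        self.tabular_mlp = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.classifier = nn.Linear(image_dim + 32, 2)

    def forward(self, image, tabular):
        z = torch.cat(
            [self.image_encoder(image), self.tabular_mlp(tabular)], dim=1)
        return self.classifier(z)
```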

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Atypical femoral fractures; Multimodal; Fusion; Deep learning
National Category
Orthopaedics; Medical Imaging; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:liu:diva-199184 (URN); 10.1016/j.compbiomed.2023.107704 (DOI); 001119023400001 (); 37980797 (PubMedID)
Funder
Vinnova, 2021-01954; Knut and Alice Wallenberg Foundation; Swedish Research Council, 2023-01942
Note

Funding: ITEA/VINNOVA [2021-01954]; Region Östergötland; Knut and Alice Wallenberg Foundation; Swedish Research Council [2023-01942]

Available from: 2023-11-15 Created: 2023-11-15 Last updated: 2025-02-09. Bibliographically approved
Tampu, I. E., Bianchessi, T., Eklund, A. & Haj-Hosseini, N. (2024). Pediatric brain tumor classification using MR-images with age fusion. Poster presented at the IEEE International Symposium on Biomedical Imaging (ISBI), Athens.
2024 (English). Conference paper, Poster (with or without abstract) (Other academic)
Place, publisher, year, edition, pages
Athens, 2024
Keywords
cancer, brain tumor, radiology, MRI, deep learning, AI
National Category
Medical Engineering; Medical Imaging; Cancer and Oncology
Identifiers
urn:nbn:se:liu:diva-203314 (URN)
Conference
IEEE International Symposium on Biomedical Imaging (ISBI)
Funder
Swedish Childhood Cancer Foundation; Linköpings universitet, Cancer Strength Area
Available from: 2024-05-06 Created: 2024-05-06 Last updated: 2025-02-09. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-7061-7995