Publications (10 of 27)
Tampu, I. E., Nyman, P., Spyretos, C., Blystad, I., Shamikh, A., Prochazka, G., . . . Haj-Hosseini, N. (2026). Pediatric brain tumor classification using digital pathology and deep learning: Evaluation of SOTA methods on a multi-center Swedish cohort. Brain Pathology, 36(1), Article ID e70029.
2026 (English). In: Brain Pathology, ISSN 1015-6305, Vol. 36, no 1, article id e70029. Article in journal (Refereed). Published.
Abstract [en]

Brain tumors are the most common solid tumors in children and young adults, but the scarcity of large histopathology datasets has limited the application of computational pathology in this group. This study implements two weakly supervised multiple-instance learning (MIL) approaches on patch features obtained from state-of-the-art histology-specific foundation models to classify pediatric brain tumors in hematoxylin and eosin whole slide images (WSIs) from a multi-center Swedish cohort. WSIs from 540 subjects (age 8.5 ± 4.9 years) diagnosed with brain tumors were gathered from the six Swedish university hospitals. Instance (patch)-level features were obtained from WSIs using three pre-trained feature extractors: ResNet50, UNI, and CONCH. Instances were aggregated using attention-based MIL (ABMIL) or clustering-constrained attention MIL (CLAM) for patient-level classification. Models were evaluated on three classification tasks based on the hierarchical classification of pediatric brain tumors: tumor category, family, and type. Model generalization was assessed by training on data from two of the centers and testing on data from the four other centers. Model interpretability was evaluated through attention mapping. The highest classification performance was achieved using UNI features and ABMIL aggregation, with Matthews correlation coefficients of 0.76 ± 0.04, 0.63 ± 0.04, and 0.60 ± 0.05 for tumor category, family, and type classification, respectively. When evaluating generalization, models utilizing UNI and CONCH features outperformed those using ResNet50. However, the drop in performance from in-site to out-of-site testing was similar across feature extractors. These results show the potential of state-of-the-art computational pathology methods in diagnosing pediatric brain tumors at different hierarchical levels with fair generalizability on a multi-center national dataset.
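The attention-based MIL (ABMIL) aggregation named in the abstract can be sketched in a few lines: each patch embedding receives a learned softmax attention weight, and the weighted sum becomes the slide-level representation. This is a minimal NumPy illustration with random, illustrative parameters and dimensions — a sketch of the pooling idea, not the authors' implementation.

```python
import numpy as np

def abmil_pool(H, V, w):
    """Attention-based MIL pooling: score each patch embedding with a
    small tanh attention network, softmax the scores, and return the
    attention-weighted sum as the bag (slide-level) embedding.
    H: (n_patches, d) patch features; V: (k, d) and w: (k,) are the
    attention parameters (learned jointly with the classifier in practice)."""
    scores = w @ np.tanh(V @ H.T)      # (n_patches,) raw attention scores
    a = np.exp(scores - scores.max())  # numerically stable softmax
    a = a / a.sum()                    # attention weights, sum to 1
    return a @ H, a                    # bag embedding (d,), weights (n_patches,)

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 16))  # 8 patch embeddings of dimension 16
V = rng.normal(size=(4, 16))
w = rng.normal(size=4)
z, a = abmil_pool(H, V, w)
```

The attention weights `a` are also what the attention maps mentioned for interpretability visualize: patches with high weight are the ones the model relied on.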

Place, publisher, year, edition, pages
John Wiley & Sons, 2026
Keywords
Deep learning, artificial intelligence, Cancer, Pediatric brain tumor, digital pathology
National Category
Medical Imaging Cancer and Oncology Pediatrics
Identifiers
urn:nbn:se:liu:diva-208705 (URN), 10.1111/bpa.70029 (DOI), 001519965600001 (), 40589103 (PubMedID), 2-s2.0-105009437454 (Scopus ID)
Funder
Swedish Childhood Cancer Foundation, MT2021-0011, MT2022-0013; Linköpings universitet, Cocozza 2022; Linköpings universitet, Cancer Strength Area; Vinnova, AIDA (2022-2222); Region Östergötland, ALF, 974566; Wallenberg Foundations, Wallenberg Center for Molecular Medicine
Note

Funding Agencies|Linköping University's Cancer Strength Area; ALF Grants, Region Östergötland [974566]; Vinnova via Medtech4Health and Analytic Imaging Diagnostics Arena [2222]; Swedish Childhood Cancer Fund [MT2021-0011, MT2022-0013]; Joanna Cocozza's Foundation for Children's Medical Research

Available from: 2024-10-21. Created: 2024-10-21. Last updated: 2025-12-18. Bibliographically approved.
Tampu, I. E., Bianchessi, T., Blystad, I., Lundberg, P., Nyman, P., Eklund, A. & Haj-Hosseini, N. (2025). Pediatric brain tumor classification using deep learning on MR-images with age fusion. Neuro-Oncology Advances, 7(1), Article ID vdae205.
2025 (English). In: Neuro-Oncology Advances, E-ISSN 2632-2498, Vol. 7, no 1, article id vdae205. Article in journal (Refereed). Published.
Abstract [en]

Purpose: To implement and evaluate deep learning-based methods for the classification of pediatric brain tumors in MR data.

Materials and methods: A subset of the “Children’s Brain Tumor Network” dataset was retrospectively used (n=178 subjects, female=72, male=102, NA=4, age-range [0.01, 36.49] years) with tumor types being low-grade astrocytoma (n=84), ependymoma (n=32), and medulloblastoma (n=62). T1w post-contrast (n=94 subjects), T2w (n=160 subjects), and ADC (n=66 subjects) MR sequences were used separately. Two deep-learning models were trained on transversal slices showing tumor. Joint fusion was implemented to combine image and age data, and two pre-training paradigms were utilized. Model explainability was investigated using gradient-weighted class activation mapping (Grad-CAM), and the learned feature space was visualized using principal component analysis (PCA).

Results: The highest tumor-type classification performance was achieved when using a vision transformer model pre-trained on ImageNet and fine-tuned on ADC images with age fusion (MCC: 0.77 ± 0.14, Accuracy: 0.87 ± 0.08), followed by models trained on T2w (MCC: 0.58 ± 0.11, Accuracy: 0.73 ± 0.08) and T1w post-contrast (MCC: 0.41 ± 0.11, Accuracy: 0.62 ± 0.08) data. Age fusion marginally improved the model’s performance. Both model architectures performed similarly across the experiments, with no differences between the pre-training strategies. Grad-CAMs showed that the models’ attention focused on the brain region. PCA of the feature space showed greater separation of the tumor-type clusters when using contrastive pre-training.

Conclusion: Classification of pediatric brain tumors on MR-images could be accomplished using deep learning, with the top-performing model being trained on ADC data, which is used by radiologists for the clinical classification of these tumors.
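The joint fusion of image and age data described in the methods can be sketched as mapping the scalar age through a small learned layer and concatenating the result with the image embedding before the classifier head. This NumPy sketch is illustrative only — the function name, shapes, and parameter values are assumptions, not the paper's code.

```python
import numpy as np

def joint_fusion(img_feat, age, w_age, b_age):
    """Joint fusion sketch: embed the scalar age with a small learned
    layer, then concatenate with the image embedding so the classifier
    sees both modalities."""
    age_emb = np.tanh(w_age * age + b_age)      # (k,) learned age embedding
    return np.concatenate([img_feat, age_emb])  # (d + k,) fused feature vector

img_feat = np.ones(128)  # stand-in for a vision transformer image embedding
fused = joint_fusion(img_feat, age=7.5, w_age=np.full(8, 0.1), b_age=np.zeros(8))
```

In joint (as opposed to late) fusion, `w_age` and `b_age` are trained end-to-end with the image backbone, so the age signal can shape the learned image features.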

Place, publisher, year, edition, pages
Oxford University Press, 2025
Keywords
deep-learning, artificial intelligence, cancer, pediatric brain tumor, MRI, data fusion
National Category
Medical Imaging Cancer and Oncology Pediatrics
Identifiers
urn:nbn:se:liu:diva-208701 (URN), 10.1093/noajnl/vdae205 (DOI), 001390014100001 (), 39777258 (PubMedID), 2-s2.0-85214564318 (Scopus ID)
Funder
Swedish Childhood Cancer Foundation, MT2021-0011, MT2022-0013; Linköpings universitet, Cocozza 2022; Linköpings universitet, Cancer Strength Area; Region Östergötland, ALF, 974566
Note

Funding Agencies|Swedish Childhood Cancer Foundation; Children's Brain Tumor Tissue Consortium (CBTTC) / The Children's Brain Tumor Network (CBTN)

Available from: 2024-10-21. Created: 2024-10-21. Last updated: 2025-04-10. Bibliographically approved.
Akbar, M. U., Larsson, M., Blystad, I. & Eklund, A. (2024). Brain tumor segmentation using synthetic MR images - A comparison of GANs and diffusion models. Scientific Data, 11(1), Article ID 259.
2024 (English). In: Scientific Data, E-ISSN 2052-4463, Vol. 11, no 1, article id 259. Article in journal (Refereed). Published.
Abstract [en]

Large annotated datasets are required for training deep learning models, but in medical imaging data sharing is often complicated due to ethics, anonymization and data protection legislation. Generative AI models, such as generative adversarial networks (GANs) and diffusion models, can today produce very realistic synthetic images, and can potentially facilitate data sharing. However, in order to share synthetic medical images it must first be demonstrated that they can be used for training different networks with acceptable performance. Here, we therefore comprehensively evaluate four GANs (progressive GAN, StyleGAN 1–3) and a diffusion model for the task of brain tumor segmentation (using two segmentation networks, U-Net and a Swin transformer). Our results show that segmentation networks trained on synthetic images reach Dice scores that are 80%–90% of the Dice scores obtained when training with real images, but that memorization of the training images can be a problem for diffusion models if the original dataset is too small. Our conclusion is that sharing synthetic medical images is a viable alternative to sharing real images, but that further work is required. The trained generative models and the generated synthetic images are shared on the AIDA data hub.
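The Dice scores used to compare networks trained on real versus synthetic images are computed from binary segmentation masks as 2|A∩B| / (|A| + |B|). A small self-contained sketch with toy masks (not the study's data):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 foreground voxels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 voxels, 4 overlapping
# dice(a, b) = 2 * 4 / (4 + 6) = 0.8
```

A Dice score of 1.0 means perfect overlap, so "80%–90% of the real-data Dice" describes a relative, not absolute, performance gap.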

Place, publisher, year, edition, pages
Nature Publishing Group, 2024
Keywords
Deep learning, brain tumor, magnetic resonance imaging, synthetic images, generative adversarial networks, diffusion models
National Category
Radiology, Nuclear Medicine and Medical Imaging Medical Imaging
Identifiers
urn:nbn:se:liu:diva-201435 (URN), 10.1038/s41597-024-03073-x (DOI), 001177063000006 (), 38424097 (PubMedID), 2-s2.0-85186294143 (Scopus ID)
Funder
Vinnova, 2021-01954; Vinnova, 2021-01420; Åke Wiberg Foundation, M22-0088
Note

Funding Agencies|ITEA/VINNOVA project ASSIST [2021-01954]; VINNOVA [2021-01420]; LiU Cancer; VINNOVA AIDA; Åke Wiberg Foundation [M22-0088]; Wallenberg Center for Molecular Medicine as an associated clinical fellow

Available from: 2024-03-09. Created: 2024-03-09. Last updated: 2025-02-09. Bibliographically approved.
Spyretos, C., Tampu, I. E., Khalili, N., Pardo Ladino, J. M., Nyman, P., Blystad, I., . . . Haj-Hosseini, N. (2024). Early fusion of H&E and IHC histology images for pediatric brain tumor classification. In: Francesco Ciompi, Nadieh Khalili, Linda Studer, Milda Poceviciute, Amjad Khan, Mitko Veta, Yiping Jiao, Neda Haj-Hosseini, Hao Chen, Shan Raza, Fayyaz Minhas, Inti Zlobec, Nikolay Burlutskiy, Veronica Vilaplana, Biagio Brattoli, Henning Muller, Manfredo Atzori (Eds.), Proceedings of Machine Learning Research. Paper presented at the 27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Computational Pathology with Multimodal Data (COMPAYL) Workshop (pp. 192-202). Marrakesh: PMLR, 254
2024 (English). In: Proceedings of Machine Learning Research / [ed] Francesco Ciompi, Nadieh Khalili, Linda Studer, Milda Poceviciute, Amjad Khan, Mitko Veta, Yiping Jiao, Neda Haj-Hosseini, Hao Chen, Shan Raza, Fayyaz Minhas, Inti Zlobec, Nikolay Burlutskiy, Veronica Vilaplana, Biagio Brattoli, Henning Muller, Manfredo Atzori. Marrakesh: PMLR, 2024, Vol. 254, p. 192-202. Conference paper, Published paper (Refereed).
Abstract [en]

This study explores the application of computational pathology to analyze pediatric brain tumors utilizing hematoxylin and eosin (H&E) and immunohistochemistry (IHC) whole slide images (WSIs). Experiments were conducted on H&E images for predicting tumor diagnosis and fusing them with unregistered IHC images to investigate potential improvements. Patch features were extracted using UNI, a vision transformer (ViT) model trained on H&E data, and whole slide classification was achieved using the attention-based multiple instance learning CLAM framework. In the astrocytoma tumor classification, early fusion of the H&E and IHC images significantly improved the differentiation between tumor grades (balanced accuracy: 0.82 ± 0.05 vs. 0.84 ± 0.05). In the multiclass classification, H&E images alone had a balanced accuracy of 0.79 ± 0.03, with no improvement obtained when fused with IHC. The findings highlight the potential of using multi-stain fusion to advance the diagnosis of pediatric brain tumors; however, further fusion methods should be investigated.

Place, publisher, year, edition, pages
Marrakesh: PMLR, 2024
Series
Proceedings of the MICCAI Workshop on Computational Pathology, ISSN 2640-3498 ; 254
Keywords
pediatric brain tumour, immunohistochemistry (IHC), computational pathology, early fusion, foundation model, cancer
National Category
Medical Imaging Cancer and Oncology Pediatrics
Identifiers
urn:nbn:se:liu:diva-208716 (URN), 001479306100015 ()
Conference
27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Computational Pathology with Multimodal Data (COMPAYL) Workshop
Funder
Swedish Childhood Cancer Foundation, MT2021-0011, MT2022-0013; Vinnova, AIDA (2022-2222); Linköpings universitet, Joanna Cocozza 2022; Linköpings universitet, Cancer Strength Area
Note

Funding Agencies|Swedish Childhood Cancer Foundation [MT2021-0011, MT2022-0013]; Joanna Cocozza's Foundation; Vinnova project via Medtech4Health and Analytic Imaging Diagnostics Arena (1908) [2017-02447, 2222]; Linköping University's Cancer Strength Area (2022)

Available from: 2024-10-21. Created: 2024-10-21. Last updated: 2026-02-23.
Gustafsson, C. J., Löfstedt, T., Åkesson, M., Rogowski, V., Akbar, M. U., Hellander, A., . . . Eklund, A. (2024). Federated training of segmentation models for radiation therapy treatment planning. Paper presented at ESTRO. Radiotherapy and Oncology, 194, S4819-S4822
2024 (English). In: Radiotherapy and Oncology, ISSN 0167-8140, E-ISSN 1879-0887, Vol. 194, p. S4819-S4822. Article in journal, Meeting abstract (Refereed). Published.
Abstract [en]

Radiotherapy treatment planning takes substantial time, several hours per patient, as it involves manual segmentation of tumor and risk organs. Segmentation networks can be trained to automatically perform the segmentations, but typically require large annotated datasets for training. Sharing of sensitive data between hospitals, to create a larger dataset, is often difficult due to ethics and GDPR. Here we therefore demonstrate that federated learning is a solution to this problem, as then only the segmentation model is sent between each hospital and a global server. We export and preprocess brain tumor images from the oncology departments in Linköping and Lund, and use federated learning to train a global segmentation model using two different frameworks.
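The aggregation step at the global server described above can be sketched as weighted federated averaging in the style of FedAvg: each hospital trains locally and sends only model parameters, which the server averages weighted by dataset size. The site names and sample counts below are hypothetical, and real frameworks handle this round-trip internally.

```python
import numpy as np

def fedavg(client_weights, n_samples):
    """One FedAvg aggregation round: average each parameter tensor across
    clients, weighted by the client's number of training samples. Only
    parameters travel between sites, never the patient images."""
    total = sum(n_samples)
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, n_samples))
        for i in range(len(client_weights[0]))
    ]

# two hypothetical sites, each with a single 2x2 weight tensor
w_site_a = [np.full((2, 2), 1.0)]
w_site_b = [np.full((2, 2), 3.0)]
global_w = fedavg([w_site_a, w_site_b], n_samples=[100, 300])
# weighted mean: (1.0 * 100 + 3.0 * 300) / 400 = 2.5
```

The server then broadcasts `global_w` back to the sites for the next local training round, which is what lets the two clinics build a shared segmentation model without exchanging images.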

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Radiotherapy, deep learning, federated learning
National Category
Medical Imaging Cancer and Oncology
Identifiers
urn:nbn:se:liu:diva-207369 (URN), 10.1016/s0167-8140(24)01903-0 (DOI)
Conference
ESTRO
Funder
Vinnova, 2021-01954
Available from: 2024-09-07. Created: 2024-09-07. Last updated: 2025-08-30.
Fällmar, D., Granberg, T., Kits, A., Nilsson, M., Sundström, K., Åslund, P.-E., . . . Blystad, I. (2023). Att tänka på vid neuroradiologisk diagnostik av gravida och ammande: Om datortomografi, magnetkamera och kontrastmedel [Conditions for performing CT and MRI scans in pregnant and lactating patients]. Läkartidningen, 120
2023 (Swedish). In: Läkartidningen, ISSN 0023-7205, E-ISSN 1652-7518, Vol. 120. Article, review/survey (Refereed). Published.
Abstract [en]

Many women are pregnant during several percent of their lives. Occasionally, there is a need for neuroradiological examinations during pregnancy or lactation. In our clinical work, we regularly see that female patients are being withheld relevant diagnostic scans during pregnancy, due to insufficient knowledge or an unbalanced comparison between benefits and risks. This article describes the current knowledge regarding conditions for performing CT and MRI scans in pregnant and lactating patients, including the use of contrast media. PET scans and reactions to contrast media are briefly mentioned, but interventional radiology is not discussed.

Abstract [sv]

Many women are pregnant for several percent of their lives. Illness can occur at any stage of life, including during pregnancy, when most people naturally have a heightened awareness of their health. Radiological diagnostics are sometimes needed during pregnancy or lactation, despite possible concerns about radiation and other potential risks. In our work as radiologists, we regularly see pregnant patients being withheld medically justified examinations, sometimes due to lack of knowledge or a skewed assessment of the examination's risks. The uncertainty lies not only with the patients but also with many referring physicians and radiologists. A likely reason why many physicians are uncertain is that the necessary information has been scattered across different sources and has therefore been difficult to survey. This article aims to gather that information and make it accessible. It can also be difficult to weigh the risks of radiation doses, since this requires some prior knowledge, and there is often a psychological drive to minimize risks to the fetus, which can conflict with the health of the pregnant patient. The authors of this article work with radiology in different capacities and represent, among others, Svensk förening för neuroradiologi (SFNR), Svenska alliansen för magnetkamerasäkerhet (SAMS), and the contrast media group within Svensk förening för medicinsk radiologi (SFMR). The aim is to increase knowledge of these issues among physicians and to help ensure that pregnant and lactating patients in need of neuroradiological diagnostics receive a correct and justified examination. The article focuses on neuroradiological diagnostics with computed tomography (CT) and magnetic resonance imaging (MRI). CT exposes patients to ionizing radiation, which MRI does not. In addition, different kinds of contrast media (CM) are used: iodinated contrast media for CT and (primarily) gadolinium-based contrast media for MRI. The conditions for administering contrast media during pregnancy and lactation are described below. Positron emission tomography (PET) and the management of hypersensitivity reactions are mentioned briefly. Interventional radiology and conventional angiography are not discussed here, but are subject to individual assessment.

Place, publisher, year, edition, pages
Sveriges Läkarförbund, 2023
National Category
Gynaecology, Obstetrics and Reproductive Medicine
Identifiers
urn:nbn:se:liu:diva-202797 (URN), 37656000 (PubMedID), 2-s2.0-85169356028 (Scopus ID)
Available from: 2024-04-26. Created: 2024-04-26. Last updated: 2025-02-11.
Tampu, I. E., Haj-Hosseini, N., Blystad, I. & Eklund, A. (2023). Deep learning-based detection and identification of brain tumor biomarkers in quantitative MR-images. Machine Learning: Science and Technology, 4(3), Article ID 035038.
2023 (English). In: Machine Learning: Science and Technology, E-ISSN 2632-2153, Vol. 4, no 3, article id 035038. Article in journal (Refereed). Published.
Abstract [en]

The infiltrative nature of malignant gliomas results in active tumor spreading into the peritumoral edema, which is not visible in conventional magnetic resonance imaging (cMRI) even after contrast injection. MR relaxometry (qMRI) measures relaxation rates dependent on tissue properties, and can offer additional contrast mechanisms to highlight the non-enhancing infiltrative tumor. To investigate if qMRI data provides additional information compared to cMRI sequences when considering deep learning-based brain tumor detection and segmentation, preoperative conventional (T1-w pre- and post-contrast, T2-w and FLAIR) and quantitative (pre- and post-contrast R1, R2 and proton density) MR data was obtained from 23 patients with typical radiological findings suggestive of a high-grade malignant glioma. 2D deep learning models were trained on transversal slices (n=528) for tumor detection and segmentation using either conventional or quantitative data. Moreover, trends in quantitative R1 and R2 rates of regions identified as relevant for tumor detection by model explainability methods were qualitatively analyzed. Tumor detection and segmentation performance for models trained with a combination of qMRI pre- and post-contrast data was the highest (detection MCC=0.72, segmentation DSC=0.90); however, the difference compared to cMRI was not statistically significant. Overall analysis of the relevant regions identified using model explainability showed no differences between models trained on cMRI or qMRI. When looking at individual cases, relaxation rates of brain regions outside the annotation that were identified as relevant for tumor detection exhibited changes after contrast injection similar to regions inside the annotation in the majority of cases. In conclusion, models trained on qMRI data obtained similar detection and segmentation performance to those trained on cMRI data, with the advantage of quantitatively measuring brain tissue properties within a similar scan time. When considering individual patients, the analysis of relaxation rates of regions identified by model explainability suggests the presence of infiltrative tumor outside the cMRI-based tumor annotation.
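The detection performance here is reported as the Matthews correlation coefficient (MCC), which summarizes a binary confusion matrix in a single value in [-1, 1]. A minimal sketch with toy counts (not the study's results):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); 0 if undefined."""
    num = tp * tn - fp * fn
    den = math.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den else 0.0

# toy balanced example: (40*40 - 10*10) / sqrt(50*50*50*50) = 1500 / 2500 = 0.6
score = mcc(tp=40, tn=40, fp=10, fn=10)
```

Unlike accuracy, MCC stays informative on imbalanced classes, which is presumably why it is used alongside DSC in these tumor-detection results.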

Place, publisher, year, edition, pages
IOP Publishing Ltd, 2023
Keywords
quantitative MRI, brain tumor, deep learning, model explainability, cancer
National Category
Medical Imaging
Identifiers
urn:nbn:se:liu:diva-196603 (URN), 10.1088/2632-2153/acf095 (DOI), 001058164800001 (), 2-s2.0-85170823259 (Scopus ID)
Funder
Swedish Research Council, 2018-05250; Vinnova, ASSIST; Vinnova, IMPACT; Åke Wiberg Foundation, M22-0088; Medical Research Council of Southeast Sweden (FORSS), FORSS-234551; Linköpings universitet, LiU Cancer Strength Area 2021
Note

Funding: CENIIT at Linköping University; ITEA3/VINNOVA funded project Intelligence based iMprovement of Personalized treatment And Clinical workflow supporT (IMPACT); ITEA4/VINNOVA funded project Automation, Surgery Support and Intuitive 3D visualization to optimize workflow in IGT SysTems (ASSIST) [2021-01954]; Cancer Strength Area at Linköping University; VINNOVA project via the Analytic Imaging Diagnostics Arena (AIDA) [2017-02447]; Medical Research Council of Southeast Sweden [FORSS-234551]; Swedish Research Council [2018-05250]

Available from: 2023-08-16. Created: 2023-08-16. Last updated: 2025-02-19.
Boito, D., Herberthson, M., Dela Haije, T., Blystad, I. & Özarslan, E. (2023). Diffusivity-limited q-space trajectory imaging. Magnetic Resonance Letters, 3(2), 187-196
2023 (English). In: Magnetic Resonance Letters, ISSN 2772-5162, Vol. 3, no 2, p. 187-196. Article in journal (Refereed). Published.
Abstract [en]

Q-space trajectory imaging (QTI) allows non-invasive estimation of microstructural features of heterogeneous porous media via diffusion magnetic resonance imaging performed with generalised gradient waveforms. A recently proposed constrained estimation framework, called QTI+, improved QTI’s resilience to noise and data sparsity, thus increasing the reliability of the method by enforcing relevant positivity constraints. In this work we consider expanding the set of constraints to be applied during the fitting of the QTI model. We show that the additional conditions, which introduce an upper bound on the diffusivity values, further improve the retrieved parameters on a publicly available human brain dataset as well as on data acquired from healthy volunteers using a scanner-ready protocol.

Place, publisher, year, edition, pages
KeAi Publishing Communications, 2023
Keywords
Diffusion; Diffusion MRI; q-space trajectory imaging; QTI; Microstructure; Microscopic anisotropy; QTI+; Constrained
National Category
Medical Engineering Mathematics
Identifiers
urn:nbn:se:liu:diva-198025 (URN), 10.1016/j.mrl.2022.12.003 (DOI), 001223797500001 ()
Funder
Swedish Foundation for Strategic Research; Vinnova
Note

Funding agencies: This research was funded by Sweden’s Innovation Agency (VINNOVA) ASSIST, Analytic Imaging Diagnostic Arena (AIDA), Swedish Foundation for Strategic Research (RMX18-0056), Linköping University Center for Industrial Information Technology (CENIIT), LiU Cancer, Barncancerfonden, and a research grant (00028384) from VILLUM FONDEN.

Available from: 2023-09-22. Created: 2023-09-22. Last updated: 2024-11-15. Bibliographically approved.
Abramian, D., Blystad, I. & Eklund, A. (2023). Evaluation of inverse treatment planning for gamma knife radiosurgery using fMRI brain activation maps as organs at risk. Medical physics (Lancaster), 50(9), 5297-5311
2023 (English). In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 50, no 9, p. 5297-5311. Article in journal (Refereed). Published.
Abstract [en]

Background: Stereotactic radiosurgery (SRS) can be an effective primary or adjuvant treatment option for intracranial tumors. However, it carries risks of various radiation toxicities, which can lead to functional deficits for the patients. Current inverse planning algorithms for SRS provide an efficient way for sparing organs at risk (OARs) by setting maximum radiation dose constraints in the treatment planning process.

Purpose: We propose using activation maps from functional MRI (fMRI) to map the eloquent regions of the brain and define functional OARs (fOARs) for Gamma Knife SRS treatment planning.

Methods: We implemented a pipeline for analyzing patient fMRI data, generating fOARs from the resulting activation maps, and loading them onto the GammaPlan treatment planning software. We used the Lightning inverse planner to generate multiple treatment plans from open MRI data of five subjects, and evaluated the effects of incorporating the proposed fOARs.

Results: The Lightning optimizer designs treatment plans with high conformity to the specified parameters. Setting maximum dose constraints on fOARs successfully limits the radiation dose incident on them, but can have a negative impact on treatment plan quality metrics. By masking out fOAR voxels surrounding the tumor target it is possible to achieve high quality treatment plans while controlling the radiation dose on fOARs.

Conclusions: The proposed method can effectively reduce the radiation dose incident on the eloquent brain areas during Gamma Knife SRS of brain tumors.
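The masking step in the results, removing fOAR voxels that surround the tumor target, can be sketched as a binary dilation of the target mask followed by subtraction from the fOAR mask. This NumPy sketch uses a voxel-count margin and wrap-around shifts for brevity (clinical implementations would use millimetre margins and proper morphological operators), and all names and arrays are illustrative.

```python
import numpy as np

def mask_foar_near_target(foar, target, margin=1):
    """Drop fOAR voxels within `margin` voxels of the tumor target, so
    dose constraints immediately next to the target do not hurt coverage.
    Dilation is done with axis-wise shifts (von Neumann neighbourhood)."""
    dilated = target.copy()
    for _ in range(margin):
        grown = dilated.copy()
        for axis in range(target.ndim):
            # one-voxel dilation along each axis; np.roll wraps at edges
            grown |= np.roll(dilated, 1, axis) | np.roll(dilated, -1, axis)
        dilated = grown
    return foar & ~dilated

foar = np.ones((5, 5), dtype=bool)                    # toy fOAR covering the grid
target = np.zeros((5, 5), dtype=bool); target[2, 2] = True
kept = mask_foar_near_target(foar, target, margin=1)  # fOAR minus dilated target
```

The dose constraint is then applied only to `kept`, which is the trade-off the abstract describes: the fOAR is still protected, but not at the expense of target coverage.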

Place, publisher, year, edition, pages
WILEY, 2023
Keywords
fMRI, radiotherapy, radiosurgery, gamma knife, brain tumor
National Category
Radiology, Nuclear Medicine and Medical Imaging Cancer and Oncology
Identifiers
urn:nbn:se:liu:diva-196436 (URN), 10.1002/mp.16660 (DOI), 001041239600001 (), 37531209 (PubMedID)
Funder
Vinnova, 2018-02230; Vinnova, 2021-01954
Note

Funding: Centrum för Industriell Informationsteknologi, Linköpings Universitet; Vinnova [2018-02230, 2021-01954]

Available from: 2023-08-03. Created: 2023-08-03. Last updated: 2024-05-05.
Boito, D., Eklund, A., Tisell, A., Levi, R., Özarslan, E. & Blystad, I. (2023). MRI with generalized diffusion encoding reveals damaged white matter in patients previously hospitalized for COVID-19 and with persisting symptoms at follow-up. Brain Communications, 5(6), Article ID fcad284.
2023 (English). In: Brain Communications, E-ISSN 2632-1297, Vol. 5, no 6, article id fcad284. Article in journal (Refereed). Published.
Abstract [en]

There is mounting evidence of the long-term effects of COVID-19 on the central nervous system, with patients experiencing diverse symptoms, often suggesting brain involvement. Conventional brain MRI of these patients shows unspecific patterns, with no clear connection of the symptomatology to brain tissue abnormalities, whereas diffusion tensor studies and volumetric analyses detect measurable changes in the brain after COVID-19. Diffusion MRI exploits the random motion of water molecules to achieve unique sensitivity to structures at the microscopic level, and new sequences employing generalized diffusion encoding provide structural information that is sensitive to intravoxel features. In this observational study, a total of 32 persons were investigated: 16 patients previously hospitalized for COVID-19 with persisting symptoms of post-COVID condition (mean age 60 years, range 41–79, all male) at 7-month follow-up, and 16 matched controls, not previously hospitalized for COVID-19, with no post-COVID symptoms (mean age 58 years, range 46–69, 11 males). Standard MRI and generalized diffusion encoding MRI were employed to examine the brain white matter of the subjects. To detect possible group differences, several tissue microstructure descriptors obtainable with the employed diffusion sequence, the fractional anisotropy, mean diffusivity, axial diffusivity, radial diffusivity, microscopic anisotropy, orientational coherence (Cc) and variance in compartment’s size (CMD), were analysed using the tract-based spatial statistics framework. The tract-based spatial statistics analysis showed widespread statistically significant differences (P < 0.05, corrected for multiple comparisons using the familywise error rate) in all the considered metrics in the white matter of the patients compared to the controls. Fractional anisotropy, microscopic anisotropy and Cc were lower in the patient group, while axial diffusivity, radial diffusivity, mean diffusivity and CMD were higher.
Significant changes in fractional anisotropy, microscopic anisotropy and CMD affected approximately half of the analysed white matter voxels located across all brain lobes, while changes in Cc were mainly found in the occipital parts of the brain. Given the predominant alteration in microscopic anisotropy compared to Cc, the observed changes in diffusion anisotropy are mostly due to loss of local anisotropy, possibly connected to axonal damage, rather than white matter fibre coherence disruption. The increase in radial diffusivity is indicative of demyelination, while the changes in mean diffusivity and CMD are compatible with vasogenic oedema. In summary, these widespread alterations of white matter microstructure are indicative of vasogenic oedema, demyelination and axonal damage. These changes might be a contributing factor to the diversity of central nervous system symptoms that many patients experience after COVID-19.

Place, publisher, year, edition, pages
Oxford University Press, 2023
Keywords
MRI; Q-space trajectory imaging; microscopic fractional anisotropy; fractional anisotropy; COVID-19
National Category
Radiology, Nuclear Medicine and Medical Imaging Neurosciences Medical Imaging
Identifiers
urn:nbn:se:liu:diva-199215 (URN), 10.1093/braincomms/fcad284 (DOI), 001103246200003 (), 37953843 (PubMedID)
Funder
Vinnova, 2021-01954; Wallenberg Foundations
Note

Funding: Analytic Imaging Diagnostic Arena (AIDA), a Medtech4Health initiative; ITEA/VINNOVA (The Swedish Innovation Agency) project ASSIST (Automation, Surgery Support and Intuitive 3D visualization to optimize workflow in IGT SysTems) [2021-01954]; Wallenberg Center for Molecular Medicine

Available from: 2023-11-19. Created: 2023-11-19. Last updated: 2025-02-09. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-8857-5698
