Improving burn depth assessment for pediatric scalds by AI based on semantic segmentation of polarized light photography images
Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). ORCID iD: 0000-0003-2777-9416
Linköping University, Faculty of Medicine and Health Sciences. Linköping University, Department of Biomedical and Clinical Sciences, Division of Surgery, Orthopedics and Oncology. Region Östergötland, Anaesthetics, Operations and Specialty Surgery Center, Department of Hand and Plastic Surgery. (The Burn Centre)
Linköping University, Department of Biomedical and Clinical Sciences, Division of Surgery, Orthopedics and Oncology. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Anaesthetics, Operations and Specialty Surgery Center, Department of Hand and Plastic Surgery. (The Burn Centre) ORCID iD: 0000-0002-5903-2918
Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia.
2021 (English). In: Burns, ISSN 0305-4179, E-ISSN 1879-1409, Vol. 47, no. 7, p. 1586-1593. Article in journal (Refereed). Published.
Abstract [en]

This paper illustrates the efficacy of an artificial intelligence (AI) method, a convolutional neural network based on the U-Net, for burn-depth assessment using semantic segmentation of polarized high-performance light camera images of burn wounds. The proposed method is evaluated on paediatric scald injuries and differentiates four burn-wound depths, defined by observed healing time: superficial partial-thickness (healing in 0–7 days), superficial to intermediate partial-thickness (healing in 8–13 days), intermediate to deep partial-thickness (healing in 14–20 days), and deep partial-thickness together with full-thickness burns (healing after 21 days).
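The healing-time intervals above amount to a simple lookup from observed healing time to depth category. A minimal illustrative sketch, assuming the interval boundaries quoted in the abstract (the function name and the combined fourth-category label are assumptions, not code from the paper):

```python
def depth_class(healing_days: int) -> str:
    """Map observed healing time (in days) to the burn-depth category
    used in the paper's four-class scheme (illustrative sketch only)."""
    if healing_days <= 7:
        return "superficial partial-thickness"
    elif healing_days <= 13:
        return "superficial to intermediate partial-thickness"
    elif healing_days <= 20:
        return "intermediate to deep partial-thickness"
    else:
        return "deep partial-thickness / full-thickness"

print(depth_class(5))   # superficial partial-thickness
print(depth_class(25))  # deep partial-thickness / full-thickness
```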

In total, 100 burn images were acquired. Seventeen of them contained all four burn depths and were used to train the network. Leave-one-out cross-validation on these images yielded an average accuracy and Dice coefficient of almost 97%. The remaining 83 burn-wound images were then evaluated using the networks obtained during cross-validation, achieving an accuracy and a Dice coefficient that both averaged 92%.
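The Dice coefficient reported as an evaluation metric can be sketched for integer label maps as follows; the per-class averaging shown here is a generic formulation, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, n_classes: int = 4) -> float:
    """Mean Dice score over classes for integer label maps (generic sketch).

    For each class c, Dice = 2*|P_c ∩ T_c| / (|P_c| + |T_c|); a class absent
    from both maps is counted as a perfect match.
    """
    scores = []
    for c in range(n_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:
            scores.append(1.0)  # class absent in both prediction and target
        else:
            scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))

# Identical label maps give a Dice score of 1.0
labels = np.random.randint(0, 4, size=(32, 32))
print(dice_coefficient(labels, labels))  # 1.0
```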

This technique offers a promising new automated alternative for clinical decision support to assess and localize burn depths in 2D digital images. Further training and improvement of the underlying algorithm, e.g. with more images, appears feasible and thus promising for the future.

Place, publisher, year, edition, pages
Elsevier, 2021. Vol. 47, no 7, p. 1586-1593
Keywords [en]
Artificial intelligence, Deep learning, Convolutional neural networks, U-Net, Semantic segmentation, Paediatric burns
National Category
Medical Image Processing; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:liu:diva-175890
DOI: 10.1016/j.burns.2021.01.011
ISI: 000719797100015
PubMedID: 33947595
OAI: oai:DiVA.org:liu-175890
DiVA id: diva2:1557470
Available from: 2021-05-26. Created: 2021-05-26. Last updated: 2021-12-07.
In thesis
1. A path along deep learning for medical image analysis: With focus on burn wounds and brain tumors
2021 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The number of medical images that clinicians need to review on a daily basis has increased dramatically during the last decades. Since the number of clinicians has not increased at the same rate, it is necessary to develop tools that can help doctors work more efficiently. Deep learning is the latest trend in the medical imaging field, as methods based on deep learning often outperform more traditional analysis methods. However, a general problem for deep learning in medical imaging is obtaining large, annotated datasets for training the deep networks.

This thesis presents how deep learning can be used for two medical problems: assessment of burn wounds and brain tumors. The first papers present methods for analyzing 2D burn wound images: estimating how large a burn wound is (image segmentation) and classifying how deep it is (image classification). The last papers present methods for analyzing 3D magnetic resonance imaging (MRI) volumes containing brain tumors, estimating how large the different parts of the tumor are (image segmentation).

Since medical imaging datasets are often rather small, image augmentation is necessary to artificially increase the size of the dataset and, at the same time, the performance of a convolutional neural network. Traditional augmentation techniques simply apply operations such as rotation, scaling and elastic deformations to generate new similar images, but it is often not clear what type of augmentation is best for a certain problem. Generative adversarial networks (GANs), on the other hand, can generate completely new images by learning the high-dimensional data distribution of images and sampling from it (which can be seen as advanced augmentation). GANs can also be trained to generate images of type B from images of type A, which can be used for image segmentation.
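A minimal sketch of such traditional geometric augmentation, assuming images are NumPy arrays (a generic illustration, not the thesis pipeline):

```python
import numpy as np

def augment(image: np.ndarray, k_rot: int = 1, flip: bool = True) -> np.ndarray:
    """Generate a new, similar image by simple geometric operations:
    rotate by k_rot * 90 degrees, then optionally mirror horizontally.
    A generic sketch of 'traditional' augmentation."""
    out = np.rot90(image, k=k_rot)
    if flip:
        out = np.fliplr(out)
    return out

img = np.arange(12).reshape(3, 4)
aug = augment(img, k_rot=1)
print(aug.shape)  # rotation by 90 degrees swaps height and width: (4, 3)
```

The same transform would be applied jointly to the image and its segmentation mask so that labels stay aligned with pixels.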

The conclusion of this thesis is that deep learning is a powerful technology from which doctors can benefit, to assess injuries and diseases more accurately and more quickly. In the end, this can lead to better healthcare for patients.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2021. p. 79
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2175
Keywords
Deep learning, Medical image analysis, Burn wounds, Brain tumors, Image classification, Image segmentation, Image augmentation, CNNs, GANs
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:liu:diva-179914
DOI: 10.3384/diss.diva-179914
ISBN: 9789179290382
Public defence
2021-11-19, Hugo Theorell, Building 448, Campus US, Linköping, 13:15 (English)
Opponent
Supervisors
Available from: 2021-10-18. Created: 2021-10-06. Last updated: 2021-10-18. Bibliographically approved.

Open Access in DiVA

fulltext (1036 kB), 289 downloads
File information
File name: FULLTEXT01.pdf
File size: 1036 kB
Checksum (SHA-512):
6a781df8ca41bbdb76443ba822aa33964ac24b4a29d5170c50c9c0d0241b21bee4687f39a0df7321136920cd118a3db591b31d448c0b3eef5f35068b2b5eacd2
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text
PubMed

Authority records

Cirillo, Marco Domenico; Mirdell, Robin; Sjöberg, Folke

Total: 289 downloads
The number of downloads is the sum of all downloads of full texts. It may include e.g. previous versions that are now no longer available.
