Time-Independent Prediction of Burn Depth using Deep Convolutional Neural Networks
Cirillo, Marco Domenico: Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering; Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-2777-9416
Mirdell, Robin: Linköping University, Department of Clinical and Experimental Medicine, Division of Surgery, Orthopedics and Oncology; Linköping University, Faculty of Medicine and Health Sciences; Region Östergötland, Anaesthetics, Operations and Specialty Surgery Center, Department of Hand and Plastic Surgery.
Sjöberg, Folke: Linköping University, Department of Clinical and Experimental Medicine, Division of Surgery, Orthopedics and Oncology; Linköping University, Faculty of Medicine and Health Sciences; Region Östergötland, Anaesthetics, Operations and Specialty Surgery Center, Department of Hand and Plastic Surgery.
Pham, Tuan: Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering; Linköping University, Faculty of Science & Engineering (Pattern Recognition). ORCID iD: 0000-0002-4255-5130
2019 (English). In: Journal of Burn Care & Research, ISSN 1559-047X, E-ISSN 1559-0488, Vol. 40, no 6, p. 857-863. Article in journal (Refereed). Published.
Abstract [en]

We present in this paper the application of deep convolutional neural networks, a state-of-the-art artificial intelligence (AI) approach in machine learning, for automated time-independent prediction of burn depth. Colour images of four types of burn depth injured in the first few days, together with normal skin and background, were acquired with a TiVi camera and used to train and test four pre-trained deep convolutional neural networks: VGG-16, GoogleNet, ResNet-50, and ResNet-101. The best 10-fold cross-validation results were obtained with ResNet-101, with average, minimum, and maximum accuracies of 81.66%, 72.06%, and 88.06%, respectively; the average accuracy, sensitivity, and specificity for the four types of burn depth were 90.54%, 74.35%, and 94.25%, respectively. The predictions were compared to the clinical diagnosis obtained after the wound had healed. The application of AI is thus very promising for prediction of burn depth and can be a useful tool to help guide clinical decisions and the initial treatment of burn wounds.
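The approach described in the abstract is transfer learning: ImageNet-pretrained networks are fine-tuned on the burn-wound images and evaluated with 10-fold cross-validation. The following is a minimal PyTorch sketch of that idea only; it is not the authors' published code, and the class count (here six: four burn depths plus normal skin and background), input size, and optimiser settings are illustrative assumptions.

# Minimal sketch (not the published code): fine-tuning a pre-trained
# ResNet-101 for burn-depth classification. Assumes torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 6  # assumption: four burn depths + normal skin + background

# Load ImageNet-pretrained weights and replace the classification head.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Standard ImageNet preprocessing applied to the colour images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    # One optimisation step on a batch of preprocessed images and labels.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

In a 10-fold cross-validation setup, this fine-tuning would be repeated once per fold, training on nine folds and reporting accuracy, sensitivity, and specificity on the held-out fold.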

Place, publisher, year, edition, pages
Oxford University Press, 2019. Vol. 40, no 6, p. 857-863
Keywords [en]
Burn depth, time-independent prediction, deep convolutional neural network, artificial intelligence
National Category
Surgery; Medical Image Processing; Other Clinical Medicine
Identifiers
URN: urn:nbn:se:liu:diva-157386
DOI: 10.1093/jbcr/irz103
ISI: 000495368300020
PubMedID: 31187119
OAI: oai:DiVA.org:liu-157386
DiVA id: diva2:1322728
Note

Funding agencies: Analytic Imaging Diagnostic Arena (AIDA)

Available from: 2019-06-11. Created: 2019-06-11. Last updated: 2021-10-06. Bibliographically approved.
In thesis
1. A path along deep learning for medical image analysis: With focus on burn wounds and brain tumors
2021 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The number of medical images that clinicians need to review on a daily basis has increased dramatically during the last decades. Since the number of clinicians has not increased as much, it is necessary to develop tools that can help doctors work more efficiently. Deep learning is the latest trend in the medical imaging field, as methods based on deep learning often outperform more traditional analysis methods. A general problem for deep learning in medical imaging, however, is obtaining large, annotated datasets for training the deep networks.

This thesis presents how deep learning can be used for two medical problems: assessment of burn wounds and brain tumors. The first papers present methods for analyzing 2D burn wound images: estimating how large the burn wound is (image segmentation) and classifying how deep the burn wound is (image classification). The last papers present methods for analyzing 3D magnetic resonance imaging (MRI) volumes containing brain tumors: estimating how large the different parts of the tumor are (image segmentation).

Since medical imaging datasets are often rather small, image augmentation is necessary to artificially increase the size of the dataset and thereby improve the performance of a convolutional neural network. Traditional augmentation techniques simply apply operations such as rotation, scaling, and elastic deformations to generate new, similar images, but it is often not clear what type of augmentation is best for a certain problem. Generative adversarial networks (GANs), on the other hand, can generate completely new images by learning the high-dimensional data distribution of the images and sampling from it (which can be seen as advanced augmentation). GANs can also be trained to generate images of type B from images of type A, which can be used for image segmentation.
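As a concrete illustration of the traditional augmentation operations named above, the short sketch below applies random rotation, scaling, and elastic deformation with torchvision. It is not code from the thesis; the parameter values are assumptions made for this example, and ElasticTransform requires torchvision >= 0.14.

# Minimal sketch of traditional image augmentation: rotation, scaling,
# and elastic deformation. Parameter values are illustrative only.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=20),                 # random rotation
    transforms.RandomAffine(degrees=0, scale=(0.8, 1.2)),  # random scaling
    transforms.ElasticTransform(alpha=50.0, sigma=5.0),    # elastic deformation
    transforms.ToTensor(),
])
# Applied on the fly during training, each epoch sees a slightly different
# version of every image, which artificially enlarges a small dataset.

A GAN-based approach would instead learn the distribution of the training images and sample entirely new images from it, rather than perturbing existing ones.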

The conclusion of this thesis is that deep learning is a powerful technology that doctors can benefit from, to assess injuries and diseases more accurately and more quickly. In the end, this can lead to better healthcare for the patients.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2021. p. 79
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2175
Keywords
Deep learning, Medical image analysis, Burn wounds, Brain tumors, Image classification, Image segmentation, Image augmentation, CNNs, GANs
National Category
Medical Image Processing
Identifiers
urn:nbn:se:liu:diva-179914 (URN)
10.3384/diss.diva-179914 (DOI)
9789179290382 (ISBN)
Public defence
2021-11-19, Hugo Theorell, Building 448, Campus US, Linköping, 13:15 (English)
Available from: 2021-10-18. Created: 2021-10-06. Last updated: 2021-10-18. Bibliographically approved.

Open Access in DiVA

fulltext (553 kB)
File information
File name: FULLTEXT01.pdf
File size: 553 kB
Checksum (SHA-512): adf2993ceff5dafde8dbe1233794b1bfffd07520e32cd22b14e5d1c6be1948c411ecc73c00a324e147a317971d4ace6703ed9adbef7f1e7294b4ae71689871c5
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text; PubMed

Authority records

Cirillo, Marco Domenico; Mirdell, Robin; Sjöberg, Folke; Pham, Tuan
