DUNet: A deformable network for retinal vessel segmentation
School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China.
School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin University of Traditional Chinese Medicine, Tianjin, China.
Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering (Pattern Recognition). ORCID iD: 0000-0002-4255-5130
School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China.
2019 (English). In: Knowledge-Based Systems, ISSN 0950-7051, E-ISSN 1872-7409, Vol. 178, p. 149-162. Article in journal (Refereed). Published.
Abstract [en]

Automatic segmentation of retinal vessels in fundus images plays an important role in the diagnosis of diseases such as diabetes and hypertension. In this paper, we propose the Deformable U-Net (DUNet), which exploits the retinal vessels' local features with a U-shaped architecture, in an end-to-end manner, for retinal vessel segmentation. Inspired by the recently introduced deformable convolutional networks, we integrate deformable convolution into the proposed network. DUNet, with upsampling operators to increase the output resolution, is designed to extract context information and enable precise localization by combining low-level features with high-level ones. Furthermore, DUNet captures retinal vessels at various shapes and scales by adaptively adjusting its receptive fields according to the vessels' scales and shapes. Four public datasets, DRIVE, STARE, CHASE_DB1 and HRF, are used to test our models. Detailed comparisons between the proposed network, the deformable neural network, and U-Net are provided in our study. Results show that DUNet extracts more detailed vessels and exhibits state-of-the-art performance for retinal vessel segmentation, with a global accuracy of 0.9566/0.9641/0.9610/0.9651 and an AUC of 0.9802/0.9832/0.9804/0.9831 on DRIVE, STARE, CHASE_DB1 and HRF, respectively. Moreover, to show the generalization ability of DUNet, we use two further retinal vessel datasets, WIDE and SYNTHE, for qualitative and quantitative analysis and comparison with other methods. Extensive cross-training evaluations further assess the extendibility of DUNet. The proposed method has the potential to be applied to the early diagnosis of diseases.
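The core idea behind the deformable convolution the abstract mentions is that each kernel tap samples the input at a position perturbed by a learned per-location offset, with bilinear interpolation handling fractional coordinates. The following is a minimal single-channel NumPy sketch of that sampling mechanism, not the authors' implementation: the function and variable names are illustrative, and in a real network the `offsets` tensor would be predicted by a separate convolutional layer rather than supplied by hand.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate a 2D array at a fractional (y, x) position."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0

    def px(r, c):
        # Zero padding outside the image.
        return img[r, c] if 0 <= r < h and 0 <= c < w else 0.0

    return ((1 - wy) * (1 - wx) * px(y0, x0) + (1 - wy) * wx * px(y0, x1)
            + wy * (1 - wx) * px(y1, x0) + wy * wx * px(y1, x1))

def deformable_conv2d_single(img, kernel, offsets):
    """Naive single-channel deformable convolution ('same' padding).

    offsets has shape (H, W, K*K, 2): a (dy, dx) shift for each kernel
    tap at each output position. With all-zero offsets this reduces to
    an ordinary cross-correlation.
    """
    h, w = img.shape
    k = kernel.shape[0]
    r = k // 2
    out = np.zeros((h, w))
    taps = [(di, dj) for di in range(-r, r + 1) for dj in range(-r, r + 1)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for t, (di, dj) in enumerate(taps):
                dy, dx = offsets[i, j, t]
                # Each tap samples at its regular grid position plus a
                # learned fractional offset.
                acc += kernel[di + r, dj + r] * bilinear_sample(
                    img, i + di + dy, j + dj + dx)
            out[i, j] = acc
    return out
```

With zero offsets and a 3x3 identity kernel the output reproduces the input, while a uniform offset of (0, 1) on every tap shifts the sampling grid one pixel to the right; this is the degree of freedom DUNet uses to align its receptive field with thin, curved vessels.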

Place, publisher, year, edition, pages
Elsevier, 2019. Vol. 178, p. 149-162
Keywords [en]
Retinal blood vessel, Segmentation, DUNet, U-Net, Deformable convolution
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:liu:diva-157172
DOI: 10.1016/j.knosys.2019.04.025
ISI: 000472687500013
Scopus ID: 2-s2.0-85065243868
OAI: oai:DiVA.org:liu-157172
DiVA id: diva2:1319451
Note

Funding agencies: National Natural Science Foundation of China [61702361]; Science and Technology Program of Tianjin, China [16ZXHLGX00170]; National Key Technology R&D Program of China [2018YFB1701700]

Available from: 2019-06-01. Created: 2019-06-01. Last updated: 2019-07-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
