Uncertainty-Aware Convolutional Neural Networks for Vision Tasks on Sparse Data
Eldesokey, Abdelrahman
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-3292-7153
2021 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Early computer vision algorithms operated on dense 2D images captured using conventional monocular or color sensors. Such sensors are passive by nature, providing limited scene representations based on reflected light, and they can only operate under adequate lighting conditions. These limitations hindered the development of many computer vision algorithms that require some knowledge of the scene structure under varying conditions. The emergence of active sensors such as Time-of-Flight (ToF) cameras helped mitigate these limitations; however, they gave rise to many novel challenges, such as data sparsity stemming from multi-path interference and occlusion.

Many approaches have been proposed to alleviate these challenges by enhancing the acquisition process of ToF cameras or by post-processing their output. Nonetheless, these approaches are sensor- and model-specific, requiring individual tuning for each sensor. Alternatively, learning-based approaches, i.e., machine learning, are an attractive solution to these problems: they learn a mapping from the original sensor output to a refined version of it. Convolutional Neural Networks (CNNs) are one example of powerful machine learning approaches, and they have demonstrated remarkable success on many computer vision tasks. Unfortunately, CNNs naturally operate on dense data and cannot efficiently handle the sparse data produced by ToF sensors.

In this thesis, we propose a novel variation of CNNs, denoted Normalized Convolutional Neural Networks (NCNNs), that can directly and efficiently handle sparse data. First, we formulate a differentiable normalized convolution layer that takes sparse data and a confidence map as input. The confidence map tells the normalized convolution layer which pixels are valid and which are missing, and the missing values are interpolated from their valid neighborhood. Afterwards, we propose a confidence propagation criterion that allows building cascades of normalized convolution layers, analogous to standard CNNs. We evaluated our approach on the task of unguided scene depth completion and achieved state-of-the-art results using an exceptionally small network.
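To make the layer concrete, here is a minimal sketch of a normalized convolution forward pass in Python (a single-channel illustration with hypothetical names; the thesis formulation additionally constrains the filter weights to be non-negative and learns them end-to-end):

    import numpy as np
    from scipy.signal import convolve2d

    def normalized_convolution(x, c, w, eps=1e-8):
        """Densify sparse input x using its confidence map c.

        x : 2D array; values at missing pixels are arbitrary
        c : 2D confidence map, 1 at valid pixels, 0 at missing ones
        w : 2D non-negative filter (the applicability function)
        """
        # Weight the data by its confidence, then normalize by the
        # convolved confidence so that only valid pixels contribute.
        numer = convolve2d(x * c, w, mode="same")
        denom = convolve2d(c, w, mode="same")
        out = numer / (denom + eps)
        # Propagated confidence: the amount of valid support each output
        # pixel received, normalized by the total filter mass.
        c_out = denom / w.sum()
        return out, c_out

Stacking such layers, with c_out feeding the next layer's confidence input, yields the cascades described above.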

As a second contribution, we investigated the fusion of a normalized convolution network with standard CNNs operating on RGB images. We study different fusion schemes and provide a thorough analysis of the different components of the network. By employing our best fusion strategy, we achieve state-of-the-art results on guided depth completion using a remarkably small network.
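As an illustration of the kind of scheme studied, a late-fusion variant might look as follows in PyTorch (a hypothetical sketch, not the thesis architecture; module names and channel counts are invented):

    import torch
    import torch.nn as nn

    class LateFusion(nn.Module):
        """Fuse a dense depth estimate (and its confidence) with RGB features."""

        def __init__(self, depth_net, rgb_net, rgb_channels=32):
            super().__init__()
            self.depth_net = depth_net  # normalized convolution stream
            self.rgb_net = rgb_net      # standard CNN stream
            # 1 depth channel + 1 confidence channel + RGB feature channels
            self.fuse = nn.Conv2d(2 + rgb_channels, 1, kernel_size=3, padding=1)

        def forward(self, sparse_depth, confidence, rgb):
            depth, c_out = self.depth_net(sparse_depth, confidence)
            rgb_feats = self.rgb_net(rgb)
            # Concatenate along the channel dimension and predict dense depth.
            return self.fuse(torch.cat([depth, c_out, rgb_feats], dim=1))

Early fusion would instead concatenate the raw inputs before the first layer; the thesis compares several such placements.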

Thirdly, to provide a statistical interpretation of confidences, we derive a probabilistic framework for normalized convolutional neural networks. This framework estimates the input confidence in a self-supervised manner and propagates it to produce a statistically valid output confidence. Compared with existing approaches for uncertainty estimation in CNNs, such as Bayesian Deep Learning, our probabilistic framework provides a higher-quality measure of uncertainty at a significantly lower computational cost.
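One way to see why confidences admit a statistical reading (an illustrative derivation, not the exact formulation in the thesis): if each valid input $x_i$ is modeled as a noisy observation of a common underlying value with variance $\sigma_i^2$, and we identify the confidence with the precision, $c_i = 1/\sigma_i^2$, then the confidence-weighted average computed by normalized convolution is the maximum-likelihood estimate under Gaussian noise:

\[
\hat{z} = \frac{\sum_i c_i\, x_i}{\sum_i c_i},
\qquad
\operatorname{Var}(\hat{z}) = \frac{1}{\sum_i c_i} .
\]

The propagated quantity $\sum_i c_i$ is then exactly the inverse variance of the estimate, i.e., a statistically meaningful output confidence.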

Finally, we employ our framework for a common operation in CNNs, namely upsampling. We formulate upsampling as a sparse problem and use normalized convolutional neural networks to solve it. In comparison to existing approaches, our proposed upsampler is structure-aware while being lightweight. We test it with various optical flow estimation networks and show that it consistently improves the results. When integrated with a recent optical flow network, it sets a new state-of-the-art on the most challenging optical flow dataset.
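The reformulation is simple to sketch (hypothetical names, building on the normalized_convolution sketch above): the coarse prediction is scattered onto the full-resolution grid, all other pixels are marked as missing, and normalized convolution fills the gaps:

    import numpy as np

    def as_sparse_upsampling(coarse, scale):
        """Recast a coarse prediction as a sparse full-resolution problem."""
        h, w = coarse.shape
        dense = np.zeros((h * scale, w * scale), dtype=coarse.dtype)
        conf = np.zeros_like(dense)
        # Place each coarse value at its corresponding full-resolution site;
        # everything in between has zero confidence, i.e., is missing.
        dense[::scale, ::scale] = coarse
        conf[::scale, ::scale] = 1.0
        return dense, conf

    # dense, conf = as_sparse_upsampling(coarse_flow_u, scale=4)
    # flow_full, _ = normalized_convolution(dense, conf, w)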

Abstract [sv]

Early computer vision algorithms operated on dense 2D images recorded in grayscale or with color cameras. These are passive image sensors which, under favorable lighting conditions, provide a limited scene representation based only on light flow. These limitations hampered the development of the many computer vision algorithms that require information about the scene structure under varying lighting conditions. The development of active sensors such as Time-of-Flight (ToF) cameras helped alleviate these limitations. However, these instead gave rise to many new challenges, such as processing sparse data caused by multi-path interference and occlusion.

Attempts have been made to tackle these challenges by improving the acquisition process of ToF cameras or by post-processing their data. Previously proposed methods have, however, been sensor- or even model-specific, requiring each individual sensor to be tuned. An attractive alternative is learning-based methods, where one instead learns the relationship between the sensor data and a refined version of it. A powerful example of learning-based methods is convolutional neural networks (CNNs). These have been extremely successful in computer vision but unfortunately assume dense data and therefore cannot efficiently process the sparse data from ToF sensors.

In this thesis, we propose a new variant of convolutional networks, which we call Normalized Convolutional Neural Networks, that can operate directly on sparse data. First, we construct a differentiable network layer based on normalized convolution that takes sparse data and a confidence map as input. The confidence map contains information about which pixels have measurements and which lack them. The module then interpolates pixels lacking measurements from nearby pixels for which measurements exist. We then propose a criterion for propagating confidence, which allows us to build a cascade of normalized convolution layers corresponding to the cascade of convolution layers in a CNN. We evaluated the method on the unguided scene depth completion problem and achieved state-of-the-art performance with a very small network.

As a second contribution, we investigated the fusion of normalized convolutional networks with conventional CNNs operating on ordinary color images. We examine different ways of fusing the networks and provide a thorough analysis of the different network components. The best fusion method achieves state-of-the-art performance on the guided scene depth completion problem, again with a very small network.

As a third contribution, we attempt to interpret the predictions of the normalized convolutional network statistically. We derive a statistical framework for this purpose in which the normalized convolutional network learns, through self-supervised learning, to estimate confidences and propagate them into a statistically valid output probability. When compared with existing methods for predicting uncertainty in CNNs, for example via Bayesian deep learning, our probabilistic framework provides better estimates at a lower computational cost.

Finally, we attempt to apply our framework to a task often solved with ordinary CNNs, namely upsampling. We formulate the upsampling problem as one of sparse input data and solve it with normalized convolutional networks. Compared with existing methods, the proposed method is both aware of local image structure and lightweight. We test our upsampler with various optical flow networks and show that it consistently improves the results. When we integrate it with a recently proposed optical flow network, we outperform all existing methods for optical flow estimation.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2021, p. 59
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2123
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-175307
DOI: 10.3384/diss.diva-175307
ISBN: 9789179297015 (print)
OAI: oai:DiVA.org:liu-175307
DiVA, id: diva2:1547851
Public defence
2021-06-18, Online through Zoom (contact carina.e.lindstrom@liu.se) and Ada Lovelace, B Building, Campus Valla, Linköping, 13:00 (English)
Funder
Swedish Research Council, 2018-04673
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2021-05-26 Created: 2021-04-28 Last updated: 2021-05-26 Bibliographically approved
List of papers
1. Propagating Confidences through CNNs for Sparse Data Regression
2019 (English). In: British Machine Vision Conference 2018, BMVC 2018. BMVA Press, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

In most computer vision applications, convolutional neural networks (CNNs) operate on dense image data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open problem with numerous applications in autonomous driving, robotics, and surveillance. To tackle this challenging problem, we introduce an algebraically-constrained convolution layer for CNNs with sparse input and demonstrate its capabilities for the scene depth completion task. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. Furthermore, we propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. Comprehensive experiments are performed on the KITTI depth benchmark and the results clearly demonstrate that the proposed approach achieves superior performance while requiring three times fewer parameters than the state-of-the-art methods. Moreover, our approach produces a continuous pixel-wise confidence map enabling information fusion, state inference, and decision support.
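A minimal sketch of such an objective (hypothetical names and weighting term lambda_c; the paper's exact formulation may differ) combines a data term with a reward for confident outputs:

    import torch

    def confidence_aware_loss(pred, target, c_out, valid_mask, lambda_c=0.1):
        """L1 data error on pixels with ground truth, minus a reward for
        high output confidence so the network is not pushed to suppress it."""
        data_err = torch.abs(pred - target)[valid_mask].mean()
        return data_err - lambda_c * c_out.mean()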

Place, publisher, year, edition, pages
BMVA Press, 2019
National Category
Computer Vision and Robotics (Autonomous Systems); Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-149648 (URN)
Conference
The 29th British Machine Vision Conference (BMVC), Northumbria University, Newcastle upon Tyne, England, UK, 3-6 September, 2018
Available from: 2018-07-13 Created: 2018-07-13 Last updated: 2021-05-26 Bibliographically approved
2. Confidence Propagation through CNNs for Guided Sparse Depth Regression
2020 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, Vol. 42, no. 10. Article in journal (Refereed), Published.
Abstract [en]

Generally, convolutional neural networks (CNNs) process data on a regular grid, e.g., data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open research problem with numerous applications in autonomous driving, robotics, and surveillance. In this paper, we propose an algebraically-constrained normalized convolution layer for CNNs with highly sparse input that has a smaller number of network parameters compared to related work. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. We also propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. To integrate structural information, we also investigate fusion strategies to combine depth and RGB information in our normalized convolution network framework. In addition, we introduce the use of output confidence as auxiliary information to improve the results. The capabilities of our normalized convolution network framework are demonstrated for the problem of scene depth completion. Comprehensive experiments are performed on the KITTI-Depth and the NYU-Depth-v2 datasets. The results clearly demonstrate that the proposed approach achieves superior performance while requiring only about 1-5% of the number of parameters compared to the state-of-the-art methods.

Place, publisher, year, edition, pages
IEEE, 2020
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-161086 (URN)
10.1109/TPAMI.2019.2929170 (DOI)
000567471300008 (ISI)
Note

Funding agencies: Vinnova through grant CYCLA; Swedish Research Council [2018-04673]; VR starting grant [2016-05543]

Available from: 2019-10-21 Created: 2019-10-21 Last updated: 2021-12-29
3. Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End
2020 (English). In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2020, p. 12011-12020. Conference paper, Published paper (Refereed).
Abstract [en]

The focus in deep learning research has mostly been on pushing the limits of prediction accuracy. However, this was often achieved at the cost of increased complexity, raising concerns about the interpretability and reliability of deep networks. Recently, increasing attention has been given to untangling the complexity of deep networks and quantifying their uncertainty for different computer vision tasks. In contrast, the task of depth completion has not received enough attention despite the inherently noisy nature of depth sensors. In this work, we thus focus on modeling the uncertainty of depth data in depth completion, starting from the sparse noisy input all the way to the final prediction. We propose a novel approach to identify disturbed measurements in the input by learning an input confidence estimator in a self-supervised manner based on normalized convolutional neural networks (NCNNs). Further, we propose a probabilistic version of NCNNs that produces a statistically meaningful uncertainty measure for the final prediction. When we evaluate our approach on the KITTI dataset for depth completion, we outperform all existing Bayesian Deep Learning approaches in terms of prediction accuracy, quality of the uncertainty measure, and computational efficiency. Moreover, our small network with 670k parameters performs on-par with conventional approaches with millions of parameters. These results give strong evidence that separating the network into parallel uncertainty and prediction streams leads to state-of-the-art performance with accurate uncertainty estimates.
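The input-confidence idea can be sketched as follows (a hypothetical module; the published pNCNN design differs in detail): a small estimator predicts a per-pixel confidence directly from the sparse input and is trained jointly through the completion loss, i.e., without any confidence labels:

    import torch
    import torch.nn as nn

    class InputConfidenceEstimator(nn.Module):
        """Predict per-pixel input confidence from the sparse depth itself."""

        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),  # confidence >= 0
            )

        def forward(self, sparse_depth):
            # Trained end-to-end with the depth completion objective only,
            # which makes the confidence estimation self-supervised.
            return self.net(sparse_depth)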

Place, publisher, year, edition, pages
IEEE, 2020
Series
Conference on Computer Vision and Pattern Recognition (CVPR), ISSN 1063-6919, E-ISSN 2575-7075
Keywords
Uncertainty, Task analysis, Probabilistic logic, Measurement uncertainty, Noise measurement, Convolution, Computer vision
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-169106 (URN)
10.1109/CVPR42600.2020.01203 (DOI)
001309199904086 (ISI)
978-1-7281-7168-5 (ISBN)
978-1-7281-7169-2 (ISBN)
Conference
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Available from: 2020-09-09 Created: 2020-09-09 Last updated: 2024-11-18
4. Normalized Convolution Upsampling for Refined Optical Flow Estimation
2021 (English). In: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SciTePress, 2021, Vol. 5, p. 742-752. Conference paper, Published paper (Refereed).
Abstract [en]

Optical flow is a regression task where convolutional neural networks (CNNs) have led to major breakthroughs. However, this comes with major computational demands due to the use of cost volumes and pyramidal representations. This was mitigated by producing flow predictions at a quarter of the full resolution, which are upsampled using bilinear interpolation at test time. Consequently, fine details are usually lost and post-processing is needed to restore them. We propose the Normalized Convolution UPsampler (NCUP), an efficient joint upsampling approach that produces the full-resolution flow during the training of optical flow CNNs. Our proposed approach formulates the upsampling task as a sparse problem and employs normalized convolutional neural networks to solve it. We evaluate our upsampler against existing joint upsampling approaches when trained end-to-end with a coarse-to-fine optical flow CNN (PWC-Net), and we show that it outperforms all other approaches on the FlyingChairs dataset while having at least one order of magnitude fewer parameters. Moreover, we test our upsampler with a recurrent optical flow CNN (RAFT) and achieve state-of-the-art results on the Sintel benchmark with ~6% error reduction, and on-par results on the KITTI dataset, while having 7.5% fewer parameters (see Figure 1 of the paper). Finally, our upsampler shows better generalization capabilities than RAFT when trained and evaluated on different datasets.

Place, publisher, year, edition, pages
SciTePress, 2021
Series
VISIGRAPP, ISSN 2184-4321
Keywords
Optical Flow Estimation CNNs, Joint Image Upsampling, Normalized Convolution, Sparse CNNs
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-175901 (URN)
10.5220/0010343707420752 (DOI)
000661288200079 (ISI)
9789897584886 (ISBN)
Conference
16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), Online, February 8-10, 2021
Note

Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Research Council [2018-04673]

Available from: 2021-05-26 Created: 2021-05-26 Last updated: 2021-12-29 Bibliographically approved

Open Access in DiVA

fulltext: FULLTEXT01.pdf (application/pdf, 7131 kB)