Eldesokey, Abdelrahman
Publications (6 of 6)
Eldesokey, A., Felsberg, M. & Khan, F. S. (2019). Confidence Propagation through CNNs for Guided Sparse Depth Regression. IEEE Transactions on Pattern Analysis and Machine Intelligence
Confidence Propagation through CNNs for Guided Sparse Depth Regression
2019 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828. Article in journal (Refereed). Published.
Abstract [en]

Generally, convolutional neural networks (CNNs) process data on a regular grid, e.g. data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open research problem with numerous applications in autonomous driving, robotics, and surveillance. In this paper, we propose an algebraically-constrained normalized convolution layer for CNNs with highly sparse input that has a smaller number of network parameters compared to related work. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. We also propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. To integrate structural information, we also investigate fusion strategies to combine depth and RGB information in our normalized convolution network framework. In addition, we introduce the use of output confidence as auxiliary information to improve the results. The capabilities of our normalized convolution network framework are demonstrated for the problem of scene depth completion. Comprehensive experiments are performed on the KITTI-Depth and the NYU-Depth-v2 datasets. The results clearly demonstrate that the proposed approach achieves superior performance while requiring only about 1-5% of the number of parameters compared to the state-of-the-art methods.
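As a rough illustration of the normalized convolution idea summarized in the abstract, the sketch below shows a confidence-weighted convolution with a simple confidence-propagation rule. It is a minimal PyTorch-style example; the softplus non-negativity constraint, the padding choice, and the exact propagation rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a normalized convolution layer
# with confidence propagation, assuming a PyTorch setting.
import torch
import torch.nn.functional as F

def normalized_conv2d(x, conf, weight, eps=1e-8):
    """Convolve sparse data x weighted by its confidence map conf.

    x, conf : (N, C, H, W) tensors; conf is in [0, 1], with 0 marking missing pixels.
    weight  : (C_out, C, kH, kW) filter; softplus keeps it non-negative (assumed choice).
    Returns the densified output and a propagated confidence map.
    """
    w = F.softplus(weight)                        # non-negative applicability weights
    pad = (w.shape[-2] // 2, w.shape[-1] // 2)    # 'same' padding for odd kernel sizes

    num = F.conv2d(x * conf, w, padding=pad)      # confidence-weighted data term
    den = F.conv2d(conf, w, padding=pad)          # accumulated confidence mass
    out = num / (den + eps)                       # normalized convolution output

    # Propagate confidence: normalize the accumulated mass by the total filter
    # mass, so fully supported regions approach confidence 1.
    conf_out = den / (w.sum(dim=(1, 2, 3)).view(1, -1, 1, 1) + eps)
    return out, conf_out
```

Stacking such layers lets the confidence map flow to consecutive layers, which is the propagation mechanism the abstract refers to.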

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-161086 (URN) 10.1109/TPAMI.2019.2929170 (DOI)
Available from: 2019-10-21 Created: 2019-10-21 Last updated: 2019-10-25
Eldesokey, A., Felsberg, M. & Khan, F. S. (2019). Propagating Confidences through CNNs for Sparse Data Regression. In: British Machine Vision Conference 2018, BMVC 2018. Paper presented at The 29th British Machine Vision Conference (BMVC), Northumbria University, Newcastle upon Tyne, England, UK, 3-6 September, 2018. BMVA Press
Propagating Confidences through CNNs for Sparse Data Regression
2019 (English). In: British Machine Vision Conference 2018, BMVC 2018, BMVA Press, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

In most computer vision applications, convolutional neural networks (CNNs) operate on dense image data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open problem with numerous applications in autonomous driving, robotics, and surveillance. To tackle this challenging problem, we introduce an algebraically-constrained convolution layer for CNNs with sparse input and demonstrate its capabilities for the scene depth completion task. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. Furthermore, we propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. Comprehensive experiments are performed on the KITTI depth benchmark and the results clearly demonstrate that the proposed approach achieves superior performance while requiring three times fewer parameters than the state-of-the-art methods. Moreover, our approach produces a continuous pixel-wise confidence map enabling information fusion, state inference, and decision support.
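The objective mentioned above (minimize the data error while maximizing the output confidence) could, under assumed forms and weights, look roughly like the sketch below. The L1 data term, the linear confidence reward, and the weight lam are illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch of a loss that trades off data error against output confidence.
import torch

def depth_completion_loss(pred, conf_out, target, valid_mask, lam=0.1):
    """L1 error on pixels with ground truth, minus a reward for high confidence."""
    data_err = (pred - target).abs()[valid_mask.bool()].mean()
    conf_reward = conf_out.mean()        # confidences are in [0, 1], so this term is bounded
    return data_err - lam * conf_reward  # lam balances the two objectives (assumed value)
```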

Place, publisher, year, edition, pages
BMVA Press, 2019
National Category
Computer Vision and Robotics (Autonomous Systems); Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-149648 (URN)
Conference
The 29th British Machine Vision Conference (BMVC), Northumbria University, Newcastle upon Tyne, England, UK, 3-6 September, 2018
Available from: 2018-07-13 Created: 2018-07-13 Last updated: 2020-02-03. Bibliographically approved.
Kristan, M., Leonardis, A., Matas, J., Felsberg, M., Pflugfelder, R., Zajc, L. C., . . . He, Z. (2018). The Sixth Visual Object Tracking VOT2018 Challenge Results. In: Laura Leal-Taixé and Stefan Roth (Ed.), Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8–14, 2018 Proceedings, Part I. Paper presented at Computer Vision – ECCV 2018 Workshops, Munich, Germany, September 8–14, 2018 (pp. 3-53). Cham: Springer Publishing Company
The Sixth Visual Object Tracking VOT2018 Challenge Results
2018 (English). In: Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8–14, 2018, Proceedings, Part I / [ed] Laura Leal-Taixé and Stefan Roth, Cham: Springer Publishing Company, 2018, p. 3-53. Conference paper, Published paper (Refereed).
Abstract [en]

The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).

Place, publisher, year, edition, pages
Cham: Springer Publishing Company, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 11129
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Identifiers
urn:nbn:se:liu:diva-161343 (URN) 10.1007/978-3-030-11009-3_1 (DOI) 9783030110086 (ISBN) 9783030110093 (ISBN)
Conference
Computer Vision – ECCV 2018 Workshops, Munich, Germany, September 8–14, 2018
Available from: 2019-10-30 Created: 2019-10-30 Last updated: 2020-01-22. Bibliographically approved.
Nyberg, A., Eldesokey, A., Bergström, D. & Gustafsson, D. (2018). Unpaired Thermal to Visible Spectrum Transfer using Adversarial Training. Paper presented at Multimodal Learning and Applications Workshop (MULA) - ECCV 2018 workshop at Munich, Germany.
Unpaired Thermal to Visible Spectrum Transfer using Adversarial Training
2018 (English). Conference paper, Poster (with or without abstract) (Refereed).
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-161252 (URN)
Conference
Multimodal Learning and Applications Workshop (MULA) - ECCV 2018 workshop at Munich, Germany
Available from: 2019-10-24 Created: 2019-10-24 Last updated: 2019-10-24
Eldesokey, A., Felsberg, M. & Khan, F. S. (2017). Ellipse Detection for Visual Cyclists Analysis “In the Wild”. In: Michael Felsberg, Anders Heyden and Norbert Krüger (Ed.), Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I. Paper presented at 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I (pp. 319-331). Springer, 10424
Ellipse Detection for Visual Cyclists Analysis “In the Wild”
2017 (English). In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10424, p. 319-331. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous driving safety is becoming a paramount issue due to the emergence of many autonomous vehicle prototypes. The safety measures ensure that autonomous vehicles are safe to operate among pedestrians, cyclists and conventional vehicles. While safety measures for pedestrians have been widely studied in the literature, little attention has been paid to safety measures for cyclists. Visual cyclist analysis is a challenging problem due to the complex structure and dynamic nature of cyclists. The dynamic model used for cyclist analysis heavily relies on the wheels. In this paper, we investigate the problem of ellipse detection for visual cyclist analysis in the wild. Our first contribution is the introduction of a new challenging annotated dataset for bicycle wheels, collected in a real-world urban environment. Our second contribution is a method that combines reliable arc selection and grouping strategies for ellipse detection. The reliable selection and grouping mechanism leads to robust ellipse detections when combined with the standard least-squares ellipse fitting approach. Our experiments clearly demonstrate that our method provides improved results, both in terms of accuracy and robustness, in challenging urban environment settings.
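For the least-squares ellipse fitting step mentioned in the abstract, a generic sketch is shown below. It fits a general conic to already-selected arc points via SVD and is only an illustration of the standard fitting step, not the authors' arc selection and grouping pipeline.

```python
# Minimal sketch of a least-squares conic (ellipse) fit to grouped arc points.
import numpy as np

def fit_ellipse_lsq(points):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to N >= 6 points of shape (N, 2)."""
    x, y = points[:, 0], points[:, 1]
    # Conic design matrix; the right singular vector with the smallest singular
    # value minimizes ||D @ coeffs|| under the constraint ||coeffs|| = 1.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    coeffs = vt[-1]
    return coeffs / np.linalg.norm(coeffs)
```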

Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10424
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Engineering
Identifiers
urn:nbn:se:liu:diva-145372 (URN) 10.1007/978-3-319-64689-3_26 (DOI) 000432085900026 () 9783319646886 (ISBN) 9783319646893 (ISBN)
Conference
17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I
Note

Funding agencies: VR (EMC2, ELLIIT, starting grant) [2016-05543]; Vinnova (Cykla)

Available from: 2018-02-26 Created: 2018-02-26 Last updated: 2018-10-17. Bibliographically approved.
Felsberg, M., Kristan, M., Matas, J., Leonardis, A., Pflugfelder, R., Häger, G., . . . He, Z. (2016). The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results. In: Hua G., Jégou H. (Ed.), Computer Vision – ECCV 2016 Workshops, ECCV 2016. Paper presented at 14th European Conference on Computer Vision (ECCV) (pp. 824-849). Springer International Publishing AG
The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results
2016 (English). In: Computer Vision – ECCV 2016 Workshops, ECCV 2016 / [ed] Hua G., Jégou H., Springer International Publishing AG, 2016, p. 824-849. Conference paper, Published paper (Refereed).
Abstract [en]

The Thermal Infrared Visual Object Tracking challenge 2016, VOT-TIR2016, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2016 is the second benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2016 challenge is similar to the 2015 challenge, the main difference being the introduction of new, more difficult sequences into the dataset. Furthermore, the VOT-TIR2016 evaluation adopted the improvements regarding overlap calculation introduced in VOT2016. Compared to VOT-TIR2015, a significant general improvement of results has been observed, which partly compensates for the more difficult sequences. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website.
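The overlap calculation mentioned above is, in VOT-style evaluation, an intersection-over-union measure between predicted and ground-truth regions. The sketch below shows the idea for axis-aligned bounding boxes; it is a simplified illustration, not the VOT toolkit's implementation.

```python
# Simplified sketch of region overlap (IoU) for axis-aligned boxes given as (x, y, w, h).
def bbox_overlap(a, b):
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))   # width of the intersection
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))   # height of the intersection
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```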

Place, publisher, year, edition, pages
Springer International Publishing AG, 2016
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9914
Keywords
Performance evaluation; Object tracking; Thermal IR; VOT
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-133773 (URN) 10.1007/978-3-319-48881-3_55 (DOI) 000389501700055 () 978-3-319-48881-3 (ISBN) 978-3-319-48880-6 (ISBN)
Conference
14th European Conference on Computer Vision (ECCV)
Available from: 2017-01-11 Created: 2017-01-09 Last updated: 2018-10-15