Mask-Guided Attention Network for Occluded Pedestrian Detection
Tianjin University, People's Republic of China.
Inception Institute of Artificial Intelligence, United Arab Emirates.
2019 (English). In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), IEEE Computer Society, 2019, p. 4966-4974. Conference paper, published paper (refereed).
Abstract [en]

Pedestrian detection relying on deep convolutional neural networks has made significant progress. Though promising results have been achieved on standard pedestrians, performance on heavily occluded pedestrians remains far from satisfactory. The main culprits are intra-class occlusions involving other pedestrians and inter-class occlusions caused by other objects, such as cars and bicycles, which together produce a multitude of occlusion patterns. We propose an approach for occluded pedestrian detection with the following contributions. First, we introduce a novel mask-guided attention network that fits naturally into popular pedestrian detection pipelines. Our attention network emphasizes visible pedestrian regions while suppressing the occluded ones by modulating full-body features. Second, we empirically demonstrate that coarse-level segmentation annotations provide a reasonable approximation to their dense pixel-wise counterparts. Experiments are performed on the CityPersons and Caltech datasets, and our approach sets a new state of the art on both. Compared to the best reported results [31] on the heavily occluded (HO) pedestrian set of the CityPersons test set, our approach obtains an absolute gain of 9.5% in log-average miss rate. Further, on the HO pedestrian set of the Caltech dataset, our method achieves an absolute gain of 5.0% in log-average miss rate, compared to the best reported results [13]. Code and models are available at: https://github.com/Leotju/MGAN.
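To make the modulation idea concrete, below is a minimal PyTorch sketch in the spirit of the abstract: a small convolutional head predicts a spatial visibility mask from RoI-pooled full-body features, and the mask re-weights those features element-wise. This is not the authors' implementation (see the linked repository for that); the module name, layer sizes, and the coarse box-to-mask helper are illustrative assumptions.

```python
# Minimal sketch, NOT the official MGAN code: names and layer sizes are assumptions.
import torch
import torch.nn as nn


class MaskGuidedAttention(nn.Module):
    """Predict a spatial visibility mask from RoI features and use it to
    modulate (re-weight) the full-body features, emphasizing visible
    pedestrian regions and suppressing occluded ones."""

    def __init__(self, in_channels: int = 512):
        super().__init__()
        # Small conv head estimating per-location visibility logits.
        self.mask_head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),  # single-channel spatial mask
        )

    def forward(self, roi_feats: torch.Tensor):
        # roi_feats: (N, C, H, W) RoI-pooled full-body features.
        mask_logits = self.mask_head(roi_feats)   # (N, 1, H, W)
        attention = torch.sigmoid(mask_logits)    # visibility scores in [0, 1]
        modulated = roi_feats * attention         # element-wise modulation
        return modulated, mask_logits


def coarse_mask_target(full_box, visible_box, grid: int = 7) -> torch.Tensor:
    """Rasterize a coarse binary target on the RoI grid: cells covered by the
    visible box are 1, the rest 0. Illustrates how box-level ("coarse")
    annotations can approximate dense pixel-wise segmentation masks."""
    x1, y1, x2, y2 = full_box
    vx1, vy1, vx2, vy2 = visible_box
    sx, sy = grid / (x2 - x1), grid / (y2 - y1)  # box -> grid scale factors
    gx1, gy1 = int((vx1 - x1) * sx), int((vy1 - y1) * sy)
    gx2, gy2 = int((vx2 - x1) * sx), int((vy2 - y1) * sy)
    target = torch.zeros(grid, grid)
    target[max(gy1, 0):min(gy2 + 1, grid), max(gx1, 0):min(gx2 + 1, grid)] = 1.0
    return target


# Usage sketch: modulate 7x7 RoI features for 8 proposals and build one target.
feats = torch.randn(8, 512, 7, 7)
modulated, logits = MaskGuidedAttention(512)(feats)
target = coarse_mask_target((0.0, 0.0, 64.0, 128.0), (16.0, 0.0, 48.0, 96.0))
print(modulated.shape, logits.shape, target.shape)
# torch.Size([8, 512, 7, 7]) torch.Size([8, 1, 7, 7]) torch.Size([7, 7])
```

The coarse_mask_target helper shows one way box-level visible-region annotations could stand in for dense pixel-wise masks when supervising the predicted mask, e.g. with a pixel-wise binary cross-entropy loss on mask_logits.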

Place, publisher, year, edition, pages
IEEE Computer Society, 2019. p. 4966-4974
Series
IEEE International Conference on Computer Vision, ISSN 1550-5499
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-168114
DOI: 10.1109/ICCV.2019.00507
ISI: 000531438105014
ISBN: 978-1-7281-4803-8 (print)
OAI: oai:DiVA.org:liu-168114
DiVA, id: diva2:1458513
Conference
IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, October 27 to November 2, 2019
Note

Funding agencies: National Natural Science Foundation of China [61632018]

Available from: 2020-08-17 Created: 2020-08-17 Last updated: 2020-08-17

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Khan, Fahad Shahbaz

Search in DiVA

By author/editor
Khan, Fahad Shahbaz
By organisation
Computer Vision, Faculty of Science & Engineering
Computer Vision and Robotics (Autonomous Systems)
