Spatio-temporal Relation Modeling for Few-shot Action Recognition
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Inception Institute of Artificial Intelligence, United Arab Emirates.
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates; Australian National University, Australia.
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates; Aalto University, Finland.
2022 (English). In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), IEEE Computer Society, 2022, p. 19926-19935. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel few-shot action recognition framework, STRM, which enhances class-specific feature discriminability while simultaneously learning higher-order temporal representations. The focus of our approach is a novel spatio-temporal enrichment module that aggregates spatial and temporal contexts with dedicated local patch-level and global frame-level feature enrichment sub-modules. Local patch-level enrichment captures the appearance-based characteristics of actions. On the other hand, global frame-level enrichment explicitly encodes the broad temporal context, thereby capturing the relevant object features over time. The resulting spatio-temporally enriched representations are then utilized to learn the relational matching between query and support action sub-sequences. We further introduce a query-class similarity classifier on the patch-level enriched features to enhance class-specific feature discriminability by reinforcing the feature learning at different stages in the proposed framework. Experiments are performed on four few-shot action recognition benchmarks: Kinetics, SSv2, HMDB51 and UCF101. Our extensive ablation study reveals the benefits of the proposed contributions. Furthermore, our approach sets a new state-of-the-art on all four benchmarks. On the challenging SSv2 benchmark, our approach achieves an absolute gain of 3.5% in classification accuracy, as compared to the best existing method in the literature.
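The two enrichment stages described in the abstract can be pictured with a small PyTorch sketch. This is only an illustrative approximation of the patch-level and frame-level enrichment idea; all module names, layer choices, and tensor sizes below are assumptions made for exposition, not the authors' released implementation.

# Hedged sketch (not the authors' code): illustrative patch-level and frame-level
# enrichment stages, loosely following the description in the abstract.
import torch
import torch.nn as nn

class PatchLevelEnrichment(nn.Module):
    """Enrich local patch features within each frame via self-attention (assumed design)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                    # x: (batch*frames, patches, dim)
        attended, _ = self.attn(x, x, x)     # attend over patches inside each frame
        return self.norm(x + attended)       # residual keeps the original appearance cues

class FrameLevelEnrichment(nn.Module):
    """Mix information across frames to encode broad temporal context (assumed design)."""
    def __init__(self, dim, num_frames):
        super().__init__()
        self.temporal_mix = nn.Sequential(
            nn.Linear(num_frames, num_frames * 2),
            nn.GELU(),
            nn.Linear(num_frames * 2, num_frames),
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                    # x: (batch, frames, dim)
        mixed = self.temporal_mix(x.transpose(1, 2)).transpose(1, 2)  # mix along the frame axis
        return self.norm(x + mixed)

# Toy usage with hypothetical sizes: 2 videos, 8 frames, 16 patches, 256-dim features.
B, T, P, D = 2, 8, 16, 256
patch_feats = torch.randn(B * T, P, D)
patch_enriched = PatchLevelEnrichment(D)(patch_feats)
frame_feats = patch_enriched.mean(dim=1).view(B, T, D)   # pool patches into frame features
frame_enriched = FrameLevelEnrichment(D, T)(frame_feats)
print(frame_enriched.shape)                               # torch.Size([2, 8, 256])

In this sketch, the patch-level stage attends over patches inside each frame to sharpen appearance cues, while the frame-level stage mixes features across the temporal axis to inject broad temporal context, mirroring the roles the abstract assigns to the two sub-modules.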

Place, publisher, year, edition, pages
IEEE Computer Society, 2022, p. 19926-19935
Series
IEEE Conference on Computer Vision and Pattern Recognition, ISSN 1063-6919
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-190967
DOI: 10.1109/CVPR52688.2022.01933
ISI: 000870783005074
ISBN: 9781665469463 (electronic)
ISBN: 9781665469470 (print)
OAI: oai:DiVA.org:liu-190967
DiVA, id: diva2:1725183
Conference
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, June 18-24, 2022
Note

Funding agencies: VR starting grant [2016-05543]; Swedish Research Council [2018-05973]

Available from: 2023-01-10. Created: 2023-01-10. Last updated: 2023-01-10.

Open Access in DiVA

No full text in DiVA
