CONDA: Condensed Deep Association Learning for Co-salient Object Detection
Northwestern Polytechnical University, People's Republic of China.
Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates.
Northwestern Polytechnical University, People's Republic of China.
Xi'an Jiaotong University, People's Republic of China.
2025 (English). In: Computer Vision - ECCV 2024, Part L, Springer International Publishing AG, 2025, Vol. 15108, p. 287-303. Conference paper, Published paper (Refereed)
Abstract [en]

Inter-image association modeling is crucial for co-salient object detection. Despite satisfactory performance, previous methods are still limited in their inter-image association modeling, because most of them focus on optimizing image features under the guidance of heuristically calculated raw inter-image associations. They rely directly on raw associations, which are unreliable in complex scenarios, and their image-feature optimization does not model inter-image associations explicitly. To alleviate these limitations, this paper proposes a deep association learning strategy that deploys deep networks on raw associations to explicitly transform them into deep association features. Specifically, we first create hyperassociations to collect dense pixel-pair-wise raw associations and then deploy deep aggregation networks on them. For this purpose, we design a progressive association generation module that additionally enhances the hyperassociation calculation. More importantly, we propose a correspondence-induced association condensation module that introduces a pretext task, i.e., semantic correspondence estimation, to condense the hyperassociations, reducing the computational burden and eliminating noise. We also design an object-aware cycle consistency loss for high-quality correspondence estimation. Experimental results on three benchmark datasets demonstrate the remarkable effectiveness of our proposed method under various training settings. The code is available at: https://github.com/dragonlee258079/CONDA.
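The abstract describes the method only at a high level; the actual implementation is in the linked repository. As a rough illustration of the core idea, a hyperassociation amounts to a dense pixel-pair-wise raw association computed between every pair of images in a group, which is then condensed to a small set of strong candidates per pixel. The sketch below (PyTorch; all names hypothetical and not taken from the authors' code) assumes cosine similarity as the raw association measure:

    import torch
    import torch.nn.functional as F

    def raw_hyperassociation(feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) backbone features for the N images of one group.
        # Returns (N, N, HW, HW): raw association between every cross-image pixel pair.
        f = F.normalize(feats.flatten(2), dim=1)    # (N, C, HW); unit-norm channel vectors
        return torch.einsum('icp,jcq->ijpq', f, f)  # cosine similarity per pixel pair

    def condense(assoc: torch.Tensor, k: int = 32) -> torch.Tensor:
        # Crude stand-in for the paper's correspondence-induced condensation:
        # keep only the k strongest associations per query pixel.
        vals, _ = assoc.topk(k, dim=-1)             # (N, N, HW, k)
        return vals

    feats = torch.randn(5, 64, 16, 16)              # toy group: 5 images, 16x16 feature maps
    condensed = condense(raw_hyperassociation(feats))
    print(condensed.shape)                          # torch.Size([5, 5, 256, 32])

In the paper, this naive top-k step is replaced by learned modules (progressive association generation, the semantic-correspondence pretext task, and the object-aware cycle consistency loss); the sketch only fixes the tensor shapes involved.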

Place, publisher, year, edition, pages
Springer International Publishing AG, 2025. Vol. 15108, p. 287-303
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords [en]
Co-salient Object Detection; Deep Association Learning
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-210442
DOI: 10.1007/978-3-031-72973-7_17
ISI: 001353694700017
Scopus ID: 2-s2.0-85209929642
ISBN: 9783031729720 (print)
ISBN: 9783031729737 (electronic)
OAI: oai:DiVA.org:liu-210442
DiVA, id: diva2:1921484
Conference
18th European Conference on Computer Vision (ECCV), Milan, Italy, September 29 - October 4, 2024
Note

Funding agencies: Key-Area Research and Development Program of Guangdong Province [2021B0101200001]; National Natural Science Foundation of China [62136007, 62036011, U20B2065, 6202781, 62036005]; Key R&D Program of Shaanxi Province [2021ZDLGY01-08, WIS P008, P009]

Available from: 2024-12-16 Created: 2024-12-16 Last updated: 2025-02-07

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Khan, Fahad
By organisation
Computer Vision, Faculty of Science & Engineering
Computer graphics and computer vision

