Exploring the applicability of Graph Attention Networks in computer vision and their hardware acceleration
Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Institutionen för systemteknik, Elektronik och datorteknik. ORCID id: 0000-0003-4870-2768
Linköpings universitet, Institutionen för systemteknik, Elektronik och datorteknik. Linköpings universitet, Tekniska fakulteten. ORCID id: 0000-0002-5153-5481
2025 (English). In: AccML papers 2025, 2025, article id 3. Conference paper, published paper (Other academic)
Abstract [en]

Edge detection is a fundamental task in computer vision, crucial for object recognition, segmentation, and scene understanding. Traditional methods often fail to capture complex edge structures because they cannot model intricate relationships between pixels. Graph Neural Networks (GNNs), particularly Graph Attention Networks (GATs), have shown promise in addressing these limitations by leveraging graph structures to model pixel relationships. This paper explores the applicability of Graph Attention Networks to edge detection, highlighting their advantages over ordinary Graph Convolutional Networks (GCNs) through rigorous mathematical reasoning. We integrate GATs into an edge detection framework based on a U-Net encoder-decoder architecture and provide detailed theoretical and implementation insights. Furthermore, we discuss the hardware acceleration of GCNs and GATs with a reconfigurable dataflow architecture integrated into the PyTorch framework. The experimental results demonstrate the superior performance of GAT-based edge detection and the acceleration achievable on reconfigurable edge platforms with limited resources. The key advantage of our proposed method is its hardware-friendly design, making it highly suitable for FPGA acceleration while also enabling efficient optimization through pruning of the network.
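The attention mechanism that distinguishes GATs from plain GCNs can be illustrated in a few lines. The following is a minimal NumPy sketch of a single-head GAT layer over a dense adjacency matrix, in the style of the standard GAT formulation; it is an illustration only, not the implementation described in the paper:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, A, W, a):
    """One single-head Graph Attention layer on a dense adjacency matrix.

    H: (N, F) node features, A: (N, N) adjacency with self-loops,
    W: (F, F') shared linear weights, a: (2*F',) attention vector.
    """
    Z = H @ W                       # project node features: (N, F')
    Fp = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]), split into source/target parts
    src = Z @ a[:Fp]                # contribution of node i, shape (N,)
    dst = Z @ a[Fp:]                # contribution of node j, shape (N,)
    e = leaky_relu(src[:, None] + dst[None, :])   # (N, N) attention logits
    e = np.where(A > 0, e, -np.inf)               # mask out non-neighbours
    # row-wise softmax over each node's neighbourhood
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                # attention-weighted aggregation
```

A useful sanity check: with the adjacency reduced to self-loops only (A = I), every node attends solely to itself and the layer degenerates to the plain linear projection H @ W, making the role of the learned attention coefficients explicit. A GCN, by contrast, would weight neighbours by fixed degree-normalized constants rather than learned attention.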

Place, publisher, year, edition, pages
2025. article id 3
Keywords [en]
edge detection, GNN, Graph Attention Networks, encoder-decoder structure, U-Net
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-220183 OAI: oai:DiVA.org:liu-220183 DiVA id: diva2:2023448
Conference
7th Workshop on Accelerated Machine Learning (AccML) at the HiPEAC 2025 Conference, 21 January 2025, Barcelona, Spain
Available from: 2025-12-19 Created: 2025-12-19 Last updated: 2025-12-19

Open Access in DiVA

fulltext (871 kB) 66 downloads
File information
File: FULLTEXT01.pdf File size: 871 kB Checksum: SHA-512
4caf91d057343b0a8dc7655ca0bca0bfbb9ea54b6c69a4487363989dfa72126fbb9127d3660e2e70508f2eb8e0571c3eb052c34c784a919e7b38f11c90fad2ba
Type: fulltext Mimetype: application/pdf

Other links

Fulltext from conference

Person

Khalili Sadaghiani, Abdolvahab; Nunez-Yanez, Jose
