Sensordatafusion av IR- och radarbilder
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
2004 (Swedish). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis.
Alternative title
Sensor data fusion of IR- and radar images (English)
Abstract [sv]

This report describes and evaluates a number of algorithms for multi-sensor data fusion of radar and IR/TV data at the raw-data level. Raw-data fusion means that the fusion takes place before attribute or object extraction. Attribute extraction can cause information to be lost that could otherwise have improved the fusion. If the fusion is performed at the raw-data level, more information is available, which could lead to improved attribute extraction in a later step. Two approaches are presented. The first method projects the radar image into the IR view and vice versa; the fusion is then performed on the pairs of images sharing the same dimensions. The second method fuses the two original images into a volume, spanned by the three dimensions represented in the source images. The method is also extended to exploit stereo vision. The results show that exploiting stereo vision can be worthwhile, since the extra information facilitates the fusion and yields a more general solution to the problem.

Abstract [en]

This thesis describes and evaluates a number of algorithms for multi-sensor fusion of radar and IR/TV data. The fusion is performed at the raw-data level, that is, prior to attribute extraction. The idea is that less information will be lost compared to attribute-level fusion. Two methods are presented. The first method transforms the radar image into the IR view and vice versa; the images sharing the same dimensions are then fused together. The second method fuses the original images into a three-dimensional volume. Another version, which uses stereo vision, is also presented. The results show that stereo vision can be used with good performance and gives a more general solution to the problem.
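The first method in the abstract can be illustrated with a minimal sketch: project the radar image into the IR camera's view, then fuse the two same-sized images pixel-wise. This is a hypothetical illustration only, not the thesis implementation: a simple 2-D affine map stands in for the real radar-to-IR sensor geometry, and a weighted average stands in for whichever fusion rule the thesis evaluates.

```python
import numpy as np

def project(image, affine, out_shape):
    """Resample `image` into a target view with an inverse affine map
    (nearest-neighbour). `affine` is 2x3 and maps target (x, y, 1)
    coordinates to source (x, y) coordinates."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = affine @ coords                      # 2 x (h*w) source coordinates
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(h * w)
    out[valid] = image[sy[valid], sx[valid]]   # pixels mapping outside stay 0
    return out.reshape(h, w)

def fuse(ir, radar_in_ir_view, w_ir=0.5):
    """Pixel-wise weighted average -- one simple choice of fusion rule."""
    return w_ir * ir + (1.0 - w_ir) * radar_in_ir_view

# Toy data standing in for real sensor images.
ir = np.random.rand(64, 64)
radar = np.random.rand(64, 64)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])        # assume perfectly aligned sensors
fused = fuse(ir, project(radar, identity, ir.shape))
```

With the identity map the projection is a no-op, so `fused` is simply the mean of the two images; a real system would derive the mapping from the sensors' calibration and geometry, which is exactly the information the raw-data-level approach tries to preserve.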

Place, publisher, year, edition, pages
Institutionen för systemteknik, 2004. 70 p.
Series
LiTH-ISY-Ex, 3475
Keyword [en]
Technology, Sensor data fusion, IR, radar, raw data fusion, stereo vision
Keyword [sv]
TEKNIKVETENSKAP, Sensordatafusion, IR, radar, rådatafusion, stereoseende
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:liu:diva-2193
OAI: oai:DiVA.org:liu-2193
DiVA: diva2:19523
Subject / course
Computer Vision Laboratory
Uppsok
Technology
Available from: 2004-03-12 Created: 2004-03-12 Last updated: 2012-07-02

Open Access in DiVA

fulltext (2894 kB), 939 downloads
File information
File name: FULLTEXT01.pdf
File size: 2894 kB
Checksum SHA-1: cb06ec797bfecac7b019973f4135f2e174fbf0ef3c6722162890fc5d9c886590ea6b947f
Type: fulltext
Mimetype: application/pdf

By organisation
Computer Vision, The Institute of Technology
Engineering and Technology

Total: 939 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 404 hits