Search for publications in DiVA (liu.se)
1 - 7 of 7
  • 1.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Horney, Tobias
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
    Ground Target Recognition in a Query-Based Multi-Sensor Information System, 2006. Report (Other academic)
    Abstract [en]

    We present a system covering the complete process for automatic ground target recognition, from sensor data to the user interface, i.e., from low-level image processing to high-level situation analysis. The system is based on a query language and a query processor, and includes target detection, target recognition, data fusion, presentation and situation analysis. This paper focuses on target recognition and its interaction with the query processor. The target recognition is executed in sensor nodes, each containing a sensor and the corresponding signal/image processing algorithms. New sensors and algorithms are easily added to the system. The processing of sensor data is performed in two steps: attribute estimation and matching. First, several attributes, like orientation and dimensions, are estimated from the (unknown but detected) targets. These estimates are used to select the models of interest in a matching step, where the target is matched with a number of target models. Several methods and sensor data types are used in both steps, and data is fused after each step. Experiments have been performed using sensor data from laser radar, thermal and visual cameras. Promising results are reported, demonstrating the capabilities of the target recognition algorithms, the advantages of the two-level data fusion and the query-based system.
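
To make the two-step processing concrete, the sketch below mimics the flow described in the abstract: attributes are estimated per sensor node, fused, used to select candidate target models, and the target is then matched against those models. It is an illustrative toy under assumed interfaces, not code from the report; the model table, the attribute names, and the Gaussian-style scoring are all invented.

```python
import numpy as np

# Hypothetical model library; values are invented for illustration.
TARGET_MODELS = {
    "tank":  {"length": 7.0, "width": 3.5},
    "truck": {"length": 9.0, "width": 2.5},
}

def estimate_attributes(detection):
    """Step 1: estimate attributes (e.g. dimensions) from one sensor's detection."""
    return {"length": detection["length"], "width": detection["width"]}

def fuse_attributes(per_sensor_estimates):
    """Fuse attribute estimates from several sensor nodes (here: a plain mean)."""
    keys = per_sensor_estimates[0].keys()
    return {k: float(np.mean([e[k] for e in per_sensor_estimates])) for k in keys}

def select_models(attributes, tolerance=2.0):
    """Use the fused attributes to keep only the plausible target models."""
    return {name: m for name, m in TARGET_MODELS.items()
            if abs(m["length"] - attributes["length"]) < tolerance}

def match(attributes, candidates):
    """Step 2: match against candidate models; return a normalized score per model."""
    scores = {name: np.exp(-(m["length"] - attributes["length"]) ** 2
                           - (m["width"] - attributes["width"]) ** 2)
              for name, m in candidates.items()}
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}

# One detected target seen by two sensor nodes (e.g. laser radar and IR):
estimates = [estimate_attributes({"length": 7.2, "width": 3.4}),
             estimate_attributes({"length": 6.9, "width": 3.6})]
fused = fuse_attributes(estimates)
print(match(fused, select_models(fused)))
```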

  • 2.
    Horney, Tobias
    et al.
    Swedish Defence Research Agency, Sweden.
    Ahlberg, Jörgen
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Silvervarg, Karin
    Swedish Defence Research Agency, Sweden.
    Fransson, Jörgen
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Lantz, Fredrik
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
    An information system for target recognition, 2004. In: Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications (Volume 5434) / [ed] Belur V. Dasarathy, SPIE - International Society for Optical Engineering, 2004, p. 163-175. Conference paper (Refereed)
    Abstract [en]

    We present an approach to a general decision support system. The aim is to cover the complete process for automatic target recognition, from sensor data to the user interface. The approach is based on a query-based information system, and includes tasks such as feature extraction from sensor data, data association, data fusion and situation analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude. The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown but detected) target. The attributes include orientation, size, speed, temperature, etc. These estimates are used to select the models of interest in the matching step, where the target is matched with a number of target models, returning a likelihood value for each model. Several methods and sensor data types are used in both steps. The user communicates with the system via a visual user interface, where, for instance, the user can mark an area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query language developed for this type of application, and an ontological system decides which algorithms should be invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers are given back to the user. The user does not need any detailed technical knowledge about the sensors (or about which sensors are available), and new sensors and algorithms can easily be plugged into the system.
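
The query-driven flow described above (user query, selection of the sensor nodes and algorithms that can answer it, fusion of their answers) can be sketched roughly as follows. This is not the ΣQL query processor or the ontological system from the paper, only a minimal illustration of the dispatch-and-fuse pattern; every class, field, and sensor name is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Query:
    area: tuple          # assumed format: (lat_min, lat_max, lon_min, lon_max)
    target_class: str    # e.g. "hostile_vehicle"

class SensorNode:
    def __init__(self, name, supported_classes):
        self.name = name
        self.supported_classes = supported_classes

    def process(self, query):
        # In the real system this would run the node's detection/recognition
        # algorithms on its own sensor data; here we return a dummy report.
        return {"sensor": self.name, "detections": []}

class QueryProcessor:
    def __init__(self, nodes):
        self.nodes = nodes

    def execute(self, query):
        # "Ontological" step stand-in: pick the nodes whose algorithms can
        # answer the query; the user never needs to know which sensors exist.
        relevant = [n for n in self.nodes
                    if query.target_class in n.supported_classes]
        reports = [n.process(query) for n in relevant]
        return self.fuse(reports)

    @staticmethod
    def fuse(reports):
        # Placeholder fusion: concatenate detections from all nodes.
        return [d for r in reports for d in r["detections"]]

processor = QueryProcessor([
    SensorNode("laser_radar", {"hostile_vehicle"}),
    SensorNode("ir_camera", {"hostile_vehicle", "person"}),
])
print(processor.execute(Query(area=(58.0, 58.5, 15.0, 15.5),
                              target_class="hostile_vehicle")))
```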

  • 3.
    Nygårds, Jonas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mechanical Engineering, Fluid and Mechanical Engineering Systems.
    Ulvklo, Morgan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering.
    Skoglar, P.
    Högström, T.
    Navigation aided image processing in UAV surveillance. Preliminary results and design of an airborne experimental system, 2004. In: First Workshop on Integration of Vision and Inertial Sensors, 2003, Coimbra, Portugal, 2004. Conference paper (Refereed)

  • 4.
    Ulvklo, Morgan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Texture Analysis, 1995. In: Signal Processing for Computer Vision / [ed] Gösta H. Granlund and Hans Knutsson, Dordrecht: Kluwer, 1995, p. 399-418. Chapter in book (Refereed)
    Abstract [en]

    This chapter deals with texture analysis, an important application of the methods described in earlier chapters. It introduces ideas from preattentive vision, which gives clues for the extraction of texture primitives. There is also a discussion on how to handle features whose significance varies with spatial position.
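
As a minimal illustration of a texture feature of the kind the chapter discusses (not an example taken from it), the snippet below computes local variance in a sliding window as a simple texture-energy measure; the window size and the test image are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=9):
    """Local variance E[x^2] - E[x]^2 over a size x size neighbourhood."""
    mean = uniform_filter(image, size)
    mean_sq = uniform_filter(image ** 2, size)
    return mean_sq - mean ** 2

rng = np.random.default_rng(0)
smooth = np.zeros((64, 64))
textured = rng.normal(0.0, 1.0, (64, 64))
image = np.hstack([smooth, textured])        # left half flat, right half textured
energy = local_variance(image)
print(energy[:, :64].mean(), energy[:, 64:].mean())   # low vs. high texture energy
```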

  • 5.
    Ulvklo, Morgan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Adaptive Reconstruction Using Multiple Views, 1998. In: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, Tucson, Arizona, USA, 1998, p. 47-52. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a novel algorithm for extracting the optical flow obtained from a translating camera in a static scene. Occlusion between objects is incorporated as a natural component in a scene reconstruction strategy by first evaluating and reconstructing the foreground and then excluding its influence on the partly occluded objects behind it.
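
The foreground-first idea, reconstructing and then excluding the foreground so that the partly occluded background can be assembled from the frames in which each part was visible, is illustrated by the toy 1-D example below. It is an invented construction under the paper's static-scene assumption (here the occluder simply moves across a fixed background rather than modelling camera translation), not the paper's algorithm.

```python
import numpy as np

# Toy reconstruction of a partly occluded background from a sequence:
# segment the foreground first, mark the samples it covers as unreliable,
# and fill each background position from the frames where it was visible.
# The scene and the 50.0 threshold are arbitrary choices.

background = np.arange(20, dtype=float)           # "true" background signal
frames, visible = [], []
for shift in range(3):
    frame = background.copy()
    frame[5 + shift:10 + shift] = 100.0           # bright foreground object
    frames.append(frame)
    visible.append(frame < 50.0)                  # certainty: background seen here

frames = np.array(frames)
certainty = np.array(visible, dtype=float)

# Exclude the foreground's influence: average only over frames where each
# position was visible. Positions occluded in every frame stay unknown.
seen = certainty.sum(axis=0) > 0
recovered = (frames * certainty).sum(axis=0) / np.maximum(certainty.sum(axis=0), 1e-9)
print(np.allclose(recovered[seen], background[seen]))   # True
```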

  • 6.
    Ulvklo, Morgan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Depth Segmentation and Occluded Scene Reconstruction using Ego-motion, 1998. In: Proceedings of the SPIE Conference on Visual Information Processing, Orlando, Florida, USA, 1998, p. 112-123. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a signal processing strategy for depth segmentation and scene reconstruction that incorporates occlusion as a natural component. The work aims to maximize the use of connectivity in the temporal domain, under the conditions that the scene is static and the camera motion is known. An object behind the foreground is reconstructed using the fact that different parts of the object have been seen in different images in the sequence. One of the main ideas in this paper is the use of a spatio-temporal certainty volume c(x) with the same dimensions as the input spatio-temporal volume s(x), using c(x) as a 'blackboard' for rejecting already segmented image structures. The segmentation starts by searching for image structures in the foreground, eliminating their occluding influence, and then proceeding. Normalized convolution, which is a weighted least mean square technique for filtering data with varying spatial reliability, is used for all filtering. High spatial resolution near object borders is achieved, and only neighboring structures with similar depth support each other.
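
The certainty-volume idea combined with normalized convolution can be illustrated in its simplest zeroth-order (normalized averaging) form: filter the certainty-weighted signal and divide by the filtered certainty. The 1-D sketch below uses an assumed Gaussian applicability function and a scalar signal; it is a simplification, not the paper's spatio-temporal formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution(signal, certainty, sigma=2.0):
    """Zeroth-order normalized convolution: unreliable samples do not
    contaminate their neighbours, they are interpolated from reliable ones."""
    num = gaussian_filter(signal * certainty, sigma)
    den = gaussian_filter(certainty, sigma)
    return num / np.maximum(den, 1e-12)

s = np.sin(np.linspace(0, 4 * np.pi, 200))        # scalar 1-D test signal
c = np.ones_like(s)
c[60:90] = 0.0                                    # a span marked unreliable, e.g.
                                                  # already claimed by the foreground
result = normalized_convolution(s, c, sigma=3.0)
print(np.abs(result[60:90] - s[60:90]).max())     # gap filled from reliable neighbours
```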

  • 7.
    Ulvklo, Morgan
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology. Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Nygårds, Jonas
    Linköping University, Department of Mechanical Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology. Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Karlholm, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Skoglar, Per
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology. Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Nilsson, Jonas
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    A sensor management framework for autonomous UAV surveillance, 2005. In: Proceedings of SPIE 5787, Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications II, SPIE - International Society for Optical Engineering, 2005, p. 48-61. Conference paper (Refereed)
    Abstract [en]

    This paper presents components of a sensor management architecture for autonomous UAV systems equipped with IR and video sensors, focusing on two main areas. Firstly, a framework inspired by optimal control and information theory is presented for concurrent path and sensor planning. Secondly, a method for visual landmark selection and recognition is presented. The latter is intended to be used within a SLAM (Simultaneous Localization and Mapping) architecture for visual navigation. Results are presented on both simulated and real sensor data, the latter from the MASP system (Modular Airborne Sensor Platform), an in-house developed UAV surrogate system containing a gimballed IR camera, a video sensor, and an integrated high performance navigation system.
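
A hedged sketch of the information-theoretic planning idea mentioned in the abstract (not the paper's framework): greedily point the sensor at the grid cell whose observation gives the largest expected entropy reduction over the target-location hypotheses. The grid, the prior, and the detection/false-alarm probabilities are invented for illustration.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_entropy_after_look(prior, cell, p_detect=0.9, p_false=0.05):
    """Expected posterior entropy if the sensor is pointed at one grid cell."""
    exp_h = 0.0
    for detected in (True, False):
        # Likelihood of this observation under each target-location hypothesis.
        like = np.full_like(prior, p_false if detected else 1.0 - p_false)
        like[cell] = p_detect if detected else 1.0 - p_detect
        post = prior * like
        p_obs = post.sum()
        if p_obs > 0:
            exp_h += p_obs * entropy(post / p_obs)
    return exp_h

prior = np.ones(25)
prior[12] = 5.0                      # belief peaked on the centre of a 5x5 grid
prior /= prior.sum()

gains = [entropy(prior) - expected_entropy_after_look(prior, c) for c in range(25)]
best_cell = int(np.argmax(gains))
print(best_cell, round(max(gains), 3))   # looking at the most likely cell pays off most
```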
