liu.se: Search for publications in DiVA
1 - 50 of 68
  • 1.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
Active Contours in Three Dimensions, 1996. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    To find a shape in an image, a technique called snakes or active contours can be used. An active contour is a curve that moves towards the sought-for shape in a way controlled by internal forces - such as rigidity and elasticity - and an image force. The image force should attract the contour to certain features, such as edges, in the image. This is done by creating an attractor image, which defines how strongly each point in the image should attract the contour.

    In this thesis the extension to contours (surfaces) in three dimensional images is studied. Methods of representation of the contour and computation of the internal forces are treated.

    Also, a new way of creating the attractor image, using the orientation tensor to detect planar structure in 3D images, is studied. The new method is not generally superior to those already existing, but still has its uses in specific applications.

During the project, it turned out that the main problem of active contours in 3D images was instability due to strong internal forces overriding the influence of the attractor image. The problem was solved satisfactorily by projecting the elasticity force onto the contour’s tangent plane, which was approximated efficiently using sphere-fitting.

  • 2.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
An active model for facial feature tracking, 2002. In: EURASIP Journal on Applied Signal Processing, ISSN 1110-8657, E-ISSN 1687-0433, Vol. 2002, no. 6, p. 566-571. Article in journal (Refereed).
    Abstract [en]

We present a system for finding and tracking a face and extracting global and local animation parameters from a video sequence. The system uses an initial colour processing step to find a rough estimate of the position, size, and in-plane rotation of the face, followed by a refinement step driven by an active model. The latter step refines the previous estimate and also extracts local animation parameters. The system is able to track the face and some facial features in near real-time, and can compress the result to a bitstream compliant with MPEG-4 face and body animation.

  • 3.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
Estimating atmosphere parameters in hyperspectral data, 2010. In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen, Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, art. nr. 7695-82. Conference paper (Refereed).
    Abstract [en]

We address the problem of estimating atmosphere parameters (temperature, water vapour content) from data captured by an airborne thermal hyperspectral imager, and propose a method based on direct optimization. The method also involves the estimation of object parameters (temperature and emissivity) under the restriction that the emissivity is constant for all wavelengths. Certain sensor parameters can be estimated as well in the same process. The method is analyzed with respect to sensitivity to noise and the number of spectral bands. Simulations with synthetic signatures are performed to validate the analysis, showing that estimation can be performed with as few as 10-20 spectral bands at moderate noise levels; using more than 20 bands does not improve the estimates. The proposed method is also extended to incorporate additional knowledge, for example measurements of atmospheric parameters and sensor noise.

  • 4.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. FOI, SE-58111 Linkoping, Sweden.
Optimizing Object, Atmosphere, and Sensor Parameters in Thermal Hyperspectral Imagery, 2017. In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 55, no. 2, p. 658-670. Article in journal (Refereed).
    Abstract [en]

    We address the problem of estimating atmosphere parameters (temperature and water vapor content) from data captured by an airborne thermal hyperspectral imager and propose a method based on linear and nonlinear optimization. The method is used for the estimation of the parameters (temperature and emissivity) of the observed object as well as sensor gain under certain restrictions. The method is analyzed with respect to sensitivity to noise and the number of spectral bands. Simulations with synthetic signatures are performed to validate the analysis, showing that the estimation can be performed with as few as 10-20 spectral bands at moderate noise levels. The proposed method is also extended to exploit additional knowledge, for example, measurements of atmospheric parameters and sensor noise. Additionally, we show how to extend the method in order to improve spectral calibration.

  • 5.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
Visualization Techniques for Surveillance: Visualizing What Cannot Be Seen and Hiding What Should Not Be Seen, 2015. In: Konsthistorisk Tidskrift, ISSN 0023-3609, E-ISSN 1651-2294, Vol. 84, no. 2, p. 123-138. Article in journal (Refereed).
    Abstract [en]

This paper gives an introduction to some of the problems of modern camera surveillance, and how these problems are, or can be, addressed using visualization techniques. The paper is written from an engineering point of view, attempting to communicate visualization techniques invented in recent years to the non-engineer reader. Most of these techniques have the purpose of making it easier for the surveillance operator to recognize or detect relevant events (such as violence), while, in contrast, some have the purpose of hiding information in order to be less privacy-intrusive. Furthermore, there are also cameras and sensors that produce data that have no natural visible form, and methods for visualizing such data are discussed as well. Finally, in a concluding discussion an attempt is made to predict how the discussed methods and techniques will be used in the future.

  • 6.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arsic, Dejan
    Munich University of Technology, Germany.
    Ganchev, Todor
    University of Patras, Greece.
    Linderhed, Anna
    FOI Swedish Defence Research Agency.
    Menezes, Paolo
    University of Coimbra, Portugal.
    Ntalampiras, Stavros
    University of Patras, Greece.
    Olma, Tadeusz
    MARAC S.A., Greece.
    Potamitis, Ilyas
    Technological Educational Institute of Crete, Greece.
    Ros, Julien
    Probayes SAS, France.
Prometheus: Prediction and interpretation of human behaviour based on probabilistic structures and heterogeneous sensors, 2008. Conference paper (Refereed).
    Abstract [en]

    The on-going EU funded project Prometheus (FP7-214901) aims at establishing a general framework which links fundamental sensing tasks to automated cognition processes enabling interpretation and short-term prediction of individual and collective human behaviours in unrestricted environments as well as complex human interactions. To achieve the aforementioned goals, the Prometheus consortium works on the following core scientific and technological objectives:

    1. sensor modeling and information fusion from multiple, heterogeneous perceptual modalities;

    2. modeling, localization, and tracking of multiple people;

    3. modeling, recognition, and short-term prediction of continuous complex human behavior.

  • 7.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
Evaluating Template Rescaling in Short-Term Single-Object Tracking, 2015. Conference paper (Refereed).
    Abstract [en]

In recent years, short-term single-object tracking has emerged as a popular research topic, as it constitutes the core of more general tracking systems. Many such tracking methods are based on matching a part of the image with a template that is learnt online and represented by, for example, a correlation filter or a distribution field. In order for such a tracker to be able to find not only the position, but also the scale, of the tracked object in the next frame, some kind of scale estimation step is needed. This step is sometimes separate from the position estimation step, but is nevertheless jointly evaluated in de facto benchmarks. However, for practical as well as scientific reasons, the scale estimation step should be evaluated separately; for example, there might in certain situations be other methods more suitable for the task. In this paper, we describe an evaluation method for scale estimation in template-based short-term single-object tracking, and evaluate two state-of-the-art tracking methods where estimation of scale and position are separable.

  • 8.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
Efficient active appearance model for real-time head and facial feature tracking, 2003. In: Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, IEEE conference proceedings, 2003, p. 173-180. Conference paper (Refereed).
    Abstract [en]

We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. Classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to the desired minima is not guaranteed. We aim at designing an efficient active appearance model that copes with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and superior performance of the developed framework.

  • 9.
    Ahlberg, Jörgen
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Computer Vision Center, Universitat Autonoma de Barcelona, Bellaterra, Spain.
Parametric Face Modeling and Tracking, 2005. In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, Springer-Verlag New York, 2005, p. 65-87. Chapter in book (Other academic).
  • 10.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Horney, Tobias
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
Ground Target Recognition in a Query-Based Multi-Sensor Information System, 2006. Report (Other academic).
    Abstract [en]

We present a system covering the complete process for automatic ground target recognition, from sensor data to the user interface, i.e., from low-level image processing to high-level situation analysis. The system is based on a query language and a query processor, and includes target detection, target recognition, data fusion, presentation, and situation analysis. This paper focuses on target recognition and its interaction with the query processor. The target recognition is executed in sensor nodes, each containing a sensor and the corresponding signal/image processing algorithms. New sensors and algorithms are easily added to the system. The processing of sensor data is performed in two steps: attribute estimation and matching. First, several attributes, like orientation and dimensions, are estimated from the (unknown but detected) targets. These estimates are used to select the models of interest in a matching step, where the target is matched with a number of target models. Several methods and sensor data types are used in both steps, and data is fused after each step. Experiments have been performed using sensor data from laser radar, thermal and visual cameras. Promising results are reported, demonstrating the capabilities of the target recognition algorithms, the advantages of the two-level data fusion, and the query-based system.

  • 11.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology. Div. of Sensor Technology, Swedish Defence Research Agency, Linköping, Sweden.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
Face tracking for model-based coding and face animation, 2003. In: International Journal of Imaging Systems and Technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 13, no. 1, p. 8-22. Article in journal (Refereed).
    Abstract [en]

We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.

  • 12.
    Ahlberg, Jörgen
    et al.
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Klasén, Lena
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
Surveillance Systems for Urban Crisis Management, 2005. Conference paper (Other academic).
    Abstract [en]

We present a concept for combining 3D models and multiple heterogeneous sensors into a surveillance system enabling superior situation awareness. The concept has many military as well as civilian applications. A key issue is the use of a 3D environment model of the area to be surveyed, typically an urban area. In addition to the 3D model, the area of interest is monitored over time using multiple heterogeneous sensors, such as optical, acoustic, and/or seismic sensors. Data and analysis results from the sensors are visualized in the 3D model, thus putting them in a common reference frame and making their spatial and temporal relations obvious. The result is highlighted by an example where data from different sensor systems is integrated in a 3D model of a Swedish urban area.

  • 13.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Li, Haibo
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
Representing and Compressing MPEG-4 Facial Animation Parameters using Facial Action Basis Functions, 1999. In: IEEE Transactions on Circuits and Systems, ISSN 0098-4094, E-ISSN 1558-1276, Vol. 9, no. 3, p. 405-410. Article in journal (Refereed).
    Abstract [en]

    In model-based, or semantic, coding, parameters describing the nonrigid motion of objects, e.g., the mimics of a face, are of crucial interest. The facial animation parameters (FAPs) specified in MPEG-4 compose a very rich set of such parameters, allowing a wide range of facial motion. However, the FAPs are typically correlated and also constrained in their motion due to the physiology of the human face. We seek here to utilize this spatial correlation to achieve efficient compression. As it does not introduce any interframe delay, the method is suitable for interactive applications, e.g., videophone and interactive video, where low delay is a vital issue.

  • 14.
    Ahlberg, Jörgen
    et al.
    Termisk Systemteknik AB Linköping, Sweden; Visage Technologies AB Linköping, Sweden.
    Markuš, Nenad
    Human-Oriented Technologies Laboratory, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia.
    Berg, Amanda
    Termisk Systemteknik AB, Linköping, Sweden.
Multi-person fever screening using a thermal and a visual camera, 2015. Conference paper (Other academic).
    Abstract [en]

We propose a system to automatically measure the body temperature of persons as they pass. In contrast to existing systems, the persons do not need to stop and look into a camera one by one. Instead, their eye corners are automatically detected and the temperatures therein measured using a thermal camera. The system handles multiple simultaneous persons and can thus be used where a flow of people pass, such as at airport gates.

  • 15.
    Ahlberg, Jörgen
    et al.
    Division of Information Systems, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Pandzic, Igor
    Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
Facial Action Tracking, 2011. In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, London: Springer London, 2011, 2nd ed., p. 461-486. Chapter in book (Refereed).
    Abstract [en]

    This chapter explains the basics of parametric face models used for face and facial action tracking as well as fundamental strategies and methodologies for tracking. A few tracking algorithms serving as pedagogical examples are described in more detail.

  • 16.
    Ahlberg, Jörgen
    et al.
    Department of IR Systems, Division of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar
    Department of IR Systems, Division of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
An information-theoretic approach to band selection, 2005. In: Proc. SPIE 5811, Targets and Backgrounds XI: Characterization and Representation / [ed] Wendell R. Watkins; Dieter Clement; William R. Reynolds, SPIE - International Society for Optical Engineering, 2005, p. 15-23. Conference paper (Refereed).
    Abstract [en]

When we digitize data from a hyperspectral imager, we do so in three dimensions: the radiometric dimension, the spectral dimension, and the spatial dimension(s). The output can be regarded as a random variable taking values from a discrete alphabet, thus allowing simple estimation of the variable’s entropy, i.e., its information content. By modeling the target/background state as a binary random variable and the corresponding measured spectra as a function thereof, we can compute the information capacity of a certain sensor or sensor configuration. This can be used as a measure of the separability of the two classes, and also gives a bound on the sensor’s performance. Changing the parameters of the digitizing process, basically how many bits and bands to spend, will affect the information capacity, and we can thus try to find parameters where as few bits/bands as possible give as good class separability as possible. The parameters to be optimized in this way (and with respect to the chosen target and background) are spatial, radiometric and spectral resolution, i.e., which spectral bands to use and how to quantize them. In this paper, we focus on the band selection problem, describe an initial approach, and show early results of target/background separation.

  • 17.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Glana Sensors AB, Sweden.
    Renhorn, Ingmar
    Glana Sensors AB, Sweden.
    Chevalier, Tomas
    Scienvisic AB, Sweden.
    Rydell, Joakim
    FOI, Swedish Defence Research Agency, Sweden.
    Bergström, David
    FOI, Swedish Defence Research Agency, Sweden.
Three-dimensional hyperspectral imaging technique, 2017. In: Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIII / [ed] Miguel Velez-Reyes; David W. Messinger, SPIE - International Society for Optical Engineering, 2017, Vol. 10198, article id 1019805. Conference paper (Refereed).
    Abstract [en]

    Hyperspectral remote sensing based on unmanned airborne vehicles is a field increasing in importance. The combined functionality of simultaneous hyperspectral and geometric modeling is less developed. A configuration has been developed that enables the reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high frame rate, high resolution camera enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single and complete 3D hyperspectral model. In this paper, we describe the camera and illustrate capabilities and difficulties through real-world experiments.

  • 18.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar G.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Wadströmer, Niclas
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
An information measure of sensor performance and its relation to the ROC curve, 2010. In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen; Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, art. nr. 7695-72. Conference paper (Refereed).
    Abstract [en]

The ROC curve is the most frequently used performance measure for detection methods and the underlying sensor configuration. Common problems are that the ROC curve does not provide a single number that can be compared to other systems, and that no distinction is made between sensor performance and algorithm performance. To address the first problem, a number of measures are used in practice, like the detection rate at a specific false alarm rate, or the area under the curve. For the second problem, we proposed in a previous paper an information-theoretic method for measuring sensor performance. We now relate the method to the ROC curve, show that it is equivalent to selecting a certain point on the ROC curve, and show that this point is easily determined. Our scope is hyperspectral data, studying discrimination between single pixels.

  • 19.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
Åström, Anders
Swedish National Forensic Centre (NFC), Linköping, Sweden.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
Simultaneous sensing, readout, and classification on an intensity-ranking image sensor, 2018. In: International Journal of Circuit Theory and Applications, ISSN 0098-9886, E-ISSN 1097-007X, Vol. 46, no. 9, p. 1606-1619. Article in journal (Refereed).
    Abstract [en]

We combine the near-sensor image processing concept with address-event representation, leading to an intensity-ranking image sensor (IRIS), and show the benefits of using this type of sensor for image classification. The functionality of IRIS is to output pixel coordinates (X and Y values) continuously as each pixel has collected a certain number of photons. Thus, the pixel outputs will be automatically intensity ranked. By keeping track of the timing of these events, it is possible to record the full dynamic range of the image. However, in many cases this is not necessary; the intensity ranking in itself gives the needed information for the task at hand. This paper describes techniques for classification and proposes a particular variant (groves) that fits the IRIS architecture well, as it can work on the intensity rankings only. Simulation results using the CIFAR-10 dataset compare the results of the proposed method with the more conventional ferns technique. It is concluded that the simultaneous sensing and classification obtainable with the IRIS sensor yields both fast (shorter than full exposure time) and processing-efficient classification.

  • 20.
    Andersson, Maria
    et al.
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ntalampiras, Stavros
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Ganchev, Todor
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Rydell, Joakim
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Fakotakis, Nikos
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
Fusion of Acoustic and Optical Sensor Data for Automatic Fight Detection in Urban Environments, 2010. In: Information Fusion (FUSION), 2010 13th Conference on, IEEE conference proceedings, 2010, p. 1-8. Conference paper (Refereed).
    Abstract [en]

We propose a two-stage method for detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, in the case when only evidence from one camera is used for detecting the fights, the recognition performance is poor.

  • 21.
    Andersson, Maria
    et al.
    FOI Swedish Defence Research Agency.
    Rydell, Joakim
    FOI Swedish Defence Research Agency.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. FOI Swedish Defence Research Agency.
Estimation of crowd behaviour using sensor networks and sensor fusion, 2009. Conference paper (Refereed).
    Abstract [en]

Today, surveillance operators commonly monitor a large number of CCTV screens, trying to solve the complex cognitive tasks of analyzing crowd behavior and detecting threats and other abnormal behavior. Information overload is the rule rather than the exception. Moreover, CCTV footage lacks important indicators revealing certain threats, and can also in other respects be complemented by data from other sensors. This article presents an approach to automatically interpret sensor data and estimate behaviors of groups of people in order to provide the operator with relevant warnings. We use data from distributed heterogeneous sensors (visual cameras and a thermal infrared camera), and process the sensor data using detection algorithms. The extracted features are fed into a hidden Markov model in order to model normal behavior and detect deviations. We also discuss the use of radars for weapon detection.

  • 22.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
Classification and temporal analysis of district heating leakages in thermal images, 2014. In: Proceedings of the 14th International Symposium on District Heating and Cooling, 2014. Conference paper (Other academic).
    Abstract [en]

District heating pipes are known to degenerate with time, and in some cities the pipes have been used for several decades. Due to bad insulation or cracks, energy or media leakages might appear. This paper presents a complete system for large-scale monitoring of district heating networks, including methods for detection, classification and temporal characterization of (potential) leakages. The system analyses thermal infrared images acquired by an aircraft-mounted camera, detecting the areas for which the pixel intensity is higher than normal. Unfortunately, the system also finds many false detections, i.e., warm areas that are not caused by media or energy leakages. Thus, in order to reduce the number of false detections, we describe a machine learning method to classify the detections. The results, based on data from three district heating networks, show that we can remove more than half of the false detections. Moreover, we also propose a method to characterize leakages over time, that is, repeating the image acquisition one or a few years later and indicating areas that suffer from an increased energy loss.

  • 23.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Classification of leakage detections acquired by airborne thermography of district heating networks2014In: 2014 8th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS), IEEE , 2014, p. 1-4Conference paper (Refereed)
    Abstract [en]

    We address the problem of reducing the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps. First, we use a building segmentation scheme in order to remove detections on buildings. Second, we extract features from the detections and use a Random forest classifier on the remaining detections. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system.

  • 24.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Classifying district heating network leakages in aerial thermal imagery2014Conference paper (Other academic)
    Abstract [en]

    In this paper we address the problem of automatically detecting leakages in underground pipes of district heating networks from images captured by an airborne thermal camera. The basic idea is to classify each relevant image region as a leakage if its temperature exceeds a threshold. This simple approach yields a significant number of false positives. We propose to address this issue by machine learning techniques and provide extensive experimental analysis on real-world data. The results show that this postprocessing step significantly improves the usefulness of the system.
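    As a rough illustration of the detect-then-classify idea described in this abstract (thresholding warm image regions, then filtering out false positives), the Python sketch below finds connected warm regions and filters them with a toy area rule standing in for the learned classifier. The function names, the threshold, and the area rule are all hypothetical and not the authors' implementation.

```python
import numpy as np
from collections import deque

def detect_candidates(thermal, threshold):
    """Label 4-connected regions whose pixels exceed `threshold`.

    Returns one (mean_intensity, area) feature tuple per region.
    """
    mask = thermal > threshold
    labels = np.zeros(thermal.shape, dtype=int)
    features, current = [], 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue  # pixel already belongs to a labeled region
        current += 1
        labels[i, j] = current
        queue, pixels = deque([(i, j)]), []
        while queue:  # breadth-first flood fill of the region
            y, x = queue.popleft()
            pixels.append(thermal[y, x])
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < thermal.shape[0] and 0 <= nx < thermal.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        features.append((float(np.mean(pixels)), len(pixels)))
    return features

def filter_detections(features, min_area):
    """Toy stand-in for the learned classifier: keep sufficiently large regions."""
    return [f for f in features if f[1] >= min_area]
```

    On a synthetic image with one multi-pixel warm blob and one isolated warm pixel, the area rule keeps only the blob, mimicking the false-positive reduction step.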

  • 25.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A thermal infrared dataset for evaluation of short-term tracking methods2015Conference paper (Other academic)
    Abstract [en]

    During recent years, thermal cameras have decreased in both size and cost while improving image quality. The area of use for such cameras has expanded with many exciting applications, many of which require tracking of objects. While being subject to extensive research in the visual domain, tracking in thermal imagery has historically been of interest mainly for military purposes. The thermal infrared datasets available for evaluating such tracking methods are few, and the existing ones are not challenging enough for today's tracking algorithms. Therefore, we hereby propose a thermal infrared dataset for evaluation of short-term tracking methods. The dataset consists of 20 sequences which have been collected from multiple sources, and the data format used is in accordance with the Visual Object Tracking (VOT) Challenge.

  • 26.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A Thermal Object Tracking Benchmark2015Conference paper (Refereed)
    Abstract [en]

    Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different from tracking in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differs between the visual and thermal benchmarks, confirming the need for the new benchmark.
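    The observation that tracker rankings differ between the visual and thermal benchmarks can be quantified with a rank-correlation statistic. The paper does not specify one, so the Spearman coefficient below is only an assumed illustration; the simplified closed form also assumes no tied ranks.

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation between two rankings of the same trackers.

    rank_a, rank_b: ranks (1 = best) for the same trackers in the same order.
    Uses the tie-free closed form 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    """
    a = np.asarray(rank_a, dtype=float)
    b = np.asarray(rank_b, dtype=float)
    d = a - b
    n = len(a)
    return 1.0 - 6.0 * float(np.sum(d * d)) / (n * (n * n - 1))
```

    A value near 1 means the two benchmarks rank the trackers similarly; a value near -1 means the rankings are close to reversed.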

  • 27.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering.
    Channel Coded Distribution Field Tracking for Thermal Infrared Imagery2016In: PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), IEEE , 2016, p. 1248-1256Conference paper (Refereed)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. The fast progress has been possible thanks to the development of new template-based tracking methods with online template updates, methods which have not been explored for TIR tracking. Instead, tracking methods used for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. In order to avoid background contamination of the object template, we propose to exploit background information for the online template update and to adaptively select the object region used for tracking. Moreover, we propose a novel method for estimating object scale change. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Further, the proposed tracker, ABCD, and the VOT-TIR2015 winner SRDCFir are evaluated on maritime data. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 28.
    Berg, Amanda
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Enhanced analysis of thermographic images for monitoring of district heat pipe networks2016In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 83, no 2, p. 215-223Article in journal (Refereed)
    Abstract [en]

    We address two problems related to large-scale aerial monitoring of district heating networks. First, we propose a classification scheme to reduce the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps: (a) using a building segmentation scheme in order to remove detections on buildings, and (b) using a machine learning approach to classify the remaining detections as true or false leakages. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system. Second, we propose a method for characterization of leakages over time, i.e., repeating the image acquisition one or a few years later and indicating areas that suffer from an increased energy loss. We address the problem of finding trends in the degradation of pipe networks in order to plan for long-term maintenance, and propose a visualization scheme exploiting the consecutive data collections.

  • 29.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Generating Visible Spectrum Images from Thermal Infrared2018In: Proceedings 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops CVPRW 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 1224-1233Conference paper (Refereed)
    Abstract [en]

    Transformation of thermal infrared (TIR) images into visual, i.e. perceptually realistic color (RGB) images, is a challenging problem. TIR cameras have the ability to see in scenarios where vision is severely impaired, for example in total darkness or fog, and they are commonly used, e.g., for surveillance and automotive applications. However, interpretation of TIR images is difficult, especially for untrained operators. Enhancing the TIR image display by transforming it into a plausible, visual, perceptually realistic RGB image presumably facilitates interpretation. Existing grayscale-to-RGB, so-called colorization, methods cannot be applied to TIR images directly, since those methods only estimate the chrominance and not the luminance. In the absence of applicable colorization methods, we propose two fully automatic TIR to visual color image transformation methods, a two-step and an integrated approach, based on Convolutional Neural Networks. The methods require neither pre- nor postprocessing, do not require any user input, and are robust to image pair misalignments. We show that the methods do indeed produce perceptually realistic results on publicly available data, which is assessed both qualitatively and quantitatively.

  • 30.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Object Tracking in Thermal Infrared Imagery based on Channel Coded Distribution Fields2017Conference paper (Other academic)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 31.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Visual Spectrum Image Generation from Thermal Infrared2019Conference paper (Other academic)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 32.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    An Overview of the Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge2016Conference paper (Other academic)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking (VOT-TIR2015) Challenge was organized in conjunction with ICCV2015. It was the first benchmark on short-term, single-target tracking in thermal infrared (TIR) sequences. The challenge aimed at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. It was based on the VOT2013 Challenge, but introduced the following novelties: (i) the utilization of the LTIR (Linköping TIR) dataset, (ii) adaptation of the VOT2013 attributes to thermal data, (iii) a similar evaluation to that of VOT2015. This paper provides an overview of the VOT-TIR2015 Challenge as well as the results of the 24 participating trackers.

  • 33.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Systemteknik AB, Linköping, Sweden.
    Johnander, Joakim
    Linköping University, Department of Electrical Engineering, Computer Vision. Zenuity AB, Göteborg, Sweden.
    Durand de Gevigney, Flavie
    Linköping University, Department of Electrical Engineering, Computer Vision. Grenoble INP, France.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Semi-automatic Annotation of Objects in Visual-Thermal Video2019Conference paper (Refereed)
    Abstract [en]

    Deep learning requires large amounts of annotated data. Manual annotation of objects in video is, regardless of annotation type, a tedious and time-consuming process. In particular, for scarcely used image modalities, human annotation is hard to justify. In such cases, semi-automatic annotation provides an acceptable option.

    In this work, a recursive, semi-automatic annotation method for video is presented. The proposed method utilizes a state-of-the-art video object segmentation method to propose initial annotations for all frames in a video based on only a few manual object segmentations. In the case of a multi-modal dataset, the multi-modality is exploited to refine the proposed annotations even further. The final tentative annotations are presented to the user for manual correction.

    The method is evaluated on a subset of the RGBT-234 visual-thermal dataset, reducing the workload for a human annotator by approximately 78% compared to full manual annotation. Utilizing the proposed pipeline, sequences are annotated for the VOT-RGBT 2019 challenge.

  • 34.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Detecting Rails and Obstacles Using a Train-Mounted Thermal Camera2015In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Rasmus R. Paulsen; Kim S. Pedersen, Springer, 2015, p. 492-503Conference paper (Refereed)
    Abstract [en]

    We propose a method for detecting obstacles on the railway in front of a moving train using a monocular thermal camera. The problem is motivated by the large number of collisions between trains and various obstacles, resulting in reduced safety and high costs. The proposed method includes a novel way of detecting the rails in the imagery, as well as a way to detect anomalies on the railway. While the problem at a first glance looks similar to road and lane detection, which in the past has been a popular research topic, a closer look reveals that the problem at hand is previously unaddressed. As a consequence, relevant datasets are missing as well, and thus our contribution is two-fold: We propose an approach to the novel problem of obstacle detection on railways and we describe the acquisition of a novel data set.

  • 35.
    Bešenić, Krešimir
    et al.
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Pandžić, Igor
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Unsupervised Facial Biometric Data Filtering for Age and Gender Estimation2019In: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019), SciTePress, 2019, Vol. 5, p. 209-217Conference paper (Refereed)
    Abstract [en]

    Availability of large training datasets was essential for the recent advancement and success of deep learning methods. Due to the difficulties related to biometric data collection, datasets with age and gender annotations are scarce and usually limited in terms of size and sample diversity. Web-scraping approaches for automatic data collection can produce large amounts of weakly labeled, noisy data. The unsupervised facial biometric data filtering method presented in this paper greatly reduces label noise levels in web-scraped facial biometric data. Experiments on two large state-of-the-art web-scraped facial datasets demonstrate the effectiveness of the proposed method, with respect to training and validation scores, training convergence, and generalization capabilities of trained age and gender estimators.
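    One simple form of unsupervised label filtering (a hypothetical sketch; the paper's actual method, the choice of features, and the `max_dist` threshold are assumptions here) is to drop samples whose embedding lies far from the centroid of all samples sharing the same weak label:

```python
import numpy as np

def filter_noisy_labels(embeddings, labels, max_dist):
    """Keep samples whose embedding is within max_dist of its label centroid.

    embeddings: (n, d) array of per-sample feature vectors.
    labels: (n,) array of weak (possibly noisy) labels.
    Returns a boolean keep-mask of length n.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    keep = np.zeros(len(labels), dtype=bool)
    for lab in np.unique(labels):
        idx = np.nonzero(labels == lab)[0]
        centroid = embeddings[idx].mean(axis=0)   # mean embedding for this label
        dists = np.linalg.norm(embeddings[idx] - centroid, axis=1)
        keep[idx] = dists <= max_dist             # reject far-away outliers
    return keep
```

    Samples rejected by the mask are treated as mislabeled and excluded from training, which is the general effect the abstract describes.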

  • 36.
    Brattberg, Oskar
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Ahlberg, Jörgen
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Analysis of Multispectral Reconnaissance Imagery for Target Detection and Operator Support2006Conference paper (Other academic)
    Abstract [en]

    This paper describes a method to estimate motion in an image sequence acquired using a multispectral airborne sensor. The purpose of the motion estimation is to align the sequentially acquired spectral bands and fuse them into multispectral images. These multispectral images are then analysed and presented in order to support an operator in an air-to-ground reconnaissance scenario.

  • 37.
    Dornaika, Fadi
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Face and facial feature tracking using deformable models2004In: International Journal of Image and Graphics, ISSN 0219-4678, Vol. 4, no 3, p. 499-532Article in journal (Refereed)
    Abstract [en]

    In this paper, we address the 3D tracking of pose and animation of the human face in monocular image sequences using deformable 3D models. The main contributions of this paper are as follows. First, we show how the robustness and stability of the Active Appearance Algorithm can be improved through the inclusion of a simple motion compensation based on feature correspondence. Second, we develop a new method able to adapt a deformable 3D model to a face in the input image. Central to this method is the decoupling of global head movements and local non-rigid deformations/animations. This decoupling is achieved by, first, estimating the global (rigid) motion using robust statistics and a statistical model for face texture, and then, adapting the 3D model to possible local animations using the concept of the Active Appearance Algorithm. This proposed method constitutes a significant step towards reliable model-based face trackers since the strengths of complementary tracking methodologies are combined.

    Experiments evaluating the effectiveness of the methods are reported. Adaptation and tracking examples demonstrate the feasibility and robustness of the developed methods.

  • 38.
    Dornaika, Fadi
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Face Model Adaptation for Tracking and Active Appearance Model Training2003In: Proceedings of the British Machine Vision Conference / [ed] Richard Harvey and Andrew Bangham, 2003, p. 57.1-57.10Conference paper (Other academic)
    Abstract [en]

    In this paper, we consider the potentialities of adapting a 3D deformable face model to video sequences. Two adaptation methods are proposed. The first method computes the adaptation using a locally exhaustive and directed search in the parameter space. The second method decouples the estimation of head and facial feature motion. It computes the 3D head pose by combining: (i) a robust feature-based pose estimator, and (ii) a global featureless criterion. The facial animation parameters are then estimated with a combined exhaustive and directed search. Tracking experiments and performance evaluation demonstrate the feasibility and usefulness of the developed methods. These experiments also show that the proposed methods can outperform the adaptation based on a directed continuous search.

  • 39.
    Dornaika, Fadi
    et al.
    Laboratoire Heudiasyc, Université de Technologie de Compiègne, France.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Fast and Reliable Active Appearance Model Search for 3D Face Tracking2004In: IEEE transactions on systems, man and cybernetics. Part B. Cybernetics, ISSN 1083-4419, E-ISSN 1941-0492, Vol. 34, no 4, p. 1838-1853Article in journal (Refereed)
    Abstract [en]

    This paper addresses the three-dimensional (3-D) tracking of pose and animation of the human face in monocular image sequences using active appearance models. The major problem of the classical appearance-based adaptation is the high computational time resulting from the inclusion of a synthesis step in the iterative optimization. Whenever the dimension of the face space is large, a real-time performance cannot be achieved. In this paper, we aim at designing a fast and stable active appearance model search for 3-D face tracking. The main contribution is a search algorithm whose CPU-time is not dependent on the dimension of the face space. Using this algorithm, we show that both the CPU-time and the likelihood of a nonaccurate tracking are reduced. Experiments evaluating the effectiveness of the proposed algorithm are reported, as well as a method comparison and tracking of synthetic and real image sequences.

  • 40.
    Dornaika, Fadi
    et al.
    Computer Vision Centre, Autonomous University of Barcelona, Edifici O, Campus UAB, Bellaterra, Barcelona, Spain.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Fitting 3D Face Models for Tracking and Active Appearance Model Training2006In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 24, no 9, p. 1010-1024Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider fitting a 3D deformable face model to continuous video sequences for the tasks of tracking and training. We propose two appearance-based methods that only require a simple statistical facial texture model and do not require any information about an empirical or analytical gradient matrix, since the best search directions are estimated on the fly. The first method computes the fitting using a locally exhaustive and directed search where the 3D head pose and the facial actions are simultaneously estimated. The second method decouples the estimation of these parameters. It computes the 3D head pose using a robust feature-based pose estimator incorporating a facial texture consistency measure. Then, it estimates the facial actions with an exhaustive and directed search. Fitting and tracking experiments demonstrate the feasibility and usefulness of the developed methods. A performance evaluation also shows that the proposed methods can outperform the fitting based on an active appearance model search adopting a pre-computed gradient matrix. Although the proposed schemes are not as fast as the schemes adopting a directed continuous search, they can tackle many disadvantages associated with such approaches.

  • 41.
    Dornaika, Fadi
    et al.
    CNRS HEUDIASYC – UTC, Compiègne Cedex, France.
    Ahlberg, Jörgen
    Swedish Defence Research Agency, Linköping, Sweden.
    Model-based Head and Facial Motion Tracking2004In: Computer Vision in Human-Computer Interaction: ECCV 2004 Workshop on HCI, Prague, Czech Republic, May 16, 2004, Proceedings / [ed] Sebe, Nicu, Lew, Michael S., Huang, Thomas S., Springer Berlin/Heidelberg, 2004, p. 221-232Conference paper (Refereed)
    Abstract [en]

    This paper addresses the real-time tracking of head and facial motion in monocular image sequences using 3D deformable models. It introduces two methods. The first method only tracks the 3D head pose using a cascade of two stages: the first stage utilizes a robust feature-based pose estimator associated with two consecutive frames, the second stage relies on a Maximum a Posteriori inference scheme exploiting the temporal coherence in both the 3D head motions and facial textures. The facial texture is updated dynamically in order to obtain a simple on-line appearance model. The implementation of this method is kept simple and straightforward. In addition to the 3D head pose tracking, the second method tracks some facial animations using an Active Appearance Model search. Tracking experiments and performance evaluation demonstrate the robustness and usefulness of the developed methods that retain the advantages of both feature-based and appearance-based methods.

  • 42.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Kristan, Matej
    University of Ljubljana, Slovenia.
    Matas, Jiri
    Czech Technical University, Czech Republic.
    Leonardis, Ales
    University of Birmingham, United Kingdom.
    Cehovin, Luka
    University of Ljubljana, Slovenia.
    Fernandez, Gustavo
    Austrian Institute of Technology, Austria.
    Vojír, Tomas
    Czech Technical University, Czech Republic.
    Nebehay, Georg
    Austrian Institute of Technology, Austria.
    Pflugfelder, Roman
    Austrian Institute of Technology, Austria.
    Lukezic, Alan
    University of Ljubljana, Slovenia.
    Garcia-Martin, Alvaro
    Universidad Autonoma de Madrid, Spain.
    Saffari, Amir
    Affectv, United Kingdom.
    Li, Ang
    Xi’an Jiaotong University.
    Solís Montero, Andres
    University of Ottawa, Canada.
    Zhao, Baojun
    Beijing Institute of Technology, China.
    Schmid, Cordelia
    INRIA Grenoble Rhône-Alpes, France.
    Chen, Dapeng
    Xi’an Jiaotong University.
    Du, Dawei
    University at Albany, USA.
    Shahbaz Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Porikli, Fatih
    Australian National University, Australia.
    Zhu, Gao
    Australian National University, Australia.
    Zhu, Guibo
    NLPR, Chinese Academy of Sciences, China.
    Lu, Hanqing
    NLPR, Chinese Academy of Sciences, China.
    Kieritz, Hilke
    Fraunhofer IOSB, Germany.
    Li, Hongdong
    Australian National University, Australia.
    Qi, Honggang
    University at Albany, USA.
    Jeong, Jae-chan
    Electronics and Telecommunications Research Institute, Korea.
    Cho, Jae-il
    Electronics and Telecommunications Research Institute, Korea.
    Lee, Jae-Yeong
    Electronics and Telecommunications Research Institute, Korea.
    Zhu, Jianke
    Zhejiang University, China.
    Li, Jiatong
    University of Technology, Australia.
    Feng, Jiayi
    Institute of Automation, Chinese Academy of Sciences, China.
    Wang, Jinqiao
    NLPR, Chinese Academy of Sciences, China.
    Kim, Ji-Wan
    Electronics and Telecommunications Research Institute, Korea.
    Lang, Jochen
    University of Ottawa, Canada.
    Martinez, Jose M.
    Universidad Autónoma de Madrid, Spain.
    Xue, Kai
    INRIA Grenoble Rhône-Alpes, France.
    Alahari, Karteek
    INRIA Grenoble Rhône-Alpes, France.
    Ma, Liang
    Harbin Engineering University, China.
    Ke, Lipeng
    University at Albany, USA.
    Wen, Longyin
    University at Albany, USA.
    Bertinetto, Luca
    Oxford University, United Kingdom.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arens, Michael
    Fraunhofer IOSB, Germany.
    Tang, Ming
    Institute of Automation, Chinese Academy of Sciences, China.
    Chang, Ming-Ching
    University at Albany, USA.
    Miksik, Ondrej
    Oxford University, United Kingdom.
    Torr, Philip H S
    Oxford University, United Kingdom.
    Martin-Nieto, Rafael
    Universidad Autónoma de Madrid, Spain.
    Laganiere, Robert
    University of Ottawa, Canada.
    Hare, Sam
    Obvious Engineering, United Kingdom.
    Lyu, Siwei
    University at Albany, USA.
    Zhu, Song-Chun
    University of California, USA.
    Becker, Stefan
    Fraunhofer IOSB, Germany.
    Hicks, Stephen L
    Oxford University, United Kingdom.
    Golodetz, Stuart
    Oxford University, United Kingdom.
    Choi, Sunglok
    Electronics and Telecommunications Research Institute, Korea.
    Wu, Tianfu
    University of California, USA.
    Hubner, Wolfgang
    Fraunhofer IOSB, Germany.
    Zhao, Xu
    Institute of Automation, Chinese Academy of Sciences, China.
    Hua, Yang
    INRIA Grenoble Rhône-Alpes, France.
    Li, Yang
    Zhejiang University, China.
    Lu, Yang
    University of California, USA.
    Li, Yuezun
    University at Albany, USA.
    Yuan, Zejian
    Xi’an Jiaotong University.
    Hong, Zhibin
    University of Technology, Australia.
    The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results (2015). In: Proceedings of the IEEE International Conference on Computer Vision, Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 639-651. Conference paper (Refereed)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.

  • 43.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Kristan, Matej
    University of Ljubljana, Slovenia.
    Matas, Jiri
    Czech Technical University, Czech Republic.
    Leonardis, Ales
    University of Birmingham, England.
    Pflugfelder, Roman
    Austrian Institute of Technology, Austria.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Berg, Amanda
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Systemteknik AB, Linköping, Sweden.
    Eldesokey, Abdelrahman
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Cehovin, Luka
    University of Ljubljana, Slovenia.
    Vojir, Tomas
    Czech Technical University, Czech Republic.
    Lukezic, Alan
    University of Ljubljana, Slovenia.
    Fernandez, Gustavo
    Austrian Institute of Technology, Austria.
    Petrosino, Alfredo
    Parthenope University of Naples, Italy.
    Garcia-Martin, Alvaro
    Universidad Autónoma de Madrid, Spain.
    Solis Montero, Andres
    University of Ottawa, Canada.
    Varfolomieiev, Anton
    Kyiv Polytechnic Institute, Ukraine.
    Erdem, Aykut
    Hacettepe University, Turkey.
    Han, Bohyung
    POSTECH, South Korea.
    Chang, Chang-Ming
    University at Albany, NY, USA.
    Du, Dawei
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Erdem, Erkut
    Hacettepe University, Turkey.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Porikli, Fatih
    ARC Centre of Excellence for Robotic Vision, Australia; CSIRO, Australia.
    Zhao, Fei
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Bunyak, Filiz
    University of Missouri, MO 65211 USA.
    Battistone, Francesco
    Parthenope University of Naples, Italy.
    Zhu, Gao
    University of Missouri, Columbia, USA.
    Seetharaman, Guna
    US Navy, DC 20375 USA.
    Li, Hongdong
    ARC Centre of Excellence for Robotic Vision, Australia.
    Qi, Honggang
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Bischof, Horst
    Graz University of Technology, Austria.
    Possegger, Horst
    Graz University of Technology, Austria.
    Nam, Hyeonseob
    NAVER Corp, South Korea.
    Valmadre, Jack
    University of Oxford, England.
    Zhu, Jianke
    Zhejiang University, China.
    Feng, Jiayi
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Lang, Jochen
    University of Ottawa, Canada.
    Martinez, Jose M.
    Universidad Autónoma de Madrid, Spain.
    Palaniappan, Kannappan
    University of Missouri, MO 65211 USA.
    Lebeda, Karel
    University of Surrey, England.
    Gao, Ke
    University of Missouri, MO 65211 USA.
    Mikolajczyk, Krystian
    Imperial College London, England.
    Wen, Longyin
    University at Albany, NY, USA.
    Bertinetto, Luca
    University of Oxford, England.
    Poostchi, Mahdieh
    University of Missouri, MO 65211 USA.
    Maresca, Mario
    Parthenope University of Naples, Italy.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arens, Michael
    Fraunhofer IOSB, Germany.
    Tang, Ming
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Baek, Mooyeol
    POSTECH, South Korea.
    Fan, Nana
    Harbin Institute of Technology, China.
    Al-Shakarji, Noor
    University of Missouri, MO 65211 USA.
    Miksik, Ondrej
    University of Oxford, England.
    Akin, Osman
    Hacettepe University, Turkey.
    Torr, Philip H. S.
    University of Oxford, England.
    Huang, Qingming
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Martin-Nieto, Rafael
    Universidad Autónoma de Madrid, Spain.
    Pelapur, Rengarajan
    University of Missouri, MO 65211 USA.
    Bowden, Richard
    University of Surrey, England.
    Laganiere, Robert
    University of Ottawa, Canada.
    Krah, Sebastian B.
    Fraunhofer IOSB, Germany.
    Li, Shengkun
    University at Albany, NY, USA.
    Yao, Shizeng
    University of Missouri, MO 65211 USA.
    Hadfield, Simon
    University of Surrey, England.
    Lyu, Siwei
    University at Albany, NY, USA.
    Becker, Stefan
    Fraunhofer IOSB, Germany.
    Golodetz, Stuart
    University of Oxford, England.
    Hu, Tao
    Australian National University, Australia; Chinese Academy of Sciences, China.
    Mauthner, Thomas
    Graz University of Technology, Austria.
    Santopietro, Vincenzo
    Parthenope University of Naples, Italy.
    Li, Wenbo
    Lehigh University, PA 18015 USA.
    Huebner, Wolfgang
    Fraunhofer IOSB, Germany.
    Li, Xin
    Harbin Institute of Technology, China.
    Li, Yang
    Zhejiang University, China.
    Xu, Zhan
    Zhejiang University, China.
    He, Zhenyu
    Harbin Institute of Technology, China.
    The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results (2016). In: Computer Vision – ECCV 2016 Workshops. ECCV 2016. / [ed] Hua G., Jégou H., Springer International Publishing AG, 2016, p. 824-849. Conference paper (Refereed)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking challenge 2016, VOT-TIR2016, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2016 is the second benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2016 challenge is similar to the 2015 challenge; the main difference is the introduction of new, more difficult sequences into the dataset. Furthermore, the VOT-TIR2016 evaluation adopted the improvements regarding overlap calculation in VOT2016. Compared to VOT-TIR2015, a significant general improvement of results has been observed, which partly compensates for the more difficult sequences. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website.

  • 44.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Larsson, Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wadströmer, Niclas
    FOI.
    Ahlberg, Jörgen
    Termisk Systemteknik AB.
    Online Learning of Correspondences between Images (2013). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 35, no 1, p. 118-129. Article in journal (Refereed)
    Abstract [en]

    We propose a novel method for iterative learning of point correspondences between image sequences. Points moving on surfaces in 3D space are projected into two images. Given a point in either view, the considered problem is to determine the corresponding location in the other view. The geometry and distortions of the projections are unknown as is the shape of the surface. Given several pairs of point-sets but no access to the 3D scene, correspondence mappings can be found by excessive global optimization or by the fundamental matrix if a perspective projective model is assumed. However, an iterative solution on sequences of point-set pairs with general imaging geometry is preferable. We derive such a method that optimizes the mapping based on Neyman's chi-square divergence between the densities representing the uncertainties of the estimated and the actual locations. The densities are represented as channel vectors computed with a basis function approach. The mapping between these vectors is updated with each new pair of images such that fast convergence and high accuracy are achieved. The resulting algorithm runs in real-time and is superior to state-of-the-art methods in terms of convergence and accuracy in a number of experiments.

  • 45.
    Friman, Ola
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology. Swedish Defence Research Agency, Linköping, Sweden.
    Follo, Peter
    Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Sjökvist, Stefan
    Termisk Systemteknik AB, Linköping, Sweden.
    Methods for Large-Scale Monitoring of District Heating Systems Using Airborne Thermography (2014). In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 52, no 8, p. 5175-5182. Article in journal (Refereed)
    Abstract [en]

    District heating is a common way of providing heat to buildings in urban areas. The heat is carried by hot water or steam and distributed in a network of pipes from a central power plant. It is of great interest to minimize energy losses due to bad pipe insulation or leakages in such district heating networks. As the pipes generally are placed underground, it may be difficult to establish the presence and location of losses and leakages. Toward this end, this work presents methods for large-scale monitoring and detection of leakages by means of remote sensing using thermal cameras, so-called airborne thermography. The methods rely on the fact that underground losses in district heating systems lead to increased surface temperatures. The main contribution of this work is methods for automatic analysis of aerial thermal images to localize leaking district heating pipes. Results and experiences from large-scale leakage detection in several cities in Sweden and Norway are presented.

  • 46.
    Friman, Ola
    et al.
    Swedish Defence Research Agency, Linköping, Sweden.
    Tolt, Gustav
    Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Termisk Systemteknik, Linköping, Sweden.
    Illumination and shadow compensation of hyperspectral images using a digital surface model and non-linear least squares estimation (2011). In: Proc. SPIE 8180, Image and Signal Processing for Remote Sensing XVII / [ed] Lorenzo Bruzzone, SPIE - International Society for Optical Engineering, 2011, Art. no. 8180-26. Conference paper (Refereed)
    Abstract [en]

    Object detection and material classification are two central tasks in electro-optical remote sensing and hyperspectral imaging applications. These are challenging problems, as the measured spectra in hyperspectral images from satellite or airborne platforms vary significantly depending on the light conditions at the imaged surface, e.g., shadow versus non-shadow. In this work, a Digital Surface Model (DSM) is used to estimate different components of the incident light. These light components are subsequently used to predict what a measured spectrum would look like under different light conditions. The derived method is evaluated using an urban hyperspectral data set with 24 bands in the wavelength range 381.9 nm to 1040.4 nm and a DSM created from LIDAR 3D data acquired simultaneously with the hyperspectral data.

  • 47.
    Hamoir, Dominique
    et al.
    Onera – The French Aerospace Lab, Toulouse, France.
    Hespel, Laurent
    Onera – The French Aerospace Lab, Toulouse, France.
    Déliot, Philippe
    Onera – The French Aerospace Lab, Toulouse, France.
    Boucher, Yannick
    Onera – The French Aerospace Lab, Toulouse, France.
    Steinvall, Ove
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Ahlberg, Jörgen
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Larsson, Håkan
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Letalick, Dietmar
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Lutzmann, Peter
    Fraunhofer-IOSB, Ettlingen, Germany.
    Repasi, Endre
    Fraunhofer-IOSB, Ettlingen, Germany.
    Ritt, Gunnar
    Fraunhofer-IOSB, Ettlingen, Germany.
    Results of ACTIM: an EDA study on spectral laser imaging (2011). In: Proc. SPIE 8186, Electro-Optical Remote Sensing, Photonic Technologies, and Applications V / [ed] Gary W. Kamerman; Ove Steinvall; Gary J. Bishop; John D. Gonglewski; Keith L. Lewis; Richard C. Hollins; Thomas J. Merlet, SPIE - International Society for Optical Engineering, 2011, Art. no. 8186A-25. Conference paper (Refereed)
    Abstract [en]

    The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising military applications, system analyses, a roadmap and recommendations.

    Passive multi- and hyper-spectral imaging allows discriminating between materials. But the measured radiance in the sensor is difficult to relate to spectral reflectance due to the dependence on e.g. solar angle, clouds, shadows... In turn, active spectral imaging offers complete control of the illumination, thus eliminating these effects. In addition, it allows observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage…) or retrieving polarization information. When 3D, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence, fusing the knowledge of ladar and passive spectral imaging will result in new capabilities.

    We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long range observation for identification, (2) mid-range mapping for reconnaissance, (3) shorter range perception for threat detection. We present the system analyses that have been performed for confirming the interests, limitations and requirements of spectral active imaging in these three prioritized applications.

  • 48.
    Hatami, Sepehr
    et al.
    Swerea IVF AB, Mölndal, Sweden.
    Dahl-Jendelin, Anton
    Swerea IVF AB, Mölndal, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Nelsson, Claes
    Termisk Systemteknik AB, Linköping, Sweden.
    Selective Laser Melting Process Monitoring by Means of Thermography (2018). In: Proceedings of Euro Powder Metallurgy Congress (Euro PM), European Powder Metallurgy Association (EPMA), 2018, article id 3957771. Conference paper (Refereed)
    Abstract [en]

    Selective laser melting (SLM) enables production of highly intricate components. From this point of view, the capabilities of this technology are known to the industry and have been demonstrated in numerous applications. Nonetheless, for serial production purposes the manufacturing industry has so far been reluctant to substitute its conventional methods with SLM. One underlying reason is the lack of simple and reliable process monitoring methods. This study examines the feasibility of using thermography for process monitoring. To this end, an infrared (IR) camera was mounted off-axis to monitor and record the temperature of every layer. The recorded temperature curves are analysed and interpreted with respect to different stages of the process. Furthermore, the possibility of detecting variations in laser settings by means of thermography is demonstrated. The results show that once thermal patterns are identified, this data can be utilized for in-process and post-process monitoring of SLM production.

  • 49.
    Heggenes, Jan
    et al.
    Department of Environmental and Health Sciences, University College of Southeast Norway, Bø i Telemark, Norway.
    Odland, Arvid
    Department of Environmental and Health Sciences, University College of Southeast Norway, Bø i Telemark, Norway.
    Chevalier, Tomas
    Scienvisic AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Bjerketvedt, Dag
    Department of Environmental and Health Sciences, University College of Southeast Norway, Bø i Telemark, Norway.
    Herbivore grazing—or trampling? Trampling effects by a large ungulate in cold high-latitude ecosystems (2017). In: Ecology and Evolution, ISSN 2045-7758, Vol. 7, no 16, p. 6423-6431. Article in journal (Refereed)
    Abstract [en]

    Mammalian herbivores have important top-down effects on ecological processes and landscapes by generating vegetation changes through grazing and trampling. For free-ranging herbivores on large landscapes, trampling is an important ecological factor. However, whereas grazing is widely studied, low-intensity trampling is rarely studied and quantified. The cold-adapted northern tundra reindeer (Rangifer tarandus) is a wide-ranging keystone herbivore in large open alpine and Arctic ecosystems. Reindeer may largely subsist on different species of slow-growing ground lichens, particularly in winter. Lichen grows in dry, snow-poor habitats with frost. Their varying elasticity makes them suitable for studying trampling. In replicated factorial experiments, high-resolution 3D laser scanning was used to quantify lichen volume loss from trampling by a reindeer hoof. Losses were substantial, that is, about 0.3 dm3 per imprint in dry thick lichen, but depended on type of lichen mat and humidity. Immediate trampling volume loss was about twice as high in dry, compared to humid, thin (2–3 cm) lichen mats and about three times as high in dry vs. humid thick (6–8 cm) lichen mats. There was no significant difference in volume loss between 100% and 50% wetted lichen. Regained volume with time was insignificant for dry lichen, whereas 50% humid lichen regained substantial volumes, and 100% humid lichen regained almost all lost volume, mostly within 10–20 min. Reindeer trampling may have from near none to devastating effects on exposed lichen forage. During a normal week of foraging, daily moving 5 km across dry 6- to 8-cm-thick continuous lichen mats, one adult reindeer may trample a lichen volume corresponding to about a year's supply of lichen. However, the lichen humidity appears to be an important factor for trampling loss, in addition to the extent of reindeer movement.

  • 50.
    Horney, Tobias
    et al.
    Swedish Defence Research Agency, Sweden.
    Ahlberg, Jörgen
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Silvervarg, Karin
    Swedish Defence Research Agency, Sweden.
    Fransson, Jörgen
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Lantz, Fredrik
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
    An information system for target recognition (2004). In: Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, Volume 5434 / [ed] Belur V. Dasarathy, SPIE - International Society for Optical Engineering, 2004, p. 163-175. Conference paper (Refereed)
    Abstract [en]

    We present an approach to a general decision support system. The aim is to cover the complete process for automatic target recognition, from sensor data to the user interface. The approach is based on a query-based information system and includes tasks like feature extraction from sensor data, data association, data fusion and situation analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude. The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown but detected) target. The attributes include orientation, size, speed, temperature etc. These estimates are used to select the models of interest in the matching step, where the target is matched with a number of target models, returning a likelihood value for each model. Several methods and sensor data types are used in both steps. The user communicates with the system via a visual user interface, where, for instance, the user can mark an area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query language developed for this type of application, and an ontological system decides which algorithms should be invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers are given back to the user. The user does not need to have any detailed technical knowledge about the sensors (or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
