liu.se: Search for publications in DiVA
1 - 50 of 56
  • 1.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    An active model for facial feature tracking2002In: EURASIP Journal on Applied Signal Processing, ISSN 1110-8657, E-ISSN 1687-0433, Vol. 2002, no 6, p. 566-571Article in journal (Refereed)
    Abstract [en]

    We present a system for finding and tracking a face and extracting global and local animation parameters from a video sequence. The system uses an initial colour processing step for finding a rough estimate of the position, size, and in-plane rotation of the face, followed by a refinement step driven by an active model. The latter step refines the previous estimate, and also extracts local animation parameters. The system is able to track the face and some facial features in near real-time, and can compress the result to a bitstream compliant with MPEG-4 face and body animation.
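    In spirit, the initial colour processing step could look like the sketch below: a hypothetical illustration, not the authors' actual algorithm. The normalised-chromaticity features and the threshold values are made up for the example.

```python
import numpy as np

def rough_face_estimate(rgb, r_min=0.35, r_max=0.55, g_min=0.25, g_max=0.40):
    """Illustrative colour pre-processing: threshold normalised red/green
    chromaticity to obtain a skin-like mask, then return the mask centroid
    and a crude size estimate. Thresholds are hypothetical."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=2) + 1e-9          # per-pixel intensity (avoid /0)
    r = rgb[..., 0] / s                  # normalised red chromaticity
    g = rgb[..., 1] / s                  # normalised green chromaticity
    mask = (r > r_min) & (r < r_max) & (g > g_min) & (g < g_max)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # no skin-coloured region found
    cx, cy = xs.mean(), ys.mean()        # rough position estimate
    size = 2.0 * np.sqrt(xs.std() ** 2 + ys.std() ** 2)  # crude scale proxy
    return cx, cy, size
```

    A refinement step (the active model in the abstract) would then start from this rough estimate.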

  • 2.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Estimating atmosphere parameters in hyperspectral data2010In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen, Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, p. Art.nr. 7695-82-Conference paper (Refereed)
    Abstract [en]

    We address the problem of estimating atmosphere parameters (temperature, water vapour content) from data captured by an airborne thermal hyperspectral imager, and propose a method based on direct optimization. The method also involves the estimation of object parameters (temperature and emissivity) under the restriction that the emissivity is constant for all wavelengths. Certain sensor parameters can be estimated as well in the same process. The method is analyzed with respect to sensitivity to noise and number of spectral bands. Simulations with synthetic signatures are performed to validate the analysis, showing that estimation can be performed with as few as 10-20 spectral bands at moderate noise levels. Using more than 20 bands does not improve the estimates. The proposed method is also extended to incorporate additional knowledge, for example measurements of atmospheric parameters and sensor noise.

  • 3.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    Visualization Techniques for Surveillance: Visualizing What Cannot Be Seen and Hiding What Should Not Be Seen2015In: Konsthistorisk Tidskrift, ISSN 0023-3609, E-ISSN 1651-2294, Vol. 84, no 2, p. 123-138Article in journal (Refereed)
    Abstract [en]

    This paper gives an introduction to some of the problems of modern camera surveillance, and how these problems are, or can be, addressed using visualization techniques. The paper is written from an engineering point of view, attempting to communicate visualization techniques invented in recent years to the non-engineer reader. Most of these techniques have the purpose of making it easier for the surveillance operator to recognize or detect relevant events (such as violence), while, in contrast, some have the purpose of hiding information in order to be less privacy-intrusive. Furthermore, there are also cameras and sensors that produce data that have no natural visible form, and methods for visualizing such data are discussed as well. Finally, in a concluding discussion an attempt is made to predict how the discussed methods and techniques will be used in the future.

  • 4.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arsic, Dejan
    Munich University of Technology, Germany.
    Ganchev, Todor
    University of Patras, Greece.
    Linderhed, Anna
    FOI Swedish Defence Research Agency.
    Menezes, Paolo
    University of Coimbra, Portugal.
    Ntalampiras, Stavros
    University of Patras, Greece.
    Olma, Tadeusz
    MARAC S.A., Greece.
    Potamitis, Ilyas
    Technological Educational Institute of Crete, Greece.
    Ros, Julien
    Probayes SAS, France.
    Prometheus: Prediction and interpretation of human behaviour based on probabilistic structures and heterogeneous sensors2008Conference paper (Refereed)
    Abstract [en]

    The on-going EU funded project Prometheus (FP7-214901) aims at establishing a general framework which links fundamental sensing tasks to automated cognition processes enabling interpretation and short-term prediction of individual and collective human behaviours in unrestricted environments as well as complex human interactions. To achieve the aforementioned goals, the Prometheus consortium works on the following core scientific and technological objectives:

    1. sensor modeling and information fusion from multiple, heterogeneous perceptual modalities;

    2. modeling, localization, and tracking of multiple people;

    3. modeling, recognition, and short-term prediction of continuous complex human behavior.

  • 5.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Evaluating Template Rescaling in Short-Term Single-Object Tracking2015Conference paper (Refereed)
    Abstract [en]

    In recent years, short-term single-object tracking has emerged as a popular research topic, as it constitutes the core of more general tracking systems. Many such tracking methods are based on matching a part of the image with a template that is learnt online and represented by, for example, a correlation filter or a distribution field. In order for such a tracker to be able to not only find the position, but also the scale, of the tracked object in the next frame, some kind of scale estimation step is needed. This step is sometimes separate from the position estimation step, but is nevertheless jointly evaluated in de facto benchmarks. However, for practical as well as scientific reasons, the scale estimation step should be evaluated separately – for example, there might in certain situations be other methods more suitable for the task. In this paper, we describe an evaluation method for scale estimation in template-based short-term single-object tracking, and evaluate two state-of-the-art tracking methods where estimation of scale and position are separable.

  • 6.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Efficient active appearance model for real-time head and facial feature tracking2003In: Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, IEEE conference proceedings, 2003, p. 173-180Conference paper (Refereed)
    Abstract [en]

    We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and out-performance of the developed framework.

  • 7.
    Ahlberg, Jörgen
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Computer Vision Center, Universitat Autonoma de Barcelona, Bellaterra, Spain.
    Parametric Face Modeling and Tracking2005In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, Springer-Verlag New York, 2005, p. 65-87Chapter in book (Other academic)
  • 8.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Horney, Tobias
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
    Ground Target Recognition in a Query-Based Multi-Sensor Information System2006Report (Other academic)
    Abstract [en]

    We present a system covering the complete process for automatic ground target recognition, from sensor data to the user interface, i.e., from low level image processing to high level situation analysis. The system is based on a query language and a query processor, and includes target detection, target recognition, data fusion, presentation and situation analysis. This paper focuses on target recognition and its interaction with the query processor. The target recognition is executed in sensor nodes, each containing a sensor and the corresponding signal/image processing algorithms. New sensors and algorithms are easily added to the system. The processing of sensor data is performed in two steps: attribute estimation and matching. First, several attributes, like orientation and dimensions, are estimated from the (unknown but detected) targets. These estimates are used to select the models of interest in a matching step, where the target is matched with a number of target models. Several methods and sensor data types are used in both steps, and data is fused after each step. Experiments have been performed using sensor data from laser radar, thermal and visual cameras. Promising results are reported, demonstrating the capabilities of the target recognition algorithms, the advantages of the two-level data fusion and the query-based system.
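    The two-step scheme (attribute estimation, then matching against a shortlist of models) can be illustrated with a toy sketch. The model library, the attributes (length and width in metres), and the tolerance below are all invented for the example; the real system matches full sensor signatures, not attribute vectors.

```python
import numpy as np

# Made-up model library: name -> (length, width) in metres.
MODELS = {
    "tank":  (7.0, 3.5),
    "truck": (9.0, 2.5),
    "car":   (4.5, 1.8),
}

def candidate_models(est, tol=1.0):
    """Step 1: use the estimated attributes to shortlist models whose
    attributes all lie within tol metres of the estimate."""
    return [name for name, dims in MODELS.items()
            if all(abs(e - d) <= tol for e, d in zip(est, dims))]

def best_match(est, candidates):
    """Step 2: pick the shortlisted model with the smallest attribute
    distance (a stand-in for the real signature-matching step)."""
    return min(candidates,
               key=lambda n: float(np.linalg.norm(np.subtract(est, MODELS[n]))))
```

    Fusing data after each step would, in this toy setting, amount to intersecting or re-weighting the shortlists produced per sensor.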

  • 9.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology. Div. of Sensor Technology, Swedish Defence Research Agency, Linköping, Sweden.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Face tracking for model-based coding and face animation2003In: International journal of imaging systems and technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 13, no 1, p. 8-22Article in journal (Refereed)
    Abstract [en]

    We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.

  • 10.
    Ahlberg, Jörgen
    et al.
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Klasén, Lena
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Surveillance Systems for Urban Crisis Management2005Conference paper (Other academic)
    Abstract [en]

    We present a concept for combining 3D models and multiple heterogeneous sensors into a surveillance system enabling superior situation awareness. The concept has many military as well as civilian applications. A key issue is the use of a 3D environment model of the area to be surveyed, typically an urban area. In addition to the 3D model, the area of interest is monitored over time using multiple heterogeneous sensors, such as optical, acoustic, and/or seismic sensors. Data and analysis results from the sensors are visualized in the 3D model, thus putting them in a common reference frame and making their spatial and temporal relations obvious. The result is highlighted by an example where data from different sensor systems is integrated in a 3D model of a Swedish urban area.

  • 11.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Li, Haibo
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Representing and Compressing MPEG-4 Facial Animation Parameters using Facial Action Basis Functions1999In: IEEE Transactions on Circuits and Systems, ISSN 0098-4094, E-ISSN 1558-1276, Vol. 9, no 3, p. 405-410Article in journal (Refereed)
    Abstract [en]

    In model-based, or semantic, coding, parameters describing the nonrigid motion of objects, e.g., the mimics of a face, are of crucial interest. The facial animation parameters (FAPs) specified in MPEG-4 compose a very rich set of such parameters, allowing a wide range of facial motion. However, the FAPs are typically correlated and also constrained in their motion due to the physiology of the human face. We seek here to utilize this spatial correlation to achieve efficient compression. As it does not introduce any interframe delay, the method is suitable for interactive applications, e.g., videophone and interactive video, where low delay is a vital issue.

  • 12.
    Ahlberg, Jörgen
    et al.
    Termisk Systemteknik AB Linköping, Sweden; Visage Technologies AB Linköping, Sweden.
    Markuš, Nenad
    Human-Oriented Technologies Laboratory, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia.
    Berg, Amanda
    Termisk Systemteknik AB, Linköping, Sweden.
    Multi-person fever screening using a thermal and a visual camera2015Conference paper (Other academic)
    Abstract [en]

    We propose a system to automatically measure the body temperature of persons as they pass. In contrast to existing systems, the persons do not need to stop and look into a camera one-by-one. Instead, their eye corners are automatically detected and the temperatures therein measured using a thermal camera. The system handles multiple simultaneous persons and can thus be used where a flow of people pass, such as at airport gates.
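    Once an eye corner has been detected, reading out its temperature is, in principle, a small-window operation on a radiometrically calibrated thermal image. The sketch below is an illustrative stand-in for the system's actual measurement step; the window radius and the assumption of a calibrated image in degrees Celsius are guesses.

```python
import numpy as np

def eye_corner_temperature(thermal, corner_xy, radius=2):
    """Body-temperature read-out at a detected eye corner: the maximum
    value in a small window of a calibrated thermal image (degrees C).
    corner_xy is an (x, y) pixel coordinate; radius is hypothetical."""
    x, y = corner_xy
    win = thermal[max(0, y - radius): y + radius + 1,
                  max(0, x - radius): x + radius + 1]
    return float(win.max())
```

    Taking the window maximum makes the read-out tolerant to a detector that is off by a pixel or two.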

  • 13.
    Ahlberg, Jörgen
    et al.
    Division of Information Systems, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Pandzic, Igor
    Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
    Facial Action Tracking2011In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, London: Springer London, 2011, 2, p. 461-486Chapter in book (Refereed)
    Abstract [en]

    This chapter explains the basics of parametric face models used for face and facial action tracking as well as fundamental strategies and methodologies for tracking. A few tracking algorithms serving as pedagogical examples are described in more detail.

  • 14.
    Ahlberg, Jörgen
    et al.
    Department of IR Systems, Division of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar
    Department of IR Systems, Division of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    An information-theoretic approach to band selection2005In: Proc. SPIE 5811, Targets and Backgrounds XI: Characterization and Representation / [ed] Wendell R. Watkins; Dieter Clement; William R. Reynolds, SPIE - International Society for Optical Engineering, 2005, p. 15-23Conference paper (Refereed)
    Abstract [en]

    When we digitize data from a hyperspectral imager, we do so in three dimensions: the radiometric dimension, the spectral dimension, and the spatial dimension(s). The output can be regarded as a random variable taking values from a discrete alphabet, thus allowing simple estimation of the variable’s entropy, i.e., its information content. By modeling the target/background state as a binary random variable and the corresponding measured spectra as a function thereof, we can compute the information capacity of a certain sensor or sensor configuration. This can be used as a measure of the separability of the two classes, and also gives a bound on the sensor’s performance. Changing the parameters of the digitizing process, basically how many bits and bands to spend, will affect the information capacity, and we can thus try to find parameters where as few bits/bands as possible give us as good class separability as possible. The parameters to be optimized in this way (and with respect to the chosen target and background) are spatial, radiometric and spectral resolution, i.e., which spectral bands to use and how to quantize them. In this paper, we focus on the band selection problem, describe an initial approach, and show early results of target/background separation.
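    The entropy estimate at the heart of this approach is straightforward for quantised data. The sketch below uses plain per-band entropy as a stand-in for the paper's class-conditional information capacity, and greedy top-k ranking as a stand-in for its actual band selection procedure; both simplifications are ours.

```python
import numpy as np

def band_entropy(band, bits=8):
    """Shannon entropy (in bits) of one quantised spectral band,
    estimated from the empirical histogram of its discrete alphabet."""
    hist = np.bincount(band.ravel().astype(np.int64), minlength=2 ** bits)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def select_bands(cube, k):
    """Greedy illustration: rank the bands of an (H, W, B) cube by
    entropy and keep the k most informative ones."""
    scores = [band_entropy(cube[..., b]) for b in range(cube.shape[-1])]
    return sorted(np.argsort(scores)[::-1][:k].tolist())
```

    A constant band carries zero bits, so it is never selected ahead of a band with spread-out values.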

  • 15.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar G.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Wadströmer, Niclas
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    An information measure of sensor performance and its relation to the ROC curve2010In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen; Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, p. Art.nr. 7695-72-Conference paper (Refereed)
    Abstract [en]

    The ROC curve is the most frequently used performance measure for detection methods and the underlying sensor configuration. Common problems are that the ROC curve does not present a single number that can be compared to other systems and that no discrimination between sensor performance and algorithm performance is done. To address the first problem, a number of measures are used in practice, like detection rate at a specific false alarm rate, or area-under-curve. For the second problem, we proposed in a previous paper an information-theoretic method for measuring sensor performance. We now relate the method to the ROC curve, show that it is equivalent to selecting a certain point on the ROC curve, and that this point is easily determined. Our scope is hyperspectral data, studying discrimination between single pixels.

  • 16.
    Andersson, Maria
    et al.
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ntalampiras, Stavros
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Ganchev, Todor
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Rydell, Joakim
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Fakotakis, Nikos
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Fusion of Acoustic and Optical Sensor Data for Automatic Fight Detection in Urban Environments2010In: Information Fusion (FUSION), 2010 13th Conference on, IEEE conference proceedings, 2010, p. 1-8Conference paper (Refereed)
    Abstract [en]

    We propose a two-stage method for detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, in the case when only evidence from one camera is used for detecting the fights, the recognition performance is poor.

  • 17.
    Andersson, Maria
    et al.
    FOI Swedish Defence Research Agency.
    Rydell, Joakim
    FOI Swedish Defence Research Agency.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. FOI Swedish Defence Research Agency.
    Estimation of crowd behaviour using sensor networks and sensor fusion2009Conference paper (Refereed)
    Abstract [en]

    Today, surveillance operators commonly monitor a large number of CCTV screens, trying to solve the complex cognitive tasks of analyzing crowd behavior and detecting threats and other abnormal behavior. Information overload is a rule rather than an exception. Moreover, CCTV footage lacks important indicators revealing certain threats, and can also in other respects be complemented by data from other sensors. This article presents an approach to automatically interpret sensor data and estimate behaviors of groups of people in order to provide the operator with relevant warnings. We use data from distributed heterogeneous sensors (visual cameras and a thermal infrared camera), and process the sensor data using detection algorithms. The extracted features are fed into a hidden Markov model in order to model normal behavior and detect deviations. We also discuss the use of radars for weapon detection.
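    With a hidden Markov model of normal behaviour, deviations can be flagged by thresholding the per-frame log-likelihood of an observation window. A minimal sketch with a discrete-observation HMM follows; the parameters, the symbol alphabet, and the threshold are illustrative, not taken from the paper.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM: log P(obs | model).
    pi: (S,) initial state probs, A: (S, S) transition probs,
    B: (S, V) emission probs, obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    ll = np.log(c)
    alpha = alpha / c                 # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # alpha_t(j) = sum_i alpha_{t-1}(i) A_ij B_j(o)
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c
    return float(ll)

def is_abnormal(obs, pi, A, B, threshold):
    """Flag a window of low-level events as a deviation when its
    per-frame log-likelihood under the normal model drops below threshold."""
    return log_likelihood(obs, pi, A, B) / len(obs) < threshold
```

    The normalisation per step keeps the forward variables in a numerically safe range while accumulating the exact log-likelihood.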

  • 18.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Classification and temporal analysis of district heating leakages in thermal images2014In: Proceedings of The 14th International Symposium on District Heating and Cooling, 2014Conference paper (Other academic)
    Abstract [en]

    District heating pipes are known to degenerate with time and in some cities the pipes have been used for several decades. Due to bad insulation or cracks, energy or media leakages might appear. This paper presents a complete system for large-scale monitoring of district heating networks, including methods for detection, classification and temporal characterization of (potential) leakages. The system analyses thermal infrared images acquired by an aircraft-mounted camera, detecting the areas for which the pixel intensity is higher than normal. Unfortunately, the system also finds many false detections, i.e., warm areas that are not caused by media or energy leakages. Thus, in order to reduce the number of false detections we describe a machine learning method to classify the detections. The results, based on data from three district heating networks, show that we can remove more than half of the false detections. Moreover, we also propose a method to characterize leakages over time, that is, repeating the image acquisition one or a few years later and indicating areas that suffer from an increased energy loss.

  • 19.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Classification of leakage detections acquired by airborne thermography of district heating networks2014In: 2014 8th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS), IEEE , 2014, p. 1-4Conference paper (Refereed)
    Abstract [en]

    We address the problem of reducing the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps. First, we use a building segmentation scheme in order to remove detections on buildings. Second, we extract features from the detections and use a Random forest classifier on the remaining detections. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system.
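    The second step, a Random forest over per-detection features, could be sketched as follows. The use of scikit-learn's RandomForestClassifier is our assumption (the paper names only the classifier family), and the feature vectors and data below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_false_alarm_filter(features, labels, n_trees=100, seed=0):
    """Train a Random forest separating true leakages (label 1) from
    false detections (label 0), given one feature vector per detection
    (e.g., region size, mean/max temperature, shape statistics)."""
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    clf.fit(features, labels)
    return clf
```

    At deployment, detections for which the classifier predicts label 0 would simply be suppressed before the results reach the operator.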

  • 20.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Classifying district heating network leakages in aerial thermal imagery2014Conference paper (Other academic)
    Abstract [en]

    In this paper we address the problem of automatically detecting leakages in underground pipes of district heating networks from images captured by an airborne thermal camera. The basic idea is to classify each relevant image region as a leakage if its temperature exceeds a threshold. This simple approach yields a significant number of false positives. We propose to address this issue by machine learning techniques and provide extensive experimental analysis on real-world data. The results show that this postprocessing step significantly improves the usefulness of the system.

  • 21.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A thermal infrared dataset for evaluation of short-term tracking methods2015Conference paper (Other academic)
    Abstract [en]

    During recent years, thermal cameras have decreased in both size and cost while improving image quality. The area of use for such cameras has expanded with many exciting applications, many of which require tracking of objects. While being subject to extensive research in the visual domain, tracking in thermal imagery has historically been of interest mainly for military purposes. The thermal infrared datasets available for evaluating methods addressing these problems are few, and the existing ones are not challenging enough for today’s tracking algorithms. Therefore, we hereby propose a thermal infrared dataset for evaluation of short-term tracking methods. The dataset consists of 20 sequences which have been collected from multiple sources and the data format used is in accordance with the Visual Object Tracking (VOT) Challenge.

  • 22.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A Thermal Object Tracking Benchmark2015Conference paper (Refereed)
    Abstract [en]

    Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different than in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differ between the visual and thermal benchmarks, confirming the need for the new benchmark.

  • 23.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Generating Visible Spectrum Images from Thermal Infrared2018In: Proceedings 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops CVPRW 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 1224-1233Conference paper (Refereed)
    Abstract [en]

    Transformation of thermal infrared (TIR) images into visual, i.e. perceptually realistic color (RGB) images, is a challenging problem. TIR cameras have the ability to see in scenarios where vision is severely impaired, for example in total darkness or fog, and they are commonly used, e.g., for surveillance and automotive applications. However, interpretation of TIR images is difficult, especially for untrained operators. Enhancing the TIR image display by transforming it into a plausible, visual, perceptually realistic RGB image presumably facilitates interpretation. Existing grayscale-to-RGB, so-called colorization, methods cannot be applied to TIR images directly since those methods only estimate the chrominance and not the luminance. In the absence of applicable colorization methods, we propose two fully automatic TIR to visual color image transformation methods, a two-step and an integrated approach, based on Convolutional Neural Networks. The methods require neither pre- nor postprocessing, do not require any user input, and are robust to image pair misalignments. We show that the methods do indeed produce perceptually realistic results on publicly available data, which is assessed both qualitatively and quantitatively.

  • 24.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Object Tracking in Thermal Infrared Imagery based on Channel Coded Distribution Fields2017Conference paper (Other academic)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.
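As a rough illustration of the distribution-field idea underlying such template trackers: each intensity bin of a patch is exploded into its own layer, and templates are matched by comparing the layers. This is only a sketch; the paper's channel-coded variant and the spatial/bin smoothing of full distribution fields are omitted here.

```python
import numpy as np

def distribution_field(patch, n_bins=8):
    """Explode an 8-bit grayscale patch into n_bins binary layers,
    one layer per intensity bin (smoothing steps omitted)."""
    patch = np.asarray(patch, dtype=float)
    idx = np.clip((patch / 256.0 * n_bins).astype(int), 0, n_bins - 1)
    field = np.zeros((n_bins,) + patch.shape)
    for b in range(n_bins):
        field[b] = (idx == b)
    return field

def df_distance(f1, f2):
    """L1 distance between two distribution fields, used for template matching."""
    return float(np.abs(f1 - f2).sum())
```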

  • 25.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    An Overview of the Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge2016Conference paper (Other academic)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking (VOT-TIR2015) Challenge was organized in conjunction with ICCV2015. It was the first benchmark on short-term, single-target tracking in thermal infrared (TIR) sequences. The challenge aimed at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. It was based on the VOT2013 Challenge, but introduced the following novelties: (i) the utilization of the LTIR (Linköping TIR) dataset, (ii) adaptation of the VOT2013 attributes to thermal data, (iii) an evaluation similar to that of VOT2015. This paper provides an overview of the VOT-TIR2015 Challenge as well as the results of the 24 participating trackers.

  • 26.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Detecting Rails and Obstacles Using a Train-Mounted Thermal Camera2015In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Rasmus R. Paulsen; Kim S. Pedersen, Springer, 2015, p. 492-503Conference paper (Refereed)
    Abstract [en]

    We propose a method for detecting obstacles on the railway in front of a moving train using a monocular thermal camera. The problem is motivated by the large number of collisions between trains and various obstacles, resulting in reduced safety and high costs. The proposed method includes a novel way of detecting the rails in the imagery, as well as a way to detect anomalies on the railway. While the problem at first glance looks similar to road and lane detection, which in the past has been a popular research topic, a closer look reveals that the problem at hand is previously unaddressed. As a consequence, relevant datasets are missing as well, and thus our contribution is two-fold: We propose an approach to the novel problem of obstacle detection on railways and we describe the acquisition of a novel dataset.

  • 27.
    Bešenić, Krešimir
    et al.
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Pandžić, Igor
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Unsupervised Facial Biometric Data Filtering for Age and Gender Estimation2019In: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019), SciTePress, 2019, Vol. 5, p. 209-217Conference paper (Refereed)
    Abstract [en]

    Availability of large training datasets was essential for the recent advancement and success of deep learning methods. Due to the difficulties related to biometric data collection, datasets with age and gender annotations are scarce and usually limited in terms of size and sample diversity. Web-scraping approaches for automatic data collection can produce large amounts of weakly labeled, noisy data. The unsupervised facial biometric data filtering method presented in this paper greatly reduces label noise levels in web-scraped facial biometric data. Experiments on two large state-of-the-art web-scraped facial datasets demonstrate the effectiveness of the proposed method, with respect to training and validation scores, training convergence, and generalization capabilities of trained age and gender estimators.

  • 28.
    Brattberg, Oskar
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Ahlberg, Jörgen
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Analysis of Multispectral Reconnaissance Imagery for Target Detection and Operator Support2006Conference paper (Other academic)
    Abstract [en]

    This paper describes a method to estimate motion in an image sequence acquired using a multispectral airborne sensor. The purpose of the motion estimation is to align the sequentially acquired spectral bands and fuse them into multispectral images. These multispectral images are then analysed and presented in order to support an operator in an air-to-ground reconnaissance scenario.
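The alignment step in such a pipeline can be illustrated by estimating the integer translation between two spectral bands with FFT-based cross-correlation (a hypothetical simplification; the paper estimates more general motion than a pure translation):

```python
import numpy as np

def integer_shift(ref, moving):
    """Estimate the integer (dy, dx) translation that maps `ref` onto `moving`
    via circular cross-correlation computed with the FFT; the correlation
    peak location gives the shift."""
    c = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    h, w = ref.shape
    # wrap into the signed range so a shift of -1 is not reported as h - 1
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```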

  • 29.
    Dornaika, Fadi
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Face and facial feature tracking using deformable models2004In: International Journal of Image and Graphics, ISSN 0219-4678, Vol. 4, no 3, p. 499-532Article in journal (Refereed)
    Abstract [en]

    In this paper, we address the 3D tracking of pose and animation of the human face in monocular image sequences using deformable 3D models. The main contributions of this paper are as follows. First, we show how the robustness and stability of the Active Appearance Algorithm can be improved through the inclusion of a simple motion compensation based on feature correspondence. Second, we develop a new method able to adapt a deformable 3D model to a face in the input image. Central to this method is the decoupling of global head movements and local non-rigid deformations/animations. This decoupling is achieved by, first, estimating the global (rigid) motion using robust statistics and a statistical model for face texture, and then, adapting the 3D model to possible local animations using the concept of the Active Appearance Algorithm. This proposed method constitutes a significant step towards reliable model-based face trackers since the strengths of complementary tracking methodologies are combined.

    Experiments evaluating the effectiveness of the methods are reported. Adaptation and tracking examples demonstrate the feasibility and robustness of the developed methods.

  • 30.
    Dornaika, Fadi
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Face Model Adaptation for Tracking and Active Appearance Model Training2003In: Proceedings of the British Machine Vision Conference / [ed] Richard Harvey and Andrew Bangham, 2003, p. 57.1-57.10Conference paper (Other academic)
    Abstract [en]

    In this paper, we consider the potential of adapting a 3D deformable face model to video sequences. Two adaptation methods are proposed. The first method computes the adaptation using a locally exhaustive and directed search in the parameter space. The second method decouples the estimation of head and facial feature motion. It computes the 3D head pose by combining: (i) a robust feature-based pose estimator, and (ii) a global featureless criterion. The facial animation parameters are then estimated with a combined exhaustive and directed search. Tracking experiments and performance evaluation demonstrate the feasibility and usefulness of the developed methods. These experiments also show that the proposed methods can outperform the adaptation based on a directed continuous search.

  • 31.
    Dornaika, Fadi
    et al.
    Laboratoire Heudiasyc, Université de Technologie de Compiègne, France.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Fast and Reliable Active Appearance Model Search for 3D Face Tracking2004In: IEEE transactions on systems, man and cybernetics. Part B. Cybernetics, ISSN 1083-4419, E-ISSN 1941-0492, Vol. 34, no 4, p. 1838-1853Article in journal (Refereed)
    Abstract [en]

    This paper addresses the three-dimensional (3-D) tracking of pose and animation of the human face in monocular image sequences using active appearance models. The major problem of the classical appearance-based adaptation is the high computational time resulting from the inclusion of a synthesis step in the iterative optimization. Whenever the dimension of the face space is large, a real-time performance cannot be achieved. In this paper, we aim at designing a fast and stable active appearance model search for 3-D face tracking. The main contribution is a search algorithm whose CPU-time is not dependent on the dimension of the face space. Using this algorithm, we show that both the CPU-time and the likelihood of a non-accurate tracking are reduced. Experiments evaluating the effectiveness of the proposed algorithm are reported, as well as method comparisons and tracking of synthetic and real image sequences.
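The classical active appearance model search that this line of work builds on iterates a precomputed linear update on the texture residual, trying a few damping steps per iteration. A minimal sketch under hypothetical names (this is the generic AAM search, not the paper's dimension-independent algorithm):

```python
import numpy as np

def aam_search(p0, residual_fn, R, n_iter=10, steps=(1.0, 0.5, 0.25)):
    """Generic AAM-style search: p <- p - s * R @ r(p), where r(p) is the
    texture residual and R a precomputed update matrix; each iteration tries
    damping factors s until the squared residual decreases."""
    p = np.asarray(p0, dtype=float).copy()
    err = np.sum(residual_fn(p) ** 2)
    for _ in range(n_iter):
        dp = R @ residual_fn(p)
        for s in steps:
            cand = p - s * dp
            e = np.sum(residual_fn(cand) ** 2)
            if e < err:
                p, err = cand, e
                break
        else:
            break  # no step improved the fit; stop
    return p
```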

  • 32.
    Dornaika, Fadi
    et al.
    Computer Vision Centre, Autonomous University of Barcelona, Edifici O, Campus UAB, Bellaterra, Barcelona, Spain.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Fitting 3D Face Models for Tracking and Active Appearance Model Training2006In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 24, no 9, p. 1010-1024Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider fitting a 3D deformable face model to continuous video sequences for the tasks of tracking and training. We propose two appearance-based methods that only require a simple statistical facial texture model and do not require any information about an empirical or analytical gradient matrix, since the best search directions are estimated on the fly. The first method computes the fitting using a locally exhaustive and directed search where the 3D head pose and the facial actions are simultaneously estimated. The second method decouples the estimation of these parameters. It computes the 3D head pose using a robust feature-based pose estimator incorporating a facial texture consistency measure. Then, it estimates the facial actions with an exhaustive and directed search. Fitting and tracking experiments demonstrate the feasibility and usefulness of the developed methods. A performance evaluation also shows that the proposed methods can outperform the fitting based on an active appearance model search adopting a pre-computed gradient matrix. Although the proposed schemes are not as fast as the schemes adopting a directed continuous search, they can tackle many disadvantages associated with such approaches.

  • 33.
    Dornaika, Fadi
    et al.
    CNRS HEUDIASYC – UTC, Compiègne Cedex, France.
    Ahlberg, Jörgen
    Swedish Defence Research Agency, Linköping, Sweden.
    Model-based Head and Facial Motion Tracking2004In: Computer Vision in Human-Computer Interaction: ECCV 2004 Workshop on HCI, Prague, Czech Republic, May 16, 2004, Proceedings / [ed] Sebe, Nicu, Lew, Michael S., Huang, Thomas S., Springer Berlin/Heidelberg, 2004, p. 221-232Conference paper (Refereed)
    Abstract [en]

    This paper addresses the real-time tracking of head and facial motion in monocular image sequences using 3D deformable models. It introduces two methods. The first method only tracks the 3D head pose using a cascade of two stages: the first stage utilizes a robust feature-based pose estimator associated with two consecutive frames, the second stage relies on a Maximum a Posteriori inference scheme exploiting the temporal coherence in both the 3D head motions and facial textures. The facial texture is updated dynamically in order to obtain a simple on-line appearance model. The implementation of this method is kept simple and straightforward. In addition to the 3D head pose tracking, the second method tracks some facial animations using an Active Appearance Model search. Tracking experiments and performance evaluation demonstrate the robustness and usefulness of the developed methods that retain the advantages of both feature-based and appearance-based methods.

  • 34.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Kristan, Matej
    University of Ljubljana, Slovenia.
    Matas, Jiri
    Czech Technical University, Czech Republic.
    Leonardis, Ales
    University of Birmingham, United Kingdom.
    Cehovin, Luka
    University of Ljubljana, Slovenia.
    Fernandez, Gustavo
    Austrian Institute of Technology, Austria.
    Vojir, Tomas
    Czech Technical University, Czech Republic.
    Nebehay, Georg
    Austrian Institute of Technology, Austria.
    Pflugfelder, Roman
    Austrian Institute of Technology, Austria.
    Lukezic, Alan
    University of Ljubljana, Slovenia.
    Garcia-Martin, Alvaro
    Universidad Autonoma de Madrid, Spain.
    Saffari, Amir
    Affectv, United Kingdom.
    Li, Ang
    Xi’an Jiaotong University.
    Solıs Montero, Andres
    University of Ottawa, Canada.
    Zhao, Baojun
    Beijing Institute of Technology, China.
    Schmid, Cordelia
    INRIA Grenoble Rhône-Alpes, France.
    Chen, Dapeng
    Xi’an Jiaotong University.
    Du, Dawei
    University at Albany, USA.
    Shahbaz Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Porikli, Fatih
    Australian National University, Australia.
    Zhu, Gao
    Australian National University, Australia.
    Zhu, Guibo
    NLPR, Chinese Academy of Sciences, China.
    Lu, Hanqing
    NLPR, Chinese Academy of Sciences, China.
    Kieritz, Hilke
    Fraunhofer IOSB, Germany.
    Li, Hongdong
    Australian National University, Australia.
    Qi, Honggang
    University at Albany, USA.
    Jeong, Jae-chan
    Electronics and Telecommunications Research Institute, Korea.
    Cho, Jae-il
    Electronics and Telecommunications Research Institute, Korea.
    Lee, Jae-Yeong
    Electronics and Telecommunications Research Institute, Korea.
    Zhu, Jianke
    Zhejiang University, China.
    Li, Jiatong
    University of Technology, Australia.
    Feng, Jiayi
    Institute of Automation, Chinese Academy of Sciences, China.
    Wang, Jinqiao
    NLPR, Chinese Academy of Sciences, China.
    Kim, Ji-Wan
    Electronics and Telecommunications Research Institute, Korea.
    Lang, Jochen
    University of Ottawa, Canada.
    Martinez, Jose M.
    Universidad Autónoma de Madrid, Spain.
    Xue, Kai
    INRIA Grenoble Rhône-Alpes, France.
    Alahari, Karteek
    INRIA Grenoble Rhône-Alpes, France.
    Ma, Liang
    Harbin Engineering University, China.
    Ke, Lipeng
    University at Albany, USA.
    Wen, Longyin
    University at Albany, USA.
    Bertinetto, Luca
    Oxford University, United Kingdom.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arens, Michael
    Fraunhofer IOSB, Germany.
    Tang, Ming
    Institute of Automation, Chinese Academy of Sciences, China.
    Chang, Ming-Ching
    University at Albany, USA.
    Miksik, Ondrej
    Oxford University, United Kingdom.
    Torr, Philip H S
    Oxford University, United Kingdom.
    Martin-Nieto, Rafael
    Universidad Autónoma de Madrid, Spain.
    Laganiere, Robert
    University of Ottawa, Canada.
    Hare, Sam
    Obvious Engineering, United Kingdom.
    Lyu, Siwei
    University at Albany, USA.
    Zhu, Song-Chun
    University of California, USA.
    Becker, Stefan
    Fraunhofer IOSB, Germany.
    Hicks, Stephen L
    Oxford University, United Kingdom.
    Golodetz, Stuart
    Oxford University, United Kingdom.
    Choi, Sunglok
    Electronics and Telecommunications Research Institute, Korea.
    Wu, Tianfu
    University of California, USA.
    Hubner, Wolfgang
    Fraunhofer IOSB, Germany.
    Zhao, Xu
    Institute of Automation, Chinese Academy of Sciences, China.
    Hua, Yang
    INRIA Grenoble Rhône-Alpes, France.
    Li, Yang
    Zhejiang University, China.
    Lu, Yang
    University of California, USA.
    Li, Yuezun
    University at Albany, USA.
    Yuan, Zejian
    Xi’an Jiaotong University.
    Hong, Zhibin
    University of Technology, Australia.
    The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results2015In: Proceedings of the IEEE International Conference on Computer Vision, Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 639-651Conference paper (Refereed)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.

  • 35.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Kristan, Matej
    University of Ljubljana, Slovenia.
    Matas, Jiri
    Czech Technical University, Czech Republic.
    Leonardis, Ales
    University of Birmingham, England.
    Pflugfelder, Roman
    Austrian Institute Technology, Austria.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Berg, Amanda
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Systemteknik AB, Linköping, Sweden.
    Eldesokey, Abdelrahman
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Cehovin, Luka
    University of Ljubljana, Slovenia.
    Vojir, Tomas
    Czech Technical University, Czech Republic.
    Lukezic, Alan
    University of Ljubljana, Slovenia.
    Fernandez, Gustavo
    Austrian Institute Technology, Austria.
    Petrosino, Alfredo
    Parthenope University of Naples, Italy.
    Garcia-Martin, Alvaro
    University of Autonoma Madrid, Spain.
    Solis Montero, Andres
    University of Ottawa, Canada.
    Varfolomieiev, Anton
    Kyiv Polytech Institute, Ukraine.
    Erdem, Aykut
    Hacettepe University, Turkey.
    Han, Bohyung
    POSTECH, South Korea.
    Chang, Chang-Ming
    University of Albany, GA USA.
    Du, Dawei
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Erdem, Erkut
    Hacettepe University, Turkey.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Porikli, Fatih
    ARC Centre Excellence Robot Vis, Australia; CSIRO, Australia.
    Zhao, Fei
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Bunyak, Filiz
    University of Missouri, MO 65211 USA.
    Battistone, Francesco
    Parthenope University of Naples, Italy.
    Zhu, Gao
    University of Missouri, Columbia, USA.
    Seetharaman, Guna
    US Navy, DC 20375 USA.
    Li, Hongdong
    ARC Centre Excellence Robot Vis, Australia.
    Qi, Honggang
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Bischof, Horst
    Graz University of Technology, Austria.
    Possegger, Horst
    Graz University of Technology, Austria.
    Nam, Hyeonseob
    NAVER Corp, South Korea.
    Valmadre, Jack
    University of Oxford, England.
    Zhu, Jianke
    Zhejiang University, Peoples R China.
    Feng, Jiayi
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Lang, Jochen
    University of Ottawa, Canada.
    Martinez, Jose M.
    University of Autonoma Madrid, Spain.
    Palaniappan, Kannappan
    University of Missouri, MO 65211 USA.
    Lebeda, Karel
    University of Surrey, England.
    Gao, Ke
    University of Missouri, MO 65211 USA.
    Mikolajczyk, Krystian
    Imperial Coll London, England.
    Wen, Longyin
    University of Albany, GA USA.
    Bertinetto, Luca
    University of Oxford, England.
    Poostchi, Mahdieh
    University of Missouri, MO 65211 USA.
    Maresca, Mario
    Parthenope University of Naples, Italy.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arens, Michael
    Fraunhofer IOSB, Germany.
    Tang, Ming
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Baek, Mooyeol
    POSTECH, South Korea.
    Fan, Nana
    Harbin Institute Technology, Peoples R China.
    Al-Shakarji, Noor
    University of Missouri, MO 65211 USA.
    Miksik, Ondrej
    University of Oxford, England.
    Akin, Osman
    Hacettepe University, Turkey.
    Torr, Philip H. S.
    University of Oxford, England.
    Huang, Qingming
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Martin-Nieto, Rafael
    University of Autonoma Madrid, Spain.
    Pelapur, Rengarajan
    University of Missouri, MO 65211 USA.
    Bowden, Richard
    University of Surrey, England.
    Laganiere, Robert
    University of Ottawa, Canada.
    Krah, Sebastian B.
    Fraunhofer IOSB, Germany.
    Li, Shengkun
    University of Albany, GA USA.
    Yao, Shizeng
    University of Missouri, MO 65211 USA.
    Hadfield, Simon
    University of Surrey, England.
    Lyu, Siwei
    University of Albany, GA USA.
    Becker, Stefan
    Fraunhofer IOSB, Germany.
    Golodetz, Stuart
    University of Oxford, England.
    Hu, Tao
    Australian National University, Australia; Chinese Academic Science, Peoples R China.
    Mauthner, Thomas
    Graz University of Technology, Austria.
    Santopietro, Vincenzo
    Parthenope University of Naples, Italy.
    Li, Wenbo
    Lehigh University, PA 18015 USA.
    Huebner, Wolfgang
    Fraunhofer IOSB, Germany.
    Li, Xin
    Harbin Institute Technology, Peoples R China.
    Li, Yang
    Zhejiang University, Peoples R China.
    Xu, Zhan
    Zhejiang University, Peoples R China.
    He, Zhenyu
    Harbin Institute Technology, Peoples R China.
    The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results2016In: Computer Vision – ECCV 2016 Workshops. ECCV 2016. / [ed] Hua G., Jégou H., SPRINGER INT PUBLISHING AG, 2016, p. 824-849Conference paper (Refereed)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking challenge 2016, VOT-TIR2016, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2016 is the second benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2016 challenge is similar to the 2015 challenge; the main difference is the introduction of new, more difficult sequences into the dataset. Furthermore, the VOT-TIR2016 evaluation adopted the improvements regarding overlap calculation in VOT2016. Compared to VOT-TIR2015, a significant general improvement of results has been observed, which partly compensates for the more difficult sequences. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website.

  • 36.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Larsson, Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wadströmer, Niclas
    FOI.
    Ahlberg, Jörgen
    Termisk Systemteknik AB.
    Online Learning of Correspondences between Images2013In: IEEE Transaction on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 35, no 1, p. 118-129Article in journal (Refereed)
    Abstract [en]

    We propose a novel method for iterative learning of point correspondences between image sequences. Points moving on surfaces in 3D space are projected into two images. Given a point in either view, the considered problem is to determine the corresponding location in the other view. The geometry and distortions of the projections are unknown as is the shape of the surface. Given several pairs of point-sets but no access to the 3D scene, correspondence mappings can be found by excessive global optimization or by the fundamental matrix if a perspective projective model is assumed. However, an iterative solution on sequences of point-set pairs with general imaging geometry is preferable. We derive such a method that optimizes the mapping based on Neyman's chi-square divergence between the densities representing the uncertainties of the estimated and the actual locations. The densities are represented as channel vectors computed with a basis function approach. The mapping between these vectors is updated with each new pair of images such that fast convergence and high accuracy are achieved. The resulting algorithm runs in real-time and is superior to state-of-the-art methods in terms of convergence and accuracy in a number of experiments.
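The channel (basis function) representation referred to above can be illustrated for a scalar value using the common cos² kernel, assuming unit-spaced channel centers (a sketch only; the paper applies such encodings to 2-D point locations and learns a mapping between channel vectors):

```python
import numpy as np

def channel_encode(x, centers):
    """Encode scalar x as a channel vector of overlapping cos^2 basis
    functions with unit spacing; for x in the interior of the channel
    range the coefficients sum to a constant (1.5)."""
    d = np.abs(np.asarray(centers, dtype=float) - x)
    return np.where(d < 1.5, np.cos(np.pi * d / 3.0) ** 2, 0.0)

def channel_decode(coeffs, centers):
    """Approximate decoding: weighted mean over the strongest channel
    and its immediate neighbours."""
    i = int(np.argmax(coeffs))
    lo, hi = max(0, i - 1), min(len(coeffs), i + 2)
    w = coeffs[lo:hi]
    return float(np.dot(w, centers[lo:hi]) / w.sum())
```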

  • 37.
    Friman, Ola
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology. Swedish Defence Research Agency, Linköping, Sweden.
    Follo, Peter
    Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology. Termisk Systemteknik AB, Linköping, Sweden.
    Sjökvist, Stefan
    Termisk Systemteknik AB, Linköping, Sweden.
    Methods for Large-Scale Monitoring of District Heating Systems Using Airborne Thermography2014In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 52, no 8, p. 5175-5182Article in journal (Refereed)
    Abstract [en]

    District heating is a common way of providing heat to buildings in urban areas. The heat is carried by hot water or steam and distributed in a network of pipes from a central power plant. It is of great interest to minimize energy losses due to poor pipe insulation or leakages in such district heating networks. As the pipes are generally placed underground, it may be difficult to establish the presence and location of losses and leakages. Toward this end, this work presents methods for large-scale monitoring and detection of leakages by means of remote sensing using thermal cameras, so-called airborne thermography. The methods rely on the fact that underground losses in district heating systems lead to increased surface temperatures. The main contribution of this work is a set of methods for automatic analysis of aerial thermal images to localize leaking district heating pipes. Results and experiences from large-scale leakage detection in several cities in Sweden and Norway are presented.
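    The core idea, that underground losses produce elevated surface temperatures, can be illustrated with a minimal anomaly detector. This global-statistics threshold is a simplified stand-in for the paper's analysis methods, which are not reproduced here:

```python
def hot_anomalies(temps, k=3.0):
    """Flag pixels whose temperature exceeds the scene mean by more than
    k standard deviations -- a minimal stand-in for leakage detection in
    an aerial thermal image given as a list of rows of temperatures."""
    flat = [t for row in temps for t in row]
    n = len(flat)
    mean = sum(flat) / n
    std = (sum((t - mean) ** 2 for t in flat) / n) ** 0.5
    return [(y, x) for y, row in enumerate(temps)
            for x, t in enumerate(row) if std and t > mean + k * std]
```

    A production system would additionally mask out known warm objects (roofs, vehicles) and restrict the search to the mapped pipe network.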

  • 38.
    Friman, Ola
    et al.
    Swedish Defence Research Agency, Linköping, Sweden.
    Tolt, Gustav
    Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Termisk Systemteknik, Linköping, Sweden.
    Illumination and shadow compensation of hyperspectral images using a digital surface model and non-linear least squares estimation2011In: Proc. SPIE 8180, Image and Signal Processing for Remote Sensing XVII / [ed] Lorenzo Bruzzone, SPIE - International Society for Optical Engineering, 2011, p. Art.nr 8180-26-Conference paper (Refereed)
    Abstract [en]

    Object detection and material classification are two central tasks in electro-optical remote sensing and hyperspectral imaging applications. These are challenging problems as the measured spectra in hyperspectral images from satellite or airborne platforms vary significantly depending on the light conditions at the imaged surface, e.g., shadow versus non-shadow. In this work, a Digital Surface Model (DSM) is used to estimate different components of the incident light. These light components are subsequently used to predict what a measured spectrum would look like under different light conditions. The derived method is evaluated using an urban hyperspectral data set with 24 bands in the wavelength range 381.9 nm to 1040.4 nm and a DSM created from 3D LIDAR data acquired simultaneously with the hyperspectral data.
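    The prediction step can be sketched per band: if the direct (sun) and diffuse (sky) irradiance components are known, reflectance can be recovered from a measurement and re-rendered under other lighting. This linear two-component model is an illustrative simplification; the paper uses a DSM and non-linear least squares to estimate the components:

```python
def compensate_shadow(measured, e_direct, e_diffuse, sun_visible):
    """Per-band illumination compensation sketch: recover reflectance from a
    measured radiance given estimated direct/diffuse irradiance components,
    then predict the fully sunlit radiance. sun_visible is 1.0 in direct
    sunlight and 0.0 in full shadow at the measured pixel."""
    out = []
    for m, ed, es in zip(measured, e_direct, e_diffuse):
        incident = sun_visible * ed + es
        r = m / incident if incident else 0.0   # estimated reflectance
        out.append(r * (ed + es))               # predicted radiance in full sun
    return out
```

    With such a model, a shadowed spectrum and a sunlit spectrum of the same material map to the same predicted spectrum, which is what makes classification illumination-invariant.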

  • 39.
    Hamoir, Dominique
    et al.
    Onera – The French Aerospace Lab, Toulouse, France.
    Hespel, Laurent
    Onera – The French Aerospace Lab, Toulouse, France.
    Déliot, Philippe
    Onera – The French Aerospace Lab, Toulouse, France.
    Boucher, Yannick
    Onera – The French Aerospace Lab, Toulouse, France.
    Steinvall, Ove
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Ahlberg, Jörgen
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Larsson, Håkan
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Letalick, Dietmar
    Swedish Defense Research Agency (FOI), Linköping, Sweden.
    Lutzmann, Peter
    Fraunhofer-IOSB, Ettlingen, Germany.
    Repasi, Endre
    Fraunhofer-IOSB, Ettlingen, Germany.
    Ritt, Gunnar
    Fraunhofer-IOSB, Ettlingen, Germany.
    Results of ACTIM: an EDA study on spectral laser imaging2011In: Proc. SPIE 8186, Electro-Optical Remote Sensing, Photonic Technologies, and Applications V / [ed] Gary W. Kamerman; Ove Steinvall; Gary J. Bishop; John D. Gonglewski; Keith L. Lewis; Richard C. Hollins; Thomas J. Merlet, SPIE - International Society for Optical Engineering, 2011, p. Art.nr 8186A-25-Conference paper (Refereed)
    Abstract [en]

    The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising military applications, system analyses, a roadmap, and recommendations. Passive multi- and hyperspectral imaging allows discrimination between materials, but the radiance measured at the sensor is difficult to relate to spectral reflectance due to its dependence on, e.g., solar angle, clouds, and shadows. Active spectral imaging, in contrast, offers complete control of the illumination, thus eliminating these effects. In addition, it allows observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage, etc.), and retrieving polarization information. When 3D, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence, fusing the knowledge of ladar and passive spectral imaging will result in new capabilities. We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long-range observation for identification, (2) mid-range mapping for reconnaissance, and (3) shorter-range perception for threat detection. We present the system analyses that have been performed to confirm the interest, limitations, and requirements of spectral active imaging in these three prioritized applications.

  • 40.
    Hatami, Sepehr
    et al.
    Swerea IVF AB, Mölndal, Sweden.
    Dahl-Jendelin, Anton
    Swerea IVF AB, Mölndal, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Nelsson, Claes
    Termisk Systemteknik AB, Linköping, Sweden.
    Selective Laser Melting Process Monitoring by Means of Thermography2018In: Proceedings of Euro Powder Metallurgy Congress (Euro PM), European Powder Metallurgy Association (EPMA) , 2018, article id 3957771Conference paper (Refereed)
    Abstract [en]

    Selective laser melting (SLM) enables production of highly intricate components. From this point of view, the capabilities of this technology are known to the industry and have been demonstrated in numerous applications. Nonetheless, for serial production purposes the manufacturing industry has so far been reluctant to substitute its conventional methods with SLM. One underlying reason is the lack of simple and reliable process monitoring methods. This study examines the feasibility of using thermography for process monitoring. To this end, an infrared (IR) camera was mounted off-axis to monitor and record the temperature of every layer. The recorded temperature curves are analysed and interpreted with respect to the different stages of the process. Furthermore, the possibility of detecting variations in laser settings by means of thermography is demonstrated. The results show that once thermal patterns are identified, this information can be utilized for in-process and post-process monitoring of SLM production.

  • 41.
    Horney, Tobias
    et al.
    Swedish Defence Research Agency, Sweden.
    Ahlberg, Jörgen
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Silvervarg, Karin
    Swedish Defence Research Agency, Sweden.
    Fransson, Jörgen
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Lantz, Fredrik
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
    An information system for target recognition2004In: Volume 5434 Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications / [ed] Belur V. Dasarathy, SPIE - International Society for Optical Engineering, 2004, p. 163-175Conference paper (Refereed)
    Abstract [en]

    We present an approach to a general decision support system. The aim is to cover the complete process for automatic target recognition, from sensor data to the user interface. The approach is based on a query-based information system, and includes tasks such as feature extraction from sensor data, data association, data fusion, and situation analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target recognition with cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude. The processing of sensor data is performed in two steps. First, several attributes of the (unknown but detected) target are estimated. The attributes include orientation, size, speed, temperature, etc. These estimates are used to select the models of interest in the matching step, where the target is matched against a number of target models, returning a likelihood value for each model. Several methods and sensor data types are used in both steps. The user communicates with the system via a visual user interface, where, for instance, the user can mark an area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query language developed for this type of application, and an ontological system decides which algorithms should be invoked and which sensor data should be used. The output from the sensors is fused by a fusion module, and answers are given back to the user. The user does not need detailed technical knowledge about the sensors (or about which sensors are available), and new sensors and algorithms can easily be plugged into the system.

  • 42.
    Ingemars, Nils
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Feature-based Face Tracking using Extended Kalman Filtering2007Conference paper (Other academic)
    Abstract [en]

    This work examines the possibility of employing, with the computational power of today's consumer hardware, techniques previously developed for 3D tracking of rigid objects, and using them for tracking of deformable objects. Our target objects are human faces in a video conversation pose, and our purpose is to create a deformable face tracker based on a head tracker operating in real time on consumer hardware. We also investigate how to combine model-based and image-based tracking in order to get precise tracking and avoid drift.

  • 43.
    Markus, Nenad
    et al.
    University of Zagreb, Croatia .
    Frljak, Miroslav
    University of Zagreb, Croatia .
    Pandzic, Igor S.
    University of Zagreb, Croatia .
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Eye pupil localization with an ensemble of randomized trees2014In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 47, no 2, p. 578-587Article in journal (Refereed)
    Abstract [en]

    We describe a method for eye pupil localization based on an ensemble of randomized regression trees and use several publicly available datasets for its quantitative and qualitative evaluation. The method compares well with reported state-of-the-art and runs in real-time on hardware with limited processing power, such as mobile devices.

  • 44.
    Markus, Nenad
    et al.
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Gogic, Ivan
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Pandžic, Igor
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Memory-efficient Global Refinement of Decision-Tree Ensembles and its Application to Face Alignment2018Conference paper (Refereed)
    Abstract [en]

    Ren et al. [17] recently introduced a method for aggregating multiple decision trees into a strong predictor by interpreting a path taken by a sample down each tree as a binary vector and performing linear regression on top of these vectors stacked together. They provided experimental evidence that the method offers advantages over the usual approaches for combining decision trees (random forests and boosting). The method truly shines when the regression target is a large vector with correlated dimensions, such as a 2D face shape represented with the positions of several facial landmarks. However, we argue that their basic method is not applicable in many practical scenarios due to large memory requirements. This paper shows how this issue can be solved through the use of quantization and architectural changes of the predictor that maps decision tree-derived encodings to the desired output.
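    The path-encoding scheme described above can be sketched as follows: each tree maps a sample to a one-hot indicator of the leaf it reaches, and the indicators are concatenated into one long binary vector (on which linear regression is then trained). The tuple-based tree format is an illustrative assumption:

```python
def leaf_index(tree, x):
    """Walk a binary decision tree given as nested tuples
    (feature, threshold, left, right); leaves are integer indices."""
    while isinstance(tree, tuple):
        f, thr, left, right = tree
        tree = left if x[f] <= thr else right
    return tree

def encode(forest, n_leaves, x):
    """Binary path encoding: concatenate a one-hot indicator of the
    reached leaf for every tree in the ensemble."""
    v = []
    for tree in forest:
        one_hot = [0.0] * n_leaves
        one_hot[leaf_index(tree, x)] = 1.0
        v.extend(one_hot)
    return v
```

    The memory problem the paper addresses is visible here: with T trees of L leaves and a D-dimensional regression target, the linear layer on top of these encodings has T × L × D weights, which quantization and architectural changes can shrink.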

  • 45.
    Markuš, Nenad
    et al.
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Fratarcangeli, Marco
    Chalmers University of Technology, Dept. of Applied Information Technology, Göteborg, Sweden.
    Pandžić, Igor
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Fast Rendering of Image Mosaics and ASCII Art2015In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 6, p. 251-261Article in journal (Refereed)
    Abstract [en]

    An image mosaic is an assembly of a large number of small images, usually called tiles, taken from a specific dictionary/codebook. When viewed as a whole, the appearance of a single large image emerges, i.e. each tile approximates a small block of pixels. ASCII art is a related (and older) graphic design technique for producing images from printable characters. Although automatic procedures for both of these visualization schemes have been studied in the past, some are computationally heavy and cannot offer real-time and interactive performance. We propose an algorithm able to reproduce the quality of existing non-photorealistic rendering techniques, in particular ASCII art and image mosaics, obtaining large performance speed-ups. The basic idea is to partition the input image into a rectangular grid and use a decision tree to assign a tile from a pre-determined codebook to each cell. Our implementation can process video streams from webcams in real time and it is suitable for modestly equipped devices. We evaluate our technique by generating the renderings of a variety of images and videos, with good results. The source code of our engine is publicly available.
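    The grid-partition idea can be sketched for the ASCII-art case. Note the paper's contribution is a decision tree that makes the cell-to-tile assignment fast; the direct brightness lookup below is a simplified stand-in for that assignment step:

```python
def ascii_render(image, block=2, palette=" .:-=+*#%@"):
    """Render a grayscale image (list of rows, values 0-255) as ASCII art
    by partitioning it into block x block cells and mapping each cell's
    mean brightness to a palette character (bright cells -> dense glyphs
    here; invert the palette for dark-on-light display)."""
    rows = []
    h, w = len(image), len(image[0])
    for y in range(0, h - block + 1, block):
        line = []
        for x in range(0, w - block + 1, block):
            cell = [image[y + dy][x + dx]
                    for dy in range(block) for dx in range(block)]
            mean = sum(cell) / len(cell)
            idx = int(mean / 256 * len(palette))
            line.append(palette[min(idx, len(palette) - 1)])
        rows.append("".join(line))
    return "\n".join(rows)
```

    Replacing the mean-brightness lookup with a tree trained on full pixel blocks is what lets the published method match richer codebooks (image tiles, shaded glyphs) at video rate.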

  • 46.
    Markuš, Nenad
    et al.
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Frljak, Miroslav
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Pandžić, Igor
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    High-performance face tracking2012Conference paper (Refereed)
    Abstract [en]

    Face tracking is an extensively studied field. Nevertheless, it is still a challenge to make a robust and efficient face tracker, especially on mobile devices. This extended abstract briefly describes our implementation of a high-performance multi-platform face and facial feature tracking system. The main characteristics of our approach are that the tracker is fully automatic and works with the majority of faces without any manual initialization. It is robust, resistant to rapid changes in pose and facial expressions, does not suffer from drift, and has modest computational cost. The tracker runs in real time on mobile devices.

  • 47.
    Nawaz, Tahir
    et al.
    Computational Vision Group, Department of Computer Science, University of Reading.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ferryman, James
    Computational Vision Group, Department of Computer Science, University of Reading.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Effective evaluation of privacy protection techniques in visible and thermal imagery2017In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 26, no 5, article id 051408Article in journal (Refereed)
    Abstract [en]

    Privacy protection may be defined as replacing the original content in an image region with new (less intrusive) content whose target appearance information has been modified to make it less recognizable, by applying a privacy protection technique. The development of privacy protection techniques also needs to be complemented with an established objective evaluation method to facilitate their assessment and comparison. Generally, existing evaluation methods rely on subjective judgements, or assume a specific target type in the image data and use target detection and recognition accuracies to assess privacy protection. This work proposes a new annotation-free evaluation method that is neither subjective nor assumes a specific target type. It assesses two key aspects of privacy protection: protection and utility. Protection is quantified as an appearance similarity, and utility as a structural similarity, between the original and privacy-protected image regions. We performed extensive experimentation using six challenging datasets (comprising 12 video sequences), including a new dataset (with six sequences) that contains visible and thermal imagery. The new dataset, called TST-Priv, is made available online for the community. We demonstrate the effectiveness of the proposed method by evaluating six image-based privacy protection techniques, and also compare the proposed method with existing methods.
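    The protection/utility split can be sketched with two simple region-level scores. These are illustrative proxies only (histogram intersection for appearance, pixel correlation for structure), not the paper's actual measures:

```python
def histogram(region, bins=8):
    """Normalized intensity histogram of a flat list of 0-255 pixel values."""
    h = [0] * bins
    for p in region:
        h[min(p * bins // 256, bins - 1)] += 1
    n = len(region)
    return [c / n for c in h]

def appearance_similarity(orig, prot, bins=8):
    """Histogram intersection in [0, 1]; a LOW value suggests strong
    protection (the protected region no longer looks like the original)."""
    ho, hp = histogram(orig, bins), histogram(prot, bins)
    return sum(min(a, b) for a, b in zip(ho, hp))

def structural_similarity(orig, prot):
    """Pearson correlation of pixel values as a crude utility proxy:
    a HIGH value suggests scene structure (edges, layout) is preserved."""
    n = len(orig)
    mo, mp = sum(orig) / n, sum(prot) / n
    cov = sum((a - mo) * (b - mp) for a, b in zip(orig, prot))
    vo = sum((a - mo) ** 2 for a in orig) ** 0.5
    vp = sum((b - mp) ** 2 for b in prot) ** 0.5
    return cov / (vo * vp) if vo and vp else 0.0
```

    A good privacy technique would score low on the first measure and high on the second; neither score needs ground-truth annotations, which is the key property of the evaluation method.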

  • 48.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Sensor Informatics Group, Swedish Defence Research Agency (FOI), Linköping.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wadströmer, Niclas
    Sensor Informatics Group, Swedish Defence Research Agency (FOI), Linköping.
    Co-alignment of Aerial Push-broom Strips using Trajectory Smoothness Constraints2010Conference paper (Other academic)
    Abstract [en]

    We study the problem of registering a sequence of scan lines (a strip) from an airborne push-broom imager to another sequence partly covering the same area. Such a registration has to compensate for deformations caused by attitude and speed changes in the aircraft. The registration is challenging, as both strips contain such deformations. Our algorithm estimates the 3D rotation of the camera for each scan line by parametrising it as a linear spline with a number of knots evenly distributed in one of the strips. The rotations are estimated from correspondences between strips of the same area. Once the rotations are known, they can be compensated for, and each line of pixels can be transformed such that the ground traces of the two strips are registered with respect to each other.
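    The linear-spline parametrisation can be sketched in one dimension. The paper estimates full 3D rotations from correspondences; this shows only how a spline with a few knots yields a smooth per-scan-line value:

```python
def spline_rotation(knot_lines, knot_angles, line):
    """Linearly interpolate a rotation angle for a given scan line from
    spline knots placed along the strip. Outside the knot range, the
    nearest knot value is held constant."""
    if line <= knot_lines[0]:
        return knot_angles[0]
    if line >= knot_lines[-1]:
        return knot_angles[-1]
    for i in range(len(knot_lines) - 1):
        a, b = knot_lines[i], knot_lines[i + 1]
        if a <= line <= b:
            t = (line - a) / (b - a)
            return (1 - t) * knot_angles[i] + t * knot_angles[i + 1]
```

    Keeping the number of knots far smaller than the number of scan lines is what enforces the trajectory smoothness constraint of the title: the optimizer only has the knot values as free parameters.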

  • 49.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Wadströmer, Niclas
    FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Co-aligning Aerial Hyperspectral Push-broom Strips for Change Detection2010In: Proc. SPIE 7835, Electro-Optical Remote Sensing, Photonic Technologies, and Applications IV / [ed] Gary W. Kamerman; Ove Steinvall; Keith L. Lewis; Richard C. Hollins; Thomas J. Merlet; Gary J. Bishop; John D. Gonglewski, SPIE - International Society for Optical Engineering, 2010, p. Art.nr. 7835B-36-Conference paper (Refereed)
    Abstract [en]

    We have performed a field trial with an airborne push-broom hyperspectral sensor, making several flights over the same area and with known changes (e.g., moved vehicles) between the flights. Each flight results in a sequence of scan lines forming an image strip, and in order to detect changes between two flights, the two resulting image strips must be geometrically aligned and radiometrically corrected. The focus of this paper is the geometrical alignment, and we propose an image- and gyro-based method for geometric co-alignment (registration) of two image strips. The method is particularly useful when the sensor is not stabilized, thus reducing the need for expensive mechanical stabilization. The method works in several steps, including gyro-based rectification, global alignment using SIFT matching, and a local alignment using KLT tracking. Experimental results are shown but not quantified, as ground truth is, by the nature of the trial, lacking.

  • 50.
    Runnemalm, Anna
    et al.
    Division of Engineering Science, University West, Trollhättan, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Appelgren, Anders
    Division of Engineering Science, University West, Trollhättan, Sweden.
    Sjökvist, Stefan
    Termisk Systemteknik AB, Linköping, Sweden.
    Automatic Inspection of Spot Welds by Thermography2014In: Journal of nondestructive evaluation, ISSN 0195-9298, E-ISSN 1573-4862, Vol. 33, no 3, p. 398-406Article in journal (Refereed)
    Abstract [en]

    Interest in thermography as a method for spot weld inspection has increased in recent years, since it is a full-field method suitable for automatic inspection. Thermography systems can be developed in different ways, with different physical setups, excitation sources, and image analysis algorithms. In this paper we suggest a single-sided setup of a thermography system using a flash lamp as the excitation source. The analysis algorithm aims to find the spatial region in the acquired images corresponding to the successfully welded area, i.e., the nugget size. Experiments show that the system is able to detect spot welds, measure the nugget diameter, and, based on this information, also distinguish a spot weld from a stick weld. The system is capable of inspecting more than four spot welds per minute, and has potential as an automatic non-destructive system for spot weld inspection. The development opportunities are significant, since the algorithm used in the initial analysis is rather simple. Moreover, further evaluation of alternative excitation sources could potentially improve the performance.
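    The nugget-size step can be illustrated with a single-frame sketch: threshold the thermal image and convert the hot area to an equivalent-circle diameter. The paper's algorithm works on the acquired image sequence after flash excitation; this one-frame threshold is a simplified stand-in:

```python
import math

def nugget_diameter(frame, threshold, pixel_size_mm):
    """Estimate a spot-weld nugget diameter from one thermal frame:
    count pixels above a temperature threshold and convert that area
    to the diameter of a circle of equal area."""
    area_px = sum(1 for row in frame for t in row if t > threshold)
    area_mm2 = area_px * pixel_size_mm ** 2
    return 2.0 * math.sqrt(area_mm2 / math.pi)
```

    Comparing the estimated diameter against a specified minimum is then enough to separate an acceptable spot weld from an undersized stick weld.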
