liu.se: Search for publications in DiVA
1 - 50 of 475
  • 1.
    Abels, Esther
    et al.
    PathAI, MA USA.
    Pantanowitz, Liron
    Univ Pittsburgh, PA USA.
    Aeffner, Famke
    Amgen Inc, CA USA.
    Zarella, Mark D.
    Drexel Univ, PA 19104 USA.
    van der Laak, Jeroen
    Linköpings universitet, Institutionen för medicin och hälsa, Avdelningen för radiologiska vetenskaper. Linköpings universitet, Medicinska fakulteten. Region Östergötland, Diagnostikcentrum, Klinisk patologi. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Radboud Univ Nijmegen, Netherlands.
    Bui, Marilyn M.
    H Lee Moffitt Canc Ctr and Res Inst, FL USA.
    Vemuri, Venkata N. P.
    Chan Zuckerberg Biohub, CA USA.
    Parwani, Anil V.
    Ohio State Univ, OH 43210 USA.
    Gibbs, Jeff
    Hyman Phelps and McNamara PC, DC USA.
    Agosto-Arroyo, Emmanuel
    H Lee Moffitt Canc Ctr and Res Inst, FL USA.
    Beck, Andrew H.
    PathAI, MA USA.
    Kozlowski, Cleopatra
    Genentech Inc, CA 94080 USA.
    Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association (2019). In: Journal of Pathology, ISSN 0022-3417, E-ISSN 1096-9896. Article, research review (Refereed)
    Abstract [en]

    In this white paper, experts from the Digital Pathology Association (DPA) define terminology and concepts in the emerging field of computational pathology, with a focus on its application to histology images analyzed together with their associated patient data to extract information. This review offers a historical perspective and describes the potential clinical benefits from research and applications in this field, as well as significant obstacles to adoption. Best practices for implementing computational pathology workflows are presented. These include infrastructure considerations, acquisition of training data, quality assessments, as well as regulatory, ethical, and cyber-security concerns. Recommendations are provided for regulators, vendors, and computational pathology practitioners in order to facilitate progress in the field. (c) 2019 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland.

  • 2.
    Abramian, David
    et al.
    Linköpings universitet, Institutionen för medicinsk teknik, Avdelningen för medicinsk teknik. Linköpings universitet, Tekniska fakulteten.
    Eklund, Anders
    Linköpings universitet, Institutionen för medicinsk teknik, Avdelningen för medicinsk teknik. Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    REFACING: RECONSTRUCTING ANONYMIZED FACIAL FEATURES USING GANS (2019). In: 2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), IEEE, 2019, p. 1104-1108. Conference paper (Refereed)
    Abstract [en]

    Anonymization of medical images is necessary for protecting the identity of the test subjects, and is therefore an essential step in data sharing. However, recent developments in deep learning may raise the bar on the amount of distortion that needs to be applied to guarantee anonymity. To test such possibilities, we have applied the novel CycleGAN unsupervised image-to-image translation framework on sagittal slices of T1 MR images, in order to reconstruct facial features from anonymized data. We applied the CycleGAN framework on both face-blurred and face-removed images. Our results show that face blurring may not provide adequate protection against malicious attempts at identifying the subjects, while face removal provides more robust anonymization, but is still partially reversible.

  • 3.
    Ahlberg, Jörgen
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Arsic, Dejan
    Munich University of Technology, Germany.
    Ganchev, Todor
    University of Patras, Greece.
    Linderhed, Anna
    FOI Swedish Defence Research Agency.
    Menezes, Paolo
    University of Coimbra, Portugal.
    Ntalampiras, Stavros
    University of Patras, Greece.
    Olma, Tadeusz
    MARAC S.A., Greece.
    Potamitis, Ilyas
    Technological Educational Institute of Crete, Greece.
    Ros, Julien
    Probayes SAS, France.
    Prometheus: Prediction and interpretation of human behaviour based on probabilistic structures and heterogeneous sensors (2008). Conference paper (Refereed)
    Abstract [en]

    The on-going EU funded project Prometheus (FP7-214901) aims at establishing a general framework which links fundamental sensing tasks to automated cognition processes enabling interpretation and short-term prediction of individual and collective human behaviours in unrestricted environments as well as complex human interactions. To achieve the aforementioned goals, the Prometheus consortium works on the following core scientific and technological objectives:

    1. sensor modeling and information fusion from multiple, heterogeneous perceptual modalities;

    2. modeling, localization, and tracking of multiple people;

    3. modeling, recognition, and short-term prediction of continuous complex human behavior.

  • 4.
    Ahlberg, Jörgen
    et al.
    Linköpings universitet, Institutionen för systemteknik, Informationskodning. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Berg, Amanda
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Evaluating Template Rescaling in Short-Term Single-Object Tracking (2015). Conference paper (Refereed)
    Abstract [en]

    In recent years, short-term single-object tracking has emerged as a popular research topic, as it constitutes the core of more general tracking systems. Many such tracking methods are based on matching a part of the image with a template that is learnt online and represented by, for example, a correlation filter or a distribution field. In order for such a tracker to be able to not only find the position, but also the scale, of the tracked object in the next frame, some kind of scale estimation step is needed. This step is sometimes separate from the position estimation step, but is nevertheless jointly evaluated in de facto benchmarks. However, for practical as well as scientific reasons, the scale estimation step should be evaluated separately – for example, there might in certain situations be other methods more suitable for the task. In this paper, we describe an evaluation method for scale estimation in template-based short-term single-object tracking, and evaluate two state-of-the-art tracking methods where estimation of scale and position are separable.

  • 5.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Linköpings universitet, Institutionen för systemteknik, Bildkodning. Linköpings universitet, Tekniska högskolan.
    Efficient active appearance model for real-time head and facial feature tracking (2003). In: Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, IEEE conference proceedings, 2003, p. 173-180. Conference paper (Refereed)
    Abstract [en]

    We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and out-performance of the developed framework.

  • 6.
    Ahlberg, Jörgen
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Computer Vision Center, Universitat Autonoma de Barcelona, Bellaterra, Spain.
    Parametric Face Modeling and Tracking (2005). In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, Springer-Verlag New York, 2005, p. 65-87. Chapter in book, part of anthology (Other academic)
  • 7.
    Ahlberg, Jörgen
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildkodning. Linköpings universitet, Tekniska högskolan. Div. of Sensor Technology, Swedish Defence Research Agency, Linköping, Sweden.
    Forchheimer, Robert
    Linköpings universitet, Institutionen för systemteknik, Bildkodning. Linköpings universitet, Tekniska högskolan.
    Face tracking for model-based coding and face animation (2003). In: International journal of imaging systems and technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 13, no. 1, p. 8-22. Article in journal (Refereed)
    Abstract [en]

    We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.

  • 8.
    Ahlberg, Jörgen
    et al.
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Klasén, Lena
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Surveillance Systems for Urban Crisis Management (2005). Conference paper (Other academic)
    Abstract [en]

    We present a concept for combining 3D models and multiple heterogeneous sensors into a surveillance system enabling superior situation awareness. The concept has many military as well as civilian applications. A key issue is the use of a 3D environment model of the area to be surveyed, typically an urban area. In addition to the 3D model, the area of interest is monitored over time using multiple heterogeneous sensors, such as optical, acoustic, and/or seismic sensors. Data and analysis results from the sensors are visualized in the 3D model, thus putting them in a common reference frame and making their spatial and temporal relations obvious. The result is highlighted by an example where data from different sensor systems is integrated in a 3D model of a Swedish urban area.

  • 9.
    Ahlberg, Jörgen
    et al.
    Linköpings universitet, Institutionen för systemteknik, Informationskodning. Linköpings universitet, Tekniska högskolan.
    Li, Haibo
    Linköpings universitet, Institutionen för systemteknik, Informationskodning. Linköpings universitet, Tekniska högskolan.
    Representing and Compressing MPEG-4 Facial Animation Parameters using Facial Action Basis Functions (1999). In: IEEE Transactions on Circuits and Systems, ISSN 0098-4094, E-ISSN 1558-1276, Vol. 9, no. 3, p. 405-410. Article in journal (Refereed)
    Abstract [en]

    In model-based, or semantic, coding, parameters describing the nonrigid motion of objects, e.g., the mimics of a face, are of crucial interest. The facial animation parameters (FAPs) specified in MPEG-4 compose a very rich set of such parameters, allowing a wide range of facial motion. However, the FAPs are typically correlated and also constrained in their motion due to the physiology of the human face. We seek here to utilize this spatial correlation to achieve efficient compression. As it does not introduce any interframe delay, the method is suitable for interactive applications, e.g., videophone and interactive video, where low delay is a vital issue.
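
    The compression idea described above, exploiting the spatial correlation between facial animation parameters by transmitting only coefficients in a low-dimensional basis, can be illustrated with the rough sketch below. It substitutes a PCA basis learned from synthetic data for the paper's facial action basis functions; the dimensions, data, and the choice of PCA are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical training set: rows are frames, columns are 68 MPEG-4 FAP values.
rng = np.random.default_rng(0)
fap_train = rng.normal(size=(500, 68)) @ rng.normal(size=(68, 68))  # correlated columns

# Learn an orthonormal basis from the training frames (PCA via SVD).
mean = fap_train.mean(axis=0)
_, _, vt = np.linalg.svd(fap_train - mean, full_matrices=False)
k = 8                      # keep only k basis vectors (the compressed representation)
basis = vt[:k]             # shape (k, 68)

def encode(fap_frame):
    """Project one FAP vector onto the k-dimensional basis."""
    return basis @ (fap_frame - mean)

def decode(coeffs):
    """Reconstruct an approximate FAP vector from k coefficients."""
    return mean + basis.T @ coeffs

frame = fap_train[0]
rec = decode(encode(frame))
print("coefficients sent:", k, "instead of", frame.size)
print("reconstruction error:", np.linalg.norm(frame - rec))
```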

  • 10.
    Ahlberg, Jörgen
    et al.
    Termisk Systemteknik AB Linköping, Sweden; Visage Technologies AB Linköping, Sweden.
    Markuš, Nenad
    Human-Oriented Technologies Laboratory, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia.
    Berg, Amanda
    Termisk Systemteknik AB, Linköping, Sweden.
    Multi-person fever screening using a thermal and a visual camera (2015). Conference paper (Other academic)
    Abstract [en]

    We propose a system to automatically measure the body temperature of persons as they pass. In contrast to existing systems, the persons do not need to stop and look into a camera one-by-one. Instead, their eye corners are automatically detected and the temperatures therein measured using a thermal camera. The system handles multiple simultaneous persons and can thus be used where a flow of people pass, such as at airport gates.

  • 11.
    Ahlberg, Jörgen
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Glana Sensors AB, Sweden.
    Renhorn, Ingmar
    Glana Sensors AB, Sweden.
    Chevalier, Tomas
    Scienvisic AB, Sweden.
    Rydell, Joakim
    FOI, Swedish Defence Research Agency, Sweden.
    Bergström, David
    FOI, Swedish Defence Research Agency, Sweden.
    Three-dimensional hyperspectral imaging technique (2017). In: ALGORITHMS AND TECHNOLOGIES FOR MULTISPECTRAL, HYPERSPECTRAL, AND ULTRASPECTRAL IMAGERY XXIII / [ed] Miguel Velez-Reyes; David W. Messinger, SPIE - International Society for Optical Engineering, 2017, Vol. 10198, article id 1019805. Conference paper (Refereed)
    Abstract [en]

    Hyperspectral remote sensing based on unmanned airborne vehicles is a field increasing in importance. The combined functionality of simultaneous hyperspectral and geometric modeling is less developed. A configuration has been developed that enables the reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high frame rate, high resolution camera enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single and complete 3D hyperspectral model. In this paper, we describe the camera and illustrate capabilities and difficulties through real-world experiments.

  • 12.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar G.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Wadströmer, Niclas
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    An information measure of sensor performance and its relation to the ROC curve (2010). In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen; Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, Art. no. 7695-72. Conference paper (Refereed)
    Abstract [en]

    The ROC curve is the most frequently used performance measure for detection methods and the underlying sensor configuration. Common problems are that the ROC curve does not present a single number that can be compared to other systems and that no discrimination between sensor performance and algorithm performance is done. To address the first problem, a number of measures are used in practice, like detection rate at a specific false alarm rate, or area-under-curve. For the second problem, we proposed in a previous paper [1] an information theoretic method for measuring sensor performance. We now relate the method to the ROC curve, show that it is equivalent to selecting a certain point on the ROC curve, and that this point is easily determined. Our scope is hyperspectral data, studying discrimination between single pixels.
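
    To make the summary numbers mentioned in the abstract concrete, the sketch below computes an empirical ROC curve, its area under the curve, and the detection rate at a fixed false alarm rate from synthetic detector scores; it is a generic illustration, not the information measure proposed in the paper.

```python
import numpy as np

def roc_curve(scores, labels):
    """Empirical ROC: sweep a threshold over sorted scores.
    labels: 1 = target pixel, 0 = background pixel."""
    order = np.argsort(-scores)
    labels = labels[order]
    tp = np.cumsum(labels)                 # true positives at each threshold
    fp = np.cumsum(1 - labels)             # false positives at each threshold
    tpr = tp / labels.sum()                # detection rate
    fpr = fp / (labels == 0).sum()         # false alarm rate
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

# Synthetic hyperspectral-like detector output: targets score higher on average.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(1.0, 1.0, 200),    # target pixels
                         rng.normal(0.0, 1.0, 2000)])  # background pixels
labels = np.concatenate([np.ones(200), np.zeros(2000)])

fpr, tpr = roc_curve(scores, labels)
auc = np.trapz(tpr, fpr)                      # area under the ROC curve
pd_at_1pct = tpr[np.searchsorted(fpr, 0.01)]  # detection rate at 1% false alarms
print(f"AUC = {auc:.3f}, Pd @ FAR 1% = {pd_at_1pct:.3f}")
```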

  • 13.
    Ahlman, Gustav
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Improved Temporal Resolution Using Parallel Imaging in Radial-Cartesian 3D functional MRI (2011). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    MRI (Magnetic Resonance Imaging) is a medical imaging method that uses magnetic fields in order to retrieve images of the human body. This thesis revolves around a novel acquisition method of 3D fMRI (functional Magnetic Resonance Imaging) called PRESTO-CAN that uses a radial pattern in order to sample the (kx,kz)-plane of k-space (the frequency domain), and a Cartesian sample pattern in the ky-direction. The radial sample pattern allows for a denser sampling of the central parts of k-space, which contain the most basic frequency information about the structure of the recorded object. This allows for higher temporal resolution to be achieved compared with other sampling methods, since fewer total samples are needed in order to retrieve enough information about how the object has changed over time. Since fMRI is mainly used for monitoring blood flow in the brain, increased temporal resolution means that fast changes in brain activity can be tracked more efficiently.

    The temporal resolution can be further improved by reducing the time needed for scanning, which in turn can be achieved by applying parallel imaging. One such parallel imaging method is SENSE (SENSitivity Encoding). The scan time is reduced by decreasing the sampling density, which causes aliasing in the recorded images. The aliasing is removed by the SENSE method by utilizing the extra information provided by the fact that multiple receiver coils with differing sensitivities are used during the acquisition. By measuring the sensitivities of the respective receiver coils and solving an equation system with the aliased images, it is possible to calculate how they would have looked without aliasing.

    In this master thesis, SENSE has been successfully implemented in PRESTO-CAN. By using normalized convolution in order to refine the sensitivity maps of the receiver coils, images of satisfying quality could be reconstructed when reducing the k-space sample rate by a factor of 2, and images of relatively good quality also when the sample rate was reduced by a factor of 4. In this way, this thesis has been able to contribute to the improvement of the temporal resolution of the PRESTO-CAN method.
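
    The SENSE unfolding step described above, solving a small equation system per aliased pixel using the measured coil sensitivities, might look roughly like the sketch below for a reduction factor of 2. The coil count, the random sensitivities, and the noise-free setup are illustrative assumptions, not the PRESTO-CAN implementation.

```python
import numpy as np

def sense_unfold(aliased, sens, reg=1e-6):
    """Unfold one aliased pixel for reduction factor R.
    aliased: (n_coils,) complex values measured at this pixel.
    sens:    (n_coils, R) coil sensitivities at the R true pixel locations."""
    # Regularized least squares: rho = (S^H S + reg I)^-1 S^H a
    lhs = sens.conj().T @ sens + reg * np.eye(sens.shape[1])
    rhs = sens.conj().T @ aliased
    return np.linalg.solve(lhs, rhs)

# Toy example: 4 coils, R = 2 (two image pixels fold onto one).
rng = np.random.default_rng(2)
sens = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))  # coil sensitivity maps
rho_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])                  # true pixel values
aliased = sens @ rho_true                                       # what the coils measure

rho_est = sense_unfold(aliased, sens)
print(np.allclose(rho_est, rho_true, atol=1e-3))  # True: both pixels recovered
```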

  • 14.
    Akin, H. Levent
    et al.
    Bogazici University, Turkey.
    Ito, Nobuhiro
    Aichi Institute of Technology, Japan.
    Jacoff, Adam
    National Institute of Standards, USA.
    Kleiner, Alexander
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Pellenz, Johannes
    V&R Vision & Robotics GmbH, Germany.
    Visser, Arnoud
    University of Amsterdam, Holland.
    RoboCup Rescue Robot and Simulation Leagues (2013). In: The AI Magazine, ISSN 0738-4602, Vol. 34, no. 1. Article in journal (Refereed)
    Abstract [en]

    The RoboCup Rescue Robot and Simulation competitions have been held since 2000. The experience gained during these competitions has increased the maturity level of the field, which allowed deploying robots after real disasters (e.g. Fukushima Daiichi nuclear disaster). This article provides an overview of these competitions and highlights the state of the art and the lessons learned.

  • 15.
    Albonico, Andrea
    et al.
    Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada.
    Furubacke, Amanda
    Linköpings universitet, Medicinska fakulteten. Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada.
    Barton, Jason J. S.
    Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada.
    Oruc, Ipek
    Department of Ophthalmology and Visual Sciences, University of British Columbia, Canada; Program in Neuroscience, University of British Columbia, Canada.
    Perceptual efficiency and the inversion effect for faces, words and houses (2018). In: Vision Research, ISSN 0042-6989, E-ISSN 1878-5646, Vol. 153, p. 91-97. Article in journal (Refereed)
    Abstract [en]

    Face and visual word recognition are two key forms of expert visual processing. In the domain of object recognition, it has been suggested that expert processing is characterized by the use of different mechanisms from the ones involved in general object recognition. It has been suggested that one traditional marker of expert processing is the inversion effect. To investigate whether face and word recognition differ from general object recognition, we compared the effect of inversion on the perceptual efficiency of face and visual word recognition as well as on the recognition of a third, non-expert object category, houses. From the comparison of identification contrast thresholds to an ideal observer, we derived the efficiency and equivalent input noise of stimulus processing in both upright and inverted orientations. While efficiency reflects the efficacy in sampling the available information, equivalent input noise is associated with the degradation of the stimulus signal within the visual system. We hypothesized that large inversion effects for efficiency and/or equivalent input noise should characterize expert high-level processes, and asked whether this would be true for both faces and words, but not houses. However, we found that while face recognition efficiency was profoundly reduced by inversion, the efficiency of word and house recognition was minimally influenced by the orientation manipulation. Inversion did not affect equivalent input noise. These results suggest that even though faces and words are both considered expert processes, only the efficiency of the mechanism involved in face recognition is sensitive to orientation.

  • 16.
    Amundin, Mats
    et al.
    Kolmården Wildlife Park.
    Hållsten, Henrik
    Filosofiska institutionen, Stockholms universitet.
    Eklund, Robert
    Linköpings universitet, Institutionen för kultur och kommunikation, Avdelningen för språk och kultur. Linköpings universitet, Filosofiska fakulteten.
    Karlgren, Jussi
    Kungliga Tekniska Högskolan.
    Molinder, Lars
    Carnegie Investment Bank, Sweden.
    A proposal to use distributional models to analyse dolphin vocalisation (2017). In: Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, VIHAR 2017 / [ed] Angela Dassow, Ricard Marxer & Roger K. Moore, 2017, p. 31-32. Conference paper (Refereed)
    Abstract [en]

    This paper gives a brief introduction to the starting points of an experimental project to study dolphin communicative behaviour using distributional semantics, with methods implemented for the large scale study of human language.

  • 17.
    Andersson, Adam
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Range Gated Viewing with Underwater Camera (2005). Independent thesis Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The purpose of this master thesis, performed at FOI, was to evaluate a range gated underwater camera for the application of identifying bottom objects. The master thesis was supported by FMV within the framework of “arbetsorder Systemstöd minjakt (Jan Andersson, KC Vapen)”. The central part has been field trials, which have been performed in both turbid and clear water. Conclusions about the performance of the camera system have been drawn, based on resolution and contrast measurements during the field trials. Laboratory testing has also been done to measure system specific parameters, such as the effective gate profile and camera gate distances.

    The field trials show that images can be acquired at significantly longer distances with the tested gated camera, compared to a conventional video camera. The distance where the target can be detected is increased by a factor of 2. For images suitable for mine identification, the increase is about 1.3. However, studies of the performance of other range gated systems show that the increase in range for mine identification can be about 1.6. Gated viewing has also been compared to other technical solutions for underwater imaging.

  • 18.
    Andersson, Anna
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap.
    Eklund, Klara
    Linköpings universitet, Institutionen för teknik och naturvetenskap.
    A Study of Oriented Mottle in Halftone Print (2007). Independent thesis Advanced level (degree of Magister), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Coated solid bleached board belongs to the top-segment of paperboards. One important property of paperboard is the printability. In this diploma work a specific print defect, oriented mottle, has been studied in association with Iggesund Paperboard. The objectives of the work were to develop a method for analysis of the dark and light areas of oriented mottle, to analyse these areas, and to clarify the effect from the print, coating and paperboard surface related factors. This would clarify the origin of oriented mottle and predict oriented mottle on unprinted paperboard. The objectives were fulfilled by analysing the areas between the dark halftone dots, the amount of coating and the ink penetration, the micro roughness and the topography. The analysis of the areas between the dark halftone dots was performed on several samples and the results were compared regarding different properties. The other methods were only applied on a limited selection of samples. The results from the study showed that the intensity differences between the dark halftone dots were enhanced in the dark areas, the coating amount was lower in the dark areas and the ink did not penetrate into the paperboard. The other results showed that areas with high transmission corresponded to dark areas, smoother micro roughness, lower coating amount and high topography. A combination of the information from these properties might be used to predict oriented mottle. The oriented mottle is probably an optical phenomenon in half tone prints, and originates from variations in the coating and other paperboard properties.

  • 19.
    Andersson, Christian
    Linköpings universitet, Institutionen för systemteknik.
    Simulering av filtrerade skärmfärger (2005). Independent thesis Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This report presents a working model for simulation of what happens to colors displayed on screens when they are observed through optical filters. The results of the model can be used to visually, on one screen, simulate another screen with an applied optical filter. The model can also produce CIE color difference values for the simulated screen colors. The model is data driven and requires spectral measurements for at least the screen to be simulated and the physical filters that will be used. The model is divided into three separate modules or steps, where each of the modules can be easily replaced by alternative implementations or solutions. Results from tests performed show that the model can be used for prototyping of optical filters, even though the tests of the specific algorithms chosen show there is room for improvement in quality. There is nothing that indicates that future work with this model would not produce better quality in its results.
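
    A minimal sketch of the data-driven idea in the abstract: multiply the measured screen emission spectrum by the filter's spectral transmittance, then integrate against color matching functions to obtain tristimulus values from which a CIE color difference can be computed. The wavelength grid, the spectra, and the crude Gaussian stand-ins for the CIE functions below are placeholders, not the thesis data or its three-module pipeline.

```python
import numpy as np

wl = np.arange(380, 781, 5)  # wavelength grid in nm (placeholder sampling)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Placeholder spectral data (would come from spectroradiometer measurements).
screen_emission = 1.0 * gaussian(610, 20) + 0.8 * gaussian(540, 25) + 0.6 * gaussian(450, 15)
filter_transmittance = 0.9 * gaussian(550, 60)          # a greenish optical filter

# Crude stand-ins for the CIE 1931 color matching functions.
xbar = 1.06 * gaussian(600, 38) + 0.36 * gaussian(442, 16)
ybar = 1.00 * gaussian(556, 47)
zbar = 1.78 * gaussian(448, 21)

def to_xyz(spectrum):
    """Integrate a spectral power distribution against the matching functions."""
    return np.array([np.trapz(spectrum * cmf, wl) for cmf in (xbar, ybar, zbar)])

xyz_unfiltered = to_xyz(screen_emission)
xyz_filtered = to_xyz(screen_emission * filter_transmittance)  # filter applied per wavelength
print("XYZ without filter:", xyz_unfiltered.round(2))
print("XYZ with filter:   ", xyz_filtered.round(2))
```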

  • 20.
    Andersson, Maria
    et al.
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ntalampiras, Stavros
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Ganchev, Todor
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Rydell, Joakim
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Fakotakis, Nikos
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Fusion of Acoustic and Optical Sensor Data for Automatic Fight Detection in Urban Environments (2010). In: Information Fusion (FUSION), 2010 13th Conference on, IEEE conference proceedings, 2010, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    We propose a two-stage method for detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, in the case when only evidence from one camera is used for detecting the fights, the recognition performance is poor.

  • 21.
    Andersson, Maria
    et al.
    FOI Swedish Defence Research Agency.
    Rydell, Joakim
    FOI Swedish Defence Research Agency.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. FOI Swedish Defence Research Agency.
    Estimation of crowd behaviour using sensor networks and sensor fusion (2009). Conference paper (Refereed)
    Abstract [en]

    Commonly, surveillance operators are today monitoring a large number of CCTV screens, trying to solve the complex cognitive tasks of analyzing crowd behavior and detecting threats and other abnormal behavior. Information overload is a rule rather than an exception. Moreover, CCTV footage lacks important indicators revealing certain threats, and can also in other respects be complemented by data from other sensors. This article presents an approach to automatically interpret sensor data and estimate behaviors of groups of people in order to provide the operator with relevant warnings. We use data from distributed heterogeneous sensors (visual cameras and a thermal infrared camera), and process the sensor data using detection algorithms. The extracted features are fed into a hidden Markov model in order to model normal behavior and detect deviations. We also discuss the use of radars for weapon detection.
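
    The pipeline sketched in the abstract, detection features fed into a hidden Markov model trained on normal behaviour, with deviations flagged by low likelihood, could look roughly as below. The use of the hmmlearn package, the two-dimensional features, and the threshold rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed third-party dependency

rng = np.random.default_rng(3)

# Placeholder feature sequences extracted from camera detections
# (e.g., crowd density and mean motion magnitude per frame).
normal_sequences = [rng.normal([2.0, 0.5], 0.3, size=(200, 2)) for _ in range(10)]
X = np.vstack(normal_sequences)
lengths = [len(s) for s in normal_sequences]

# Model "normal" behaviour with a small Gaussian HMM.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X, lengths)

# Threshold: flag windows whose per-frame log-likelihood is far below training data.
train_ll = np.array([model.score(s) / len(s) for s in normal_sequences])
threshold = train_ll.mean() - 3 * train_ll.std()

def is_abnormal(window):
    """Return True if a feature window deviates from learned normal behaviour."""
    return model.score(window) / len(window) < threshold

agitated = rng.normal([6.0, 3.0], 1.0, size=(50, 2))   # e.g., a fight breaking out
print(is_abnormal(normal_sequences[0][:50]), is_abnormal(agitated))  # False, True
```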

  • 22.
    Andersson, Olov
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Methods for Scalable and Safe Robot Learning (2017). Licentiate thesis, with papers (Other academic)
    Abstract [en]

    Robots are increasingly expected to go beyond controlled environments in laboratories and factories, to enter real-world public spaces and homes. However, robot behavior is still usually engineered for narrowly defined scenarios. To manually encode robot behavior that works within complex real world environments, such as busy work places or cluttered homes, can be a daunting task. In addition, such robots may require a high degree of autonomy to be practical, which imposes stringent requirements on safety and robustness.

    The aim of this thesis is to examine methods for automatically learning safe robot behavior, lowering the costs of synthesizing behavior for complex real-world situations. To avoid task-specific assumptions, we approach this from a data-driven machine learning perspective. The strength of machine learning is its generality; given sufficient data it can learn to approximate any task. However, being embodied agents in the real world, robots pose a number of difficulties for machine learning. These include real-time requirements with limited computational resources, the cost and effort of operating and collecting data with real robots, as well as safety issues for both the robot and human bystanders.

    While machine learning is general by nature, overcoming the difficulties with real-world robots outlined above remains a challenge. In this thesis we look for a middle ground on robot learning, leveraging the strengths of both data-driven machine learning and engineering techniques from robotics and control. This includes combining data-driven world models with fast techniques for planning motions under safety constraints, using machine learning to generalize such techniques to problems with high uncertainty, as well as using machine learning to find computationally efficient approximations for use on small embedded systems.

    We demonstrate such behavior synthesis techniques with real robots, solving a class of difficult dynamic collision avoidance problems under uncertainty, such as induced by the presence of humans without prior coordination. Initially using online planning offloaded to a desktop CPU, and ultimately as a deep neural network policy embedded on board a quadcopter.

    List of papers
    1. Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization
    2015 (English). In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI) / [ed] Blai Bonet and Sven Koenig, AAAI Press, 2015, p. 2497-2503. Conference paper, Published paper (Refereed)
    Abstract [en]

    Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the dimensionality of the state and action spaces, time and resource costs for learning with a real robot as well as constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real-time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored while the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included and objectives can also be changed in real-time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart pole domain and a challenging quadcopter navigation task using real data.

    Place, publisher, year, edition, pages
    AAAI Press, 2015
    Keywords
    Reinforcement Learning, Gaussian Processes, Optimization, Robotics
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-113385 (URN); 978-1-57735-698-1 (ISBN)
    Conference
    Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), January 25-30, 2015, Austin, Texas, USA.
    Research funder
    Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish Foundation for Strategic Research; VINNOVA; EU, FP7, Seventh Framework Programme
    Available from: 2015-01-16. Created: 2015-01-16. Last updated: 2018-01-11. Bibliographically approved
    2. Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization
    2016 (English). In: IEEE International Conference on Robotics and Automation (ICRA), 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4597-4604. Conference paper, Published paper (Refereed)
    Abstract [en]

    Robots are increasingly expected to move out of the controlled environment of research labs and into populated streets and workplaces. Collision avoidance in such cluttered and dynamic environments is of increasing importance as robots gain more autonomy. However, efficient avoidance is fundamentally difficult since computing safe trajectories may require considering both dynamics and uncertainty. While heuristics are often used in practice, we take a holistic stochastic trajectory optimization perspective that merges both collision avoidance and control. We examine dynamic obstacles moving without prior coordination, like pedestrians or vehicles. We find that common stochastic simplifications lead to poor approximations when obstacle behavior is difficult to predict. We instead compute efficient approximations by drawing upon techniques from machine learning. We propose to combine policy search with model-predictive control. This allows us to use recent fast constrained model-predictive control solvers, while gaining the stochastic properties of policy-based methods. We exploit recent advances in Bayesian optimization to efficiently solve the resulting probabilistically-constrained policy optimization problems. Finally, we present a real-time implementation of an obstacle avoiding controller for a quadcopter. We demonstrate the results in simulation as well as with real flight experiments.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2016
    Series
    Proceedings of IEEE International Conference on Robotics and Automation, ISSN 1050-4729
    Keywords
    Robot Learning, Collision Avoidance, Robotics, Bayesian Optimization, Model Predictive Control
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-126769 (URN); 10.1109/ICRA.2016.7487661 (DOI); 000389516203138 ()
    Conference
    IEEE International Conference on Robotics and Automation (ICRA), 2016, Stockholm, May 16-21
    Projects
    CADICS; ELLIIT; NFFP6; CUAS; SHERPA
    Research funder
    Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; EU, FP7, Seventh Framework Programme; Swedish Foundation for Strategic Research
    Available from: 2016-04-04. Created: 2016-04-04. Last updated: 2018-01-10. Bibliographically approved
    3. Deep Learning Quadcopter Control via Risk-Aware Active Learning
    2017 (English). In: Proceedings of The Thirty-first AAAI Conference on Artificial Intelligence (AAAI) / [ed] Satinder Singh and Shaul Markovitch, AAAI Press, 2017, Vol. 5, p. 3812-3818. Conference paper, Published paper (Refereed)
    Abstract [en]

    Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solvers to also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are at least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.

    Place, publisher, year, edition, pages
    AAAI Press, 2017
    Series
    Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; 5
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-132800 (URN); 978-1-57735-784-1 (ISBN)
    Conference
    Thirty-First AAAI Conference on Artificial Intelligence (AAAI), 2017, San Francisco, February 4–9.
    Projects
    ELLIIT; CADICS; NFFP6; SYMBICLOUD; CUGS
    Research funder
    Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; EU, FP7, Seventh Framework Programme; CUGS (National Graduate School in Computer Science); Swedish Foundation for Strategic Research
    Available from: 2016-11-25. Created: 2016-11-25. Last updated: 2018-01-13. Bibliographically approved
  • 23.
    Andersson, Olov
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerad datorsystem. Linköpings universitet, Tekniska högskolan.
    Heintz, Fredrik
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerad datorsystem. Linköpings universitet, Tekniska högskolan.
    Doherty, Patrick
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerad datorsystem. Linköpings universitet, Tekniska högskolan.
    Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization (2015). In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI) / [ed] Blai Bonet and Sven Koenig, AAAI Press, 2015, p. 2497-2503. Conference paper (Refereed)
    Abstract [en]

    Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the dimensionality of the state and action spaces, time and resource costs for learning with a real robot as well as constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real-time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored while the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included and objectives can also be changed in real-time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart pole domain and a challenging quadcopter navigation task using real data.

  • 24.
    Andersson, Olov
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Wzorek, Mariusz
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Doherty, Patrick
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Deep Learning Quadcopter Control via Risk-Aware Active Learning (2017). In: Proceedings of The Thirty-first AAAI Conference on Artificial Intelligence (AAAI) / [ed] Satinder Singh and Shaul Markovitch, AAAI Press, 2017, Vol. 5, p. 3812-3818. Conference paper (Refereed)
    Abstract [en]

    Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solvers to also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are at least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.

  • 25.
    Andersson, Robert
    Linköpings universitet, Institutionen för systemteknik.
    A calibration method for laser-triangulating 3D cameras (2008). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A laser-triangulating range camera uses a laser plane to light an object. If the position of the laser relative to the camera as well as certain properties of the camera are known, it is possible to calculate the coordinates for all points along the profile of the object. If either the object or the camera and laser has a known motion, it is possible to combine several measurements to get a three-dimensional view of the object.

    Camera calibration is the process of finding the properties of the camera and enough information about the setup so that the desired coordinates can be calculated. Several methods for camera calibration exist, but this thesis proposes a new method that has the advantages that the objects needed are relatively inexpensive and that only objects in the laser plane need to be observed. Each part of the method is given a thorough description. Several mathematical derivations have also been added as appendices for completeness.

    The proposed method is tested using both synthetic and real data. The results show that the method is suitable even when high accuracy is needed. A few suggestions are also made about how the method can be improved further.
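
    The geometry the abstract relies on, a calibrated camera plus a known laser plane so that each detected laser pixel can be turned into a 3D point, is sketched below. The intrinsics and plane parameters are made-up numbers, and this shows only the triangulation step that the thesis's calibration method enables, not the calibration procedure itself.

```python
import numpy as np

# Assumed pinhole intrinsics (focal lengths and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Laser plane in camera coordinates: n . X = d (known from calibration).
n = np.array([0.0, -0.2588, 0.9659])   # plane normal (unit length)
d = 0.5                                 # plane offset in metres

def triangulate(u, v):
    """3D point where the viewing ray through pixel (u, v) hits the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera at origin
    t = d / (n @ ray)                               # ray parameter at the intersection
    return t * ray                                  # 3D point in camera coordinates

# One detected laser peak per image column gives one profile point.
profile = np.array([triangulate(u, 255.0) for u in range(300, 340)])
print(profile[:3])
```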

  • 26.
    Anliot, Manne
    Linköpings universitet, Institutionen för systemteknik.
    Volume Estimation of Airbags: A Visual Hull Approach (2005). Independent thesis Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis presents a complete and fully automatic method for estimating the volume of an airbag, through all stages of its inflation, with multiple synchronized high-speed cameras.

    Using recorded contours of the inflating airbag, its visual hull is reconstructed with a novel method: The intersections of all back-projected contours are first identified with an accelerated epipolar algorithm. These intersections, together with additional points sampled from concave surface regions of the visual hull, are then Delaunay triangulated to a connected set of tetrahedra. Finally, the visual hull is extracted by carving away the tetrahedra that are classified as inconsistent with the contours, according to a voting procedure.

    The volume of an airbag's visual hull is always larger than the airbag's real volume. By projecting a known synthetic model of the airbag into the cameras, this volume offset is computed, and an accurate estimate of the real airbag volume is extracted.

    Even though volume estimates can be computed for all camera setups, the cameras should be specially posed to achieve optimal results. Such poses are uniquely found for different airbag models with a separate, fully automatic, simulated annealing algorithm.

    Satisfying results are presented for both synthetic and real-world data.
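
    As a rough illustration of the visual hull idea (the thesis back-projects contours and carves Delaunay tetrahedra, which is more involved), the sketch below carves a voxel grid instead: a voxel survives only if it projects inside the silhouette in every camera. The camera matrices, silhouettes, and grid are placeholders.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Keep voxels whose projection falls inside every silhouette.
    voxels:      (N, 3) candidate voxel centres.
    cameras:     list of 3x4 projection matrices.
    silhouettes: list of boolean images (True = inside the airbag contour)."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(cameras, silhouettes):
        proj = homog @ P.T                       # project all voxels at once
        u = (proj[:, 0] / proj[:, 2]).round().astype(int)
        v = (proj[:, 1] / proj[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside[inside] &= sil[v[inside], u[inside]]
        keep &= inside                           # consistent with this contour too?
    return voxels[keep]

# Tiny single-camera demo: a disc silhouette keeps a cylinder of voxels.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
yy, xx = np.mgrid[0:128, 0:128]
sil = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2
grid = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 20),
                            np.linspace(-0.5, 0.5, 20), [1.0]), -1).reshape(-1, 3)
hull = carve_visual_hull(grid, [P], [sil])
print(len(hull), "of", len(grid), "voxels kept")
# The hull volume (voxel count times voxel volume) always over-estimates the
# true volume, as the abstract notes.
```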

  • 27.
    Anwer, Rao Muhammad
    et al.
    Aalto Univ, Finland.
    Khan, Fahad
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Laaksonen, Jorma
    Aalto Univ, Finland.
    Two-Stream Part-based Deep Representation for Human Attribute Recognition (2018). In: 2018 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), IEEE, 2018, p. 90-97. Conference paper (Refereed)
    Abstract [en]

    Recognizing human attributes in unconstrained environments is a challenging computer vision problem. State-of-the-art approaches to human attribute recognition are based on convolutional neural networks (CNNs). The de facto practice when training these CNNs on a large labeled image dataset is to take RGB pixel values of an image as input to the network. In this work, we propose a two-stream part-based deep representation for human attribute classification. Besides the standard RGB stream, we train a deep network by using mapped coded images with explicit texture information, that complements the standard RGB deep model. To integrate human body parts knowledge, we employ the deformable part-based models together with our two-stream deep model. Experiments are performed on the challenging Human Attributes (HAT-27) Dataset consisting of 27 different human attributes. Our results clearly show that (a) the two-stream deep network provides consistent gain in performance over the standard RGB model and (b) that the attribute classification results are further improved with our two-stream part-based deep representations, leading to state-of-the-art results.

  • 28.
    Arvidsson, Lars
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Stereoseende i realtid (2007). Independent thesis Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, two real-time stereo methods have been implemented and evaluated. The first one is based on blockmatching and the second one is based on local phase. The goal was to be able to run the algorithms in real time and examine which one performs best. The blockmatching method performed better than the phase-based method, both in speed and accuracy. SIMD operations (Single Instruction Multiple Data) have been used in the processor, giving a speed boost by a factor of two.
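
    A minimal sum-of-absolute-differences block matcher for a rectified stereo pair, in the spirit of the thesis's first method; the SIMD optimizations and the phase-based alternative are not reproduced, and the images and parameters below are toy placeholders.

```python
import numpy as np

def block_match(left, right, block=7, max_disp=32):
    """Dense disparity by SAD block matching along rectified scanlines."""
    half = block // 2
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # disparity with the lowest SAD cost
    return disp

# Toy rectified pair: a bright square shifted 5 pixels between the views.
left = np.zeros((40, 60)); left[15:25, 30:40] = 1.0
right = np.zeros((40, 60)); right[15:25, 25:35] = 1.0
d = block_match(left, right, block=7, max_disp=10)
print(d[20, 30])   # 5: the square's horizontal shift between the views
```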

  • 29.
    Auer, Cornelia
    et al.
    Zuse Institute Berlin, Berlin, Germany.
    Nair, Jaya
    IIIT – Bangalore, Electronics City, Hosur Road, Bangalore, India.
    Zobel, Valentin
    Zuse Institute Berlin, Berlin, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Berlin, Germany.
    2D Tensor Field Segmentation (2011). In: Dagstuhl Follow-Ups, E-ISSN 1868-8977, Vol. 2, p. 17-35. Article in journal (Refereed)
    Abstract [en]

    We present a topology-based segmentation as a means for visualizing 2D symmetric tensor fields. The segmentation uses directional as well as eigenvalue characteristics of the underlying field to delineate cells of similar (or dissimilar) behavior in the tensor field. A special feature of the resulting cells is that their shape expresses the tensor behavior inside the cells and thus also can be considered as a kind of glyph representation. This allows a qualitative comprehension of important structures of the field. The resulting higher-level abstraction of the field provides a valuable basis for analysis. The extraction of the integral topological skeleton using both major and minor eigenvector fields serves as a structural pre-segmentation and renders all directional structures in the field. The resulting curvilinear cells are bounded by tensorlines and already delineate regions of equivalent eigenvector behavior. This pre-segmentation is further adaptively refined to achieve a segmentation reflecting regions of similar eigenvalue and eigenvector characteristics. Cell refinement involves both subdivision and merging of cells, achieving a predetermined resolution, accuracy and uniformity of the segmentation. The building blocks of the approach can be intuitively customized to meet the demands of different applications. Application to tensor fields from numerical stress simulations demonstrates the effectiveness of our method.

  • 30.
    Axelsson, Emil
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Costa, Jonathas
    NYU, NY 10003 USA.
    Silva, Claudio
    NYU, NY 10003 USA.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Bock, Alexander
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Dynamic Scene Graph: Enabling Scaling, Positioning, and Navigation in the Universe (2017). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no. 3, p. 459-468. Article in journal (Refereed)
    Abstract [en]

    In this work, we address the challenge of seamlessly visualizing astronomical data exhibiting huge scale differences in distance, size, and resolution. One of the difficulties is accurate, fast, and dynamic positioning and navigation to enable scaling over orders of magnitude, far beyond the precision of floating point arithmetic. To this end we propose a method that utilizes a dynamically assigned frame of reference to provide the highest possible numerical precision for all salient objects in a scene graph. This makes it possible to smoothly navigate and interactively render, for example, surface structures on Mars and the Milky Way simultaneously. Our work is based on an analysis of tracking and quantification of the propagation of precision errors through the computer graphics pipeline using interval arithmetic. Furthermore, we identify sources of precision degradation, leading to incorrect object positions in screen-space and z-fighting. Our proposed method operates without near and far planes while maintaining high depth precision through the use of floating point depth buffers. By providing interoperability with order-independent transparency algorithms, direct volume rendering, and stereoscopy, our approach is well suited for scientific visualization. We provide the mathematical background, a thorough description of the method, and a reference implementation.
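
    The core precision idea described in the abstract, expressing positions relative to a dynamically chosen reference so that large absolute coordinates never reach single-precision arithmetic, can be illustrated as follows. The scene content and the simple closest-node policy are assumptions for illustration, not the paper's dynamic scene graph implementation.

```python
import numpy as np

# Positions in metres, stored in double precision (astronomical scales).
nodes = {
    "Sun":   np.array([0.0, 0.0, 0.0]),
    "Mars":  np.array([2.28e11, 0.0, 0.0]),
    "Earth": np.array([1.50e11, 0.0, 0.0]),
}
camera = np.array([2.28e11 + 3.4e6, 10.0, 0.0])   # a few thousand km above Mars

def choose_reference(camera_pos):
    """Dynamically pick the node closest to the camera as frame of reference."""
    return min(nodes, key=lambda name: np.linalg.norm(nodes[name] - camera_pos))

def camera_relative(position, camera_pos):
    """Subtract in double precision, then hand float32 offsets to the GPU."""
    return (position - camera_pos).astype(np.float32)

ref = choose_reference(camera)                                 # -> "Mars"
surface_point = nodes["Mars"] + np.array([3.39e6, 1.0, 0.0])   # a point on the surface

naive = surface_point.astype(np.float32) - camera.astype(np.float32)  # precision lost
stable = camera_relative(surface_point, camera)                       # metre-level detail kept
print(ref, naive, stable)
```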

  • 31.
    Barnada, Marc
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Goethe University of Frankfurt, Germany.
    Conrad, Christian
    Goethe University of Frankfurt, Germany.
    Bradler, Henry
    Goethe University of Frankfurt, Germany.
    Ochs, Matthias
    Goethe University of Frankfurt, Germany.
    Mester, Rudolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Goethe University of Frankfurt, Germany.
    Estimation of Automotive Pitch, Yaw, and Roll using Enhanced Phase Correlation on Multiple Far-field Windows2015Inngår i: 2015 IEEE Intelligent Vehicles Symposium (IV), IEEE , 2015, s. 481-486Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The online estimation of yaw, pitch, and roll of a moving vehicle is an important ingredient for systems that estimate egomotion and the 3D structure of the environment from video captured in a moving vehicle. We present an approach to estimate these angular changes from monocular visual data, based on the fact that the motion of far-distant points does not depend on translation, but only on the current rotation of the camera. The presented approach does not require features (corners, edges, ...) to be extracted. It also allows the frame-to-frame illumination changes to be estimated in parallel, which largely stabilizes the estimation of image correspondences and motion vectors, which are most often the central entities needed for computing scene structure, distances, etc. The method is significantly less complex and much faster than a full egomotion computation from features, such as PTAM [6], but it can be used to provide motion priors and reduce search spaces for more complex methods that perform a complete analysis of egomotion and the dynamic 3D structure of the scene in which the vehicle moves.
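    As a rough illustration of the underlying mechanism (an assumption for illustration only; the paper's enhanced phase correlation and its mapping from pixel shift to pitch, yaw, and roll are not reproduced), plain phase correlation between two far-field windows recovers the rotation-induced image shift:

```python
# Minimal sketch of plain phase correlation between two image windows.
# In the far field, small camera rotations appear as near-pure translations
# of the window content; the recovered pixel shift can then be mapped to
# angles via the camera intrinsics (not shown here).
import numpy as np

def phase_correlation_shift(win_a, win_b, eps=1e-9):
    """Return the integer (dy, dx) shift that best aligns win_b to win_a."""
    A = np.fft.fft2(win_a)
    B = np.fft.fft2(win_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + eps      # keep phase, drop magnitude
    corr = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above N/2 to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Toy usage: shift a random texture by (3, -5) pixels and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, img))      # expected: (3, -5)
```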

  • 32.
    Benderius, Björn
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Laser Triangulation Using Spacetime Analysis2007Independent thesis Advanced level (degree of Master (One Year)), 20 poäng / 30 hpOppgave
    Abstract [en]

    In this thesis, spacetime analysis is applied to laser triangulation in an attempt to eliminate certain artifacts caused mainly by reflectance variations of the surface being measured. It is shown that spacetime analysis does eliminate these artifacts almost completely. It is also shown that, thanks to the spacetime analysis, the shape of the laser beam is no longer critical, and that in some cases the laser could probably even be exchanged for a non-coherent light source. Furthermore, experiments running the derived algorithm on a GPU (Graphics Processing Unit) are conducted, with very promising results.

    The thesis starts by deriving the theory needed for performing spacetime analysis in a laser triangulation setup, taking perspective distortions into account; several experiments evaluating the method are then conducted.
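    For context, here is a minimal sketch of the conventional per-column, centre-of-gravity peak localisation that spacetime analysis is designed to improve on (this is the baseline, not the thesis's spacetime method):

```python
# Minimal sketch of conventional per-column laser-line peak localisation by
# centre of gravity -- the baseline approach, which does not model reflectance
# variations or the shape of the laser beam.
import numpy as np

def laser_line_peaks(frame, threshold=0.1):
    """For each image column, estimate the sub-pixel row of the laser line
    as the intensity-weighted centre of gravity of the column."""
    frame = np.asarray(frame, dtype=float)
    rows = np.arange(frame.shape[0])[:, None]
    weights = np.where(frame > threshold, frame, 0.0)
    mass = weights.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        peaks = (rows * weights).sum(axis=0) / mass
    return peaks            # NaN where no laser light was detected

# Toy usage: a synthetic Gaussian laser line centred at row 20.3.
cols, height = 64, 50
column = np.exp(-0.5 * ((np.arange(height)[:, None] - 20.3) / 1.5) ** 2)
frame = np.repeat(column, cols, axis=1)
print(laser_line_peaks(frame)[:3])    # ~20.3 for every column
```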

  • 33.
    Berg, Amanda
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Detection and Tracking in Thermal Infrared Imagery2016Licentiatavhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    Thermal cameras have historically been of interest mainly for military applications. Increasing image quality and resolution combined with decreasing price and size during recent years have, however, opened up new application areas. They are now widely used for civilian applications, e.g., within industry, to search for missing persons, in automotive safety, as well as for medical applications. Thermal cameras are useful as soon as it is possible to measure a temperature difference. Compared to cameras operating in the visual spectrum, they are advantageous due to their ability to see in total darkness, robustness to illumination variations, and less intrusion on privacy.

    This thesis addresses the problem of detection and tracking in thermal infrared imagery. Visual detection and tracking of objects in video are research areas that have been and currently are subject to extensive research. Indications of their popularity are recent benchmarks such as the annual Visual Object Tracking (VOT) challenges, the Object Tracking Benchmarks, the series of workshops on Performance Evaluation of Tracking and Surveillance (PETS), and the workshops on Change Detection. Benchmark results indicate that detection and tracking are still challenging problems.

    A common belief is that detection and tracking in thermal infrared imagery is identical to detection and tracking in grayscale visual imagery. This thesis argues that this assumption is not true. The characteristics of thermal infrared radiation and imagery pose certain challenges to image analysis algorithms. The thesis describes these characteristics and challenges and presents evaluation results confirming the hypothesis.

    Detection and tracking are often treated as two separate problems. However, some tracking methods, e.g. template-based tracking methods, base their tracking on repeated specific detections. They learn a model of the object that is adaptively updated. That is, detection and tracking are performed jointly. The thesis includes a template-based tracking method designed specifically for thermal infrared imagery, describes a thermal infrared dataset for evaluation of template-based tracking methods, and provides an overview of the first challenge on short-term, single-object tracking in thermal infrared video. Finally, two applications employing detection and tracking methods are presented.

    Delarbeid
    1. A Thermal Object Tracking Benchmark
    2015 (engelsk)Konferansepaper, Publicerat paper (Fagfellevurdert)
    Abstract [en]

    Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different than in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differ between the visual and thermal benchmarks, confirming the need for the new benchmark.

    sted, utgiver, år, opplag, sider
    IEEE, 2015
    HSV kategori
    Identifikatorer
    urn:nbn:se:liu:diva-121001 (URN)10.1109/AVSS.2015.7301772 (DOI)000380619700052 ()978-1-4673-7632-7 (ISBN)
    Konferanse
    12th IEEE International Conference on Advanced Video- and Signal-based Surveillance, Karlsruhe, Germany, August 25-28 2015
    Tilgjengelig fra: 2015-09-02 Laget: 2015-09-02 Sist oppdatert: 2018-01-11 bibliografisk kontrollert
    2. The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results
    2015 (engelsk)Inngår i: Proceedings of the IEEE International Conference on Computer Vision, Institute of Electrical and Electronics Engineers (IEEE), 2015, s. 639-651Konferansepaper, Publicerat paper (Fagfellevurdert)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply prelearned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.

    sted, utgiver, år, opplag, sider
    Institute of Electrical and Electronics Engineers (IEEE), 2015
    Serie
    IEEE International Conference on Computer Vision. Proceedings, ISSN 1550-5499
    HSV kategori
    Identifikatorer
    urn:nbn:se:liu:diva-126917 (URN)10.1109/ICCVW.2015.86 (DOI)000380434700077 ()978-146738390-5 (ISBN)
    Eksternt samarbeid:
    Konferanse
    IEEE International Conference on Computer Vision Workshop (ICCVW), 7-13 Dec. 2015, Santiago, Chile
    Tilgjengelig fra: 2016-04-07 Laget: 2016-04-07 Sist oppdatert: 2018-01-10 bibliografisk kontrollert
    3. Channel Coded Distribution Field Tracking for Thermal Infrared Imagery
    2016 (engelsk)Inngår i: PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), IEEE , 2016, s. 1248-1256Konferansepaper, Publicerat paper (Fagfellevurdert)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. The fast progress has been possible thanks to the development of new template-based tracking methods with online template updates, methods which have not been explored for TIR tracking. Instead, tracking methods used for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. In order to avoid background contamination of the object template, we propose to exploit background information for the online template update and to adaptively select the object region used for tracking. Moreover, we propose a novel method for estimating object scale change. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Further, the proposed tracker, ABCD, and the VOT-TIR2015 winner SRDCFir are evaluated on maritime data. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

    sted, utgiver, år, opplag, sider
    IEEE, 2016
    Serie
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, ISSN 2160-7508
    HSV kategori
    Identifikatorer
    urn:nbn:se:liu:diva-134402 (URN)10.1109/CVPRW.2016.158 (DOI)000391572100151 ()978-1-5090-1438-5 (ISBN)978-1-5090-1437-8 (ISBN)
    Konferanse
    Computer Vision and Pattern Recognition Workshops (CVPRW), 2016 IEEE Conference on
    Forskningsfinansiär
    Swedish Research Council, D0570301; EU, FP7, Seventh Framework Programme, 312784; EU, FP7, Seventh Framework Programme, 607567
    Tilgjengelig fra: 2017-02-09 Laget: 2017-02-09 Sist oppdatert: 2018-01-13
    4. Detecting Rails and Obstacles Using a Train-Mounted Thermal Camera
    2015 (engelsk)Inngår i: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Rasmus R. Paulsen; Kim S. Pedersen, Springer, 2015, s. 492-503Konferansepaper, Publicerat paper (Fagfellevurdert)
    Abstract [en]

    We propose a method for detecting obstacles on the railway in front of a moving train using a monocular thermal camera. The problem is motivated by the large number of collisions between trains and various obstacles, resulting in reduced safety and high costs. The proposed method includes a novel way of detecting the rails in the imagery, as well as a way to detect anomalies on the railway. While the problem at first glance looks similar to road and lane detection, which has been a popular research topic in the past, a closer look reveals that the problem at hand has not previously been addressed. As a consequence, relevant datasets are missing as well, and thus our contribution is two-fold: we propose an approach to the novel problem of obstacle detection on railways, and we describe the acquisition of a novel dataset.

    sted, utgiver, år, opplag, sider
    Springer, 2015
    Serie
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9127
    Emneord
    Thermal imaging; Computer vision; Train safety; Railway detection; Anomaly detection; Obstacle detection
    HSV kategori
    Identifikatorer
    urn:nbn:se:liu:diva-119507 (URN)10.1007/978-3-319-19665-7_42 (DOI)978-3-319-19664-0 (ISBN)978-3-319-19665-7 (ISBN)
    Konferanse
    19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015
    Tilgjengelig fra: 2015-06-22 Laget: 2015-06-18 Sist oppdatert: 2018-02-07 bibliografisk kontrollert
    5. Enhanced analysis of thermographic images for monitoring of district heat pipe networks
    2016 (engelsk)Inngår i: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 83, nr 2, s. 215-223Artikkel i tidsskrift (Fagfellevurdert) Published
    Abstract [en]

    We address two problems related to large-scale aerial monitoring of district heating networks. First, we propose a classification scheme to reduce the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps: (a) using a building segmentation scheme to remove detections on buildings, and (b) using a machine learning approach to classify the remaining detections as true or false leakages. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system. Second, we propose a method for characterization of leakages over time, i.e., repeating the image acquisition one or a few years later and indicating areas that suffer from an increased energy loss. We address the problem of finding trends in the degradation of pipe networks in order to plan for long-term maintenance, and propose a visualization scheme exploiting the consecutive data collections.

    sted, utgiver, år, opplag, sider
    Elsevier, 2016
    Emneord
    Remote thermography; Classification; Pattern recognition; District heating; Thermal infrared
    HSV kategori
    Identifikatorer
    urn:nbn:se:liu:diva-133004 (URN)10.1016/j.patrec.2016.07.002 (DOI)000386874800013 ()
    Merknad

    Funding Agencies|Swedish Research Council (Vetenskapsrådet) through project Learning systems for remote thermography [621-2013-5703]; Swedish Research Council [2014-6227]

    Tilgjengelig fra: 2016-12-08 Laget: 2016-12-07 Sist oppdatert: 2018-11-26
  • 34.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB Linköping, Sweden.
    Classifying district heating network leakages in aerial thermal imagery2014Konferansepaper (Annet vitenskapelig)
    Abstract [en]

    In this paper we address the problem of automatically detecting leakages in underground pipes of district heating networks from images captured by an airborne thermal camera. The basic idea is to classify each relevant image region as a leakage if its temperature exceeds a threshold. This simple approach yields a significant number of false positives. We propose to address this issue by machine learning techniques and provide extensive experimental analysis on real-world data. The results show that this postprocessing step significantly improves the usefulness of the system.
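    A minimal sketch of the two-stage idea described above, with assumed region features and an assumed classifier (the abstract specifies neither):

```python
# Minimal sketch: threshold the thermal image to get candidate leakage
# regions, then let a learned classifier prune false alarms. The region
# features and the choice of classifier are assumptions for illustration.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def candidate_regions(thermal, temp_threshold):
    """Connected components of pixels warmer than the threshold."""
    labels, n = ndimage.label(thermal > temp_threshold)
    return labels, n

def region_features(thermal, labels, n):
    """Simple per-region features: area, mean and max temperature."""
    feats = []
    for region_id in range(1, n + 1):
        mask = labels == region_id
        feats.append([mask.sum(), thermal[mask].mean(), thermal[mask].max()])
    return np.array(feats)

# Training on annotated data (1 = true leakage, 0 = false alarm) would look like:
# clf = RandomForestClassifier().fit(train_features, train_labels)
# keep = clf.predict(region_features(thermal, *candidate_regions(thermal, T)))
```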

  • 35.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    A thermal infrared dataset for evaluation of short-term tracking methods2015Konferansepaper (Annet vitenskapelig)
    Abstract [en]

    During recent years, thermal cameras have decreased in both size and cost while improving image quality. The area of use for such cameras has expanded with many exciting applications, many of which require tracking of objects. While being subject to extensive research in the visual domain, tracking in thermal imagery has historically been of interest mainly for military purposes. The available thermal infrared datasets for evaluating methods addressing these problems are few, and those that exist are not challenging enough for today’s tracking algorithms. Therefore, we hereby propose a thermal infrared dataset for evaluation of short-term tracking methods. The dataset consists of 20 sequences which have been collected from multiple sources, and the data format used is in accordance with the Visual Object Tracking (VOT) Challenge.

  • 36.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    A Thermal Object Tracking Benchmark2015Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different than in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differ between the visual and thermal benchmarks, confirming the need for the new benchmark.

  • 37.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Tekniska fakulteten.
    Channel Coded Distribution Field Tracking for Thermal Infrared Imagery2016Inngår i: PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), IEEE , 2016, s. 1248-1256Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. The fast progress has been possible thanks to the development of new template-based tracking methods with online template updates, methods which have not been explored for TIR tracking. Instead, tracking methods used for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. In order to avoid background contamination of the object template, we propose to exploit background information for the online template update and to adaptively select the object region used for tracking. Moreover, we propose a novel method for estimating object scale change. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Further, the proposed tracker, ABCD, and the VOT-TIR2015 winner SRDCFir are evaluated on maritime data. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.
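    A minimal sketch of a distribution-field-style representation, assuming a simple triangular channel basis (the ABCD tracker's actual channel coding, background-aware template update, and scale estimation are not reproduced):

```python
# Minimal sketch of a distribution-field-style patch representation: each
# pixel intensity is soft-assigned to a few channels (bins), and every channel
# map is spatially smoothed. Matching a candidate against the template is an
# L1 distance between fields.
import numpy as np
from scipy.ndimage import gaussian_filter

def channel_coded_field(patch, n_channels=8, sigma=2.0):
    patch = np.asarray(patch, dtype=float)
    centers = np.linspace(patch.min(), patch.max(), n_channels)
    width = centers[1] - centers[0] + 1e-9
    field = []
    for c in centers:
        # Triangular (linear) soft assignment of each pixel to channel c.
        weight = np.clip(1.0 - np.abs(patch - c) / width, 0.0, None)
        field.append(gaussian_filter(weight, sigma))
    return np.stack(field)                    # shape: (n_channels, H, W)

def field_distance(field_a, field_b):
    return np.abs(field_a - field_b).mean()

# Tracking step (sketch): slide a window over the search region, compute its
# field, and pick the location with the smallest distance to the template field.
```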

  • 38.
    Berg, Amanda
    et al.
    Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Institutionen för systemteknik, Datorseende. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Enhanced analysis of thermographic images for monitoring of district heat pipe networks2016Inngår i: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 83, nr 2, s. 215-223Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    We address two problems related to large-scale aerial monitoring of district heating networks. First, we propose a classification scheme to reduce the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps: (a) using a building segmentation scheme to remove detections on buildings, and (b) using a machine learning approach to classify the remaining detections as true or false leakages. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system. Second, we propose a method for characterization of leakages over time, i.e., repeating the image acquisition one or a few years later and indicating areas that suffer from an increased energy loss. We address the problem of finding trends in the degradation of pipe networks in order to plan for long-term maintenance, and propose a visualization scheme exploiting the consecutive data collections.

  • 39.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Generating Visible Spectrum Images from Thermal Infrared2018Inngår i: Proceedings 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops CVPRW 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, s. 1224-1233Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Transformation of thermal infrared (TIR) images into visual, i.e. perceptually realistic color (RGB), images is a challenging problem. TIR cameras have the ability to see in scenarios where vision is severely impaired, for example in total darkness or fog, and they are commonly used, e.g., for surveillance and automotive applications. However, interpretation of TIR images is difficult, especially for untrained operators. Enhancing the TIR image display by transforming it into a plausible, visual, perceptually realistic RGB image presumably facilitates interpretation. Existing grayscale-to-RGB methods, so-called colorization methods, cannot be applied to TIR images directly, since those methods only estimate the chrominance and not the luminance. In the absence of applicable conventional colorization methods, we propose two fully automatic TIR-to-visual color image transformation methods, a two-step and an integrated approach, based on Convolutional Neural Networks. The methods require neither pre- nor postprocessing, do not require any user input, and are robust to image pair misalignments. We show that the methods do indeed produce perceptually realistic results on publicly available data, which is assessed both qualitatively and quantitatively.
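    To make the problem setup concrete, a deliberately tiny encoder-decoder mapping one TIR channel to three RGB channels is sketched below; the architecture is an assumption for illustration and does not correspond to either of the paper's two proposed networks:

```python
# A tiny encoder-decoder that maps a 1-channel TIR image to a 3-channel RGB
# image, only to make the problem setup concrete. The paper's two proposed
# architectures, their losses, and the training procedure are not reproduced.
import torch
import torch.nn as nn

class TinyTIR2RGB(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # downsample
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),          # RGB in [0, 1]
        )

    def forward(self, tir):            # tir: (B, 1, H, W), H and W divisible by 4
        return self.net(tir)

model = TinyTIR2RGB()
rgb = model(torch.rand(1, 1, 64, 64))
print(rgb.shape)                       # torch.Size([1, 3, 64, 64])
```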

  • 40.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Object Tracking in Thermal Infrared Imagery based on Channel Coded Distribution Fields2017Konferansepaper (Annet vitenskapelig)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 41.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Visual Spectrum Image Generation from Thermal Infrared2019Konferansepaper (Annet vitenskapelig)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 42.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Häger, Gustav
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    An Overview of the Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge2016Konferansepaper (Annet vitenskapelig)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking (VOT-TIR2015) Challenge was organized in conjunction with ICCV2015. It was the first benchmark on short-term, single-target tracking in thermal infrared (TIR) sequences. The challenge aimed at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. It was based on the VOT2013 Challenge, but introduced the following novelties: (i) the utilization of the LTIR (Linköping TIR) dataset, (ii) adaptation of the VOT2013 attributes to thermal data, (iii) an evaluation similar to that of VOT2015. This paper provides an overview of the VOT-TIR2015 Challenge as well as the results of the 24 participating trackers.

  • 43.
    Berg, Amanda
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Termisk Systemteknik AB, Linköping, Sweden.
    Johnander, Joakim
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Zenuity AB, Göteborg, Sweden.
    Durand de Gevigney, Flavie
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Grenoble INP, France.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Semi-automatic Annotation of Objects in Visual-Thermal Video2019Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Deep learning requires large amounts of annotated data. Manual annotation of objects in video is, regardless of annotation type, a tedious and time-consuming process. In particular, for scarcely used image modalities, human annotation is hard to justify. In such cases, semi-automatic annotation provides an acceptable option.

    In this work, a recursive, semi-automatic annotation method for video is presented. The proposed method utilizes a state-of-the-art video object segmentation method to propose initial annotations for all frames in a video, based on only a few manual object segmentations. In the case of a multi-modal dataset, the multi-modality is exploited to refine the proposed annotations even further. The final tentative annotations are presented to the user for manual correction.

    The method is evaluated on a subset of the RGBT-234 visual-thermal dataset, reducing the workload for a human annotator by approximately 78% compared to full manual annotation. Utilizing the proposed pipeline, sequences are annotated for the VOT-RGBT 2019 challenge.
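    The recursive loop can be summarised as a control-flow sketch, with the segmentation, refinement, and correction steps injected as placeholder callables (the abstract does not name them):

```python
# Control-flow sketch of the recursive annotation loop. The video object
# segmentation model, the cross-modality refinement, and the user-correction
# step are passed in as callables, since they are not specified here.
def semi_automatic_annotation(frames, manual_masks, propagate, refine,
                              ask_user_to_correct, n_rounds=3):
    """frames: list of (visual, thermal) frame pairs.
    manual_masks: dict frame_index -> object mask for a few frames.
    propagate(frames, annotations) -> dict of proposed masks for all frames.
    refine(proposals, frames) -> proposals refined using the second modality.
    ask_user_to_correct(proposals) -> dict of manually corrected masks."""
    annotations = dict(manual_masks)
    proposals = dict(annotations)
    for _ in range(n_rounds):
        proposals = propagate(frames, annotations)      # VOS-based propagation
        proposals = refine(proposals, frames)           # exploit multi-modality
        annotations.update(ask_user_to_correct(proposals))
    proposals.update(annotations)                       # manual input always wins
    return proposals
```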

  • 44.
    Berg, Martin
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Pose Recognition for Tracker Initialization Using 3D Models2008Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    In this thesis it is examined whether the pose of an object can be determined by a system trained with a synthetic 3D model of said object. A number of variations of methods using the P-channel representation are examined. Reference images are rendered from the 3D model; features, such as gradient orientation and color information, are extracted and encoded into P-channels. The P-channel representation is then used to estimate an overlapping channel representation, using B1-spline functions, in order to obtain a density function for the feature set. Experiments were conducted with this representation as well as with the raw P-channel representation, in conjunction with a number of distance measures and estimation methods.

    It is shown that, with correct preprocessing and choice of parameters, the pose can be detected with some accuracy and, if not in real time, at least fast enough to be useful in a tracker initialization scenario. It is also concluded that the success rate of the estimation depends heavily on the nature of the object.
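    A minimal sketch of the general channel-encoding idea, using a standard cos² basis as an assumed stand-in for the thesis's P-channel construction and B1-spline density estimation:

```python
# Minimal sketch of channel encoding: a scalar feature value (e.g. gradient
# orientation) is softly distributed over a set of overlapping basis functions.
# The cos^2 basis is an assumption for illustration only.
import numpy as np

def channel_encode(values, n_channels=8, lo=0.0, hi=2 * np.pi):
    """Encode each value with overlapping cos^2 basis functions (periodic)."""
    values = np.asarray(values, dtype=float)
    centers = np.linspace(lo, hi, n_channels, endpoint=False)
    spacing = (hi - lo) / n_channels
    span = hi - lo
    d = values[..., None] - centers               # distance to each channel center
    d = (d + span / 2) % span - span / 2          # wrap for periodic features
    resp = np.cos(np.pi * d / (3 * spacing)) ** 2
    resp[np.abs(d) > 1.5 * spacing] = 0.0         # compact support: 3 channels overlap
    return resp

orientations = np.array([0.1, 3.0, 6.0])          # radians
print(channel_encode(orientations).round(2))      # one 8-channel vector per value
```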

  • 45.
    Berger, Cyrille
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Colour perception graph for characters segmentation2014Inngår i: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Ryan McMahan, Jason Jerald, Hui Zhang, Steven M. Drucker, Chandra Kambhamettu, Maha El Choubassi, Zhigang Deng, Mark Carlson, Springer, 2014, s. 598-608Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Character recognition in natural images is a challenging problem, as it involves segmenting characters of various colours on various backgrounds. In this article, we present a method for segmenting images that uses a colour perception graph. Our algorithm is inspired by graph-cut segmentation techniques; it uses an edge detection technique to filter the graph before the graph cut, and merges segments as a final step. We also present both qualitative and quantitative results, which show that our algorithm performs slightly better and faster than a state-of-the-art algorithm.

  • 46.
    Berger, Cyrille
    Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS), l'Université Toulouse, France.
    Perception de la géométrie de l'environment pour la navigation autonome2009Doktoravhandling, monografi (Annet vitenskapelig)
    Abstract [en]

    The goal of mobile robotics research is to give robots the ability to accomplish missions in an environment that is not perfectly known. A mission consists of executing a number of elementary actions (movement, object manipulation, ...) and requires precise localisation as well as the construction of a good geometric model of the environment, built from the robot's own sensors, external sensors, information provided by other robots, and existing models, for example from a geographic information system. The common information is the geometry of the environment. The first part of the manuscript covers the different methods for extracting geometric information. The second part presents the creation of a geometric model using a graph, as well as a method for extracting information from the graph so that the robot can localise itself in the environment.

  • 47.
    Berger, Cyrille
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Strokes detection for skeletonisation of characters shapes2014Inngår i: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part II / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Ryan McMahan, Jason Jerald, Hui Zhang, Steven M. Drucker, Chandra Kambhamettu, Maha El Choubassi, Zhigang Deng, Mark Carlson, Springer, 2014, s. 510-520Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Skeletonisation is a key process in character recognition in natural images. Under the assumption that a character is made of a stroke of uniform colour, with small variation in thickness, the process of recognising characters can be decomposed into three steps. First the image is segmented, then each segment is transformed into a set of connected strokes (skeletonisation), which are then abstracted into a descriptor that can be used to recognise the character. The main issue with skeletonisation is its sensitivity to noise, and especially the presence of holes in the masks. In this article, a new method for the extraction of strokes is presented, which addresses the problem of holes in the mask and does not require any parameters.

  • 48.
    Berger, Cyrille
    Linköpings universitet, Institutionen för datavetenskap, KPLAB - Laboratoriet för kunskapsbearbetning. Linköpings universitet, Tekniska högskolan.
    Toward rich geometric map for SLAM: Online Detection of Planes in 2D LIDAR2012Inngår i: Proceedings of the International Workshop on Perception for Mobile Robots Autonomy (PEMRA), 2012Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Rich geometric models of the environment are needed for robots to accomplish their missions. However, a robot operating in a large environment requires a compact representation.

    In this article, we present a method that relies on the idea that a plane appears as a line segment in a 2D scan, and that by tracking those lines frame after frame, it is possible to estimate the parameters of that plane. The method is therefore divided into three steps: fitting line segments to the points of the 2D scan, tracking those line segments in consecutive scans, and estimating the plane parameters with a graph-based SLAM (Simultaneous Localisation And Mapping) algorithm.
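    A minimal sketch of the first step only, fitting a line to a run of 2D scan points by total least squares; the segment splitting, tracking across scans, and graph-based SLAM back-end are not reproduced:

```python
# Minimal sketch: fit an infinite line to a run of consecutive 2D scan points
# by total least squares (PCA). Splitting the scan into segments and lifting
# tracked lines to 3D planes are beyond this sketch.
import numpy as np

def fit_line_tls(points):
    """Total-least-squares line fit. Returns (centroid, direction, rms_error)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                       # principal axis of the point run
    normal = vt[-1]                         # perpendicular to the fitted line
    rms = np.sqrt(np.mean((centered @ normal) ** 2))
    return centroid, direction, rms

# Toy usage: noisy scan points along y = 0.5 x + 1.
rng = np.random.default_rng(1)
x = np.linspace(0, 5, 40)
pts = np.stack([x, 0.5 * x + 1 + 0.01 * rng.standard_normal(x.size)], axis=1)
c, d, err = fit_line_tls(pts)
print(d, err)        # direction ~ [0.894, 0.447] (up to sign), small residual
```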

  • 49.
    Berger, Cyrille
    et al.
    Linköpings universitet, Institutionen för datavetenskap, KPLAB - Laboratoriet för kunskapsbearbetning. Linköpings universitet, Tekniska högskolan.
    Lacroix, Simon
    LAAS.
    DSeg: Détection directe de segments dans une image2010Inngår i: 17ème congrès francophone AFRIF-AFIA Reconnaissance des Formes et Intelligence Artificielle (RFIA), 2010Konferansepaper (Fagfellevurdert)
    Abstract [en]

    This article presents a model-driven approach to detecting line segments in an image. The approach detects segments incrementally, on the basis of the image gradient, using a linear Kalman filter that estimates the parameters of the segments' supporting line and the associated variances. The algorithms are fast and robust to noise and to illumination variations in the perceived scene; they detect longer segments than existing data-driven approaches, and they do not require any delicate parameter tuning. Results under various lighting conditions and comparisons with existing approaches are presented.
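    A minimal sketch in the same spirit: a linear Kalman filter incrementally estimating a supporting line y = a·x + b from points, with placeholder noise values (the paper's state parametrisation and gradient-driven point selection are not reproduced):

```python
# Minimal sketch of incrementally estimating the support line y = a*x + b with
# a linear Kalman filter as new points arrive. Noise values are placeholders.
import numpy as np

def kalman_line_update(state, cov, point, meas_var=0.05):
    """One scalar-measurement update: observe y = a*x + b at the new point."""
    x, y = point
    H = np.array([[x, 1.0]])                    # measurement matrix for [a, b]
    innovation = y - (H @ state).item()
    S = (H @ cov @ H.T).item() + meas_var       # innovation variance
    K = (cov @ H.T) / S                         # Kalman gain, shape (2, 1)
    state = state + K[:, 0] * innovation
    cov = (np.eye(2) - K @ H) @ cov
    return state, cov

# Toy usage: points roughly on y = 2x + 3, processed one by one.
state, cov = np.zeros(2), np.eye(2) * 100.0     # vague prior on [a, b]
rng = np.random.default_rng(2)
for x in np.linspace(0, 4, 30):
    y = 2.0 * x + 3.0 + 0.05 * rng.standard_normal()
    state, cov = kalman_line_update(state, cov, (x, y))
print(state.round(2))                           # ~[2.0, 3.0]
```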

  • 50.
    Berger, Cyrille
    et al.
    Linköpings universitet, Institutionen för datavetenskap, KPLAB - Laboratoriet för kunskapsbearbetning.
    Lacroix, Simon
    LAAS.
    Modélisation de l'environnement par facettes planes pour la Cartographie et la Localisation Simultanées par stéréovision2008Inngår i: Reconnaissance des Formes et Intelligence Artificielle (RFIA), 2008Konferansepaper (Fagfellevurdert)