liu.se: Search for publications in DiVA
1 - 46 of 46
  • 1.
    Andersson, Mats
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Filter Networks1999In: Proceedings of Signal and Image Processing (SIP'99), Nassau, Bahamas: IASTED , 1999, p. 213-217Conference paper (Refereed)
    Abstract [en]

    This paper presents a new and efficient approach to the optimization and implementation of filter banks, e.g. velocity channels, orientation channels and scale spaces. The multi-layered structure of a filter network enables a powerful decomposition of complex filters into simple filter components, and the intermediary results may contribute to several output nodes. Compared to a direct implementation, a filter network uses only a fraction of the coefficients to provide the same result. The optimization procedure is recursive and all filters on each level are optimized simultaneously. The individual filters of the network in general contain very few non-zero coefficients, but there are no restrictions on the spatial position of the coefficients; they may, e.g., be concentrated on a line or be sparsely scattered. An efficient implementation of a quadrature filter hierarchy for generic purposes using sparse filter components is presented.

  • 2.
    Andersson, Mats
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Sequential Filter Trees for Efficient 2D 3D and 4D Orientation Estimation1998Report (Other academic)
    Abstract [en]

    A recursive method to condense general multidimensional FIR filters into a sequence of simple kernels with mainly one-dimensional extent has been worked out. Convolver networks adapted for 2, 3 and 4D signals are presented and the performance is illustrated for spherically separable quadrature filters. The resulting filter responses are mapped to an unbiased tensor representation where the local tensor constitutes a robust estimate of both the shape and the orientation (velocity) of the neighbourhood. A qualitative evaluation of this General Sequential Filter concept results in no detectable loss in accuracy when compared to conventional FIR (Finite Impulse Response) filters, but the computational complexity is reduced by several orders of magnitude. For the examples presented in this paper the attained speed-up is 5, 25 and 300 times for 2D, 3D and 4D data respectively. The magnitude of the attained speed-up implies that complex spatio-temporal analysis can be performed using standard hardware, such as a powerful workstation, in close to real time. Due to the soft implementation of the convolver and the tree structure of the sequential filtering approach, the processing is simple to reconfigure for the outer as well as the inner (vector length) dimensionality of the signal. The implementation was made in AVS (Application Visualization System) using modules written in C.
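
    The coefficient-count argument is easiest to see for an exactly separable filter. The sketch below (illustrative only, not the paper's optimized quadrature filter trees; all parameter choices are assumptions) compares a full 3D Gaussian kernel with the equivalent sequence of three 1D kernels:

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

def gauss1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

# Illustration only: an exactly separable 3D Gaussian, not the paper's
# optimized quadrature-filter trees.
g1 = gauss1d(sigma=1.5, radius=4)                               # 9-tap 1D kernel
g3 = g1[:, None, None] * g1[None, :, None] * g1[None, None, :]  # 9x9x9 kernel

volume = np.random.rand(32, 32, 32)

# Direct 3D convolution: 9**3 = 729 coefficients per output sample.
direct = convolve(volume, g3, mode='wrap')

# Sequential 1D convolutions: 3 * 9 = 27 coefficients per output sample.
seq = volume
for axis in range(3):
    seq = convolve1d(seq, g1, axis=axis, mode='wrap')

# Periodic boundaries, so the two results agree up to rounding error.
print('max abs difference:', np.abs(direct - seq).max())
print('coefficient ratio :', g3.size / (3 * g1.size))   # 27x fewer coefficients
```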

  • 3.
    Andersson, Thord
    et al.
    n/a.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Farnebäck, Gunnar
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    WITAS Project at Computer Vision Laboratory; A status report (Jan 1998)1998In: Proceedings of the SSAB symposium on image analysis: Uppsala, Sweden, 1998, p. 113-116Conference paper (Refereed)
    Abstract [en]

    WITAS will be engaged in goal-directed basic research in the area of intelligent autonomous vehicles and other autonomous systems. In this paper an overview of the project is given together with a presentation of our research interests in the project. The current status of our part in the project is also given.

  • 4.
    Bigun, Josef
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multidimensional orientation estimation with applications to texture analysis and optical flow1991In: IEEE Transaction on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 13, no 8, p. 775-790Article in journal (Refereed)
    Abstract [en]

    The problem of detection of orientation in finite dimensional Euclidean spaces is solved in the least squares sense. In particular, the theory is developed for the case when such orientation computations are necessary at all local neighborhoods of the n-dimensional Euclidean space. Detection of orientation is shown to correspond to fitting an axis or a plane to the Fourier transform of an n-dimensional structure. The solution of this problem is related to the solution of a well-known matrix eigenvalue problem. Moreover, it is shown that the necessary computations can be performed in the spatial domain without actually doing a Fourier transformation. Along with the orientation estimate, a certainty measure, based on the error of the fit, is proposed. Two applications in image analysis are considered: texture segmentation and optical flow. An implementation for 2-D (texture features) as well as 3-D (optical flow) is presented. In the case of 2-D, the method exploits the properties of the complex number field to by-pass the eigenvalue analysis, improving the speed and the numerical stability of the method. The theory is verified by experiments which confirm accurate orientation estimates and reliable certainty measures in the presence of noise. The comparative results indicate that the proposed theory produces algorithms computing robust texture features as well as optical flow. The computations are highly parallelizable and can be used in realtime image analysis since they utilize only elementary functions in a closed form (up to dimension 4) and Cartesian separable convolutions.
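
    For the 2D case, the shortcut mentioned in the abstract, using the complex number field to bypass the eigenvalue analysis, amounts to averaging a double-angle representation of the gradient. The sketch below is a minimal illustration of that idea, assuming plain finite-difference gradients and Gaussian averaging in place of the paper's filter sets:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_orientation(image, sigma=2.0):
    """2D local orientation via the double-angle (complex) representation.

    Sketch only: plain gradients and Gaussian averaging stand in for the
    paper's filters. Returns orientation in [0, pi) and a certainty
    measure (magnitude of the averaged double-angle vector).
    """
    gy, gx = np.gradient(image.astype(float))
    # Squaring the complex gradient maps directions theta and theta + pi
    # to the same double angle 2*theta, so opposite gradients reinforce.
    z = (gx + 1j * gy) ** 2
    z_avg = gaussian_filter(z.real, sigma) + 1j * gaussian_filter(z.imag, sigma)
    orientation = np.mod(np.angle(z_avg) / 2.0, np.pi)
    certainty = np.abs(z_avg)
    return orientation, certainty

# Tiny check: a pattern varying only along x gives orientation ~0.
x = np.linspace(0, 4 * np.pi, 64)
img = np.cos(x)[None, :].repeat(64, axis=0)
ori, cert = local_orientation(img)
print(np.round(ori[32, 32], 3))
```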

  • 5.
    Bigun, Josef
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multidimensional orientation: texture analysis and optical flow1991In: Proceedings of the SSAB Symposium on Image Analysis: Stockholm, 1991, p. 110-113Conference paper (Refereed)
  • 6.
    Doherty, Patrick
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Granlund, Gösta
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Kuchcinski, Krzysztof
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Sandewall, Erik Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, CASL - Cognitive Autonomous Systems Laboratory.
    Nordberg, Klas
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Skarman, Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, EMTEK - Entity for Methodology and Technology of Knowledge Management.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    The WITAS unmanned aerial vehicle project2000In: Proceedings of the 14th European Conference on Artificial Intelligence (ECAI) / [ed] Werner Horn, Amsterdam: IOS Press, 2000, p. 747-755Conference paper (Refereed)
    Abstract [en]

    The purpose of this paper is to provide a broad overview of the WITAS Unmanned Aerial Vehicle Project. The WITAS UAV project is an ambitious, long-term basic research project with the goal of developing technologies and functionalities necessary for the successful deployment of a fully autonomous UAV operating over diverse geographical terrain containing road and traffic networks. The project is multi-disciplinary in nature, requiring many different research competences and covering a broad spectrum of basic research issues, many of which relate to current topics in artificial intelligence. Topics considered include knowledge representation issues, active vision systems and their integration with deliberative/reactive architectures, helicopter modeling and control, ground operator dialogue systems, actual physical platforms, and a number of simulation techniques.

  • 7.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Larsson, Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wadströmer, Niclas
    FOI.
    Ahlberg, Jörgen
    Termisk Systemteknik AB.
    Online Learning of Correspondences between Images2013In: IEEE Transaction on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 35, no 1, p. 118-129Article in journal (Refereed)
    Abstract [en]

    We propose a novel method for iterative learning of point correspondences between image sequences. Points moving on surfaces in 3D space are projected into two images. Given a point in either view, the considered problem is to determine the corresponding location in the other view. The geometry and distortions of the projections are unknown as is the shape of the surface. Given several pairs of point-sets but no access to the 3D scene, correspondence mappings can be found by excessive global optimization or by the fundamental matrix if a perspective projective model is assumed. However, an iterative solution on sequences of point-set pairs with general imaging geometry is preferable. We derive such a method that optimizes the mapping based on Neyman's chi-square divergence between the densities representing the uncertainties of the estimated and the actual locations. The densities are represented as channel vectors computed with a basis function approach. The mapping between these vectors is updated with each new pair of images such that fast convergence and high accuracy are achieved. The resulting algorithm runs in real-time and is superior to state-of-the-art methods in terms of convergence and accuracy in a number of experiments.

  • 8.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Exploratory learning structures in artificial cognitive systems2009In: Image and Vision Computing, ISSN 0262-8856, Vol. 27, no 11, p. 1671-1687Article in journal (Refereed)
    Abstract [en]

    The major goal of the COSPAL project is to develop an artificial cognitive system architecture, with the ability to autonomously extend its capabilities. Exploratory learning is one strategy that allows an extension of competences as provided by the environment of the system. Whereas classical learning methods aim at best for a parametric generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class, and to apply generalization on a conceptual level, resulting in new models. Incremental or online learning is a crucial requirement to perform exploratory learning. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on the organization of cognitive systems for efficient operation. Learning is used over the entire system. It is organized in the form of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at symbolic respectively sub-symbolic level, and the innermost loop performs the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail. We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user (teacher) and system is a major difference to classical robotics systems, where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems. We furthermore address the issue of bootstrapping the system, and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is however work in progress and no final results are available yet. The available preliminary results that we have achieved so far, clearly point towards a successful proof of the architecture concept.

  • 9.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Jonsson, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Moe, Anders
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Exploratory Learning Structure in Artificial Cognitive Systems2006Report (Other academic)
    Abstract [en]

    One major goal of the COSPAL project is to develop an artificial cognitive system architecture with the capability of exploratory learning. Exploratory learning is a strategy that allows to apply generalization on a conceptual level, resulting in an extension of competences. Whereas classical learning methods aim at best possible generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class. Incremental or online learning is an inherent requirement to perform exploratory learning.

    Exploratory learning requires new theoretic tools and new algorithms. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning and in this paper we focus on its algorithmic aspect. Learning is performed in terms of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at symbolic respectively sub-symbolic level, and the innermost loop performs the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail.

    We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user (’teacher’) and system is a major difference to most existing systems where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems.

    We furthermore address the issue of bootstrapping the system, and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is however work in progress and no final results are available yet. The preliminary results that we have achieved so far clearly point towards a successful proof of the architecture concept.

  • 10.
    Felsberg, Michael
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Jonsson, Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Granlund, Gösta
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Exploratory Learning Structure in Artificial Cognitive Systems2007In: International Cognitive Vision Workshop, Bielefeld: eCollections, 2007Conference paper (Other academic)
    Abstract [en]

    One major goal of the COSPAL project is to develop an artificial cognitive system architecture with the capability of exploratory learning. Exploratory learning is a strategy that allows to apply generalization on a conceptual level, resulting in an extension of competences. Whereas classical learning methods aim at best possible generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class. Incremental or online learning is an inherent requirement to perform exploratory learning.

    Exploratory learning requires new theoretic tools and new algorithms. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning and in this paper we focus on its algorithmic aspect. Learning is performed in terms of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at symbolic respectively sub-symbolic level, and the innermost loop performs the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail.

    We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user ('teacher') and system is a major difference to most existing systems where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems.

    We furthermore address the issue of bootstrapping the system, and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is however work in progress and no final results are available yet. The available preliminary results that we have achieved so far, clearly point towards a successful proof of the architecture concept.

  • 11.
    Forssen, Per-Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Channel Representation of Colour Images2002Report (Other academic)
    Abstract [en]

    In this report we describe how an RGB component colour image may be expanded into a set of channel images, and how the original colour image may be reconstructed from these. We also demonstrate the effect of averaging on the channel images and how it differs from conventional averaging. Finally we demonstrate how boundaries can be detected as a change in the confidence of colour state.
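
    As a rough illustration of the encoding step, the sketch below encodes a single scalar into overlapping cos² channels, one common kernel choice in the channel-representation literature; the number of channels, kernel width and the simple centroid decoding are assumptions, not the report's exact construction:

```python
import numpy as np

def channel_encode(value, n_channels, vmin=0.0, vmax=1.0):
    """Encode a scalar into overlapping cos^2 channels (a common choice in
    the channel-representation literature; the exact kernel and spacing
    here are illustrative assumptions)."""
    centers = np.linspace(vmin, vmax, n_channels)
    spacing = centers[1] - centers[0]
    d = (value - centers) / spacing          # distance in channel units
    resp = np.cos(np.pi * d / 3.0) ** 2      # cos^2 kernel, 3-channel overlap
    resp[np.abs(d) >= 1.5] = 0.0
    return resp, centers

def channel_decode(resp, centers):
    """Approximate decoding: centroid over the strongest channel and its
    neighbours (simplified; the report uses a model-based local decoding)."""
    k = int(np.argmax(resp))
    lo, hi = max(k - 1, 0), min(k + 2, len(resp))
    w = resp[lo:hi]
    return float(np.sum(w * centers[lo:hi]) / np.sum(w))

resp, centers = channel_encode(0.37, n_channels=8)
print(np.round(resp, 3))
print(round(channel_decode(resp, centers), 3))   # close to 0.37
```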

  • 12.
    Granlund, Gösta H.
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Westelius, Carl-Johan
    n/a.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Issues in Robot Vision1994In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 12, no 3, p. 131-148Article in journal (Refereed)
    Abstract [en]

    In this paper, we discuss certain issues regarding robot vision. The main theme is the importance of the choice of information representation; we will see its implications in different parts of a robot vision structure. We deal with aspects of pre-attentive versus attentive vision, control mechanisms for low-level focus of attention, and representation of motion as the orientation of hyperplanes in multidimensional time-space. Issues of scale are touched upon, and finally a depth-from-stereo algorithm based on quadrature filter phase is presented.

  • 13.
    Granlund, Gösta H.
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Low Level Operations1995In: Signal Processing for Computer Vision / [ed] Gösta H. Granlund and Hans Knutsson, Dordrecht: Kluwer , 1995, p. 97-116Chapter in book (Refereed)
    Abstract [en]

    This chapter gives an introductory treatment of operations and representations for low-level features in multi-dimensional spaces. An important issue is how to combine contributions from several filters to provide robust statements in accordance with certain low-level models. This chapter gives an introduction to the problems of unambiguous mappings in multi-dimensional spaces.

  • 14.
    Granlund, Gösta
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Skarman, Erik
    Linköping University, Department of Computer and Information Science, EMTEK - Entity for Methodology and Technology of Knowledge Management. Linköping University, The Institute of Technology.
    Sandewall, Erik
    Linköping University, Department of Computer and Information Science, CASL - Cognitive Autonomous Systems Laboratory. Linköping University, The Institute of Technology.
    WITAS: An Intelligent Autonomous Aircraft Using Active Vision2000In: Proceedings of the UAV 2000 International Technical Conference and Exhibition (UAV), Paris, France: Euro UVS , 2000Conference paper (Refereed)
    Abstract [en]

    The WITAS Unmanned Aerial Vehicle Project is a long term basic research project located at Linköping University (LIU), Sweden. The project is multi-disciplinary in nature and involves cooperation with different departments at LIU, and a number of other universities in Europe, the USA, and South America. In addition to academic cooperation, the project involves collaboration with a number of private companies supplying products and expertise related to simulation tools and models, and the hardware and sensory platforms used for actual flight experimentation with the UAV. Currently, the project is in its second phase with an intended duration from 2000-2003.

    This paper will begin with a brief overview of the project, but will focus primarily on the computer vision related issues associated with interpreting the operational environment which consists of traffic and road networks and vehicular patterns associated with these networks.

  • 15.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Combining shadow detection and simulation for estimation of vehicle size and position2009In: Pattern Recognition Letters, ISSN 0167-8655, Vol. 30, no 8, p. 751-759Article in journal (Refereed)
    Abstract [en]

    This paper presents a method that combines shadow detection and a 3D box model, including shadow simulation, for estimation of the size and position of vehicles. We define a similarity measure between a simulated image of a 3D box, including the box shadow, and a captured image that is classified into background/foreground/shadow. The similarity measure is used in an optimization procedure to find the optimal box state. It is shown in a number of experiments and examples how the combination of shadow detection and simulation improves the estimation compared to using only detection or only simulation, especially when the shadow detection or the simulation is inaccurate. We also describe a tracking system that utilizes the estimated 3D boxes, including highlight detection, a spatial window instead of a time-based window for predicting heading, and refined box size estimates obtained by weighting accumulated estimates depending on view. Finally, we show example results.

  • 16.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Goals and status within the IVSS project2006In: Seminar on "Cognitive vision in traffic analyses": Lund, Sweden, 2006Conference paper (Refereed)
  • 17.
    Järvinen, Arto
    et al.
    n/a.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Study of information mapping in Kohonen Networks1989Report (Other academic)
  • 18.
    Knutsson, Hans
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Biomedical Engineering, Medical Informatics.
    Andersson, Mats
    Linköping University, The Institute of Technology. Linköping University, Department of Biomedical Engineering, Medical Informatics.
    Borga, Magnus
    Linköping University, The Institute of Technology. Linköping University, Department of Biomedical Engineering, Medical Informatics.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Automated generation of representations in vision2000In: International Conference on Pattern Recognition ICPR,2000, Barcelona, Spain: IEEE , 2000, p. 59-66 vol.3Conference paper (Refereed)
    Abstract [en]

    This paper presents a general strategy for automated generation of efficient representations in vision. The approach is highly task oriented and what constitutes the relevant information is defined by a set of examples. The examples are pairs of situations that are dependent through the chosen feature but are otherwise independent. Particularly important concepts in the work are mutual information and canonical correlation. How visual operators and representations can be generated from examples is presented for a number of features, e.g. local orientation, disparity and motion. Interesting similarities to biological vision functions are observed. The results clearly demonstrate the potential of combining advanced filtering techniques and learning strategies based on canonical correlation analysis (CCA).
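
    The canonical correlation machinery referred to above can be sketched in a few lines. The toy example below (my own numeric example, not the paper's filter-learning pipeline) computes canonical correlations between two views that share a latent signal:

```python
import numpy as np

def cca(X, Y, reg=1e-8):
    """Minimal canonical correlation analysis via whitening + SVD.

    X: (n_samples, dx), Y: (n_samples, dy). Returns the canonical
    correlations and projection bases Wx, Wy. A sketch of the CCA
    machinery only, not the paper's representation-learning pipeline.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is SPD here).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    return s, inv_sqrt(Cxx) @ U, inv_sqrt(Cyy) @ Vt.T

# Two views that share one latent signal plus independent noise.
rng = np.random.default_rng(0)
t = rng.standard_normal(1000)
X = np.c_[t + 0.1 * rng.standard_normal(1000), rng.standard_normal(1000)]
Y = np.c_[rng.standard_normal(1000), -t + 0.1 * rng.standard_normal(1000)]
corr, Wx, Wy = cca(X, Y)
print(np.round(corr, 2))   # first canonical correlation close to 1
```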

  • 19.
    Knutsson, Hans
    et al.
    Linköping University, Department of Biomedical Engineering. Linköping University, The Institute of Technology.
    Andersson, Mats
    Linköping University, The Institute of Technology.
    Haglund, Leif
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Orientation and Velocity1995In: Signal Processing for Computer Vision / [ed] Gösta H. Granlund and Hans Knutsson, Dordrecht: Kluwer , 1995, p. 219-258Chapter in book (Refereed)
    Abstract [en]

    This chapter introduces the use of tensors in estimation of local structure and orientation. The tensor representation is shown to be crucial to unambiguous and continuous representation of local orientation in multiple dimensions. In addition to orientation the tensor representation also conveys the degree and type of local anisotropy. The orientation estimation approach is developed in detail for two, three and four dimensions and is shown to be extendable to higher dimensions. Examples and performance measures are given for processing of images, volumes and time sequences.

  • 20.
    Knutsson, Hans
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Andersson, Mats
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Advanced Filter Design1999In: Proceedings of the 11th Scandinavian Conference on Image Analysis: Greenland, SCIA , 1999, p. 185-193Conference paper (Refereed)
    Abstract [en]

    This paper presents a general approach for obtaining optimal filters as well as filter sequences. A filter is termed optimal when it minimizes a chosen distance measure with respect to an ideal filter. The method allows specification of the metric via simultaneous weighting functions in multiple domains, e.g. the spatio-temporal space and the Fourier space. Metric classes suitable for optimization of localized filters for multidimensional signal processing are suggested and discussed.

    It is shown how convolution kernels for efficient spatio-temporal filtering can be implemented in practical situations. The method is based on applying a set of jointly optimized filter kernels in sequence. The optimization of sequential filters is performed using a novel recursive optimization technique. A number of optimization examples are given that demonstrate the role of key parameters such as: number of kernel coefficients, number of filters in sequence, spatio-temporal and Fourier space metrics.

    The sequential filtering method enables filtering using only a small fraction of the number of filter coefficients required by conventional filtering. In multidimensional filtering applications the method potentially outperforms both standard convolution and FFT-based approaches by two-digit factors.
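
    The core of such an optimization, minimizing a weighted distance to an ideal frequency response, can be written as a weighted least-squares problem. The sketch below designs a single small 1D kernel under a Fourier-domain weighting only; the papers additionally weight the spatio-temporal domain and jointly optimize whole filter sequences, so this is a simplified illustration with assumed parameters:

```python
import numpy as np

# Sketch: design a 9-tap 1D kernel whose frequency response approximates an
# ideal lowpass, minimizing a frequency-weighted least-squares distance.
n_taps = 9
taps = np.arange(n_taps) - n_taps // 2          # symmetric tap positions
freqs = np.linspace(-np.pi, np.pi, 513)

F_ideal = (np.abs(freqs) < np.pi / 2).astype(float)   # ideal lowpass response
weight = 1.0 / (np.abs(freqs) + 0.1)                  # emphasize low frequencies

# The frequency response is linear in the coefficients:
#   F(u) = sum_k c_k * exp(-i * u * k)
A = np.exp(-1j * np.outer(freqs, taps))

# Weighted least squares: minimize || W (A c - F_ideal) ||^2.
Aw = weight[:, None] * A
bw = (weight * F_ideal).astype(complex)
c, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
c = c.real                                            # symmetric real target

print(np.round(c, 4))
response = np.abs(A @ c)
print('passband ripple:', np.round(np.ptp(response[np.abs(freqs) < np.pi / 2]), 3))
```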

  • 21.
    Knutsson, Hans
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Andersson, Mats
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multiple Space Filter Design1999In: Proceedings of the SSAB symposium on image analysis: Gothenburg, 1999Conference paper (Refereed)
    Abstract [en]

    This paper presents a general approach for obtaining optimal filters as well as filter sequences. A filter is termed optimal when it minimizes a chosen distance measure with respect to an ideal filter. The method allows specification of the metric via simultaneous weighting functions in multiple domains, e.g. the spatio-temporal space and the Fourier space. It is shown how convolution kernels for efficient spatio-temporal filtering can be implemented in practical situations. The method is based on applying a set of jointly optimized filter kernels in sequence. The optimization of sequential filters is performed using a novel recursive optimization technique. A number of optimization examples are given that demonstrate the role of key parameters such as: number of kernel coefficients, number of filters in sequence, and spatio-temporal and Fourier space metrics. In multidimensional filtering applications the method potentially outperforms both standard convolution and FFT-based approaches by two-digit factors.

  • 22.
    Krebs, Andreas
    et al.
    Dept. Aerodynamics/Fluid Mech., BTU Cottbus, Germany.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Optimization of Quadrature Filters Based on the Numerical Integration of Improper Integrals2011In: Pattern Recognition: 33rd annual DAGM conference, Frankfurt, Germany / [ed] Rudolf Mester and Michael Felsberg, Springer Berlin/Heidelberg, 2011, Vol. 6835, p. 91-100Conference paper (Refereed)
    Abstract [en]

    Convolution kernels are a commonly used tool in computer vision. These kernels are often specified by an ideal frequency response and the actual filter coefficients are obtained by minimizing some weighted distance with respect to the ideal filter. State-of-the-art approaches usually replace the continuous frequency response by a discrete Fourier spectrum with a multitude of samples compared to the kernel size, depending on the smoothness of the ideal filter and the weight function. The number of samples in the Fourier domain grows exponentially with the dimensionality and becomes a bottleneck concerning memory requirements.

    In this paper we propose a method that avoids the discretization of the frequency space and makes filter optimization feasible in higher dimensions than the standard approach. The result no longer depends on the choice of the sampling grid and it remains exact even if the weighting function is singular at the origin. The resulting improper integrals are efficiently computed using Gauss-Jacobi quadrature.
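
    As a concrete illustration of the quadrature rule named in the abstract, the snippet below applies Gauss-Jacobi quadrature to a textbook integrand with endpoint singularities (a toy integrand, not the paper's filter-design objective):

```python
import numpy as np
from scipy.special import roots_jacobi, j0

# Gauss-Jacobi quadrature absorbs the singular factor (1-x)^alpha (1+x)^beta
# into the weight, so the singularities never have to be sampled.
# Toy example:  I = int_{-1}^{1} cos(x) / sqrt(1 - x^2) dx = pi * J0(1)

alpha = beta = -0.5                      # weight (1-x)^{-1/2} (1+x)^{-1/2}
nodes, weights = roots_jacobi(10, alpha, beta)
approx = np.sum(weights * np.cos(nodes))

print(approx)              # ~2.4039
print(np.pi * j0(1.0))     # reference value
```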

  • 23. Källhammer, Jan-Erik
    et al.
    Eriksson, Dick
    Granlund, Gösta
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Felsberg, Michael
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Johansson, Björn
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Near Zone Pedestrian Detection using a Low-Resolution FIR Sensor2007In: Intelligent Vehicles Symposium, 2007 IEEE, Istanbul, Turkey: IEEE, 2007, p. 339-345Conference paper (Refereed)
    Abstract [en]

    This paper explores the possibility to use a single low-resolution FIR camera for detection of pedestrians in the near zone in front of a vehicle. A low resolution sensor reduces the cost of the system, as well as the amount of data that needs to be processed in each frame.

    We present a system that makes use of hot-spots and image positions of a near constant bearing to detect potential pedestrians. These detections provide seeds for an energy minimization algorithm that fits a pedestrian model to the detection. Since false alarms are hard to tolerate, the pedestrian model is then tracked, and the distance-to-collision (DTC) is measured by integrating size change measurements at sub-pixel accuracy, and the car velocity. The system should only engage braking for detections on a collision course, with a reliably measured DTC.

    Preliminary experiments on a number of recorded near collision sequences indicate that our method may be useful for ranges up to about 10m using an 80x60 sensor, and somewhat more using a 160x120 sensor. We also analyze the robustness of the evaluated algorithm with respect to dead pixels, a potential problem for low-resolution sensors.
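
    The distance-to-collision estimate rests on a simple relation: for an approaching object of roughly constant physical size, the time to collision is approximately the image size divided by its growth rate, and the DTC is that time multiplied by the vehicle speed. A back-of-envelope sketch (illustrative numbers and function names, not the paper's sub-pixel tracking implementation):

```python
# For a constant-size object approaching at constant speed, image size s
# grows as s = f*S/Z, so time to collision TTC ~= s / (ds/dt) and
# DTC ~= TTC * vehicle_speed. The actual system integrates sub-pixel size
# changes over a tracked pedestrian model; everything below is illustrative.

def distance_to_collision(size_now_px, size_prev_px, dt_s, vehicle_speed_mps):
    growth_rate = (size_now_px - size_prev_px) / dt_s      # pixels / second
    if growth_rate <= 0:
        return float('inf')                                 # not closing in
    ttc = size_now_px / growth_rate                         # seconds
    return ttc * vehicle_speed_mps                          # metres

# Pedestrian image height grows from 40 to 41 px over one 1/25 s frame,
# ego speed 8 m/s (~29 km/h):
print(round(distance_to_collision(41.0, 40.0, 1.0 / 25.0, 8.0), 1), 'm')
```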

  • 24.
    Merino, Luis
    et al.
    Pablo de Olavide University, Crta. Utrera km. 1, 41013 Seville, Spain.
    Caballero, Fernando
    Robotics, Vision and Control Group, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain.
    Ferruz, Joaquín
    Robotics, Vision and Control Group, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssen, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ollero, Anibal
    Robotics, Vision and Control Group, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain.
    Multi-UAV Cooperative Perception Techniques2007In: Multiple Heterogeneous Unmanned Aerial Vehicles / [ed] Aníbal Ollero and Ivan Maza, Berlin / Heidelberg: Springer , 2007, Vol. 37, p. 67-110Chapter in book (Other (popular science, discussion, etc.))
    Abstract [en]

    This Chapter is devoted to the cooperation of multiple UAVs for environment perception. First, probabilistic methods for multi-UAV cooperative perception are analyzed. Then, the problem of multi-UAV detection, localization and tracking is described, and local image processing techniques are presented. Then, the Chapter shows two approaches based on the Information Filter and on evidence grid representations.

  • 25.
    Merino, Luis
    et al.
    Escuela Politécnica Superior, Universidad Pablo de Olavide, 41013 Sevilla, Spain.
    Caballero, Fernando
    Escuela Superior de Ingenieros, Universidad de Sevilla, 41092 Sevilla, Spain.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ferruz, Joaquín
    Escuela Superior de Ingenieros, Universidad de Sevilla, 41092 Sevilla, Spain.
    Martinez-de Dios, Jose Ramiro
    Escuela Superior de Ingenieros, Universidad de Sevilla, 41092 Sevilla, Spain.
    Moe, Anders
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ollero, Anibal
    Escuela Superior de Ingenieros, Universidad de Sevilla, 41092 Sevilla, Spain.
    Single and Multi-UAV Relative Position Estimation Based on Natural Landmarks2007In: Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy / [ed] Kimon P. Valavanis, Netherlands: Springer , 2007, p. 267-307Chapter in book (Other (popular science, discussion, etc.))
    Abstract [en]

    This Chapter presents a vision-based method for unmanned aerial vehicle (UAV) motion estimation that uses as input an image motion field obtained from matches of point-like features. The Chapter enhances vision-based techniques developed for single-UAV localization and demonstrates how they can be modified to deal with the problem of multi-UAV relative position estimation. The proposed approach is built upon the assumption that if different UAVs identify, using their cameras, common objects in a scene, the relative pose displacement between the UAVs can be computed from these correspondences under certain assumptions. However, although point-like features are suitable for local UAV motion estimation, finding matches between images collected with different cameras is a difficult task that may be overcome using blob features. Results justify the proposed approach.

  • 26.
    Merino, Luis
    et al.
    Pablo de Olavide University, Crta. Utrera km. 1, 41013 Seville, Spain.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Caballero, Fernando
    System Engineering and Automation Department.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Martinez-de Dios, Jose Ramiro
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Nordberg, Klas
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Ollero, Anibal
    Department of E.S Ingenieros, University of Seville.
    Vision-Based Multi-UAV Position Estimation2006In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, Vol. 13, no 3, p. 53-62Article in journal (Refereed)
    Abstract [en]

    This paper describes a method for vision-based unmanned aerial vehicle (UAV) motion estimation from multiple planar homographies. The paper also describes the determination of the relative displacement between different UAVs employing techniques for blob feature extraction and matching. It then presents and shows experimental results of the application of the proposed technique to multi-UAV detection of forest fires.  

  • 27.
    Nordberg, Klas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Doherty, Patrick
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Farnebäck, Gunnar
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Granlund, Gösta
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Vision for a UAV helicopter2002In: International Conference on Intelligent Robots and Systems (IROS), Workshop on Aerial Robotics: Lausanne, Switzerland, 2002Conference paper (Other academic)
    Abstract [en]

    This paper presents an overview of the basic and applied research carried out by the Computer Vision Laboratory, Linköping University, in the WITAS UAV Project. This work includes customizing and redesigning vision methods to fit the particular needs and restrictions imposed by the UAV platform, e.g., for low-level vision, motion estimation, navigation, and tracking. It also includes a new learning structure for association of perception-action activations, and a runtime system for implementation and execution of vision algorithms. The paper also contains a brief introduction to the WITAS UAV Project.

  • 28.
    Nordberg, Klas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Doherty, Patrick
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Andersson, Per
    A flexible runtime system for image processing in a distributed computational environment for an unmanned aerial vehicle2006In: International Journal of Pattern Recognition and Artificial Intelligence, ISSN 0218-0014, Vol. 20, no 5, p. 763-780Article in journal (Refereed)
    Abstract [en]

    A runtime system for implementation of image processing operations is presented. It is designed for working in a flexible and distributed environment related to the software architecture of a newly developed UAV system. The software architecture can be characterized at a coarse scale as a layered system, with a deliberative layer at the top, a reactive layer in the middle, and a processing layer at the bottom. At a finer scale each of the three levels is decomposed into sets of modules which communicate using CORBA, allowing system development and deployment on the UAV to be made in a highly flexible way. Image processing takes place in a dedicated module located in the process layer, and is the main focus of the paper. This module has been designed as a runtime system for data flow graphs, allowing various processing operations to be created online and on demand by the higher levels of the system. The runtime system is implemented in Java, which allows development and deployment to be made on a wide range of hardware/software configurations. Optimizations for particular hardware platforms have been made using Java's native interface.

  • 29. Ollero, Anibal
    et al.
    Lacroix, Simon
    Merino, Luis
    Gancet, Jeremi
    Wiklund, Johan
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Remuß, Volker
    Perez, Iker Veiga
    Gutiérrez, Luis G.
    Viegas, Domingos Xavier
    Benitez, Miguel Angel González
    Mallet, Anthony
    Alami, Rachid
    Chatila, Raja
    Hommel, Günter
    Lechuga, F. J. Colmenero
    Arrue, Begoña C.
    Ferruz, Joaquin
    Martinez-de Dios, Jose Ramiro
    Caballero, Fernando
    Multiple Eyes in the Skies2005In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 12, no 2, p. 46-57Article in journal (Refereed)
    Abstract [en]

    The management of environmental and industrial disasters, search and rescue operations, surveillance of natural scenarios, environmental monitoring, and many other field robotics applications require high mobility and the need to reach locations that are difficult to access with ground vehicles. In many cases, the use of aerial vehicles is the best way to approach the objective to get information or to deploy instrumentation. Unmanned air vehicles (UAVs) have significantly increased their flight performance and autonomous onboard processing capabilities in the last ten years. But a single aerial vehicle equipped with a large array of different sensors of various modalities is limited at any time to a single viewpoint. A team of aerial vehicles, however, can simultaneously collect information from multiple locations and exploit the information derived from multiple disparate points. Furthermore, having a team with multiple heterogeneous aerial vehicles offers additional advantages due to the possibility of beneficial complementarities of the vehicles.

  • 30.
    Svensson, Björn
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Andersson, Mats
    Linköping University, Department of Biomedical Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Issues on filter networks for efficient convolution2004In: Proceedings of the Swedish Symposium on Image Analysis (2004), Uppsala, 2004, p. 94-97Conference paper (Other academic)
    Abstract [en]

    This paper presents the new project "Efficient Convolution Operators for Image Processing of Volumes and Volume Sequences". The project is carried out in collaboration with Context Vision AB.

    By using sequential filtering on 3D and 4D data, the number of nonzero filter coefficients for a desired filter set can be significantly reduced. A sequential convolution structure in combination with a convolver designed for sparse filters is a powerful tool for filtering of multidimensional signals.

    The project mainly concerns the design of filter networks, that approximate a desired filter set, while keeping the computational load as low as possible. This is clearly an optimization problem, but it can be formulated in several different ways due to the complexity.

    The project is in an initial state and the paper focuses on experiences from prior work and discusses possible approaches for future progress.

  • 31.
    Westelius, Carl-Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Robust Vergence Control Using Scale-Space Phase Information1992Report (Other academic)
  • 32.
    Westelius, Carl-Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westin, Carl-Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Phase-based Disparity Estimation1995In: Vision as Process: Basic Research on Computer Vision Systems / [ed] J. L. Crowley, H. I. Christensen, Berlin: Springer-Verlag , 1995, p. 157-178Chapter in book (Other academic)
    Abstract [en]

    The problem of estimating depth information from two or more images of a scene is one which has received considerable attention over the years, and a wide variety of methods have been proposed to solve it [Barnard and Fischler, 1982; Fleck, 1991]. Methods based on correlation and methods using some form of feature matching between the images have found most widespread use. Of these, the latter have attracted increasing attention since the work of Marr [Marr, 1982], in which the features are zero-crossings on varying scales. These methods share an underlying basis of spatial domain operations.

    In recent years, however, increasing interest has been shown in computational models of vision based primarily on a localized frequency domain representation - the Gabor representation [Gabor, 1946; Adelson and Bergen, 1985], first suggested in the context of computer vision by Granlund [Granlund, 1978].
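
    The phase-to-disparity step at the heart of such methods can be sketched at a single scale: filter both scanlines with the same complex (Gabor) quadrature filter and convert the local phase difference into a shift using the filter's centre frequency. The example below uses a made-up sinusoidal test signal matched to the filter frequency; the chapter's method is hierarchical (scale-space) with vergence control, which is not shown:

```python
import numpy as np

def gabor_kernel(freq, sigma, radius):
    x = np.arange(-radius, radius + 1)
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * freq * x)

freq, sigma, radius = 0.4, 4.0, 12
g = gabor_kernel(freq, sigma, radius)

# Made-up test signal: a single sinusoid matched to the filter's centre
# frequency, shifted by a known disparity between the two scanlines.
# Real signals need a local-frequency estimate instead of the nominal freq.
x = np.arange(256)
true_disp = 3.0
left = np.cos(freq * x)
right = np.cos(freq * (x - true_disp))

ql = np.convolve(left, g, mode='same')    # complex (quadrature) responses
qr = np.convolve(right, g, mode='same')

# Local phase difference -> shift; valid while |phase difference| < pi.
phase_diff = np.angle(ql * np.conj(qr))
disparity = phase_diff / freq
print(round(float(np.median(disparity[50:200])), 2))   # ~3.0
```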

  • 33.
    Westelius, Carl-Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westin, Carl-Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Prototyping, Visualization and Simulation Using the Application Visualization System1994In: Experimental Environments for Computer Vision and Image Processing / [ed] H. I. Christensen and J. L. Crowley, Singapore: World Scientific Publishing Co. Pte. Ltd. , 1994, p. 33-62Chapter in book (Other academic)
    Abstract [en]

    The Application Visualization System (AVS) software from Advanced Visual Systems Inc. is an interactive visualization environment for scientists, engineers and technical professionals. This report contains a short overview of the AVS software package and a discussion of its general performance. The software package has been actively used at the Computer Vision Laboratory, Linköping University, during the last three years, and in many applications. Examples are generating images from a virtual environment, simulation of a controllable robot with a stereo camera head, and visualization of multidimensional data structures. Lately we have also used AVS for handling communication between different processes, which may be distributed on different machines. AVS was primarily developed as a tool for visualization of complex data sets. However, another important aspect of the software is that it can be used as an advanced workbench for controlling networks of Unix processes (including external ones on different machine types) using simple visual programming.

  • 34.
    Westin, Carl-Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westelius, Carl-Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    ESPRIT Basic Research Action 7108, Vision as Process, DR.B.2: Integration of Multi-level Control Loops and FOA1994Report (Other academic)
  • 35.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Image Sequence Analysis for Tracking of Moving Objects1987Licentiate thesis, monograph (Other academic)
    Abstract [en]

    There are a number of different algorithms for object motion estimation in image sequences. Almost every algorithm is based on one of three different mathematical methods. An overview of these methods is given, together with some published examples of their application to object tracking.

    A method for tracking of multiple moving objects has been developed on a GOP 300 image processing system. This method works on image sequences with a stationary background, and can be divided into the following steps:

    1. Find the positions for all objects that have moved.
    2. Predict the new positions for all known objects.
    3. Match these two sets of points.
    4. Produce the required output.

    These steps are repeated for every sample of the sequence. As output from every processed sample in a test sequence, both a resulting image and a record in a data file have been generated. The resulting image is a copy of the actual sample with the active object identities overlaid at the corresponding positions. The resulting images have been stored on a video tape.
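
    A minimal sketch of the predict/match loop described by steps 1-4 above is given below. The constant-velocity prediction, the greedy nearest-neighbour matching and all numbers are illustrative assumptions, not the GOP 300 implementation.

        # Minimal sketch (illustrative only): one iteration of the
        # predict/match loop for tracking multiple moving objects.
        import numpy as np

        def predict(tracks):
            # Step 2: predict the new positions for all known objects
            # (constant-velocity assumption).
            return {oid: pos + vel for oid, (pos, vel) in tracks.items()}

        def match(predicted, detections, max_dist=10.0):
            # Step 3: match predicted positions against detected positions
            # (greedy nearest-neighbour assignment).
            assignments = {}
            free = list(range(len(detections)))
            for oid, p in predicted.items():
                if not free:
                    break
                d = [np.linalg.norm(detections[i] - p) for i in free]
                j = int(np.argmin(d))
                if d[j] <= max_dist:
                    assignments[oid] = free.pop(j)
            return assignments

        # Known objects: id -> (position, velocity).
        tracks = {1: (np.array([10.0, 10.0]), np.array([2.0, 0.0])),
                  2: (np.array([40.0, 20.0]), np.array([0.0, 3.0]))}

        # Step 1 is assumed done: positions of objects that have moved.
        detections = [np.array([12.5, 10.2]), np.array([40.3, 23.1])]

        assignments = match(predict(tracks), detections)
        print(assignments)   # step 4: report which detection belongs to which object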

  • 36.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Image Sequence Analysis for Object Tracking1987In: Proc. of The 5th Scandinavian Conference on Image Analysis: Stockholm, 1987, p. 641-648Conference paper (Refereed)
  • 37.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Tracking of Multiple Moving Objects1986In: Proceedings of the Second International Workshop on Time-Varying Image Processing and Moving Object Recognition: Florence, Italy / [ed] V. Cappellini, Amsterdam: Elsevier Science Publishers B.V. , 1986, p. 241-250Conference paper (Refereed)
  • 38.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Haglund, Leif
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Time Sequence Analysis Using Multi-Resolution Spatio-Temporal Filters1989In: Time-Varying Image Processing and Moving Object Recognition, 2: Florence, Italy / [ed] V. Cappellini, Amsterdam: Elsevier Science Publishers , 1989, p. 258-265Conference paper (Refereed)
    Abstract [en]

    A methodology for spatio-temporal filtering of image sequences is under development at the Computer Vision Laboratory, Linköping University. In recent years, scale analysis has been found to be a necessary tool in the analysis of stationary images. It is our belief that a combination of spatio-temporal filtering and scale analysis is required to get satisfactory results on image sequences. A growing need and the availability of more powerful computers are the most important reasons for this development. The objectives and proposed methods are discussed in relation to known properties of mammalian vision.

  • 39.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Generalized Convolver1995In: SCIA9: Uppsala, Sweden, 1995Conference paper (Refereed)
    Abstract [en]

    A scheme for performing generalized convolutions is presented. A flexible convolver, which runs on standard workstations, has been implemented. It is designed for maximum throughput and flexibility. The implementation incorporates spatio-temporal convolutions with configurable vector combinations. It can handle general multi-linear operations, i.e. tensor operations on multidimensional data of any order. The input data and the kernel coefficients can be of arbitrary vector length. The convolver is configurable for IIR filters in the time dimension. Other features of the implemented convolver are scattered kernel data, region of interest and subsampling. The implementation is done as a C-library and a graphical user interface in AVS (Application Visualization System).
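
    As an illustration of the "scattered kernel data" feature mentioned above, the sketch below applies a 2D filter stored as a list of (offset, coefficient) pairs, so the cost scales with the number of non-zero taps rather than the kernel's bounding box. The data representation and function are assumptions made for illustration, not the interface of the C library.

        # Minimal sketch (assumed representation, not the C library's API):
        # convolution with a sparse kernel given as scattered non-zero taps.
        # Boundaries are handled by wrap-around for brevity.
        import numpy as np

        def sparse_convolve2d(image, taps):
            """taps: list of ((dy, dx), coefficient) for the non-zero kernel points."""
            out = np.zeros_like(image, dtype=float)
            for (dy, dx), c in taps:
                out += c * np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            return out

        # A sparse 2D filter: five non-zero coefficients scattered on a cross.
        taps = [(( 0,  0),  4.0),
                ((-1,  0), -1.0), (( 1,  0), -1.0),
                (( 0, -1), -1.0), (( 0,  1), -1.0)]

        image = np.random.rand(64, 64)
        result = sparse_convolve2d(image, taps)   # cost scales with len(taps)
        print(result.shape)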

  • 40.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Generalized Convolver1996Report (Other academic)
    Abstract [en]

    A scheme for performing generalized convolutions is presented. A flexible convolver, which runs on standard workstations, has been implemented. It is designed for maximum throughput and flexibility. The implementation incorporates spatio-temporal convolutions with configurable vector combinations. It can handle general multilinear operations, i.e. tensor operations on multidimensional data of any order. The input data and the kernel coefficients can be of arbitrary vector length. The convolver is configurable for IIR filters in the time dimension. Other features of the implemented convolver are scattered kernel data, region of interest and subsampling. The implementation is done as a C-library and a graphical user interface in AVS (Application Visualization System).

  • 41.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wilson, Roland
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Hierarchical Stereo Algorithm1991Report (Other academic)
  • 42.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nicolas, Vincent
    Université catholique de Louvain, Communications and Remote Sensing Lab., Place du Levant, 2, B-1348 Louvain-La-Neuve, Belgium.
    Alface, Patrice R.
    Université catholique de Louvain, Communications and Remote Sensing Lab., Place du Levant, 2, B-1348 Louvain-La-Neuve, Belgium.
    Andersson, Mats
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    T-flash: Tensor Visualization in Medical Studio2009In: Tensors in Image Processing and Computer Vision, Springer London, 2009, p. 455-466Conference paper (Refereed)
    Abstract [en]

    Tensor valued data are frequently used in medical imaging. For a 3-dimensional second order tensor, such data imply at least six degrees of freedom for each voxel. The operator's ability to perceive this information is of utmost importance and in many cases a limiting factor for the interpretation of the data. In this paper we propose a decomposition of such tensor fields using the T-flash tensor glyph, which intuitively conveys important tensor features to a human observer. A Matlab implementation for visualization of single tensors is described in detail, and a VTK/ITK implementation for visualization of tensor fields has been developed as a Medical Studio component.
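
    A hedged illustration of the six degrees of freedom mentioned above: a symmetric 3x3 tensor decomposes into three eigenvalues (shape) and an orthogonal eigenvector frame (orientation). The sketch below computes Westin-style linear/planar/spherical shape measures; the actual T-flash glyph parameterization is not reproduced here.

        # Minimal sketch (assumption: a glyph built from eigen-decomposition;
        # not the T-flash parameterization itself).
        import numpy as np

        def tensor_shape(T):
            # Eigenvalues sorted so that l1 >= l2 >= l3 >= 0 for a
            # diffusion-like (positive semi-definite) tensor.
            w, V = np.linalg.eigh(T)
            l3, l2, l1 = np.clip(w, 0, None)
            trace = l1 + l2 + l3
            # Linear / planar / spherical measures (Westin-style); they sum to 1.
            cl = (l1 - l2) / trace
            cp = 2 * (l2 - l3) / trace
            cs = 3 * l3 / trace
            return (cl, cp, cs), V

        # A mostly "linear" tensor: one dominant eigenvalue.
        T = np.diag([3.0, 0.5, 0.4])
        (cl, cp, cs), V = tensor_shape(T)
        print("linear %.2f  planar %.2f  spherical %.2f" % (cl, cp, cs))

    The three eigenvalues together with the three orientation angles of the eigenvector frame account for the six degrees of freedom per voxel.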

  • 43.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Software architecture and middleware for artificial cognitive systems2010In: International Conference on Cognitive Systems, 2010Conference paper (Other academic)
  • 44.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westelius, Carl-Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Hierarchical Phase Based Disparity Estimation1992Report (Other academic)
  • 45.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westelius, Carl-Johan
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Hierarchical Phase Based Disparity Estimation1992In: Proceedings of 2nd Singapore International Conference on Image Processing: 7-11 September 1992, Singapore / [ed] V. Srinivasan, Ong Sim Heng and Ang Yew Hock, Singapore, River Edge, NJ: World Scientific Publishing , 1992Conference paper (Refereed)
  • 46.
    Wiklund, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westin, Carl-Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Westelius, Carl-Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    AVS, Application Visualization System, Software Evaluation Report1993Report (Other academic)