Wiklund, Johan
Publications (10 of 46)
Felsberg, M., Larsson, F., Wiklund, J., Wadströmer, N. & Ahlberg, J. (2013). Online Learning of Correspondences between Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 118-129
2013 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 35, no 1, p. 118-129. Article in journal (Refereed), Published
Abstract [en]

We propose a novel method for iterative learning of point correspondences between image sequences. Points moving on surfaces in 3D space are projected into two images. Given a point in either view, the considered problem is to determine the corresponding location in the other view. The geometry and distortions of the projections are unknown as is the shape of the surface. Given several pairs of point-sets but no access to the 3D scene, correspondence mappings can be found by excessive global optimization or by the fundamental matrix if a perspective projective model is assumed. However, an iterative solution on sequences of point-set pairs with general imaging geometry is preferable. We derive such a method that optimizes the mapping based on Neyman's chi-square divergence between the densities representing the uncertainties of the estimated and the actual locations. The densities are represented as channel vectors computed with a basis function approach. The mapping between these vectors is updated with each new pair of images such that fast convergence and high accuracy are achieved. The resulting algorithm runs in real-time and is superior to state-of-the-art methods in terms of convergence and accuracy in a number of experiments.
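The channel vectors mentioned in the abstract come from the basis function (channel) representation used in this group's work. As an illustrative sketch only, and not the paper's implementation, a minimal cos²-channel encoder with integer-spaced channels of width 3 looks like this; the helper names are hypothetical:

```python
import math

def channel_encode(value, num_channels):
    """Encode a scalar into a cos^2 channel vector.

    Channels are centered at 0, 1, ..., num_channels-1, each with
    width 3: a channel responds when |value - center| < 1.5. For
    values well inside the channel range, the coefficients always
    sum to 1.5, which is what lets the vector act as a (scaled)
    density over possible locations.
    """
    vec = []
    for center in range(num_channels):
        d = value - center
        if abs(d) < 1.5:
            vec.append(math.cos(math.pi * d / 3.0) ** 2)
        else:
            vec.append(0.0)
    return vec

def channel_decode(vec):
    """Approximate decoding: coefficient-weighted mean over channel
    centers (a simple sketch; proper channel decoding is local and
    more careful than this)."""
    s = sum(vec)
    return sum(c * w for c, w in enumerate(vec)) / s if s else None
```

In the paper's setting, a linear mapping between such channel vectors from the two views is then updated online with each new image pair; the constant-sum property is what makes divergence-based optimization between the encoded densities well defined.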

Place, publisher, year, edition, pages
IEEE Computer Society, 2013
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-79260 (URN), 10.1109/TPAMI.2012.65 (DOI), 000311127700012 ()
Projects
DIPLECS, GARNICS, ELLIIT, ETT, CUAS, FOCUS, CADICS
Note

Funding agencies: EC (215078247947); ELLIIT; Strategic Area for ICT research; CADICS; Swedish Government; Swedish Research Council; CUAS; FOCUS; Swedish Foundation for Strategic Research

Available from: 2012-07-05 Created: 2012-07-05 Last updated: 2017-12-07
Krebs, A., Wiklund, J. & Felsberg, M. (2011). Optimization of Quadrature Filters Based on the Numerical Integration of Improper Integrals. In: Rudolf Mester and Michael Felsberg (Ed.), Pattern Recognition: 33rd annual DAGM conference, Frankfurt, Germany. Paper presented at 33rd DAGM Symposium, Frankfurt/Main, Germany, August 31 – September 2, 2011 (pp. 91-100). Springer Berlin/Heidelberg, 6835
2011 (English). In: Pattern Recognition: 33rd annual DAGM conference, Frankfurt, Germany / [ed] Rudolf Mester and Michael Felsberg, Springer Berlin/Heidelberg, 2011, Vol. 6835, p. 91-100. Conference paper, Published paper (Refereed)
Abstract [en]

Convolution kernels are a commonly used tool in computer vision. These kernels are often specified by an ideal frequency response and the actual filter coefficients are obtained by minimizing some weighted distance with respect to the ideal filter. State-of-the-art approaches usually replace the continuous frequency response by a discrete Fourier spectrum with a multitude of samples compared to the kernel size, depending on the smoothness of the ideal filter and the weight function. The number of samples in the Fourier domain grows exponentially with the dimensionality and becomes a bottleneck concerning memory requirements.

In this paper we propose a method that avoids the discretization of the frequency space and makes filter optimization feasible in higher dimensions than the standard approach. The result no longer depends on the choice of sampling grid and remains exact even if the weighting function is singular at the origin. The resulting improper integrals are efficiently computed using Gauss-Jacobi quadrature.
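The key idea, evaluating an integral with a singular weight by a quadrature rule that absorbs the singularity into the weight, can be illustrated with Gauss-Chebyshev quadrature, the α = β = -1/2 special case of Gauss-Jacobi whose nodes and weights are closed form. This is a sketch of the principle, not the paper's filter-optimization code:

```python
import math

def gauss_chebyshev(f, n):
    """Approximate the improper integral of f(x)/sqrt(1 - x^2) over
    [-1, 1] with n-point Gauss-Chebyshev quadrature.

    This is the alpha = beta = -1/2 special case of Gauss-Jacobi:
    the singular weight 1/sqrt(1 - x^2) is built into the rule, so
    f itself is evaluated only at interior nodes. The rule is exact
    for polynomials f of degree <= 2n - 1.
    """
    nodes = [math.cos((2 * i - 1) * math.pi / (2 * n))
             for i in range(1, n + 1)]
    weight = math.pi / n  # all Gauss-Chebyshev weights are equal
    return weight * sum(f(x) for x in nodes)
```

For example, `gauss_chebyshev(lambda x: x**2, 2)` already returns the exact value π/2 of the integral of x²/√(1-x²) over [-1, 1], even though the integrand is singular at the endpoints. General Gauss-Jacobi rules extend the same idea to weights (1-x)^α(1+x)^β.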

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2011
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 6835
Keywords
Localized kernels, filter optimization, Gauss-Jacobi quadrature
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-69604 (URN), 10.1007/978-3-642-23123-0_10 (DOI), 978-3-642-23122-3 (ISBN)
Conference
33rd DAGM Symposium, Frankfurt/Main, Germany, August 31 – September 2, 2011
Available from: 2011-07-05 Created: 2011-07-05 Last updated: 2017-04-11. Bibliographically approved
Wiklund, J., Nordberg, K. & Felsberg, M. (2010). Software architecture and middleware for artificial cognitive systems. In: International Conference on Cognitive Systems.
2010 (English). In: International Conference on Cognitive Systems, 2010. Conference paper, Published paper (Other academic)
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-58322 (URN)
Projects
DIPLECS
Available from: 2010-08-11 Created: 2010-08-11 Last updated: 2016-05-04
Johansson, B., Wiklund, J., Forssén, P.-E. & Granlund, G. (2009). Combining shadow detection and simulation for estimation of vehicle size and position. Pattern Recognition Letters, 30(8), 751-759
2009 (English). In: Pattern Recognition Letters, ISSN 0167-8655, Vol. 30, no 8, p. 751-759. Article in journal (Refereed), Published
Abstract [en]

This paper presents a method that combines shadow detection and a 3D box model including shadow simulation, for estimation of size and position of vehicles. We define a similarity measure between a simulated image of a 3D box, including the box shadow, and a captured image that is classified into background/foreground/shadow. The similarity measure is used in an optimization procedure to find the optimal box state. It is shown in a number of experiments and examples how the combination of shadow detection and simulation improves the estimation compared to just using detection or simulation, especially when the shadow detection or the simulation is inaccurate. We also describe a tracking system that utilizes the estimated 3D boxes, including highlight detection, a spatial window instead of a time-based window for predicting heading, and refined box size estimates by weighting accumulated estimates depending on view. Finally, we show example results.
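The idea of scoring a simulated box-plus-shadow rendering against a background/foreground/shadow classification can be sketched as a per-pixel agreement measure. This is a hypothetical simplification for illustration; the paper defines a more elaborate similarity measure:

```python
# Per-pixel labels: 0 = background, 1 = foreground (box), 2 = shadow.
def box_similarity(simulated, classified):
    """Fraction of pixels where the simulated 3D-box rendering
    (box interior -> foreground, cast shadow -> shadow, the rest ->
    background) agrees with the classified camera image. Both inputs
    are flat lists of per-pixel labels of equal length."""
    assert len(simulated) == len(classified)
    matches = sum(1 for s, c in zip(simulated, classified) if s == c)
    return matches / len(simulated)
```

An optimizer would then search over box position and size for the state that maximizes this score, which is how a similarity measure of this kind yields the "optimal box state" described above.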

Keywords
Vehicle tracking, 3D box model, Object size estimation, Shadow detection, Shadow simulation
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-19420 (URN), 10.1016/j.patrec.2009.03.005 (DOI)
Note
Original Publication: Björn Johansson, Johan Wiklund, Per-Erik Forssén and Gösta Granlund, Combining shadow detection and simulation for estimation of vehicle size and position, 2009, Pattern Recognition Letters, (30), 8, 751-759. http://dx.doi.org/10.1016/j.patrec.2009.03.005 Copyright: Elsevier Science B.V., Amsterdam. http://www.elsevier.com/
Available from: 2009-06-29 Created: 2009-06-22 Last updated: 2015-12-10. Bibliographically approved
Felsberg, M., Wiklund, J. & Granlund, G. (2009). Exploratory learning structures in artificial cognitive systems. Image and Vision Computing, 27(11), 1671-1687
2009 (English). In: Image and Vision Computing, ISSN 0262-8856, Vol. 27, no 11, p. 1671-1687. Article in journal (Refereed), Published
Abstract [en]

The major goal of the COSPAL project is to develop an artificial cognitive system architecture, with the ability to autonomously extend its capabilities. Exploratory learning is one strategy that allows an extension of competences as provided by the environment of the system. Whereas classical learning methods aim at best for a parametric generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class, and to apply generalization on a conceptual level, resulting in new models. Incremental or online learning is a crucial requirement to perform exploratory learning. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on the organization of cognitive systems for efficient operation. Learning is used over the entire system. It is organized in the form of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop executes the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail. We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user (teacher) and system is a major difference from classical robotics systems, where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems. We furthermore address the issue of bootstrapping the system, and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented.
The described system is, however, work in progress and no final results are available yet. The preliminary results achieved so far clearly point towards a successful proof of the architecture concept.
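The four nested loops described in the abstract can be rendered as a control-flow skeleton. Everything below — function and method names, the goal test, the reinforcement hook — is a hypothetical sketch of the loop organization only, not the COSPAL implementation:

```python
def run_four_loop_system(scenarios, symbolic_modes, subsymbolic_modes,
                         max_cycles=100):
    """Skeleton of the four nested learning loops: (1) the outermost
    user-reinforcement-feedback loop over scenarios, (2) switching
    between solution modes at the symbolic level, (3) switching at
    the sub-symbolic level, and (4) the innermost perception-action
    cycles that execute acquired competences."""
    for scenario in scenarios:                    # loop 1: user feedback
        for sym_mode in symbolic_modes:           # loop 2: symbolic modes
            for sub_mode in subsymbolic_modes:    # loop 3: sub-symbolic modes
                for _ in range(max_cycles):       # loop 4: perception-action
                    percept = scenario.perceive()
                    action = sub_mode.act(percept)
                    if sym_mode.goal_reached(percept, action):
                        break
        scenario.reinforce()  # user reinforcement closes the outer loop
```

Any objects providing `perceive`/`reinforce`, `act`, and `goal_reached` can be plugged in, which mirrors the paper's point that learning is organized over the entire system rather than inside one module.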

National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-21198 (URN), 10.1016/j.imavis.2009.02.012 (DOI)
Note
Original Publication: Michael Felsberg, Johan Wiklund and Gösta Granlund, Exploratory learning structures in artificial cognitive systems, 2009, Image and Vision Computing, (27), 11, 1671-1687. http://dx.doi.org/10.1016/j.imavis.2009.02.012 Copyright: Elsevier Science B.V., Amsterdam. http://www.elsevier.com/
Available from: 2009-09-30 Created: 2009-09-30 Last updated: 2016-05-04
Wiklund, J., Nicolas, V., Alface, P. R., Andersson, M. & Knutsson, H. (2009). T-flash: Tensor Visualization in Medical Studio. In: Tensors in Image Processing and Computer Vision: . Paper presented at Tensor in Image Processing and Computer Vision (pp. 455-466). Springer London
2009 (English). In: Tensors in Image Processing and Computer Vision, Springer London, 2009, p. 455-466. Conference paper, Published paper (Refereed)
Abstract [en]

Tensor-valued data are frequently used in medical imaging. For a 3-dimensional second-order tensor, such data imply at least six degrees of freedom for each voxel. The operator's ability to perceive this information is of utmost importance and in many cases a limiting factor for the interpretation of the data. In this paper we propose a decomposition of such tensor fields using the T-flash tensor glyph, which intuitively conveys important tensor features to a human observer. A Matlab implementation for visualization of single tensors is described in detail, and a VTK/ITK implementation for visualization of tensor fields has been developed as a Medical Studio component.
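A common way to expose the six degrees of freedom of a symmetric 3x3 tensor for glyph rendering is an eigen-decomposition followed by shape measures such as Westin's linear/planar/spherical indices. The sketch below (pure Python, trigonometric closed-form eigenvalues) illustrates that general decomposition step; it is not the specific T-flash glyph geometry, and the λ₁-normalization of the measures is one of several conventions:

```python
import math

def symmetric_eigenvalues(a):
    """Eigenvalues of a symmetric 3x3 matrix (nested lists), sorted
    descending, using the trigonometric closed form for the
    characteristic cubic."""
    q = (a[0][0] + a[1][1] + a[2][2]) / 3.0
    p2 = sum((a[i][i] - q) ** 2 for i in range(3)) + 2.0 * (
        a[0][1] ** 2 + a[0][2] ** 2 + a[1][2] ** 2)
    if p2 < 1e-30:                       # isotropic tensor: q, q, q
        return [q, q, q]
    p = math.sqrt(p2 / 6.0)
    b = [[(a[i][j] - (q if i == j else 0.0)) / p for j in range(3)]
         for i in range(3)]
    detb = (b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
            - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
            + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]))
    r = max(-1.0, min(1.0, detb / 2.0))  # clamp against rounding
    phi = math.acos(r) / 3.0
    e1 = q + 2.0 * p * math.cos(phi)
    e3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
    return [e1, 3.0 * q - e1 - e3, e3]

def westin_measures(a):
    """Linear, planar and spherical shape measures of a symmetric
    3x3 tensor, normalized by the largest eigenvalue."""
    l1, l2, l3 = symmetric_eigenvalues(a)
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

The three measures sum to 1 and tell a glyph renderer whether to draw a stick-like, disc-like, or sphere-like shape for each voxel.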

Place, publisher, year, edition, pages
Springer London, 2009
Series
Advances in Pattern Recognition, ISSN 1617-7916
National Category
Biomedical Laboratory Science/Technology
Identifiers
urn:nbn:se:liu:diva-60132 (URN), 10.1007/978-1-84882-299-3_21 (DOI), 978-1-84882-298-6 (ISBN), 978-1-84882-299-3 (ISBN)
Conference
Tensors in Image Processing and Computer Vision
Available from: 2010-10-08 Created: 2010-10-06 Last updated: 2015-06-10. Bibliographically approved
Felsberg, M., Wiklund, J., Jonsson, E., Moe, A. & Granlund, G. (2007). Exploratory Learning Structure in Artificial Cognitive Systems. In: International Cognitive Vision Workshop. Paper presented at The 5th International Conference on Computer Vision Systems, 2007, 21-24 March, Bielefeld University, Germany. Bielefeld: eCollections
2007 (English). In: International Cognitive Vision Workshop, Bielefeld: eCollections, 2007. Conference paper, Published paper (Other academic)
Abstract [en]

One major goal of the COSPAL project is to develop an artificial cognitive system architecture with the capability of exploratory learning. Exploratory learning is a strategy that allows generalization to be applied on a conceptual level, resulting in an extension of competences. Whereas classical learning methods aim at the best possible generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class. Incremental or online learning is an inherent requirement to perform exploratory learning.

Exploratory learning requires new theoretic tools and new algorithms. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on its algorithmic aspect. Learning is performed in terms of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop executes the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail.

We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user ('teacher') and system is a major difference from most existing systems, where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems.

We furthermore address the issue of bootstrapping the system, and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is, however, work in progress and no final results are available yet. The preliminary results achieved so far clearly point towards a successful proof of the architecture concept.

Place, publisher, year, edition, pages
Bielefeld: eCollections, 2007
Keywords
artificial cognitive system, perception action learning, exploratory learning, cognitive bootstrapping
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-39511 (URN), 10.2390/biecoll-icvs2007-173 (DOI), 49069 (Local ID), 49069 (Archive number), 49069 (OAI)
Conference
The 5th International Conference on Computer Vision Systems, 2007, 21-24 March, Bielefeld University, Germany
Projects
COSPAL
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04. Bibliographically approved
Merino, L., Caballero, F., Ferruz, J., Wiklund, J., Forssen, P.-E. & Ollero, A. (2007). Multi-UAV Cooperative Perception Techniques. In: Aníbal Ollero and Ivan Maza (Ed.), Multiple Heterogeneous Unmanned Aerial Vehicles: (pp. 67-110). Berlin / Heidelberg: Springer, 37
2007 (English). In: Multiple Heterogeneous Unmanned Aerial Vehicles / [ed] Aníbal Ollero and Ivan Maza, Berlin/Heidelberg: Springer, 2007, Vol. 37, p. 67-110. Chapter in book (Other (popular science, discussion, etc.))
Abstract [en]

This Chapter is devoted to the cooperation of multiple UAVs for environment perception. First, probabilistic methods for multi-UAV cooperative perception are analyzed. Then, the problem of multi-UAV detection, localization and tracking is described, and local image processing techniques are presented. Finally, the Chapter presents two approaches based on the Information Filter and on evidence grid representations.
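The appeal of the Information Filter for multi-UAV fusion is that independent observations combine additively in information (inverse-covariance) form. The scalar sketch below illustrates that additivity only; it is not the chapter's implementation, and the function name is hypothetical:

```python
def fuse_information(estimates):
    """Fuse independent Gaussian estimates (mean, variance) of the
    same quantity, e.g. a target position observed by several UAVs.

    In information form each estimate contributes its precision
    1/var and its information-weighted mean mean/var; fusion is a
    plain sum of both, which is why the Information Filter
    decentralizes so naturally across vehicles."""
    total_info = sum(1.0 / var for _, var in estimates)
    total_state = sum(mean / var for mean, var in estimates)
    return total_state / total_info, 1.0 / total_info
```

Two UAVs reporting the same target at 10.0 and 12.0, each with variance 4, fuse to mean 11.0 with variance 2: the fused estimate is always at least as confident as the best individual one.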

Place, publisher, year, edition, pages
Berlin / Heidelberg: Springer, 2007
Series
Springer Tracts in Advanced Robotics, ISSN 1610-7438, E-ISSN 1610-742X ; 37
Keywords
Aerial Robotics, Cooperative Perception, Cooperative Robotics, Multi-UAV Systems
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-51495 (URN), 10.1007/978-3-540-73958-6_4 (DOI), 978-3-540-73957-9 (ISBN)
Projects
COMETS
Available from: 2009-11-04 Created: 2009-11-04 Last updated: 2018-02-19
Källhammer, J.-E., Eriksson, D., Granlund, G., Felsberg, M., Moe, A., Johansson, B., . . . Forssén, P.-E. (2007). Near Zone Pedestrian Detection using a Low-Resolution FIR Sensor. In: Intelligent Vehicles Symposium, 2007 IEEE: . Istanbul, Turkey: IEEE
2007 (English). In: Intelligent Vehicles Symposium, 2007 IEEE, Istanbul, Turkey: IEEE, 2007, p. 339-345. Conference paper, Published paper (Refereed)
Abstract [en]

This paper explores the possibility to use a single low-resolution FIR camera for detection of pedestrians in the near zone in front of a vehicle. A low resolution sensor reduces the cost of the system, as well as the amount of data that needs to be processed in each frame.

We present a system that makes use of hot-spots and image positions of a near constant bearing to detect potential pedestrians. These detections provide seeds for an energy minimization algorithm that fits a pedestrian model to the detection. Since false alarms are hard to tolerate, the pedestrian model is then tracked, and the distance-to-collision (DTC) is measured by integrating size change measurements at sub-pixel accuracy, and the car velocity. The system should only engage braking for detections on a collision course, with a reliably measured DTC.

Preliminary experiments on a number of recorded near collision sequences indicate that our method may be useful for ranges up to about 10m using an 80x60 sensor, and somewhat more using a 160x120 sensor. We also analyze the robustness of the evaluated algorithm with respect to dead pixels, a potential problem for low-resolution sensors.
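The distance-to-collision estimate from integrated size change can be illustrated with the classical scale-change relation: if the object's image size grows by factor s over Δt, the time to collision is roughly Δt/(s-1), and distance follows by multiplying with ego velocity. This is a textbook single-step simplification of the sub-pixel integration scheme the paper describes, with hypothetical parameter names:

```python
def distance_to_collision(size_prev, size_curr, dt, car_speed):
    """Estimate distance to collision from image-size growth.

    For an approaching object, image size scales as 1/distance, so
    with scale factor s = size_curr / size_prev over dt seconds the
    time-to-collision is approximately dt / (s - 1), and
    DTC = car_speed * TTC (assuming a stationary pedestrian, so the
    closing speed equals the car speed)."""
    s = size_curr / size_prev
    if s <= 1.0:
        return None  # not approaching; no collision predicted
    ttc = dt / (s - 1.0)
    return car_speed * ttc
```

For instance, a pedestrian whose image grows from 60 to 66 pixels over 0.1 s while the car drives 10 m/s yields a DTC of 10 m. Integrating many such sub-pixel size-change measurements, as the paper does, stabilizes the otherwise noisy ratio s.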

Place, publisher, year, edition, pages
Istanbul, Turkey: IEEE, 2007. p. 339-345
Series
Intelligent Vehicles Symposium, ISSN 1931-0587
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-39510 (URN), 10.1109/IVS.2007.4290137 (DOI), 49068 (Local ID), 1-4244-1067-3 (ISBN), 49068 (Archive number), 49068 (OAI)
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04
Merino, L., Caballero, F., Forssén, P.-E., Wiklund, J., Ferruz, J., Martinez-de Dios, J. R., . . . Ollero, A. (2007). Single and Multi-UAV Relative Position Estimation Based on Natural Landmarks. In: Kimon P. Valavanis (Ed.), Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy (pp. 267-307). Netherlands: Springer
2007 (English). In: Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy / [ed] Kimon P. Valavanis, Netherlands: Springer, 2007, p. 267-307. Chapter in book (Other (popular science, discussion, etc.))
Abstract [en]

This Chapter presents a vision-based method for unmanned aerial vehicle (UAV) motion estimation that uses as input an image motion field obtained from matches of point-like features. The Chapter enhances vision-based techniques developed for single-UAV localization and demonstrates how they can be modified to deal with the problem of multi-UAV relative position estimation. The proposed approach is built upon the assumption that if different UAVs identify, using their cameras, common objects in a scene, the relative pose displacement between the UAVs can be computed from these correspondences under certain assumptions. However, although point-like features are suitable for local UAV motion estimation, finding matches between images collected using different cameras is a difficult task that may be overcome using blob features. Results justify the proposed approach.
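At its simplest, once common blob features are matched between two views, a relative displacement can be estimated robustly from the coordinate differences. The toy sketch below uses a per-axis median over matches, a drastic simplification of the correspondence-based pose estimation the chapter describes, with a hypothetical function name:

```python
def relative_shift(points_a, points_b):
    """Median displacement between matched feature points from two
    views, robust to a minority of bad matches. points_a[i] and
    points_b[i] are corresponding (x, y) tuples."""
    def median(vals):
        vals = sorted(vals)
        n = len(vals)
        if n % 2:
            return vals[n // 2]
        return 0.5 * (vals[n // 2 - 1] + vals[n // 2])

    dx = median([b[0] - a[0] for a, b in zip(points_a, points_b)])
    dy = median([b[1] - a[1] for a, b in zip(points_a, points_b)])
    return dx, dy
```

The median keeps a single wrong blob match from corrupting the estimate, which is exactly the failure mode the chapter flags when matching point-like features across different cameras.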

Place, publisher, year, edition, pages
Netherlands: Springer, 2007
Series
Microprocessor-Based and Intelligent Systems Engineering ; 33
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-51244 (URN), 10.1007/978-1-4020-6114-1_9 (DOI), 978-1-4020-6113-4 (ISBN), 978-1-4020-6114-1 (ISBN)
Projects
COMETS: “Real-time coordination and control of multiple heterogeneous unmanned aerial vehicles”, IST-2001-34304
Available from: 2009-10-23 Created: 2009-10-23 Last updated: 2015-12-10