Granlund, Gösta
Publications (10 of 174)
Johansson, B., Wiklund, J., Forssén, P.-E. & Granlund, G. (2009). Combining shadow detection and simulation for estimation of vehicle size and position. PATTERN RECOGNITION LETTERS, 30(8), 751-759
Combining shadow detection and simulation for estimation of vehicle size and position
2009 (English). In: PATTERN RECOGNITION LETTERS, ISSN 0167-8655, Vol. 30, no 8, p. 751-759. Article in journal (Refereed). Published.
Abstract [en]

This paper presents a method that combines shadow detection and a 3D box model including shadow simulation, for estimation of size and position of vehicles. We define a similarity measure between a simulated image of a 3D box, including the box shadow, and a captured image that is classified into background/foreground/shadow. The similarity measure is used in an optimization procedure to find the optimal box state. It is shown in a number of experiments and examples how the combination of shadow detection and simulation improves the estimation compared to just using detection or simulation, especially when the shadow detection or the simulation is inaccurate. We also describe a tracking system that utilizes the estimated 3D boxes, including highlight detection, a spatial window instead of a time-based window for predicting heading, and refined box size estimates obtained by weighting accumulated estimates depending on view. Finally, we show example results.
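A minimal sketch of this kind of similarity measure is given below: it scores how well a rendered box-plus-shadow hypothesis explains a frame that has been classified into background, foreground and shadow. The class labels, the IoU-style overlap score and the equal weighting are illustrative assumptions, not the measure defined in the paper.

```python
import numpy as np

# Pixel classes of the segmented camera image (assumed label encoding).
BACKGROUND, FOREGROUND, SHADOW = 0, 1, 2

def box_similarity(segmentation, box_mask, shadow_mask):
    """Score how well a simulated 3D-box rendering explains a segmented frame.

    segmentation : (H, W) int array of BACKGROUND/FOREGROUND/SHADOW labels
    box_mask     : (H, W) bool array, pixels covered by the projected box
    shadow_mask  : (H, W) bool array, pixels covered by the simulated box shadow

    Returns a value in [0, 1]; higher means better agreement between the
    detected and the simulated foreground/shadow regions.
    """
    detected_fg = segmentation == FOREGROUND
    detected_sh = segmentation == SHADOW

    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    # Combine the agreement on the vehicle region and on the shadow region.
    return 0.5 * iou(detected_fg, box_mask) + 0.5 * iou(detected_sh, shadow_mask)

# The box state (position, length, width, height, orientation) would then be
# found by maximizing this score with a generic optimizer, given a camera and
# sun model that renders box_mask and shadow_mask for a candidate state.
```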

Keywords
Vehicle tracking, 3D box model, Object size estimation, Shadow detection, Shadow simulation
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-19420 (URN), 10.1016/j.patrec.2009.03.005 (DOI)
Note
Original Publication: Björn Johansson, Johan Wiklund, Per-Erik Forssén and Gösta Granlund, Combining shadow detection and simulation for estimation of vehicle size and position, 2009, PATTERN RECOGNITION LETTERS, (30), 8, 751-759. http://dx.doi.org/10.1016/j.patrec.2009.03.005 Copyright: Elsevier Science B.V., Amsterdam. http://www.elsevier.com/ Available from: 2009-06-29 Created: 2009-06-22 Last updated: 2015-12-10. Bibliographically approved.
Felsberg, M., Wiklund, J. & Granlund, G. (2009). Exploratory learning structures in artificial cognitive systems. Image and Vision Computing, 27(11), 1671-1687
Exploratory learning structures in artificial cognitive systems
2009 (English). In: Image and Vision Computing, ISSN 0262-8856, Vol. 27, no 11, p. 1671-1687. Article in journal (Refereed). Published.
Abstract [en]

The major goal of the COSPAL project is to develop an artificial cognitive system architecture with the ability to autonomously extend its capabilities. Exploratory learning is one strategy that allows an extension of competences as provided by the environment of the system. Whereas classical learning methods aim, at best, at a parametric generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class, and at applying generalization on a conceptual level, resulting in new models. Incremental or online learning is a crucial requirement for performing exploratory learning. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on the organization of cognitive systems for efficient operation. Learning is used throughout the entire system. It is organized in the form of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop performs the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail. We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user (teacher) and system is a major difference from classical robotics systems, where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems. We furthermore address the issue of bootstrapping the system and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is, however, work in progress and no final results are available yet. The preliminary results that we have achieved so far clearly point towards a successful proof of the architecture concept.
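The four nested loops can be pictured as the control-flow skeleton below. The objects and method names are placeholders chosen for illustration; the sketch only mirrors the loop organization described in the abstract, not the content of the COSPAL system itself.

```python
def run_system(user, symbolic_modes, subsymbolic_policies, max_cycles=100):
    """Control-flow skeleton of the four nested learning loops (placeholders)."""
    while True:                                     # 1) user-reinforcement-feedback loop
        for mode in symbolic_modes:                 # 2) switch between symbolic solution modes
            solved = False
            for policy in subsymbolic_policies:     # 3) switch between sub-symbolic solution modes
                for _ in range(max_cycles):         # 4) perception-action cycles
                    percept = policy.perceive()
                    policy.act(percept)
                    if mode.goal_reached():
                        solved = True
                        break
                if solved:
                    break
        reward = user.feedback()                    # reinforcement from the teacher
        if reward is None:                          # teacher ends the session
            return
        for policy in subsymbolic_policies:
            policy.update(reward)                   # learning distributed over the system
```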

National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-21198 (URN), 10.1016/j.imavis.2009.02.012 (DOI)
Note
Original Publication: Michael Felsberg, Johan Wiklund and Gösta Granlund, Exploratory learning structures in artificial cognitive systems, 2009, Image and Vision Computing, (27), 11, 1671-1687. http://dx.doi.org/10.1016/j.imavis.2009.02.012 Copyright: Elsevier Science B.V., Amsterdam. http://www.elsevier.com/ Available from: 2009-09-30 Created: 2009-09-30 Last updated: 2016-05-04
Granlund, G. (2009). Special issue on Perception, Action and Learning. Image and Vision Computing, 27(11), 1639-1640
Special issue on Perception, Action and Learning
2009 (English). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 27, no 11, p. 1639-1640. Article in journal, Editorial material (Refereed). Published.
Place, publisher, year, edition, pages
Elsevier, 2009
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-21196 (URN), 10.1016/j.imavis.2009.06.002 (DOI), 000270127800001 (), 2-s2.0-68649097022 (Scopus ID)
Available from: 2009-09-30 Created: 2009-09-30 Last updated: 2017-12-13. Bibliographically approved.
Felsberg, M. & Granlund, G. (2008). Fusing Dynamic Percepts and Symbols in Cognitive Systems. In: International Conference on Cognitive Systems.
Fusing Dynamic Percepts and Symbols in Cognitive Systems
2008 (English). In: International Conference on Cognitive Systems, 2008. Conference paper, Published paper (Refereed).
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-44879 (URN), 78100 (Local ID), 78100 (Archive number), 78100 (OAI)
Projects
DIPLECS
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04
Felsberg, M., Wiklund, J., Jonsson, E., Moe, A. & Granlund, G. (2007). Exploratory Learning Structure in Artificial Cognitive Systems. In: International Cognitive Vision Workshop. Paper presented at The 5th International Conference on Computer Vision Systems, 2007, 21-24 March, Bielefeld University, Germany. Bielefeld: eCollections
Exploratory Learning Structure in Artificial Cognitive Systems
2007 (English). In: International Cognitive Vision Workshop, Bielefeld: eCollections, 2007. Conference paper, Published paper (Other academic).
Abstract [en]

One major goal of the COSPAL project is to develop an artificial cognitive system architecture with the capability of exploratory learning. Exploratory learning is a strategy that allows generalization to be applied on a conceptual level, resulting in an extension of competences. Whereas classical learning methods aim at the best possible generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class. Incremental or online learning is an inherent requirement for performing exploratory learning.

Exploratory learning requires new theoretic tools and new algorithms. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on its algorithmic aspect. Learning is performed in terms of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop performs the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail.

We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user ('teacher') and system is a major difference from most existing systems, where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems.

We furthermore address the issue of bootstrapping the system and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is, however, work in progress and no final results are available yet. The preliminary results that we have achieved so far clearly point towards a successful proof of the architecture concept.

Place, publisher, year, edition, pages
Bielefeld: eCollections, 2007
Keywords
artificial cognitive system, perception action learning, exploratory learning, cognitive bootstrapping
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-39511 (URN), 10.2390/biecoll-icvs2007-173 (DOI), 49069 (Local ID), 49069 (Archive number), 49069 (OAI)
Conference
The 5th International Conference on Computer Vision Systems, 2007, 21-24 March, Bielefeld University, Germany
Projects
COSPAL
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04. Bibliographically approved.
Duits, R., Felsberg, M., Granlund, G. & ter Haar Romeny, B. M. (2007). Image Analysis and Reconstruction using a Wavelet Transform Constructed from a Reducible Representation of the Euclidean Motion Group. International Journal of Computer Vision, 72(1), 79-102
Image Analysis and Reconstruction using a Wavelet Transform Constructed from a Reducible Representation of the Euclidean Motion Group
2007 (English). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 72, no 1, p. 79-102. Article in journal (Refereed). Published.
Abstract [en]

Inspired by the early visual system of many mammals, we consider the construction of, and reconstruction from, an orientation score U_f : R^2 × S^1 → C as a local orientation representation of an image f : R^2 → R. The mapping f ↦ U_f is a wavelet transform W_ψ corresponding to a reducible representation of the Euclidean motion group onto L_2(R^2) and an oriented wavelet ψ ∈ L_2(R^2). This wavelet transform is a special case of a recently developed generalization of the standard wavelet theory and has the practical advantage over the usual wavelet approaches in image analysis (constructed from irreducible representations of the similitude group) that it allows a stable reconstruction from one (single-scale) orientation score. Since our wavelet transform is a unitary mapping with a stable inverse, we can directly relate operations on orientation scores to operations on images in a robust manner.

Furthermore, by geometrical examination of the Euclidean motion group G = R^2 ⋊ T, which is the domain of our orientation scores, we deduce that an operator Φ on orientation scores must be left invariant to ensure that the corresponding operator W_ψ^{-1} Φ W_ψ on images is Euclidean invariant. As an example we consider all linear second-order left-invariant evolutions on orientation scores corresponding to stochastic processes on G. As an application we detect elongated structures in (medical) images and automatically close the gaps between them.

Finally, we consider robust orientation estimates by means of channel representations, where we combine robust orientation estimation and learning of wavelets, resulting in an auto-associative processing of orientation features. Here, linear averaging of the channel representation is equivalent to robust orientation estimation, and an adaptation of the wavelet to the statistics of the considered image class leads to an auto-associative behavior of the system.
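As a discrete illustration of the orientation-score construction, the sketch below correlates an image with rotated copies of an oriented wavelet and reconstructs by filtering each orientation slice back and averaging over orientations. The wavelet choice and the normalization are left open; the stability conditions for exact reconstruction are the subject of the paper and are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve, correlate, rotate

def orientation_score(image, wavelet, n_orientations=16):
    """Sampled orientation score U_f(x, theta_i): correlate the image with
    rotated copies psi_theta of an oriented wavelet psi."""
    angles = np.arange(n_orientations) * 180.0 / n_orientations
    score = np.empty((n_orientations,) + image.shape)
    for i, angle in enumerate(angles):
        psi_theta = rotate(wavelet, angle, reshape=False)   # rotated wavelet
        score[i] = correlate(image, psi_theta)              # U_f(., theta_i)
    return score, angles

def reconstruct(score, wavelet, angles):
    """Adjoint-style reconstruction: convolve each orientation slice with its
    rotated wavelet and average over orientations (approximate inverse)."""
    image = np.zeros(score.shape[1:])
    for slice_theta, angle in zip(score, angles):
        psi_theta = rotate(wavelet, angle, reshape=False)
        image += convolve(slice_theta, psi_theta)
    return image / len(angles)
```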

National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-41574 (URN), 10.1007/s11263-006-8894-5 (DOI), 57835 (Local ID), 57835 (Archive number), 57835 (OAI)
Note

The original publication is available at www.springerlink.com: Remco Duits, Michael Felsberg, Gösta Granlund and Bart M. ter Haar Romeny, Image Analysis and Reconstruction using a Wavelet Transform Constructed from a Reducible Representation of the Euclidean Motion Group, 2007, International Journal of Computer Vision, (72), 1, 79-102. http://dx.doi.org/10.1007/s11263-006-8894-5 Copyright: Springer Science+Business Media. http://www.springerlink.com/

Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2017-12-13
Källhammer, J.-E., Eriksson, D., Granlund, G., Felsberg, M., Moe, A., Johansson, B., . . . Forssén, P.-E. (2007). Near Zone Pedestrian Detection using a Low-Resolution FIR Sensor. In: Intelligent Vehicles Symposium, 2007 IEEE. Istanbul, Turkey: IEEE
Near Zone Pedestrian Detection using a Low-Resolution FIR Sensor
2007 (English). In: Intelligent Vehicles Symposium, 2007 IEEE, Istanbul, Turkey: IEEE, 2007, p. 339-345. Conference paper, Published paper (Refereed).
Abstract [en]

This paper explores the possibility to use a single low-resolution FIR camera for detection of pedestrians in the near zone in front of a vehicle. A low-resolution sensor reduces the cost of the system, as well as the amount of data that needs to be processed in each frame.

We present a system that makes use of hot-spots and image positions of near constant bearing to detect potential pedestrians. These detections provide seeds for an energy minimization algorithm that fits a pedestrian model to the detection. Since false alarms are hard to tolerate, the pedestrian model is then tracked, and the distance-to-collision (DTC) is measured by integrating size-change measurements at sub-pixel accuracy together with the car velocity. The system should only engage braking for detections on a collision course, with a reliably measured DTC.

Preliminary experiments on a number of recorded near-collision sequences indicate that our method may be useful for ranges up to about 10 m using an 80x60 sensor, and somewhat more using a 160x120 sensor. We also analyze the robustness of the evaluated algorithm with respect to dead pixels, a potential problem for low-resolution sensors.
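The relation between image-size growth and distance-to-collision can be illustrated with the pinhole-camera approximation below: for an object straight ahead, the relative growth rate of its image size equals the relative closing rate of its distance, so the time to collision is roughly size over size-change rate, and multiplying by the ego speed gives a distance. This assumes a roughly stationary pedestrian and only shows the geometry, not the sub-pixel integration scheme used in the paper; all names are illustrative.

```python
def distance_to_collision(prev_size, curr_size, dt, car_speed):
    """Rough DTC estimate from the growth of the object's image size.

    prev_size, curr_size : object size in the image (pixels, sub-pixel allowed)
    dt                   : time between the two measurements (s)
    car_speed            : ego vehicle speed (m/s)

    Pinhole relation: size ~ 1/distance, hence (ds/dt)/s = -(dD/dt)/D and
    time-to-collision ~ s / (ds/dt). With a stationary pedestrian straight
    ahead, DTC ~ TTC * car_speed.
    """
    growth_rate = (curr_size - prev_size) / (dt * curr_size)   # (ds/dt)/s
    if growth_rate <= 0:
        return float("inf")          # not closing in on the object
    ttc = 1.0 / growth_rate          # seconds to collision
    return ttc * car_speed           # metres to collision

# Example: size grows from 20.0 to 20.5 px over 0.1 s at 10 m/s ego speed
# -> growth rate ~0.24 1/s, TTC ~4.1 s, DTC ~41 m.
```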

Place, publisher, year, edition, pages
Istanbul, Turkey: IEEE, 2007. p. 339-345
Series
Intelligent Vehicles Symposium, ISSN 1931-0587
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-39510 (URN), 10.1109/IVS.2007.4290137 (DOI), 49068 (Local ID), 1-4244-1067-3 (ISBN), 49068 (Archive number), 49068 (OAI)
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04
Granlund, G. (2006). A Cognitive Vision Architecture Integrating Neural Networks with Symbolic Processing. Künstliche Intelligenz (2), 18-24
A Cognitive Vision Architecture Integrating Neural Networks with Symbolic Processing
2006 (English). In: Künstliche Intelligenz, ISSN 0933-1875, no 2, p. 18-24. Article in journal (Other academic). Published.
Abstract [en]

A fundamental property of cognitive vision systems is that they shall be extendable, which requires that they can both acquire and store information autonomously. The paper discusses the organization of systems to allow this, and proposes an architecture for cognitive vision systems. The architecture consists of two parts. The first part learns, step by step, a mapping from percepts directly onto actions or states. In the learning phase, action precedes perception, as the action space is much less complex. This requires a semantic information representation, allowing computation and storage with respect to similarity. The second part uses invariant or symbolic representations, which are derived mainly from system and action states. Through active exploration, a system builds up concept spaces or models. This allows the system to subsequently acquire information using passive observation or language. The structure has been used to learn object properties, and constitutes the basic concept of the European project COSPAL within the IST programme.
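The "semantic information representation, allowing computation and storage with respect to similarity" is, in this line of work, commonly realized with channel representations (compare the channel representations mentioned in the IJCV 2007 abstract above). The sketch below shows one common cos^2 channel encoding of a scalar; the kernel, spacing and names are illustrative choices, not code from the paper.

```python
import numpy as np

def channel_encode(value, centers, width=1.0):
    """Encode a scalar into a vector of overlapping cos^2 channel responses.

    Each channel k responds with cos^2(pi/3 * (value - c_k)/width) within
    1.5*width of its center c_k and with 0 outside, so similar values produce
    similar, overlapping channel vectors: similarity becomes vector overlap.
    """
    d = (value - np.asarray(centers, dtype=float)) / width
    response = np.cos(np.pi / 3.0 * d) ** 2
    response[np.abs(d) >= 1.5] = 0.0
    return response

centers = np.arange(0.0, 10.0, 1.0)
print(channel_encode(3.3, centers))   # sparse, localized vector peaking near 3.3
```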

National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-37181 (URN), 33874 (Local ID), 33874 (Archive number), 33874 (OAI)
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2011-01-11
Forssén, P.-E., Johansson, B. & Granlund, G. (2006). Channel Associative Networks for Multiple Valued Mappings. In: 2nd International Cognitive Vision Workshop (pp. 4-11).
Channel Associative Networks for Multiple Valued Mappings
2006 (English). In: 2nd International Cognitive Vision Workshop, 2006, p. 4-11. Conference paper, Published paper (Other academic).
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-36318 (URN), 30975 (Local ID), 30975 (Archive number), 30975 (OAI)
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2015-12-10
Felsberg, M., Wiklund, J., Jonsson, E., Moe, A. & Granlund, G. (2006). Exploratory Learning Structure in Artificial Cognitive Systems. Linköping: Linköping University Electronic Press
Exploratory Learning Structure in Artificial Cognitive Systems
2006 (English). Report (Other academic).
Abstract [en]

One major goal of the COSPAL project is to develop an artificial cognitive system architecture with the capability of exploratory learning. Exploratory learning is a strategy that allows generalization to be applied on a conceptual level, resulting in an extension of competences. Whereas classical learning methods aim at the best possible generalization, i.e., concluding from a number of samples of a problem class to the problem class itself, exploration aims at applying acquired competences to a new problem class. Incremental or online learning is an inherent requirement for performing exploratory learning.

Exploratory learning requires new theoretic tools and new algorithms. In the COSPAL project, we mainly investigate reinforcement-type learning methods for exploratory learning, and in this paper we focus on its algorithmic aspect. Learning is performed in terms of four nested loops, where the outermost loop reflects the user-reinforcement-feedback loop, the intermediate two loops switch between different solution modes at the symbolic and sub-symbolic levels, respectively, and the innermost loop performs the acquired competences in terms of perception-action cycles. We present a system diagram which explains this process in more detail.

We discuss the learning strategy in terms of learning scenarios provided by the user. This interaction between user ('teacher') and system is a major difference from most existing systems, where the system designer places his world model into the system. We believe that this is the key to extendable robust system behavior and successful interaction of humans and artificial cognitive systems.

We furthermore address the issue of bootstrapping the system and, in particular, the visual recognition module. We give some more in-depth details about our recognition method and how feedback from higher levels is implemented. The described system is, however, work in progress and no final results are available yet. The preliminary results that we have achieved so far clearly point towards a successful proof of the architecture concept.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2006. p. 5
Series
LiTH-ISY-R, ISSN 1400-3902 ; 2738
Keywords
COSPAL project
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-54326 (URN), LiTH-ISY-R-2738 (ISRN)
Available from: 2010-03-09 Created: 2010-03-09 Last updated: 2016-05-04. Bibliographically approved.