liu.se: Search for publications in DiVA
251 - 300 of 595
  • 251.
    Gustafsson, Gabriella
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multiphase Motion Estimation in a Two Phase Flow, 2005. Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    To improve the control of a steel casting process, ABB has developed an Electro Magnetic Brake (EMBR). This product is designed to improve steel quality, i.e. to reduce non-metallic inclusions and blisters as well as the risk of surface cracks. There is a demand for increased steel quality, and simulations and experiments play an important role in optimizing the steel casting. An advanced CFD simulation model has been created to carry out this task.

    The simulation model is validated against a water model that has been built for this purpose; the water model also makes experiments possible. One step in the validation is to measure the velocity and motion pattern of the seeding particles and the air bubbles in the water model, to see whether they correspond to the simulation results.

    Since the water is transparent, seeding particles have been added to the liquid in order to observe the motion of the water. They have the same density as water, so the particles follow the flow accurately. The motion of the air bubbles added into the water model also needs to be observed, since the bubbles influence the flow pattern.

    An algorithm, "Transparent motions", is thoroughly inspected and implemented. "Transparent motions" was originally designed to post-process X-ray images. In this thesis it is investigated whether the algorithm is also applicable to the water model and to the image sequences containing seeding particles and air bubbles that are to be used for motion estimation.

    The results are satisfactory for image sequences containing only particles; with a camera with a faster sampling rate they would improve further. For image sequences with both bubbles and particles, no results have been achieved.

  • 252.
    Haglund, Leif
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Adaptive Multidimensional Filtering, 1991. Doctoral thesis, monograph (Other academic).
    Abstract [en]

    This thesis contains a presentation and an analysis of adaptive filtering strategies for multidimensional data. The size, shape and orientation of the filter are signal controlled and thus adapted locally to each neighbourhood according to a predefined model. The filter is constructed as a linear weighting of fixed oriented bandpass filters having the same shape but different orientations. The adaptive filtering methods have been tested on both real data and synthesized test data in 2D (e.g. still images) and 3D (e.g. image sequences or volumes), with good results. In 4D (e.g. volume sequences), the algorithm is given in its mathematical form. The weighting coefficients are given by the inner products of a tensor representing the local structure of the data and the tensors representing the orientation of the filters.

    The procedure and filter design used in estimating the representation tensor are described. In 2D, the tensor contains information about the local energy, the optimal orientation and a certainty of the orientation. In 3D, the information in the tensor is the energy, the normal to the best fitting local plane and the tangent to the best fitting line, and certainties of these orientations. In the case of time sequences, a quantitative comparison of the proposed method and other (optical flow) algorithms is presented.

    The estimation of control information is made at different scales. There are two main reasons for this. First, a single filter has a limited pass band which may or may not be tuned to the sizes of the objects to be described. Second, size or scale is a descriptive feature in its own right. All of this requires the integration of measurements from different scales. The increasing interest in wavelet theory supports the idea that a multiresolution approach is necessary. Hence the resulting adaptive filter also adapts in size, and to different orientations at different scales.
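    As an illustration of the weighting step described in this abstract, the following NumPy sketch computes the weight of each fixed bandpass filter as the inner product between a local structure tensor and the filter's orientation tensor. It is a simplified 2D toy under assumed rank-1 tensor models, not the thesis implementation; all names are hypothetical.

```python
import numpy as np

def orientation_tensor(theta):
    """Rank-1 tensor n n^T for the direction given by angle theta."""
    n = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(n, n)

def adaptive_weights(T_local, thetas):
    """Weight for each fixed oriented filter: the (Frobenius) inner product
    between the local structure tensor and the filter's orientation tensor."""
    return np.array([np.sum(T_local * orientation_tensor(t)) for t in thetas])

# A neighbourhood whose structure tensor points along the x-axis:
T = orientation_tensor(0.0)
thetas = [0.0, np.pi / 3, 2 * np.pi / 3]   # three fixed filter orientations
w = adaptive_weights(T, thetas)            # filter aligned with the data wins
```

    With these inputs the aligned filter receives weight 1.0 and the two oblique filters 0.25 each, so the combined adaptive filter is dominated by the orientation actually present in the data.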

  • 253.
    Haglund, Leif
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Hierarchical Scale Analysis of Images Using Phase Description, 1989. Licentiate thesis, monograph (Other academic).
    Abstract [en]

    Scale analysis and description has over the last years become one of the major research fields in image processing. There are two main reasons for this. First, a single filter has a limited pass band which may or may not be tuned to the sizes of the objects to be described. Second, size or scale is a descriptive feature in its own right. All of this requires the integration of measurements from different scales.

    The thesis describes a new algorithm which detects in what scale an event appears and in what scale it disappears. In this way scale space is subdivided into a number of intervals. Within each scale interval a consistency check is performed to get the certainty of the detection. It will be shown that, using a three-dimensional phase representation of image data, it is possible to do both the subdivision and the consistency check in a simple manner. The scale levels between different events are detected when a certain dot product becomes negative, and the consistency is a vector summation between these scales. The specific levels where a split of scale space occurs will, of course, be contextually dependent, and there will also be different numbers of levels in different parts of the images. Finally, an application of this information to size description is described.
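    The dot-product test and vector summation mentioned above can be sketched as follows: a toy single-pixel example in which the phase response at each scale is a 2D vector, scale space splits where adjacent responses point in opposite half-planes, and the consistency of an interval is the length of the vector sum relative to the summed lengths. Function names and this exact formulation are illustrative assumptions, not the thesis code.

```python
import numpy as np

def split_scales(phase_vecs):
    """Split scale space where the dot product between responses at
    adjacent scales becomes negative (an event appears/disappears)."""
    cuts = [0]
    for i in range(len(phase_vecs) - 1):
        if np.dot(phase_vecs[i], phase_vecs[i + 1]) < 0:
            cuts.append(i + 1)
    cuts.append(len(phase_vecs))
    return [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]

def consistency(phase_vecs, interval):
    """Certainty of an interval: length of the vector sum divided by the
    sum of vector lengths (1.0 means fully consistent phase)."""
    a, b = interval
    vs = phase_vecs[a:b]
    return np.linalg.norm(np.sum(vs, axis=0)) / sum(np.linalg.norm(v) for v in vs)

# Four scales: two consistent intervals separated by a phase reversal
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.95, -0.05]])
intervals = split_scales(vecs)   # -> [(0, 2), (2, 4)]
```

    Here the phase flips between the second and third scale, so scale space splits into two intervals, each with a consistency close to 1.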

  • 254.
    Haglund, Leif
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Bårman, Håkan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Estimation of Velocity and Acceleration in Time Sequences, 1992. In: Theory & Applications of Image Analysis / [ed] P. Johansen and S. Olsen, Singapore: World Scientific Publishing Co, 1992, p. 223-236. Chapter in book (Refereed).
  • 255.
    Haglund, Leif
    et al.
    n/a.
    Bårman, Håkan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Estimation of Velocity and Acceleration in Time Sequences, 1991. In: Proceedings of the 7th Scandinavian Conference on Image Analysis, Aalborg, Denmark, 1991, p. 1033-1041. Conference paper (Refereed).
  • 256.
    Haglund, Leif
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fleet, David
    n/a.
    Stable Estimation of Image Orientation, 1994. In: Proceedings of the IEEE-ICIP, 1994, p. 68-72. Conference paper (Refereed).
  • 257.
    Haglund, Leif
    et al.
    n/a.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    On Phase Representation of Image Information, 1989. In: The 6th Scandinavian Conference on Image Analysis, Oulu, Finland, 1989, p. 1082-1089. Conference paper (Refereed).
  • 258.
    Haglund, Leif
    et al.
    n/a.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    On Scale and Orientation Adaptive Filtering, 1992. In: Proceedings of the SSAB Symposium on Image Analysis, Uppsala, 1992. Conference paper (Refereed).
  • 259.
    Haglund, Leif
    et al.
    n/a.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Scale Analysis Using Phase Representation, 1989. In: The 6th Scandinavian Conference on Image Analysis, Oulu, Finland, 1989, p. 1118-1125. Conference paper (Refereed).
  • 260.
    Haglund, Leif
    et al.
    n/a.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Scale and Orientation Adaptive Filtering, 1993. In: SCIA8, Tromso, Norway, 1993. Conference paper (Refereed).
    Abstract [en]

    This paper contains a presentation of a scale and orientation adaptive filtering strategy for images. The size, shape and orientation of the filter are signal controlled and thus locally adapted to each neighbourhood according to an estimated model. On each scale the filter is constructed as a linear weighting of fixed oriented bandpass filters having the same shape but different orientations. The resulting filter is interpolated from all scale levels and spans more than 6 octaves. It is possible to reconstruct an enhanced original image from the filtered images. The performance of the reconstruction algorithm displays two desirable but normally contradictory features, namely edge enhancement and an improvement of the signal-to-noise ratio. The adaptive filtering method has been tested on both real data and synthesized test data. The results are very good on a wide variety of images, from moderate signal-to-noise ratios down to low ones, even below 0 dB.

  • 261.
    Hallenberg, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Robot Tool Center Point Calibration using Computer Vision, 2007. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Today, tool center point calibration is mostly done by a manual procedure. The method is very time consuming and the result may vary depending on how skilled the operator is.

    This thesis proposes a new automated iterative method for tool center point calibration of industrial robots, making use of computer vision and image processing techniques. The new method has several advantages over the manual calibration method. Experimental verification has shown that the proposed method is much faster while delivering comparable or even better accuracy. The setup of the proposed method is very easy: only one USB camera connected to a laptop computer is needed, and no contact with the robot tool is necessary during the calibration procedure.

    The method can be split into three parts. Initially, the transformation between the robot wrist and the tool is determined by solving a closed loop of homogeneous transformations. Second, an image segmentation procedure is described for finding point correspondences on a rotationally symmetric robot tool. The image segmentation part is necessary for performing a measurement with six degrees of freedom of the camera-to-tool transformation. The last part of the proposed method is an iterative procedure which automates an ordinary four-point tool center point calibration algorithm. The iterative procedure ensures that the accuracy of the tool center point calibration depends only on the accuracy of the camera when registering a movement between two positions.
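    The first part, solving a closed loop of homogeneous transformations, can be sketched as below. The particular loop (base-to-wrist-to-tool equals base-to-camera-to-tool) and all frame names are illustrative assumptions rather than the thesis's actual setup.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def wrist_to_tool(T_base_wrist, T_base_cam, T_cam_tool):
    """Close the loop  base->wrist->tool == base->cam->tool, giving
    T_wrist_tool = inv(T_base_wrist) @ T_base_cam @ T_cam_tool."""
    return np.linalg.inv(T_base_wrist) @ T_base_cam @ T_cam_tool

# Toy example with pure translations (identity rotations):
T_bw = hom(np.eye(3), [0.0, 0.0, 1.0])    # wrist 1 m above the base
T_bc = hom(np.eye(3), [1.0, 0.0, 0.0])    # camera 1 m in front of the base
T_ct = hom(np.eye(3), [-1.0, 0.0, 1.2])   # tool as seen from the camera
T_wt = wrist_to_tool(T_bw, T_bc, T_ct)    # tool sits 0.2 m above the wrist
</antml_code_removed>```

    With pure translations the loop simply adds offsets, which makes the closed-loop identity easy to verify by hand before trusting it with real rotations.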

  • 262.
    Hedborg, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Pose Estimation and Structure Analysis of Image Sequences, 2009. Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    Autonomous navigation for ground vehicles has many challenges. Autonomous systems must be able to self-localise, avoid obstacles and determine navigable surfaces. This thesis studies several aspects of autonomous navigation with a particular emphasis on vision, motivated by it being a primary component for navigation in many high-level biological organisms.

    The key problem of self-localisation or pose estimation can be solved through analysis of the changes in appearance of rigid objects observed from different view points. We therefore describe a system for structure and motion estimation for real-time navigation and obstacle avoidance. With the explicit assumption of a calibrated camera, we have studied several schemes for increasing the accuracy and speed of the estimation.

    The basis of most structure and motion pose estimation algorithms is a good point tracker. However, point tracking is computationally expensive and can occupy a large portion of the CPU resources. In this thesis we show how a point tracker can be implemented efficiently on the graphics processor, which results in faster tracking of points and leaves the CPU available for additional processing tasks.

    In addition we propose a novel view interpolation approach that can be used effectively for pose estimation given previously seen views. In this way, a vehicle will be able to estimate its location by interpolating previously seen data.

    Navigation and obstacle avoidance may be carried out efficiently using structure and motion, but only within a limited range from the camera. In order to increase this effective range, additional information needs to be incorporated, more specifically the location of objects in the image. For this, we propose a real-time object recognition method based on P-channel matching, which may be used for improving navigation accuracy at distances where structure estimation is unreliable.

    List of papers
    1. Real-Time View-Based Pose Recognition and Interpolation for Tracking Initialization
    2007 (English). In: Journal of Real-Time Image Processing, ISSN 1861-8200, E-ISSN 1861-8219, Vol. 2, no 2-3, p. 103-115. Article in journal (Refereed). Published.
    Abstract [en]

    In this paper we propose a new approach to real-time view-based pose recognition and interpolation. Pose recognition is particularly useful for identifying camera views in databases, video sequences, video streams, and live recordings. All of these applications require a fast pose recognition process, in many cases at video rate. It should further be possible to extend the database with new material, i.e., to update the recognition system online. The method that we propose is based on P-channels, a special kind of information representation which combines advantages of histograms and local linear models. Our approach is motivated by its similarity to information representation in biological systems, but its main advantage is its robustness against common distortions such as clutter and occlusion. The recognition algorithm consists of three steps: (1) low-level image features for color and local orientation are extracted in each point of the image; (2) these features are encoded into P-channels by combining similar features within local image regions; (3) the query P-channels are compared to a set of prototype P-channels in a database using a least-squares approach. The algorithm is applied in two scene registration experiments with fisheye camera data, one for pose interpolation from synthetic images and one for finding the nearest view in a set of real images. The method compares favorably to SIFT-based methods, in particular concerning interpolation. The method can be used for initializing pose-tracking systems, either when starting the tracking or when the tracking has failed and the system needs to re-initialize. Due to its real-time performance, the method can also be embedded directly into the tracking system, allowing a sensor fusion unit to choose dynamically between frame-by-frame tracking and pose recognition.
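    Step (3) of the algorithm, the least-squares comparison of query P-channels against a prototype database, might look like this in outline. The channel vectors here are random stand-ins, and the actual P-channel encoding of steps (1)-(2) is not reproduced; all names are hypothetical.

```python
import numpy as np

def best_match(query, prototypes):
    """Compare a query channel vector to each prototype with a least-squares
    criterion; return the index of the best fit and all residuals."""
    residuals = [float(np.sum((query - p) ** 2)) for p in prototypes]
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(0)
prototypes = rng.random((5, 16))               # 5 database views, 16 channels each
query = prototypes[3] + 0.01 * rng.random(16)  # noisy observation of view 3
idx, res = best_match(query, prototypes)
```

    A real system would replace the plain residual with the local-linear-model fit the P-channel representation supports, but the nearest-prototype structure stays the same.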

    Keywords
    computer vision
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-39505 (URN); 10.1007/s11554-007-0044-y (DOI); 49062 (Local ID); 49062 (Archive number); 49062 (OAI)
    Note
    Original Publication: Michael Felsberg and Johan Hedborg, Real-Time View-Based Pose Recognition and Interpolation for Tracking Initialization, 2007, Journal of Real-Time Image Processing, (2), 2-3, 103-115. http://dx.doi.org/10.1007/s11554-007-0044-y Copyright: Springer Science Business Media. Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2017-12-13. Bibliographically approved.
    2. Real-Time Visual Recognition of Objects and Scenes Using P-Channel Matching
    2007 (English). In: Proceedings 15th Scandinavian Conference on Image Analysis / [ed] Bjarne K. Ersboll and Kim S. Pedersen, Berlin, Heidelberg: Springer, 2007, Vol. 4522, p. 908-917. Conference paper, Published paper (Refereed).
    Abstract [en]

    In this paper we propose a new approach to real-time view-based object recognition and scene registration. Object recognition is an important sub-task in many applications, e.g., robotics, retrieval, and surveillance. Scene registration is particularly useful for identifying camera views in databases or video sequences. All of these applications require a fast recognition process and the possibility to extend the database with new material, i.e., to update the recognition system online. The method that we propose is based on P-channels, a special kind of information representation which combines advantages of histograms and local linear models. Our approach is motivated by its similarity to information representation in biological systems, but its main advantage is its robustness against common distortions such as clutter and occlusion. The recognition algorithm extracts a number of basic, intensity invariant image features, encodes them into P-channels, and compares the query P-channels to a set of prototype P-channels in a database. The algorithm is applied in a cross-validation experiment on the COIL database, resulting in nearly ideal ROC curves. Furthermore, results from scene registration with a fish-eye camera are presented.

    Place, publisher, year, edition, pages
    Berlin, Heidelberg: Springer, 2007
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 4522
    Keywords
    Object recognition - scene registration - P-channels - real-time processing - view-based computer vision
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-21618 (URN); 10.1007/978-3-540-73040-8 (DOI); 978-3-540-73039-2 (ISBN)
    Conference
    15th Scandinavian Conference, SCIA 2007, June 10-14, Aalborg, Denmark
    Note

    Original Publication: Michael Felsberg and Johan Hedborg, Real-Time Visual Recognition of Objects and Scenes Using P-Channel Matching, 2007, Proc. 15th Scandinavian Conference on Image Analysis, 908-917. http://dx.doi.org/10.1007/978-3-540-73040-8 Copyright: Springer

    Available from: 2009-10-05 Created: 2009-10-05 Last updated: 2017-03-23. Bibliographically approved.
    3. Fast and Accurate Structure and Motion Estimation
    2009 (English). In: International Symposium on Visual Computing / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Yoshinori Kuno, Junxian Wang, Jun-Xuan Wang, Junxian Wang, Renato Pajarola and Peter Lindstrom et al., Berlin Heidelberg: Springer-Verlag, 2009, p. 211-222. Conference paper, Oral presentation only (Refereed).
    Abstract [en]

    This paper describes a system for structure-and-motion estimation for real-time navigation and obstacle avoidance. We demonstrate a technique to increase the efficiency of the 5-point solution to the relative pose problem. This is achieved by a novel sampling scheme, where we add a distance constraint on the sampled points inside the RANSAC loop, before calculating the 5-point solution. Our setup uses the KLT tracker to establish point correspondences across time in live video. We also demonstrate how an early outlier rejection in the tracker improves performance in scenes with plenty of occlusions. This outlier rejection scheme is well suited to implementation on graphics hardware. We evaluate the proposed algorithms using real camera sequences with fine-tuned bundle adjusted data as ground truth. To strengthen our results we also evaluate using sequences generated by state-of-the-art rendering software. On average we are able to reduce the number of RANSAC iterations by half and thereby double the speed.
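    The sampling scheme with a distance constraint inside the RANSAC loop could be sketched as follows. The pixel threshold, retry limit, and function names are illustrative assumptions; the 5-point solver itself is omitted.

```python
import numpy as np

def sample_spread_points(points, n=5, min_dist=20.0, rng=None, max_tries=100):
    """Draw n point indices for the 5-point solver, rejecting draws where
    any two image points are closer than min_dist pixels (the distance
    constraint applied before solving for relative pose)."""
    if rng is None:
        rng = np.random.default_rng()
    for _ in range(max_tries):
        idx = rng.choice(len(points), size=n, replace=False)
        p = points[idx]
        d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
        if d[np.triu_indices(n, k=1)].min() >= min_dist:
            return idx
    return idx  # fall back to the last draw

# 100 tracked points spread over a 640x480 image
rng = np.random.default_rng(1)
pts = rng.uniform([0.0, 0.0], [640.0, 480.0], size=(100, 2))
idx = sample_spread_points(pts, min_dist=50.0, rng=rng)
```

    Well-spread minimal samples condition the pose estimate better, which is one plausible reading of why the paper's constrained sampling needs fewer RANSAC iterations on average.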

    Place, publisher, year, edition, pages
    Berlin Heidelberg: Springer-Verlag, 2009
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; Volume 5875
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-50624 (URN); 10.1007/978-3-642-10331-5_20 (DOI); 000278937300020 ()
    Conference
    5th International Symposium, ISVC 2009, November 30 - December 2, Las Vegas, NV, USA
    Projects
    DIPLECS
    Available from: 2009-10-13 Created: 2009-10-13 Last updated: 2016-05-04. Bibliographically approved.
    4. Real time camera ego-motion compensation and lens undistortion on GPU
    2007 (English). Manuscript (preprint) (Other academic).
    Abstract [en]

    This paper describes a GPU implementation for simultaneous camera ego-motion compensation and lens undistortion. The main idea is to transform the image under an ego-motion constraint so that tracked points in the image, which are assumed to come from the ego-motion, map as closely as possible to their average position in time. The lens undistortion is computed simultaneously. We compare the performance with and without compensation using two measures: mean time difference and mean statistical background subtraction.

    Publisher
    p. 8
    Keywords
    GPU, camera ego-motion compensation, lens undistortion
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-58547 (URN)
    Available from: 2010-08-18 Created: 2010-08-13 Last updated: 2011-01-25. Bibliographically approved.
    5. KLT Tracking Implementation on the GPU
    2007 (English). In: Proceedings SSBA 2007 / [ed] Magnus Borga, Anders Brun and Michael Felsberg, 2007. Conference paper, Oral presentation only (Other academic).
    Abstract [en]

    The GPU is the main processing unit on a graphics card. A modern GPU typically provides more than ten times the computational power of an ordinary PC processor. This is a result of the high demands for speed and image quality in computer games. This paper investigates the possibility of exploiting this computational power for tracking points in image sequences. Tracking points is used in many computer vision tasks, such as tracking moving objects, structure from motion, face tracking etc. The algorithm was successfully implemented on the GPU and a large speed up was achieved.

    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-21602 (URN)
    Conference
    SSBA, Swedish Symposium in Image Analysis 2007, 14-15 March, Linköping, Sweden
    Available from: 2009-10-05 Created: 2009-10-05 Last updated: 2016-05-04
    6. Synthetic Ground Truth for Feature Trackers
    2008 (English). In: Swedish Symposium on Image Analysis 2008, 2008. Conference paper, Published paper (Other academic).
    Abstract [en]

    Good data sets for evaluation of computer vision algorithms are important for the continued progress of the field. There exist good evaluation sets for many applications, but there are others for which good evaluation sets are harder to come by. One such example is feature tracking, where there is an obvious difficulty in the collection of data. Good evaluation data is important both for comparisons of different algorithms, and to detect weaknesses in a specific method. All image data is a result of light interacting with its environment. These interactions are so well modelled in rendering software that sometimes not even the sharpest human eye can tell the difference between reality and simulation. In this paper we thus propose to use a high quality rendering system to create evaluation data for sparse point correspondence trackers.

    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-58548 (URN)
    Conference
    Swedish Symposium on Image Analysis 2008, 13-14 March, Lund, Sweden
    Available from: 2010-08-18 Created: 2010-08-13 Last updated: 2015-12-10. Bibliographically approved.
  • 263.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast and Robust Relative Pose Estimation for Forward and Sideways Motions, 2010. In: SSBA, 2010. Conference paper (Other academic).
  • 264.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast and Accurate Ego-Motion Estimation, 2009. Conference paper (Refereed).
    Abstract [en]

    This paper describes a system that efficiently uses the KLT tracker together with a calibrated 5-point solver for structure-from-motion (SfM). Our system uses a GPU to perform tracking, and the CPU for SfM.

    In this setup, it is advantageous to run the tracker both forwards and backwards in time, to detect incorrectly tracked points. We introduce a modification to the point selection inside the RANSAC step of the 5-point solver, and demonstrate how this speeds up the algorithm. Our evaluations use both real camera sequences and data from a state-of-the-art rendering engine with associated ground truth.
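    The forwards-and-backwards tracking described above amounts to a round-trip consistency test: a point tracked forward and then back should land near where it started. A minimal sketch, with the threshold and all names as assumptions (the paper's GPU tracker is not reproduced):

```python
import numpy as np

def forward_backward_filter(p0, p0_back, max_err=1.0):
    """Keep only points whose backward-tracked position returns to within
    max_err pixels of the original position (forward-backward consistency)."""
    err = np.linalg.norm(p0 - p0_back, axis=1)
    return err <= max_err

# Three tracked points; the second drifts on the return pass
p0 = np.array([[10.0, 10.0], [50.0, 40.0], [100.0, 80.0]])
p0_back = np.array([[10.2, 9.9], [57.0, 44.0], [100.1, 80.0]])
mask = forward_backward_filter(p0, p0_back)  # -> [True, False, True]
```

    Rejecting such drifting tracks before RANSAC lowers the outlier ratio, which is consistent with the reported reduction in RANSAC iterations.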

  • 265.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Synthetic Ground Truth for Feature Trackers, 2008. In: Swedish Symposium on Image Analysis 2008, 2008. Conference paper (Other academic).
    Abstract [en]

    Good data sets for evaluation of computer vision algorithms are important for the continued progress of the field. There exist good evaluation sets for many applications, but there are others for which good evaluation sets are harder to come by. One such example is feature tracking, where there is an obvious difficulty in the collection of data. Good evaluation data is important both for comparisons of different algorithms, and to detect weaknesses in a specific method. All image data is a result of light interacting with its environment. These interactions are so well modelled in rendering software that sometimes not even the sharpest human eye can tell the difference between reality and simulation. In this paper we thus propose to use a high quality rendering system to create evaluation data for sparse point correspondence trackers.

  • 266.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast and Accurate Structure and Motion Estimation, 2009. In: International Symposium on Visual Computing / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Yoshinori Kuno, Junxian Wang, Jun-Xuan Wang, Junxian Wang, Renato Pajarola and Peter Lindstrom et al., Berlin Heidelberg: Springer-Verlag, 2009, p. 211-222. Conference paper (Refereed).
    Abstract [en]

    This paper describes a system for structure-and-motion estimation for real-time navigation and obstacle avoidance. We demonstrate a technique to increase the efficiency of the 5-point solution to the relative pose problem. This is achieved by a novel sampling scheme, where we add a distance constraint on the sampled points inside the RANSAC loop, before calculating the 5-point solution. Our setup uses the KLT tracker to establish point correspondences across time in live video. We also demonstrate how an early outlier rejection in the tracker improves performance in scenes with plenty of occlusions. This outlier rejection scheme is well suited to implementation on graphics hardware. We evaluate the proposed algorithms using real camera sequences with fine-tuned bundle adjusted data as ground truth. To strengthen our results we also evaluate using sequences generated by state-of-the-art rendering software. On average we are able to reduce the number of RANSAC iterations by half and thereby double the speed.

  • 267.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Real time camera ego-motion compensation and lens undistortion on GPU2007Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper describes a GPU implementation for simultaneous camera ego-motion compensation and lens undistortion. The main idea is to transform the image under an ego-motion constraint so that tracked points in the image, which are assumed to come from the ego-motion, map as close as possible to their average position in time. The lens undistortion is computed simultaneously. We compare the performance with and without compensation using two measures: mean time difference and mean statistical background subtraction.

  • 268.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Skoglund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    KLT Tracking Implementation on the GPU2007In: Proceedings SSBA 2007 / [ed] Magnus Borga, Anders Brun and Michael Felsberg;, 2007Conference paper (Other academic)
    Abstract [en]

    The GPU is the main processing unit on a graphics card. A modern GPU typically provides more than ten times the computational power of an ordinary PC processor. This is a result of the high demands for speed and image quality in computer games. This paper investigates the possibility of exploiting this computational power for tracking points in image sequences. Tracking points is used in many computer vision tasks, such as tracking moving objects, structure from motion, face tracking etc. The algorithm was successfully implemented on the GPU and a large speed up was achieved.
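
    As a rough illustration of what a KLT tracker computes (independent of any GPU implementation), the sketch below runs iterative translational Lucas-Kanade on a synthetic blob: linearise around the template gradients and solve a 2x2 system per iteration, resampling the second image bilinearly. All names, window sizes, and test data are invented for the example.

    ```python
    import numpy as np

    def gaussian_blob(shape, cy, cx, sigma=3.0):
        y, x = np.mgrid[0:shape[0], 0:shape[1]]
        return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

    def klt_track(img0, img1, pt, win=15, iters=30):
        """Track one point from img0 to img1 with translational Lucas-Kanade."""
        half = win // 2
        y, x = pt
        T = img0[y - half:y + half + 1, x - half:x + half + 1]
        gy, gx = np.gradient(T)                      # template gradients
        A = np.stack([gx.ravel(), gy.ravel()], axis=1)
        H = A.T @ A                                  # 2x2 normal-equation matrix
        d = np.zeros(2)                              # displacement (dx, dy)
        for _ in range(iters):
            xs, ys = x + d[0], y + d[1]
            xi, yi = int(np.floor(xs)), int(np.floor(ys))
            ax, ay = xs - xi, ys - yi
            # Bilinear sample of the moving window in img1.
            P = ((1 - ax) * (1 - ay) * img1[yi - half:yi + half + 1, xi - half:xi + half + 1]
                 + ax * (1 - ay) * img1[yi - half:yi + half + 1, xi - half + 1:xi + half + 2]
                 + (1 - ax) * ay * img1[yi - half + 1:yi + half + 2, xi - half:xi + half + 1]
                 + ax * ay * img1[yi - half + 1:yi + half + 2, xi - half + 1:xi + half + 2])
            r = (P - T).ravel()
            d -= np.linalg.solve(H, A.T @ r)         # Gauss-Newton update
        return d

    img0 = gaussian_blob((41, 41), 20.0, 20.0)
    img1 = gaussian_blob((41, 41), 20.7, 21.3)       # blob shifted by (dy, dx) = (0.7, 1.3)
    d = klt_track(img0, img1, (20, 20))
    ```

    The inner loop is the same small dense computation per point for every point, which is what makes the algorithm such a natural fit for the GPU.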

  • 269.
    Hedlund, Gunnar
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Närmaskbestämning från stereoseende2005Independent thesis Basic level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    This thesis investigates distance estimation using image processing and stereo vision for a known camera setup.

    A large number of computational methods for obtaining the distance to objects exist today, but the performance of these methods has barely been measured. This work mainly examines different block-based methods for distance estimation, and looks at the possibilities and limitations when established knowledge in image processing and stereo vision is applied to distance estimation. The work was carried out at Bofors Defence AB in Karlskoga, Sweden, with the aim of eventual use in an optical sensor system. The thesis investigates well-proven

    The results indicate that it is hard to determine a near mask, i.e. the distance to all visible objects, but the tested methods should still be usable point-wise to compute distances. The best method is based on computing the least absolute error and keeping only the most reliable values.

  • 270.
    Hedlund, Martin
    et al.
    n/a.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    A Consistency Operation for Line and Curve Enhancement1982In: The Computer Society Conference on PR&IP: Anaheim, California, 1982Conference paper (Refereed)
  • 271.
    Hedlund, Martin
    et al.
    n/a.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Image Filtering and Relaxation Procedures using Hierarchical Models1981In: Proceedings of the 2nd Scandinavian Conference on Image Analysis: Finland, 1981Conference paper (Refereed)
  • 272.
    Helmer, Scott
    et al.
    UBC.
    Meger, David
    UBC.
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Southey, Tristram
    UBC.
    McCann, Sancho
    UBC.
    Fazli, Pooyan
    UBC.
    Little, James J.
    UBC.
    Lowe, David G.
    UBC.
    The UBC Semantic Robot Vision System2007In: AAAI,2007, Vancouver: AAAI Press , 2007Conference paper (Refereed)
  • 273.
    Hemmendorff, Magnus
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Single and Multiple Motion Field Estimation1999Licentiate thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents a framework for estimation of motion fields both for single and multiple layers. All the methods have in common that they generate or use constraints on the local motion. Motion constraints are represented by vectors whose directions describe one component of the local motion and whose magnitudes indicate confidence.

    Two novel methods for estimating these motion constraints are presented. Both methods take two images as input and apply orientation sensitive quadrature filters. One method is similar to a gradient method applied on the phase from the complex filter outputs. The other method is based on novel results using canonical correlation presented in this thesis.

    Parametric models, e.g. affine or FEM, are used to estimate motion from constraints on local motion. In order to estimate smooth fields for models with many parameters, cost functions on deformations are introduced.

    Motions of transparent multiple layers are estimated by implicit or explicit clustering of motion constraints into groups. General issues and difficulties in analysis of multiple motions are described. An extension of the known EM algorithm is presented together with experimental results on multiple transparent layers with affine motions. Good accuracy in estimation allows reconstruction of layers using a backprojection algorithm. As an alternative to the EM algorithm, this thesis also introduces a method based on higher order tensors.

    A result with potential applications in a number of different research fields is the extension of canonical correlation to handle complex variables. Correlation is maximized using a novel method that can handle singular covariance matrices.
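
    The core of canonical correlation can be sketched as a small eigenvalue problem. The snippet below is plain real-valued CCA and does not include the thesis' extensions to complex variables or singular covariance matrices; all names are illustrative, and a small Tikhonov term stands in for proper handling of rank deficiency.

    ```python
    import numpy as np

    def first_canonical_correlation(X, Y, reg=1e-9):
        """Largest canonical correlation between two row-sample matrices."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        n = len(X)
        Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
        Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
        Cxy = X.T @ Y / n
        # Eigenvalues of Cxx^-1 Cxy Cyy^-1 Cyx are squared canonical correlations.
        M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
        return np.sqrt(max(np.linalg.eigvals(M).real.max(), 0.0))

    rng = np.random.default_rng(1)
    z = rng.normal(size=(500, 1))                  # shared latent signal
    X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
                   rng.normal(size=(500, 1))])     # one informative, one noise dim
    Y = np.hstack([rng.normal(size=(500, 1)),
                   -z + 0.1 * rng.normal(size=(500, 1))])
    rho = first_canonical_correlation(X, Y)
    ```

    CCA recovers the shared latent signal even though it sits in different coordinates (and with opposite sign) in the two views, which is the property exploited when deriving motion constraints from image pairs.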

  • 274.
    Hemmendorff, Magnus
    et al.
    n/a.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Andersson, Mats T.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Kronander, Torbjörn
    n/a.
    Motion compensated digital subtraction angiography1999In: Proceedings of SPIE's International Symposium on Medical Imaging, vol 3661,  1999: San Diego, USA, 1999, Vol. 3661 Image ProcessingConference paper (Refereed)
    Abstract [en]

    Digital subtraction angiography, whether based on traditional X-ray or MR, suffers from patient motion artifacts. Until now, the usual remedy is to pixel shift by hand, or in some cases performing a global pixel shift semi-automatically. This is time consuming, and cannot handle rotations or local varying deformations over the image. We have developed a fully automatic algorithm that provides for motion compensation in the presence of large local deformations. Our motion compensation is very accurate for ordinary motions, including large rotations and deformations. It does not matter if the motions are irregular over time. For most images, it takes about a second per image to get adequate accuracy. The method is based on using the phase from filter banks of quadrature filters tuned in different directions and frequencies. Unlike traditional methods for optical flow and correlation, our method is more accurate and less susceptible to disturbing changes in the image, e.g. a moving contrast bolus. The implications for common practice are that radiologists' time can be significantly reduced in ordinary peripheral angiographies and that the number of retakes due to large or local motion artifacts will be much reduced.

  • 275.
    Holm Ovrén, Hannes
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Emilsson, Erika
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Missile approach warning using multi-spectral imagery2010Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Man-portable air defence systems (MANPADS) pose a serious threat to civilian and military aircraft. This thesis aims to find methods that could be used in a missile approach warning system based on infrared cameras.

    The two main tasks of the completed system are to classify the type of missile, and to estimate its position and velocity from a sequence of images.

    The classification is based on hidden Markov models, one-class classifiers, and multi-class classifiers.

    Position and velocity estimation uses a model of the observed intensity as a function of real intensity, image coordinates, distance and missile orientation. The estimation is made by an extended Kalman filter.

    We show that fast classification of missiles based on radiometric data and a hidden Markov model is possible and works well, although more data would be needed to verify the results.

    Estimating the position and velocity works fairly well if the initial parameters are known. Unfortunately, some of these parameters cannot be computed using the available sensor data.
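
    A generic extended Kalman filter of the kind mentioned above can be sketched as follows. The example uses a constant-velocity state and a range/bearing measurement instead of the thesis' intensity model, so it only illustrates the predict/update structure; every name, noise level, and parameter here is an assumption made for the example.

    ```python
    import numpy as np

    def ekf_step(x, P, z, dt, Q, R):
        """One predict/update cycle of an EKF with constant-velocity state
        [px, py, vx, vy] and a range/bearing measurement from the origin."""
        F = np.eye(4); F[0, 2] = F[1, 3] = dt
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        px, py = x[0], x[1]
        r = np.hypot(px, py)
        h = np.array([r, np.arctan2(py, px)])      # nonlinear measurement model
        H = np.array([[px / r,     py / r,    0, 0],
                      [-py / r**2, px / r**2, 0, 0]])   # Jacobian of h
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        y = z - h
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi     # wrap bearing residual
        x = x + K @ y                              # update
        P = (np.eye(4) - K @ H) @ P
        return x, P

    # Simulate a constant-velocity target and track it.
    rng = np.random.default_rng(2)
    dt, Q, R = 0.1, 1e-4 * np.eye(4), np.diag([0.05**2, 0.01**2])
    true = np.array([10.0, 5.0, 1.0, -0.5])
    x, P = np.array([9.0, 6.0, 0.0, 0.0]), np.eye(4)
    for _ in range(200):
        true[:2] += dt * true[2:]
        z = np.array([np.hypot(*true[:2]), np.arctan2(true[1], true[0])])
        z = z + rng.normal(0.0, [0.05, 0.01])      # noisy measurement
        x, P = ekf_step(x, P, z, dt, Q, R)
    ```

    Note how the filter only needs a measurement function and its Jacobian; swapping in a radiometric intensity model as in the thesis changes `h` and `H` but not the loop.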

  • 276.
    Håkansson, Staffan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Detektering av sprickor i vägytor med hjälp av Datorseende2005Independent thesis Basic level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis describes new methods for automatic crack detection in pavements. Cracks in pavements can be used as an early indication of the need for repair.

    Automatic crack detection is preferable to manual inventory: the repeatability can be better, the inventory can be done at a higher speed, and it can be done without interrupting the traffic.

    The automatic and semi-automatic crack detection systems that exist today use Image Analysis methods. Today, powerful methods are also available in the area of Computer Vision. These methods work in higher dimensions with greater complexity and generate measures of local signal properties, while Image Analysis methods for crack detection use morphological operations on binary images.

    Methods for digitalizing video data on VHS-cassettes and stitching images from nearby frames have been developed.

    Four methods for crack detection have been evaluated, and two of them have been used to form a crack detection and classification program implemented in the calculation program Matlab.

    One image set was used during the implementation and another image set was used for validation. The crack detection system performed correct detection in 99.2 percent of cases when analysing the images that were used during implementation. The result of the crack detection on the validation data was not as good. When the program is used on data from pavements other than the one used during implementation, information about the surface texture is required to calibrate the crack detection.

  • 277.
    Isaksson, Marcus
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Face Detection and Pose Estimation using Triplet Invariants2002Independent thesis Basic level (professional degree), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Face detection and pose estimation are two widely studied problems - mainly because of their use as subcomponents in important applications, e.g. face recognition. In this thesis I investigate a new approach to the general problem of object detection and pose estimation and apply it to faces. Face detection can be considered a special case of this general problem, but is complicated by the fact that faces are non-rigid objects. The basis of the new approach is the use of scale and orientation invariant feature structures - feature triplets - extracted from the image, as well as a biologically inspired associative structure which maps from feature triplets to desired responses (position, pose, etc.). The feature triplets are constructed from curvature features in the image and coded in a way to represent distances between major facial features (eyes, nose and mouth). The final system has been evaluated on different sets of face images.

  • 278.
    Isoz, Wilhelm
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Calibration of Multispectral Sensors2005Independent thesis Basic level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis describes and evaluates a number of approaches and algorithms for non-uniformity correction (NUC) and suppression of fixed pattern noise in an image sequence. The main task of this thesis work was to create a general NUC for infrared focal plane arrays. To create a radiometrically correct NUC, reference-based methods using polynomial approximation are used instead of the more common scene-based methods, which create a cosmetic NUC.

    The pixels that cannot be adjusted to give a correct value for the incoming radiation are defined as dead. Four separate methods of identifying dead pixels are used to find these pixels. Both the scene sequence and calibration data are used in these identification methods.

    The algorithms and methods have all been tested using real image sequences. A graphical user interface using the presented algorithms has been created in Matlab to simplify the correction of image sequences. A conversion of the corrected image values to radiance and temperature has also been implemented.
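
    A minimal reference-based NUC can be sketched as a per-pixel two-point (gain/offset) calibration from two uniform blackbody reference frames; the thesis uses higher-order polynomial approximation, so this is only the linear special case, with invented names and synthetic data. Dead-pixel identification is reduced here to a single responsivity test.

    ```python
    import numpy as np

    def two_point_nuc(ref_cold, ref_hot, L_cold, L_hot, tol=1e-6):
        """Per-pixel gain and offset mapping raw counts to radiance, from two
        uniform reference frames at known radiances L_cold < L_hot. Pixels
        with (near) zero response between the references are flagged dead."""
        resp = ref_hot - ref_cold
        dead = np.abs(resp) < tol
        safe = np.where(dead, 1.0, resp)          # avoid division by zero
        gain = (L_hot - L_cold) / safe
        offset = L_cold - gain * ref_cold
        return gain, offset, dead

    # Synthetic focal plane array with per-pixel responsivity and offset.
    rng = np.random.default_rng(3)
    g = rng.uniform(0.8, 1.2, (32, 32))           # per-pixel responsivity
    o = rng.normal(0.0, 5.0, (32, 32))            # per-pixel offset (fixed pattern noise)
    g[0, 0] = 0.0                                 # one non-responsive (dead) pixel
    L_cold, L_hot = 100.0, 200.0
    ref_cold, ref_hot = g * L_cold + o, g * L_hot + o
    gain, offset, dead = two_point_nuc(ref_cold, ref_hot, L_cold, L_hot)

    scene = rng.uniform(100.0, 200.0, (32, 32))   # true radiance scene
    corrected = gain * (g * scene + o) + offset   # NUC applied to a raw frame
    ```

    Because the per-pixel response model is exactly linear here, the correction recovers the radiance exactly for all live pixels; with a real detector the residual non-linearity is what motivates the polynomial approximation.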

  • 279.
    Jilken, L.
    et al.
    n/a.
    Bäcklund, J.
    n/a.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision . Linköping University, The Institute of Technology.
    Automatic Fatigue Threshold Value Testing1978In: Conf. on Mechanisms of Deformation and Fracture: Luleå, Sweden, 1978Conference paper (Refereed)
  • 280.
    Johansson, Björn
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    A Survey on: Contents Based Search in Image Databases2000Report (Other academic)
    Abstract [en]

    This survey contains links and facts for a number of current projects on content-based search in image databases around the world. The main focus is on what kind of image features are used, but also on the user interface and the user's possibilities to interact with the system (i.e. what 'visual language' is used).

  • 281.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Backprojection of Some Image Symmetries Based on a Local Orientation Description2000Report (Other academic)
    Abstract [en]

    Some image patterns, e.g. circles, hyperbolic curves, star patterns etc., can be described in a compact way using local orientation. The features mentioned above are part of a family of patterns called rotational symmetries. This theory can be used to detect image patterns from the local orientation in double angle representation of an image. Some of the rotational symmetries were originally described from the local orientation without being designed to detect a certain feature. The question is then: given a description in double angle representation, what kind of image features does this description correspond to? This 'inverse', or backprojection, is not unambiguous - many patterns have the same local orientation description. This report answers this question for the case of rotational symmetries and also for some other descriptions.
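
    The double angle representation referred to above can be illustrated in a few lines: squaring the complex gradient doubles the orientation angle, so a direction and its opposite (edges of either polarity) map to the same descriptor. That many-to-one mapping is precisely why the backprojection is not unambiguous. The function below is a minimal sketch, not the report's operator.

    ```python
    import numpy as np

    def double_angle(img):
        """Local orientation in double-angle form: the squared complex
        gradient (gx + i*gy)**2. Doubling the angle factors out the sign
        of the local direction vector."""
        gy, gx = np.gradient(img.astype(float))
        return (gx + 1j * gy) ** 2

    # Two ramps with opposite gradient direction (opposite edge polarity)
    # receive identical double-angle descriptors.
    x = np.linspace(-1.0, 1.0, 32)
    ramp_up = np.tile(x, (32, 1))        # gradient along +x
    ramp_down = -ramp_up                 # gradient along -x
    z_up, z_dn = double_angle(ramp_up), double_angle(ramp_down)
    ```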

  • 282.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Curvature Detection using Polynomial Fitting on Local Orientation2000Report (Other academic)
    Abstract [en]

    This report describes a technique to detect curvature. The technique uses local polynomial fitting on a local orientation description of an image. The idea is based on the theory of rotational symmetries which describes curvature, circles, star-patterns etc. The local polynomial fitting is shown to be equivalent to calculating partial derivatives on a lowpass version of the local orientation. The new method can therefore be very efficiently implemented both in the singlescale case and in the multiscale case.

  • 283.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Low Level Operations and Learning in Computer Vision2004Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents some concepts and methods for low level computer vision and learning, with object recognition as the primary application.

    An efficient method for detection of local rotational symmetries in images is presented. Rotational symmetries include circle patterns, star patterns, and certain high curvature patterns. The method for detection of these patterns is based on local moments computed on a local orientation description in double angle representation, which makes the detection invariant to the sign of the local direction vectors. Some methods are also suggested to increase the selectivity of the detection method. The symmetries can serve as feature descriptors and interest points for use in hierarchical matching structures for object recognition and related problems.

    A view-based method for 3D object recognition and estimation of object pose from a single image is also presented. The method is based on simple feature vector matching and clustering. Local orientation regions computed at interest points are used as features for matching. The regions are computed such that they are invariant to translation, rotation, and locally invariant to scale. Each match casts a vote on a certain object pose, rotation, scale, and position, and a joint estimate is found by a clustering procedure. The method is demonstrated on a number of real images and the region features are compared with the SIFT descriptor, which is another standard region feature for the same application.

    Finally, a new associative network is presented which applies the channel representation for both input and output data. This representation is sparse and monopolar, and is a simple yet powerful representation of scalars and vectors. It is especially suited for representation of several values simultaneously, a property that is inherited by the network and something which is useful in many computer vision problems. The chosen representation enables us to use a simple linear model for non-linear mappings. The linear model parameters are found by solving a least squares problem with a non-negative constraint, which gives a sparse regularized solution.
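
    The channel representation mentioned in the last paragraph can be sketched with overlapping cos^2 kernels. The decoding below is a simple local weighted average over the strongest group of channels, not the thesis' least-squares machinery, and the channel layout and width are arbitrary choices for the example.

    ```python
    import numpy as np

    def encode(v, centers, width=1.0):
        """Encode scalar(s) into overlapping cos^2 channels: a sparse,
        monopolar (non-negative) representation with local support."""
        d = np.abs(np.atleast_1d(v)[:, None] - centers[None, :]) / width
        return np.where(d < 1.5, np.cos(np.pi * d / 3) ** 2, 0.0)

    def decode(ch, centers):
        """Approximate decoding: weighted average over the three channels
        around the strongest response."""
        out = []
        for row in ch:
            k = int(row.argmax())
            idx = np.arange(max(k - 1, 0), min(k + 2, len(centers)))
            out.append((row[idx] * centers[idx]).sum() / row[idx].sum())
        return np.array(out)

    centers = np.arange(0.0, 10.0)       # channel centres at unit spacing
    vals = np.array([3.3, 7.25])
    ch = encode(vals, centers)
    rec = decode(ch, centers)
    ```

    Because each scalar activates only a few neighbouring channels, two distinct values can be represented simultaneously in one channel vector without interfering, which is the property the associative network inherits.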

  • 284.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multidimensional signal recognition, invariant to affine transformation and time-shift, using canonical correlation1997Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Chapter 2 describes the concept of canonical correlation, which is necessary background for the discussion that follows.

    Chapter 3 introduces the problem that was to be solved.

    Chapters 4, 5 and 6 discuss three different suggested approaches to the problem. Each chapter begins with a section of experiments as a motivation of the approach. Then follows some theory and mathematical manipulations to structure the thoughts. The last sections contain discussions and suggestions concerning the approach.

    Finally, chapter 7 contains a summary and a comparative discussion of the approaches.

  • 285.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multiscale Curvature Detection in Computer Vision2001Licentiate thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents a new method for detection of complex curvatures such as corners, circles, and star patterns. The method is based on a second degree local polynomial model applied to a local orientation description in double angle representation. The theory of rotational symmetries is used to compute curvature responses from the parameters of the polynomial model. The responses are made more selective using a scheme of inhibition between different symmetry models. These symmetries can serve as feature points at a high abstraction level for use in hierarchical matching structures for 3D estimation, object recognition, image database search, etc.

    A very efficient approximative algorithm for single and multiscale polynomial expansion is developed, which is used for detection of the complex curvatures in one or several scales. The algorithm is based on the simple observation that polynomial functions multiplied with a Gaussian function can be described in terms of partial derivatives of the Gaussian. The approximative polynomial expansion algorithm is evaluated in an experiment to estimate local orientation on 3D data, and the performance is comparable to previously tested algorithms which are more computationally expensive.

    The curvature algorithm is demonstrated on natural images and in an object recognition experiment. Phase histograms based on the curvature features are developed and shown to be useful as an alternative compact image representation.

    The importance of curvature is furthermore motivated by reviewing examples from biological and perceptual studies. The usefulness of local orientation information to detect curvature is also motivated by an experiment about learning a corner detector.

  • 286.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    On Classification: Simultaneously Reducing Dimensionality and Finding Automatic Representation using Canonical Correlation2001Report (Other academic)
    Abstract [en]

    This report describes an idea based on the work in [1], where an algorithm for learning automatic representation of visual operators is presented. The algorithm in [1] uses canonical correlation to find a suitable subspace in which the signal is invariant to some desired properties. This report presents a related approach specially designed for classification problems. The goal is to find a subspace in which the signal is invariant within each class, and at the same time compute the class representation in that subspace. This algorithm is closely related to the one in [1], but less computationally demanding, and it is shown that the two algorithms are equivalent if we have an equal number of training samples for each class. Even though the new algorithm is designed for pure classification problems it can still be used to learn visual operators as will be shown in the experiment section. [1] M. Borga. Learning Multidimensional Signal Processing. PhD thesis, Linköping University, Sweden, SE-581 83 Linköping, 1998. Dissertation No 531, ISBN 91-7219-202-X.

  • 287.
    Johansson, Björn
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    On Sparse Associative Networks: A Least Squares Formulation2001Report (Other academic)
    Abstract [en]

    This report is a complement to the working document [1], where a sparse associative network is described. This report shows that the net learning rule in [1] can be viewed as the solution to a weighted least squares problem. This means that we can apply the theory framework of least squares problems, and compare the net rule with some other iterative algorithms that solve the same problem. The learning rule is compared with the gradient search algorithm and the RPROP algorithm in a simple synthetic experiment. The gradient rule has the slowest convergence while the associative and the RPROP rules have similar convergence. The associative learning rule has a smaller initial error than the RPROP rule though.

    It is also shown in the same experiment that we get a faster convergence if we have a monopolar constraint on the solution, i.e. if the solution is constrained to be non-negative. The least squares error is a bit higher but the norm of the solution is smaller, which gives a smaller interpolation error.

    The report also discusses a generalization of the least squares model, which include other known function approximation models.

    [1] G. Granlund. Parallel Learning in Artificial Vision Systems: Working Document. Dept. EE, Linköping University, 2000.
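
    The monopolar (non-negativity) constraint discussed in the abstract above can be illustrated with a projected gradient iteration on a least-squares problem. This is a generic sketch of non-negative least squares, not the associative network's learning rule; the problem sizes and names are invented.

    ```python
    import numpy as np

    def projected_landweber_nn(A, b, n_iter=2000):
        """Landweber iteration (gradient descent on ||Ax - b||^2 with step
        1/||A||_2^2), projected onto the non-negative orthant after each
        step. The non-negativity is the monopolar constraint."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - step * (A.T @ (A @ x - b))   # gradient step
            x = np.maximum(x, 0.0)               # projection onto x >= 0
        return x

    rng = np.random.default_rng(4)
    A = rng.normal(size=(40, 20))
    x_true = np.maximum(rng.normal(size=20), 0.0)  # sparse non-negative target
    b = A @ x_true
    x = projected_landweber_nn(A, b)
    ```

    The projection clamps roughly half the coefficients to zero along the way, which is the mechanism behind the sparser, smaller-norm solutions the report observes under the monopolar constraint.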

  • 288.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Representing Multiple Orientations in 2D with Orientation Channel Histograms2002Report (Other academic)
    Abstract [en]

    The channel representation is a simple yet powerful representation of scalars and vectors. It is especially suited for representation of several scalars at the same time without mixing them up.

    This report is partly intended to serve as a simple illustration of the channel representation. The report shows how the channels can be used to represent multiple orientations in two dimensions. The idea is to make a channel representation of the local orientation angle computed from the image gradient. The representation basically becomes an orientation histogram with overlapping bins.

    The channel histogram is compared with the orientation tensor, which is another representation of orientation. The performance is comparable to that of the tensors in the simple signal case, but decreases slightly with an increasing number of channels. The channel histogram outperforms the tensors on non-simple signals.
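
    An orientation histogram with overlapping channel bins of the kind described above can be sketched as follows. The cos^2 kernel, the number of channels, and the gradient-magnitude weighting are choices made for the example rather than taken from the report; the channels are periodic over [0, pi) in the double-angle spirit.

    ```python
    import numpy as np

    def orientation_channel_hist(img, n_ch=8):
        """Histogram of gradient orientations with overlapping cos^2 bins
        (periodic channels over [0, pi)), weighted by gradient magnitude."""
        gy, gx = np.gradient(img.astype(float))
        theta = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
        mag = np.hypot(gx, gy)
        centers = np.arange(n_ch) * np.pi / n_ch
        # Periodic distance to each channel centre, in units of channel spacing.
        d = np.abs(theta.ravel()[:, None] - centers[None, :])
        d = np.minimum(d, np.pi - d) * (n_ch / np.pi)
        w = np.where(d < 1.5, np.cos(np.pi * d / 3) ** 2, 0.0)
        return (w * mag.ravel()[:, None]).sum(axis=0)

    x = np.linspace(-1.0, 1.0, 32)
    ramp = np.tile(x, (32, 1))           # pure horizontal gradient, theta = 0
    h = orientation_channel_hist(ramp)
    ```

    Each orientation contributes to a few neighbouring bins instead of one, so the histogram degrades gracefully near bin boundaries, which is what the overlapping bins buy over a hard-binned histogram.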

  • 289.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Rotational Symmetries, a Quick Tutorial2001Other (Other academic)
  • 290.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Borga, Magnus
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Learning Corner Orientation Using Canonical Correlation2001In: Proceedings of the SSAB Symposium on Image Analysis: Norrköping, 2001, p. 89-92Conference paper (Refereed)
    Abstract [en]

    This paper shows how canonical correlation can be used to learn a detector for corner orientation invariant to corner angle and intensity. Pairs of images with the same corner orientation but different angle and intensity are used as training samples. Three different image representations are examined: intensity values, products between intensity values, and local orientation. The last representation gives a well-behaved result that is easy to decode into the corner orientation. To reduce dimensionality, parameters from a polynomial model fitted to the different representations are also considered. This reduction did not affect the performance of the system.

  • 291.
    Johansson, Björn
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Elfving, Tommy
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Kozlov, Vladimir
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
    Censor, Y.
    Department of Mathematics, University of Haifa, Mt. Carmel, Haifa 31905, Israel.
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Granlund, Gösta
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    The application of an oblique-projected Landweber method to a model of supervised learning2006In: Mathematical and computer modelling, ISSN 0895-7177, E-ISSN 1872-9479, Vol. 43, no 7-8, p. 892-909Article in journal (Refereed)
    Abstract [en]

    This paper brings together a novel information representation model for use in signal processing and computer vision problems, with a particular algorithmic development of the Landweber iterative algorithm. The information representation model allows a representation of multiple values for a variable as well as an expression for confidence. Both properties are important for effective computation using multi-level models, where a choice between models will be implementable as part of the optimization process. It is shown that in this way the algorithm can deal with a class of high-dimensional, sparse, and constrained least-squares problems, which arise in various computer vision learning tasks, such as object recognition and object pose estimation. While the algorithm has been applied to the solution of such problems, it has so far been used heuristically. In this paper we describe the properties and some of the peculiarities of the channel representation and optimization, and put them on firm mathematical ground. We consider the optimization as a convexly constrained weighted least-squares problem and propose for its solution a projected Landweber method which employs oblique projections onto the closed convex constraint set. We formulate the problem, present the algorithm and work out its convergence properties, including a rate-of-convergence result. The results are put in perspective with currently available projected Landweber methods. An application to supervised learning is described, and the method is evaluated in an experiment involving function approximation, as well as application to transient signals. © 2006 Elsevier Ltd. All rights reserved.

  • 292.
    Johansson, Björn
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Farnebäck, Gunnar
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    A Theoretical Comparison of Different Orientation Tensors, 2002. In: Proceedings SSAB02 Symposium on Image Analysis, 2002, p. 69-73. Conference paper (Other academic)
  • 293.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast selective detection of rotational symmetries using normalized inhibition, 2000. In: Proceedings of the 6th European Conference on Computer Vision, Dublin, Ireland, June 26 - July 1, Part I / [ed] David Vernon, London: Springer, 2000, Vol. 1842, p. 871-887. Chapter in book (Refereed)
    Abstract [en]

    Perceptual experiments indicate that corners and curvature are very important features in the process of recognition. This paper presents a new method to efficiently detect rotational symmetries, which describe complex curvature such as corners, circles, star and spiral patterns. The method is designed to give selective and sparse responses. It works in three steps: first, extract local orientation from a gray-scale or color image; second, correlate the orientation image with rotational symmetry filters; and third, let the filter responses inhibit each other in order to get more selective responses. The correlations can be made efficient by separating the 2D filters into a small number of 1D filters. These symmetries can serve as feature points at a high abstraction level for use in hierarchical matching structures for 3D estimation, object recognition, etc.
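The separability trick mentioned in the abstract (splitting each 2D filter into a small number of 1D filters) can be sketched for a rank-1 kernel; the actual rotational symmetry filters are not reproduced here, but the row/column factorization is the same idea.

```python
import numpy as np

def corr2d(img, kernel):
    """Reference 'valid' 2D correlation (direct sliding-window sum)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A rank-1 (separable) kernel: K = outer(u, v).
u = np.array([1.0, 2.0, 1.0])
v = np.array([-1.0, 0.0, 1.0])
K = np.outer(u, v)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

full = corr2d(img, K)                                   # one 2D pass

# Separable version: 1D correlation of rows with v, then of columns with u.
rows = np.stack([np.correlate(r, v, mode="valid") for r in img])
sep = np.stack([np.correlate(c, u, mode="valid") for c in rows.T]).T
# full and sep agree, at a fraction of the multiplications per pixel
```

For an m-by-m kernel this replaces m*m multiplications per output pixel with 2m, which is the efficiency gain the abstract refers to.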

  • 294.
    Johansson, Björn
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Knutsson, Hans
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Granlund, Gösta
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Detecting Rotational Symmetries using Normalized Convolution, 2000. In: Proceedings of the 15th International Conference on Pattern Recognition, IEEE, 2000, p. 496-500, vol. 3. Conference paper (Refereed)
    Abstract [en]

    Perceptual experiments indicate that corners and curvature are very important features in the process of recognition. This paper presents a new method to detect rotational symmetries, which describe complex curvature such as corners, circles, star and spiral patterns. It works in two steps: 1) it extracts local orientation from a gray-scale or color image; and 2) it applies normalized convolution on the orientation image with rotational symmetry filters as basis functions. These symmetries can serve as feature points at a high abstraction level for use in hierarchical matching structures for 3D estimation, object recognition, image database retrieval, etc.
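A minimal sketch of normalized convolution in its simplest form (a constant basis function), showing how certainty-weighted filtering keeps missing samples from biasing the result; the orientation images and symmetry-filter basis functions used in the paper are richer than this.

```python
import numpy as np

def normalized_average(f, certainty, applicability):
    """Normalized convolution with the constant basis function:
    r = (a * (c.f)) / (a * c), where * denotes correlation, a is the
    applicability (local window) and c the per-sample certainty.
    Samples with certainty 0 (missing data) do not bias the result."""
    num = np.correlate(f * certainty, applicability, mode="same")
    den = np.correlate(certainty, applicability, mode="same")
    return num / np.maximum(den, 1e-12)

f = np.array([1.0, 1.0, 0.0, 1.0, 1.0])   # sample 2 is missing (value unknown)
c = np.array([1.0, 1.0, 0.0, 1.0, 1.0])   # certainty 0 marks the missing sample
a = np.array([1.0, 2.0, 1.0])             # applicability window
r = normalized_average(f, c, a)
# r[2] is reconstructed from its certain neighbours: every entry of r is 1.0
```

An ordinary convolution of f with a would smear the hole at sample 2 into its neighbours; dividing by the convolved certainty removes that bias.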

  • 295.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Moe, Anders
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Object Recognition in 3D Laser Radar Data using Plane triplets, 2005. Report (Other academic)
    Abstract [en]

    This report describes a method to detect and recognize objects from 3D laser radar data. The method is based on local descriptors computed from triplets of planes that are estimated from the data set. Each descriptor computed on query data is compared with descriptors computed on object model data to get a hypothesis of object class and pose. A hypothesis is then either verified or rejected using a similarity measure between the model data set and the query data set.
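The report's exact plane-triplet descriptor is not given in the abstract; as one hedged illustration, the sketch below builds a pose-invariant descriptor from the three pairwise angles between the plane normals (a hypothetical choice) and checks that it is unchanged by a rigid rotation of the triplet.

```python
import numpy as np

def triplet_descriptor(n1, n2, n3):
    """A rotation-invariant descriptor for a triplet of plane normals:
    the three pairwise angles, sorted.  (Hypothetical choice; the
    report's descriptor may encode more of the triplet geometry.)"""
    ns = [np.asarray(n, float) / np.linalg.norm(n) for n in (n1, n2, n3)]
    angles = [np.arccos(np.clip(np.dot(ns[i], ns[j]), -1.0, 1.0))
              for i, j in [(0, 1), (0, 2), (1, 2)]]
    return np.sort(angles)

normals = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([1.0, 1.0, 1.0])]
d0 = triplet_descriptor(*normals)

# Rigidly rotating the whole triplet leaves the descriptor unchanged,
# so query and model descriptors can be compared regardless of pose.
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])
d1 = triplet_descriptor(*(Rz @ n for n in normals))
```

Matching such descriptors between query and model data yields the class/pose hypotheses the abstract describes; verification against the full point sets then filters out false matches.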

  • 296.
    Johansson, Björn
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Patch-Duplets for Object Recognition and Pose Estimation, 2004. In: Proceedings SSBA04 Symposium on Image Analysis, 2004, p. 78-81. Conference paper (Other academic)
  • 297.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Moe, Anders
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Patch-Duplets for Object Recognition and Pose Estimation, 2003. Report (Other academic)
    Abstract [en]

    This report describes a view-based method for object recognition and estimation of object pose in still images. The method is based on feature vector matching and clustering. A set of interest points, in this case star-patterns, is detected and combined into pairs. A pair of patches, centered around each point in the pair, is extracted from a local orientation image. The patch orientation and size depend on the relative positions of the points, which makes the features invariant to translation, rotation, and scale. Each pair of patches constitutes a feature vector. The method is demonstrated on a number of real images.
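The invariance argument in the abstract (patch orientation and size taken from the relative positions of the point pair) can be sketched as a canonical-frame construction; the function names and the toy transform below are illustrative, not taken from the report.

```python
import numpy as np

def duplet_frame(p1, p2):
    """Canonical frame defined by a point pair: origin, orientation and
    scale all come from the pair itself, so coordinates measured in this
    frame are invariant to translation, rotation and scaling of the
    image -- the invariance claimed for the patch-duplets."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    return p1, np.arctan2(d[1], d[0]), np.linalg.norm(d)

def to_canonical(q, origin, angle, scale):
    """Map an image point into the duplet's canonical coordinates."""
    c, s = np.cos(-angle), np.sin(-angle)
    return np.array([[c, -s], [s, c]]) @ (np.asarray(q, float) - origin) / scale

p1, p2, q = [0.0, 0.0], [2.0, 0.0], [1.0, 1.0]
ref = to_canonical(q, *duplet_frame(p1, p2))        # -> [0.5, 0.5]

# Apply one similarity transform (rotate 0.9 rad, scale by 3, translate)
# to all three points; the canonical coordinates are unchanged.
t = 0.9
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
T = lambda x: 3.0 * R @ np.asarray(x, float) + np.array([5.0, -2.0])
mapped = to_canonical(T(q), *duplet_frame(T(p1), T(p2)))
```

Sampling each patch in this canonical frame is what makes the resulting feature vectors comparable across translated, rotated, and rescaled views.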

  • 298.
    Johansson, Björn
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Patch-Duplets for Object Recognition and Pose Estimation, 2005. In: 2nd Canadian Conference on Computer and Robot Vision, 2005. Conference paper (Refereed)
  • 299.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Söderberg, Robert
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Repeatability Test for Two Orientation Based Interest Point Detectors, 2004. Report (Other academic)
    Abstract [en]

    This report evaluates the stability of two image interest point detectors, star-pattern points and points based on the fourth order tensor. The Harris operator is also included for comparison. Different image transformations are applied and the repeatability of points between a reference image and each of the transformed images is computed. The transforms are plane rotation, change in scale, change in view, and change in lighting conditions. We conclude that the result largely depends on the image content. The star-pattern points and the fourth order tensor model the image as locally straight lines, while the Harris operator is based on simple/non-simple signals. The two methods evaluated here perform equally well or better than the Harris operator if the model is valid, and perform worse otherwise.
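A common way to compute the repeatability score described above is sketched below; the pixel tolerance and the min() normalisation are the usual convention, assumed here rather than taken from the report.

```python
import numpy as np

def repeatability(ref_pts, det_pts, H, tol=1.5):
    """Fraction of reference detections that reappear within `tol`
    pixels after mapping through the homography H into the transformed
    image.  (Tolerance and min() normalisation are the common
    convention, assumed here rather than taken from the report.)"""
    ref_h = np.hstack([ref_pts, np.ones((len(ref_pts), 1))]) @ H.T
    proj = ref_h[:, :2] / ref_h[:, 2:]                  # projected points
    dists = np.linalg.norm(proj[:, None] - det_pts[None], axis=2)
    matched = (dists.min(axis=1) <= tol).sum()
    return matched / min(len(ref_pts), len(det_pts))

ref = np.array([[0.0, 0.0], [10.0, 10.0]])
det = np.array([[0.5, 0.0], [30.0, 30.0]])             # only one point survived
score = repeatability(ref, det, np.eye(3))             # -> 0.5
```

Running this for each transform in the test battery (rotation, scale, view, lighting) gives one repeatability curve per detector, which is what the report compares.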

  • 300.
    Johansson, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Combining shadow detection and simulation for estimation of vehicle size and position, 2009. In: Pattern Recognition Letters, ISSN 0167-8655, Vol. 30, no. 8, p. 751-759. Article in journal (Refereed)
    Abstract [en]

    This paper presents a method that combines shadow detection and a 3D box model, including shadow simulation, for estimation of the size and position of vehicles. We define a similarity measure between a simulated image of a 3D box, including the box shadow, and a captured image that is classified into background/foreground/shadow. The similarity measure is used in an optimization procedure to find the optimal box state. It is shown in a number of experiments and examples how the combination of shadow detection and simulation improves the estimation compared to just using detection or simulation, especially when the shadow detection or the simulation is inaccurate. We also describe a tracking system that utilizes the estimated 3D boxes, including highlight detection, a spatial window instead of a time-based window for predicting heading, and refined box size estimates obtained by weighting accumulated estimates depending on view. Finally, we show example results.
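The idea of scoring a simulated label image against the classified background/foreground/shadow image can be sketched in miniature; the box renderer, the pixel-agreement measure, and the grid search below are deliberate simplifications of the paper's 3D box model and optimization procedure.

```python
import numpy as np

BG, FG, SHADOW = 0, 1, 2   # labels from the bg/fg/shadow classifier

def simulate(x, w, shape=(20, 40), shadow_len=5):
    """Render the label image of a box at column x with width w, with a
    shadow cast `shadow_len` columns to its right (a flat stand-in for
    the paper's 3D box plus simulated shadow)."""
    sim = np.full(shape, BG)
    sim[:, x:x + w] = FG
    sim[:, x + w:x + w + shadow_len] = SHADOW
    return sim

def similarity(sim, observed):
    """Fraction of pixels whose simulated label agrees with the
    classified image (the paper's measure is more elaborate)."""
    return np.mean(sim == observed)

observed = simulate(x=12, w=8)     # pretend this came from the classifier
best_x = max(range(30), key=lambda x: similarity(simulate(x, 8), observed))
# the grid search over the box state recovers x = 12
```

Because the simulated shadow is scored too, a box placed so that its shadow overlaps foreground pixels is penalised, which is why combining detection and simulation is more robust than either alone.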
