liu.se - Search publications in DiVA
151 - 195 of 195
  • 151. Krüger, Norbert
    et al.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Wörgötter, Florentin
    Processing Multi-modal Primitives from Image Sequences, 2004. In: EIS2004, 2004. Conference paper (Refereed)
  • 152. Källhammer, Jan-Erik
    et al.
    Eriksson, Dick
    Granlund, Gösta
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Moe, Anders
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Johansson, Björn
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Wiklund, Johan
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Forssén, Per-Erik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Near Zone Pedestrian Detection using a Low-Resolution FIR Sensor, 2007. In: Intelligent Vehicles Symposium, 2007 IEEE, Istanbul, Turkey: IEEE, 2007, pp. 339-345. Conference paper (Refereed)
    Abstract [en]

    This paper explores the possibility to use a single low-resolution FIR camera for detection of pedestrians in the near zone in front of a vehicle. A low resolution sensor reduces the cost of the system, as well as the amount of data that needs to be processed in each frame.

    We present a system that makes use of hot-spots and image positions of a near constant bearing to detect potential pedestrians. These detections provide seeds for an energy minimization algorithm that fits a pedestrian model to the detection. Since false alarms are hard to tolerate, the pedestrian model is then tracked, and the distance-to-collision (DTC) is measured by integrating size change measurements at sub-pixel accuracy, and the car velocity. The system should only engage braking for detections on a collision course, with a reliably measured DTC.

    Preliminary experiments on a number of recorded near collision sequences indicate that our method may be useful for ranges up to about 10m using an 80x60 sensor, and somewhat more using a 160x120 sensor. We also analyze the robustness of the evaluated algorithm with respect to dead pixels, a potential problem for low-resolution sensors.
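A minimal illustration of the distance-to-collision idea mentioned in the abstract above: for an approaching object of fixed physical size, the relative growth rate of its image size gives the time-to-collision, and multiplying by the own-vehicle speed gives a distance estimate. The sketch below is a generic reconstruction under a constant-velocity assumption, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def distance_to_collision(sizes, times, ego_speed):
    """Estimate time- and distance-to-collision from image-size measurements.

    sizes     : image size (e.g. width in pixels) of the tracked object over time
    times     : corresponding timestamps in seconds
    ego_speed : own-vehicle speed in m/s (assumed constant here)

    For a constant closing speed, image size s(t) ~ 1 / range(t), so the
    relative growth rate (ds/dt) / s equals 1 / TTC.
    """
    sizes = np.asarray(sizes, dtype=float)
    times = np.asarray(times, dtype=float)
    # least-squares slope of the size measurements (sub-pixel sizes average out noise)
    ds_dt = np.polyfit(times, sizes, 1)[0]
    if ds_dt <= 0:
        return np.inf, np.inf          # object not growing -> no collision course
    ttc = sizes[-1] / ds_dt            # time-to-collision in seconds
    dtc = ego_speed * ttc              # distance-to-collision in metres
    return ttc, dtc

# toy example: an object growing from 20 to 26 pixels over one second at 10 m/s
print(distance_to_collision([20, 22, 24, 26], [0.0, 0.33, 0.66, 1.0], 10.0))
```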

  • 153. Köthe, Ullrich
    et al.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Riesz-transforms versus derivatives: On the relationship between the boundary tensor and the energy tensor, 2005. In: Scale Space and PDE Methods in Computer Vision, 2005, Vol. 3459, pp. 179-191. Conference paper (Refereed)
    Abstract [en]

    Traditionally, quadrature filters and derivatives have been considered as alternative approaches to low-level image analysis. In this paper we show that there actually exist close connections: We define the quadrature-based boundary tensor and the derivative-based gradient energy tensor which exhibit very similar behavior. We analyse the reason for this and determine how to minimize the difference. These insights lead to a simple and very efficient integrated feature detection algorithm.

  • 154.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Using Fourier Descriptors and Spatial Models for Traffic Sign Recognition, 2011. In: Image Analysis: 17th Scandinavian Conference, SCIA 2011, Ystad, Sweden, May 2011. Proceedings / [ed] Anders Heyden, Fredrik Kahl, Springer Berlin/Heidelberg, 2011, pp. 238-249. Conference paper (Refereed)
    Abstract [en]

    Traffic sign recognition is important for the development of driver assistance systems and fully autonomous vehicles. Even though GPS navigator systems work well most of the time, there will always be situations when they fail. In these cases, robust vision based systems are required. Traffic signs are designed to have distinct colored fields separated by sharp boundaries. We propose to use locally segmented contours combined with an implicit star-shaped object model as prototypes for the different sign classes. The contours are described by Fourier descriptors. Matching of a query image to the sign prototype database is done by exhaustive search. This is done efficiently by using the correlation based matching scheme for Fourier descriptors and a fast cascaded matching scheme for enforcing the spatial requirements. We demonstrate state-of-the-art performance on a publicly available database.

  • 155.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Forssén, Per-Erik
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Correlating Fourier descriptors of local patches for road sign recognition, 2011. In: IET Computer Vision, ISSN 1751-9632, E-ISSN 1751-9640, Vol. 5, no. 4, pp. 244-254. Journal article (Refereed)
    Abstract [en]

    Fourier descriptors (FDs) are a classical but still popular method for contour matching. The key idea is to apply the Fourier transform to a periodic representation of the contour, which results in a shape descriptor in the frequency domain. FDs are most commonly used to compare object silhouettes and object contours; the authors instead use this well-established machinery to describe local regions to be used in an object-recognition framework. Many approaches to matching FDs are based on the magnitude of each FD component, thus ignoring the information contained in the phase. Keeping the phase information requires us to take into account the global rotation of the contour and shifting of the contour samples. The authors show that the sum-of-squared differences of FDs can be computed without explicitly de-rotating the contours. The authors compare correlation-based matching against affine-invariant Fourier descriptors (AFDs) and WARP-matched FDs and demonstrate that the correlation-based approach outperforms AFDs and WARP on real data. As a practical application the authors demonstrate the proposed correlation-based matching on a road sign recognition task.

  • 156.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Forssén, Per-Erik
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Patch Contour Matching by Correlating Fourier Descriptors, 2009. In: Digital Image Computing: Techniques and Applications (DICTA), IEEE Computer Society, 2009, pp. 40-46. Conference paper (Refereed)
    Abstract [en]

    Fourier descriptors (FDs) are a classical but still popular method for contour matching. The key idea is to apply the Fourier transform to a periodic representation of the contour, which results in a shape descriptor in the frequency domain. Fourier descriptors have mostly been used to compare object silhouettes and object contours; we instead use this well-established machinery to describe local regions to be used in an object recognition framework. We extract local regions using the Maximally Stable Extremal Regions (MSER) detector and represent the external contour by FDs. Many approaches to matching FDs are based on the magnitude of each FD component, thus ignoring the information contained in the phase. Keeping the phase information requires us to take into account the global rotation of the contour and shifting of the contour samples. We show that the sum-of-squared differences of FDs can be computed without explicitly de-rotating the contours. We compare our correlation based matching against affine-invariant Fourier descriptors (AFDs) and demonstrate that our correlation based approach outperforms AFDs on real world data.
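The rotation-invariant comparison described in the abstract above can be made concrete in a few lines of numpy. The sketch below is an illustration based on the abstract, not the authors' code: Fourier descriptors are taken as the FFT of the complex boundary coordinates, and since a global rotation of the contour multiplies every descriptor by the same unit complex factor, the SSD minimised over rotation reduces to a single complex inner product, with no explicit de-rotation. Start-point shifts, handled in the papers by a correlation scheme, are omitted here.

```python
import numpy as np

def fourier_descriptors(contour, n_coeff=16):
    """FFT of the contour points taken as complex numbers x + iy.

    The DC coefficient (translation) is dropped and the remaining
    coefficients are normalised by their energy to factor out scale.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    fd = np.fft.fft(z)[1:n_coeff + 1]
    return fd / np.linalg.norm(fd)

def ssd_over_rotation(fd_a, fd_b):
    """min_phi sum_k |a_k - exp(i*phi) * b_k|^2 without explicit de-rotation.

    Expanding the square gives ||a||^2 + ||b||^2 - 2*Re(exp(i*phi) <a, b>),
    which is minimised by taking the magnitude of the inner product.
    """
    return (np.sum(np.abs(fd_a) ** 2) + np.sum(np.abs(fd_b) ** 2)
            - 2.0 * np.abs(np.vdot(fd_a, fd_b)))

# toy example: a contour compared with a rotated copy of itself
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
shape = np.stack([np.cos(t) + 0.2 * np.cos(3 * t),
                  np.sin(t) - 0.2 * np.sin(3 * t)], axis=1)
R = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
rotated = shape @ R.T
print(ssd_over_rotation(fourier_descriptors(shape),
                        fourier_descriptors(rotated)))   # close to zero
```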

  • 157.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Forssén, Per-Erik
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Using Fourier descriptors for local region matching, 2009. In: SSBA, 2009. Conference paper (Other academic)
  • 158.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Jonsson, Erik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Learning Floppy Robot Control, 2008. In: SSBA, 2008, pp. 39-42. Conference paper (Other academic)
  • 159.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Jonsson, Erik
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Simultaneously learning to recognize and control a low-cost robotic arm, 2009. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 27, no. 11, pp. 1729-1739. Journal article (Refereed)
    Abstract [en]

    In this paper, we present a visual servoing method based on a learned mapping between feature space and control space. Using a suitable recognition algorithm, we present and evaluate a complete method that simultaneously learns the appearance and control of a low-cost robotic arm. The recognition part is trained using an action precedes perception approach. The novelty of this paper, apart from the visual servoing method per se, is the combination of visual servoing with gripper recognition. We show that we can achieve high precision positioning without knowing in advance what the robotic arm looks like or how it is controlled.

  • 160.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Jonsson, Erik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Visual Servoing Based on Learned Inverse Kinematics, 2007. In: SSBA, 2007, pp. 21-24. Conference paper (Other academic)
  • 161.
    Larsson, Fredrik
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Jonsson, Erik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Visual Servoing for Floppy Robots using LWPR, 2007. In: RoboMat, 2007. Conference paper (Refereed)
  • 162.
    Meneghetti, Giulia
    et al.
    Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Danelljan, Martin
    Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Felsberg, Michael
    Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Nordberg, Klas
    Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Image alignment for panorama stitching in sparsely structured environments, 2015. In: Image Analysis. SCIA 2015. / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer, 2015, pp. 428-439. Conference paper (Refereed)
    Abstract [en]

    Panorama stitching of sparsely structured scenes is an open research problem. In this setting, feature-based image alignment methods often fail due to shortage of distinct image features. Instead, direct image alignment methods, such as those based on phase correlation, can be applied. In this paper we investigate correlation-based image alignment techniques for panorama stitching of sparsely structured scenes. We propose a novel image alignment approach based on discriminative correlation filters (DCF), which has recently been successfully applied to visual tracking. Two versions of the proposed DCF-based approach are evaluated on two real and one synthetic panorama dataset of sparsely structured indoor environments. All three datasets consist of images taken on a tripod rotating 360 degrees around the vertical axis through the optical center. We show that the proposed DCF-based methods outperform phase correlation-based approaches on these datasets.
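Phase correlation, the baseline the abstract above compares against, estimates a pure translation between two images directly from their spectra. The following is a minimal generic sketch of that baseline (not the paper's DCF-based method); names and the toy data are illustrative.

```python
import numpy as np

def phase_correlation(img_a, img_b, eps=1e-9):
    """Estimate the integer translation that maps img_b onto img_a.

    The normalised cross-power spectrum of two shifted images is a pure
    phase ramp; its inverse FFT is (ideally) a delta at the shift.
    """
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + eps
    response = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

# toy example: recover a known circular shift
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(-5, 3), axis=(0, 1))
print(phase_correlation(a, b))   # -> (5, -3)
```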

  • 163.
    Mester, Rudolf
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. University of Frankfurt, Germany.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Pattern Recognition: 33rd DAGM Symposium, Frankfurt/Main, Germany, August 31 - September 2, 2011, Proceedings, 2011. Proceedings (editorship) (Other academic)
    Abstract [en]

    This book constitutes the refereed proceedings of the 33rd Symposium of the German Association for Pattern Recognition, DAGM 2011, held in Frankfurt/Main, Germany, in August/September 2011. The 20 revised full papers and 22 revised poster papers were carefully reviewed and selected from 98 submissions. The papers are organized in topical sections on object recognition, adverse vision conditions challenge, shape and matching, segmentation and early vision, robot vision, machine learning, and motion. The volume also includes the young researcher's forum, a section where a carefully jury-selected ensemble of young researchers present their Master thesis work.

  • 164.
    Nawaz, Tahir
    et al.
    Computational Vision Group, Department of Computer Science, University of Reading.
    Berg, Amanda
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Ferryman, James
    Computational Vision Group, Department of Computer Science, University of Reading.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Effective evaluation of privacy protection techniques in visible and thermal imagery, 2017. In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 26, no. 5, article id 051408. Journal article (Refereed)
    Abstract [en]

    Privacy protection may be defined as replacing the original content in an image region with a new (less intrusive) content having modified target appearance information to make it less recognizable by applying a privacy protection technique. Indeed, the development of privacy protection techniques also needs to be complemented with an established objective evaluation method to facilitate their assessment and comparison. Generally, existing evaluation methods rely on the use of subjective judgements or assume a specific target type in image data and use target detection and recognition accuracies to assess privacy protection. This work proposes a new annotation-free evaluation method that is neither subjective nor assumes a specific target type. It assesses two key aspects of privacy protection: protection and utility. Protection is quantified as an appearance similarity and utility is measured as a structural similarity between original and privacy-protected image regions. We performed extensive experimentation using six challenging datasets (having 12 video sequences) including a new dataset (having six sequences) that contains visible and thermal imagery. The new dataset, called TST-Priv, is made available online for the community. We demonstrate the effectiveness of the proposed method by evaluating six image-based privacy protection techniques, and also show comparisons of the proposed method over existing methods.

  • 165.
    Pagani, Alain
    et al.
    DFKI, Kaiserslautern University.
    Stricker, Didier
    DFKI, Kaiserslautern University.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Integral P-channels for fast and robust region matching, 2009. In: Image Processing (ICIP), 2009 16th IEEE International Conference, 2009, pp. 213-216. Conference paper (Refereed)
    Abstract [en]

    We present a new method for matching a region between an input and a query image, based on the P-channel representation of pixel-based image features such as grayscale and color information, local gradient orientation and local spatial coordinates. We introduce the concept of integral P-channels, which conciliates the concepts of P-channel and integral images. Using integral images, the P-channel representation of a given region is extracted with a few arithmetic operations. This enables a fast nearest-neighbor search in all possible target regions. We present extensive experimental results and show that our approach compares favorably to existing methods for region matching such as histograms or region covariance.
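The integral-image trick underlying the integral P-channels described above can be illustrated in a few lines: after one cumulative-sum pass, the sum of any feature map over any axis-aligned rectangle costs four lookups, which is what makes exhaustive region search cheap. The sketch below is generic numpy applied per feature channel; it is not the authors' implementation.

```python
import numpy as np

def integral_image(channel_maps):
    """Cumulative sums with a zero border, one per feature channel.

    channel_maps : array of shape (H, W, C), e.g. per-pixel channel weights.
    """
    ii = np.cumsum(np.cumsum(channel_maps, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0), (0, 0)))

def region_sum(ii, top, left, bottom, right):
    """Sum of every channel over rows [top, bottom) and cols [left, right); O(1) per channel."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])

# toy example: channel sums over a window anywhere in the image
rng = np.random.default_rng(1)
feats = rng.random((240, 320, 8))          # 8 channels of per-pixel evidence
ii = integral_image(feats)
print(np.allclose(region_sum(ii, 100, 50, 140, 80),
                  feats[100:140, 50:80].sum(axis=(0, 1))))   # True
```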

  • 166.
    Persson, Mikael
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Piccini, Tommaso
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Mester, Rudolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Frankfurt University, Germany.
    Robust Stereo Visual Odometry from Monocular Techniques, 2015. In: 2015 IEEE Intelligent Vehicles Symposium (IV), Institute of Electrical and Electronics Engineers (IEEE), 2015, pp. 686-691. Conference paper (Refereed)
    Abstract [en]

    Visual odometry is one of the most active topics in computer vision. The automotive industry is particularly interested in this field due to the appeal of achieving a high degree of accuracy with inexpensive sensors such as cameras. The best results on this task are currently achieved by systems based on a calibrated stereo camera rig, whereas monocular systems are generally lagging behind in terms of performance. We hypothesise that this is due to stereo visual odometry being an inherently easier problem, rather than due to higher quality of the state-of-the-art stereo based algorithms. Under this hypothesis, techniques developed for monocular visual odometry systems would be, in general, more refined and robust since they have to deal with an intrinsically more difficult problem. In this work we present a novel stereo visual odometry system for automotive applications based on advanced monocular techniques. We show that the generalization of these techniques to the stereo case results in a significant improvement of the robustness and accuracy of stereo based visual odometry. We support our claims with the system's results on the well-known KITTI benchmark, achieving the top rank for visual-only systems.

  • 167.
    Piccini, Tommaso
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Persson, Mikael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Nordberg, Klas
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Mester, Rudolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. VSI, Frankfurt University.
    Good Edgels to Track: Beating the Aperture Problem with Epipolar Geometry, 2015. In: COMPUTER VISION - ECCV 2014 WORKSHOPS, PT II / [ed] Agapito, Lourdes and Bronstein, Michael M. and Rother, Carsten, Elsevier, 2015, pp. 652-664. Conference paper (Refereed)
    Abstract [en]

    An open issue in multiple view geometry and structure from motion, applied to real life scenarios, is the sparsity of the matched key-points and of the reconstructed point cloud. We present an approach that can significantly improve the density of measured displacement vectors in a sparse matching or tracking setting, exploiting the partial information of the motion field provided by linear oriented image patches (edgels). Our approach assumes that the epipolar geometry of an image pair already has been computed, either in an earlier feature-based matching step, or by a robustified differential tracker. We exploit key-points of a lower order, edgels, which cannot provide a unique 2D matching, but can be employed if a constraint on the motion is already given. We present a method to extract edgels, which can be effectively tracked given a known camera motion scenario, and show how a constrained version of the Lucas-Kanade tracking procedure can efficiently exploit epipolar geometry to reduce the classical KLT optimization to a 1D search problem. The potential of the proposed methods is shown by experiments performed on real driving sequences.

  • 168.
    Robinson, Andreas
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Persson, Mikael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Robust Accurate Extrinsic Calibration of Static Non-overlapping Cameras, 2017. In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10425, pp. 342-353. Conference paper (Refereed)
    Abstract [en]

    An increasing number of robots and autonomous vehicles are equipped with multiple cameras to achieve surround-view sensing. The estimation of their relative poses, also known as extrinsic parameter calibration, is a challenging problem, particularly in the non-overlapping case. We present a simple and novel extrinsic calibration method based on standard components that performs favorably to existing approaches. We further propose a framework for predicting the performance of different calibration configurations and intuitive error metrics. This makes selecting a good camera configuration straightforward. We evaluate on rendered synthetic images and show good results as measured by angular and absolute pose differences, as well as the reprojection error distributions.

  • 169.
    Scharr, Hanno
    et al.
    Forschungszentrum Juelich.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Forssén, Per-Erik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Noise Adaptive Channel Smoothing of Low-Dose Images, 2003. In: Computer Vision for the Nano-Scale Workshop accompanying CVPR 2003, Madison: IEEE Computer Society, 2003. Conference paper (Refereed)
  • 170.
    Skoglund, Johan
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Covariance estimation for SAD block matching, 2007. In: Image Analysis: 15th Scandinavian Conference, SCIA 2007, Aalborg, Denmark, June 10-14, 2007 / [ed] Bjarne Kjær Ersbøll and Kim Steenstrup Pedersen, Springer Berlin/Heidelberg, 2007, pp. 374-382. Conference paper (Refereed)
    Abstract [en]

    The estimation of a patch position in an image is a long established but still relevant topic with many applications, e.g. pose estimation and tracking in image sequences. In most systems the position estimate needs to be fused with other estimates, and hence, covariance information is required to weight the different estimates in the right way. In this paper we address the issue with covariance estimation in the case of sum of absolute difference (SAD) block matching. First, we derive the theory for covariance estimation in the case of SAD matching. Second, we evaluate the suggested method in a virtual 3D patch tracking scenario in order to verify the performance in real-world scenarios.
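The abstract above does not spell out the estimator, but a common way to attach a covariance to a block-matching result, shown below purely as a generic illustration and not as the estimator derived in the paper, is to fit a quadratic to the matching-cost surface around its minimum and use a scaled inverse of its Hessian: a sharp, well-localised minimum then yields a small covariance and a flat valley a large one.

```python
import numpy as np

def cost_surface_covariance(cost, scale=1.0):
    """Covariance proxy from a (2k+1)x(2k+1) matching-cost patch around the best match.

    A quadratic c(d) ~ c0 + b'd + 0.5 * d' H d is fitted by least squares to
    the costs; the position covariance is taken proportional to inv(H).
    """
    k = cost.shape[0] // 2
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    # design matrix for c = a + bx*x + by*y + 0.5*hxx*x^2 + hxy*x*y + 0.5*hyy*y^2
    A = np.stack([np.ones_like(xs), xs, ys,
                  0.5 * xs**2, xs * ys, 0.5 * ys**2], axis=-1).reshape(-1, 6)
    coeff, *_ = np.linalg.lstsq(A, cost.reshape(-1), rcond=None)
    H = np.array([[coeff[3], coeff[4]],
                  [coeff[4], coeff[5]]])
    return scale * np.linalg.inv(H)

# toy example: an anisotropic cost valley gives an anisotropic covariance
ys, xs = np.mgrid[-2:3, -2:3]
cost = 4.0 * xs**2 + 0.5 * ys**2          # sharp along x, flat along y
print(cost_surface_covariance(cost))       # larger variance along y than along x
```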

  • 171.
    Skoglund, Johan
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Evaluation of Subpixel Tracking Algorithms, 2006. In: International Symposium on Visual Computing, 2006, pp. 375-. Conference paper (Refereed)
  • 172.
    Skoglund, Johan
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Bildbehandling.
    Fast Image Processing Using SSE2, 2005. In: Fast Image Processing Using SSE2, 2005. Conference paper (Refereed)
    Abstract [en]

    In this paper we discuss the benefits of writing code for a specific processor and exploiting all its capabilities. We show that in some situations it is possible to significantly reduce the time consumption by using SSE2, a Single Instruction Multiple Data (SIMD) extension available in new Pentium processors. The speed of the Harris operator is used for evaluation. All experiments are run on a Pentium 4 and the results are compared between ordinary C-code and code using SSE2. The purpose is not only to achieve a significant speed-up of the code, but also to benefit from SSE2 code with the least possible programming effort.

  • 173.
    Wallenberg, Marcus
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Forssén, Per-Erik
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Dellen, Babette
    Institut de Robotica i Informatica Industrial (CSIC-UPC) Llorens i Artigas 4-6, 08028 Barcelona, Spain.
    Channel Coding for Joint Colour and Depth Segmentation, 2011. In: Proceedings of Pattern Recognition 33rd DAGM Symposium, Frankfurt/Main, Germany, August 31 - September 2 / [ed] Rudolf Mester and Michael Felsberg, Springer, 2011, pp. 306-315. Conference paper (Refereed)
    Abstract [en]

    Segmentation is an important preprocessing step in many applications. Compared to colour segmentation, fusion of colour and depth greatly improves the segmentation result. Such a fusion is easy to do by stacking measurements in different value dimensions, but there are better ways. In this paper we perform fusion using the channel representation, and demonstrate how a state-of-the-art segmentation algorithm can be modified to use channel values as inputs. We evaluate segmentation results on data collected using the Microsoft Kinect peripheral for Xbox 360, using the superparamagnetic clustering algorithm. Our experiments show that depth gradients are more useful than depth values for segmentation, and that channel coding both colour and depth gradients makes tuned parameter settings generalise better to novel images.

  • 174.
    Wallenberg, Marcus
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Forssén, Per-Erik
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Dellen, Babette
    Institut de Robotica i Informatica Industrial, Barcelona, Spain.
    Leaf Segmentation using the Kinect, 2011. In: Proceedings of SSBA 2011 Symposium on Image Analysis, 2011. Conference paper (Other (popular science, debate, etc.))
    Abstract [en]

    Segmentation is an important preprocessing step in many applications. Purely colour-based segmentation is often problematic. For this reason, we here investigate fusion of depth and colour information, to facilitate robust segmentation of single images. We evaluate segmentation results on data collected using the Microsoft Kinect peripheral for Xbox 360, using superparamagnetic clustering. We also propose a method for aligning and encoding colour and depth information from the Kinect device. As we show in the paper, the fusion of depth and colour information produces more semantically relevant segments for scene analysis than either depth or colour separately.

  • 175.
    Wiklund, Johan
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Nordberg, Klas
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Software architecture and middleware for artificial cognitive systems, 2010. In: International Conference on Cognitive Systems, 2010. Conference paper (Other academic)
  • 176.
    Windridge, David
    et al.
    University of Surrey, Guildford, U.K..
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Shaukat, Affan
    University of Surrey, Guildford, U.K..
    A Framework for Hierarchical Perception–Action Learning Utilizing Fuzzy Reasoning, 2013. In: IEEE transactions on systems, man and cybernetics. Part B. Cybernetics, ISSN 1083-4419, E-ISSN 1941-0492, Vol. 43, no. 1, pp. 155-169. Journal article (Refereed)
    Abstract [en]

    Perception-action (P-A) learning is an approach to cognitive system building that seeks to reduce the complexity associated with conventional environment-representation/action-planning approaches. Instead, actions are directly mapped onto the perceptual transitions that they bring about, eliminating the need for intermediate representation and significantly reducing training requirements. We here set out a very general learning framework for cognitive systems in which online learning of the P-A mapping may be conducted within a symbolic processing context, so that complex contextual reasoning can influence the P-A mapping. In utilizing a variational calculus approach to define a suitable objective function, the P-A mapping can be treated as an online learning problem via gradient descent using partial derivatives. Our central theoretical result is to demonstrate top-down modulation of low-level perceptual confidences via the Jacobian of the higher levels of a subsumptive P-A hierarchy. Thus, the separation of the Jacobian as a multiplying factor between levels within the objective function naturally enables the integration of abstract symbolic manipulation in the form of fuzzy deductive logic into the P-A mapping learning. We experimentally demonstrate that the resulting framework achieves significantly better accuracy than using P-A learning without top-down modulation. We also demonstrate that it permits novel forms of context-dependent multilevel P-A mapping, applying the mechanism in the context of an intelligent driver assistance system.

  • 177.
    Zografos, Vasileios
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Lenz, Reiner
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    The Weibull manifold in low-level image processing: an application to automatic image focusing, 2013. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no. 5, pp. 401-417. Journal article (Refereed)
    Abstract [en]

    In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. For a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.

  • 178.
    Zografos, Vasileios
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Lenz, Reiner
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Ringaby, Erik
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Nordberg, Klas
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Fast segmentation of sparse 3D point trajectories using group theoretical invariants, 2015. In: COMPUTER VISION - ACCV 2014, PT IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, pp. 675-691. Conference paper (Refereed)
    Abstract [en]

    We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and with the use of a local sampling scheme and spectral clustering, they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach, and compared against state-of-the-art competing methods from literature. Our results show that our approach outperforms all methods while being robust to perspective distortions and degenerate configurations.

  • 179.
    Åström, Freddie
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Baravdish, George
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Kommunikations- och transportsystem. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    A Tensor Variational Formulation of Gradient Energy Total Variation, 2015. In: ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, EMMCVPR 2015, Springer Berlin/Heidelberg, 2015, Vol. 8932, pp. 307-320. Conference paper (Refereed)
    Abstract [en]

    We present a novel variational approach to a tensor-based total variation formulation which is called gradient energy total variation, GETV. We introduce the gradient energy tensor into the GETV and show that the corresponding Euler-Lagrange (E-L) equation is a tensor-based partial differential equation of total variation type. Furthermore, we give a proof which shows that GETV is a convex functional. This approach, in contrast to the commonly used structure tensor, enables a formal derivation of the corresponding E-L equation. Experimental results suggest that GETV compares favourably to other state of the art variational denoising methods such as extended anisotropic diffusion (EAD) and total variation (TV) for gray-scale and colour images.

  • 180.
    Åström, Freddie
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Baravdish, George
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Kommunikations- och transportsystem. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    On Tensor-Based PDEs and their Corresponding Variational Formulations with Application to Color Image Denoising, 2012. Conference paper (Refereed)
    Abstract [en]

    The case when a partial differential equation (PDE) can be considered as an Euler-Lagrange (E-L) equation of an energy functional, consisting of a data term and a smoothness term is investigated. We show the necessary conditions for a PDE to be the E-L equation for a corresponding functional. This energy functional is applied to a color image denoising problem and it is shown that the method compares favorably to current state-of-the-art color image denoising techniques.

  • 181.
    Åström, Freddie
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Tekniska fakulteten.
    On the Choice of Tensor Estimation for Corner Detection, Optical Flow and Denoising, 2015. In: COMPUTER VISION - ACCV 2014 WORKSHOPS, PT II / [ed] C.V. Jawahar and Shiguang Shan, Springer, 2015, Vol. 9009, pp. 16-30. Conference paper (Refereed)
    Abstract [en]

    Many image processing methods such as corner detection, optical flow and iterative enhancement make use of image tensors. Generally, these tensors are estimated using the structure tensor. In this work we show that the gradient energy tensor can be used as an alternative to the structure tensor in several cases. We apply the gradient energy tensor to common image problem applications such as corner detection, optical flow and image enhancement. Our experimental results suggest that the gradient energy tensor enables real-time tensor-based image enhancement using the graphical processing unit (GPU) and we obtain a 40% increase in frame rate without loss of image quality.

  • 182.
    Åström, Freddie
    et al.
    Heidelberg University, Germany.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Baravdish, George
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Kommunikations- och transportsystem. Linköpings universitet, Tekniska fakulteten.
    Mapping-Based Image Diffusion, 2017. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 57, no. 3, pp. 293-323. Journal article (Refereed)
    Abstract [en]

    In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present such as in gamma correction and targeted value range filtering. We also study general denoising performance where we show comparable results to dedicated PDE-based state-of-the-art methods.

  • 183.
    Åström, Freddie
    et al.
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Baravdish, George
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Kommunikations- och transportsystem. Linköpings universitet, Tekniska högskolan.
    Lundström, Claes
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Targeted Iterative Filtering, 2013. Conference paper (Refereed)
    Abstract [en]

    The assessment of image denoising results depends on the respective application area, i.e. image compression, still-image acquisition, and medical images require entirely different behavior of the applied denoising method. In this paper we propose a novel, nonlinear diffusion scheme that is derived from a linear diffusion process in a value space determined by the application. We show that application-driven linear diffusion in the transformed space compares favorably with existing nonlinear diffusion techniques. 

  • 184.
    Åström, Freddie
    et al.
    Heidelberg Collaboratory for Image Processing Heidelberg University Heidelberg, Germany.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Scharr, Hanno
    BG-2: Plant Sciences Forschungszentrum Jülich 52425, Jülich, Germany.
    Adaptive sharpening of multimodal distributions, 2015. In: Colour and Visual Computing Symposium (CVCS), 2015 / [ed] Marius Pedersen and Jean-Baptiste Thomas, IEEE, 2015. Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel framework rendering measured distributions into approximated distributions of their mean. This is achieved by exploiting constraints imposed by the Gauss-Markov theorem from estimation theory, being valid for mono-modal Gaussian distributions. It formulates the relation between the variance of measured samples and the so-called standard error, being the standard deviation of their mean. However, multi-modal distributions are present in numerous image processing scenarios, e.g. local gray value or color distributions at object edges, or orientation or displacement distributions at occlusion boundaries in motion estimation or stereo. Our method not only aims at estimating the modes of these distributions together with their standard error, but at describing the whole multi-modal distribution. We utilize the method of channel representation, a kind of soft histogram also known as population codes, to represent distributions in a non-parametric, generic fashion. Here we apply the proposed scheme to general mono- and multimodal Gaussian distributions to illustrate its effectiveness and compliance with the Gauss-Markov theorem.
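The channel representation (soft histogram, population code) referred to in the abstract above can be illustrated by encoding scalar values with overlapping cos² basis functions; each sample contributes smoothly to a few neighbouring channels, and the resulting channel vector approximates the sample distribution. The snippet below uses the common cos² kernel with unit channel spacing as a generic illustration, not the specific scheme of the paper.

```python
import numpy as np

def cos2_channel_encode(samples, centers, width=1.5):
    """Encode samples into a soft histogram with cos^2 kernels.

    Each channel k has basis function cos^2(pi * (x - c_k) / (2 * width))
    supported on |x - c_k| < width; with unit channel spacing and width = 1.5
    every sample activates three consecutive channels.
    """
    samples = np.asarray(samples, dtype=float)[:, None]
    d = samples - centers[None, :]
    active = np.abs(d) < width
    ch = np.where(active, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)
    return ch.sum(axis=0)

# encode a bimodal set of values into 8 channels
centers = np.arange(8, dtype=float)
values = np.concatenate([np.random.default_rng(2).normal(2.0, 0.2, 200),
                         np.random.default_rng(3).normal(5.5, 0.2, 100)])
print(np.round(cos2_channel_encode(values, centers), 1))  # two bumps, around channels 2 and 5-6
```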

  • 185.
    Åström, Freddie
    et al.
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Zografos, Vasileios
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Density Driven Diffusion, 2013. In: 18th Scandinavian Conference on Image Analysis, 2013, pp. 718-730. Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel density driven diffusion scheme for image enhancement. Our approach, called D3, is a semi-local method that uses an initial structure-preserving oversegmentation step of the input image.  Because of this, each segment will approximately conform to a homogeneous region in the image, allowing us to easily estimate parameters of the underlying stochastic process thus achieving adaptive non-linear filtering. Our method is capable of producing competitive results when compared to state-of-the-art methods such as non-local means, BM3D and tensor driven diffusion on both color and grayscale images.

  • 186.
    Öfjäll, Kristoffer
    et al.
    Visionists AB, Gothenburg, Sweden.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Approximative Coding Methods for Channel Representations, 2018. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 60, no. 6, pp. 929-940. Journal article (Refereed)
    Abstract [en]

    Most methods that address computer vision problems require powerful visual features. Many successful approaches apply techniques motivated from nonparametric statistics. The channel representation provides a framework for nonparametric distribution representation. Although early work has focused on a signal processing view of the representation, the channel representation can be interpreted in probabilistic terms, e.g., representing the distribution of local image orientation. In this paper, a variety of approximative channel-based algorithms for probabilistic problems are presented: a novel efficient algorithm for density reconstruction, a novel and efficient scheme for nonlinear gridding of densities, and finally a novel method for estimating Copula densities. The experimental results provide evidence that by relaxing the requirements for exact solutions, efficient algorithms are obtained.

  • 187.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Biologically Inspired Online Learning of Visual Autonomous Driving, 2014. In: Proceedings British Machine Vision Conference 2014 / [ed] Michel Valstar; Andrew French; Tony Pridmore, BMVA Press, 2014, pp. 137-156. Conference paper (Refereed)
    Abstract [en]

    While autonomously driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes (Hebbian learning). We show that this is sufficient for online learning of visual autonomous driving, where the system learns to directly map low level image features to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state of the art engineered batch learning algorithms.
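The core of the Hebbian associative learning referred to above can be written as an outer-product update of a linkage matrix between (channel-coded) input and output vectors; prediction is then a matrix-vector product followed by decoding. The sketch below shows a plain Hebbian update with a simple power-based non-linear accumulation as a rough illustration of the flavour of qHebb, not its exact update rule; all names and the exponent are illustrative assumptions.

```python
import numpy as np

class HebbianAssociator:
    """Associative memory C linking input channels a to output channels b.

    Plain Hebbian learning accumulates outer products a b^T; the prediction
    for a new input x is x^T C (up to normalisation). The exponent q is a
    stand-in for the non-linear accumulation used in qHebb-style updates.
    """

    def __init__(self, n_in, n_out, q=1.0):
        self.C = np.zeros((n_in, n_out))
        self.q = q

    def update(self, a, b):
        # element-wise q-norm style combination of old links and new evidence
        self.C = (self.C ** self.q + np.outer(a, b) ** self.q) ** (1.0 / self.q)

    def predict(self, a):
        out = a @ self.C
        s = out.sum()
        return out / s if s > 0 else out

# toy example: associate input channel patterns with output channel patterns
assoc = HebbianAssociator(n_in=4, n_out=3, q=2.0)
assoc.update(np.array([1.0, 0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
assoc.update(np.array([0.0, 0.0, 0.5, 1.0]), np.array([0.0, 0.0, 1.0]))
print(assoc.predict(np.array([1.0, 0.3, 0.0, 0.0])))  # mass concentrated on output channel 0
```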

  • 188.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game, 2012. In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2012. Conference paper (Other academic)
    Abstract [en]

    The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behavior is in general easy to model but close to the obstacles there are severe non-linearities. Additionally, the far from flat surface on which the ball rolls provides for changing dynamics depending on the ball position.

    The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. A simple deterministic control algorithm is combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller.

    A vision system and image analysis is used to estimate the ball position while a combination of a PID controller and a learning controller based on LWPR is used to learn to steer the ball through the maze.

  • 189.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Integrating Learning and Optimization for Active Vision Inverse Kinematics, 2013. In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2013. Conference paper (Other academic)
  • 190.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Online Learning and Mode Switching for Autonomous Driving from Demonstration, 2014. In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2014. Conference paper (Other academic)
  • 191.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Online learning of autonomous driving using channel representations of multi-modal joint distributions, 2015. In: Proceedings of SSBA, Swedish Symposium on Image Analysis, Swedish Society for automated image analysis, 2015. Conference paper (Other academic)
  • 192.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Online Learning of Vision-Based Robot Control during Autonomous Operation, 2015. In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, pp. 137-156. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.

  • 193.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Weighted Update and Comparison for Channel-Based Distribution Field Tracking, 2015. In: COMPUTER VISION - ECCV 2014 WORKSHOPS, PT II, Springer, 2015, Vol. 8926, pp. 218-231. Conference paper (Refereed)
    Abstract [en]

    There are three major issues for visual object trackers: model representation, search and model update. In this paper we address the last two issues for a specific model representation, grid based distribution models by means of channel-based distribution fields. Particularly we address the comparison part of searching. Previous work in the area has used standard methods for comparison and update, not exploiting all the possibilities of the representation. In this work we propose two comparison schemes and one update scheme adapted to the distribution model. The proposed schemes significantly improve the accuracy and robustness on the Visual Object Tracking (VOT) 2014 Challenge dataset.

  • 194.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Robinson, Andreas
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Visual Autonomous Road Following by Symbiotic Online Learning, 2016. In: Intelligent Vehicles Symposium (IV), 2016 IEEE, 2016, pp. 136-143. Conference paper (Refereed)
    Abstract [en]

    Recent years have shown great progress in driving assistance systems, approaching autonomous driving step by step. Many approaches rely on lane markers, however, which limits the system to larger paved roads and poses problems during winter. In this work we explore an alternative approach to visual road following based on online learning. The system learns the current visual appearance of the road while the vehicle is operated by a human. When driving onto a new type of road, the human driver will drive for a minute while the system learns. After training, the human driver can let go of the controls. The present work proposes a novel approach to online perception-action learning for the specific problem of road following, which makes interchangeable use of supervised learning (by demonstration), instantaneous reinforcement learning, and unsupervised learning (self-reinforcement learning). The proposed method, symbiotic online learning of associations and regression (SOLAR), extends previous work on qHebb-learning in three ways: priors are introduced to enforce mode selection and to drive learning towards particular goals, the qHebb-learning method is complemented with a reinforcement variant, and a self-assessment method based on predictive coding is proposed. The SOLAR algorithm is compared to qHebb-learning and deep learning for the task of road following, implemented on a model RC-car. The system demonstrates an ability to learn to follow paved and gravel roads outdoors. Further, the system is evaluated in a controlled indoor environment which provides quantifiable results. The experiments show that the SOLAR algorithm results in autonomous capabilities that go beyond those of existing methods with respect to speed, accuracy, and functionality.

  • 195.
    Öfjäll, Kristoffer
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Rapid Explorative Direct Inverse Kinematics Learning of Relevant Locations for Active Vision, 2013. In: IEEE Workshop on Robot Vision (WORV) 2013, IEEE conference proceedings, 2013, pp. 14-19. Conference paper (Refereed)
    Abstract [en]

    An online method for rapidly learning the inverse kinematics of a redundant robotic arm is presented, addressing the special requirements of active vision for visual inspection tasks. The system is initialized with a model covering a small area around the starting position, which is then incrementally extended by exploration. The number of motions during this process is minimized by only exploring configurations required for successful completion of the task at hand. The explored area is automatically extended online and on demand. To achieve this, state of the art methods for learning and numerical optimization are combined in a tight implementation where parts of the learned model, the Jacobians, are used during optimization, resulting in significant synergy effects. In a series of standard experiments, we show that the integrated method performs better than using both methods sequentially.
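The interplay between a learned forward model and numerical optimisation mentioned in the abstract above can be sketched generically: if a model provides both a prediction f(q) of the end-effector position and its Jacobian, inverse kinematics for a target position reduces to damped Gauss-Newton iterations on the joint angles. The code below substitutes a known analytic two-link arm for the learned model purely to keep the example self-contained; the iteration is the generic scheme, not the authors' specific method.

```python
import numpy as np

def forward(q, link_lengths=(1.0, 0.8)):
    """Stand-in for a learned forward model: 2-link planar arm and its Jacobian."""
    l1, l2 = link_lengths
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    J = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
                  [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])]])
    return np.array([x, y]), J

def inverse_kinematics(target, q0, n_iter=50, damping=1e-3):
    """Damped Gauss-Newton (Levenberg-style) iterations using the model Jacobian."""
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        pos, J = forward(q)
        err = target - pos
        if np.linalg.norm(err) < 1e-6:
            break
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        q += dq
    return q

q_sol = inverse_kinematics(target=np.array([1.2, 0.9]), q0=[0.3, 0.3])
print(q_sol, forward(q_sol)[0])   # reached position should be close to (1.2, 0.9)
```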
