liu.se — Search for publications in DiVA
451 - 470 of 470
  • 451.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Color separation for improved perceived image quality in terms of graininess and gamut2017In: Color Research and Application, ISSN 0361-2317, E-ISSN 1520-6378, Vol. 42, no 4, p. 486-497Article in journal (Refereed)
    Abstract [en]

    Multi-channel printing employs additional inks to improve the perceived image quality by reducing the graininess and augmenting the printer gamut. It also requires a color separation that deals with the one-to-many mapping problem imposed when using more than three inks. The proposed separation model incorporates a multilevel halftoning algorithm, reducing the complexity of the print characterization by grouping inks of similar hues in the same channel. In addition, a cost function is proposed that weights selected factors influencing the print and perceived image quality, namely color accuracy, graininess and ink consumption. The graininess perception is qualitatively assessed using S-CIELAB, a spatial low-pass filtering mimicking the human visual system. By applying it to a large set of samples, a generalized prediction quantifying the perceived graininess is carried out and incorporated as a criterion in the color separation. The results of the proposed model are compared with the separation giving the best colorimetric match, showing improvements in the perceived image quality in terms of graininess at a small cost of color accuracy and ink consumption. (c) 2016 Wiley Periodicals, Inc.
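
    The weighted cost function described above lends itself to a compact illustration. The sketch below is a minimal Python rendering of such a separation criterion; the weight values and the graininess predictor are assumptions, not the paper's calibrated model.

```python
import numpy as np

def separation_cost(delta_e, graininess, ink_amounts,
                    w_color=1.0, w_grain=0.5, w_ink=0.1):
    """Weighted cost for one candidate ink combination.

    delta_e     -- colorimetric error to the target color
    graininess  -- predicted perceived graininess (e.g. via an S-CIELAB
                   style spatial filtering of a halftoned patch)
    ink_amounts -- per-channel ink coverages in [0, 1]
    The weights are illustrative; the paper tunes such factors.
    """
    return (w_color * delta_e
            + w_grain * graininess
            + w_ink * np.sum(ink_amounts))

# Among candidate separations that are near-colorimetric matches, pick the
# one with the lowest combined cost (toy values):
candidates = [(0.8, 2.1, np.array([0.2, 0.5, 0.1])),
              (1.2, 0.9, np.array([0.3, 0.3, 0.2]))]
best = min(candidates, key=lambda c: separation_cost(*c))
```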

  • 452.
    Zobel, Valentin
    et al.
    Zuse Institute Berlin.
    Reininghaus, Jan
    Zuse Institute Berlin.
    Hotz, Ingrid
    Zuse Institute Berlin.
    Visualization of Two-Dimensional Symmetric Tensor Fields Using the Heat Kernel Signature2014In: Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications / [ed] Peer-Timo Bremer, Ingrid Hotz, Valerio Pascucci, Ronald Peikert, Springer, 2014, p. 249-262Chapter in book (Refereed)
    Abstract [en]

    We propose a method for visualizing two-dimensional symmetric positive definite tensor fields using the Heat Kernel Signature (HKS). The HKS is derived from the heat kernel and was originally introduced as an isometry invariant shape signature. Each positive definite tensor field defines a Riemannian manifold by considering the tensor field as a Riemannian metric. On this Riemannian manifold we can apply the definition of the HKS. The resulting scalar quantity is used for the visualization of tensor fields. The HKS is closely related to the Gaussian curvature of the Riemannian manifold, and the time parameter of the heat kernel allows a multiscale analysis in a natural way. In this way, the HKS represents field-related scale space properties, enabling a level-of-detail analysis of tensor fields. This makes the HKS an interesting new scalar quantity for tensor fields, which differs significantly from usual tensor invariants like the trace or the determinant. A method for visualization and a numerical realization of the HKS for tensor fields are proposed in this chapter. To validate the approach we apply it to some simple illustrative examples, such as isolated critical points, and to a medical diffusion tensor data set.
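
    For readers who want the mechanics: the HKS at a point x and time t is the weighted sum of squared Laplace eigenfunctions, HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)². A minimal sketch, assuming any symmetric positive semi-definite matrix stands in for the discretised Laplace operator of the tensor-induced metric:

```python
import numpy as np

def heat_kernel_signature(L, times):
    """hks[x, j] = sum_i exp(-lambda_i * t_j) * phi_i(x)**2,
    with (lambda_i, phi_i) the eigenpairs of the discrete operator L."""
    lam, phi = np.linalg.eigh(L)
    return (phi ** 2) @ np.exp(-np.outer(lam, times))

# Toy example: graph Laplacian of a 5-node path, three time scales.
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
hks = heat_kernel_signature(L, np.array([0.1, 1.0, 10.0]))
print(hks.shape)  # (5, 3): one multiscale signature per node
```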

  • 453.
    Zobel, Valentin
    et al.
    Leipzig University, Leipzig, Germany.
    Reininghaus, Jan
    Institute of Science and Technology Austria, Klosterneuburg, Austria.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Visualizing Symmetric Indefinite 2D Tensor Fields using the Heat Kernel Signature2015In: Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data / [ed] Ingrid Hotz, Thomas Schultz, Cham: Springer, 2015, p. 257-267Chapter in book (Refereed)
    Abstract [en]

    The Heat Kernel Signature (HKS) is a scalar quantity which is derived from the heat kernel of a given shape. Due to its robustness, isometry invariance, and multiscale nature, it has been successfully applied in many geometric applications. From a more general point of view, the HKS can be considered as a descriptor of the metric of a Riemannian manifold. Given a symmetric positive definite tensor field we may interpret it as the metric of some Riemannian manifold and thereby apply the HKS to visualize and analyze the given tensor data. In this paper, we propose a generalization of this approach that enables the treatment of indefinite tensor fields, like the stress tensor, by interpreting them as a generator of a positive definite tensor field. To investigate the usefulness of this approach we consider the stress tensor from the two-point-load model example and from a mechanical work piece.
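
    One natural way to turn a symmetric indefinite tensor into a positive definite one is to treat it as a generator and take the matrix exponential; whether this is exactly the construction used in the chapter is an assumption of this sketch.

```python
import numpy as np
from scipy.linalg import expm

stress = np.array([[1.0, 0.5],
                   [0.5, -2.0]])    # indefinite symmetric 2D tensor
metric = expm(stress)               # symmetric positive definite result
print(np.linalg.eigvalsh(metric))   # both eigenvalues > 0, usable as a metric
```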

  • 454.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Enhancing motion segmentation by combination of complementary affinities2012In: Proceedings of the 21st International Conference on Pattern Recognition, 2012, p. 2198-2201Conference paper (Other academic)
    Abstract [en]

    Complementary information, when combined in the right way, is capable of improving clustering and segmentation results. In this paper, we show how it is possible to enhance motion segmentation accuracy with a very simple and inexpensive combination of complementary information, which comes from the column and row spaces of the same measurement matrix. We test our approach on the Hopkins155 dataset, where it outperforms all other state-of-the-art methods.
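
    A hedged sketch of the idea: build one affinity matrix from each of the two complementary subspaces and fuse them before spectral clustering. The elementwise-product fusion and the toy data below are assumptions, not the paper's exact combination rule.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
W = rng.random((20, 20))            # stand-in for the measurement matrix

# Affinities derived from the column space and the row space of W.
A_col = np.abs(W @ W.T)
A_row = np.abs(W.T @ W)

A = A_col * A_row                   # keep only agreements between the cues
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
```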

  • 455.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    The Weibull manifold in low-level image processing: an application to automatic image focusing2013In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no 5, p. 401-417Article in journal (Refereed)
    Abstract [en]

    In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. For a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
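
    As a concrete illustration of the second step, the sketch below fits a 2-parameter Weibull distribution to difference-filter responses, mapping an image to a point on the Weibull manifold. The Sobel magnitude filter is an assumption standing in for the paper's simple difference-based filters.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.stats import weibull_min

def weibull_descriptor(image):
    """Return the (shape, scale) Weibull parameters of the gradient
    magnitudes, i.e. the image's coordinates on the Weibull manifold."""
    gx, gy = sobel(image, axis=0), sobel(image, axis=1)
    r = np.hypot(gx, gy).ravel()
    r = r[r > 0]                              # Weibull support is (0, inf)
    shape, _, scale = weibull_min.fit(r, floc=0)
    return shape, scale

# Autofocusing would evaluate this descriptor over a focus sweep and
# minimise a cost defined on the resulting point set.
```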

  • 456.
    Åkerlind, Christina
    et al.
    Linköping University, Department of Physics, Chemistry and Biology. Linköping University, Faculty of Science & Engineering. FOI, Linköping, Sweden.
    Fagerström, Jan
    FOI, Linköping, Sweden.
    Hallberg, Tomas
    FOI, Linköping, Sweden.
    Kariis, Hans
    FOI, Linköping, Sweden.
    Evaluation criteria for spectral design of camouflage2015In: Proc. SPIE 9653, Target and Background Signatures / [ed] Karin U. Stein; Ric H. M. A. Schleijpen, SPIE - International Society for Optical Engineering, 2015, Vol. 9653, Art. no. 9653-2Conference paper (Refereed)
    Abstract [en]

    In the development of visual (VIS) and infrared (IR) camouflage for signature management, the aim is to design the surface properties of an object to spectrally match or adapt to a background and thereby minimize the contrast perceived by a threatening sensor. The so-called 'ladder model' relates the requirements for task measures of effectiveness with surface structure properties through the steps signature effectiveness and object signature. It is intended to link material properties via platform signature to military utility and vice versa. Spectral design of a surface intends to give it a desired wavelength-dependent optical response to fit a specific application of interest. Six evaluation criteria were stated, with the aim of aiding the process of putting requirements on camouflage and of evaluating it. The six criteria correspond to properties such as reflectance, gloss, emissivity, and degree of polarization, as well as dynamic properties and broadband or multispectral properties. These criteria have previously been exemplified on different kinds of materials and investigated separately. Anderson and Åkerlind further point out that the six criteria have rarely been considered or described all together in one and the same publication. The specific level of requirement for the different properties must be specified individually for each specific situation and environment to minimize the contrast between target and background. The criteria, or properties, are not totally independent of one another; how they are correlated is part of the theme of this paper. However, prioritization has been made owing to the limited space, so not all of the interconnections between the six criteria are considered in this report. The ladder step prior to digging into the different material composition possibilities and the choice of suitable materials and structures (not covered here) includes the object signature and the decision of what the spectral response should be, when intended for a specific environment. The chosen spectral response should give a low detection probability (DP). How detection probability connects to image analysis tools and the implementation of the six criteria is part of this work.

  • 457.
    Åström, Freddie
    et al.
    Heidelberg University, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Baravdish, George
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Mapping-Based Image Diffusion2017In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 57, no 3, p. 293-323Article in journal (Refereed)
    Abstract [en]

    In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in the literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present, such as in gamma correction and targeted value range filtering. We also study general denoising performance, where we show comparable results to dedicated PDE-based state-of-the-art methods.
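
    The functional itself is beyond a short sketch, but the kind of PDE iteration such variational models lead to is easy to show. Below is a generic edge-stopping (Perona-Malik-type) diffusion step, explicitly not the paper's tensor-based formulation:

```python
import numpy as np

def diffusion_step(u, dt=0.2, k=0.1):
    """One explicit step of nonlinear scalar image diffusion.
    Periodic borders via np.roll, for brevity."""
    g = lambda d: np.exp(-(d / k) ** 2)       # edge-stopping diffusivity
    total = np.zeros_like(u)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        d = np.roll(u, shift, axis=axis) - u  # difference to each neighbour
        total += g(d) * d
    return u + dt * total
```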

  • 458.
    Åström, Freddie
    et al.
    Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Scharr, Hanno
    BG-2: Plant Sciences, Forschungszentrum Jülich, 52425 Jülich, Germany.
    Adaptive sharpening of multimodal distributions2015In: Colour and Visual Computing Symposium (CVCS), 2015 / [ed] Marius Pedersen and Jean-Baptiste Thomas, IEEE, 2015Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel framework rendering measured distributions into approximated distributions of their mean. This is achieved by exploiting constraints imposed by the Gauss-Markov theorem from estimation theory, being valid for mono-modal Gaussian distributions. It formulates the relation between the variance of measured samples and the so-called standard error, being the standard deviation of their mean. However, multi-modal distributions are present in numerous image processing scenarios, e.g. local gray value or color distributions at object edges, or orientation or displacement distributions at occlusion boundaries in motion estimation or stereo. Our method not only aims at estimating the modes of these distributions together with their standard error, but at describing the whole multi-modal distribution. We utilize the method of channel representation, a kind of soft histogram also known as population codes, to represent distributions in a non-parametric, generic fashion. Here we apply the proposed scheme to general mono- and multimodal Gaussian distributions to illustrate its effectiveness and compliance with the Gauss-Markov theorem.
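
    The channel representation mentioned above is easy to state concretely: each sample activates a few overlapping, smooth basis functions, yielding a soft histogram. A minimal sketch with the common cos² kernel (channel spacing equal to the kernel width is an assumption):

```python
import numpy as np

def channel_encode(x, centers, width=1.0):
    """Encode scalar samples with overlapping cos^2 channels; each sample
    activates at most the three channels within 1.5 * width of it."""
    d = (np.atleast_1d(x)[:, None] - np.asarray(centers)[None, :]) / width
    b = np.cos(np.pi * d / 3) ** 2
    b[np.abs(d) >= 1.5] = 0.0
    return b

centers = np.arange(10.0)                 # channel centers 0..9
enc = channel_encode([2.3, 2.5, 7.1], centers)
dist = enc.mean(axis=0)                   # non-parametric density estimate
```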

  • 459.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Adaptive Supervision Online Learning for Vision Based Autonomous Systems2016Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Driver assistance systems in modern cars now show clear steps towards autonomous driving, and improvements are presented at a steady pace. The total number of sensors has also decreased since the vehicles of the initial DARPA challenge, which more resembled piles of sensors with a car underneath. Still, anyone driving a tele-operated toy using a video link demonstrates that a single camera provides enough information about the surrounding world.

    Most lane assist systems are developed for highway use and depend on visible lane markers. However, lane markers may not be visible due to snow or wear, and there are roads without lane markers. With a slightly different approach, autonomous road following can be obtained on almost any kind of road. Using realtime online machine learning, a human driver can demonstrate driving on a road type unknown to the system and after some training, the system can seamlessly take over. The demonstrator system presented in this work has shown capability of learning to follow different types of roads as well as learning to follow a person. The system is based solely on vision, mapping camera images directly to control signals.  

    Such systems need the ability to handle multiple-hypothesis outputs as there may be several plausible options in similar situations. If there is an obstacle in the middle of the road, the obstacle can be avoided by going on either side. However, the average action, going straight ahead, is not a viable option. Similarly, at an intersection, the system should follow one road, not the average of all roads.

    To this end, an online machine learning framework is presented where inputs and outputs are represented using the channel representation. The learning system is structurally simple and computationally light, based on neuropsychological ideas presented by Donald Hebb over 60 years ago. Nonetheless, the system has shown a capability to learn advanced tasks. Furthermore, the structure of the system permits a statistical interpretation where a non-parametric representation of the joint distribution of input and output is generated. Prediction generates the conditional distribution of the output, given the input.

    The statistical interpretation motivates the introduction of priors. In cases with multiple options, such as at intersections, a prior can select one mode in the multimodal distribution of possible actions. In addition to the ability to learn from demonstration, a possibility for immediate reinforcement feedback is presented. This allows for a system where the teacher can choose the most appropriate way of training the system, at any time and at her own discretion.  

    The theoretical contributions include a deeper analysis of the channel representation. A geometrical analysis illustrates the cause of decoding bias commonly present in neurologically inspired representations, and measures to counteract it. Confidence values are analyzed and interpreted as evidence and coherence. Further, the use of the truncated cosine basis function is motivated.  

    Finally, a selection of applications is presented, such as autonomous road following by online learning and head pose estimation. A method founded on the same basic principles is used for visual tracking, where the probabilistic representation of target pixel values allows for changes in target appearance.
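
    A minimal sketch of the associative learning scheme the thesis builds on: a Hebbian linkage matrix accumulates co-activations of channel-encoded inputs and outputs, and prediction reads out a conditional output distribution. The thesis's qHebb method adds non-linear updates and coherence weighting on top of this basic rule; the code below is only the baseline idea.

```python
import numpy as np

class HebbianAssociator:
    """Baseline associative Hebbian mapping between channel vectors."""

    def __init__(self, n_in, n_out, lr=0.1):
        self.C = np.zeros((n_out, n_in))   # input/output linkage matrix
        self.lr = lr

    def train(self, a_in, a_out):
        # Hebbian rule: strengthen links between co-active channels.
        self.C += self.lr * np.outer(a_out, a_in)

    def predict(self, a_in):
        # Conditional distribution of the output, given the input encoding.
        p = self.C @ a_in
        s = p.sum()
        return p / s if s > 0 else p
```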

  • 460.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    LEAP, A Platform for Evaluation of Control Algorithms2010Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Most people are familiar with the BRIO labyrinth game and the challenge of guiding the ball through the maze. The goal of this project was to use this game to create a platform for evaluation of control algorithms. The platform was used to evaluate a few different control algorithms, both traditional automatic control algorithms as well as algorithms based on online incremental learning.

    The game was fitted with servo actuators for tilting the maze. A camera together with computer vision algorithms was used to estimate the state of the game. The evaluated control algorithm had the task of calculating a proper control signal, given the estimated state of the game.

    The evaluated learning systems used traditional control algorithms to provide initial training data. After initial training, the systems learned from their own actions and after a while they outperformed the controller used to provide initial training.

  • 461.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning for Robot Vision2014Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems, however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35].

    Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted and if the learning system is able to extract the gist of these examples, the gap is bridged. There are however some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods.

    This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored, learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state of the art batch learning methods.

    List of papers
    1. Autonomous Navigation and Sign Detector Learning
    2013 (English)In: IEEE Workshop on Robot Vision (WORV) 2013, IEEE, 2013, p. 144-151Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents an autonomous robotic system that incorporates novel Computer Vision, Machine Learning and Data Mining algorithms in order to learn to navigate and discover important visual entities. This is achieved within a Learning from Demonstration (LfD) framework, where policies are derived from example state-to-action mappings. For autonomous navigation, a mapping is learnt from holistic image features (GIST) onto control parameters using Random Forest regression. Additionally, visual entities (road signs e.g. STOP sign) that are strongly associated to autonomously discovered modes of action (e.g. stopping behaviour) are discovered through a novel Percept-Action Mining methodology. The resulting sign detector is learnt without any supervision (no image labeling or bounding box annotations are used). The complete system is demonstrated on a fully autonomous robotic platform, featuring a single camera mounted on a standard remote control car. The robot carries a PC laptop, that performs all the processing on board and in real-time.

    Place, publisher, year, edition, pages
    IEEE, 2013
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-86214 (URN)10.1109/WORV.2013.6521929 (DOI)978-1-4673-5647-3 (ISBN)978-1-4673-5646-6 (ISBN)
    Conference
    IEEE Workshop on Robot Vision (WORV 2013), 15-17 January 2013, Clearwater Beach, FL, USA
    Projects
    ELLIIT, ETT, CUAS, UK EPSRC: EP/H023135/1
    Available from: 2012-12-11 Created: 2012-12-11 Last updated: 2016-06-14
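    A hedged sketch of the navigation-learning component of paper 1 above: holistic image features are regressed onto control parameters with a Random Forest. The GIST extraction is stubbed out with random data (hypothetical), since only the learning-from-demonstration mapping is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
gist_features = rng.random((500, 512))   # stand-in for GIST descriptors
steering = rng.uniform(-1.0, 1.0, 500)   # operator's demonstrated commands

policy = RandomForestRegressor(n_estimators=100, random_state=0)
policy.fit(gist_features, steering)      # derive the policy from examples

u = policy.predict(gist_features[:1])    # control parameter for a new frame
```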
    2. Online Learning of Vision-Based Robot Control during Autonomous Operation
    2015 (English)In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, p. 137-156Chapter in book (Refereed)
    Abstract [en]

    Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.

    Place, publisher, year, edition, pages
    Springer Berlin/Heidelberg, 2015
    Series
    Cognitive Systems Monographs, ISSN 1867-4925 ; Vol. 23
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-110891 (URN)10.1007/978-3-662-43859-6_8 (DOI)978-3-662-43858-9 (ISBN)978-3-662-43859-6 (ISBN)
    Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11Bibliographically approved
    3. Biologically Inspired Online Learning of Visual Autonomous Driving
    2014 (English)In: Proceedings British Machine Vision Conference 2014 / [ed] Michel Valstar; Andrew French; Tony Pridmore, BMVA Press, 2014, p. 137-156Conference paper, Published paper (Refereed)
    Abstract [en]

    While autonomous driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low-level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes (Hebbian learning). We show that this is sufficient for online learning of visual autonomous driving, where the system learns to directly map low-level image features to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state-of-the-art engineered batch learning algorithms.

    Place, publisher, year, edition, pages
    BMVA Press, 2014
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-110890 (URN)10.5244/C.28.94 (DOI)1901725529 (ISBN)
    Conference
    British Machine Vision Conference 2014, Nottingham, UK September 1-5 2014
    Note

    The video contains the online learning autonomous driving system in operation. Data from the system has been synchronized with the video and is shown overlaid. The actuated steering signal is visualized as the position of a blue dot. The steering signal predicted by the system is visualized by a green circle. During autonomous operation, these two coincide. When the vehicle is controlled manually (training), the word MANUAL is displayed in the video. The first sequence evaluates the ability of the system to stay on the road during road reconfiguration. The results of the first sequence indicate that the system primarily reacts to features on the road, not features in the surrounding area. The second sequence evaluates the multi-modal abilities of the system. After initial training, the vehicle follows the outer track, going straight in the two three-way junctions. By forcing the vehicle to turn right at one intersection, by means of a short application of manual control, a new mode is introduced. When the system later reaches the same intersection, the vehicle either turns or continues straight ahead depending on which of the two modes is the strongest. The ordering of the modes depends on slight variation in the approach to the junction and on noise. The third sequence is longer, evaluating both multi-modal abilities and effects of track reconfiguration. Container: MP4. Codec: H.264, 1280x720.

    Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11Bibliographically approved
    4. Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game
    2012 (English)In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2012, 2012Conference paper, Published paper (Other academic)
    Abstract [en]

    The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behavior is in general easy to model but close to the obstacles there are severe non-linearities. Additionally, the far from flat surface on which the ball rolls provides for changing dynamics depending on the ball position.

    The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. A simple deterministic control algorithm is combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller.

    A vision system and image analysis are used to estimate the ball position, while a combination of a PID controller and a learning controller based on LWPR is used to learn to steer the ball through the maze.

    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-110888 (URN)
    Conference
    Swedish Symposium on Image Analysis for 2012, March 8-9, Stockholm, Sweden
    Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11Bibliographically approved
  • 462.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Biologically Inspired Online Learning of Visual Autonomous Driving2014In: Proceedings British Machine Vision Conference 2014 / [ed] Michel Valstar; Andrew French; Tony Pridmore, BMVA Press, 2014, p. 137-156Conference paper (Refereed)
    Abstract [en]

    While autonomous driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low-level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes (Hebbian learning). We show that this is sufficient for online learning of visual autonomous driving, where the system learns to directly map low-level image features to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state-of-the-art engineered batch learning algorithms.

  • 463.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game2012In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2012, 2012Conference paper (Other academic)
    Abstract [en]

    The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behavior is in general easy to model but close to the obstacles there are severe non-linearities. Additionally, the far from flat surface on which the ball rolls provides for changing dynamics depending on the ball position.

    The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. A simple deterministic control algorithm is combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller.

    A vision system and image analysis are used to estimate the ball position, while a combination of a PID controller and a learning controller based on LWPR is used to learn to steer the ball through the maze.
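
    The controller combination described above can be sketched compactly: a hand-tuned PID loop provides the initial behaviour (and training data), and control is blended over to the learned regressor as it improves. The blending rule is an assumption, and LWPR itself is not reimplemented here.

```python
class PID:
    """Textbook PID loop on the ball-position error."""

    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def control(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def blended_control(pid_u, learned_u, confidence):
    """Fade from the PID teacher to the learned controller (e.g. LWPR)
    as the learner's confidence grows."""
    return (1.0 - confidence) * pid_u + confidence * learned_u
```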

  • 464.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Integrating Learning and Optimization for Active Vision Inverse Kinematics2013In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2013, 2013Conference paper (Other academic)
  • 465.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Online Learning and Mode Switching for Autonomous Driving from Demonstration2014In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2014, 2014Conference paper (Other academic)
  • 466.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Online learning of autonomous driving using channel representations of multi-modal joint distributions2015In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2015, Swedish Society for Automated Image Analysis, 2015Conference paper (Other academic)
  • 467.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning of Vision-Based Robot Control during Autonomous Operation2015In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, p. 137-156Chapter in book (Refereed)
    Abstract [en]

    Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.

  • 468.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Weighted Update and Comparison for Channel-Based Distribution Field Tracking2015In: COMPUTER VISION - ECCV 2014 WORKSHOPS, PT II, Springer, 2015, Vol. 8926, p. 218-231Conference paper (Refereed)
    Abstract [en]

    There are three major issues for visual object trackers: model representation, search and model update. In this paper we address the last two issues for a specific model representation, grid-based distribution models by means of channel-based distribution fields. Particularly, we address the comparison part of searching. Previous work in the area has used standard methods for comparison and update, not exploiting all the possibilities of the representation. In this work we propose two comparison schemes and one update scheme adapted to the distribution model. The proposed schemes significantly improve the accuracy and robustness on the Visual Object Tracking (VOT) 2014 Challenge dataset.
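
    The representation in question stores one channel-coded distribution per pixel. A minimal sketch of the baseline update and comparison that the paper's weighted schemes improve upon (array shapes are assumptions):

```python
import numpy as np

def update_field(model, observation, alpha=0.1):
    """Convex-combination update of an H x W x C distribution field."""
    return (1.0 - alpha) * model + alpha * observation

def compare_fields(model, candidate):
    """Baseline L1 comparison between two distribution fields."""
    return np.abs(model - candidate).sum()
```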

  • 469.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Robinson, Andreas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Visual Autonomous Road Following by Symbiotic Online Learning2016In: Intelligent Vehicles Symposium (IV), 2016 IEEE, 2016, p. 136-143Conference paper (Refereed)
    Abstract [en]

    Recent years have shown great progress in driving assistance systems, approaching autonomous driving step by step. Many approaches rely on lane markers however, which limits the system to larger paved roads and poses problems during winter. In this work we explore an alternative approach to visual road following based on online learning. The system learns the current visual appearance of the road while the vehicle is operated by a human. When driving onto a new type of road, the human driver will drive for a minute while the system learns. After training, the human driver can let go of the controls. The present work proposes a novel approach to online perception-action learning for the specific problem of road following, which makes interchangeable use of supervised learning (by demonstration), instantaneous reinforcement learning, and unsupervised learning (self-reinforcement learning). The proposed method, symbiotic online learning of associations and regression (SOLAR), extends previous work on qHebb-learning in three ways: priors are introduced to enforce mode selection and to drive learning towards particular goals, the qHebb-learning method is complemented with a reinforcement variant, and a self-assessment method based on predictive coding is proposed. The SOLAR algorithm is compared to qHebb-learning and deep learning for the task of road following, implemented on a model RC-car. The system demonstrates an ability to learn to follow paved and gravel roads outdoors. Further, the system is evaluated in a controlled indoor environment which provides quantifiable results. The experiments show that the SOLAR algorithm results in autonomous capabilities that go beyond those of existing methods with respect to speed, accuracy, and functionality.

  • 470.
    Örtenberg, Alexander
    et al.
    Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Medicine and Health Sciences.
    Magnusson, Maria
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering. Linköping University, Faculty of Medicine and Health Sciences.
    Sandborg, Michael
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Radiation Physics.
    Alm Carlsson, Gudrun
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Radiation Physics.
    Malusek, Alexandr
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences.
    Parallelisation of the model-based iterative reconstruction algorithm DIRA2016In: Radiation Protection Dosimetry, ISSN 0144-8420, E-ISSN 1742-3406, Vol. 169, no 1-4, p. 405-409Article in journal (Refereed)
    Abstract [en]

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
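
    OpenMP itself targets C/C++/Fortran, so as a language-neutral illustration of the same idea (distributing the independent iterations of a reconstruction loop over cores), here is a hedged Python analogue using worker processes; the per-iteration workload is a stand-in.

```python
from multiprocessing import Pool

import numpy as np

def reconstruct_chunk(k):
    """Stand-in for one independent iteration of a reconstruction loop,
    e.g. backprojecting one projection angle."""
    rng = np.random.default_rng(k)
    return rng.random((256, 256)).sum()

if __name__ == "__main__":
    # OpenMP parallelises such a loop with a single pragma in C; here the
    # iterations are mapped over a pool of 16 worker processes instead.
    with Pool(processes=16) as pool:
        partials = pool.map(reconstruct_chunk, range(360))
    total = sum(partials)
```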
