Search for publications in DiVA (liu.se)
351 - 367 of 367
  • 351.
    Åström, Freddie
    et al.
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Baravdish, George
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    Lundström, Claes
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Targeted Iterative Filtering (2013). Conference paper (Refereed)
    Abstract [en]

    The assessment of image denoising results depends on the respective application area: image compression, still-image acquisition, and medical imaging require entirely different behavior from the applied denoising method. In this paper we propose a novel nonlinear diffusion scheme that is derived from a linear diffusion process in a value space determined by the application. We show that application-driven linear diffusion in the transformed space compares favorably with existing nonlinear diffusion techniques.
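
    A minimal sketch of the idea of linear diffusion in a transformed value space (an illustration only; the paper's transform is application-specific, and the power-law map T below is a hypothetical stand-in): transform the image values, run linear diffusion (equivalently, Gaussian smoothing), and map back.

    ```python
    # Illustration of linear diffusion in a transformed value space.
    # The power-law transform T is an assumed stand-in for the
    # application-determined value mapping described in the abstract.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def diffuse_in_value_space(u, t=2.0, gamma=0.5):
        T = lambda v: np.clip(v, 0.0, None) ** gamma          # assumed value-space transform
        T_inv = lambda v: np.clip(v, 0.0, None) ** (1.0 / gamma)
        # Linear (homogeneous) diffusion for time t equals Gaussian
        # smoothing with sigma = sqrt(2 t), applied in the transformed space.
        return T_inv(gaussian_filter(T(u), sigma=np.sqrt(2.0 * t)))
    ```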

  • 352.
    Åström, Freddie
    et al.
    Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Scharr, Hanno
    BG-2: Plant Sciences, Forschungszentrum Jülich, 52425 Jülich, Germany.
    Adaptive sharpening of multimodal distributions (2015). In: Colour and Visual Computing Symposium (CVCS), 2015 / [ed] Marius Pedersen and Jean-Baptiste Thomas, IEEE, 2015. Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel framework rendering measured distributions into approximated distributions of their mean. This is achieved by exploiting constraints imposed by the Gauss-Markov theorem from estimation theory, being valid for mono-modal Gaussian distributions. It formulates the relation between the variance of measured samples and the so-called standard error, being the standard deviation of their mean. However, multi-modal distributions are present in numerous image processing scenarios, e.g. local gray value or color distributions at object edges, or orientation or displacement distributions at occlusion boundaries in motion estimation or stereo. Our method not only aims at estimating the modes of these distributions together with their standard error, but at describing the whole multi-modal distribution. We utilize the method of channel representation, a kind of soft histogram also known as population codes, to represent distributions in a non-parametric, generic fashion. Here we apply the proposed scheme to general mono- and multimodal Gaussian distributions to illustrate its effectiveness and compliance with the Gauss-Markov theorem.
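
    The Gauss-Markov constraint referred to above is the familiar relation between sample spread and the standard error of the mean; for N independent samples with standard deviation sigma:

    ```latex
    \operatorname{SE}(\bar{x}) = \frac{\sigma}{\sqrt{N}}, \qquad
    \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i .
    ```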

  • 353.
    Åström, Freddie
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Koker, Rasit
    Engineering Faculty, Esentepe Kampus, Computer Engineering Department, Sakarya University, Turkey.
    A parallel neural network approach to prediction of Parkinson's Disease (2011). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 38, no 10, p. 12470-12474. Article in journal (Refereed)
    Abstract [en]

    Recently, neural network based diagnosis of medical diseases has attracted a great deal of attention. In this paper a parallel feed-forward neural network structure is used in the prediction of Parkinson's Disease. The main idea is to use more than a single neural network to reduce the risk of an erroneous decision. The output of each neural network is evaluated by a rule-based system for the final decision. Another important point is that, during the training process, the unlearned data of each neural network is collected and used in the training set of the next neural network. The designed parallel network system significantly increased the robustness of the prediction. A set of nine parallel neural networks yielded an improvement of 8.4% in the prediction of Parkinson's Disease compared to a single network. Furthermore, it is demonstrated that the designed system, to some extent, deals with the problems of imbalanced data sets.
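
    A hedged sketch of the cascade described above (not the authors' code): each network trains on the base set plus the samples the previous network failed on (the "unlearned data"), and a simple majority rule stands in for the rule-based final decision.

    ```python
    # Cascade of networks where each net's misclassified ("unlearned")
    # samples are added to the next net's training set; majority vote
    # approximates the rule-based final decision for binary labels.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_cascade(X, y, n_nets=9):
        nets = []
        X_extra = np.empty((0, X.shape[1]))
        y_extra = np.empty((0,), dtype=y.dtype)
        for _ in range(n_nets):
            net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000)
            net.fit(np.vstack([X, X_extra]), np.concatenate([y, y_extra]))
            wrong = net.predict(X) != y               # samples this net did not learn
            X_extra, y_extra = X[wrong], y[wrong]     # emphasized for the next net
            nets.append(net)
        return nets

    def predict_majority(nets, X):
        votes = np.stack([net.predict(X) for net in nets])  # (n_nets, n_samples)
        return (votes.mean(axis=0) >= 0.5).astype(int)      # majority vote
    ```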

  • 354.
    Åström, Freddie
    et al.
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Density Driven Diffusion (2013). In: 18th Scandinavian Conference on Image Analysis, 2013, p. 718-730. Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel density driven diffusion scheme for image enhancement. Our approach, called D3, is a semi-local method that uses an initial structure-preserving oversegmentation step of the input image. Because of this, each segment will approximately conform to a homogeneous region in the image, allowing us to easily estimate parameters of the underlying stochastic process, thus achieving adaptive non-linear filtering. Our method is capable of producing competitive results when compared to state-of-the-art methods such as non-local means, BM3D and tensor driven diffusion on both color and grayscale images.

  • 355.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Adaptive Supervision Online Learning for Vision Based Autonomous Systems (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Driver assistance systems in modern cars now show clear steps towards autonomous driving, and improvements are presented at a steady pace. The total number of sensors has also decreased since the vehicles of the initial DARPA challenge, which more resembled a pile of sensors with a car underneath. Still, anyone driving a tele-operated toy over a video link demonstrates that a single camera provides enough information about the surrounding world.

    Most lane assist systems are developed for highway use and depend on visible lane markers. However, lane markers may not be visible due to snow or wear, and there are roads without lane markers. With a slightly different approach, autonomous road following can be obtained on almost any kind of road. Using real-time online machine learning, a human driver can demonstrate driving on a road type unknown to the system and, after some training, the system can seamlessly take over. The demonstrator system presented in this work has shown the capability to learn to follow different types of roads as well as to follow a person. The system is based solely on vision, mapping camera images directly to control signals.

    Such systems need the ability to handle multiple-hypothesis outputs, as there may be several plausible options in similar situations. If there is an obstacle in the middle of the road, the obstacle can be avoided by going on either side; however, the average action, going straight ahead, is not a viable option. Similarly, at an intersection, the system should follow one road, not the average of all roads.

    To this end, an online machine learning framework is presented where inputs and outputs are represented using the channel representation. The learning system is structurally simple and computationally light, based on neuropsychological ideas presented by Donald Hebb over 60 years ago. Nonetheless, the system has shown a capability to learn advanced tasks. Furthermore, the structure of the system permits a statistical interpretation where a non-parametric representation of the joint distribution of input and output is generated. Prediction generates the conditional distribution of the output, given the input.

    The statistical interpretation motivates the introduction of priors. In cases with multiple options, such as at intersections, a prior can select one mode in the multimodal distribution of possible actions. In addition to the ability to learn from demonstration, a possibility for immediate reinforcement feedback is presented. This allows for a system where the teacher can choose the most appropriate way of training the system, at any time and at her own discretion.  

    The theoretical contributions include a deeper analysis of the channel representation. A geometrical analysis illustrates the cause of decoding bias commonly present in neurologically inspired representations, and measures to counteract it. Confidence values are analyzed and interpreted as evidence and coherence. Further, the use of the truncated cosine basis function is motivated.  

    Finally, a selection of applications is presented, such as autonomous road following by online learning and head pose estimation. A method founded on the same basic principles is used for visual tracking, where the probabilistic representation of target pixel values allows for changes in target appearance.
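
    For reference, a minimal sketch of the channel representation with the truncated cosine basis mentioned above; unit channel spacing and a kernel width of 3 are assumed parameters for this illustration, not values taken from the thesis.

    ```python
    # Channel encoding with a truncated cosine (cos^2) basis: a scalar
    # value casts soft "votes" into the few channels whose centers lie
    # within the kernel support, producing a soft histogram.
    import numpy as np

    def channel_encode(x, centers):
        d = x - centers                        # distance to each channel center
        k = np.cos(np.pi * d / 3.0) ** 2       # truncated cos^2 basis function
        k[np.abs(d) >= 1.5] = 0.0              # kernel support: |d| < 1.5
        return k                               # soft-histogram vote vector

    centers = np.arange(10.0)                  # channels centered at 0, 1, ..., 9
    c = channel_encode(3.3, centers)           # a value votes into ~3 channels
    ```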

  • 356.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning for Robot Vision (2014). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35].

    Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted and if the learning system is able to extract the gist of these examples, the gap is bridged. There are however some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality such that the learning system needs to handle large inputs. Second, visual information is often ambiguous such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods.

    This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision-based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored: learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state-of-the-art batch learning methods.
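
    A hedged sketch of the associative Hebbian core behind qHebb, assuming channel-encoded inputs and outputs like those in the previous sketch (the published qHebb adds a non-linear update and further refinements not shown here):

    ```python
    # Associative Hebbian learning on channel vectors: input/output pairs
    # are associated via an outer-product update; prediction reads out a
    # conditional output channel vector, which may be multi-modal.
    import numpy as np

    class HebbAssociator:
        def __init__(self, n_in, n_out):
            self.C = np.zeros((n_out, n_in))          # linkage (association) matrix

        def train(self, a_in, a_out, rate=0.1):
            self.C += rate * np.outer(a_out, a_in)    # Hebbian outer-product update

        def predict(self, a_in):
            u = self.C @ a_in                         # conditional output channels
            return u / max(u.sum(), 1e-12)            # normalized, multi-modal readout
    ```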

    List of papers
    1. Autonomous Navigation and Sign Detector Learning
    2013 (English). In: IEEE Workshop on Robot Vision (WORV) 2013, IEEE, 2013, p. 144-151. Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents an autonomous robotic system that incorporates novel Computer Vision, Machine Learning and Data Mining algorithms in order to learn to navigate and discover important visual entities. This is achieved within a Learning from Demonstration (LfD) framework, where policies are derived from example state-to-action mappings. For autonomous navigation, a mapping is learnt from holistic image features (GIST) onto control parameters using Random Forest regression. Additionally, visual entities (road signs, e.g. a STOP sign) that are strongly associated with autonomously discovered modes of action (e.g. stopping behaviour) are discovered through a novel Percept-Action Mining methodology. The resulting sign detector is learnt without any supervision (no image labeling or bounding box annotations are used). The complete system is demonstrated on a fully autonomous robotic platform, featuring a single camera mounted on a standard remote control car. The robot carries a PC laptop that performs all the processing on board and in real-time.
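
    The navigation step above, regression from holistic image features onto control parameters, could be sketched as follows; the .npy file names and array shapes are placeholder assumptions, and scikit-learn's RandomForestRegressor stands in for the paper's Random Forest setup.

    ```python
    # Random Forest regression from precomputed holistic image features
    # (GIST) to a demonstrated control signal, in the spirit of the LfD
    # navigation step described in the abstract.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    gist_features = np.load("gist_train.npy")    # (n_frames, n_gist_dims), assumed file
    steering = np.load("steering_train.npy")     # (n_frames,), demonstrated controls

    model = RandomForestRegressor(n_estimators=100).fit(gist_features, steering)
    control = model.predict(gist_features[:1])   # new frame's features -> control
    ```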

    Place, publisher, year, edition, pages
    IEEE, 2013
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-86214 (URN), 10.1109/WORV.2013.6521929 (DOI), 978-1-4673-5647-3 (ISBN), 978-1-4673-5646-6 (ISBN)
    Conference
    IEEE Workshop on Robot Vision (WORV 2013), 15-17 January 2013, Clearwater Beach, FL, USA
    Projects
    ELLIIT, ETT, CUAS, UK EPSRC: EP/H023135/1
    Available from: 2012-12-11 Created: 2012-12-11 Last updated: 2016-06-14
    2. Online Learning of Vision-Based Robot Control during Autonomous Operation
    2015 (English). In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, p. 137-156. Chapter in book (Refereed)
    Abstract [en]

    Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.

    Place, publisher, year, edition, pages
    Springer Berlin/Heidelberg, 2015
    Series
    Cognitive Systems Monographs, ISSN 1867-4925 ; Vol. 23
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-110891 (URN), 10.1007/978-3-662-43859-6_8 (DOI), 978-3-662-43858-9 (ISBN), 978-3-662-43859-6 (ISBN)
    Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11. Bibliographically approved
    3. Biologically Inspired Online Learning of Visual Autonomous Driving
    2014 (English). In: Proceedings British Machine Vision Conference 2014 / [ed] Michel Valstar; Andrew French; Tony Pridmore, BMVA Press, 2014, p. 137-156. Conference paper, Poster (with or without abstract) (Refereed)
    Abstract [en]

    While autonomously driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes (Hebbian learning). We show that this is sufficient for online learning of visual autonomous driving, where the system learns to directly map low level image features to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state of the art engineered batch learning algorithms.

    Place, publisher, year, edition, pages
    BMVA Press, 2014
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-110890 (URN), 10.5244/C.28.94 (DOI), 1901725529 (ISBN)
    Conference
    British Machine Vision Conference 2014, Nottingham, UK September 1-5 2014
    Note

    The video shows the online-learning autonomous driving system in operation. Data from the system has been synchronized with the video and is shown overlaid. The actuated steering signal is visualized as the position of a blue dot, and the steering signal predicted by the system as a green circle. During autonomous operation, these two coincide. When the vehicle is controlled manually (training), the word MANUAL is displayed in the video.

    The first sequence evaluates the ability of the system to stay on the road during road reconfiguration. The results indicate that the system primarily reacts to features on the road, not features in the surrounding area. The second sequence evaluates the multi-modal abilities of the system. After initial training, the vehicle follows the outer track, going straight in the two three-way junctions. By forcing the vehicle to turn right at one intersection, by means of a short application of manual control, a new mode is introduced. When the system later reaches the same intersection, the vehicle either turns or continues straight ahead depending on which of the two modes is the strongest. The ordering of the modes depends on slight variation in the approach to the junction and on noise. The third sequence is longer, evaluating both multi-modal abilities and effects of track reconfiguration.

    Container: MP4. Codec: H.264, 1280x720.

    Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2019-11-11. Bibliographically approved
    4. Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game
    2012 (English). In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2012. Conference paper, Published paper (Other academic)
    Abstract [en]

    The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behavior is in general easy to model, but close to the obstacles there are severe non-linearities. Additionally, the far-from-flat surface on which the ball rolls gives rise to changing dynamics depending on the ball position.

    The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. A simple deterministic control algorithm is therefore combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller.

    A vision system and image analysis is used to estimate the ball position while a combination of a PID controller and a learning controller based on LWPR is used to learn to steer the ball through the maze.

    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-110888 (URN)
    Conference
    Swedish Symposium on Image Analysis 2012, March 8-9, Stockholm, Sweden
    Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11. Bibliographically approved
  • 357.
    Öfjäll, Kristoffer
    et al.
    Visionists AB, Gothenburg, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Approximative Coding Methods for Channel Representations (2018). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 60, no 6, p. 929-940. Article in journal (Refereed)
    Abstract [en]

    Most methods that address computer vision problems require powerful visual features. Many successful approaches apply techniques motivated from nonparametric statistics. The channel representation provides a framework for nonparametric distribution representation. Although early work has focused on a signal processing view of the representation, the channel representation can be interpreted in probabilistic terms, e.g., representing the distribution of local image orientation. In this paper, a variety of approximative channel-based algorithms for probabilistic problems are presented: a novel efficient algorithm for density reconstruction, a novel and efficient scheme for nonlinear gridding of densities, and finally a novel method for estimating Copula densities. The experimental results provide evidence that by relaxing the requirements for exact solutions, efficient algorithms are obtained.
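
    As a point of reference for the density-reconstruction problem, a brute-force (non-approximative) reconstruction from channel coefficients is a weighted kernel sum; the cos² kernel and unit channel spacing below are assumptions for the sketch, not the paper's efficient algorithms.

    ```python
    # Naive channel-based density reconstruction: evaluate the weighted sum
    # of channel kernels on a grid and normalize. The paper's contribution
    # is efficient approximations; this version only shows the target.
    import numpy as np

    def reconstruct_density(coeffs, centers, x):
        d = x[:, None] - centers[None, :]        # (n_points, n_channels) distances
        k = np.cos(np.pi * d / 3.0) ** 2         # assumed cos^2 channel kernel
        k[np.abs(d) >= 1.5] = 0.0                # truncated kernel support
        p = k @ coeffs                           # weighted sum of channel kernels
        return p / np.trapz(p, x)                # normalize to integrate to 1
    ```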

  • 358.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Biologically Inspired Online Learning of Visual Autonomous Driving (2014). In: Proceedings British Machine Vision Conference 2014 / [ed] Michel Valstar; Andrew French; Tony Pridmore, BMVA Press, 2014, p. 137-156. Conference paper (Refereed)
    Abstract [en]

    While autonomously driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes (Hebbian learning). We show that this is sufficient for online learning of visual autonomous driving, where the system learns to directly map low level image features to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state of the art engineered batch learning algorithms.

  • 359.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game (2012). In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2012. Conference paper (Other academic)
    Abstract [en]

    The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behavior is in general easy to model, but close to the obstacles there are severe non-linearities. Additionally, the far-from-flat surface on which the ball rolls gives rise to changing dynamics depending on the ball position.

    The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. A simple deterministic control algorithm is therefore combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller.

    A vision system and image analysis is used to estimate the ball position while a combination of a PID controller and a learning controller based on LWPR is used to learn to steer the ball through the maze.

  • 360.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Integrating Learning and Optimization for Active Vision Inverse Kinematics (2013). In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2013. Conference paper (Other academic)
  • 361.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Online Learning and Mode Switching for Autonomous Driving from Demonstration (2014). In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2014. Conference paper (Other academic)
  • 362.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Online learning of autonomous driving using channel representations of multi-modal joint distributions (2015). In: Proceedings of SSBA, Swedish Symposium on Image Analysis, Swedish Society for Automated Image Analysis, 2015. Conference paper (Other academic)
  • 363.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning of Vision-Based Robot Control during Autonomous Operation (2015). In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, p. 137-156. Chapter in book (Refereed)
    Abstract [en]

    Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.

  • 364.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Weighted Update and Comparison for Channel-Based Distribution Field Tracking (2015). In: Computer Vision - ECCV 2014 Workshops, Pt II, Springer, 2015, Vol. 8926, p. 218-231. Conference paper (Refereed)
    Abstract [en]

    There are three major issues for visual object trackers: model representation, search and model update. In this paper we address the last two issues for a specific model representation, grid based distribution models by means of channel-based distribution fields. Particularly we address the comparison part of searching. Previous work in the area has used standard methods for comparison and update, not exploiting all the possibilities of the representation. In this work we propose two comparison schemes and one update scheme adapted to the distribution model. The proposed schemes significantly improve the accuracy and robustness on the Visual Object Tracking (VOT) 2014 Challenge dataset.

  • 365.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Robinson, Andreas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Visual Autonomous Road Following by Symbiotic Online Learning (2016). In: Intelligent Vehicles Symposium (IV), 2016 IEEE, p. 136-143. Conference paper (Refereed)
    Abstract [en]

    Recent years have shown great progress in driving assistance systems, approaching autonomous driving step by step. Many approaches rely on lane markers, however, which limits the system to larger paved roads and poses problems during winter. In this work we explore an alternative approach to visual road following based on online learning. The system learns the current visual appearance of the road while the vehicle is operated by a human. When driving onto a new type of road, the human driver will drive for a minute while the system learns. After training, the human driver can let go of the controls. The present work proposes a novel approach to online perception-action learning for the specific problem of road following, which makes interchangeable use of supervised learning (by demonstration), instantaneous reinforcement learning, and unsupervised learning (self-reinforcement learning). The proposed method, symbiotic online learning of associations and regression (SOLAR), extends previous work on qHebb-learning in three ways: priors are introduced to enforce mode selection and to drive learning towards particular goals, the qHebb-learning method is complemented with a reinforcement variant, and a self-assessment method based on predictive coding is proposed. The SOLAR algorithm is compared to qHebb-learning and deep learning for the task of road following, implemented on a model RC-car. The system demonstrates an ability to learn to follow paved and gravel roads outdoors. Further, the system is evaluated in a controlled indoor environment which provides quantifiable results. The experiments show that the SOLAR algorithm results in autonomous capabilities that go beyond those of existing methods with respect to speed, accuracy, and functionality.

  • 366.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Rapid Explorative Direct Inverse Kinematics Learning of Relevant Locations for Active Vision (2013). In: IEEE Workshop on Robot Vision (WORV) 2013, IEEE conference proceedings, 2013, p. 14-19. Conference paper (Refereed)
    Abstract [en]

    An online method for rapidly learning the inverse kinematics of a redundant robotic arm is presented, addressing the special requirements of active vision for visual inspection tasks. The system is initialized with a model covering a small area around the starting position, which is then incrementally extended by exploration. The number of motions during this process is minimized by only exploring configurations required for successful completion of the task at hand. The explored area is automatically extended online and on demand. To achieve this, state-of-the-art methods for learning and numerical optimization are combined in a tight implementation where parts of the learned model, the Jacobians, are used during optimization, resulting in significant synergy effects. In a series of standard experiments, we show that the integrated method performs better than using both methods sequentially.

  • 367.
    Örtenberg, Alexander
    et al.
    Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Medicine and Health Sciences.
    Magnusson, Maria
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering. Linköping University, Faculty of Medicine and Health Sciences.
    Sandborg, Michael
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Radiation Physics.
    Alm Carlsson, Gudrun
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Radiation Physics.
    Malusek, Alexandr
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences.
    Parallelisation of the Model-Based Iterative Reconstruction Algorithm DIRA (2016). In: Radiation Protection Dosimetry, ISSN 0144-8420, E-ISSN 1742-3406, Vol. 169, no 1-4, p. 405-409. Article in journal (Refereed)
    Abstract [en]

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code’s execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
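
    As a back-of-the-envelope check (an illustration, not a figure from the paper), Amdahl's law relates the speedup S on n cores to the parallelised fraction p of the runtime; the reported speedup of 15 on 16 cores corresponds to p of roughly 99.6%:

    ```latex
    S(n) = \frac{1}{(1-p) + p/n}, \qquad
    S(16) = 15 \;\Rightarrow\; p = \frac{14}{15}\cdot\frac{16}{15} = \frac{224}{225} \approx 0.996 .
    ```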
