Biologically Inspired Online Learning of Visual Autonomous Driving
2014 (English). In: Proceedings of the British Machine Vision Conference 2014 / [ed] Michel Valstar, Andrew French, Tony Pridmore, BMVA Press, 2014, pp. 137-156. Conference paper (Refereed)
While autonomous driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low-level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes, Hebbian learning. We show that this is sufficient for online learning of visual autonomous driving, where the system learns to map low-level image features directly to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state-of-the-art engineered batch learning algorithms.
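The constant time and memory bounds come from keeping a fixed-size associative mapping that is updated online rather than storing training samples. The following is a minimal illustrative sketch of such an online Hebbian feature-to-control mapping; the class name, the exponential-decay update, and all parameters are assumptions for illustration, not the paper's exact qHebb formulation (which uses a non-linear update and coherence weighting).

```python
import numpy as np

class OnlineHebbSketch:
    """Illustrative online Hebbian mapping from feature vectors to
    control signals. A sketch under assumptions, not the qHebb algorithm:
    the decay-based update here stands in for the paper's non-linear
    update and coherence weighting."""

    def __init__(self, n_features, n_outputs, decay=0.999):
        # Fixed-size association matrix: memory use is constant,
        # independent of how many training samples are presented.
        self.C = np.zeros((n_outputs, n_features))
        self.decay = decay

    def train(self, x, y):
        # Hebbian outer-product update with exponential forgetting;
        # each sample is folded in and then discarded (constant time).
        self.C = self.decay * self.C + np.outer(y, x)

    def predict(self, x):
        # Map current image features directly to a control signal.
        return self.C @ x
```

After a short manual-driving phase supplying (feature, steering) pairs to `train`, `predict` can take over, which mirrors the seamless hand-over described in the abstract.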
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-110890
DOI: 10.5244/C.28.94
ISBN: 1901725529
OAI: oai:DiVA.org:liu-110890
DiVA: diva2:750039
British Machine Vision Conference 2014, Nottingham, UK, September 1-5, 2014
The video shows the online-learning autonomous driving system in operation. Data from the system has been synchronized with the video and is shown overlaid. The actuated steering signal is visualized as the position of a blue dot; the steering signal predicted by the system is visualized by a green circle. During autonomous operation, these two coincide. When the vehicle is controlled manually (training), the word MANUAL is displayed in the video.

The first sequence evaluates the ability of the system to stay on the road during road reconfiguration. The results indicate that the system primarily reacts to features on the road, not features in the surrounding area.

The second sequence evaluates the multi-modal abilities of the system. After initial training, the vehicle follows the outer track, going straight through the two three-way junctions. By forcing the vehicle to turn right at one intersection, by means of a short application of manual control, a new mode is introduced. When the system later reaches the same intersection, the vehicle either turns or continues straight ahead, depending on which of the two modes is the stronger. The ordering of the modes depends on slight variations in the approach to the junction and on noise.

The third sequence is longer, evaluating both the multi-modal abilities and the effects of track reconfiguration.

Container: MP4. Codec: h264, 1280x720.