HiperLearn: A High Performance Learning Architecture
Granlund, Gösta (Linköping University, Department of Electrical Engineering, Computer Vision; The Institute of Technology)
Forssén, Per-Erik (Linköping University, Department of Electrical Engineering, Computer Vision; The Institute of Technology), ORCID iD: 0000-0002-5698-5983
Johansson, Björn (Linköping University, Department of Electrical Engineering, Computer Vision; The Institute of Technology)
2002 (English). Report (Other academic).
Abstract [en]

A new architecture for learning systems has been developed. A number of design features in combination result in very high performance and excellent robustness. The architecture uses a monopolar channel information representation. The channel representation implies a partially overlapping mapping of signals into a higher-dimensional space, such that a flexible but continuous restructuring of the mapping can be made. The high-dimensional mapping introduces locality in the information representation, of the kind directly available in wavelet or filter outputs. Single-level maps using this representation can produce closed decision regions, thereby eliminating the need for costly back-propagation. The monopolar property implies that data uses only one polarity, say positive values, in addition to zero, allowing zero to represent no information. This leads to an efficient sparse representation.
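The abstract does not spell out the encoding itself; the following minimal Python sketch illustrates the idea under common assumptions from the channel-representation literature: cos^2 kernels with unit channel spacing, giving a sparse, monopolar vector in which roughly three overlapping channels are nonzero and zero means "no information". The function name channel_encode and the kernel parameters are illustrative, not taken from the report.

    import numpy as np

    def channel_encode(x, centers, width=1.0):
        # Overlapping cos^2 kernels: each channel responds only when x lies
        # within 1.5*width of its center, so the output is sparse and >= 0.
        d = (x - centers) / width
        resp = np.zeros(len(centers))
        active = np.abs(d) < 1.5
        resp[active] = np.cos(d[active] * np.pi / 3.0) ** 2
        return resp

    centers = np.arange(11.0)            # channel centers at 0, 1, ..., 10
    print(channel_encode(4.3, centers))  # only channels 3, 4, 5 are nonzero

Encoding a scalar this way maps it into a higher-dimensional space where locality is explicit: nearby values share active channels, while distant values produce disjoint activation patterns.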

The processing mode of the architecture is association, where the mapping of feature inputs onto desired state outputs is learned from a representative training set. The sparse monopolar representation, together with locality and individual learning rates, allows fast optimization, as the system exhibits linear complexity. Mapping into multiple channels gives a strategy for using confidence statements in the data, leading to low sensitivity to noise in the features. The result is an architecture that allows systems with some hundred thousand features, described by some hundred thousand samples, to be trained typically in less than an hour. Experiments that demonstrate functionality and noise immunity are presented. The architecture has been applied to the design of hypercomplex operations for view-centered object recognition in robot vision.
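The abstract does not give the exact optimization procedure, but the associative mapping it describes can be sketched as a nonnegative linkage matrix C taking feature channel vectors a to state channel vectors u (u ≈ C a), with each output channel solved independently, which is what makes local, per-channel learning and linear complexity plausible. The sketch below uses nonnegative least squares as one standard way to enforce the monopolar constraint on the links; that choice and the helper name learn_linkage are assumptions for illustration.

    import numpy as np
    from scipy.optimize import nnls

    def learn_linkage(A, U):
        # A: (n_samples, n_feature_channels) monopolar feature vectors.
        # U: (n_samples, n_state_channels) desired state channel vectors.
        # Each row of C is fitted independently (locality), so the work
        # grows linearly with the number of output channels.
        C = np.zeros((U.shape[1], A.shape[1]))
        for k in range(U.shape[1]):
            C[k], _ = nnls(A, U[:, k])   # nonnegative (monopolar) weights
        return C

    # Usage: U_hat = A_new @ learn_linkage(A, U).T gives channel-coded
    # state outputs for new feature vectors.

Because both the feature vectors and the linkage weights are sparse and nonnegative, only a small fraction of links contributes to any given output, which is consistent with the training speed reported above.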

Place, publisher, year, edition, pages
2002. 20 p.
Series
LiTH-ISY-R, ISSN 1400-3902; 2409
Keyword [en]
channel representation, monopolar, association
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:liu:diva-36330
ISRN: LiTH-ISY-R-2409
Local ID: 30988
OAI: oai:DiVA.org:liu-36330
DiVA: diva2:257178
Available from: 2009-10-10. Created: 2009-10-10. Last updated: 2015-12-10. Bibliographically approved.

Open Access in DiVA

No full text
