HiperLearn: A High Performance Learning Architecture
Granlund, Gösta (Linköping University, Department of Electrical Engineering, Image Processing; Linköping University, The Institute of Technology)
Forssén, Per-Erik (Linköping University, Department of Electrical Engineering, Image Processing; Linköping University, The Institute of Technology; ORCID iD: 0000-0002-5698-5983)
Johansson, Björn (Linköping University, Department of Electrical Engineering, Image Processing; Linköping University, The Institute of Technology)
2002 (English) Report (Other academic)
Abstract [en]

A new architecture for learning systems has been developed. A number of particular design features in combination result in very high performance and excellent robustness. The architecture uses a monopolar channel information representation. The channel representation implies a partially overlapping mapping of signals into a higher-dimensional space, such that a flexible but continuous restructuring of the mapping can be made. The high-dimensional mapping introduces locality in the information representation, which is directly available in wavelet or filter outputs. Single-level maps using this representation can produce closed decision regions, thereby eliminating the need for costly back-propagation. The monopolar property implies that data utilize only one polarity, say positive values, in addition to zero, allowing zero to represent no information. This leads to an efficient sparse representation.
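As an illustration of the channel encoding, the sketch below is a minimal example of our own, not code from the report; it encodes a scalar into a set of overlapping cos^2 channels, a kernel commonly used in the channel-representation literature, and the function name and all parameter values are hypothetical. Only a few neighboring channels respond, and every value is non-negative, giving exactly the sparse monopolar vector described above.

    import numpy as np

    def channel_encode(x, n_channels=8, lo=0.0, hi=1.0):
        # Channel centers evenly spaced over [lo, hi]; the kernel width
        # equals the center spacing, so neighboring channels overlap.
        centers = np.linspace(lo, hi, n_channels)
        width = (hi - lo) / (n_channels - 1)
        d = (x - centers) / width  # distance to each center, in units of spacing
        # cos^2 kernel with support |d| < 3/2: about three channels respond;
        # all values are >= 0 (monopolar) and the rest are exactly zero (sparse).
        return np.where(np.abs(d) < 1.5, np.cos(np.pi * d / 3.0) ** 2, 0.0)

    print(np.round(channel_encode(0.40), 3))
    # Only three of the eight channels are nonzero for any input value.

A vector of all zeros thus means "no information", while the positions and relative magnitudes of the few active channels encode the signal value locally.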

The processing mode of the architecture is association, where the mapping of feature inputs onto desired state outputs is learned from a representative training set. The sparse monopolar representation, together with locality and individual learning rates, allows fast optimization, as the system exhibits linear complexity. Mapping into multiple channels gives a strategy for using confidence statements in data, leading to a low sensitivity to noise in features. The result is an architecture that allows systems with a complexity of some hundred thousand features, described by some hundred thousand samples, to be trained typically in less than an hour. Experiments that demonstrate functionality and noise immunity are presented. The architecture has been applied to the design of hypercomplex operations for view-centered object recognition in robot vision.
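The associative mode can be pictured as a linkage matrix C taking channel-coded feature vectors a to outputs u = C a. The sketch below is our own illustration under stated assumptions, not the report's actual optimization procedure: it uses projected gradient steps with per-feature learning rates and an assumed damping factor of 0.5, and all names are hypothetical. Because the feature vectors are sparse, each update effectively touches only the few active channels, which is what makes the per-sample cost linear in the number of nonzero features.

    import numpy as np

    def train_linkage(A, U, n_iter=200):
        # A: (n_features, n_samples) channel-coded inputs, one sample per column.
        # U: (n_outputs, n_samples) channel-coded desired outputs.
        C = np.zeros((U.shape[0], A.shape[0]))
        # Individual learning rates: normalize by each feature's energy,
        # damped by 0.5 (an assumption) so overlapping channels do not overshoot.
        lr = 0.5 / np.maximum((A ** 2).sum(axis=1), 1e-12)
        for _ in range(n_iter):
            E = U - C @ A              # residual over the whole training set
            C += (E @ A.T) * lr        # gradient step, scaled per feature
            np.maximum(C, 0.0, out=C)  # monopolar constraint: keep C >= 0
        return C

Holding C non-negative mirrors the monopolar property of the data, and the single-level structure, with no hidden layers, is what removes the need for back-propagation.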

Place, publisher, year, edition, pages
2002, p. 20
Series
LiTH-ISY-R, ISSN 1400-3902 ; 2409
Keywords [en]
channel representation, monopolar, association
Identifiers
URN: urn:nbn:se:liu:diva-36330
ISRN: LiTH-ISY-R-2409
Local ID: 30988
OAI: oai:DiVA.org:liu-36330
DiVA id: diva2:257178
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2015-12-10 Bibliographically checked

Open Access in DiVA

Full text is not available in DiVA
