Subspace Optimization Techniques for Classification Problems
Report (Other academic), 2003 (English)
The nonlinear conjugate gradient method is a powerful tool in the search for Bayes-error-optimal linear subspaces for classification problems. This report surveys techniques for finding linear subspaces in which the classification error is minimized. Summary-statistics models of normal populations are used to form smooth, non-convex objective functions of a linear transformation that reduces the dimensionality. Objective functions based on the Mahalanobis and Bhattacharyya distances, which are closely related to the probability of misclassification, are derived together with their subspace gradients. Several approaches to minimizing these objective functions are investigated: Householder and Givens parameterizations, as well as steepest descent and conjugate gradient methods. The methods are evaluated on experimental data with respect to convergence rate and subspace classification accuracy.
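To illustrate the kind of objective the report optimizes, the sketch below maximizes the Bhattacharyya distance between two projected Gaussian populations over an orthonormal dimension-reducing transformation. The function names, the finite-difference gradient, and the QR re-orthonormalization are illustrative assumptions for this sketch; the report itself derives analytic subspace gradients and uses Householder/Givens parameterizations and conjugate gradient methods rather than plain steepest ascent.

```python
import numpy as np

def bhattacharyya(mu1, S1, mu2, S2):
    """Bhattacharyya distance between two Gaussian densities
    N(mu1, S1) and N(mu2, S2)."""
    S = 0.5 * (S1 + S2)
    dm = mu2 - mu1
    term1 = 0.125 * dm @ np.linalg.solve(S, dm)
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

def projected_distance(W, mu1, S1, mu2, S2):
    """Bhattacharyya distance after projecting both populations onto
    the columns of the d x k matrix W (the subspace objective)."""
    return bhattacharyya(W.T @ mu1, W.T @ S1 @ W,
                         W.T @ mu2, W.T @ S2 @ W)

def optimize_subspace(mu1, S1, mu2, S2, k, steps=200, lr=0.1, h=1e-5, seed=0):
    """Steepest ascent over orthonormal d x k frames, using a
    finite-difference gradient and QR re-orthonormalization.
    (Illustrative stand-in for the report's conjugate gradient scheme.)"""
    rng = np.random.default_rng(seed)
    d = mu1.size
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(steps):
        f0 = projected_distance(W, mu1, S1, mu2, S2)
        G = np.zeros_like(W)
        for i in range(d):
            for j in range(k):
                Wp = W.copy()
                Wp[i, j] += h
                G[i, j] = (projected_distance(Wp, mu1, S1, mu2, S2) - f0) / h
        # Ascend in the ambient space, then project back to an orthonormal frame.
        W, _ = np.linalg.qr(W + lr * G)
    return W
```

The objective depends only on the column space of W (right-multiplication by an orthogonal matrix leaves the projected distance unchanged), so re-orthonormalizing with QR after each step does not alter the quantity being maximized. For two 4-D populations with identity covariances and means separated along one axis, the optimized 2-D subspace rotates to contain the mean-difference direction.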
Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2003, 71 p.
LiTH-ISY-R, ISSN 1400-3902 ; 2534
Identifiers
URN: urn:nbn:se:liu:diva-55970
ISRN: LiTH-ISY-R-2534
OAI: oai:DiVA.org:liu-55970
DiVA: diva2:316766