This report is a complement to the working document [1], where a sparse associative network is described. This report shows that the network learning rule in [1] can be viewed as the solution to a weighted least squares problem. This means that we can apply the theoretical framework of least squares problems and compare the network rule with other iterative algorithms that solve the same problem. The learning rule is compared with the gradient search algorithm and the RPROP algorithm in a simple synthetic experiment. The gradient rule has the slowest convergence, while the associative and RPROP rules converge at similar rates. The associative learning rule, however, has a smaller initial error than the RPROP rule.
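As a rough illustration of the shared problem behind these rules, the following sketch sets up a small synthetic weighted least squares problem and solves it both in closed form and by plain gradient descent. The setup (matrix sizes, weights, step size) is hypothetical and not taken from the report's experiment.

```python
import numpy as np

# Hypothetical synthetic setup: solve the weighted least squares problem
#   min_c || W^(1/2) (A c - f) ||^2
# which is the common cost that the compared iterative rules minimize.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))        # feature (activity) matrix
c_true = rng.standard_normal(10)         # unknown coefficient vector
f = A @ c_true                           # desired responses
w = rng.uniform(0.5, 1.5, size=50)       # per-sample weights

# Closed-form solution of the normal equations: (A^T W A) c = A^T W f
AtWA = A.T @ (w[:, None] * A)
AtWf = A.T @ (w * f)
c_closed = np.linalg.solve(AtWA, AtWf)

# Plain gradient descent on the same cost, for comparison
c = np.zeros(10)
step = 1.0 / np.linalg.norm(AtWA, 2)     # step below 1/L for stability
for _ in range(5000):
    grad = AtWA @ c - AtWf               # gradient of the weighted cost
    c -= step * grad

print(np.allclose(c, c_closed, atol=1e-4))
```

With enough iterations the gradient iterate approaches the closed-form solution; the interest in the report lies in how fast the different update rules get there.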
The same experiment also shows that convergence is faster under a monopolar constraint on the solution, i.e. when the solution is constrained to be non-negative. The least squares error is slightly higher, but the norm of the solution is smaller, which gives a smaller interpolation error.
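A minimal sketch of such a monopolar constraint, assuming a generic projected gradient scheme (gradient step followed by clipping negative coefficients) rather than the report's specific rule:

```python
import numpy as np

# Hypothetical sketch: solve  min_{c >= 0} || A c - f ||^2
# by projected gradient descent, and compare against the
# unconstrained least squares solution.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(40, 15))         # non-negative features
f = A @ rng.uniform(0.0, 1.0, size=15)           # realizable responses
f += 0.05 * rng.standard_normal(40)              # small additive noise

c_free, *_ = np.linalg.lstsq(A, f, rcond=None)   # unconstrained solution

c_pos = np.zeros(15)
step = 1.0 / np.linalg.norm(A.T @ A, 2)          # stable step size
for _ in range(20000):
    # gradient step on the residual, then project onto c >= 0
    c_pos = np.maximum(0.0, c_pos - step * (A.T @ (A @ c_pos - f)))

res_free = np.linalg.norm(A @ c_free - f)
res_pos = np.linalg.norm(A @ c_pos - f)
# The constrained residual can never beat the unconstrained minimum,
# but the constrained solution contains no cancelling negative terms.
print("residuals:", res_free, res_pos)
print("solution norms:", np.linalg.norm(c_free), np.linalg.norm(c_pos))
```

The unconstrained residual is a global minimum, so the constrained fit is never better in the least squares sense; the trade-off described in the report is that the better-behaved (smaller, non-negative) solution interpolates better.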
The report also discusses a generalization of the least squares model, which includes other known function approximation models.
[1] G. Granlund. Parallel Learning in Artificial Vision Systems: Working Document. Dept. EE, Linköping University, 2000.
2001, 23 p.