A lattice-ladder multilayer perceptron (LLMLP) is an appealing structure for advanced signal processing in the sense that it is nonlinear, possesses an infinite impulse response, and its stability is simple to monitor during training. However, even a moderate implementation of LLMLP training is hindered by the fact that a lot of storage and computational power must be allocated. In this paper we deal with the problem of the computational efficiency of LLMLP training algorithms that are based on the computation of gradients, e.g., backpropagation, conjugate gradient, or Levenberg-Marquardt. The paper aims to explore the most computationally demanding calculations: the computation of gradients for the lattice (rotation) parameters. We find, and propose to use for the training of several LLMLP architectures, the computation of exact gradients that is simplest in terms of storage and number of delay elements, under the assumption that the coefficients of the lattice-ladder filter are held stationary.
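For context, the following is a minimal sketch of one normalized lattice-ladder IIR filter of the kind the abstract refers to; the NumPy realization, the names theta (rotation angles) and v (ladder taps), and the indexing convention are illustrative assumptions, not the paper's implementation. It shows why stability monitoring is simple: with the rotation parametrization, the reflection coefficients are sin(theta_j), so their magnitude never exceeds 1 for any real theta_j.

    import numpy as np

    def lattice_ladder_filter(x, theta, v):
        """Filter signal x through an M-th order lattice-ladder
        (normalized lattice) IIR filter -- a sketch under assumed
        notation.  theta: M rotation angles; v: M+1 ladder taps."""
        M = len(theta)
        c, s = np.cos(theta), np.sin(theta)
        b = np.zeros(M + 1)              # delay elements: b[j] holds b_j(n-1)
        y = np.empty(len(x))
        for n, xn in enumerate(x):
            f = xn                       # forward signal enters at stage M
            for j in range(M, 0, -1):    # rotate the pair [f_j, b_{j-1}(n-1)]
                f_next = c[j - 1] * f - s[j - 1] * b[j - 1]
                b[j]   = s[j - 1] * f + c[j - 1] * b[j - 1]
                f = f_next
            b[0] = f                     # close the recursion at stage 0
            y[n] = v @ b                 # ladder part: tapped backward signals
        return y

    # Illustrative usage: a 3rd-order section driven by white noise.
    rng = np.random.default_rng(0)
    y = lattice_ladder_filter(rng.standard_normal(256),
                              theta=np.array([0.3, -0.2, 0.1]),
                              v=np.ones(4))

In an LLMLP, filters of this kind sit in the neurons' synapses and feed nonlinear activations; training then requires gradients of the output with respect to each theta_j, which is the computationally demanding part addressed above.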