The least-squares identification of FIR systems is analyzed under the assumptions that the noise is a bounded signal and the input signal is a pseudo-random binary sequence. A lower bound on the worst-case transfer-function error shows that the least-squares estimate of the transfer function diverges as the order of the FIR system is increased. This implies that, in the presence of the worst-case noise, the trade-off between the estimation error due to the disturbance and the bias error (due to unmodeled dynamics) is significantly different from the corresponding trade-off in the random-error case: with a worst-case formulation, the model complexity should not increase indefinitely as the size of the data set increases.
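The least-squares setup described above can be sketched as follows. All numerical values (data length, FIR order, the decaying impulse response, and the noise bound) are illustrative assumptions, not taken from the paper; the point is only to show the regression structure with a pseudo-random binary input and bounded noise.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 200, 8                       # data length and FIR order (illustrative)
true_h = 0.5 ** np.arange(n)        # hypothetical true impulse response
u = rng.choice([-1.0, 1.0], N)      # pseudo-random binary input sequence

# regression matrix: Phi[t, k] = u[t - k] (zero before the first sample)
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[:N - k]]) for k in range(n)]
)
v = 0.01 * rng.uniform(-1.0, 1.0, N)    # bounded noise, |v| <= 0.01
y = Phi @ true_h + v

# least-squares estimate of the impulse response
h_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
err = np.max(np.abs(h_hat - true_h))
```

For a fixed FIR order and a random realization of the bounded noise the estimate is accurate; the paper's point is that the worst-case (adversarial) noise within the same bound behaves very differently as the order grows.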
In a continuous-time nonlinear driftless control system, an involutive flow is a composition of input profiles that does not excite any Lie bracket. Such a flow composition is trivial, as it corresponds to a "forth and back" cyclic motion obtained by rewinding the system along the same path. The aim of this paper is to show that, on the contrary, when a (nonexact) discretization of the nonlinear driftless control system is steered along the same trivial input path, it produces a net motion, which is related to the gap between the discretization used and the exact discretization given by a Taylor expansion. These violations of involutivity can be used to provide an estimate of the local truncation error of numerical integration schemes. In the special case in which the state of the driftless control system admits a splitting into shape and phase variables, our result corresponds to saying that the geometric phases of the discretization need not obey an area rule, i.e., even zero-area cycles in shape space can lead to nontrivial geometric phases. © 2017 Elsevier B.V. All rights reserved.
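The phenomenon can be illustrated with a minimal one-input example of my own choosing (it is not the system used in the paper): for the scalar driftless system ẋ = (1 + x²)u, driving forward with u = +1 and then rewinding with u = -1 returns the continuous system exactly to its start, but a forward-Euler discretization of the same cycle leaves a net displacement of the order of the local truncation error.

```python
def roundtrip_gap(h, n):
    """Forward-Euler 'forth and back' cycle for the driftless system
    xdot = (1 + x^2) * u, starting from x = 0 (illustrative example)."""
    x = 0.0
    for _ in range(n):              # drive with u = +1 for time n*h
        x += h * (1.0 + x * x)
    for _ in range(n):              # rewind with u = -1 along the same path
        x -= h * (1.0 + x * x)
    return abs(x)                   # net motion despite a trivial input cycle

gap_coarse = roundtrip_gap(0.01, 100)   # total time 1 in each direction
gap_fine = roundtrip_gap(0.005, 200)    # same cycle, halved step size
```

Halving the step size shrinks the gap, consistent with the abstract's observation that the net motion estimates the truncation error of the integration scheme.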
In optimization algorithms used for on-line Model Predictive Control (MPC), linear systems of equations are often solved in each iteration. This is true both for Active Set methods and for Interior Point methods, and for linear MPC as well as for nonlinear MPC and hybrid MPC. The main computational effort is spent solving these linear systems of equations, and hence it is of great interest to solve them efficiently. Classically, the optimization problem has been formulated in one of two ways: one leads to a sparse linear system of equations involving relatively many variables to compute in each iteration, the other to a dense linear system of equations involving relatively few variables. In this work, it is shown that these two distinct formulations are not the only choices: an entire family of formulations with different levels of sparsity and numbers of variables can be created, and this extra degree of freedom can be exploited to obtain even better performance with the software and hardware at hand. This result also provides a better answer to a recurring question in MPC: should the sparse or the dense formulation be used?
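The two classical endpoints of the formulation family can be sketched for an unconstrained LTI example (all system matrices and weights below are illustrative assumptions; the paper's intermediate, partially condensed formulations are not reproduced here). The dense formulation eliminates the states via x = Ωx₀ + Γu; the sparse formulation keeps states as variables and solves the KKT system of the equality-constrained QP. Both yield the same input sequence.

```python
import numpy as np

# Illustrative double-integrator-like data (hypothetical values)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
N, nx, nu = 5, 2, 1
x0 = np.array([1.0, 0.0])

# --- Dense (condensed) formulation: eliminate states ---
Omega = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
Gamma = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
H_dense = Gamma.T @ Qbar @ Gamma + Rbar        # few variables, dense Hessian
u_dense = np.linalg.solve(H_dense, -Gamma.T @ Qbar @ Omega @ x0)

# --- Sparse formulation: keep states, solve the KKT system ---
# variables z = (x_1..x_N, u_0..u_{N-1}); constraints x_{k+1} = A x_k + B u_k
nz = N * (nx + nu)
Hs = np.block([[Qbar, np.zeros((N*nx, N*nu))],
               [np.zeros((N*nu, N*nx)), Rbar]])
E, e = np.zeros((N * nx, nz)), np.zeros(N * nx)
for k in range(N):
    E[k*nx:(k+1)*nx, k*nx:(k+1)*nx] = np.eye(nx)
    if k > 0:
        E[k*nx:(k+1)*nx, (k-1)*nx:k*nx] = -A
    E[k*nx:(k+1)*nx, N*nx + k*nu:N*nx + (k+1)*nu] = -B
e[:nx] = A @ x0
KKT = np.block([[Hs, E.T], [E, np.zeros((N*nx, N*nx))]])
z = np.linalg.solve(KKT, np.concatenate([np.zeros(nz), e]))
u_sparse = z[N*nx:N*nx + N*nu]                 # many variables, sparse system
```

The linear system in the dense branch has N·nu unknowns; the KKT system in the sparse branch has N·(2nx + nu) unknowns but is highly structured, which is the trade-off the family of formulations interpolates between.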
In optimization routines used for on-line Model Predictive Control (MPC), linear systems of equations are solved in each iteration. This is true both for Active Set (AS) solvers and for Interior Point (IP) solvers, and for linear MPC as well as for nonlinear MPC and hybrid MPC. The main computational effort is spent solving these linear systems of equations, and hence it is of great interest to solve them efficiently. In high-performance solvers for MPC, this is performed using Riccati recursions or generic sparsity-exploiting algorithms. To obtain this performance gain, the problem has to be formulated in a sparse way, which introduces more variables. The alternative is to use a smaller formulation in which the objective function Hessian is dense. In this work, it is shown that the structure can be exploited also when using the dense formulation. More specifically, it is shown that a standard Cholesky factorization for the dense formulation can be computed efficiently, resulting in a computational complexity that grows quadratically in the prediction horizon length instead of cubically, as for the generic Cholesky factorization.
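For reference, the generic dense Cholesky factorization whose cubic cost the abstract refers to can be sketched as follows; the Hessian below is a stand-in with the same H = ΓᵀΓ + R̄ shape as a condensed MPC Hessian, and the paper's structure-exploiting O(N²) variant is not reproduced here.

```python
import numpy as np

def cholesky(H):
    """Generic dense Cholesky factorization H = L L^T, costing O(n^3) flops."""
    n = H.shape[0]
    L = np.zeros_like(H)
    for j in range(n):
        # diagonal entry: remove contributions of previously computed columns
        L[j, j] = np.sqrt(H[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (H[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# a dense, positive-definite MPC-style Hessian (illustrative stand-in)
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 10))
H = G.T @ G + np.eye(10)
L = cholesky(H)
```

The inner loops make the cubic flop count explicit: for a condensed MPC Hessian whose dimension grows linearly in the horizon, this generic factorization scales cubically in the horizon length.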
The norm-optimal iterative learning control (ILC) algorithm for linear systems is extended to an estimation-based norm-optimal ILC algorithm where the controlled variables are not directly available as measurements. A separation lemma is presented, stating that if a stationary Kalman filter is used for linear time-invariant systems then the ILC design is independent of the dynamics in the Kalman filter. Furthermore, the objective function in the optimisation problem is modified to incorporate the full probability density function of the error. Utilising the Kullback–Leibler divergence leads to an automatic and intuitive way of tuning the ILC algorithm. Finally, the concept is extended to non-linear state space models using linearisation techniques, where it is assumed that the full state vector is estimated and used in the ILC algorithm. Stability and convergence properties for the proposed scheme are also derived.
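The underlying norm-optimal ILC update (without the estimation layer the paper adds) can be sketched on a lifted SISO system y = Gu; the impulse response, weights, and reference below are hypothetical choices for illustration only.

```python
import numpy as np

# Lifted system y = G u over n samples; g is a hypothetical impulse response
n = 20
g = 0.9 ** np.arange(n)
G = np.zeros((n, n))
for i in range(n):
    G[i, :i + 1] = g[i::-1]          # lower-triangular Toeplitz convolution
r = np.ones(n)                        # reference trajectory
We = np.eye(n)                        # error weight
Wu = 1e-4 * np.eye(n)                 # input weight
Wdu = 1e-2 * np.eye(n)                # iteration-to-iteration change weight

# Norm-optimal update: u_{k+1} = argmin_u  ||r - G u||^2_We
#                                        + ||u||^2_Wu + ||u - u_k||^2_Wdu
M = np.linalg.inv(G.T @ We @ G + Wu + Wdu)
u = np.zeros(n)
errs = []
for _ in range(30):                   # ILC iterations (trials)
    u = M @ (G.T @ We @ r + Wdu @ u)
    errs.append(np.linalg.norm(r - G @ u))
```

The tracking error shrinks over iterations toward the regularized optimum; in the estimation-based setting of the paper, r - Gu would be replaced by an error built from Kalman-filter estimates of the controlled variables.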
We present a system identification method for problems with partially missing inputs and outputs. The method is based on a subspace formulation and uses the nuclear norm heuristic for structured low-rank matrix approximation, with the missing input and output values as the optimization variables. We also present a fast implementation of the alternating direction method of multipliers (ADMM) to solve regularized or non-regularized nuclear norm optimization problems with Hankel structure. This makes it possible to solve quite large system identification problems. Experimental results show that the nuclear norm optimization approach to subspace identification is comparable to the standard subspace methods when no inputs and outputs are missing, and that the performance degrades gracefully as the percentage of missing inputs and outputs increases.
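The role of the Hankel structure and the nuclear norm as a convex surrogate for rank can be illustrated on clean data from a low-order system (this is a toy demonstration, not the paper's ADMM solver; the recursion coefficients are hypothetical).

```python
import numpy as np

def hankel_matrix(y, rows):
    """Hankel matrix with `rows` rows built from the scalar sequence y."""
    cols = len(y) - rows + 1
    return np.array([y[i:i + cols] for i in range(rows)])

# Output of a hypothetical 2nd-order system: y obeys a 2-term linear recursion
y = np.zeros(40)
y[0], y[1] = 1.0, 1.5
for k in range(2, 40):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2]

H = hankel_matrix(y, 10)
s = np.linalg.svd(H, compute_uv=False)
nuclear_norm = s.sum()                      # convex surrogate for rank(H)
rank = int(np.sum(s > 1e-8 * s[0]))         # numerical rank
```

With exact data the Hankel matrix has rank equal to the system order (2 here); missing or noisy entries destroy this, which is why the paper optimizes over the missing values with the nuclear norm as objective.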
Some results on robustness properties of the well-known Smith controller are given. In particular, we focus on the gain margins and the phase margin for this controller.
We propose a method for model reduction on a given frequency range, without the use of input and output filter weights. The method uses a nonlinear optimization approach to minimize a frequency-limited H2-like cost function. An important contribution of the paper is the derivation of the gradient of the proposed cost function. The facts that we have a closed-form expression for the gradient and that care has been taken to make it computationally efficient to evaluate enable us to use off-the-shelf optimization software efficiently to solve the optimization problem. © 2014 Elsevier B.V. All rights reserved.
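The flavour of a frequency-limited H2-like cost can be conveyed with a grid approximation over the band of interest; the models, band, and step size below are hypothetical, and a finite-difference gradient stands in for the paper's closed-form expression.

```python
import numpy as np

w = np.linspace(0.1, 2.0, 60)           # frequency band of interest [rad/s]
dw = w[1] - w[0]

def G(s):                                # hypothetical full-order model
    return 1.0 / (s**2 + 0.4 * s + 1.0)

def Gr(s, theta):                        # reduced model b / (s + a)
    a, b = theta
    return b / (s + a)

def cost(theta):                         # grid approximation of the
    e = G(1j * w) - Gr(1j * w, theta)    # frequency-limited H2-like cost
    return np.sum(np.abs(e) ** 2) * dw

theta = np.array([1.0, 1.0])
c0 = cost(theta)
for _ in range(300):                     # plain gradient descent; the paper
    grad = np.zeros(2)                   # uses a closed-form gradient instead
    for i in range(2):
        d = np.zeros(2); d[i] = 1e-6
        grad[i] = (cost(theta + d) - cost(theta - d)) / 2e-6
    theta = theta - 0.05 * grad
c1 = cost(theta)
```

A closed-form gradient removes the extra cost-function evaluations per step, which is what makes off-the-shelf quasi-Newton solvers efficient in the paper's setting.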
The problem of minimizing a weighted sum of the input and output variances of a linear scalar system is considered. This is viewed as an optimization problem with the parameters of the regulator as unknowns, and it is proved that if the regulator is flexible enough, then every local minimum of this problem is a global minimum. This result is useful if a gradient method is used to find the optimal regulator.
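The claim can be visualized in the simplest special case of a scalar system with a static-gain regulator (my construction for illustration; the paper's result concerns more flexible regulator parametrizations). Over the stabilizing gains, the weighted variance cost turns out to be unimodal, so a gradient method cannot get stuck.

```python
import numpy as np

# Scalar system x(t+1) = a x(t) + u(t) + w(t), var(w) = 1, regulator u = -l x.
a, rho = 1.2, 0.5                 # unstable plant, input-variance weight

def J(l):
    acl = a - l                   # closed-loop pole, stable for |acl| < 1
    P = 1.0 / (1.0 - acl**2)      # stationary variance of x
    return P + rho * (l**2) * P   # var(y) + rho * var(u), with y = x

ls = np.linspace(a - 0.999, a + 0.999, 2001)   # grid of stabilizing gains
vals = np.array([J(l) for l in ls])
k_min = int(vals.argmin())
```

The cost blows up at both ends of the stability interval and decreases monotonically toward a single interior minimum, matching the local-equals-global property in this special case.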
This paper presents an algorithm for stabilizing a linear system with bounded disturbances by a sampled-data regulator. The prior knowledge required is that the system has finite order and that the disturbances are bounded; the bounds on the disturbances and on the order of the system need not be known. The implications of this technical result for theoretical convergence issues in adaptive control are also discussed.