In this paper, we introduce an optimal average cost learning framework to solve the output regulation problem for linear systems with unknown dynamics. Our framework aims to design a controller that achieves output tracking and disturbance rejection while minimizing the average cost. We derive the Hamilton-Jacobi-Bellman (HJB) equation for the optimal average cost problem and develop a reinforcement learning algorithm to solve it. The proposed algorithm is an off-policy routine that learns the optimal average cost solution in a completely model-free fashion. We rigorously analyze the convergence of the proposed algorithm. Compared to previous approaches to optimal tracking controller design, we eliminate the need for a judicious selection of the discounting factor, and the proposed algorithm can be implemented completely model-free. We support our theoretical results with a simulation example. (C) 2019 Elsevier Ltd. All rights reserved.
It is a well-known fact that externally positive linear systems may fail to have a minimal positive realization. In order to investigate these cases, we introduce the notion of minimal eventually positive realization, for which the state update matrix becomes positive after a certain power. Eventually positive realizations capture the idea that in the impulse response of an externally positive system the state of a minimal realization may fail to be positive, but only transiently. As a consequence, we show that in discrete time it is possible to use downsampling to obtain minimal positive realizations matching decimated sequences of Markov coefficients of the impulse response. In continuous time, instead, if the sampling time is chosen sufficiently long, a minimal eventually positive realization always leads to a sampled realization which is minimal and positive.
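As an illustrative sketch of the eventual positivity notion (the matrix below is a made-up 2 x 2 example, not one from the paper), one can check numerically after which power all powers of a state-update matrix become entrywise positive:

```python
import numpy as np

# Made-up 2x2 illustration (not from the paper): a symmetric state-update
# matrix with one negative entry whose powers are all entrywise positive
# from some power onward.
A = np.array([[1.0, 1.0],
              [1.0, -0.5]])

def entrywise_positive(M):
    return bool(np.all(M > 0))

# Smallest k0 (within a finite horizon) with A^k > 0 for every k >= k0.
horizon = 12
powers = {k: np.linalg.matrix_power(A, k) for k in range(1, horizon)}
k0 = min(k for k in powers
         if all(entrywise_positive(powers[j]) for j in range(k, horizon)))
print(k0)   # the power index after which positivity sets in
```

Here the dominant eigenvalue (1.5) has strictly positive left and right eigenvectors, which is what forces the powers to become positive eventually.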
The aim of this paper is to modify continuous-time bounded confidence opinion dynamics models so that "changes of opinion" (intended as changes of the sign of the initial states) are never induced during the evolution. Such sign invariance can be achieved by letting opinions of different sign localized near the origin interact negatively, or neglect each other, or even repel each other. In all cases, it is possible to obtain sign-preserving bounded confidence models with state-dependent connectivity and with a clustering behavior similar to that of a standard bounded confidence model. (C) 2018 Elsevier Ltd. All rights reserved.
State-space smoothing has found many applications in science and engineering. Under linear and Gaussian assumptions, smoothed estimates can be obtained using efficient recursions, for example the Rauch–Tung–Striebel and Mayne–Fraser algorithms. Such schemes are equivalent to linear algebraic techniques that minimize a convex quadratic objective function with structure induced by the dynamic model. These classical formulations fall short in many important circumstances. For instance, smoothers obtained using quadratic penalties can fail when outliers are present in the data, and cannot track impulsive inputs and abrupt state changes. Motivated by these shortcomings, generalized Kalman smoothing formulations have been proposed in the last few years, replacing quadratic models with more suitable, often nonsmooth, convex functions. In contrast to classical models, these general estimators require the use of iterative algorithms, which have received increased attention from the control, signal processing, machine learning, and optimization communities. In this survey we show that the optimization viewpoint provides the control and signal processing communities great freedom in the development of novel modeling and inference frameworks for dynamical systems. We discuss general statistical models for dynamic systems, making full use of nonsmooth convex penalties and constraints, and providing links to important models in signal processing and machine learning. We also survey optimization techniques for these formulations, paying close attention to dynamic problem structure. Modeling concepts and algorithms are illustrated with numerical examples. (C) 2017 Elsevier Ltd. All rights reserved.
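The contrast between quadratic and robust penalties can be sketched on a toy problem (hypothetical random-walk signal and tuning values, with the Huber loss handled by iteratively reweighted least squares; none of this is the survey's own code):

```python
import numpy as np

# Toy sketch of robust batch smoothing: a quadratic data penalty versus a
# Huber penalty handled by iteratively reweighted least squares (IRLS).
rng = np.random.default_rng(0)
T = 100
x_true = np.cumsum(0.1 * rng.standard_normal(T))   # random-walk state
y = x_true + 0.1 * rng.standard_normal(T)          # noisy measurements
y[50] += 10.0                                      # one gross outlier

lam = 10.0                                         # smoothness weight
D = np.diff(np.eye(T), axis=0)                     # first-difference operator

def smooth(y, w):
    """Minimize sum_t w_t * (y_t - x_t)^2 + lam * ||D x||^2."""
    W = np.diag(w)
    return np.linalg.solve(W + lam * D.T @ D, W @ y)

x_quad = smooth(y, np.ones(T))                     # quadratic smoother

# Huber smoother via IRLS: residuals beyond delta get downweighted.
delta = 0.3
x_hub = x_quad.copy()
for _ in range(20):
    r = np.abs(y - x_hub)
    x_hub = smooth(y, np.minimum(1.0, delta / np.maximum(r, 1e-12)))

err_quad = np.max(np.abs(x_quad - x_true))
err_hub = np.max(np.abs(x_hub - x_true))
print(err_quad, err_hub)   # the Huber smoother is far less affected by the outlier
```

The quadratic smoother is pulled toward the outlier, while the reweighted (Huber) smoother essentially ignores it, which is the qualitative point made above.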
In this article we present a parametric branch and bound algorithm for computation of optimal and suboptimal solutions to parametric mixed-integer quadratic programs and parametric mixed-integer linear programs. The algorithm returns an optimal or suboptimal parametric solution with the level of suboptimality requested by the user. An interesting application of the proposed parametric branch and bound procedure is suboptimal explicit MPC for hybrid systems, where the introduced user-defined suboptimality tolerance reduces the storage requirements and the online computational effort, or even enables the computation of a suboptimal MPC controller in cases where the computation of the optimal MPC controller would be intractable. Moreover, stability of the system in closed loop with the suboptimal controller can be guaranteed a priori.
The main objective of this work is to compare different convex relaxations for Model Predictive Control (MPC) problems with mixed real-valued and binary-valued control signals. In the problem description considered, the objective function is quadratic, the dynamics are linear, and the inequality constraints on states and control signals are all linear. The relaxations are related theoretically, and the quality of the bounds and the computational complexities are compared in numerical experiments. The investigated relaxations include the Quadratic Programming (QP) relaxation, the standard Semidefinite Programming (SDP) relaxation, and an equality constrained SDP relaxation. The equality constrained SDP relaxation appears to be new in the context of hybrid MPC, and the results presented in this work indicate that it can be useful as an alternative relaxation, which is less computationally demanding than the ordinary SDP relaxation and often gives a better bound than the QP relaxation. Furthermore, it is discussed how the results from the SDP relaxations can be used to generate suboptimal solutions to the control problem. Moreover, it is also shown that the equality constrained SDP relaxation is equivalent to a QP in an important special case.
In this paper, we consider the problem of identifying a linear map from measurements which are subject to intermittent and arbitrarily large errors. This is a fundamental problem in many estimation-related applications such as fault detection, state estimation in lossy networks, hybrid system identification, and robust estimation. The problem is hard because it exhibits some intrinsic combinatorial features. Therefore, obtaining an effective solution necessitates relaxations that are both solvable at a reasonable cost and effective in the sense that they can return the true parameter vector. The current paper discusses a nonsmooth convex optimization approach and provides a new analysis of its behavior. In particular, it is shown that under appropriate conditions on the data, an exact estimate can be recovered from data corrupted by a large (even infinite) number of gross errors. (C) 2016 Elsevier Ltd. All rights reserved.
In this paper the question of estimating the order in the context of subspace methods is addressed. Three different approaches are presented and their asymptotic properties derived. Two of these methods are based on the information contained in the estimated singular values, while the third method is based on the estimated innovation variance. The case with observed inputs is treated as well as the case without exogenous inputs. The two methods based on the singular values are shown to be consistent under fairly mild assumptions, while the same result for the third approach is only obtained on a generic set. The former two can be applied to Larimore-type as well as MOESP-type procedures, whereas the third applies only to Larimore-type algorithms. This has implications for the estimation of the order of systems that are close to the exceptional set, as is shown in a numerical example. All the estimation methods involve the choice of a penalty term. Sufficient conditions on the penalty term to guarantee consistency are derived. The effects of different choices of the penalty term are investigated in a simulation study. (C) 2001 Elsevier Science Ltd. All rights reserved.
Three different order estimation criteria in the context of subspace algorithms are introduced and sufficient conditions for strong consistency are derived. A simulation study points to open questions.
In this paper the effect of some weighting matrices on the asymptotic variance of the estimates of linear discrete-time state space systems estimated using subspace methods is investigated. The analysis deals with systems with white observed inputs or without observed inputs, and refers to the Larimore type of subspace procedures. The main result expresses the asymptotic variance of the system matrix estimates in canonical form as a function of some of the user choices, clarifying the question of how to choose them optimally. It is shown that the CCA weighting scheme leads to optimal accuracy. The expressions for the asymptotic variance can be implemented more efficiently than the ones previously published.
A promising method for estimation of the time-delay in continuous-time linear dynamical systems uses the phase of the all-pass part of a discrete-time model of the system. We have discovered that this method can sometimes fail totally and we suggest a method for avoiding such failures.
The state estimation problem for linear systems with linear state equality constraints was dealt with in Ko &amp; Bitmead [Ko, S., &amp; Bitmead, R. (2007). State estimation for linear systems with state equality constraints. Automatica, 43, 1363-1368]. In this correspondence, it is first shown that a necessary assumption on the covariance of the process noise is missing in the main result of the paper. It is then shown that the main result of the paper can be achieved in a convenient and more general way without any additional assumptions on the covariance of the process noise except positive definiteness.
The first-order stable spline (SS-1) kernel (also known as the tuned-correlated kernel) is used extensively in regularized system identification, where the impulse response is modeled as a zero-mean Gaussian process whose covariance function is given by a well-designed and tuned kernel. In this paper, we discuss the maximum entropy properties of this kernel. In particular, we formulate the exact maximum entropy problem solved by the SS-1 kernel without Gaussian and uniform sampling assumptions. Under a general sampling assumption, we also derive the special structure of the SS-1 kernel (e.g. its tridiagonal inverse and its factorization have closed-form expressions), and give it a maximum entropy covariance completion interpretation.
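The claimed tridiagonal-inverse structure is easy to check numerically; the hyperparameters `c` and `alpha` below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of the stated structure: the SS-1 / tuned-correlated kernel
# K(i, j) = c * alpha^max(i, j) is positive definite with tridiagonal inverse.
c, alpha, n = 2.0, 0.8, 8
idx = np.arange(1, n + 1)
K = c * alpha ** np.maximum.outer(idx, idx)

Kinv = np.linalg.inv(K)
# Entries more than one off the diagonal should vanish up to round-off.
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > 1
print(np.max(np.abs(Kinv[mask])))    # ~ 0: only the tridiagonal band survives
```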
In this paper, we study the global robust stabilization problem for strict feedforward systems subject to input unmodeled dynamics. We present a recursive design method for a nested saturation controller that globally stabilizes the closed-loop system in the presence of input unmodeled dynamics. One of the difficulties of the problem is that the Jacobian linearization of the system at the origin may not be stabilizable. We overcome this difficulty by employing a special version of the small gain theorem to address local stability, and the asymptotic small gain theorem to establish the global convergence property of the closed-loop system. An example is given to show that a redesign of the controller is required to guarantee global robust asymptotic stability in the presence of the input unmodeled dynamics.
There has recently been a trend to study linear system identification with high-order finite impulse response (FIR) models using the regularized least-squares approach. One key step in this approach is solving the hyper-parameter estimation problem, which is usually nonconvex. Our goal here is to investigate implementations of algorithms for solving the hyper-parameter estimation problem that can deal with both large data sets and possibly ill-conditioned computations. In particular, a QR-factorization-based, matrix-inversion-free algorithm is proposed to evaluate the cost function in an efficient and accurate way. It is also shown that the gradient and Hessian of the cost function can be computed based on the same QR factorization. Finally, the proposed algorithm and ideas are verified by Monte Carlo simulations on a large data bank of test systems and data sets.
Intrigued by some recent results on impulse response estimation by kernel and nonparametric techniques, we revisit the old problem of transfer function estimation from input-output measurements. We formulate a classical regularization approach, focused on finite impulse response (FIR) models, and find that regularization is necessary to cope with the high variance problem. This basic, regularized least squares approach is then a focal point for interpreting other techniques, like Bayesian inference and Gaussian process regression. The main issue is how to determine a suitable regularization matrix (Bayesian prior or kernel). Several regularization matrices are provided and numerically evaluated on a data bank of test systems and data sets. Our findings based on the data bank are as follows. The classical regularization approach with carefully chosen regularization matrices shows slightly better accuracy and clearly better robustness in estimating the impulse response than the standard approach - the prediction error method/maximum likelihood (PEM/ML) approach. If the goal is to estimate a model of given order as well as possible, a low order model is often better estimated by the PEM/ML approach, and a higher order model is often better estimated by model reduction on a high order regularized FIR model estimated with careful regularization. Moreover, an optimal regularization matrix that minimizes the mean square error matrix is derived and studied. The importance of this result lies in that it gives the theoretical upper bound on the accuracy that can be achieved for this classical regularization approach.
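A minimal sketch of this kind of regularized FIR estimation (hypothetical system and noise level, with a TC/stable-spline kernel as the regularization matrix and arbitrary hyperparameters) might look as follows:

```python
import numpy as np

# Simplified sketch: high-order FIR estimation with a TC/stable-spline
# kernel as regularization matrix, versus plain least squares.
rng = np.random.default_rng(1)
n, N = 50, 200
k = np.arange(1, n + 1)
g_true = 0.8 ** k * np.sin(0.5 * k)            # made-up stable impulse response

u = rng.standard_normal(N + n)                 # white input
Phi = np.column_stack([u[n - i : n - i + N] for i in range(1, n + 1)])
sigma = 0.5
y = Phi @ g_true + sigma * rng.standard_normal(N)

g_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]  # unregularized estimate

# Regularized (posterior-mean) estimate with prior P(i,j) = c * alpha^max(i,j).
c, alpha = 1.0, 0.8
P = c * alpha ** np.maximum.outer(k, k)
g_reg = P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma**2 * np.eye(N), y)

err_ls = np.linalg.norm(g_ls - g_true)
err_reg = np.linalg.norm(g_reg - g_true)
print(err_ls, err_reg)   # the kernel-regularized estimate is more accurate
```

The kernel encodes exponential decay and smoothness of the impulse response, which is what tames the high variance of the 50-parameter FIR fit.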
This work investigates how stochastic sampling jitter noise affects the result of system identification, and proposes a modification of known approaches to mitigate the effects of sampling jitter, when the jitter is unknown and not directly measurable. By just assuming conventional additive measurement noise, the analysis shows that the identified model will get a bias in the transfer function amplitude that increases for higher frequencies. A frequency domain approach with a continuous-time model allows an analysis framework for sampling jitter noise. The bias and covariance in the frequency domain model are derived. These are used in bias compensated (weighted) least squares algorithms, and by asymptotic arguments this leads to a maximum likelihood algorithm. Continuous-time output error models are used for numerical illustrations.
Random multisines have successfully been used as input signals in many system identification experiments. In this paper, it is shown that scalar random multisine signals with a flat amplitude spectrum are separable of order one. The separability property means that certain conditional expectations are linear and it implies that random multisines can easily be used to obtain accurate estimates of the linear time-invariant part of a Hammerstein system. Furthermore, higher order separability is investigated.
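A common construction of such a signal (fixed, equal amplitudes and independent uniform phases; the sizes below are arbitrary illustrative choices) is:

```python
import numpy as np

# Constructing a scalar random multisine with a flat amplitude spectrum:
# equal amplitudes on the excited lines, independent phases on [0, 2*pi).
rng = np.random.default_rng(2)
N, M = 512, 64                       # samples per period, number of sinusoids
t = np.arange(N)
phases = rng.uniform(0, 2 * np.pi, M)
u = np.sqrt(2.0 / M) * sum(
    np.cos(2 * np.pi * (m + 1) * t / N + phases[m]) for m in range(M)
)                                    # unit-power signal over one full period

U = np.fft.rfft(u) / N
amps = np.abs(U[1 : M + 1])          # magnitudes at the excited lines
print(amps.min(), amps.max())        # equal up to round-off: flat spectrum
```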
Nonlinear systems can be approximated by linear time-invariant (LTI) models in many ways. Here, LTI models that are optimal approximations in the mean-square error sense are analyzed. A necessary and sufficient condition on the input signal for the optimal LTI approximation of an arbitrary nonlinear finite impulse response (NFIR) system to be a linear finite impulse response (FIR) model is presented. This condition says that the input should be separable of a certain order, i.e., that certain conditional expectations should be linear. For the special case of Gaussian input signals, this condition is closely related to a generalized version of Bussgang's classic theorem about static nonlinearities. It is shown that this generalized theorem can be used for structure identification and for the identification of generalized Wiener-Hammerstein systems.
Analyzing fault diagnosability performance for a given model, before developing a diagnosis algorithm, can be used to answer questions like “How difficult is it to detect a fault f_{i}?” or “How difficult is it to isolate a fault f_{i} from a fault f_{j}?”. The main contributions are the derivation of a measure, distinguishability, and a method for analyzing fault diagnosability performance of discrete-time descriptor models. The method, based on the Kullback–Leibler divergence, utilizes a stochastic characterization of the different fault modes to quantify diagnosability performance. Another contribution is the relation between distinguishability and the fault to noise ratio of residual generators. It is also shown how to design residual generators with maximum fault to noise ratio if the noise is assumed to be i.i.d. Gaussian signals. Finally, the method is applied to a heavy duty diesel engine model to exemplify how to analyze diagnosability performance of non-linear dynamic models.
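For Gaussian fault modes the Kullback–Leibler divergence has a closed form; the sketch below (with made-up residual distributions, not the paper's engine model) shows the kind of quantity involved:

```python
import numpy as np

# Closed-form KL divergence between two multivariate Gaussians, the kind of
# quantity underlying a stochastic distinguishability measure.
def kl_gauss(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) )."""
    k = len(mu0)
    S1inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_nf, S = np.zeros(2), np.eye(2)    # hypothetical no-fault residual distribution
mu_f = np.array([1.0, 0.5])          # a fault that shifts the residual mean
print(kl_gauss(mu_nf, S, mu_f, S))   # 0.5 * ||mu_f||^2 = 0.625
```

A larger divergence between the no-fault and faulty residual distributions corresponds to an easier detection or isolation task.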
The problem of designing the identification experiments to make them maximally informative with respect to the intended model use is studied. The focus is on how to identify models that are good for control, so called `Identification for Control'. A main result is that we derive explicit expressions for the optimal controller and the optimal reference signal spectrum to use in the identification experiment for the case that only the misfit in the dynamics model is penalized and when a linear combination of the input and output variances is constrained.
An algorithm is proposed for computing which sensor additions make a diagnosis requirement specification regarding fault detectability and isolability attainable for a given linear differential-algebraic model. Restrictions on possible sensor locations can be given, and if the diagnosis specification is not attainable with any available sensor addition, the algorithm provides the solutions that maximize specification fulfillment. Previous approaches with similar objectives have been based on the model structure only. Since the proposed algorithm utilizes the analytical expressions, it can handle models where structural approaches fail.
A fundamental part of a fault diagnosis system is the residual generator. Here a new method, the minimal polynomial basis approach, for design of residual generators for linear systems, is presented. The residual generation problem is transformed into a problem of finding polynomial bases for null-spaces of polynomial matrices. This is a standard problem in established linear systems theory, which means that numerically efficient computational tools are generally available. It is shown that the minimal polynomial basis approach can find all possible residual generators and explicitly those of minimal order. © 2001 Elsevier Science Ltd. All rights reserved.
Consistency relations are often used to design residual generators based on non-linear process models. A main difficulty is that they generally include time differentiated versions of known signals which are difficult to estimate in a noisy environment. The main results of this paper show how to lower the need to estimate derivatives of known signals in order to compute a residual. Necessary and sufficient conditions for lowering the order of the derivatives one step are presented and a main step in the approach is to obtain a state-space realization of the residual generator. An attractive feature of the approach is that general differential algebraic system descriptions can be handled in the same way as for example ordinary differential equations and also that stability of the residual generator is always guaranteed.
The current demand for more complex models has initiated a shift away from state-space models towards models described by differential-algebraic equations (DAEs). These models arise as the natural product of object-oriented modeling languages, such as Modelica. However, the mathematics of DAEs is somewhat more involved than the standard state-space theory. The aim of this work is to present a well-posed description of a linear stochastic differential-algebraic equation and more importantly explain how well-posed estimation problems can be formed. We will consider both the system identification problem and the state estimation problem. Besides providing the necessary theory we will also explain how the procedures can be implemented by means of efficient numerical methods.
The purpose of the design of identification experiments is to make the collected data maximally informative with respect to the intended use of the model, subject to constraints that might be at hand. When the true system is replaced by an estimated model, there results a performance degradation that is due to the error in the transfer function estimates. Using some recent asymptotic expressions for the bias and the variance of the estimated transfer function, it is shown how this performance degradation can be minimized by a proper experiment design. Several applications, where it is beneficial to let the experiment be carried out in closed loop, are highlighted.
Asymptotic variance expressions are analyzed for models that are identified on the basis of closed-loop data. The considered methods comprise the classical `direct' method, as well as the more recently developed indirect methods, employing coprime factorized models, dual Youla/Kucera parametrizations and the two-stage approach. The variance expressions are compared with the open-loop situation, and evaluated in terms of their relevance for subsequent model-based control design.
This paper treats several aspects relevant to identification of continuous-time output error (OE) models based on sampled data. The exact method for doing this is well known both for data given in the time domain and in the frequency domain. This approach becomes somewhat complex, especially for non-uniformly sampled data. We study various ways to approximate the exact method for reasonably fast sampling. While an objective is to gain insight into the non-uniform sampling case, this paper only gives explicit results for uniform sampling.
This paper treats several aspects relevant to the identification of continuous-time output error (OE) models based on non-uniformly sampled output data. The exact method for doing this is well known in the time domain, where the continuous-time system is discretized, simulated and the result is fitted in a mean square sense to measured data. The material presented here is based on a method proposed in a companion paper (Gillberg &amp; Ljung, 2010) which deals with the same topic but for the case of uniformly sampled data. In this text it will be shown how that method suggests that the output should be reconstructed using a B-spline with uniformly distributed knots. This representation can then be used to directly identify the continuous-time system without proceeding via discretization. Only the relative degree of the model is used to choose the order of the spline.
The subject of this paper is the direct identification of continuous-time autoregressive moving average (CARMA) models. The topic is viewed from the frequency domain perspective which then turns the reconstruction of the continuous-time power spectral density (CT-PSD) into a key issue. The first part of the paper therefore concerns the approximate estimation of the CT-PSD from uniformly sampled data under the assumption that the model has a certain relative degree. The approach has its point of origin in the frequency domain Whittle likelihood estimator. The discrete- or continuous-time spectral densities are estimated from equidistant samples of the output. For low sampling rates the discrete-time spectral density is modeled directly by its continuous-time spectral density using the Poisson summation formula. In the case of rapid sampling the continuous-time spectral density is estimated directly by modifying its discrete-time counterpart.
A survey of robustness of nonlinear state feedback is given. For series perturbations there are fairly complete results, showing that under mild restrictions an optimal controller can tolerate an infinite increase in gain. For gain reductions there are some results for systems linear in the control. In particular there is a 50% reduction tolerance if the control penalty is quadratic. Usually the optimal controller cannot be computed exactly. There are some results showing the effects of truncation on the robustness. Essentially robustness is maintained but in a reduced (computable) part of the state space.
We consider the problem of transferring the output of a linear system from one equilibrium value to another under control amplitude constraint. It is possible to give a simple lower bound on the time required, expressed in the longest time constant, the gain of the system, the change in y and the control bound. This theoretical bound appears to be useful as a practical lower bound of the rise time.
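For the special case of a first-order lag, the bound can be written out explicitly; the sketch below (a simplified illustration, not the paper's general result) uses the minimum-time argument with the control held at its bound:

```python
import math

# Illustrative special case (a first-order lag; the paper's bound covers more
# general systems): for T * dy/dt = -y + K * u with |u| <= u_max, the fastest
# transfer from y0 to y1 keeps u at its bound, giving the lower bound
#   t_min = T * ln( (K*u_max - y0) / (K*u_max - y1) ).
def rise_time_bound(T, K, u_max, y0, y1):
    assert K * u_max > y1 > y0, "target must be reachable"
    return T * math.log((K * u_max - y0) / (K * u_max - y1))

# Time constant 2 s, unit gain, |u| <= 1, step from 0 to 0.9:
print(rise_time_bound(2.0, 1.0, 1.0, 0.0, 0.9))   # 2*ln(10), about 4.6 s
```

The bound grows without limit as the target approaches the saturation limit K*u_max, matching the intuition that outputs near the achievable ceiling take long to reach.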
Iterative learning control (ILC) based on minimization of a quadratic criterion in the control error and the input signal is considered. The focus is on the frequency domain properties of the algorithm, and how it is able to handle non-minimum phase systems. Experiments carried out on a commercial industrial robot are also presented.
The disturbance properties of high order iterative learning control (ILC) algorithms are considered. An error equation is formulated, and using statistical models of the load and measurement disturbances an equation for the covariance matrix of the control error vector is derived. The results are exemplified by analytic derivation of the covariance matrix for a second order ILC algorithm.
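The flavor of a simple ILC iteration can be sketched on a toy FIR plant (entirely made up; not either paper's experimental setup):

```python
import numpy as np

# Toy first-order ILC sketch: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1), where
# the one-step shift of the error compensates the plant's one-sample delay.
N = 40
g = np.array([0.0, 0.5, 0.3, 0.1])   # made-up plant impulse response, g[0]=0: delay
G = np.zeros((N, N))
for i in range(N):
    for j in range(max(0, i - len(g) + 1), i + 1):
        G[i, j] = g[i - j]           # lower-triangular Toeplitz convolution matrix

r = np.sin(2 * np.pi * np.arange(N) / N)   # reference trajectory, r[0] = 0
u = np.zeros(N)
gamma = 1.0
errs = []
for _ in range(50):
    e = r - G @ u
    errs.append(np.linalg.norm(e))
    u = u + gamma * np.roll(e, -1)   # the wrapped-in sample e[0] is always 0
print(errs[0], errs[-1])             # the error norm contracts over iterations
```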
An approach to estimate the tire-road friction during normal drive using only the wheel slip, that is, the relative difference in wheel velocities, is presented. The driver can be informed about the maximum friction force and be alarmed for sudden changes. Friction-related parameters are estimated using only signals from standard sensors in a modern car. An adaptive estimator is presented for a model linear in parameters, which is designed to work for periods of poor excitation, errors in variables, simultaneous slow and fast parameter drifts and abrupt changes. The physical relation between these parameters and the maximal friction force is determined from extensive field trials using a Volvo 850 GLT as a test car.
This paper addresses issues in closed-loop performance monitoring. Particular attention is paid to detecting whether an observed deviation from nominal performance is due to a disturbance or due to a control relevant system change. This is achieved by introducing a novel performance measure that allows feasible application of a standard CUSUM change detector. The paper includes explicit results on the risk of mistaking a disturbance for a system change. The algorithm has been implemented in real time on a DSP and evaluated on a DC motor.
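A standard one-sided CUSUM recursion, the kind of detector referred to above, can be sketched as follows (the drift and threshold values are illustrative, not those used in the paper):

```python
import random

# Textbook one-sided CUSUM change detector.
def cusum(xs, drift, threshold):
    """Return the index of the first alarm, or None if none is raised."""
    g = 0.0
    for t, x in enumerate(xs):
        g = max(0.0, g + x - drift)   # accumulate evidence above the drift
        if g > threshold:
            return t
    return None

# A mean shift from 0 to 1 at sample 50 triggers an alarm shortly after it.
random.seed(0)
xs = [random.gauss(0.0, 0.2) for _ in range(50)] + \
     [random.gauss(1.0, 0.2) for _ in range(50)]
alarm = cusum(xs, drift=0.5, threshold=2.0)
print(alarm)   # a few samples after the change at t = 50
```

The drift sets what size of deviation counts as evidence, and the threshold trades detection delay against the false alarm rate.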
Classical approaches to determine a suitable model structure from observed input-output data are based on hypothesis tests and information-based criteria. Recently, the model structure has been considered as a stochastic variable, and standard estimation techniques have been proposed. The resulting estimators are closely related to the aforementioned methods. However, it turns out that there are a number of prior choices in the problem formulation, which are crucial for the estimators' behavior. The contribution of this paper is to clarify the role of the prior choices, to examine a number of possibilities and to show which estimators are consistent. This is done in a linear regression framework. For autoregressive models, we also investigate a novel prior assumption on stability, and give the estimator for the model order and the parameters themselves.
The Quantization Theorem I (QT I) implies that the likelihood function can be reconstructed from quantized sensor observations, given that appropriate dithering noise is added before quantization. We present constructive algorithms to generate such dithering noise. The application to maximum likelihood estimation (MLE) is studied in particular. In short, dithering has the same role for amplitude quantization as an anti-alias filter has for sampling, in that it enables perfect reconstruction of the dithered but unquantized signal's likelihood function. Without dithering, the likelihood function suffers from a kind of aliasing, expressed as a counterpart to Poisson's summation formula, which makes the exact MLE intractable to compute. With dithering, it is demonstrated that standard MLE algorithms can be re-used on a smoothed likelihood function of the original signal, and statistical efficiency is obtained. The implications of dithering for the Cramér–Rao Lower Bound (CRLB) are studied, and illustrative examples are provided.
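The core effect of dithering can be illustrated with a coarse quantizer and a constant signal (a toy sketch, not the paper's constructive algorithms):

```python
import random

# A coarse mid-tread quantizer with step 1.0 observes a constant signal.
# Without dither the sample mean sticks to a grid point; with dither uniform
# over one quantization step, the sample mean recovers the true value.
random.seed(1)
step = 1.0
theta = 0.3                          # true constant signal, between grid points
N = 200_000

def quantize(x):
    return step * round(x / step)

mean_plain = sum(quantize(theta) for _ in range(N)) / N
mean_dith = sum(quantize(theta + random.uniform(-step / 2, step / 2))
                for _ in range(N)) / N
print(mean_plain, mean_dith)         # 0.0 versus approximately 0.3
```

With uniform dither spanning exactly one quantization step, the quantization error behaves like independent noise, so simple estimators become unbiased at the price of extra variance.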
System identification based on quantized observations requires either approximations of the quantization noise, leading to suboptimal algorithms, or dedicated algorithms tailored to the quantization noise properties. This contribution studies fundamental issues in estimation that relate directly to the core methods in system identification. As a first contribution, results from statistical quantization theory are surveyed and applied to both moment calculations (mean, variance, etc.) and the likelihood function of the measured signal. In particular, the role of adding dithering noise at the sensor is studied. The overall message is that tailored dithering noise can considerably simplify the derivation of optimal estimators. The price for this is a decreased signal-to-noise ratio, and a second contribution is a detailed study of these effects in terms of the Cramér–Rao lower bound. The common additive uniform noise approximation of quantization is discussed, compared, and interpreted in light of the suggested approaches.
It is often necessary in practice to perform identification experiments on systems operating in closed loop. There has been some confusion about the possibilities of successful identification in such cases, evidently due to the fact that certain common methods then fail. A rapidly increasing literature on the problem is briefly surveyed in this paper, and an overview of a particular approach is given. It is shown that prediction error identification methods, applied in a direct fashion, will give correct estimates in a number of feedback cases. Furthermore, the accuracy is not necessarily worse in the presence of feedback; in fact, optimal inputs may very well require feedback terms. Some practical applications are also described.
The Wiener model is a block-oriented model, having a linear dynamic system followed by a static nonlinearity. The dominating approach to estimate the components of this model has been to minimize the error between the simulated and the measured outputs. We show that this will, in general, lead to biased estimates if there are other disturbances present than measurement noise. The implications of Bussgang's theorem in this context are also discussed. For the case with general disturbances, we derive the Maximum Likelihood method and show how it can be efficiently implemented. Comparisons between this new algorithm and the traditional approach confirm that the new method is unbiased and also has superior accuracy.
In this paper we derive the maximum likelihood problem for missing data from a Gaussian model. We present in total eight different equivalent formulations of the resulting optimization problem, four of which are nonlinear least-squares formulations. Among these are also formulations based on the expectation-maximization algorithm. Expressions for the derivatives needed to solve the optimization problems are presented. We also present numerical comparisons for two of the formulations for an ARMAX model.
In this paper, ideas from iterative feedback tuning (IFT) are incorporated into relay auto-tuning of the proportional-plus-integral-plus-derivative (PID) controller. The PID controller is auto-tuned to give a specified phase margin and bandwidth. Good tuning performance according to the specified bandwidth and phase margin can be obtained, and the limitation of the standard relay auto-tuning technique using a version of the Ziegler–Nichols formula can be eliminated. Furthermore, by using common modelling assumptions for the relay system, some of the required derivatives in the IFT algorithm can be derived analytically. The algorithm was tested in the laboratory on a coupled tank, and good tuning results were demonstrated.
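For reference, a minimal discrete-time PID update of the kind such auto-tuning procedures parameterize might look as follows; the gains and sampling time are purely illustrative, not the output of the relay/IFT scheme.

```python
def make_pid(kp, ki, kd, dt):
    """Minimal discrete PID law u = kp*e + ki*int(e) + kd*de/dt.
    The gains would come from an auto-tuning procedure; the values
    used below are purely illustrative."""
    state = {"i": 0.0, "e_prev": None}

    def step(e):
        state["i"] += e * dt  # rectangular integration of the error
        d = 0.0 if state["e_prev"] is None else (e - state["e_prev"]) / dt
        state["e_prev"] = e
        return kp * e + ki * state["i"] + kd * d

    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
u0 = pid(1.0)  # first update: proportional plus one integration step
print(u0)
```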
This paper considers actuator redundancy management for a class of overactuated nonlinear systems. Two tools for distributing the control effort among a redundant set of actuators are optimal control design and control allocation. In this paper, we investigate the relationship between these two design tools when the performance indexes are quadratic in the control input. We show that for a particular class of nonlinear systems, they give exactly the same design freedom in distributing the control effort among the actuators. Linear quadratic optimal control is contained as a special case. A benefit of using a separate control allocator is that actuator constraints can be considered, which is illustrated with a flight control example.
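For a quadratic cost, control allocation with a single virtual control has a closed-form weighted pseudoinverse solution. A sketch, with the effectiveness vector `b` and the weights `w` chosen purely for illustration:

```python
def allocate(b, w, v):
    """Weighted least-squares control allocation for one virtual control:
    minimize sum(w_i * u_i**2) subject to sum(b_i * u_i) == v.
    Closed form (weighted pseudoinverse, from the Lagrangian conditions):
    u_i = (b_i / w_i) * v / sum(b_j**2 / w_j)."""
    s = sum(bi * bi / wi for bi, wi in zip(b, w))
    return [(bi / wi) * v / s for bi, wi in zip(b, w)]

# Two redundant actuators with equal effectiveness and equal weights
# split the demanded virtual control evenly.
u = allocate(b=[1.0, 1.0], w=[1.0, 1.0], v=2.0)
print(u)  # [1.0, 1.0]

# Penalizing the second actuator more shifts effort to the first,
# while the total demand sum(b_i * u_i) = 2 is still met exactly.
u2 = allocate(b=[1.0, 1.0], w=[1.0, 4.0], v=2.0)
print(u2)
```

A separate allocator as advocated above would additionally clip or redistribute effort when actuator constraints are hit, which this unconstrained closed form does not handle.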
Collision avoidance (CA) systems are applicable to most transportation systems, ranging from autonomous robots and vehicles to aircraft, cars and ships. A probabilistic framework is presented for designing and analyzing existing CA algorithms proposed in the literature, enabling on-line computation of the risk of faulty intervention and the consequence of different actions. The approach is based on Monte Carlo techniques, where sampling-resampling methods are used to convert sensor readings with stochastic errors to a Bayesian risk. The concepts are evaluated using a real-time implementation of an automotive collision mitigation system, and results from one demonstrator vehicle are presented.
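A stripped-down, one-dimensional sketch of the Monte Carlo idea, assuming Gaussian sensor noise and a constant-velocity motion model (all parameter values are illustrative, not from the demonstrator system):

```python
import random

def collision_risk(rel_pos, rel_vel, pos_std, vel_std,
                   horizon=3.0, dt=0.1, radius=2.0, n=10_000, seed=1):
    """Monte Carlo estimate of collision probability: sample particles
    from Gaussian sensor-noise models, propagate each under a constant-
    velocity assumption, and count the fraction of trajectories that
    come within `radius` metres of the host during the horizon."""
    rng = random.Random(seed)
    steps = round(horizon / dt)
    hits = 0
    for _ in range(n):
        p = rel_pos + rng.gauss(0.0, pos_std)   # sampled relative position
        v = rel_vel + rng.gauss(0.0, vel_std)   # sampled relative velocity
        if any(abs(p + v * k * dt) < radius for k in range(steps + 1)):
            hits += 1
    return hits / n

# Object 20 m ahead, closing at 8 m/s: almost every sampled scenario
# reaches the host within the 3 s horizon, so the risk is near 1.
risk = collision_risk(rel_pos=20.0, rel_vel=-8.0, pos_std=1.0, vel_std=0.5)
print(risk)
```

The estimated probability can then be traded against the cost of intervening, which is the Bayesian-risk viewpoint described above.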
We discuss several aspects of the mathematical foundations of the nonlinear black-box identification problem. We shall see that the quality of the identification procedure is always a result of a certain trade-off between the expressive power of the model we try to identify (the larger the number of parameters used to describe the model, the more flexible is the approximation), and the stochastic error (which is proportional to the number of parameters). A consequence of this trade-off is the simple fact that a good approximation technique can be the basis of a good identification algorithm. From this point of view, we consider different approximation methods, and pay special attention to spatially adaptive approximants. We introduce wavelet and ‘neuron’ approximations, and show that they are spatially adaptive. Then we apply the acquired approximation experience to estimation problems. Finally, we consider some implications of these theoretical developments for the practically implemented versions of the ‘spatially adaptive’ algorithms.
In model-based diagnosis there are often more candidate residual generators than what is needed, and residual selection is therefore an important step in the design of model-based diagnosis systems. The availability of computer-aided tools for automatic generation of residual generators has made it easier to generate a large set of candidate residual generators for fault detection and isolation. Fault detection performance varies significantly between different candidates due to the impact of model uncertainties and measurement noise. Thus, to achieve satisfactory fault detection and isolation performance, these factors must be taken into consideration when formulating the residual selection problem. Here, a convex optimization problem is formulated as a residual selection approach, utilizing both structural information about the different residuals and training data from different fault scenarios. The optimal solution corresponds to a minimal set of residual generators with guaranteed performance. Measurement data and residual generators from an internal combustion engine test-bed are used as a case study to illustrate the usefulness of the proposed method.
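A simplified combinatorial stand-in for the selection step (the paper formulates a convex program; here a brute-force search over a tiny made-up candidate set merely illustrates combining structural coverage with data-driven performance):

```python
from itertools import combinations

# Illustrative candidates: which faults each residual responds to
# (structural information) and a scalar detection-performance score
# estimated from training data. All names and numbers are made up.
sensitivity = {          # residual -> set of detectable faults
    "r1": {"f1", "f2"},
    "r2": {"f2", "f3"},
    "r3": {"f1", "f3"},
    "r4": {"f3"},
}
performance = {"r1": 0.9, "r2": 0.4, "r3": 0.8, "r4": 0.95}
faults = {"f1", "f2", "f3"}
threshold = 0.5          # discard residuals with poor performance

usable = [r for r in sensitivity if performance[r] >= threshold]

# Smallest subset of usable residuals that still covers every fault.
best = None
for k in range(1, len(usable) + 1):
    for subset in combinations(usable, k):
        covered = set().union(*(sensitivity[r] for r in subset))
        if covered == faults:
            best = subset
            break
    if best:
        break
print(best)
```

The convex formulation in the paper scales to large candidate sets where this exhaustive search would be infeasible.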
A given explicit piecewise affine representation of an MPC feedback law is approximated by a single polynomial, computed using linear programming. This polynomial state feedback control law guarantees closed-loop stability and constraint satisfaction. The polynomial feedback can be implemented in real time even on very simple devices with severe limitations on memory storage.
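A dependency-free sketch of approximating a piecewise affine feedback law by a single polynomial; note that plain least squares is used here instead of the paper's linear-programming (minimax) formulation, and this sketch does not by itself certify stability or constraint satisfaction.

```python
def pwa_law(x):
    """Example explicit MPC law: a saturated linear feedback,
    which is piecewise affine in the state x. Purely illustrative."""
    return max(-1.0, min(1.0, -2.0 * x))

# Fit an odd cubic u(x) = c1*x + c3*x**3 to the PWA law on a grid.
xs = [i / 100.0 for i in range(-100, 101)]
ys = [pwa_law(x) for x in xs]

# Normal equations for the two-parameter basis [x, x**3].
s2 = sum(x**2 for x in xs)
s4 = sum(x**4 for x in xs)
s6 = sum(x**6 for x in xs)
b1 = sum(x * y for x, y in zip(xs, ys))
b3 = sum(x**3 * y for x, y in zip(xs, ys))
det = s2 * s6 - s4 * s4
c1 = (b1 * s6 - b3 * s4) / det
c3 = (s2 * b3 - s4 * b1) / det

poly = lambda x: c1 * x + c3 * x**3
max_err = max(abs(poly(x) - y) for x, y in zip(xs, ys))
print(c1, c3, max_err)
```

Evaluating such a polynomial on-line needs only a few multiplications and additions, which is what enables implementation on devices with severe memory limitations.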