Nonlinear cooperative systems associated with vector fields that are concave or subhomogeneous provide good models for the interconnected dynamics that are of key interest in communication, biological, economic, and neural network applications. For this class of positive systems, we provide conditions that guarantee existence, uniqueness, and stability of strictly positive equilibria. These conditions can be formulated directly in terms of the spectral radius of the Jacobian of the system. If control inputs are available, we show how state feedback can be used to stabilize an equilibrium point in the interior of the positive orthant.
For communities of agents that are not necessarily cooperating, distributed processes of opinion forming are naturally represented by signed graphs, with positive edges representing friendly, cooperative interactions and negative edges their antagonistic counterpart. Unlike for nonnegative graphs, the outcome of a dynamical system evolving on a signed graph is not obvious and is in general difficult to characterize, even when the dynamics are linear. In this paper, we identify a significant class of signed graphs for which the linear dynamics are nevertheless predictable and show many analogies with positive dynamical systems. These cases correspond to adjacency matrices that are eventually positive, for which the Perron-Frobenius property still holds and implies the existence of an invariant cone contained inside the positive orthant. As examples of applications, we determine cases in which it is possible to anticipate or impose unanimity of opinion in decision/voting processes even in the presence of stubborn agents, and show how the PageRank algorithm can be extended to include negative links.
In this paper, we consider robust stability analysis of large-scale sparsely interconnected uncertain systems. By modeling the interconnections among the subsystems with integral quadratic constraints, we show that robust stability analysis of such systems can be performed by solving a set of sparse linear matrix inequalities. We also show that a sparse formulation of the analysis problem is equivalent to the classical formulation of the robustness analysis problem and hence does not introduce any additional conservativeness. The sparse formulation of the analysis problem allows us to apply methods that rely on efficient sparse factorization techniques, and our numerical results illustrate the effectiveness of this approach compared to methods that are based on the standard formulation of the analysis problem.
This technical note proposes a method for low order H-infinity synthesis where the constraint on the order of the controller is formulated as a rational equation. The resulting nonconvex optimization problem is then solved by applying a partially augmented Lagrangian method. The proposed method is evaluated together with two well-known methods from the literature. The results indicate that the proposed method has comparable performance and speed.
This technical note proposes a method for low order H-infinity synthesis where the constraint on the order of the controller is formulated as a rational equation. The resulting nonconvex optimization problem is then solved by applying a quasi-Newton primal-dual interior point method. The proposed method is evaluated together with a well-known method from the literature. The results indicate that the proposed method has comparable performance and speed.
Prediction and filtering of continuous-time stochastic processes often require a solver of a continuous-time differential Lyapunov equation (CDLE), for example in the time update of the Kalman filter. Even though this can be recast into an ordinary differential equation (ODE), where standard solvers can be applied, the dominating approach in Kalman filter applications is to discretize the system and then apply the discrete-time difference Lyapunov equation (DDLE). To avoid problems with stability and poor accuracy, oversampling is often used. This contribution analyzes oversampling strategies and proposes a novel low-complexity analytical solution that does not involve oversampling. The results are illustrated on Kalman filtering problems in both linear and nonlinear systems.
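For context, the CDLE referred to above has the form dP/dt = A P + P A' + Q, and it can be propagated exactly over a sampling interval via a matrix-exponential construction due to Van Loan. The sketch below is that standard construction, not the paper's specific low-complexity solution; the matrices and step size are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def cdle_step(A, Q, P0, h):
    """Propagate dP/dt = A P + P A^T + Q exactly over a step h,
    using Van Loan's block matrix-exponential construction."""
    n = A.shape[0]
    M = np.block([[-A, Q], [np.zeros((n, n)), A.T]]) * h
    F = expm(M)
    Ad = F[n:, n:].T            # e^{A h}
    Qd = Ad @ F[:n, n:]         # integral of e^{A s} Q e^{A^T s} ds over [0, h]
    return Ad @ P0 @ Ad.T + Qd
```

Because the step is exact, no oversampling is needed: one call per sampling interval suffices, and two steps of length h agree with one step of length 2h.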
This paper considers high-speed control of constrained linear parameter-varying systems using model predictive control. Existing model predictive control schemes for control of constrained linear parameter-varying systems typically require the solution of a semi-definite program at each sampling instant. Recently, variants of explicit model predictive control were proposed for linear parameter-varying systems with polytopic representation, decreasing the online computational effort by orders of magnitude. Depending on the mathematical structure of the underlying system, the constrained finite-time optimal control problem can be solved optimally, or close-to-optimal solutions can be computed. Constraint satisfaction, recursive feasibility, and asymptotic stability can be guaranteed a priori by an appropriate selection of the terminal state constraints and terminal cost. The paper at hand gathers previous developments and provides new material, such as a proof of the optimality of the solution or, in the case of close-to-optimal solutions, a procedure to determine a bound on the suboptimality.
The stable spline (SS) kernel and the diagonal correlated (DC) kernel are two kernels that have been applied and studied extensively for kernel-based regularized LTI system identification. In this note, we show that similar to the derivation of the SS kernel, the continuous-time DC kernel can be derived by applying the same "stable" coordinate change to a "generalized" first-order spline kernel, and thus, can be interpreted as a stable generalized first-order spline kernel. This interpretation provides new facets to understand the properties of the DC kernel. In particular, we derive a new orthonormal basis expansion of the DC kernel and the explicit expression of the norm of the reproducing kernel Hilbert space associated with the DC kernel. Moreover, for the nonuniformly sampled DC kernel, we derive its maximum entropy property and show that its kernel matrix has tridiagonal inverse.
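For reference, the DC kernel has the form k(t, s) = c·λ^((t+s)/2)·ρ^|t−s|. The tridiagonal-inverse property mentioned above is easy to check numerically; the sketch below does so for illustrative (nonuniform) sample times and hyperparameter values, which are assumptions of this example rather than values from the note.

```python
import numpy as np

def dc_kernel(t, c=1.0, lam=0.8, rho=0.9):
    """DC kernel matrix K[i, j] = c * lam**((t_i + t_j)/2) * rho**|t_i - t_j|,
    evaluated at (possibly nonuniform) sample times t."""
    t = np.asarray(t, dtype=float)
    T1, T2 = np.meshgrid(t, t, indexing="ij")
    return c * lam ** ((T1 + T2) / 2) * rho ** np.abs(T1 - T2)

t = np.array([0.3, 1.0, 1.7, 2.9, 4.2])   # illustrative nonuniform sample times
K = dc_kernel(t)
Kinv = np.linalg.inv(K)                   # in theory exactly tridiagonal
```

The property follows because, for increasing sample times, the kernel matrix factors as a so-called single-pair (Green's) matrix, a class whose inverses are tridiagonal.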
Model estimation and structure detection with short data records are two issues that have received increasing interest in system identification. In this paper, a multiple-kernel-based regularization method is proposed to handle these issues. Multiple kernels are conic combinations of fixed kernels suitable for impulse response estimation, and they equip the kernel-based regularization method with three features. First, multiple kernels can better capture complicated dynamics than single kernels. Second, estimating their weights by maximizing the marginal likelihood favors sparse optimal weights, which enables this method to tackle various structure detection problems, e.g., sparse dynamic network identification and the segmentation of linear systems. Third, the marginal likelihood maximization problem is a difference-of-convex programming problem. It is thus possible to find a locally optimal solution efficiently by using a majorization minimization algorithm and an interior-point method, where the cost of a single interior-point iteration grows linearly in the number of fixed kernels. Monte Carlo simulations show that the locally optimal solutions lead to good performance for randomly generated starting points.
This note studies the global robust output regulation problem by state feedback for strict feedforward systems. By utilizing the general framework for tackling the output regulation problem [10], the output regulation problem is converted into a global robust stabilization problem for a class of feedforward systems that is subject to both time-varying static and dynamic uncertainties. Then the stabilization problem is solved by using a small gain based bottom-up recursive design procedure.
Direct prediction error identification of systems operating in closed loop may lead to biased results due to the correlation between the input and the output noise. The authors study this error, what factors affect it, and how it may be avoided. In particular, the role of the noise model is discussed and the authors show how the noise model should be parameterized to avoid the bias. Apart from giving important insights into the properties of the direct method, this provides a nonstandard motivation for the indirect method.
Recursive algorithms for the solution of linear least-squares estimation problems have been based mainly on state-space models. It has been known, however, that recursive Levinson-Whittle-Wiggins-Robinson (LWR) algorithms exist for stationary time series, using only input-output information (i.e., covariance matrices). By introducing a way of classifying stochastic processes in terms of an "index of nonstationarity," we derive extended LWR algorithms for nonstationary processes. We also show how adding state-space structure to the covariance matrix allows us to specialize these general results to state-space-type estimation algorithms. In particular, the Chandrasekhar equations are shown to be natural descendants of the extended LWR algorithm.
This note presents an efficient approach for the evaluation of multi-parametric mixed integer quadratic programming (mp-MIQP) solutions, occurring for instance in control problems involving discrete time hybrid systems with quadratic cost. Traditionally, the online evaluation requires a sequential comparison of piecewise quadratic value functions. We introduce a lifted parameter space in which the piecewise quadratic value functions become piecewise affine and can be merged to a single value function defined over a single polyhedral partition without any overlaps. This enables efficient point location approaches using a single binary search tree. Numerical experiments with a power electronics application demonstrate an online speedup up to an order of magnitude. We also show how the achievable online evaluation time can be traded off against the offline computational time.
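The lifting idea rests on a simple observation: a quadratic function of x is affine in the lifted variables z = (x, vec(x xᵀ)), so comparing two quadratic value-function pieces becomes a single halfspace test in the lifted space. A minimal sketch of this observation (the specific quadratics are illustrative, not from the note's application):

```python
import numpy as np

def lift(x):
    """Lift x to z = (x, vec(x x^T)); any quadratic in x becomes affine in z."""
    return np.concatenate([x, np.outer(x, x).ravel()])

def quad_as_affine(Q, c, d):
    """Return (w, d) such that x'Qx + c'x + d == w @ lift(x) + d."""
    return np.concatenate([c, Q.ravel()]), d

# two quadratic value-function pieces (illustrative)
Q1, c1, d1 = np.eye(2), np.array([1.0, -1.0]), 0.3
Q2, c2, d2 = np.array([[2.0, 0.5], [0.5, 1.0]]), np.array([0.0, 2.0]), -0.1
w1, _ = quad_as_affine(Q1, c1, d1)
w2, _ = quad_as_affine(Q2, c2, d2)
```

Once all pieces are affine over one polyhedral partition in z, standard point-location structures such as a single binary search tree apply directly.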
The robustness of nonlinear regulators for nonlinear systems with respect to variations in gain is investigated. It is shown that there exist regulators that produce asymptotically stable closed-loop systems, but do not tolerate any variation in gain without instability. However, if the linearized closed-loop system is also asymptotically stable, then there is always some gain margin. For a wide class of optimal regulators, it is shown that the gain margin is infinite with respect to increases in gain and that decreases down to 0.5 can be tolerated. The robustness properties of linear quadratic control laws are thus generalized.
This note deals with the performance of the recursive least squares algorithm when it is applied to problems where the measured signal is corrupted by bounded noise. Using ideas from bounding ellipsoid algorithms we derive an asymptotic expression for the bound on the uncertainty of the parameter estimate for a simple choice of design variables. This bound is also transformed to a bound on the uncertainty of the transfer function estimate.
Tracking and adaptation algorithms are, from a formal point of view, nonlinear systems that depend on stochastic variables in a fairly complicated way. The analysis of such algorithms is thus quite complicated. A first step is to establish the exponential stability of these systems. This is of interest in its own right and a prerequisite for the practical use of the algorithms. It is also a necessary starting point for analyzing the tracking and adaptation performance, that is, how close the estimated parameters are to the time-varying true ones. In this paper we establish general conditions for the exponential stability of a wide and common class of tracking algorithms, including least mean squares, recursive least squares, and Kalman filter based adaptation algorithms. We show how stability of an averaged (linear and deterministic) equation and stability of the actual algorithm are linked to each other under weak conditions on the involved stochastic processes. We also give explicit conditions for exponential stability of the most common algorithms. The tracking performance of the algorithms is studied in a companion paper.
A general family of tracking algorithms for linear regression models is studied. It includes the familiar least mean square gradient approach, recursive least squares, and Kalman filter based estimators. The exact expressions for the quality of the obtained estimates are complicated. Approximate, and easy-to-use, expressions for the covariance matrix of the parameter tracking error are developed. These are applicable over the whole time interval, including the transient, and the approximation error can be explicitly calculated.
The problem of assessing the quality of a given, or estimated, model is a central issue in system identification. Various new techniques for estimating the bias and variance contributions to the model error have been suggested in the recent literature. In this contribution, classical model validation procedures are placed at the focus of our attention. We discuss the principles by which such validation techniques give us confidence in a model, and also how the distance to a “true” description can be estimated this way. In particular, we stress how the typical model validation procedure gives a direct measure of the model error for the model under test, without referring to its ensemble properties. Several model error bounds are developed under various assumptions about the disturbances entering the system.
Guo and Ljung (1995) established some general results on exponential stability of random linear equations, which can be applied directly to the performance analysis of a wide class of adaptive algorithms, including the basic LMS ones, without requiring stationarity, independence, or boundedness assumptions on the system signals. The current paper attempts to give a complete characterization of the exponential stability of the LMS algorithms by providing a necessary and sufficient condition for such stability in the case of possibly unbounded, nonstationary, and non-φ-mixing signals. The results of this paper can be applied to a very large class of signals, including those generated from, e.g., a Gaussian process via a time-varying linear filter. As an application, several novel and extended results on the convergence and tracking performance of LMS are derived under various assumptions. Neither stationarity nor Markov-chain assumptions are required.
The generalized likelihood ratio (GLR) test is a widely used method for detecting abrupt changes in linear systems and signals. In this paper the marginalized likelihood ratio (MLR) test is introduced to eliminate three shortcomings of GLR while preserving its applicability and generality. First, the need for a user-chosen threshold is eliminated in MLR. Second, the noise levels need not be known exactly and may even change over time, which means that MLR is robust. Finally, a very efficient exact implementation whose complexity is linear in time is developed for batch-wise data processing. This should be compared to the quadratic-in-time complexity of the exact GLR.
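To make the GLR baseline concrete: for a single jump in the mean of white Gaussian noise with known variance, the GLR statistic has a simple closed form. The sketch below is that textbook special case, not the paper's MLR implementation, and the user-chosen threshold it would require is exactly the first shortcoming mentioned above.

```python
import numpy as np

def glr_mean_change(y, sigma2=1.0):
    """GLR statistic for a single jump (from zero mean) in white Gaussian
    noise with known variance sigma2. Returns (max over change times of
    2*log GLR, the maximizing change time); the unknown jump size is
    replaced by its MLE, the post-change sample mean."""
    N = len(y)
    csum = np.cumsum(y[::-1])[::-1]            # csum[k] = sum of y[k:]
    best, khat = -np.inf, 0
    for k in range(N):
        n = N - k
        stat = csum[k] ** 2 / (n * sigma2)     # 2*log GLR for change at k
        if stat > best:
            best, khat = stat, k
    return best, khat
```

A decision then requires comparing `best` against a threshold, which is what the marginalization in MLR removes.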
The Kronecker canonical form (KCF) can be employed when solving the H-infinity synthesis problem. The KCF structure reveals variables that can be eliminated in the semidefinite program that defines the controller. The structure can also be used to remove states from the controller without sacrificing performance. To find the KCF structure, we can transform the relevant matrices to a generalized upper triangular (Guptri) form using orthogonal transformations. In this way we avoid computing the KCF explicitly, which is an ill-conditioned problem.
A new approach for computing upper error bounds for reduced-order models of linear time-varying systems is presented. It is based on a transformation technique of the Hankel singular values using positive-real, odd incremented functions. By applying such time-varying functions, the singular values to be removed can be forced to become equal and constant, so that they can be reduced. Two variations of this method are proposed: one for finite-time horizons and the other for infinite-time problems including periodic systems.
The problem under consideration is how to estimate the frequency function of a system, and the associated estimation error, when a set of possible model structures is given and one of them is known to contain the true system. The "classical" solution to this problem is, first, to use a consistent model structure selection criterion to discard all but one single structure; second, to estimate a model in this structure; and third, conditioned on the assumption that the chosen structure contains the true system, to compute an estimate of the estimation error. For a finite data set, however, one cannot guarantee that the correct structure is chosen, and this "structural" uncertainty is lost in the aforementioned approach. In this contribution a method is developed that combines the frequency function estimates and the estimation errors from all possible structures into a joint estimate and estimation error. Hence, this approach bypasses the structure selection problem. This is accomplished by employing a Bayesian setting. Special attention is given to the choice of priors. With this approach it is possible to benefit from a priori information about the frequency function even though the model structure is unknown.
A reliable quality estimate of a given model is a prerequisite for any reasonable use of the model. The model error consists of two different contributions: the bias error and the random error. In this contribution, it is shown that the size (variance) of the random error can be reliably estimated in the case where a true system description cannot be achieved in the model structure used. This consistent error estimate can differ considerably from the conventionally used variance estimate, which could thus be misleading.
A general linear least-squares estimation problem is considered. It is shown how the optimal filters for filtering and smoothing can be recursively and efficiently calculated under certain structural assumptions about the covariance functions involved. This structure is related to an index known as the displacement rank, which is a measure of non-Toeplitzness of a covariance kernel. When a state space type structure is added, it is shown how the Chandrasekhar equations for determining the gain of the Kalman-Bucy filter can be derived directly from the covariance function information; thus we are able to imbed this class of state-space problems into a general input-output framework.
Distributed algorithms for solving coupled semidefinite programs commonly require many iterations to converge. They also put high computational demands on the computational agents. In this paper, we show that in case the coupled problem has an inherent tree structure, it is possible to devise an efficient distributed algorithm for solving such problems. The proposed algorithm relies on predictor-corrector primal-dual interior-point methods, where we use a message-passing algorithm to compute the search directions distributedly. Message passing here is closely related to dynamic programming over trees. This allows us to compute the exact search directions in a finite number of steps, because computing the search directions requires a recursion over the tree structure and hence terminates after one upward and one downward pass through the tree. Furthermore, this number of steps can be computed a priori and depends only on the coupling structure of the problem. We use the proposed algorithm to analyze robustness of large-scale uncertain systems distributedly, and we test its performance using numerical examples.
We present a new solution for the fixed-interval linear least-squares smoothing of a random signal, finite dimensional or not, in additive white noise. By using the so-called Sobolev identity of radiative transfer theory, the smoothed estimate for stationary processes is expressed entirely in terms of time-invariant causal and anticausal filtering operations; these are interpreted from a stochastic point of view as giving certain constrained (time-invariant) filtered estimates of the signal. Then, by using a recently introduced notion of processes close to stationary, these results are extended in a natural way to general nonstationary processes. From a computational point of view, the representations presented here are particularly convenient, not only because time-invariant filters can be used to find the smoothed estimate, but also because a fast algorithm based on the so-called generalized Krein-Levinson recursions can be used to compute the time-invariant filters themselves.
Recursive algorithms in which random observations enter are studied in a fairly general framework. An important feature is that the observations may depend on previous "outputs" of the algorithm. The considered class of algorithms contains, e.g., stochastic approximation algorithms, recursive identification algorithms, and algorithms for adaptive control of linear systems. It is shown how a deterministic differential equation can be associated with the algorithm. Questions such as convergence with probability one, possible convergence points, and the asymptotic behavior of the algorithm can all be studied in terms of this differential equation. Theorems stating the precise relationships between the differential equation and the algorithm are given, as well as examples of applications of the results to problems in identification and adaptive control.
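The association between algorithm and differential equation can be illustrated on the simplest stochastic approximation scheme, Robbins-Monro estimation of a mean: averaging the update θ ← θ + γ(y − θ) gives the ODE dθ/dt = m − θ, whose unique (stable) equilibrium m is the predicted convergence point. A minimal sketch with illustrative synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

def robbins_monro(ys, theta0=0.0):
    """Stochastic approximation theta_{k+1} = theta_k + gamma_k*(y_k - theta_k)
    with gains gamma_k = 1/k. The associated deterministic ODE is
    d(theta)/dt = m - theta, so the iterate should converge to the mean m."""
    theta = theta0
    for k, y in enumerate(ys, start=1):
        theta += (y - theta) / k
    return theta

m = 2.5
ys = m + rng.standard_normal(200_000)      # observations with mean m
theta_hat = robbins_monro(ys)
```

With the gain 1/k this iterate is exactly the running sample mean, which makes the convergence to the ODE equilibrium easy to verify.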
The extended Kalman filter is an approximate filter for nonlinear systems, based on first-order linearization. Its use for the joint parameter and state estimation problem for linear systems with unknown parameters is well known and widely spread. Here a convergence analysis of this method is given. It is shown that in general, the estimates may be biased or divergent and the causes for this are displayed. Some common special cases where convergence is guaranteed are also given. The analysis gives insight into the convergence mechanisms and it is shown that with a modification of the algorithm, global convergence results can be obtained for a general case. The scheme can then be interpreted as maximization of the likelihood function for the estimation problem, or as a recursive prediction error algorithm.
Identification of black-box transfer function models is considered. It is assumed that the transfer function models possess a certain shift-property, which is satisfied for example by all polynomial-type models. Expressions for the variances of the transfer function estimates are derived, that are asymptotic both in the number of observed data and in the model orders. The result is that the joint covariance matrix of the transfer functions from input to output and from driving white noise source to the additive output disturbance, respectively, is proportional to the inverse of the joint spectrum matrix for the input and driving noise multiplied by the spectrum of the additive output noise. The factor of proportionality is the ratio of model order to number of data. This result is independent of the particular model structure used. The result is applied to evaluate the performance degradation due to variance for a number of typical model uses. Some consequences for input design are also drawn.
Least-squares estimation of the parameters of a vector difference equation model of a dynamic system is studied. A theorem for the convergence and consistency of the least-squares estimate is given that is valid under general feedback conditions.
A certain class of methods to select suitable models of dynamical stochastic systems from measured input-output data is considered. The methods are based on a comparison between the measured outputs and the outputs of a candidate model. Depending on the set of models that is used, such methods are known under a variety of names, like output-error methods, equation-error methods, maximum-likelihood methods, etc. General results are proved concerning the models that are selected asymptotically as the number of observed data tends to infinity. For these results it is not assumed that the true system necessarily can be exactly represented within the chosen set of models. In the particular case when the model set contains the system, general consistency results are obtained and commented upon. Rather than seeking an exact description of the system, it is usually more realistic to be content with a suitable approximation of the true system with reasonable complexity. The consequences of such a viewpoint are discussed here.
The convergence with probability one of a recently suggested recursive identification method by Landau is investigated. The positive realness of a certain transfer function is shown to play a crucial role, both for the proof of convergence and for convergence itself. A completely analogous analysis can be performed also for the extended least squares method and for the self-tuning regulator of Åström and Wittenmark. Explicit conditions for convergence of all these schemes are given. A more general structure is also discussed, as well as relations to other recursive algorithms.
A new method for closed-loop identification that allows fitting the model to the data with arbitrary frequency weighting is described and analyzed. Just as the direct method, this new method is applicable to systems with arbitrary feedback mechanisms. This is in contrast to other methods, such as the indirect method and the two-stage method, that assume linear feedback. The finite sample behavior of the proposed method is illustrated in a simulation study.
It is well known that the output error and Box-Jenkins model structures cannot be used for prediction error identification of unstable systems. The reason for this is that the predictors in this case generically will be unstable. Typically, this problem is handled by projecting the parameter vector onto the region of stability, which gives erroneous results when the underlying system is unstable. The main contribution of this work is that we derive modified, but asymptotically equivalent, versions of these model structures that can also be applied in the case of unstable systems.
We give simple proofs of formulas for converting linear least-squares filtered and smoothed estimates derived for one set of initial conditions to estimates valid for some other set. These are then used to study the possible advantages of first deliberately mischoosing the initial conditions so as to allow computational benefits to be obtained by using certain fast algorithms. In the course of this application we also obtain a new "dual" set of Chandrasekhar equations that provide a fast algorithm for fixed-point smoothing.
The problem of deriving so-called hard-error bounds for estimated transfer functions is addressed. A hard bound is one that is sure to be satisfied, i.e. the true system Nyquist plot will be confined with certainty to a given region, provided that the underlying assumptions are satisfied. By blending a priori knowledge and information obtained from measured data, it is shown how the uncertainty of transfer function estimates can be quantified. The emphasis is on errors due to model mismatch. The effects of unmodeled dynamics can be considered as bounded disturbances. Hence, techniques from set membership identification can be applied to this problem. The approach taken corresponds to weighted least-squares estimation, and provides hard frequency-domain transfer function error bounds. The main assumptions used in the current contribution are: that the measurement errors are bounded, that the true system is indeed linear with a certain degree of stability, and that there is some knowledge about the shape of the true frequency response.
The problem of estimating the transfer function of a linear, stochastic system is considered. The transfer function is parametrized as a black box and no given order is chosen a priori. This means that the model orders may increase to infinity as the number of observed data tends to infinity. The consistency and convergence properties of the resulting transfer function estimates are investigated. Asymptotic expressions for the variances and distributions of these estimates are also derived for the case that the model orders increase. It is shown that the variance of the transfer function estimate at a certain frequency is asymptotically given by the noise-to-signal ratio at that frequency multiplied by the model-order-to-number-of-data-points ratio.
Checking non-negativity of polynomials using sum-of-squares has recently been popularized and found many applications in control. Although the method is based on convex programming, the optimization problems rapidly grow and result in huge semidefinite programs. Additionally, they often become increasingly ill-conditioned. To alleviate these problems, it is important to exploit properties of the analyzed polynomial, and post-process the obtained solution. This technical note describes how the sum-of-squares module in the MATLAB toolbox YALMIP handles these issues.
Two noniterative subspace-based algorithms that identify linear, time-invariant MIMO (multi-input/multi-output) systems from frequency response data are presented. The algorithms are related to recent time-domain subspace identification techniques. The first algorithm uses data equidistantly spaced in frequency and is strongly consistent under weak noise assumptions. The second algorithm uses arbitrary frequency spacing and is strongly consistent under more restrictive noise assumptions. Promising results are obtained when the algorithms are applied to real frequency data originating from a large flexible structure.
Model predictive control (MPC) is one of the most widely spread advanced control schemes in industry today. In MPC, a constrained finite-time optimal control (CFTOC) problem is solved at each iteration in the control loop. The CFTOC problem can be solved using, for example, second-order methods, such as interior-point or active-set methods, where the computationally most demanding part often consists of computing the sequence of second-order search directions. Each search direction can be computed by solving a set of linear equations that corresponds to solving an unconstrained finite-time optimal control (UFTOC) problem. In this paper, different direct (noniterative) parallel algorithms for solving UFTOC problems are presented. The parallel algorithms are all based on a recursive variable elimination and solution propagation technique. Numerical evaluations of one of the parallel algorithms indicate that a significant boost in performance can be obtained, which can facilitate high-performance second-order MPC solvers.
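For context, a UFTOC problem of the kind referred to above is a finite-horizon LQR problem, and the serial baseline against which parallel algorithms are compared is the classical backward Riccati recursion followed by a forward simulation. A minimal sketch under simplifying assumptions (time-invariant matrices, no cross-penalty; the system data are illustrative):

```python
import numpy as np

def riccati_uftoc(A, B, Q, R, QN, N, x0):
    """Solve min sum_{t<N} (x_t'Q x_t + u_t'R u_t) + x_N'QN x_N
    subject to x_{t+1} = A x_t + B u_t, by a backward Riccati
    recursion (cost-to-go matrices and gains) and a forward simulation."""
    P, gains = QN, []
    for _ in range(N):                       # backward pass: t = N-1, ..., 0
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)  # feedback gain at this stage
        P = Q + A.T @ P @ (A - B @ K)        # cost-to-go update
        gains.append(K)
    gains.reverse()
    xs, us, x = [x0], [], x0                 # forward pass
    for t in range(N):
        u = -gains[t] @ x
        x = A @ x + B @ u
        us.append(u)
        xs.append(x)
    return xs, us, P                         # P is the cost-to-go matrix at t = 0
```

By the dynamic-programming identity, the cost achieved by the simulated trajectory equals x₀ᵀP₀x₀, which gives a convenient correctness check for any (serial or parallel) UFTOC solver.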
In Model Predictive Control (MPC), the control input is computed by solving a constrained finite-time optimal control (CFTOC) problem at each sample in the control loop. The main computational effort when solving the CFTOC problem using an active-set (AS) method is often spent on computing the search directions, which in MPC corresponds to solving unconstrained finite-time optimal control (UFTOC) problems. This is commonly performed using Riccati recursions or generic sparsity-exploiting algorithms. In this work, the focus is on efficient search direction computations for AS type methods. The system of equations to be solved at each AS iteration is changed only by a low-rank modification of the previous one, and exploiting this structured change is important for the performance of AS type solvers. In this paper, theory for how to exploit these low-rank changes by modifying the Riccati factorization between AS iterations in a structured way is presented. A numerical evaluation of the proposed algorithm shows that the computation time can be significantly reduced by modifying, instead of re-computing, the Riccati factorization. This speed-up can be important for AS type solvers used for linear, nonlinear and hybrid MPC.
This paper develops a general and very simple construction for complete orthonormal bases for system identification. This construction provides a unifying formulation of many previously studied orthonormal bases since the common FIR and recently popular Laguerre and two-parameter Kautz model structures are restrictive special cases of the construction presented here. However, in contrast to these special cases, the basis vectors in the unifying construction of this paper can have arbitrary pole placement according to the prior information the user wishes to inject. Results characterizing the completeness of the bases and the accuracy properties of models estimated using the bases are provided.
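As an illustration of one restrictive special case mentioned above, the discrete Laguerre basis with a single real pole a can be generated by cascading a first-order low-pass stage with all-pass sections; the following sketch (a common convention, with assumed names, not taken from the paper) checks orthonormality numerically:

```python
import numpy as np

def laguerre_basis(a, n_basis, n_samples):
    """Impulse responses of the discrete Laguerre basis with pole a (|a| < 1):
    L_k(q) = sqrt(1 - a^2)/(q - a) * ((1 - a*q)/(q - a))^(k-1)."""
    def lp_filter(x):
        # first-order low-pass stage: sqrt(1 - a^2) / (q - a)
        y = np.zeros_like(x)
        for n in range(1, len(x)):
            y[n] = a * y[n - 1] + np.sqrt(1 - a**2) * x[n - 1]
        return y

    def allpass(x):
        # all-pass section: (1 - a*q)/(q - a)
        y = np.zeros_like(x)
        y[0] = -a * x[0]
        for n in range(1, len(x)):
            y[n] = a * y[n - 1] - a * x[n] + x[n - 1]
        return y

    impulse = np.zeros(n_samples); impulse[0] = 1.0
    h = lp_filter(impulse)
    basis = [h]
    for _ in range(n_basis - 1):
        h = allpass(h)   # each extra all-pass stage gives the next basis function
        basis.append(h)
    return np.array(basis)

H = laguerre_basis(a=0.8, n_basis=4, n_samples=400)
G = H @ H.T   # Gram matrix: approximately the identity for an orthonormal basis
```

Replacing the repeated all-pass factor by sections with distinct poles is the kind of generalization the unifying construction allows.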
The purpose of this paper is threefold. Firstly, it is to establish that contrary to what might be expected, the accuracy of well-known and frequently used asymptotic variance results can depend on choices of fixed poles or zeros in the model structure. Secondly, it is to derive new variance expressions that can provide greatly improved accuracy while also making explicit the influence of any fixed poles or zeros. This is achieved by employing certain new results on generalized Fourier series and the asymptotic properties of Toeplitz-like matrices in such a way that the new variance expressions presented here encompass pre-existing ones as special cases. Thirdly, via this latter analysis, a new perspective emerges on recent work pertaining to the use of orthonormal basis structures in system identification. Namely, that orthonormal bases are much more than an implementational option offering improved numerical properties. In fact, they are an intrinsic part of estimation since, as shown here, orthonormal bases quantify the asymptotic variability of the estimates whether or not they are actually employed in calculating them.
Linear residual generation for differential-algebraic equation (DAE) systems is considered within a polynomial framework where a complete characterization and parameterization of all residual generators is presented. Further, a condition for fault detectability in DAE systems is given. Based on the characterization of all residual generators, a design strategy for residual generators for DAE systems is presented. The design strategy guarantees that the resulting residual generator is sensitive to all the detectable faults and also that the residual generator is of lowest possible order. In all results derived, no assumption about observability or controllability is needed. In particular, special care has been devoted to assure the lowest-order property also for non-controllable systems. © 2006 IEEE.
An important issue in diagnosis research is design methods for residual generation. One method is the Chow-Willsky scheme, which is here extended such that it becomes universal in the sense that, for both discrete and continuous linear systems, it is shown to be able to generate all possible parity functions. It is shown that previous extensions to the Chow-Willsky scheme are not universal, which is the case when there are dynamics controllable from the faults but not from the inputs or disturbances. Also included are two new conditions on the process for fault detectability and strong fault detectability.
An important issue in diagnosis research is design methods for residual generation. One method is the Chow-Willsky scheme. Here, the Chow-Willsky scheme is extended such that it becomes universal in the sense that, for both discrete and continuous linear systems, it is shown to be able to generate all possible parity functions. This result means that it can also be used to design all possible residual generators. It is shown that previous extensions to the Chow-Willsky scheme are not universal, which is the case when there exist dynamics controllable from the faults but not from the inputs or disturbances. Also included here are two new conditions on the process for fault detectability and strong fault detectability. A general condition for strong fault detectability has not been presented elsewhere.
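The parity-space idea behind the Chow-Willsky scheme can be sketched for the basic LTI case with an additive output fault (names and the example system are illustrative assumptions; the universality extensions discussed above are not captured here):

```python
import numpy as np

def parity_residual(A, B, C, D, s, u_seq, y_seq):
    """Chow-Willsky-style parity residual over a window of s+1 samples.

    Stacks Y = O x_t + H U, picks parity vectors w with w'O = 0, and
    returns r = w'(Y - H U): zero in the fault-free case, nonzero otherwise.
    """
    nx, ny, nu = A.shape[0], C.shape[0], B.shape[1]
    # Extended observability matrix O and block-Toeplitz input-effect matrix H
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
    H = np.zeros(((s + 1) * ny, (s + 1) * nu))
    for i in range(s + 1):
        for j in range(i + 1):
            blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
            H[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = blk
    U_, _, _ = np.linalg.svd(O)
    W = U_[:, nx:].T   # rows span the left null space of O (the parity space)
    return W @ (np.concatenate(y_seq) - H @ np.concatenate(u_seq))

# Fault-free simulation of a small example system
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]); D = np.array([[0.0]])
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
u_seq, y_seq = [], []
for _ in range(3):
    u = rng.standard_normal(1)
    y_seq.append(C @ x + D @ u); u_seq.append(u)
    x = A @ x + B @ u
r = parity_residual(A, B, C, D, 2, u_seq, y_seq)                  # ~0: no fault
r_fault = parity_residual(A, B, C, D, 2, u_seq,
                          [y_seq[0], y_seq[1], y_seq[2] + 1.0])   # sensor fault
```

The point of the extensions above is precisely that this basic construction misses some parity functions when there are dynamics excited only by the faults.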
Bayesian nonparametric approaches have recently been introduced in the system identification scenario, where the impulse response is modeled as the realization of a zero-mean Gaussian process whose covariance (kernel) has to be estimated from data. In this scheme, the quality of the estimates crucially depends on the parametrization of the covariance of the Gaussian process. A family of kernels that have been shown to be particularly effective in the system identification framework is the family of Diagonal/Correlated (DC) kernels. Maximum entropy properties of a related family of kernels, the Tuned/Correlated (TC) kernels, have recently been pointed out in the literature. In this technical note, we show that maximum entropy properties indeed extend to the whole family of DC kernels. The maximum entropy interpretation can be exploited in conjunction with results on matrix completion problems in the graphical models literature to shed light on the structure of the DC kernel. In particular, we prove that the DC kernel admits a closed-form factorization, inverse, and determinant. These results can be exploited both to improve the numerical stability and to reduce the computational complexity associated with the computation of the DC estimator.
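Assuming the standard parameterization K(i,j) = c λ^((i+j)/2) ρ^|i-j| (with TC the special case ρ = √λ), the closed-form structure mentioned above can be checked numerically: the DC kernel factors as c·D·T·D with D diagonal and T an AR(1)-type Toeplitz matrix, so its inverse is tridiagonal and its determinant is c^n λ^(n(n+1)/2) (1-ρ²)^(n-1). A sketch with illustrative parameter values:

```python
import numpy as np

def dc_kernel(n, c, lam, rho):
    """DC kernel K[i,j] = c * lam**((i+j)/2) * rho**|i-j|, for i, j = 1..n."""
    i = np.arange(1, n + 1)
    return (c * lam ** ((i[:, None] + i[None, :]) / 2)
              * rho ** np.abs(i[:, None] - i[None, :]))

n, c, lam, rho = 6, 1.0, 0.8, 0.5
K = dc_kernel(n, c, lam, rho)
Kinv = np.linalg.inv(K)
# Closed-form structure: everything outside the tridiagonal band of K^{-1} is zero
off_band = Kinv - np.tril(np.triu(Kinv, -1), 1)
# Closed-form determinant from the factorization K = c * D @ T @ D
det_closed_form = c**n * lam**(n * (n + 1) / 2) * (1 - rho**2)**(n - 1)
```

Exploiting the tridiagonal inverse instead of forming K^{-1} densely is what reduces the cost and improves the conditioning of the DC estimator.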
This paper addresses the conversion of discrete-time piecewise affine (PWA) state space models into input-output form. Necessary and sufficient conditions for the existence of equivalent input-output representations of a given PWA state space model are derived. Connections to the observability properties of PWA models are investigated. Under a technical assumption, it is shown that every finite-time observable PWA model admits an equivalent input-output representation. When an equivalent input-output model exists, a constructive procedure is presented to derive its equations. Several examples illustrate the proposed results.
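A discrete-time PWA state-space model of the kind considered above can be sketched as follows (a minimal two-mode simulation for illustration only; the mode matrices, the guard, and all names are assumptions, and the input-output conversion itself is not reproduced here):

```python
import numpy as np

def simulate_pwa(modes, guard, x0, u_seq):
    """Simulate a discrete-time PWA state-space model.

    modes: list of (A, B, C) tuples; guard(x) returns the active mode index.
    Returns the output sequence y_0, ..., y_{N-1}.
    """
    x = np.asarray(x0, dtype=float)
    ys = []
    for u in u_seq:
        A, B, C = modes[guard(x)]        # select the active affine dynamics
        ys.append(C @ x)
        x = A @ x + B @ np.atleast_1d(u)
    return np.array(ys)

# Two modes split by the sign of the first state component
modes = [(np.array([[0.5, 0.0], [0.1, 0.3]]),
          np.array([[1.0], [0.0]]), np.array([[1.0, 0.0]])),
         (np.array([[0.2, 0.1], [0.0, 0.4]]),
          np.array([[0.0], [1.0]]), np.array([[1.0, 0.0]]))]
guard = lambda x: 0 if x[0] >= 0 else 1
y = simulate_pwa(modes, guard, [1.0, 0.0], [1.0, -2.0, 0.5, 0.0])
```

The conversion problem studied in the paper asks when such a state-space recursion can be replaced by an equivalent recursion in past inputs and outputs alone.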