Joint State and Parameter Estimation using Iterative Methods
2026 (English) Licentiate thesis, monograph (Other academic)
Abstract [en]
This thesis addresses the long-standing problem of joint state and parameter estimation in state-space models, with particular emphasis on the small-sample regime, where the classical maximum likelihood (ML) estimator can become biased and statistically inefficient. To overcome these drawbacks, we analyze the joint maximum a posteriori-maximum likelihood (JMAP-ML) principle and show that, in the state-space setting, its parameter update step induces an errors-in-variables (EIV) structure because it uses only the first-order moment of the state estimates while ignoring their covariances. This observation motivates incorporating EIV-based estimators into the parameter update to reduce bias and improve robustness. Building on this, we develop an iterative alternating framework that unifies expectation-maximization (EM)/ML, standard JMAP-ML, and EIV-enhanced JMAP-ML variants, enabling principled choices across different data regimes.
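The alternating scheme can be illustrated on a toy scalar model. The sketch below is a hypothetical illustration, not the thesis's exact algorithm: it alternates between state estimation (a Kalman filter plus Rauch-Tung-Striebel smoother with the parameter fixed) and a least-squares parameter update that, as described above, uses only the first-order moment of the smoothed states and ignores their covariances.

```python
import numpy as np

# Toy model (assumed for illustration): x[t+1] = a*x[t] + w[t], y[t] = x[t] + v[t],
# with known noise variances q, r and unknown transition parameter a.
rng = np.random.default_rng(0)
a_true, q, r, T = 0.8, 0.1, 0.1, 400
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

def rts_smoother(y, a, q, r):
    """Kalman filter + RTS smoother for the scalar model, prior x0 ~ N(0, 1)."""
    T = len(y)
    xf = np.zeros(T); pf = np.zeros(T)   # filtered mean / variance
    xp = np.zeros(T); pp = np.zeros(T)   # predicted mean / variance
    xprev, pprev = 0.0, 1.0
    for t in range(T):
        xp[t] = a * xprev
        pp[t] = a * a * pprev + q
        if t == 0:
            xp[0], pp[0] = 0.0, 1.0      # use the prior at the first step
        k = pp[t] / (pp[t] + r)          # Kalman gain
        xf[t] = xp[t] + k * (y[t] - xp[t])
        pf[t] = (1 - k) * pp[t]
        xprev, pprev = xf[t], pf[t]
    xs = xf.copy()                       # backward (RTS) pass
    for t in range(T - 2, -1, -1):
        g = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

a_hat = 0.5                              # initial parameter guess
for _ in range(20):                      # alternating (JMAP-ML-style) iterations
    xs = rts_smoother(y, a_hat, q, r)
    # LS update on the smoothed states only: ignoring their covariances is
    # what induces the errors-in-variables structure discussed above.
    a_hat = (xs[:-1] @ xs[1:]) / (xs[:-1] @ xs[:-1])

print(a_hat)
```

Because the regression treats the smoothed states as if they were noise-free, the recovered parameter is close to, but systematically not identical to, the true value, which is the bias the EIV-enhanced variants target.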
On the state-estimation side, we study smoothing under uncertain model parameters and propose Partial EIV (PEIV)-based and bias-compensated least squares (BCLS)-based Kalman smoothers, which explicitly account for parametric uncertainty in the smoothing step. Simulations show that these smoothers are more accurate than traditional smoothers when the parameter uncertainty is non-negligible.
For the parameter estimation problem, we review and integrate methods that either compensate for or explicitly model uncertainty in the regressors: bias-compensated least squares and a suite of total least squares (TLS) methods (weighted, constrained, partial). When the noise in the regressor is independent of the regressor itself, both approaches improve estimation accuracy over standard least squares. The bias-compensated methods rely on a probability-limit approximation to the analytical bias expression and therefore require sufficient data to be reliable. When the regressor noise instead depends on the regressor, the BCLS method mitigates this dependence more effectively than the TLS-type methods, provided the bias approximation is accurate, i.e., enough data is available.
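The attenuation bias of least squares under regressor noise, and the probability-limit correction behind BCLS, can be seen in a minimal scalar sketch (an assumed setting for illustration, not the thesis's exact estimator): with known regressor-noise variance, BCLS subtracts the expected noise contribution from the Gram term.

```python
import numpy as np

# Scalar EIV regression: y = a*x + e, but only x_noisy = x + n is observed,
# with known noise variance sigma_n2 and n independent of x.
rng = np.random.default_rng(1)
N, a_true, sigma_n2 = 20000, 2.0, 0.5
x = rng.normal(0, 1.0, N)                             # true regressor
x_noisy = x + rng.normal(0, np.sqrt(sigma_n2), N)     # observed regressor
y = a_true * x + rng.normal(0, 0.1, N)

# Plain LS: biased toward zero by the factor var(x) / (var(x) + sigma_n2).
a_ls = (x_noisy @ y) / (x_noisy @ x_noisy)
# BCLS: remove the probability-limit noise contribution N*sigma_n2 from
# the Gram term before dividing.
a_bcls = (x_noisy @ y) / (x_noisy @ x_noisy - N * sigma_n2)

print(a_ls, a_bcls)   # a_ls is attenuated (roughly a_true * 2/3 here); a_bcls is near a_true
```

The correction term is exact only in the probability limit, which is why, as noted above, bias-compensated methods need sufficient data to be reliable.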
Beyond JMAP-ML and its variants, maximum likelihood, implemented directly or via EM under a state-space formulation, remains a popular choice owing to its asymptotic optimality. To assess batch-size effects, JMAP-ML-type methods are compared with EM, highlighting their practical differences. A key difficulty in joint state and parameter estimation is that the state estimates are noisy and correlated with the process and/or measurement noise. A central implication for TLS-based identification is that the regressor (the state sequence) and its perturbation are driven by the same process noise, violating the independence assumption and explaining the degraded performance of TLS in this setting.
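For context, basic TLS can be computed from the SVD of the augmented data matrix; the sketch below (an assumed illustration, not thesis code) uses independent regressor noise of the same variance as the output noise, the benign case in which TLS is consistent. In the state-space setting discussed above, the state estimates and their errors share the same process noise, so this independence assumption fails.

```python
import numpy as np

# Basic TLS: minimize ||[dX, dy]||_F subject to (X + dX) b = y + dy.
# The solution comes from the right singular vector of [X, y] associated
# with the smallest singular value.
rng = np.random.default_rng(2)
N, b_true = 5000, np.array([1.5, -0.7])
X = rng.normal(0, 1.0, (N, 2))
X_noisy = X + rng.normal(0, 0.3, (N, 2))   # regressor noise independent of X
y = X @ b_true + rng.normal(0, 0.3, N)     # output noise, same std as above

Z = np.column_stack([X_noisy, y])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                                  # singular vector for smallest singular value
b_tls = -v[:-1] / v[-1]                     # standard TLS parameter recovery

print(b_tls)
```

When the perturbation in `X_noisy` were instead generated by the same noise process driving the regressor, as happens with state estimates, this estimator loses its consistency, which is the degradation the text above refers to.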
Extensive Monte Carlo simulations support several main conclusions. First, no method wins universally across all scenarios. Second, with small samples, the standard and TLS-enhanced JMAP-ML variants are typically the most competitive, whereas in the medium-sample regime the BCLS-enhanced JMAP-ML outperforms them, reflecting its probability-limit-based bias correction. Third, in the large-data regime, maximum likelihood (direct or via EM) can surpass the alternatives due to its asymptotic optimality. Overall, the thesis advocates explicit uncertainty modeling in both the parameter- and state-estimation stages and presents a unified framework that practitioners can tune to data size, noise characteristics, and model fidelity.
Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2026, p. 121
Series
Linköping Studies in Science and Technology. Licentiate Thesis, ISSN 0280-7971 ; 2027
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:liu:diva-221072
DOI: 10.3384/9789181184587
ISBN: 9789181184570 (print)
ISBN: 9789181184587 (electronic)
OAI: oai:DiVA.org:liu-221072
DiVA, id: diva2:2036215
Presentation
2026-03-06, BL32, Nobel, Campus Valla, Linköping, 09:00 (English)
Note
Funding Agencies: This work has been funded by a distinguished professor grant on Scalable Kalman filter and a project grant (2021-05608), both from the Swedish Research Council.
2026-02-06 Bibliographically approved