The expectation-maximization algorithm is a commonly employed tool for system identification. However, for a large class of state-space models, the maximization step cannot be solved analytically. In these situations, a natural remedy is the expectation-maximization gradient algorithm, which replaces the maximization step with a single iteration of Newton's method. We propose alternative expectation-maximization algorithms that instead replace the maximization step with a single iteration of some other well-known optimization method. These algorithms parallel the expectation-maximization gradient algorithm while relaxing the assumption of a concave objective function. The benefit of the proposed expectation-maximization algorithms is demonstrated with examples based on standard observation models in tracking and localization.
Funding: Swedish Foundation for Strategic Research (SSF), via the project ASSEMBLE.
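The core idea of replacing the exact maximization step with a single iteration of an optimization method can be sketched on a toy problem. The example below is a hypothetical illustration, not one of the paper's tracking or localization models: it fits the means of a two-component, unit-variance Gaussian mixture with equal weights, and the M-step is replaced by one gradient-ascent step on the Q-function (a generalized EM scheme in the spirit of the expectation-maximization gradient algorithm). The data, step size, and iteration count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: mixture of two unit-variance Gaussians.
true_means = np.array([-2.0, 2.0])
labels = rng.integers(0, 2, size=500)
x = rng.normal(true_means[labels], 1.0)

def em_single_gradient_step(x, mu, n_iter=200, step=1.0):
    """Generalized EM: the exact M-step is replaced by a single
    gradient-ascent step on Q(mu | mu_k) at each iteration."""
    for _ in range(n_iter):
        # E-step: responsibilities under equal mixing weights and
        # unit variances (log-density up to a shared constant).
        log_p = -0.5 * (x[:, None] - mu[None, :]) ** 2
        w = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # Partial M-step: one gradient step on Q instead of the
        # closed-form maximizer mu_j = sum_i w_ij x_i / sum_i w_ij.
        # dQ/dmu_j = sum_i w_ij (x_i - mu_j); the 1/n scaling keeps
        # the step conservative, since sum_i w_ij <= n.
        grad = (w * (x[:, None] - mu[None, :])).sum(axis=0)
        mu = mu + step * grad / len(x)
    return mu

mu_hat = em_single_gradient_step(x, mu=np.array([-1.0, 1.0]))
```

Because each partial M-step still increases the Q-function, the monotone-likelihood property of EM is preserved even though the maximization is never carried out exactly.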