In the last decade, kernel-based regularization methods (KRMs) have been widely used for stable impulse response estimation in system identification. Their favorable performance over the classic maximum likelihood/prediction error method (ML/PEM) has been verified by extensive simulations. Recently, we made a surprising observation: for some data sets and kernels, no matter how the hyper-parameters are tuned, the regularized least squares estimate cannot achieve a higher model fit than the least squares (LS) estimate, which implies that in such cases regularization cannot improve the LS estimate. This paper therefore focuses on how to understand this observation. To this end, we first introduce the squared error (SE) criterion and the corresponding oracle hyper-parameter estimator, defined as the minimizer of the SE criterion. We then derive necessary and sufficient conditions under which the regularization cannot improve the LS estimate, and we show that the probability of this happening is greater than zero. The theoretical findings are demonstrated through numerical simulations, and at the same time the abnormal simulation outcome in which this probability appears to be nearly zero is explained: it is caused by the ill-conditioning of the kernel matrix, the Gram matrix, or both.
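The phenomenon can be illustrated numerically. The sketch below is not the authors' code; the FIR setup, the TC kernel, and all parameter values are illustrative assumptions. It compares the SE of the LS estimate with the oracle SE of the regularized LS estimate obtained by sweeping the kernel hyper-parameters on a single data set.

# Minimal sketch (illustrative assumptions throughout): check whether
# regularized least squares can beat plain LS in squared error (SE)
# for one data set and a TC kernel swept over its hyper-parameters.
import numpy as np

rng = np.random.default_rng(0)
n, N, sigma2 = 20, 200, 0.1            # FIR order, data length, noise variance
theta0 = 0.8 ** np.arange(n)            # assumed true impulse response
u = rng.standard_normal(N)              # white-noise input
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])     # regression matrix
y = Phi @ theta0 + np.sqrt(sigma2) * rng.standard_normal(N)

G = Phi.T @ Phi                          # Gram matrix
theta_ls = np.linalg.solve(G, Phi.T @ y)         # LS estimate
se_ls = np.sum((theta_ls - theta0) ** 2)         # SE of the LS estimate

def tc_kernel(c, lam):
    """TC kernel matrix P with P[i, j] = c * lam ** max(i, j)."""
    idx = np.arange(n)
    return c * lam ** np.maximum.outer(idx, idx)

# Sweep hyper-parameters and record the best (oracle) SE of the
# regularized LS estimate theta_r = (G + sigma2 * P^{-1})^{-1} Phi^T y.
best_se = np.inf
for c in np.logspace(-2, 2, 30):
    for lam in np.linspace(0.5, 0.99, 30):
        P = tc_kernel(c, lam)
        theta_r = np.linalg.solve(G + sigma2 * np.linalg.inv(P), Phi.T @ y)
        best_se = min(best_se, np.sum((theta_r - theta0) ** 2))

print(f"SE of LS estimate:                 {se_ls:.4f}")
print(f"Oracle SE of regularized estimate: {best_se:.4f}")
print("Regularization improves LS" if best_se < se_ls
      else "Regularization cannot improve LS for this data set/kernel")

Repeating such a sweep over many independent noise realizations gives an empirical estimate of the probability that regularization cannot improve the LS estimate, which is the quantity studied in the paper.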
Funding: National Key R&D Program of China (2018YFA0703800); National Natural Science Foundation of China (62273287); Shenzhen Science and Technology Innovation Council (JCYJ20220530143418040); Thousand Youth Talents Plan of the central government of China.