Regularization of simple linear regression models for system identification is a recent and much-studied problem. Several parameterizations ("kernels") of the regularization matrix have been suggested, together with different ways of estimating ("tuning") their parameters. This contribution develops an asymptotic view of the problem of kernel tuning and selection. It is shown that the SURE (Stein's unbiased risk estimator) approach to parameter tuning provides an asymptotically consistent estimate of the hyperparameters that are optimal in a mean-square error (MSE) sense. At the same time, it is shown that the common marginal likelihood (empirical Bayes) approach does not enjoy that property. (C) 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
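The two tuning strategies contrasted in the abstract can be illustrated on a ridge-regularized linear regression. The following is a minimal numpy sketch, not the paper's method: it assumes a known noise variance, a scalar regularization parameter (a single-hyperparameter "kernel"), and an illustrative simulated data set. SURE picks the ridge parameter minimizing an unbiased estimate of the prediction risk, while the empirical Bayes route maximizes the marginal likelihood of the data under a Gaussian prior on the parameters.

```python
import numpy as np

def ridge_fit(Phi, y, lam):
    # Ridge estimate: theta_hat = (Phi^T Phi + lam I)^{-1} Phi^T y
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

def sure(Phi, y, lam, sigma2):
    # Stein's unbiased risk estimate of the prediction MSE for ridge:
    #   ||y - H y||^2 + 2 sigma^2 tr(H) - n sigma^2,
    # with hat matrix H = Phi (Phi^T Phi + lam I)^{-1} Phi^T.
    n, d = Phi.shape
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T)
    resid = y - H @ y
    return resid @ resid + 2.0 * sigma2 * np.trace(H) - n * sigma2

def neg_log_marglik(Phi, y, tau, sigma2):
    # Negative log marginal likelihood under theta ~ N(0, tau I), e ~ N(0, sigma2 I):
    # marginally y ~ N(0, tau Phi Phi^T + sigma2 I).
    n = Phi.shape[0]
    Sigma = tau * (Phi @ Phi.T) + sigma2 * np.eye(n)
    sign, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (logdet + y @ np.linalg.solve(Sigma, y))

# Illustrative simulated identification data (sizes and signal are assumptions).
rng = np.random.default_rng(0)
n, d, sigma2 = 200, 10, 0.5
Phi = rng.standard_normal((n, d))
theta_true = np.exp(-0.4 * np.arange(d))   # smoothly decaying "impulse response"
y = Phi @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

# SURE tuning: minimize the risk estimate over a grid of ridge parameters.
lams = np.logspace(-3, 3, 61)
sure_vals = np.array([sure(Phi, y, l, sigma2) for l in lams])
lam_sure = lams[np.argmin(sure_vals)]

# Empirical Bayes tuning: maximize the marginal likelihood over the prior scale.
taus = np.logspace(-3, 3, 61)
ml_vals = np.array([neg_log_marglik(Phi, y, t, sigma2) for t in taus])
tau_ml = taus[np.argmin(ml_vals)]
lam_ml = sigma2 / tau_ml   # Gaussian prior scale tau maps to ridge lam = sigma2 / tau

theta_sure = ridge_fit(Phi, y, lam_sure)
theta_ml = ridge_fit(Phi, y, lam_ml)
```

In this single-hyperparameter setting both criteria typically select a sensible amount of regularization; the abstract's asymptotic result concerns their different consistency properties as estimators of the MSE-optimal hyperparameters.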