A generalization of the method developed by Adam, Psarrakos and Tsatsomeros to find inequalities for the eigenvalues of a complex matrix A using knowledge of the largest eigenvalues of its Hermitian part H(A) is presented. The numerical range, or field of values, of A can be constructed as the intersection of half-planes determined by the largest eigenvalue of H(e^{iθ}A). Adam, Psarrakos and Tsatsomeros showed that, using the two largest eigenvalues of H(A), the eigenvalues of A satisfy a cubic inequality, and the envelope of such cubic curves defines a region in the complex plane smaller than the numerical range but still containing the spectrum of A. Here it is shown how, using the three largest eigenvalues of H(A) or more, one obtains new inequalities for the eigenvalues of A and new envelope-type regions containing the spectrum of A. (C) 2018 Elsevier Inc. All rights reserved.
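The half-plane construction described above is easy to sample numerically. The following is a minimal sketch (the function name `boundary_points` is ours, not from the paper): for each rotation angle θ, the top eigenvector of the Hermitian part of e^{iθ}A yields a boundary point v*Av of the field of values.

```python
import numpy as np

def boundary_points(A, n_angles=360):
    """Sample boundary points of the numerical range (field of values) of A.
    For each angle theta, the largest eigenvalue of the Hermitian part
    H(e^{i theta} A) determines a supporting half-plane, and the
    corresponding unit eigenvector v yields the boundary point v* A v."""
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * A
        H = (R + R.conj().T) / 2        # Hermitian part of e^{i theta} A
        w, V = np.linalg.eigh(H)        # eigenvalues in ascending order
        v = V[:, -1]                    # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)
    return np.array(pts)
```

A convenient sanity check: for a Hermitian A the numerical range collapses to the real segment between the extreme eigenvalues, so all sampled points should be real and lie in that interval.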

We show that the probability that a 2×2×2 tensor with elements from a standard normal distribution has rank 2 is π/4, and that the probability that a 3×3×2 tensor has rank 3 is 1/2. In the proof, results on the expected number of real generalized eigenvalues of random matrices are applied. For n×n×2 tensors with n≥4 we also present some new aspects of their rank.
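The first claim can be checked by Monte Carlo. The sketch below uses the standard characterization (not spelled out in the abstract, but classical) that a generic 2×2×2 tensor has rank 2 exactly when the pencil of its two frontal slices has real generalized eigenvalues, i.e. when the characteristic polynomial of M = T₁⁻¹T₀ has a positive discriminant:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_rank_2(T):
    """Generic 2x2x2 tensor T has rank 2 iff the generalized eigenvalues
    of its two frontal slices are real, i.e. the discriminant of the
    characteristic polynomial of M = T1^{-1} T0 is positive."""
    M = np.linalg.solve(T[:, :, 1], T[:, :, 0])
    disc = np.trace(M) ** 2 - 4.0 * np.linalg.det(M)
    return disc > 0

n_trials = 50_000
hits = sum(is_rank_2(rng.standard_normal((2, 2, 2))) for _ in range(n_trials))
p_hat = hits / n_trials   # should approach pi/4 ~ 0.785 as n_trials grows
```

With 50 000 trials the standard error of the estimate is below 0.002, so the agreement with π/4 is well resolved.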

Partial least squares is a common technique for multivariate regression. The procedure is recursive and in each step basis vectors are computed for the explaining variables and the solution vectors. A linear model is fitted by projection onto the span of the basis vectors. The procedure is mathematically equivalent to Golub-Kahan bidiagonalization, which is a Krylov method, and which is equivalent to a pair of matrix factorizations. The vectors of regression coefficients and prediction are non-linear functions of the right-hand side. An algorithm for computing the Fréchet derivatives of these functions is derived, based on perturbation theory for the matrix factorizations. From the Fréchet derivative of the prediction vector one can compute the number of degrees of freedom, which can be used as a stopping criterion for the recursion. A few numerical examples are given.
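The equivalence with Golub-Kahan bidiagonalization can be sketched concretely. The code below is an illustration of the standard Paige-Saunders recurrence started from the right-hand side y (not the authors' implementation): it builds orthonormal bases U and V and a lower bidiagonal B with X V = U B, and after k steps the PLS regression coefficients lie in span(V).

```python
import numpy as np

def golub_kahan(X, y, k):
    """k steps of Golub-Kahan bidiagonalization of X started from y.
    Returns orthonormal U (m x (k+1)), orthonormal V (n x k) and lower
    bidiagonal B ((k+1) x k) satisfying X @ V = U @ B."""
    m, n = X.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = y / np.linalg.norm(y)
    for i in range(k):
        v = X.T @ U[:, i]
        if i > 0:
            v -= B[i, i - 1] * V[:, i - 1]
        B[i, i] = np.linalg.norm(v)         # diagonal entry alpha
        V[:, i] = v / B[i, i]
        u = X @ V[:, i] - B[i, i] * U[:, i]
        B[i + 1, i] = np.linalg.norm(u)     # subdiagonal entry beta
        U[:, i + 1] = u / B[i + 1, i]
    return U, V, B
```

Fitting the linear model then reduces to a small bidiagonal least-squares problem in the coordinates of V, which is the projection step the abstract refers to.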

The problem of solving linear equations, or equivalently of inverting matrices, arises in many fields. Efficient recursive algorithms for finding the inverses of Toeplitz or displacement-type matrices have been known for some time. By introducing a way of characterizing matrices in terms of their “distance” from being Toeplitz, a natural extension of these algorithms is obtained. Several new inversion formulas for the representation of the inverse of non-Toeplitz matrices are also presented.

The concepts of tensors with diagonal and circulant structure are defined and a framework is developed for the analysis of such tensors. It is shown that a tensor of arbitrary order, which is circulant with respect to two particular modes, can be diagonalized in those modes by discrete Fourier transforms. This property can be used in the efficient solution of linear systems involving contractive products of tensors with circulant structure. Tensors with circulant structure occur in models for image blurring with periodic boundary conditions. It is shown that the new framework can be applied to such problems.
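The matrix (two-mode) special case of this diagonalization property is easy to verify numerically. The sketch below, with made-up data, checks that the DFT diagonalizes a circulant matrix (its eigenvalues are the DFT of the first column) and then uses that to solve a circulant linear system with FFTs, the mechanism behind the efficient solution claimed above:

```python
import numpy as np

# First column of a circulant matrix C, with C[i, j] = c[(i - j) % n].
c = np.array([4.0, 1.0, 2.0, 3.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# The DFT matrix diagonalizes C; the eigenvalues are the DFT of c.
F = np.fft.fft(np.eye(n))
eigs = np.fft.fft(c)
D = F @ C @ np.linalg.inv(F)    # diagonal matrix with eigs on the diagonal

# Hence C x = b is solved with one FFT, one division, and one inverse FFT.
b = np.array([1.0, 0.0, 0.0, 1.0])
x = np.fft.ifft(np.fft.fft(b) / eigs).real
```

Deblurring with periodic boundary conditions exploits exactly this structure: the blurring operator is circulant, so applying and inverting it costs only FFTs.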

Several Krylov-type procedures are introduced that generalize matrix Krylov methods to tensor computations. They are denoted minimal Krylov recursion, maximal Krylov recursion, and contracted tensor product Krylov recursion. It is proved that, for a given tensor A with multilinear rank-(p, q, r), the minimal Krylov recursion extracts the correct subspaces associated with the tensor in p+q+r tensor-vector-vector multiplications. An optimized minimal Krylov procedure is described that, for a given multilinear rank of an approximation, produces a better approximation than the standard minimal recursion. We further generalize the matrix Krylov decomposition to a tensor Krylov decomposition. The tensor Krylov methods are intended for the computation of low multilinear rank approximations of large and sparse tensors, but they are also useful for certain dense and structured tensors for computing their higher order singular value decompositions or obtaining starting points for the best low-rank computations of tensors. A set of numerical experiments, using real-world and synthetic data sets, illustrates some of the properties of the tensor Krylov methods.
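To make "tensor-vector-vector multiplication" concrete, the following is a schematic sketch only, not the authors' exact minimal Krylov recursion: a third-order tensor is contracted with one vector in each of two modes, and the resulting vectors are orthogonalized to grow one basis per mode.

```python
import numpy as np

def tvv(A, mode, x, y):
    """Tensor-vector-vector multiplication: contract the 3-way tensor A
    with vectors in the two modes other than `mode`, e.g. mode=0 gives
    u_i = sum_{j,k} A[i, j, k] x_j y_k."""
    subs = {0: 'ijk,j,k->i', 1: 'ijk,i,k->j', 2: 'ijk,i,j->k'}
    return np.einsum(subs[mode], A, x, y)

def orth_append(Q, q):
    """Orthogonalize q against the columns of Q (two Gram-Schmidt passes
    for numerical safety), normalize, and append as a new column."""
    for _ in range(2):
        if Q.shape[1] > 0:
            q = q - Q @ (Q.T @ q)
    q = q / np.linalg.norm(q)
    return np.hstack([Q, q[:, None]])

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 5, 4))
v = rng.standard_normal(5); v /= np.linalg.norm(v)   # mode-2 start vector
w = rng.standard_normal(4); w /= np.linalg.norm(w)   # mode-3 start vector
U, V, W = np.zeros((6, 0)), v[:, None], w[:, None]

for step in range(3):   # each sweep costs three tensor-vector-vector products
    U = orth_append(U, tvv(A, 0, V[:, -1], W[:, -1]))
    V = orth_append(V, tvv(A, 1, U[:, -1], W[:, -1]))
    W = orth_append(W, tvv(A, 2, U[:, -1], V[:, -1]))
```

The point of the sketch is the cost model: each sweep enlarges all three subspaces at the price of a few contractions, which is what makes the approach attractive for large sparse tensors.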

Matematiska Institutionen, Kungliga Tekniska Högskolan, Stockholm, Sweden.

Ullemar’s formula for the moment map. II. 2005. In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 404, p. 380-388. Article in journal (Refereed)

We prove the complex analogue of Ullemar’s formula for the Jacobian of the complex moment mapping. This formula was previously established in the real case.