We formulate a block-iterative algorithmic scheme for the solution of systems of linear inequalities and/or equations and analyze its convergence. This study provides as special cases proofs of convergence of (i) the recently proposed component averaging (CAV) method of Censor, Gordon, and Gordon [Parallel Comput., 27 (2001), pp. 777-808], (ii) the recently proposed block-iterative CAV (BICAV) method of the same authors [IEEE Trans. Medical Imaging, 20 (2001), pp. 1050-1060], and (iii) the simultaneous algebraic reconstruction technique (SART) of Andersen and Kak [Ultrasonic Imaging, 6 (1984), pp. 81-94], and generalizes them to linear inequalities. The first two algorithms are projection algorithms which use certain generalized oblique projections and diagonal weighting matrices that reflect the sparsity of the underlying matrix of the linear system. The previously reported experimental acceleration of the initial behavior of CAV and BICAV is thus complemented here by a mathematical study of the convergence of the algorithms.
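To give the flavor of such a simultaneous, sparsity-weighted scheme, here is a minimal dense-matrix sketch of one CAV-style iteration; the function name, the toy system, and the relaxation parameter `lam` are ours, and a production implementation would of course exploit sparse storage:

```python
import numpy as np

def cav_step(A, b, x, lam=1.0):
    """One component-averaging (CAV) style iteration, sketched for a dense
    matrix with no zero rows.  Each row residual is divided by the
    sparsity-weighted row norm sum_j s_j * A[i,j]**2, where s_j counts the
    nonzeros in column j, instead of the plain row norm ||a_i||^2."""
    s = np.count_nonzero(A, axis=0)   # nonzeros per column
    w = (A ** 2) @ s                  # sparsity-weighted row norms
    r = (b - A @ x) / w               # weighted residuals
    return x + lam * A.T @ r          # simultaneous update of all components

# illustrative toy system (not from the paper) with solution (1, 1)
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = np.zeros(2)
for _ in range(200):
    x = cav_step(A, b, x)
```

For a fully dense matrix all s_j equal the number of rows and the weights reduce to scaled ordinary row norms; the sparsity weighting is what produces the reported acceleration on sparse systems.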
A definition of oblique projections onto closed convex sets is introduced that uses seminorms induced by diagonal matrices which may have zeros on the diagonal. Existence and uniqueness of such projections are secured via directional affinity of the sets with respect to the diagonal matrices involved. A block-iterative algorithmic scheme for solving the convex feasibility problem, employing seminorm-induced oblique projections, is constructed, and its convergence for the consistent case is established. The fully simultaneous algorithm also converges in the inconsistent case, to the minimum of a certain proximity function.
We propose and study a block-iterative projection method for solving linear equations and/or inequalities. The method allows diagonal componentwise relaxation in conjunction with orthogonal projections onto the individual hyperplanes of the system, and is thus called diagonally relaxed orthogonal projections (DROP). Diagonal relaxation has proven useful in accelerating the initial convergence of simultaneous and block-iterative projection algorithms, but until now it was available only in conjunction with generalized oblique projections in which there is a special relation between the weighting and the oblique projections. DROP has been used by practitioners, and in this paper a contribution to its convergence theory is provided. The mathematical analysis is complemented by some experiments in image reconstruction from projections which illustrate the performance of DROP.
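A minimal sketch of one DROP iteration, assuming a dense NumPy matrix: each row residual is normalized by the ordinary row norm (an orthogonal projection onto that row's hyperplane), and the averaged update is relaxed componentwise by 1/s_j, where s_j is the number of nonzeros in column j. The toy system and names are ours:

```python
import numpy as np

def drop_step(A, b, x, lam=1.0):
    """One DROP iteration (sketch): orthogonal projections onto the row
    hyperplanes, combined with the diagonal relaxation diag(1/s_j)."""
    s = np.count_nonzero(A, axis=0).astype(float)  # s_j per column
    row_norms = np.sum(A ** 2, axis=1)             # ||a_i||^2 per row
    r = (b - A @ x) / row_norms                    # orthogonal-projection residuals
    return x + lam * (A.T @ r) / s                 # diagonally relaxed update

# illustrative toy system (not from the paper) with solution (1, 1)
A = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 3.0, 1.0])
x = np.zeros(2)
for _ in range(300):
    x = drop_step(A, b, x)
```

Note the contrast with CAV: there the sparsity weights sit inside the residual normalization (oblique projections), while DROP keeps the projections orthogonal and moves the weighting into a separate diagonal relaxation.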
We study the solution of consistent, semidefinite, symmetric linear systems by iterative techniques. Given a finite sequence of subspaces, a block-iterative projection-type algorithm is considered. For two specific choices of iteration parameters we show convergence. We apply our results to over- and underdetermined linear equations. These methods are based on decomposing the system matrix into blocks of rows or blocks of columns. Thereby, several algorithms, many of them used in image reconstruction, are presented in a unified way.
Kaczmarz's method, sometimes referred to as the algebraic reconstruction technique, is an iterative method that is widely used in tomographic imaging due to its favorable semi-convergence properties. Specifically, when applied to a problem with noisy data, during the early iterations it converges very quickly toward a good approximation of the exact solution, and thus produces a regularized solution. While this property is generally accepted and utilized, there is surprisingly little theoretical justification for it. The purpose of this paper is to present insight into the semi-convergence of Kaczmarz's method as well as its projected counterpart (and their block versions). To do this we study how the data errors propagate into the iteration vectors, and we derive upper bounds for this noise propagation. Our bounds are compared with numerical results obtained from tomographic imaging.
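For reference, the classic Kaczmarz sweep projects the iterate onto one row hyperplane <a_i, x> = b_i at a time, cycling through the rows; a minimal sketch (toy system and names ours):

```python
import numpy as np

def kaczmarz(A, b, n_iter=400):
    """Cyclic Kaczmarz iteration: at step k, orthogonally project the
    current iterate onto the hyperplane <a_i, x> = b_i for i = k mod m."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A ** 2, axis=1)  # ||a_i||^2, assumed nonzero
    for k in range(n_iter):
        i = k % m                       # cyclic row selection
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# illustrative consistent 2x2 system with solution (1, 1)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)
```

With noisy data the same loop exhibits the semi-convergence discussed above: the iteration error first decreases and then grows again as the noise propagates, which is why a stopping index acts as the regularization parameter.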
We give a detailed study of the semiconvergence behavior of projected nonstationary simultaneous iterative reconstruction technique (SIRT) algorithms, including the projected Landweber algorithm. We also consider the use of a relaxation parameter strategy, proposed recently for the standard algorithms, for controlling the semiconvergence of the projected algorithms. We demonstrate the semiconvergence and the performance of our strategies by examples taken from tomographic imaging.
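As a concrete instance of the projected algorithms studied here, the projected Landweber iteration alternates a gradient step on ||Ax - b||^2 with projection onto the constraint set; the sketch below uses the nonnegative orthant, a common constraint in imaging (step length choice and toy data are ours):

```python
import numpy as np

def projected_landweber(A, b, n_iter=500):
    """Projected Landweber (sketch): Landweber step followed by projection
    onto the nonnegative orthant {x : x >= 0}."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2      # step within 1/sigma_max^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x + lam * A.T @ (b - A @ x), 0.0)
    return x

# illustrative system whose unconstrained solution (1, 1) is feasible
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([2.0, 0.0])
x = projected_landweber(A, b)
```

A nonstationary variant replaces the fixed `lam` by an iteration-dependent relaxation parameter lambda_k, which is exactly the handle the paper's strategies use to control semiconvergence.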
We study a class of block-iterative (BI) methods proposed in image reconstruction for solving linear systems. A subclass, symmetric block-iteration (SBI), is derived such that for this subclass both the semi-convergence analysis and the stopping rules developed for fully simultaneous iteration apply. Results on asymptotic convergence are also given; e.g., BI methods exhibit cyclic convergence irrespective of the consistency of the linear system. Further, it is shown that the limit points of SBI satisfy a weighted least-squares problem. We also present numerical results obtained using a trained stopping rule on SBI.
This book contains papers presented by leading experts at the "Interdisciplinary Workshop on Mathematical Methods in Biomedical Imaging and Intensity-Modulated Radiation Therapy (IMRT)", held at the Centro di Ricerca Matematica (CRM) Ennio De Giorgi in Pisa, Italy, from October 15 to 19, 2007. The interdisciplinary volume consists of research and review papers by leading experts and practitioners in biomedical imaging and intensity-modulated radiation therapy (IMRT).
We describe a class of stopping rules for Landweber-type iterations for solving linear inverse problems. The class includes both the discrepancy principle (DP rule) and the monotone error rule (ME rule). We also unify the error analysis of the two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced for securing robustness. The advantages of using a trained rule are demonstrated on examples taken from image reconstruction from projections. After training, the stopping rules became quite robust, and only small differences were observed between, e.g., the DP rule and the ME rule.
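The DP rule admits a very compact description: stop the iteration as soon as the residual norm drops to the noise level, scaled by the rule's parameter tau. A minimal sketch wrapped around a plain Landweber iteration (the toy problem, tau default, and names are ours):

```python
import numpy as np

def landweber_with_dp(A, b, delta, tau=1.02, max_iter=10000):
    """Landweber iteration stopped by the discrepancy principle (DP rule):
    terminate once ||b - A x_k|| <= tau * delta, where delta bounds the
    norm of the noise in b and tau > 1 is the rule's parameter."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:   # DP rule fires
            return x, k
        x = x + lam * A.T @ r
    return x, max_iter

# illustrative noisy problem (not from the paper)
A = np.array([[1.0, 0.5], [0.0, 1.0], [0.5, 0.5]])
x_true = np.array([1.0, 1.0])
noise = np.array([0.01, -0.02, 0.01])
b = A @ x_true + noise
delta = np.linalg.norm(noise)
x, k = landweber_with_dp(A, b, delta)
```

The sensitivity to tau (and to the corresponding parameter of the ME rule) is precisely what the training procedure described above is meant to tame.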
This paper is concerned with the Simultaneous Iterative Reconstruction Technique (SIRT) class of iterative methods for solving inverse problems. Based on a careful analysis of the semi-convergence behavior of these methods, we propose two new techniques to specify the relaxation parameters adaptively during the iterations, so as to control the propagated noise component of the error. The advantage of using this strategy for the choice of relaxation parameters on noisy and ill-conditioned problems is demonstrated with an example from tomography.
This book brings together 27 state-of-the-art research and review papers by leading experts and practitioners in mathematical methods in biomedical imaging, in intensity-modulated radiation therapy (IMRT), and in optimization and inverse problems. These papers were presented at the Huangguoshu International Interdisciplinary Conference on Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning, and Inverse Problems, held November 3-9, 2008, in China. The emphasis is on trying to discover relations and connections between these fields that will enhance progress in each of them. As this volume shows, applicable mathematical work in these fields goes hand-in-hand with real-world applications, and the mutual technology transfers between them lead to further progress. The topics covered here include mathematical aspects and practical problems in current major and emerging technologies in diagnostic and therapeutic medicine and biology research. The contributed work signifies the interdisciplinary cooperation between mathematicians and scientists from medical physics, engineering, clinical medicine, and biology that leads to better, mathematically based solutions of practical problems in biomedical imaging and IMRT.
We consider a linear system of the form A_{1}x_{1} + A_{2}x_{2} + η = b_{1}. The vector η consists of independent and identically distributed random variables, all with mean zero. The unknowns are split into two groups, x_{1} and x_{2}. It is assumed that A_{1} has full rank and is easy to invert. In this model there are usually more unknowns than observations, and the resulting linear system is most often consistent, having an infinite number of solutions. Hence, some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x_{2}. This can be accomplished by regularizing using a matrix A_{3} which is a discretization of some norm (e.g., a Sobolev space norm). We formulate the problem as a partially regularized least-squares problem and use the conjugate gradient method for its solution. Using the special structure of the problem, we suggest and analyse block preconditioners of Schur complement type. We demonstrate their effectiveness in some numerical tests. The test examples are taken from an application in modelling of substance transport in rivers.
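The partially regularized least-squares problem can be written down concretely as min ||A_{1}x_{1} + A_{2}x_{2} - b_{1}||^2 + mu ||A_{3}x_{2}||^2. The sketch below (our notation, dense matrices, a single parameter mu) simply forms the stacked system and solves it directly; it illustrates the problem being solved, not the preconditioned conjugate gradient method analyzed in the paper:

```python
import numpy as np

def partially_regularized_ls(A1, A2, A3, b, mu):
    """Solve min ||A1 x1 + A2 x2 - b||^2 + mu * ||A3 x2||^2 by stacking
    the regularization block (only x2 is penalized) and calling a dense
    least-squares solver."""
    n1, p = A1.shape[1], A3.shape[0]
    K = np.vstack([np.hstack([A1, A2]),
                   np.hstack([np.zeros((p, n1)), np.sqrt(mu) * A3])])
    rhs = np.concatenate([b, np.zeros(p)])
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]
    return sol[:n1], sol[n1:]

# tiny illustrative problem: minimize (x1-2)^2 + (x2-3)^2 + mu*x2^2
A1 = np.array([[1.0], [0.0]])
A2 = np.array([[0.0], [1.0]])
A3 = np.array([[1.0]])
b = np.array([2.0, 3.0])
x1, x2 = partially_regularized_ls(A1, A2, A3, b, mu=1.0)
```

With mu = 1 the closed-form minimizer is x1 = 2 and x2 = 3/(1 + mu) = 1.5: only the regularized group x2 is shrunk, which is the point of penalizing the two groups differently.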
We consider a linear system of the form A_{1}x_{1} + A_{2}x_{2} + η = b_{1}. The vector η consists of identically distributed random variables, all with mean zero. The unknowns are split into two groups, x_{1} and x_{2}. In the model there are usually more unknowns than observations, and the resulting linear system is most often consistent, having an infinite number of solutions. Hence some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x_{2}. We formulate the problem as a partially regularized least-squares problem and propose a direct solution method based on the QR decomposition of matrix blocks. Further, we consider regularizing using one and two regularization parameters, respectively. We also discuss the choice of regularization parameters and extend Reinsch's method to the case with two parameters. The cross-validation technique is also treated. We present test examples taken from an application in modelling of substance transport in rivers.
This paper brings together a novel information representation model for use in signal processing and computer vision problems with a particular algorithmic development of the Landweber iterative algorithm. The information representation model allows a representation of multiple values for a variable as well as an expression of confidence. Both properties are important for effective computation using multi-level models, where a choice between models will be implementable as part of the optimization process. It is shown that in this way the algorithm can deal with a class of high-dimensional, sparse, and constrained least-squares problems which arise in various computer vision learning tasks, such as object recognition and object pose estimation. While the algorithm has been applied to the solution of such problems, it has so far been used heuristically. In this paper we describe the properties and some of the peculiarities of the channel representation and optimization, and put them on firm mathematical ground. We consider the optimization as a convexly constrained weighted least-squares problem and propose for its solution a projected Landweber method which employs oblique projections onto the closed convex constraint set. We formulate the problem, present the algorithm, and work out its convergence properties, including a rate-of-convergence result. The results are put in perspective with currently available projected Landweber methods. An application to supervised learning is described, and the method is evaluated in an experiment involving function approximation, as well as an application to transient signals.
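To make the notion of an oblique projection concrete: for a diagonal weight matrix D = diag(d) with positive entries, the projection of x onto the half-space {y : <a, y> <= beta} in the norm ||y||_D = sqrt(y^T D y) has a closed form, moving along D^{-1}a instead of along a. The function name and toy data below are ours:

```python
import numpy as np

def oblique_halfspace_projection(x, a, beta, d):
    """Oblique projection of x onto {y : <a, y> <= beta} with respect to
    the weighted norm ||y||_D, D = diag(d) with d > 0 (sketch):
    y = x - (<a, x> - beta)_+ / (a^T D^{-1} a) * D^{-1} a."""
    viol = a @ x - beta
    if viol <= 0.0:
        return x.copy()                 # already feasible: nothing to do
    step = viol / (a @ (a / d))         # a^T D^{-1} a in the denominator
    return x - step * (a / d)           # move along D^{-1} a to the boundary

# toy example: unequal weights pull the projection toward cheap coordinates
x = np.array([2.0, 2.0])
a = np.array([1.0, 1.0])
p = oblique_halfspace_projection(x, a, beta=2.0, d=np.array([1.0, 4.0]))
```

With equal weights this reduces to the ordinary orthogonal projection; with d = (1, 4) the projected point lands on the boundary <a, p> = beta but moves mostly in the first, lightly weighted coordinate.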
This report brings together a novel approach to some computer vision problems and a particular algorithmic development of the Landweber iterative algorithm. The algorithm solves a class of high-dimensional, sparse, and constrained least-squares problems which arise in various computer vision learning tasks, such as object recognition and object pose estimation. The algorithm has recently been applied to these problems, but it has been used rather heuristically. In this report we describe the method and put it on firm mathematical ground. We consider a convexly constrained weighted least-squares problem and propose for its solution a projected Landweber method which employs oblique projections onto the closed convex constraint set. We formulate the problem, present the algorithm, and work out its convergence properties, including a rate-of-convergence result. The results are put in perspective with currently available projected Landweber methods. The application to supervised learning is described, and the method is evaluated in a function approximation experiment.