This largely tutorial treatise presents a Fourier-based model for 2D projection, a key ingredient in any iterative reconstruction method. For sampled images the model requires an assumed basis function, which implicitly defines the necessary window and interpolation functions. We unravel the basis and window functions for some projection techniques described as procedures. Circularly symmetric basis functions make it simple to find interpolation coefficients but require well-tuned interpolation functions to avoid aliasing. We find it unnecessary to distinguish between voxel-driven and ray-driven projection: these two techniques concern only the innermost loop, and both can be applied to any interpolation function, and to projection and back-projection alike.
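As a concrete illustration of the last point, here is a minimal ray-driven parallel-beam projector in Python, assuming a bilinear (order-1) interpolation function; the sampling density and geometry conventions are illustrative, not those of the treatise.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def forward_project(image, theta, t):
    """Ray-driven parallel-beam projection of a square 2D image (sketch).

    For each detector position in t, the image is sampled along the ray
    at angle theta (radians) with bilinear (order-1) interpolation and
    the samples are summed into a line integral.
    """
    n = image.shape[0]
    s = np.linspace(-n / 2, n / 2, 2 * n)          # parameter along the ray
    ds = s[1] - s[0]
    t = np.atleast_1d(t).astype(float)
    # Image-centred coordinates: point = t * (cos, sin) + s * (-sin, cos).
    x = t * np.cos(theta) - s[:, None] * np.sin(theta) + n / 2
    y = t * np.sin(theta) + s[:, None] * np.cos(theta) + n / 2
    vals = map_coordinates(image, [y.ravel(), x.ravel()], order=1)
    return vals.reshape(len(s), -1).sum(axis=0) * ds
```

A voxel-driven projector traverses the image grid instead of the rays, but as stated above this changes only the innermost loop; the same interpolation function can serve projection and back-projection alike.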
Most contemporary CT systems employ non-exact reconstruction methods. This treatise reports on how these methods can be transformed from non-exact into exact reconstruction methods by means of iterative post-processing. Compared to traditional algebraic reconstruction techniques (ART), we expect much faster convergence (in theory quadratic), due to a much improved first guess and the fact that each iteration includes the same non-exact analytical reconstruction step as the first guess.
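One plausible formalisation of this scheme (our notation, not taken from the report): with P the forward projection operator and A the non-exact analytical reconstruction, the first guess and the iteration read

```latex
f_0 = A\,p, \qquad f_{k+1} = f_k + A\,(p - P f_k), \qquad k = 0, 1, 2, \ldots
```

so every iteration reuses the same analytical step A, applied to the projection residual; the closer A P is to the identity, the faster the residual shrinks.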
Next-generation helical cone-beam CT will feature pitches around 80 mm. It is predicted that the reconstruction algorithms used in these machines, with still rather modest cone angles, need not be exact, but will rather emphasise simplicity and speed. The PI-methods are a family of non-exact algorithms, all of which are based on complete data capture with a detector collimated to the Tam window, followed by rebinning to obliquely parallel ray geometry. The non-exactness is identified as inconsistency in the space-invariant one-dimensional ramp-filtering step. It is shown that this inconsistency can be reduced, resulting in significantly improved image quality and increased tolerance for higher pitch and cone angle. A short theoretical background for the PI-methods is given, but the algorithms themselves are not described in detail. A set of experiments on mathematical phantoms illustrates (among other things) how the amount of artefacts grows with increasing cone angle.
This paper presents novel results from an ongoing feasibility study of fully 3D X-ray scanning of Pinus sylvestris (Scots pine) logs. Logs are assumed to be translated through two identical and static cone-beam systems, with the beams rotated 90 degrees relative to each other, providing a dual set of 2D projections. For reasons of both cost and speed, each 2D detector in these two systems consists of a limited number of line detectors. The quality of the reconstructed images is far from perfect, due to sparse detector data and missing projection angles. In spite of this, we show that by employing a shape- and direction-discriminative technique based on second derivatives, we are able to enhance knot-like features in these data. In the enhanced images it is then possible to detect and localize the pith for each whorl of knots, and subsequently also to perform a full segmentation of the knots in the heartwood.
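For illustration, a second-derivative response of the kind used for such shape and direction discrimination can be sketched as follows (2D case for brevity; the scale parameter is an illustrative assumption, not a value from the study):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues_2d(image, sigma=2.0):
    """Eigenvalue fields of the Gaussian-smoothed Hessian (sketch).

    The eigenvalues of the Hessian of second derivatives discriminate
    blob-like from line-like structures at scale sigma.
    """
    Ixx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2
    Iyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2
    Ixy = gaussian_filter(image, sigma, order=(1, 1))  # d2/dxdy
    mean = (Ixx + Iyy) / 2
    delta = np.sqrt(((Ixx - Iyy) / 2) ** 2 + Ixy ** 2)
    return mean + delta, mean - delta
```

For bright blob-like structures both eigenvalues are strongly negative, which is one standard way to isolate knot-like features; the eigenvectors additionally carry the local direction.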
Advanced model-based iterative reconstruction algorithms in quantitative computed tomography (CT) perform automatic segmentation of tissues to estimate material properties of the imaged object. Compared with conventional methods, these algorithms may improve the quality of reconstructed images and the accuracy of radiation treatment planning. Automatic segmentation of tissues is, however, a difficult task. The aim of this work was to develop and evaluate an algorithm that automatically segments tissues in CT images of the male pelvis. The newly developed algorithm (MK2014) combines histogram matching, thresholding, region growing, deformable model and atlas-based registration techniques for the segmentation of bones, adipose tissue, prostate and muscles in CT images. Visual inspection of segmented images showed that the algorithm performed well for the five analysed images. The tissues were identified and outlined with an accuracy sufficient for the dual-energy iterative reconstruction algorithm, whose aim is to improve the accuracy of radiation treatment planning in brachytherapy of the prostate.
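As a minimal illustration of two of the listed ingredients, the sketch below combines thresholding with connected-component grouping (a simple stand-in for the region-growing step); the threshold and size parameters are invented for the example and are not those of MK2014.

```python
import numpy as np
from scipy import ndimage

def threshold_and_group(ct_image, threshold_hu=200, min_voxels=500):
    """Thresholding followed by connected-component grouping (sketch).

    Voxels above threshold_hu are grouped into connected components and
    small spurious regions are discarded; parameters are illustrative.
    """
    mask = ct_image > threshold_hu                     # thresholding
    labels, n = ndimage.label(mask)                    # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1)) # voxels per component
    keep_ids = np.flatnonzero(sizes >= min_voxels) + 1
    return np.isin(labels, keep_ids)                   # cleaned binary mask
```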
The paper presents a method for projection generation through a 2-D pixel image or a 3-D voxel volume. During the design of the method, we have strived to apply knowledge from signal processing theory. Introductory experiments, where the projection generation method was used in an iterative CT reconstruction loop, indicate that the method is sound. Our hope is that the method can be applied in many different contexts where one task is to compute projections through a 2-D pixel image or a 3-D voxel volume. In the future we plan to do more experiments, both in 2-D and 3-D, which will hopefully further demonstrate the usefulness of the method.
We will present the basic theory of camera geometry. Our goal is camera calibration and the tools necessary for it. We start with homogeneous matrices, which can be used to describe geometric transformations in a simple manner. Then we consider the pinhole camera model, a simplified camera model that we will later show how to calibrate.
A camera matrix describes the mapping from the 3D world to a camera image. The camera matrix can be determined from a number of corresponding points measured in the world and in the image. We also demonstrate the common special case of camera calibration where the world can be assumed to be flat. Then a plane in the world is mapped to the image plane. Such a plane-to-plane mapping is called a homography.
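A small Python sketch of the plane-to-plane case: mapping points through a 3x3 homography H using homogeneous coordinates (array conventions assumed for illustration, not taken from the text).

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography via homogeneous coordinates.

    pts: (N, 2) array of inhomogeneous points in the source plane.
    Returns the (N, 2) mapped points in the target plane.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = pts_h @ H.T                              # apply H
    return mapped[:, :2] / mapped[:, 2:3]             # back to inhomogeneous
```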
Finally, we discuss some useful mathematical tools needed for camera calibration. We show that the solution we present for the determination of the camera matrix is equivalent to a least-squares solution. We also show how to solve a homogeneous system of equations using SVD (singular value decomposition).
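The SVD recipe can be stated compactly: the least-squares solution of the homogeneous system Ax = 0 under the constraint ||x|| = 1 is the right singular vector belonging to the smallest singular value. A minimal NumPy sketch:

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A x = 0 subject to ||x|| = 1.

    The minimiser of ||A x|| on the unit sphere is the right singular
    vector of A associated with its smallest singular value.
    """
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]          # last row of V^T = last column of V
```

In calibration, A stacks the equations contributed by the point correspondences, and x holds the entries of the camera matrix or homography.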
Quantitative dual-energy computed tomography may improve the accuracy of treatment planning in radiation therapy. Of special interest are algorithms that can estimate the material composition of the imaged object. One example of such an algorithm is the 2D model-based iterative reconstruction algorithm DIRA. The aim of this work is to extend this algorithm to 3D so that it can be used with cone beams and helical scanning. In the new algorithm, the parallel FBP method was replaced with the approximate 3D FBP-based PI-method. Its performance was tested using a mathematical phantom consisting of six ellipsoids. The algorithm substantially reduced the beam-hardening artefact and the artefacts caused by the approximate reconstruction after six iterations. Compared to the Alvarez-Macovski base material decomposition, DIRA-3D does not require geometrically consistent projections and can hence be used in dual-source CT scanners. Also, it can use several tissue-specific material bases at the same time to represent the imaged object.
Radial sampling of k-space is known to simultaneously provide both high spatial and high temporal resolution. Recently, an optimal radial profile time order based on the Golden Ratio was presented in [1]. We have adopted and modified the idea, with a focus on higher temporal resolution without sacrificing any image quality.
[1] Winkelmann et al.: An optimal radial profile order based on the golden ratio for time-resolved MRI, IEEE Trans. Med. Imag., vol. 26, no. 1, 2007.
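The profile order of [1] can be sketched in a few lines: successive profiles are rotated by the golden angle 180°/φ ≈ 111.25°, which keeps the profile angles approximately uniformly distributed over 180° for any number of acquired profiles.

```python
import numpy as np

GOLDEN_RATIO = (1 + np.sqrt(5)) / 2     # ~1.618
GOLDEN_ANGLE = 180.0 / GOLDEN_RATIO     # ~111.246 degrees

def golden_profile_angles(n_profiles):
    """Radial profile angles (degrees) in golden-ratio time order [1]."""
    n = np.arange(n_profiles)
    return np.mod(n * GOLDEN_ANGLE, 180.0)
```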
We have suggested PRESTO-CAN, a novel method including radial sampling, filtering and reconstruction of k-space data for 3D-plus-time-resolved MRI. The angular increment of the profiles was based on the golden ratio, but the number of angular positions N was locked to a prime number, which guaranteed fixed angle positions. The time resolution increased dramatically when the profiles were partly removed from k-space using the hourglass filter. We aim to use the MRI data for fMRI, where the echo times are long, TE ≈ 37-40 ms. This results in field inhomogeneities and phase variations in the reconstructed images. Therefore, a new calibration and correction procedure was developed. We show that we are able to reconstruct images of the human brain with an image quality in line with what can be obtained by conventional Cartesian sampling. The pulse sequence for PRESTO-CAN was implemented by modifying an existing PRESTO sequence for Cartesian sampling. The effort involved was relatively small, and a great advantage is that we are able to use standard procedures for speeding up data acquisition, i.e. parallel imaging with SENSE.
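A sketch of the locked-angle idea (details assumed here for illustration): rounding the golden-ratio increment to an integer step on a grid of N fixed positions keeps the angles reusable over time, and a prime N guarantees that the step is coprime with N, so every position is visited once per cycle.

```python
import numpy as np

def locked_golden_order(N=89):
    """Profile index order on N fixed angular positions (sketch).

    The integer step approximates the golden-ratio increment; with N
    prime, gcd(step, N) = 1, so (k * step) % N is a permutation of
    0..N-1 and all angles are visited once per cycle.
    """
    golden = (1 + np.sqrt(5)) / 2
    step = round(N / golden)       # e.g. 55 for N = 89
    k = np.arange(N)
    return (k * step) % N
```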
In medical helical cone-beam CT, it is common that the region-of-interest (ROI) is contained inside the helix cylinder, while the complete object is long and extends outside the top and the bottom of the cylinder. This is the Long Object Problem. Analytical reconstruction methods for helical cone-beam CT have been designed to handle this problem. It has been shown that a moderate amount of over-scanning is sufficient for reconstruction of a certain ROI. The over-scanning projection rays travel both through the ROI and outside it. This is unfortunate for iterative methods, since it seems impossible to compute accurate values for the projection rays which travel partly inside and partly outside the ROI. Therefore, it seems that the useful ROI will diminish with every iteration step. We propose the following solution to the problem. Firstly, we reconstruct volume regions also outside the ROI. These volume regions will certainly be incompletely reconstructed, but our experimental results show that they serve well for projection generation. This is rather counter-intuitive and contradictory to our initial assumptions. Secondly, we use careful extrapolation and masking of projection data. This is not a general necessity, but needed for the chosen iterative algorithm, which includes rebinning and iterative filtered backprojection. Our idea here was to use an approximate reconstruction method which gives cone-beam artifacts and then improve the reconstructed result by iterative filtered backprojection. The experimental results seem very encouraging. The cone-beam artifacts can indeed be removed. Even voxels close to the boundary of the ROI are enhanced by the iterative loop as well as those in the middle of the ROI.
Contemporary analytical reconstruction methods for helical cone-beam CT have to be designed to handle the Long Object Problem. Normally, a moderate amount of over-scanning is sufficient for reconstruction of a certain region-of-interest (ROI). Unfortunately, for iterative methods, it seems that the useful ROI will diminish with every iteration step. The remedies proposed here are twofold. Firstly, we use careful extrapolation and masking of projection data. Secondly, we generate and utilize projection data from incompletely reconstructed volume parts, which is rather counter-intuitive and contradictory to our initial assumptions. The results seem very encouraging. Even voxels close to the boundary of the original ROI are enhanced by the iterative loop as well as those in the middle.
Quantitative tissue classification using dual-energy CT has the potential to improve accuracy in radiation therapy dose planning, as it provides more information about the material composition of scanned objects than the currently used methods based on single-energy CT. One problem that hinders successful application of both single- and dual-energy CT is the presence of beam-hardening and scatter artifacts in reconstructed data. Current pre- and post-correction methods used for image reconstruction often bias CT numbers and thus limit their applicability for quantitative tissue classification. Here we present simulation studies with a novel iterative algorithm that decomposes every soft-tissue voxel into three base materials: water, protein and adipose. The results demonstrate that beam-hardening artifacts can be effectively removed and that accurate estimation of the mass fractions of all base materials can be achieved. In the future, the algorithm may be developed further to include segmentation of soft and bone tissue and subsequent bone decomposition, extension from 2-D to 3-D, and scatter correction.
A common wish in non-destructive testing is to investigate a large object with a small interesting detail inside. Due to practical circumstances, the projections may sometimes be truncated. According to the theory of tomography, it is then impossible to reconstruct the object exactly. However, it is sometimes possible to obtain an approximate result. It turns out that the key point is how the ramp-filter is implemented. The quality of the result depends on the object itself. We show one successful experiment on real data: linear cone-beam tomography of logs. We also show experiments on the Shepp-Logan phantom, well known from medical CT, and discuss the varying reconstruction quality.
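To make the key point concrete, below is the standard band-limited spatial-domain ramp kernel (as given in Kak & Slaney, detector spacing 1, scaling constants omitted); how truncated projections are extended before this convolution is exactly where implementations differ.

```python
import numpy as np

def ramp_kernel(n_det):
    """Band-limited ramp-filter kernel h[k] for k = -(n-1)..(n-1)."""
    k = np.arange(-n_det + 1, n_det)
    h = np.zeros(k.shape, dtype=float)
    h[k == 0] = 0.25                        # h(0) = 1/4
    odd = (k % 2) == 1
    h[odd] = -1.0 / (np.pi * k[odd]) ** 2   # h(k) = -1/(pi k)^2, k odd
    return h

def ramp_filter(projection):
    """Filter one projection row; zero-extension outside the detector."""
    n = len(projection)
    full = np.convolve(projection, ramp_kernel(n))  # length 3n - 2
    return full[n - 1 : 2 * n - 1]                  # centred part, length n
```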
Dosimetric accuracy of radiation treatment planning in brachytherapy depends on knowledge of tissue composition. It has been speculated that soft tissues can be decomposed into water, lipid and protein. The aim of our work is to evaluate the accuracy of such a tissue decomposition. Selected abdominal soft tissues, whose average elemental compositions were taken from the literature, were decomposed into water, lipid and protein via the three-material decomposition method using dual-energy computed tomography. The quality of the decomposition was assessed using the relative differences between (i) mass energy absorption coefficients and (ii) mass attenuation coefficients of the analyzed and approximated tissues. It was found that the relative differences were less than 2% for photon energies larger than 10 keV. The differences were notably smaller than those for water as the transport and dose-scoring medium. The choice of the water, protein and lipid triplet resulted in negative elemental mass fractions for some analyzed tissues. As negative elemental mass fractions cannot be used in general-purpose particle transport computer codes using the Monte Carlo method, other triplets should be used for the decomposition. These triplets may further improve the accuracy of the approximation, as the differences were mainly caused by the lack of high-Z materials in the water, protein and lipid triplet.
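In one common formulation of three-material decomposition (assumed here for illustration), linear attenuation mixes linearly with volume fraction and the fractions sum to one, giving a 3x3 linear system per voxel; nothing constrains the solution to be non-negative, which is the effect noted above.

```python
import numpy as np

def three_material_decomposition(mu_e1, mu_e2, base_mu):
    """Base-material fractions from dual-energy measurements (sketch).

    mu_e1, mu_e2: measured linear attenuation at the two energies.
    base_mu: 2x3 array of base-material attenuation at those energies.
    Solves  sum_i f_i * mu_i(E) = mu(E)  together with  sum_i f_i = 1.
    """
    A = np.vstack([base_mu, np.ones(3)])   # 3x3 system matrix
    b = np.array([mu_e1, mu_e2, 1.0])
    return np.linalg.solve(A, b)           # fractions may turn out negative
```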
Purpose: To develop and evaluate, in a proof-of-concept configuration, a novel iterative reconstruction algorithm (DIRA) for quantitative determination of the elemental composition of patient tissues, for application to brachytherapy with low-energy (< 50 keV) photons and to proton therapy. Methods: DIRA was designed as a model-based iterative reconstruction algorithm which uses filtered backprojection, automatic segmentation and multi-material tissue decomposition. The evaluation was done for a phantom derived from the voxelized ICRP 110 male phantom. Soft tissues were decomposed into the lipid, protein and water triplet; bones were decomposed into the compact bone and bone marrow doublet. Projections were derived using the Drasim simulation code for an axial scanning configuration resembling a typical dual-energy CT (DECT) scanner with 80 kV and Sn140 kV x-ray spectra. The iterative loop produced monoenergetic images at 50 and 88 keV without beam-hardening artifacts. Different noise levels were considered: no noise, a typical noise level in diagnostic imaging, and a reduced noise level corresponding to tenfold higher doses. An uncertainty analysis of the results was performed using type A and type B evaluations, and the two approaches were compared. Results: Linear attenuation coefficients averaged over a region were obtained with relative errors less than 0.5% for all evaluated regions. Errors in the average mass fractions of the three-material decomposition were less than 0.04 for the no-noise and reduced-noise levels and less than 0.11 for the typical noise level. Mass fractions of individual pixels were strongly affected by noise, which slightly increased after the first iteration but subsequently stabilized. Estimates of uncertainties in mass fractions provided by the type B evaluation differed from the type A estimates by less than 1.5% in most cases. The algorithm was fast; the results converged after 5 iterations. The algorithmic complexity of the forward polyenergetic projection calculation was much reduced by the use of material doublets and triplets. Conclusions: The simulations indicated that DIRA is capable of determining the elemental composition of tissues, which is needed in brachytherapy with low-energy (< 50 keV) photons and in proton therapy. The algorithm provided quantitative monoenergetic images with beam-hardening artifacts removed. Its convergence was fast, image sharpness expressed via the modulation transfer function was maintained, and image noise did not increase with the number of iterations.
Better knowledge of the elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized, and work is ongoing to implement patient-specific radiation treatment protocols. A model-based iterative image reconstruction algorithm, DIRA, has been developed by the authors to automatically decompose patient tissues into two or three base components via dual-energy computed tomography. The performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with an accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of the linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate the surrounding soft tissues.
The effect of scatter on reconstructed image quality in cone-beam computed tomography was investigated, and a function which can be used in scatter-reduction optimisation tasks was tested. Projections were calculated using the Monte Carlo method in an axially symmetric cone-beam geometry consisting of a point source, a water phantom and a single row of detector elements. Image reconstruction was performed using the filtered backprojection method. Image quality was assessed by the L2-norm-based difference relative to a reference image derived from (1) weighted linear attenuation coefficients and (2) projections by primary photons. It was found that the former function was strongly affected by the beam-hardening artefact and did not properly reflect the amount of scatter, but the latter function increased with increasing beam width, was higher for the larger phantom, and exhibited properties which made it a good candidate for scatter-reduction optimisation tasks using polyenergetic beams.
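The quality measure itself is straightforward to state; a minimal sketch, assuming the difference is normalised by the reference:

```python
import numpy as np

def relative_l2_difference(image, reference):
    """L2-norm difference of a reconstruction relative to its reference."""
    return np.linalg.norm(image - reference) / np.linalg.norm(reference)
```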
The MATLAB/C program take version 3.1 is a program for simulation of X-ray projections from 3D volume data. It is based on an older C version by Muller-Merbach as well as an extended C version by Turbell. The program can simulate 2D X-ray projections of 3D objects; these data can then be input to 3D reconstruction algorithms. Here, however, we only demonstrate a couple of 2D reconstruction algorithms, written in MATLAB. Simple MATLAB examples show how to generate the take projections followed by subsequent reconstruction. Compared to the old take version, the C code has been carefully revised. A preliminary, rather untested feature for a polychromatic X-ray source with different energy levels was already included in the old take version; the current polychromatic feature, however, has been carefully tested. For example, it has been compared with the results from the program described by Malusek et al. We also demonstrate experiments with a polychromatic X-ray source and a Plexiglass object giving the beam-hardening artefact. Detector sensitivity for different energy levels is not included in take; however, in the section on the real-data experiment we describe a technique to include the detector sensitivity in the energy spectrum. Finally, an experiment comparing real and simulated data was performed. The result was not completely successful, but we demonstrate it nevertheless.
Contemporary reconstruction for helical cone-beam CT is mostly based on non-exact algorithms, which produce more or less unacceptable artifacts for cone angles above a certain limit. We report on attempts to extend the applicability of these algorithms to higher cone angles by suppressing artifacts by means of iterative post-processing. The iterative loop includes a ramp-filtering step before back-projection, which promotes fast convergence. The scheme has been applied to the original PI-method as well as to Siemens' AMPR and WFBP methods. Using ordered subsets in the iterative loop for WFBP, we achieved almost spotless images in one single iteration for cone angles of ±9 degrees.
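The loop structure can be sketched as follows; project and ramp_backproject are placeholders for the forward projector and the ramp-filtered back-projection of whichever non-exact method is used (PI, AMPR or WFBP), and the subset scheme is illustrative.

```python
import numpy as np

def iterative_fbp(f0, p, project, ramp_backproject, n_subsets=8, n_iter=1):
    """Ordered-subsets iterative filtered back-projection (sketch).

    project(f, subset)          -> projections of volume f for the subset
    ramp_backproject(q, subset) -> ramp-filtered back-projection of data q
    Ramp-filtering the residual before back-projection is what promotes
    the fast convergence reported above.
    """
    f = f0.copy()
    subsets = [list(range(s, p.shape[0], n_subsets)) for s in range(n_subsets)]
    for _ in range(n_iter):
        for sub in subsets:
            residual = p[sub] - project(f, sub)    # projection-space error
            f += ramp_backproject(residual, sub)   # filtered correction
    return f
```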
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphics processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code’s execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained.