The accurate and fast prediction of potential propagation in neuronal networks is of prime importance in neuroscience. This work develops a novel structure-preserving model reduction technique to address this problem based on Galerkin projection and nonnegative operator approximation. It is first shown that the corresponding reduced-order model is guaranteed to be energy stable, thanks to both the structure-preserving approach that constructs a distinct reduced-order basis for each cable in the network and the preservation of nonnegativity. Furthermore, a posteriori error estimates are provided, showing that the model reduction error can be bounded and controlled. Finally, the application to the model reduction of a large-scale neuronal network underlines the capability of the proposed approach to accurately predict the potential propagation in such networks while providing substantial speedups.
A novel approach for simulating potential propagation in neuronal branches with high accuracy is developed. The method relies on high-order accurate difference schemes using the Summation-By-Parts operators with weak boundary and interface conditions applied to the Hodgkin–Huxley equations. This work is the first demonstrating high accuracy for that equation. Several boundary conditions are considered including the non-standard one accounting for the soma presence, which is characterized by its own partial differential equation. Well-posedness for the continuous problem as well as stability of the discrete approximation is proved for all the boundary conditions. Gains in terms of CPU times are observed when high-order operators are used, demonstrating the advantage of the high-order schemes for simulating potential propagation in large neuronal trees.
This paper is concerned with computing very high order accurate linear functionals from a numerical solution of a time-dependent partial differential equation (PDE). Based on finite differences on summation-by-parts form, together with a weak implementation of the boundary conditions, we show how to construct suitable boundary conditions for the PDE such that the continuous problem is well-posed and the discrete problem is stable and spatially dual consistent. These two features result in a superconvergent functional, in the sense that the order of accuracy of the functional is provably higher than that of the solution.
In this paper we derive new farfield boundary conditions for the time-dependent Navier–Stokes and Euler equations in two space dimensions. The new boundary conditions are derived by simultaneously considering well-posedness of both the primal and dual problems. We moreover require that the boundary conditions for the primal and dual Navier–Stokes equations converge to well-posed boundary conditions for the primal and dual Euler equations.
We perform computations with a high-order finite difference scheme on summation-by-parts form with the new boundary conditions imposed weakly by the simultaneous approximation term. We prove that the scheme is both energy stable and dual consistent and show numerically that both linear and non-linear integral functionals become superconvergent.
In this paper we construct well-posed boundary conditions for the compressible Euler and Navier-Stokes equations in two space dimensions. When also considering the dual equations, we show how to construct the boundary conditions so that both the primal and dual problems are well-posed. By considering the primal and dual problems simultaneously, we construct energy stable and dual consistent finite difference schemes on summation-by-parts form with weak imposition of the boundary conditions.
According to linear theory, the stable and dual consistent discretization can be used to compute linear integral functionals from the solution at a superconvergent rate. Here we evaluate numerically the superconvergence property for the non-linear Euler and Navier–Stokes equations with linear and non-linear integral functionals.
In this paper we derive well-posed boundary conditions for a linear incompletely parabolic system of equations, which can be viewed as a model problem for the compressible Navier–Stokes equations. We show a general procedure for the construction of the boundary conditions such that both the primal and dual equations are well-posed.
The form of the boundary conditions is chosen such that reduction to first order form with its complications can be avoided.
The primal equation is discretized using finite difference operators on summation-by-parts form with weak boundary conditions. It is shown that the discretization can be made energy stable, and that energy stability is sufficient for dual consistency.
Since reduction to first order form can be avoided, the discretization is significantly simpler compared to a discretization using Dirichlet boundary conditions.
We compare the new boundary conditions with standard Dirichlet boundary conditions in terms of rate of convergence, errors and discrete spectra. It is shown that the scheme with the new boundary conditions is not only far simpler, but also has smaller errors, error bounded properties, and highly optimizable eigenvalues, while maintaining all desirable properties of a dual consistent discretization.
In this paper we study the heat and advection equations in single and multiple domains. The equations are discretized using a second-order accurate finite difference method on Summation-By-Parts form with weak boundary and interface conditions. We derive analytic expressions for the spectrum of the continuous problem and for the corresponding discretization matrices.
It is shown how the spectrum of the single-domain operator is contained in the spectrum of the multi-domain operator when artificial interfaces are introduced. The interface treatments are posed as a function of one parameter, and the impact on the spectrum and discretization error is investigated as a function of this parameter. Finally, we briefly discuss the generalization to higher order accurate schemes.
Finite difference operators satisfying the summation-by-parts (SBP) rules can be used to obtain high order accurate, energy stable schemes for time-dependent partial differential equations, when the boundary conditions are imposed weakly by the simultaneous approximation term (SAT).
In general, an SBP-SAT discretization is accurate of order p + 1 with an internal accuracy of 2p and a boundary accuracy of p. Despite this, it is shown in this paper that any linear functional computed from the time-dependent solution, will be accurate of order 2p when the boundary terms are imposed in a stable and dual consistent way.
The method does not involve the solution of the dual equations, and superconvergent functionals are obtained at no extra computational cost. Four representative model problems are analyzed in terms of convergence and errors, and it is shown in a systematic way how to derive schemes that give superconvergent functional outputs.
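To make the SBP property concrete, the following sketch (ours, not taken from the papers) builds the standard second-order SBP first-derivative operator D = H⁻¹Q and checks the discrete integration-by-parts identity that the energy estimates rely on:

```python
import numpy as np

def sbp_d1(n, h):
    """Classical second-order SBP first-derivative operator D = H^{-1} Q."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0          # boundary-modified norm matrix
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5         # so that Q + Q^T = diag(-1, 0, ..., 0, 1)
    return H, np.linalg.solve(H, Q)

n, h = 21, 1.0 / 20
H, D = sbp_d1(n, h)
x = np.linspace(0.0, 1.0, n)
u, v = np.sin(x), np.cos(x)

# Discrete integration by parts: u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
assert abs(lhs - rhs) < 1e-12
```

Since HD = Q and Q + Qᵀ contains only the boundary terms, the identity holds to machine precision for any grid functions, which is exactly what the weak (SAT) boundary treatment exploits.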
A shielded thermocouple is a measurement device used for monitoring the temperature in chemically, or mechanically, hostile environments. The sensitive parts of the thermocouple are protected by a shielding layer. In order to improve the accuracy of the measurement device, we study an inverse heat conduction problem where the temperature on the surface of the shielding layer is sought, given measured temperatures in the interior of the thermocouple. The procedure is well suited for real-time applications where newly collected data is continuously used to compute current estimates of the surface temperature. Mathematically we can formulate the problem as a Cauchy problem for the heat equation, in cylindrical coordinates, where data is given along the line r = r₁ and the solution is sought at r₁ < r ≤ r₂. The problem is ill-posed, in the sense that the solution (if it exists) does not depend continuously on the data. Thus, regularization techniques are needed. The ill-posedness of the problem is analyzed and a numerical method is proposed. Numerical experiments demonstrate that the proposed method works well.
We consider two dimensional inverse steady state heat conduction problems in complex geometries. The coefficients of the elliptic equation are assumed to be non-constant. Cauchy data are given on one part of the boundary and we want to find the solution in the whole domain. The problem is ill-posed in the sense that the solution does not depend continuously on the data.
Using an orthogonal coordinate transformation the domain is mapped onto a rectangle. The Cauchy problem can then be solved by replacing one derivative by a bounded approximation. The resulting well-posed problem can then be solved by a method of lines. A bounded approximation of the derivative can be obtained by differentiating a cubic spline that approximates the function in the least squares sense. This particular approximation of the derivative is computationally efficient and flexible in the sense that it is easy to handle different kinds of boundary conditions.

This inverse problem arises in iron production, where the walls of a melting furnace are subject to physical and chemical wear. Temperature and heat-flux data are collected by several thermocouples located inside the walls. The shape of the interface between the molten iron and the walls can then be determined by solving an inverse heat conduction problem.

In our work we make extensive use of FEMLAB for creating test problems. By using FEMLAB we solve relatively complex model problems for the purpose of creating numerical test data used for validating our methods. For the types of problems we are interested in, numerical artifacts appear near corners in the domain in the gradients that FEMLAB calculates. We demonstrate why this happens and also how we deal with the problem.
We present a one-dimensional model describing the blood flow through a moderately curved and elastic blood vessel. We use an existing two-dimensional model of the vessel wall along with the Navier-Stokes equations to model the flow through the channel, while taking additional factors, namely the surrounding muscle tissue and the presence of external forces other than gravity, into account. Our model is obtained via a dimension reduction procedure based on the assumption of thinness of the vessel relative to its length. Results of numerical simulations are presented to highlight the influence of different factors on the blood flow. (C) 2018 Elsevier Inc. All rights reserved.
One dimensional models for fluid flow in tubes are frequently used to model complex systems, such as the arterial tree, where a large number of vessels are linked together at bifurcations. At the junctions transmission conditions are needed. One popular option is the classic Kirchhoff conditions, which impose conservation of mass at the bifurcation and prescribe a continuous pressure at the joint.
In reality the boundary layer phenomena predict fast local changes to both velocity and pressure inside the bifurcation. Thus it is not appropriate for a one dimensional model to assume a continuous pressure. In this work we present a modification to the classic Kirchhoff conditions, with a symmetric pressure drop matrix, that is more suitable for one dimensional flow models. An asymptotic analysis, carried out previously, shows that the new transmission conditions have an exponentially small error.
The modified transmission conditions take the geometry of the bifurcation into account and can treat two outlets differently. The conditions can also be written in a form that is suitable for implementation in a finite difference solver. Also, by appropriate choice of the pressure drop matrix we show that the new transmission conditions can produce head loss coefficients similar to experimentally obtained ones.
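In our own notation (a sketch, not the paper's exact formulation), the classic and modified junction conditions can be contrasted as follows:

```latex
% Classic Kirchhoff conditions at a junction of n vessel segments
% (q_j = flux in segment j, taken positive toward the junction):
\sum_{j=1}^{n} q_j = 0, \qquad p_1 = p_2 = \cdots = p_n .
% Modified conditions with a symmetric pressure-drop matrix S = (s_{jk}):
\sum_{j=1}^{n} q_j = 0, \qquad p_j - p_n = \sum_{k} s_{jk}\, q_k ,
\quad j = 1, \dots, n-1 .
```

The classic conditions force one common pressure; the modified conditions allow a flux-dependent pressure jump at the joint, with the symmetry of S reflecting the geometry of the bifurcation.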
A false aneurysm is a hematoma, i.e. a collection of blood outside of a blood vessel, that forms due to a hole in the wall of an artery. This represents a serious medical condition that needs to be monitored and, under certain conditions, treated urgently. In this work a one-dimensional model of a false aneurysm is proposed. The new model is based on a one-dimensional model of an artery previously presented by the authors, and it takes into account the interaction between the hematoma and the surrounding muscle material. The model equations are derived using rigorous asymptotic analysis for the case of a simplified geometry. Even though the model is simple, it still supports a realistic behavior for the system consisting of the vessel and the hematoma. Using numerical simulations we illustrate the behavior of the model. We also investigate the effect of changing the size of the hematoma. The simulations show that our model can reproduce realistic solutions. For instance, we show the typical strong pulsation of an aneurysm by blood entering the hematoma during the work phase of the cardiac cycle, and the blood returning to the vessel during the resting phase. We also show that the aneurysm grows if the pulse rate is increased due to, e.g., a higher work load.
The Cauchy problem for the Helmholtz equation appears in applications related to acoustic or electromagnetic wave phenomena. The problem is ill-posed in the sense that the solution does not depend on the data in a stable way. In this paper we give a detailed study of the problem. Specifically we investigate how the ill-posedness depends on the shape of the computational domain and also on the wave number. Furthermore, we give an overview of standard techniques for dealing with ill-posed problems and apply them to the problem.
The Cauchy problem for the Helmholtz equation appears in various applications. The problem is severely ill-posed and regularization is needed to obtain accurate solutions. We start from a formulation of this problem as an operator equation on the boundary of the domain and consider the equation in (H^{-1/2})* spaces. By introducing an artificial boundary in the interior of the domain we obtain an inner product for this Hilbert space in terms of a quadratic form associated with the Helmholtz equation, perturbed by an integral over the artificial boundary. The perturbation guarantees the positivity of the quadratic form. This inner product allows an efficient evaluation of the adjoint operator in terms of the solution of a well-posed boundary value problem for the Helmholtz equation with transmission boundary conditions on the artificial boundary. In an earlier paper we showed how to take advantage of this framework to implement the conjugate gradient method for solving the Cauchy problem. In this work we instead use the conjugate gradient method for minimizing a Tikhonov functional. The added penalty term regularizes the problem and gives us a regularization parameter that can be used to easily control the stability of the numerical solution with respect to measurement errors in the data. Numerical tests show that the proposed algorithm works well. (C) 2016 Elsevier Ltd. All rights reserved.
We present a modification of the alternating iterative method, which was introduced by V.A. Kozlov and V. Maz'ya for solving the Cauchy problem for the Helmholtz equation in a Lipschitz domain. The method is implemented numerically using the finite difference method.
The Cauchy problem for the Helmholtz equation is considered. It was demonstrated in a previous paper by the authors that the alternating algorithm suggested by V.A. Kozlov and V.G. Maz'ya does not converge for large wavenumbers k in the Helmholtz equation. Here, we present some simple modifications of the algorithm which may restore the convergence. They consist of the replacement of the Neumann-Dirichlet iterations by Robin-Dirichlet ones, which restores the convergence for k² less than the first Dirichlet-Laplacian eigenvalue. In order to treat large wavenumbers, we present an algorithm based on iterative solution of Robin-Dirichlet boundary value problems in a sufficiently narrow border strip. Numerical implementations obtained using the finite difference method are presented. The numerical results illustrate that the algorithms suggested in this paper produce convergent iterative sequences.
In this paper we consider two different linear covariance structures, namely banded and banded Toeplitz, and how to estimate them using different methods, e.g., by minimizing different norms.
One way to estimate the parameters in a linear covariance structure is to use tapering, which has been shown to be the solution to a universal least squares problem. However, tapering does not always guarantee that the estimated covariance matrix is positive definite, and may therefore not be a suitable method. We propose some new methods which preserve positive definiteness and still give the correct structure.
More specifically, we consider the problem of estimating the parameters of a multivariate normal p-dimensional random vector for (i) a banded covariance structure reflecting m-dependence, and (ii) a banded Toeplitz covariance structure.
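A small numpy example (our own hypothetical 3×3 covariance, not from the paper) showing why tapering alone can be unsuitable: banding a perfectly valid covariance matrix can destroy positive definiteness:

```python
import numpy as np

# A valid (positive definite) covariance matrix with strong correlations.
C = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
assert np.all(np.linalg.eigvalsh(C) > 0)

# Tapering toward an m-dependent (banded) structure: zero out all entries
# outside the band. The result has the desired structure, but it is no
# longer positive definite -- the defect the proposed estimators avoid.
band = np.abs(np.subtract.outer(np.arange(3), np.arange(3))) <= 1
C_banded = C * band
assert np.min(np.linalg.eigvalsh(C_banded)) < 0
```

Here the banded matrix has an eigenvalue 1 − 0.9·√2 < 0, so it cannot be a covariance matrix even though every retained entry is unchanged.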
The eigenvalue problem for linear differential operators is important since eigenvalues correspond to the possible energy levels of a physical system. It is also important to have good estimates of the error in the computed eigenvalues. In this work, we use spline interpolation to construct approximate eigenfunctions of a linear operator using the corresponding eigenvectors of a discretized approximation of the operator. We show that an error estimate for the approximate eigenvalues can be obtained by evaluating the residual for an approximate eigenpair. The interpolation scheme is selected in such a way that the residual can be evaluated analytically. To demonstrate that the method gives useful error bounds, we apply it to a problem originating from the study of graphene quantum dots where the goal was to investigate the change in the spectrum from incorporating electron–electron interactions in the potential.
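The residual-based bound can be illustrated with a generic symmetric matrix in numpy (not the graphene operator from the paper): for symmetric A, the distance from an approximate eigenvalue to the true spectrum is bounded by the residual norm of the approximate eigenpair:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)                      # a generic symmetric operator

true_eigs = np.linalg.eigvalsh(A)

# Perturb an exact eigenvector to mimic an approximate eigenfunction
# obtained from a discretization.
w, V = np.linalg.eigh(A)
v = V[:, 0] + 1e-4 * rng.standard_normal(n)
mu = v @ A @ v / (v @ v)                 # Rayleigh quotient estimate

# For symmetric A the residual bounds the eigenvalue error:
#   min_i |lambda_i - mu| <= ||A v - mu v|| / ||v||
residual = np.linalg.norm(A @ v - mu * v) / np.linalg.norm(v)
assert np.min(np.abs(true_eigs - mu)) <= residual + 1e-12
```

The paper's contribution is constructing the spline interpolant so that this residual can be evaluated analytically for the continuous operator, rather than only for a matrix as in this sketch.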
The inverse geothermal problem consists of estimating the temperature distribution below the earth's surface using measurements on the surface. The problem is important since temperature governs a variety of geologic processes, including the generation of magmas and the deformation style of rocks. Since the thermal properties of rocks depend strongly on temperature the problem is non-linear.
The problem is formulated as an ill-posed operator equation, where the right-hand side is the heat-flux at the surface level. Since the problem is ill-posed, regularization is needed. In this study we demonstrate that Tikhonov regularization can be implemented efficiently for solving the operator equation. The algorithm is based on having a code for solving a well-posed problem related to the above mentioned operator. The algorithm is designed in such a way that it can deal with both 2D and 3D calculations.
Numerical results, for 2D domains, show that the algorithm works well and the inverse problem can be solved accurately with a realistic noise level in the surface data.
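The regularization step can be sketched in a few lines of numpy; the ill-posed operator is stood in for by a Hilbert matrix, a standard ill-conditioned test case unrelated to the actual geothermal operator:

```python
import numpy as np

n = 12
i, j = np.indices((n, n))
A = 1.0 / (i + j + 1.0)                  # Hilbert matrix: severely ill-conditioned
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy "surface" data

# Naive inversion: the tiny data noise is amplified enormously.
x_naive = np.linalg.solve(A, b)

# Tikhonov: minimize ||Ax - b||^2 + alpha^2 ||x||^2 via the normal equations.
alpha = 1e-5
x_tik = np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

assert np.linalg.norm(x_tik - x_true) < np.linalg.norm(x_naive - x_true)
```

The regularization parameter alpha trades noise amplification against bias; in the paper this solve is carried out matrix-free using repeated calls to the well-posed forward code.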
The partial least squares (PLS) method computes a sequence of approximate solutions x⁽ᵏ⁾ ∈ K_k(AᵀA, Aᵀb), k = 1, 2, ..., to the least squares problem min_x ‖Ax − b‖₂. If carried out to completion, the method always terminates with the pseudoinverse solution x† = A†b. Two direct PLS algorithms are analyzed. The first uses the Golub-Kahan Householder algorithm for reducing A to upper bidiagonal form. The second is the NIPALS PLS algorithm, due to Wold et al., which is based on rank-reducing orthogonal projections. The Householder algorithm is known to be mixed forward-backward stable. Numerical results are given, that support the conjecture that the NIPALS PLS algorithm shares this stability property. We draw attention to a flaw in some descriptions and implementations of this algorithm, related to a similar problem in Gram-Schmidt orthogonalization, that spoils its otherwise excellent stability. For large-scale sparse or structured problems, the iterative algorithm LSQR is an attractive alternative, provided an implementation with reorthogonalization is used.
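A compact numpy sketch (not one of the analyzed algorithms) of the Krylov-space characterization: we build an orthonormal basis of K_k(AᵀA, Aᵀb) with reorthogonalization and check that, carried to completion, the iterate reaches the pseudoinverse solution:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)

def pls_like(A, b, k):
    """Least squares solution restricted to the Krylov space K_k(A^T A, A^T b)."""
    V = np.empty((A.shape[1], k))
    v = A.T @ b
    for i in range(k):
        v = v - V[:, :i] @ (V[:, :i].T @ v)   # full reorthogonalization
        V[:, i] = v / np.linalg.norm(v)
        v = A.T @ (A @ V[:, i])
    y, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
    return V @ y

# Carried to completion (k = rank A), the iterates reach x† = A† b.
x_full = pls_like(A, b, 6)
assert np.allclose(x_full, np.linalg.pinv(A) @ b)
```

The reorthogonalization step in the loop is the numerical safeguard whose omission, as the paper notes for NIPALS-style implementations, can spoil an otherwise stable algorithm.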
Axel Ruhe passed away April 4, 2015. He was cross-country-skiing with friends in the Swedish mountains when after 21 km he suddenly died. He is survived by his wife Gunlaug and three children from his first marriage....
Algorithms for partial least squares (PLS) modelling are placed into a sound theoretical context focusing on numerical precision and computational efficiency. NIPALS and other PLS algorithms that perform deflation steps of the predictors (X) may be slow or even computationally infeasible for sparse and/or large-scale data sets. As alternatives, we develop new versions of the Bidiag1 and Bidiag2 algorithms. These include full reorthogonalization of both score and loading vectors, which we consider to be both necessary and sufficient for numerical precision. Using a collection of benchmark data sets, these 2 new algorithms are compared to the NIPALS PLS and 4 other PLS algorithms acknowledged in the chemometrics literature. The provably stable Householder algorithm for PLS regression is taken as the reference method for numerical precision. Our conclusion is that our new Bidiag1 and Bidiag2 algorithms are the methods of choice for problems where both efficiency and numerical precision are important. When efficiency is not urgent, the NIPALS PLS and the Householder PLS are also good choices. The benchmark study shows that SIMPLS gives poor numerical precision even for a small number of factors. Further, the nonorthogonal scores PLS, direct scores PLS, and the improved kernel PLS are demonstrated to be numerically less stable than the best algorithms. Prototype MATLAB codes are included for the 5 PLS algorithms concluded to be numerically stable on our benchmark data sets. Other aspects of PLS modelling, such as the evaluation of the regression coefficients, are also analyzed using techniques from numerical linear algebra.
We build a multi-element variant of the smoothness increasing accuracy conserving (SIAC) shock capturing technique proposed for single element spectral methods by Wissink et al. (J Sci Comput 77:579–596, 2018). In particular, the baseline scheme of our method is the nodal discontinuous Galerkin spectral element method (DGSEM) for approximating the solution of systems of conservation laws. It is well known that high-order methods generate spurious oscillations near discontinuities which can develop in the solution for nonlinear problems, even when the initial data is smooth. We propose a novel multi-element SIAC filtering technique applied to the DGSEM as a shock capturing method. We design the SIAC filtering such that the numerical scheme remains high-order accurate and that the shock capturing is applied adaptively throughout the domain. The shock capturing method is derived for general systems of conservation laws. We apply the novel SIAC filter to several standard test problems for the two-dimensional Euler and ideal magnetohydrodynamics equations, with a variety of boundary conditions.
This paper outlines the development of a parallel three-dimensional hybrid finite volume finite difference capability. The specific application area under consideration is modeling the trailing vortices shed from the wings of aircraft under transonic flight conditions. For this purpose, the Elemental finite volume code is employed in the vicinity of the aircraft, whereas the ESSENSE finite difference software is employed to accurately resolve the trailing vortices. The former method is spatially formally second-order, and the latter is set to sixth-order accuracy. The coupling of the two methods is achieved in a stable manner through the use of summation-by-parts operators and weak imposition of boundary conditions using simultaneous approximation terms. The developed hybrid solver is successfully validated against an analytical test case. This is followed by demonstrating the ability to model the flowfield, including trailing vortex structures, around the NASA Common Research Model under transonic flow conditions. The interface treatment is shown to describe the intersecting vortices in a smooth manner. In addition, insights gained in resolving the vortices include violation of underlying assumptions of analytical vortex modeling methods.
Localizing a radiant source is a problem of great interest to many scientific and technological research areas. Localization based on range measurements is at the core of technologies such as radar, sonar and wireless sensor networks. In this manuscript, we offer an in-depth study of the model for source localization based on range measurements obtained from the source signal, from the point of view of algebraic geometry. In the case of three receivers, we find unexpected connections between this problem and the geometry of Kummer's and Cayley's surfaces. Our work also gives new insights into localization based on range differences.
Incompressible Navier-Stokes solvers based on the projection method often require an expensive numerical solution of a Poisson equation for a pressure-like variable. This often involves linear system solvers based on iterative and multigrid methods which may limit the ability to scale to large numbers of processors. The artificial compressibility method (ACM) [6] introduces a time derivative of the pressure into the incompressible form of the continuity equation creating a coupled closed hyperbolic system that does not require a Poisson equation solution and allows for explicit time-marching and localized stencil numerical methods. Such a scheme should theoretically scale well on large numbers of CPUs, GPUs, or hybrid CPU-GPU architectures. The original ACM was only valid for steady flows and dual-time stepping was often used for time-accurate simulations. Recently, Clausen [7] has proposed the entropically damped artificial compressibility (EDAC) method which is applicable to both steady and unsteady flows without the need for dual-time stepping. The EDAC scheme was successfully tested with both a finite-difference MacCormack's method for the two-dimensional lid driven cavity and periodic double shear layer problem and a finite-element method for flow over a square cylinder, with scaling studies on the latter to large numbers of processors. In this study, we discretize the EDAC formulation with a new optimized high-order centered finite-difference scheme and an explicit fourth-order Runge-Kutta method. This is combined with an immersed boundary method to efficiently treat complex geometries and a new robust outflow boundary condition to enable higher Reynolds number simulations on truncated domains. Validation studies for the Taylor-Green Vortex problem and the lid driven cavity problem in both 2D and 3D are presented. An eddy viscosity subgrid-scale model is used to enable large eddy simulations for the 3D cases.
Finally, an application to flow over a sphere is presented to highlight the boundary condition, and performance comparisons to a traditional incompressible Navier-Stokes solver are shown for the 3D lid driven cavity. Overall, the combined EDAC formulation and discretization is shown to be both effective and affordable.
Partial least squares is a common technique for multivariate regression. The procedure is recursive and in each step basis vectors are computed for the explaining variables and the solution vectors. A linear model is fitted by projection onto the span of the basis vectors. The procedure is mathematically equivalent to Golub-Kahan bidiagonalization, which is a Krylov method, and which is equivalent to a pair of matrix factorizations. The vectors of regression coefficients and prediction are non-linear functions of the right hand side. An algorithm for computing the Fréchet derivatives of these functions is derived, based on perturbation theory for the matrix factorizations. From the Fréchet derivative of the prediction vector one can compute the number of degrees of freedom, which can be used as a stopping criterion for the recursion. A few numerical examples are given.
n/a
Bilinear tensor least squares problems occur in applications such as Hammerstein system identification and social network analysis. A linearly constrained problem of medium size is considered, and nonlinear least squares solvers of Gauss-Newton-type are applied to numerically solve it. The problem is separable, and the variable projection method can be used. Perturbation theory is presented and used to motivate the choice of constraint. Numerical experiments with Hammerstein models and random tensors are performed, comparing the different methods and showing that a variable projection method performs best.
Using the technique of semantic mirroring a graph is obtained that represents words and their translations from a parallel corpus or a bilingual lexicon. The connectedness of the graph holds information about the semantic relations of words that occur in the translations. Spectral graph theory is used to partition the graph, which leads to a grouping of the words in different clusters. We illustrate the method using a small sample of seed words from a lexicon of Swedish and English adjectives and discuss its application to computational lexical semantics and lexicography.
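The spectral partitioning step can be sketched with a toy graph (our own construction, not the lexicon data): the sign pattern of the Fiedler vector of the graph Laplacian separates the clusters:

```python
import numpy as np

# Adjacency matrix of a small word-translation graph: two 3-node clusters
# (e.g. near-synonyms and their translations) joined by one weak edge.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
w, V = np.linalg.eigh(L)
fiedler = V[:, 1]                        # eigenvector of 2nd smallest eigenvalue

# The sign of the Fiedler vector partitions the graph into the two clusters.
cluster = fiedler > 0
assert set(np.flatnonzero(cluster)) in ({0, 1, 2}, {3, 4, 5})
```

Cutting along the sign change removes only the single bridge edge, which is the minimal-cut behavior that makes the Fiedler vector a natural clustering tool here.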
It is well known that the classical exploratory factor analysis (EFA) of data with more observations than variables has several types of indeterminacy. We study the factor indeterminacy and show some new aspects of this problem by considering EFA as a specific data matrix decomposition. We adopt a new approach to the EFA estimation and achieve a new characterization of the factor indeterminacy problem. A new alternative model is proposed, which gives determinate factors and can be seen as a semi-sparse principal component analysis (PCA). An alternating algorithm is developed, where in each step a Procrustes problem is solved. It is demonstrated that the new model/algorithm can act as a specific sparse PCA and as a low-rank-plus-sparse matrix decomposition. Numerical examples with several large data sets illustrate the versatility of the new model, and the performance and behaviour of its algorithmic implementation.
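The Procrustes subproblem solved in each step of the alternating algorithm has a closed-form SVD solution; here is a minimal numpy sketch with generic matrices (not EFA data):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((10, 4))
B = rng.standard_normal((10, 4))

# Orthogonal Procrustes: minimize ||A - B Q||_F over orthogonal Q.
# The solution is Q = U V^T, where B^T A = U S V^T is the SVD.
U, _, Vt = np.linalg.svd(B.T @ A)
Q = U @ Vt

def misfit(Q):
    return np.linalg.norm(A - B @ Q, "fro")

# Sanity check: no random orthogonal matrix does better.
for _ in range(100):
    R, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    assert misfit(Q) <= misfit(R) + 1e-12
```

Since ‖BQ‖_F is invariant under orthogonal Q, minimizing the misfit is equivalent to maximizing tr(QᵀBᵀA), which the SVD solves exactly; this is why each step of the alternating algorithm is cheap and globally optimal.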
Kaczmarz's method, sometimes referred to as the algebraic reconstruction technique, is an iterative method that is widely used in tomographic imaging due to its favorable semi-convergence properties. Specifically, when applied to a problem with noisy data, during the early iterations it converges very quickly toward a good approximation of the exact solution, and thus produces a regularized solution. While this property is generally accepted and utilized, there is surprisingly little theoretical justification for it. The purpose of this paper is to present insight into the semi-convergence of Kaczmarz's method as well as its projected counterpart (and their block versions). To do this we study how the data errors propagate into the iteration vectors and we derive upper bounds for this noise propagation. Our bounds are compared with numerical results obtained from tomographic imaging.
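A minimal numpy experiment (a synthetic ill-conditioned system of our own, not the paper's test problems) showing the regularizing early phase of Kaczmarz's method:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
# Ill-conditioned system with a "smooth" exact solution, loosely mimicking
# a discretized tomography problem.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -3, n)
A = U @ np.diag(s) @ V.T
x_true = V @ np.exp(-0.1 * np.arange(n))     # mostly well-conditioned content
b = A @ x_true + 0.01 * rng.standard_normal(n)

def kaczmarz_sweep(A, b, x):
    """One cyclic pass of Kaczmarz: project onto each row's hyperplane."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x

x = np.zeros(n)
errors = [np.linalg.norm(x - x_true)]
for _ in range(50):
    x = kaczmarz_sweep(A, b, x)
    errors.append(np.linalg.norm(x - x_true))

# Early sweeps reduce the error rapidly: the regularizing phase of
# semi-convergence. If iterated long enough, the propagated noise
# typically makes the error grow again.
assert errors[1] < errors[0]
assert min(errors) < 0.5 * errors[0]
```

The bounds derived in the paper quantify exactly this noise-propagation mechanism; the sketch only displays the empirical error history.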
In tomographic reconstruction problems it is not uncommon that there are errors in the implementation of the forward projector and/or the backprojector, and hence we encounter a so-called unmatched projector/backprojector pair. Consequently, the matrices that represent the two projectors are not each other's transpose. Surprisingly, the influence of such errors in algebraic iterative reconstruction methods has received little attention in the literature. The goal of this paper is to perform a rigorous first-order perturbation analysis of the minimization problems underlying the algebraic methods in order to understand the role played by the mismatch of the matrices. We also study the convergence properties of linear stationary iterations based on unmatched matrix pairs, leading to insight into the behavior of some important row- and column-oriented algebraic iterative methods. We conclude with numerical examples that illustrate the perturbation and convergence results.
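A linear stationary iteration with an unmatched pair can be sketched as follows (a toy setup, not the paper's experiments): the backprojector B is a perturbed transpose of A, and the fixed point of the iteration solves B A x = B b rather than the least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 15))
B = A.T + 1e-3 * rng.standard_normal((15, 40))   # unmatched "backprojector"
x_true = rng.standard_normal(15)
b = A @ x_true                                    # consistent data

# Stationary iteration x <- x + omega * B (b - A x).  With B != A^T its
# limit solves B A x = B b; it converges when the spectral radius of
# I - omega*B*A is below one.
omega = 1.0 / np.linalg.norm(B @ A, 2)
x = np.zeros(15)
for _ in range(5000):
    x = x + omega * B @ (b - A @ x)
```

Here the data are consistent, so x_true still satisfies B A x = B b and is recovered; with inconsistent data the unmatched limit differs from the least-squares solution, which is exactly the perturbation the paper quantifies.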
Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. In this paper we consider the case of noise-free data. We present a convergence analysis of the column algorithms, we discuss two techniques (loping and flagging) for reducing the work, and we establish some convergence results for methods that utilize these techniques. The performance of the algorithms is illustrated with numerical examples from computed tomography.
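A column-action sweep amounts to cyclic coordinate descent on the least-squares functional. The sketch below is a simplified illustration: the skip_tol threshold is a crude stand-in for the loping/flagging strategies discussed above, not the paper's actual rules.

```python
import numpy as np

def column_kaczmarz(A, b, n_sweeps, skip_tol=0.0):
    """Column-oriented sweeps (coordinate descent on ||A x - b||^2).
    Updates with magnitude <= skip_tol are skipped -- a simplified
    stand-in for loping/flagging."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                      # running residual b - A x
    col_norms = (A * A).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(n):
            d = A[:, j] @ r / col_norms[j]
            if abs(d) <= skip_tol:
                continue              # "lope" over a negligible update
            x[j] += d
            r -= d * A[:, j]
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 8))
b = rng.standard_normal(25)           # generally inconsistent system
x = column_kaczmarz(A, b, n_sweeps=200)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
```

Unlike the row version, the iterates converge to a least-squares solution even for inconsistent data, which is the property highlighted above.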
We give a detailed study of the semiconvergence behavior of projected nonstationary simultaneous iterative reconstruction technique (SIRT) algorithms, including the projected Landweber algorithm. We also consider the use of a relaxation parameter strategy, proposed recently for the standard algorithms, for controlling the semiconvergence of the projected algorithms. We demonstrate the semiconvergence and the performance of our strategies by examples taken from tomographic imaging.
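The projected Landweber iteration can be sketched in a few lines; the relaxation-parameter callback mimics the kind of strategy discussed above but is a placeholder, not the paper's rule, and the test problem is synthetic.

```python
import numpy as np

def projected_landweber(A, b, n_iter, relax):
    """Projected Landweber: x <- max(0, x + w_k/||A||^2 * A^T (b - A x)).
    relax(k) returns the relaxation parameter w_k; a decreasing sequence
    can be used to control semiconvergence for noisy data."""
    x = np.zeros(A.shape[1])
    s = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    for k in range(n_iter):
        x = np.maximum(0.0, x + relax(k) / s * (A.T @ (b - A @ x)))
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
x_true = np.abs(rng.standard_normal(10))   # nonnegative "image"
b = A @ x_true                              # noise-free data
x = projected_landweber(A, b, 2000, relax=lambda k: 1.0)
```

With noise-free data the iterates converge to the nonnegative solution; with noisy data they first approach it and then deteriorate, which is the semiconvergence behavior studied above.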
Experiences from using a higher-order accurate finite difference multiblock solver to compute the time-dependent flow over a cavity are summarized. The work has been carried out as part of the European project IDIHOM, in a collaboration between the Swedish Defense Research Agency (FOI) and the University of Linköping (LiU). The higher-order code is based on Summation-By-Parts operators combined with the Simultaneous Approximation Term approach for boundary and interface conditions. The spatial accuracy of the code is verified by calculations over a cylinder, monitoring the decay of the errors of known wall quantities as the grid is refined. The focus is on the validation for a test case of transonic flow over a rectangular cavity with hybrid RANS/LES calculations. The results are compared with reference numerical results from a second-order finite volume code as well as with experimental results, with good overall agreement.
A novel time integration approach is explored for unsteady flow computations. It is a multi-block formulation in time where one solves for all time levels within a block simultaneously. The time discretization within a block is based on the summation-by-parts (SBP) technique in time combined with the simultaneous-approximation-term (SAT) technique for imposing the initial condition. The approach is implicit, unconditionally stable and can be made high-order accurate in time. The implicit system is solved by a dual time stepping technique. The technique has been implemented in a flow solver for unstructured grids and applied to an unsteady flow problem with vortex shedding over a cylinder. Four time integration approaches, 2nd to 5th order accurate in time, are evaluated and compared to the conventional 2nd order backward difference (BDF2) method and a 4th order diagonally implicit Runge-Kutta scheme (ESDIRK64). The obtained orders of accuracy are higher than expected and correspond to the accuracy in the interior of the blocks; up to 8th order accuracy is obtained. The influence on the accuracy from the size of the time blocks is small. Smaller blocks are computationally more efficient, though, and the efficiency increases with increased accuracy of the SBP operator and reduced size of time steps. The most accurate scheme, with a small time step and block size, is approximately as efficient as the ESDIRK64 scheme. There is significant potential for improvements, ranging from convergence acceleration techniques in dual time and alternative initialization of time blocks to the introduction of smaller time blocks based on alternative SBP operators.
The discretization of the viscous operator in an edge-based flow solver for unstructured grids has been investigated. A compact discretization of the Laplace and thin-layer operators in the viscous terms is used with two different wall boundary conditions. Furthermore, a wide discretization of the same operators is investigated. The resulting numerical operators are all formally second order accurate in space; the wide operator has higher truncation errors. The operators are implemented weakly using a penalty formulation and are shown to be stable for a scalar model problem with given constraints on the penalty coefficients. The different operators are applied to a set of grid convergence test cases, from laminar flow in two dimensions up to a large-scale three-dimensional turbulent flow problem. The operators converge to the same solutions as the grids are refined, with one exception where the wide operator converges to a solution with higher drag. The 2nd compact discretization, being locally more accurate at a wall boundary than the original 1st compact operator, reduces the grid dependency somewhat for most test cases. The wide operator performs very well and, for most test cases, leads to results with minimum spread between the coarsest and finest grids. For one test case, though, the wide operator has a negative influence on the steady-state convergence.
Many geophysical phenomena are characterized by properties that evolve over a wide range of scales which introduce difficulties when attempting to model these features in one computational method. We have developed a high-order finite difference method for the elastic wave equation that is able to efficiently handle varying temporal and spatial scales in a single, stand-alone framework. We apply this method to earthquake cycle models characterized by extremely long interseismic periods interspersed with abrupt, short periods of dynamic rupture. Through the use of summation-by-parts operators and weak enforcement of boundary conditions we derive a provably stable discretization. Time stepping is achieved through the implicit θ-method which allows us to take large time steps during the intermittent period between earthquakes and adapts appropriately to fully resolve rupture.
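The implicit θ-method underlying the time stepping can be sketched on the linear test problem u' = λu, where the implicit stage is solvable in closed form. This is a minimal illustration of the integrator family only; the adaptive step-size control used for earthquake cycles is not shown.

```python
import numpy as np

def theta_method(lam, u0, dt, n_steps, theta):
    """theta-method for u' = lam*u:
       u_{n+1} = u_n + dt*((1-theta)*lam*u_n + theta*lam*u_{n+1}).
    theta = 1 is backward Euler (1st order, very stable, suited to the long
    interseismic phase); theta = 1/2 is the trapezoidal rule (2nd order)."""
    u = u0
    for _ in range(n_steps):
        u = u * (1.0 + (1.0 - theta) * dt * lam) / (1.0 - theta * dt * lam)
    return u

lam, T = -2.0, 1.0
exact = np.exp(lam * T)
u_be = theta_method(lam, 1.0, T / 1000, 1000, theta=1.0)   # backward Euler
u_tr = theta_method(lam, 1.0, T / 1000, 1000, theta=0.5)   # trapezoidal
```

Both variants are unconditionally stable for λ with nonpositive real part, which is what permits the very large time steps between rupture events.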
Exact non-reflecting boundary conditions for a linear incompletely parabolic system in one dimension have been studied. The system is a model for the linearized compressible Navier-Stokes equations, but is less complicated which allows for a detailed analysis without approximations. It is shown that well-posedness is a fundamental property of the exact non-reflecting boundary conditions. By using summation by parts operators for the numerical approximation and a weak boundary implementation, it is also shown that energy stability follows automatically.
We derive a method to locally change the order of accuracy of finite difference schemes that approximate the second derivative. The derivation is based on summation-by-parts operators, which are connected at interfaces using penalty terms. At such interfaces, the numerical solution has a double representation, with one representation in each domain. We merge this double representation into a single one, yielding a new scheme with unique solution values in all grid points. The resulting scheme is proven to be stable, accurate and dual consistent.