 Research
 Open Access
On the solution of high order stable time integration methods
 Owe Axelsson^{1, 2},
 Radim Blaheta^{2},
 Stanislav Sysala^{2} and
 Bashir Ahmad^{1}
https://doi.org/10.1186/1687-2770-2013-108
© Axelsson et al.; licensee Springer. 2013
 Received: 15 February 2013
 Accepted: 9 April 2013
 Published: 26 April 2013
Abstract
Evolution equations arise in many important practical problems. They are frequently stiff, i.e. they involve fast, mostly exponentially decreasing and/or highly oscillating, components. To handle such problems, one must use proper forms of implicit numerical time-integration methods. In this paper, we consider two methods of high order of accuracy, one for parabolic problems and the other for hyperbolic problems. For parabolic problems, it is shown how the solution rapidly approaches the stationary solution. It is also shown how the arising quadratic polynomial algebraic systems can be solved efficiently by iteration with a proper preconditioner.
Keywords
 Parabolic Problem
 Discretization Error
 Quadrature Point
 Finite Difference Approximation
 Implicit Euler Method
1 Introduction
In many applications, after discretization in space, an initial value problem of the form $M\frac{d\mathbf{u}}{dt}+A\mathbf{u}=\mathbf{f}(t)$, $t>0$, $\mathbf{u}(0)={\mathbf{u}}_{0}$, arises. Here, $\mathbf{u},\mathbf{f}\in {\mathrm{\Re}}^{n}$ and M, A are $n\times n$ matrices, where M is a mass matrix. For a finite difference approximation, $M=I$, the identity matrix.
In the above applications, the order n of the system can be very large. Under reasonable assumptions on the given source function f, the system is stable, i.e. its solution is bounded for all $t>0$ and converges to a fixed stationary solution as $t\to \mathrm{\infty}$, independent of the initial value ${\mathbf{u}}_{0}$. This holds if A is a normal matrix, that is, has a complete eigenvector space, and has eigenvalues with positive real parts. This condition holds for parabolic problems, where the eigenvalues of A are real and positive. In more involved problems, the matrix A may have complex eigenvalues with arbitrarily large imaginary parts.
This method is only first-order accurate, i.e. its global time discretization error is $O(\tau )$. Therefore, to get a sufficiently small discretization error, one must choose very small time-steps, which makes the method computationally expensive and also causes a stronger increase of round-off errors. However, there exist stable time-integration methods of arbitrarily high order. They are of implicit Runge-Kutta quadrature type (see e.g. [1–5]) and belong to the class of A-stable methods, i.e. the eigenvalues $\mu ({B}^{-1})$ of the corresponding matrix B, where $B\tilde{\mathbf{u}}(t+\tau )=\tilde{\mathbf{u}}(t)+\tau \tilde{\mathbf{f}}(t)$ and $\tilde{\mathbf{f}}(t)$ is a linear function of $\mathbf{f}(t)$ at the quadrature points in the interval $[t,t+\tau ]$, satisfy $|\mu ({B}^{-1})|<1$ for all normal matrices ${M}^{-1}A$ with $\mathrm{\Re}e(\lambda )>0$. The highest order achieved, $O({\tau}^{2m})$, occurs for Gauss quadrature, where m equals the number of quadrature points within each time interval.
To obtain stronger damping of the stiff solution components, one can use a special subclass of such methods, based on Radau quadrature; see, e.g. [1, 5]. The discretization error is here only one order less, $O({\tau}^{2m-1})$. For linear problems, all such stable methods lead to rational polynomial approximation matrices B, and hence to the need to solve quadratic polynomial equations. For stable methods, it turns out that the roots of these polynomials are complex.
In Section 2, a preconditioning method is described that is very efficient when solving such systems, without the need to factorize the quadratic polynomials into first-order factors, thereby avoiding the use of complex arithmetic. Section 3 discusses the special case where $m=2$. It also shows how the general case, where $m>2$, can be handled.
Section 4 deals with the use of implicit Runge-Kutta methods of Gauss quadrature type for solving hyperbolic systems of Hamiltonian type.
Section 5 presents a method to derive time discretization errors.
In Section 6, some illustrating numerical tests are shown. The paper ends with concluding remarks.
2 Preconditioners for quadratic matrix polynomials
From the introduction, it follows that it is important to use an efficient solution method for quadratic matrix polynomials and not to factorize them into first-order factors when this results in complex valued factors. For a method to solve complex valued systems in real arithmetic, see, e.g. [6]. Here, we use a particular method that is suitable for the arising quadratic matrix polynomials.
where ${\tilde{C}}_{\alpha}={M}^{-1/2}{C}_{\alpha}{M}^{-1/2}={(I+\alpha \tilde{A})}^{2}$, etc. and $\tilde{\mathbf{x}}={M}^{1/2}\mathbf{x}$. Note that, by similarity, ${C}_{\alpha}^{-1}B$ and ${\tilde{C}}_{\alpha}^{-1}\tilde{B}$ have the same eigenvalues.
We are interested in cases where $\tilde{A}$ may have large eigenvalues. (In our application, $\tilde{A}$ involves a time-step factor τ, but since we use higher order time-discretization methods, τ will not be very small and cannot damp out the inverse of some power of the space-discretization parameter h that also occurs in $\tilde{A}$.) Therefore, we choose $\alpha =b$. Note that this implies that $2\alpha -a>0$.
We write $\alpha \mu ={\mu}_{0}{e}^{i\phi}$ so $\frac{1}{2}(\alpha \mu +\frac{1}{\alpha \mu})=\frac{1}{2}({\mu}_{0}+\frac{1}{{\mu}_{0}})\cos (\phi )+\frac{i}{2}({\mu}_{0}-\frac{1}{{\mu}_{0}})\sin (\phi )$, where i is the imaginary unit. Note that ${\mu}_{0}>0$ so $\frac{1}{2}({\mu}_{0}+\frac{1}{{\mu}_{0}})\ge 1$. Since, by assumption, the real part of μ is positive, it holds that $\phi \le {\phi}_{0}<\pi /2$. A computation shows that the values of the factor $\frac{1}{1+\frac{1}{2}(\alpha \mu +\frac{1}{\alpha \mu})}$ are located in a disc in the complex plane with center at $\delta /2$ and radius $\delta /2$, where $\delta =1/(1+\cos {\phi}_{0})$.
Hence, $\lambda (\mu )$ is located in a disc with center at $1-\frac{1}{2}(1-\frac{a}{2\alpha})\delta $ and radius $\frac{1}{2}(1-\frac{a}{2\alpha})\delta $.
For ${\phi}_{0}=0$, i.e. for real eigenvalues of $\tilde{A}$, we get $\delta =1/2$ and $1\ge \lambda (\mu )\ge \frac{3}{4}+\frac{1}{8}\frac{a}{\alpha}$.
3 A stiffly stable time integration method
where $\mathbf{x},\mathbf{f}\in {\mathrm{\Re}}^{n}$, $\sigma (t)\ge {\sigma}_{0}>0$, and M, A are $n\times n$ matrices, where M is assumed to be symmetric positive definite (spd) and the symmetric part of A is positive semidefinite. In the practical applications that we consider, M corresponds to a mass matrix and A to a second-order diffusion or diffusion-convection matrix. Hence, n is large. Under reasonable assumptions on the source function f, such a system is stable for all t and its solution approaches a finite function, independent of the initial value ${\mathbf{x}}_{0}$, as $t\to \mathrm{\infty}$.
where $f:(0,\mathrm{\infty})\times V\to {V}^{\prime}$ and V is a reflexive Banach space.
Hence, (5) is stable in this case.
i.e. (5) is asymptotically stable. In particular, the above holds for the test problem considered in Section 6.
and ${\lambda}_{E}$ denotes the eigenvalues of E. Furthermore, to cope with problems where $|arg({\lambda}_{E})|\le \alpha <\frac{\pi}{2}$, but arbitrarily close to $\pi /2$, one needs A-stable methods; see e.g. [3, 7, 8]. To get stability for all times and time steps, one requires ${lim}_{|\lambda |\to \mathrm{\infty}}|{R}_{m}(\lambda )|\le c<1$, where preferably $c=0$. Such methods are called L-stable (Lambert) and stiffly A-stable [3], respectively.
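The distinction between A-stable and stiffly A-stable (L-stable) behavior can be seen directly from the stability functions of the two simplest representatives. The sketch below uses the standard stability functions of the implicit Euler and trapezoidal methods, which are textbook facts rather than formulas from this paper:

```python
import numpy as np

def R_euler(z):
    # stability function of the implicit Euler method, R(z) = 1/(1+z) for u' = -u
    return 1.0 / (1.0 + z)

def R_trap(z):
    # stability function of the trapezoidal (Crank-Nicolson) method
    return (1.0 - z / 2.0) / (1.0 + z / 2.0)

z = 1e8   # a very stiff eigenvalue, z = tau * lambda with Re(lambda) > 0
print(abs(R_euler(z)))   # near 0: stiff components are damped out (L-stable)
print(abs(R_trap(z)))    # near 1: stiff components persist (A-stable only)
```

Hence the trapezoidal method satisfies $|{R}(\lambda )|\le 1$ but its limit value has modulus 1, so it is not stiffly A-stable, while the implicit Euler method has $c=0$.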
An important class of stiffly A-stable methods is a particular class of the implicit Runge-Kutta methods; see [1, 3, 5]. Such methods correspond to rational polynomial approximations of the matrix exponential function with a denominator of higher degree than the numerator. Examples of such methods are based on Radau quadrature, where the quadrature points are zeros of ${\tilde{P}}_{m}(\xi )-{\tilde{P}}_{m-1}(\xi )$, where $\{{\tilde{P}}_{k}\}$ are the Legendre polynomials, orthogonal on the interval $(0,1)$; see e.g. [1] and references therein. Note that $\xi =1$ is a root for all $m\ge 1$. The case $m=1$ is identical to the implicit Euler method.
Following [5], we consider here the next simplest case, where $m=2$, for the numerical solution of (4) over a time interval $[t,t+\tau ]$.
where ${\mathbf{x}}_{0}$ is the solution at time t, ${\sigma}_{1}=\sigma (t+\tau /3)$, ${\sigma}_{2}=\sigma (t+\tau )$, ${\mathbf{f}}_{1}=\mathbf{f}(t+\tau /3)$, ${\mathbf{f}}_{2}=\mathbf{f}(t+\tau )$, and $\tilde{A}=\frac{\tau}{12}A$. The global discretization error of the ${\mathbf{x}}_{2}$-component for this method is $O({\tau}^{3})$, i.e. it is a third-order method, and it is stiffly A-stable even for arbitrarily strong variations of the coefficient $\sigma (t)$. This can be compared with the trapezoidal or implicit midpoint methods, which are only second-order accurate and not stiffly stable.
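For illustration, the following sketch applies the standard two-stage Radau IIA tableau, whose quadrature points are exactly $t+\tau /3$ and $t+\tau $, to the scalar test problem ${u}^{\prime}=-u$, $u(0)=1$, and confirms the global order $O({\tau}^{3})$. It is a sketch of the method class only, not of the matrix formulation used in the paper:

```python
import numpy as np

# Butcher tableau of the two-stage Radau IIA method (nodes 1/3 and 1)
A_rk = np.array([[5/12, -1/12],
                 [3/4,   1/4]])
b_rk = np.array([3/4, 1/4])

def radau_step(u, tau):
    # For f(u) = -u the stage equations are linear: (I + tau*A_rk) k = -u * (1,1)^T
    k = np.linalg.solve(np.eye(2) + tau * A_rk, -u * np.ones(2))
    return u + tau * (b_rk @ k)

errors = []
for tau in (0.1, 0.05):
    u = 1.0
    for _ in range(round(1.0 / tau)):
        u = radau_step(u, tau)
    errors.append(abs(u - np.exp(-1.0)))   # error at t = 1 against exp(-t)
print(errors[0] / errors[1])   # close to 8 = 2^3, i.e. third-order convergence
```

Halving τ reduces the error by roughly a factor $2^{3}=8$, as expected for a method of order $2m-1=3$.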
For higher order Radau quadrature methods, the corresponding matrix polynomial in ${M}^{-1}B$ is an m-th order polynomial. By the fundamental theorem of algebra, one can factor it into factors of at most second degree. The corresponding systems can be solved in sequential order. Alternatively, using a method referred to in Remark 3.1, the solution components can be computed concurrently.
Each second-order factor can be preconditioned by the method in Section 2. The ability to factorize ${Q}_{m}(tE)$ into second-order factors and solve the arising systems as two-by-two block matrix systems means that one only has to solve first-order systems. This is of importance if, for instance, M and A are large sparse band matrices, since then one avoids the increasing bandwidths in matrix products and can solve systems of linear combinations of M and A more efficiently than for higher order polynomial combinations. Furthermore, this enables one to keep matrices in element-by-element form (see, e.g. [9]), and it is in general not necessary to store the matrices M and A. The arising inner system can be solved by some inner iteration method.
where $\alpha >0$ is a parameter. As already shown in [5], the following holds for the above particular application.
If $0.144\le \frac{{\sigma}_{1}}{{\sigma}_{2}}\le 2.496$, then ${\delta}_{2}=1$ and ${\delta}_{1}\ge \sqrt{\frac{5}{8}}$.
□
We conclude that the condition number is very close to its ideal unit value 1, leading to very few iterations. For instance, at most 5 conjugate gradient iterations suffice for a relative accuracy of 10^{−6}.
Remark 3.1 High order implicit RungeKutta methods and their discretization error estimates can be derived using order tree methods as described in [1] and [10].
For an early presentation of implicit Runge-Kutta methods, see [2] and also [4], where the method was called a global integration method to indicate its capability, for large values of m, to use few, or even just one, time discretization steps. It was also shown that the coefficient matrix, formed by the quadrature coefficients, has a dominating lower triangular part, enabling the use of a matrix splitting and a Richardson iteration method. It can be of interest to point out that the Radau method for $m=2$ can be described in an alternative way, using Radau quadrature for the whole time step interval combined with a trapezoidal method for the shorter interval.
where ${\tilde{u}}_{1}$, ${\tilde{u}}_{1/3}$, ${\tilde{u}}_{0}$ denote the corresponding approximations of u at ${\tilde{t}}_{1}={t}_{k-1}+\tau $, ${\tilde{t}}_{1/3}={t}_{k-1}+\tau /3$ and ${t}_{k-1}$, respectively.
Remark 3.2 The arising system in a high order method involving $q\ge 2$ quadratic polynomial factors can be solved sequentially in the order they appear. Alternatively (see, e.g. [11], Exercise 2.31), one can use a method based on solving a matrix polynomial equation ${\mathcal{P}}_{2q}(A)\mathbf{x}=\mathbf{b}$ as $\mathbf{x}={\sum}_{k=1}^{q}\frac{1}{{\mathcal{P}}_{2q}^{\mathrm{\prime}}({r}_{k})}{\mathbf{x}}_{k}$, ${\mathbf{x}}_{k}={(A-{r}_{k}I)}^{-1}\mathbf{b}$, where ${\{{r}_{k}\}}_{k=1}^{2q}$ is the set of zeros of the polynomial and it is assumed that A has no eigenvalues in this set. (This holds in our applications.) Then, combining pairs of terms corresponding to complex conjugate roots ${r}_{k}$, quadratic polynomials arise for the computation of the corresponding solution components. It is seen that in this method, the solution components can be computed concurrently.
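The partial-fraction solve of Remark 3.2 can be sketched as follows, here summing over all 2q zeros before combining conjugate pairs. The quartic polynomial (two conjugate root pairs, $q=2$) and the small matrix A are hypothetical illustrations only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = np.diag(np.linspace(1.0, 2.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
I = np.eye(n)

roots = np.array([3 + 1j, 3 - 1j, 5 + 2j, 5 - 2j])   # away from the spectrum of A
coeffs = np.poly(roots)                              # monic P_4 from its zeros
dP = np.polyder(coeffs)                              # coefficients of P_4'

# Reference: assemble P_4(A) by Horner's scheme and solve directly
PA = np.zeros((n, n), dtype=complex)
for c in coeffs:
    PA = PA @ A + c * I
x_direct = np.linalg.solve(PA.real, b)

# Partial fractions: x = sum_k (1 / P'(r_k)) * (A - r_k I)^{-1} b.
# The first-order solves are mutually independent and can run concurrently.
x = np.zeros(n, dtype=complex)
for r in roots:
    x += np.linalg.solve(A - r * I, b) / np.polyval(dP, r)
print(np.allclose(x.real, x_direct))   # True: both approaches agree
```

In practice one would not form ${\mathcal{P}}_{2q}(A)$ at all; the point is that the shifted first-order (or, after pairing conjugates, second-order real) solves are embarrassingly parallel.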
where $\epsilon >0$ and $\epsilon \to 0$.
Hence, such a DAE can be considered as an infinitely stiff differential equation problem. For strongly or infinitely stiff problems, an order reduction phenomenon can occur. This follows since some high order error terms in the error expansion (cf. Section 5) are multiplied by (infinitely) large factors, leading to an order reduction for some methods. Heuristically, this can be understood to occur for the Gauss integration form of IRK but not for the stiffly stable variants, such as those based on Radau quadrature. For further discussion, see, e.g. [10, 13].
4 High order integration methods for Hamiltonian systems
where K is the kinetic energy and V the potential energy. Here, ${\mathbf{x}}_{i}=({x}_{i},{y}_{i},{z}_{i})$ denote the Cartesian coordinates of the i-th point mass ${m}_{i}$.
which equals the total energy of the system.
which, since $\frac{d}{dt}(\frac{\partial L}{\partial \dot{\mathbf{q}}})=\frac{\partial L}{\partial \mathbf{q}}$ implies $\dot{\mathbf{p}}=\frac{\partial L}{\partial \mathbf{q}}$, are hence equivalent to the Lagrange equations.
that is, the Hamiltonian function $H(\mathbf{p},\mathbf{q})$ is a first integral for the system (15).
The flow ${\phi}_{t}:U\to {\mathrm{\Re}}^{2n}$ of a Hamiltonian system is the mapping that describes the evolution of the solution by time, i.e. ${\phi}_{t}({\mathbf{p}}_{0},{\mathbf{q}}_{0})=(p(t,{\mathbf{p}}_{0},{\mathbf{q}}_{0}),q(t,{\mathbf{p}}_{0},{\mathbf{q}}_{0}))$, where $\mathbf{p}(t,{\mathbf{p}}_{0},{\mathbf{q}}_{0})$, $\mathbf{q}(t,{\mathbf{p}}_{0},{\mathbf{q}}_{0})$ is the solution of the system for the initial values $\mathbf{p}(0)={\mathbf{p}}_{0}$, $\mathbf{q}(0)={\mathbf{q}}_{0}$.
where C is a symmetric matrix. For the solution of the Hamiltonian system (15), we shall use an implicit RungeKutta method based on Gauss quadrature.
where ${c}_{i}={\sum}_{j=1}^{s}{a}_{ij}$; see e.g. [1, 4]. The familiar implicit midpoint rule is the special case where $s=1$. Here, ${c}_{1},\dots ,{c}_{s}$ are the zeros of the shifted Legendre polynomial $\frac{{d}^{s}}{d{x}^{s}}({x}^{s}{(1-x)}^{s})$. For a linear problem, this results in a system which can be solved by the quadratic polynomial decomposition and the preconditioned iterative solution method presented in Section 2.
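The Gauss nodes ${c}_{1},\dots ,{c}_{s}$ can be computed directly from this characterization; a small sketch for $s=2$, where the known nodes are $(3\mp \sqrt{3})/6$:

```python
import numpy as np

s = 2
# Coefficients of x^s (1-x)^s; for even s this equals x^s (x-1)^s exactly
p = np.poly([0.0] * s + [1.0] * s)
for _ in range(s):
    p = np.polyder(p)          # differentiate s times
c = np.sort(np.roots(p))       # the Gauss quadrature nodes on (0, 1)
print(c)                       # [(3 - sqrt(3))/6, (3 + sqrt(3))/6]
```

For general s (or large s) one would instead use an eigenvalue-based Gauss rule, but the derivative characterization above is exactly the one stated in the text.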
and ${u}_{1}=u({t}_{0}+\tau )$.
it follows that the energy quadrature forms ${y}_{i}^{T}{C}_{i}{y}_{i}$ are conserved.
This is an important property of Hamiltonian systems and is referred to as being symplectic. For further references on symplectic integrators, see [10].
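The conservation property can be checked numerically with the simplest Gauss method, the implicit midpoint rule ($s=1$), on the harmonic oscillator $H(p,q)=\frac{1}{2}({p}^{2}+{q}^{2})$, a standard test problem chosen here for illustration (it is not an example from the paper):

```python
import numpy as np

# Harmonic oscillator: y = (p, q), p' = -q, q' = p, i.e. y' = J y with J skew-symmetric
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def midpoint_step(y, tau):
    # Implicit midpoint for a linear system reduces to a 2x2 solve (Cayley transform)
    I2 = np.eye(2)
    return np.linalg.solve(I2 - 0.5 * tau * J, (I2 + 0.5 * tau * J) @ y)

y = np.array([0.0, 1.0])       # p0 = 0, q0 = 1, so H = 1/2
H0 = 0.5 * (y @ y)
tau = 0.1
for _ in range(1000):
    y = midpoint_step(y, tau)
print(abs(0.5 * (y @ y) - H0))   # energy drift stays at round-off level
```

For this linear system the midpoint map is an orthogonal (Cayley) transform, so the quadratic invariant $H$ is conserved exactly up to rounding, over arbitrarily many steps.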
5 Discretization error estimates
Hence, there is no decrease of errors occurring at earlier time steps. On the other hand, the strong monotonicity property for parabolic problems implies that errors at earlier time steps decrease exponentially as time evolves.
For a derivation of discretization errors for such parabolic type problems for a convex combination of the implicit Euler method and the midpoint method, referred to as the θmethod, the following holds (see [14]). Similar estimates can also be derived for the Radau quadrature method, see, e.g. [10].
The major result in [14] is the following.
where $\overline{t}=\theta t+(1-\theta )(t+\tau )$, $\overline{u}(t)=\theta u(t)+(1-\theta )u(t+\tau )$, $0\le \theta \le 1$.
where $\tilde{u}(\overline{t})$ takes values in a tube with radius $\parallel \overline{u}(t)u(\overline{t})\parallel $ about the solution $u(t)$.
 (i)
if F is strongly monotone and $\frac{1}{2}-O(\tau )\le \theta \le {\theta}_{0}$, then $\parallel e(t)\parallel \le {\varrho}_{0}^{-1}{C}^{\prime}{\tau}^{2}$, $t>0$;
 (ii)
if F is monotone (or conservative) and $\frac{1}{2}-O(\tau )\le \theta \le \frac{1}{2}$, then $\parallel e(t)\parallel \le t{C}^{\prime}{\tau}^{2}$, $t>0$.
Here, ${C}^{\prime}$ depends on $\parallel {u}_{t}^{(2)}\parallel $ and $\parallel {u}_{t}^{(3)}\parallel $, but is independent of the stiffness of the problem under the appropriate conditions stated above.
If the solution u is smooth, so that $\frac{\partial F}{\partial u}{u}_{t}^{(2)}$ also has only smooth components, then $\parallel \frac{\partial F}{\partial u}{u}_{t}^{(2)}\parallel $ may be much smaller than $\parallel \frac{\partial F}{\partial u}\parallel \parallel {u}_{t}^{(2)}\parallel $, showing that the stiffness, i.e. factors $\parallel \frac{\partial F}{\partial u}\parallel \gg 1$, does not enter the error estimate.
In many problems, we can expect that $\parallel \frac{\partial F}{\partial u}{u}_{t}^{(2)}\parallel $ is of the same order as $\parallel {u}_{t}^{(3)}\parallel $, i.e. the first and last forms in (21) have the same order. In particular, this holds for a linear problem ${u}_{t}+Au=0$, where ${u}_{t}^{(3)}=-{A}^{3}u=\frac{\partial F}{\partial u}{u}_{t}^{(2)}$.
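The orders stated in (i) and (ii) are easy to observe numerically. The sketch below applies the θ-method, in the convex-combination form given above, to the scalar test problem ${u}^{\prime}=-u$ and estimates the convergence order from two step sizes; the test problem is an illustration, not an example from the paper:

```python
import numpy as np

def theta_solve(theta, tau, T=1.0):
    # theta-method for u' + u = 0 with ubar = theta*u(t) + (1-theta)*u(t+tau):
    # (u1 - u0)/tau + (theta*u0 + (1-theta)*u1) = 0
    u = 1.0
    for _ in range(round(T / tau)):
        u = (1.0 - tau * theta) * u / (1.0 + tau * (1.0 - theta))
    return u

exact = np.exp(-1.0)
ratios = {}
for theta in (0.0, 0.5):       # 0.0 -> implicit Euler, 0.5 -> midpoint-type rule
    e1 = abs(theta_solve(theta, 0.1) - exact)
    e2 = abs(theta_solve(theta, 0.05) - exact)
    ratios[theta] = e1 / e2
print(ratios)   # ~2 for theta = 0 (first order), ~4 for theta = 1/2 (second order)
```

Only for θ near $\frac{1}{2}$ does the error halving ratio approach 4, consistent with the $O({\tau}^{2})$ estimates above.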
It is seen from (20) that for hyperbolic (conservative) problems, like the Hamiltonian problem in Section 4, the discretization error grows at least linearly with t, and likely faster if the solution is not sufficiently smooth. It may then be necessary to control the error by coupling the numerical time-integration method with an adaptive time step control. We present here such a method based on the use of backward integration at each time-step using the adjoint operator. The use of adjoint operators in error estimates goes back to the classical Aubin-Nitsche ${L}_{2}$-lifting method used in boundary value problems to derive discretization error estimates in the ${L}_{2}$ norm. It has also been used for error estimates in initial value problems; see e.g. [7].
Assume that the monotonicity assumption (20) holds. We first show a nonlinear (monotone) stability property, called B-stability, that holds for the numerical solution of implicit Runge-Kutta methods based on Gauss quadrature points. It goes back to a scientific note in [15]; see also [16].
Here, ${b}_{i}>0$ are the quadrature coefficients.
Since ${\mathrm{\Psi}}^{(2m)}(t)\ge 0$, this monotonicity property can be seen to hold also for the Radau quadrature method.
where $u(t)\in {\mathbb{R}}^{n}$ and $f(u(t))=Au(t)-\tilde{f}(t)$.
where $\tilde{u}(t)$ is a piecewise polynomial of degree m.
where $\tilde{\phi}$ is a polynomial of degree m.
i.e. the implicit Runge-Kutta method, based on Gaussian quadrature, applied to hyperbolic (conservative) problems has order 2m.
6 A numerical test example
Here, $\sigma (t)=1+\frac{2}{5}\sin (k\pi t)$, where $k=1,2,\dots $ , $k\le \frac{1}{\tau}$, is a parameter used to test the stability of the method with respect to oscillating coefficients. Here, τ is the time step to be used in the numerical solution of (26). Note that this function $\sigma (t)$ satisfies the conditions on the ratio $\frac{{\sigma}_{1}}{{\sigma}_{2}}$ from (9). We let $f(x,y)\equiv 2{e}^{\ell x}$.
Further, b is a vector satisfying $\mathrm{\nabla}\cdot \mathbf{b}\le 0$. We choose $\mathbf{b}=[\ell ,0]$, where ℓ is a parameter, possibly $\ell \gg 1$.
After a finite element or finite difference approximation, a system of the form (4) arises. For a finite difference approximation, $M=I$, the identity matrix. The Laplacian operator is approximated with a nine-point difference scheme. We use an upwind discretization of the convection term. In the outer corner points of the domain, we use the boundary conditions $-{u}_{x}+\ell u=0$ for $x=0$ and ${u}_{x}+\ell u=0$ for $x=1$.
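The structure of such a discretization can be sketched in a simplified 1D analogue: central differences for diffusion, first-order upwind for the convection term $\ell {u}_{x}$ with $\ell >0$, and homogeneous Dirichlet values eliminated at both ends. This is an illustration only; the paper's 2D nine-point scheme and Robin-type corner conditions are not reproduced:

```python
import numpy as np

def convection_diffusion_matrix(n, ell):
    # 1D operator -u_xx + ell*u_x on n interior points of (0, 1), upwind convection
    h = 1.0 / (n + 1)
    main = (2.0 / h**2 + ell / h) * np.ones(n)
    upper = (-1.0 / h**2) * np.ones(n - 1)
    lower = (-1.0 / h**2 - ell / h) * np.ones(n - 1)
    return np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

n, tau = 50, 0.01
A = convection_diffusion_matrix(n, 20.0)     # ell = 20, a convection-dominated case
f = 2.0 * np.ones(n)
u0 = np.zeros(n)
# With M = I, one implicit Euler step of u' + A u = f reads (I + tau*A) u1 = u0 + tau*f
u1 = np.linalg.solve(np.eye(n) + tau * A, u0 + tau * f)
print(u1.min() >= 0.0)   # upwinding keeps I + tau*A an M-matrix: no negative values
```

The upwind choice is what keeps the off-diagonal entries nonpositive for large ℓ, so the implicit steps remain well conditioned and monotone.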
The time discretization is given by the implicit Runge-Kutta method with the Radau quadrature for $m=2$; see Section 3. For comparison, we also consider $m=1$, i.e. the implicit Euler method, in some experiments. For solving the time-discretized problems, we use the GMRES method with the preconditioners from Section 2 and with the tolerance ${10}^{-10}$. Let us note that GMRES needs 5-6 iterations for this tolerance. The problem is implemented in Matlab.
6.1 Experiments with a known and smooth stationary solution
Table 1: The error estimates in dependence on ℓ and h
ℓ∖h  1/10  1/20  1/50  1/100  1/150 

1  1.2e−2  5.9e−3  2.3e−3  1.2e−3  7.7e−4 
20  6.1e−1  4.5e−1  2.5e−1  1.4e−1  9.4e−2 
Table 2: Values of time T in dependence on h and τ

Table 3: Time discretization error at time $T=1/8$ in dependence on h and τ
h∖i  1  2  3  4 

1/20  1.7e−1  1.6e−2  3.0e−4  9.5e−6 
1/50  1.8e−1  2.0e−2  5.6e−4  4.5e−6 
1/100  1.8e−1  2.1e−2  6.8e−4  3.0e−6 
Table 4: Time discretization error at time $T=1/8$ in dependence on ℓ and τ
ℓ∖i  1  2  3  4 

1  7.1e−2  3.9e−3  2.1e−4  2.8e−5 
20  1.8e−1  2.0e−2  5.6e−4  4.5e−6 
Table 5: Time discretization error at time $T=1/8$ in dependence on k and τ
k∖i  1  2  3  4 

0  1.1e−1  2.4e−2  4.3e−4  2.4e−5 
10  1.8e−1  2.0e−2  5.6e−4  4.5e−6 
The error estimates from Tables 3-5 indicate that the expected error estimate $O({\tau}^{3})$ holds.
Table 6: Time discretization error at time $T=1/8$ in dependence on k and τ for the implicit Euler method
k∖i  1  2  3  4 

0  7.8e−2  4.2e−2  1.7e−2  4.6e−3 
10  2.5e−2  2.5e−2  4.8e−2  2.2e−2 
The error estimates here are significantly influenced by the oscillation parameter k. For the larger value $k=10$, we do not observe convergence. In the case $k=0$, the convergence is first order, $O(\tau )$, that is, much slower than for the Runge-Kutta method with the two-point Radau quadrature.
6.2 Experiments with an unknown and less smooth stationary solution
Table 7: Values of stabilized time T in dependence on ℓ and τ
ℓ∖τ  1/5  1/10  1/20  1/40 

1  1.40  1.30  1.25  1.23 
20  1.60  0.80  0.45  0.20 
Table 8: Time discretization error at time $T=1/8$ in dependence on ℓ and τ
ℓ∖i  1  2  3  4 

1  7.4e−2  4.0e−2  2.0e−4  2.6e−5 
20  1.8e−1  1.9e−2  5.6e−4  4.5e−6 
7 Concluding remarks
There are several advantages in using high order time integration methods. Clearly, the major advantage is that the high order of the discretization error enables the use of larger, and hence fewer, time-steps to achieve a desired level of accuracy. Some of the methods, like Radau integration, are highly stable, i.e. they decrease unwanted solution components exponentially fast and do not suffer from the order reduction which is otherwise common for many other methods. The disadvantage of such high order methods is that one must solve a number of quadratic matrix polynomial equations. For this reason, much work has been devoted to the development of simpler methods, like diagonally implicit Runge-Kutta methods; see e.g. [10]. Such methods are, however, of lower order and may suffer from order reduction.
In the present paper, it has been shown that the arising quadratic matrix polynomial factors can be handled in parallel, and each of them can be solved efficiently with a preconditioning method, resulting in very few iterations. Each iteration involves just two real-valued first-order matrix factors, similar to what arises in diagonally implicit Runge-Kutta methods. An alternative, stabilized explicit Runge-Kutta methods, i.e. methods where the stability domain has been extended by use of certain forms of Chebyshev polynomials (see, e.g. [17]), can only be competitive for modestly stiff problems.
It has also been shown that the methods behave robustly with respect to oscillations in the coefficients in the differential operator. Hence, in practice, high order methods have a robust performance and do not suffer from any real disadvantage.
Declarations
Acknowledgements
This paper was funded by King Abdulaziz University, under grant No. (3531432/HiCi). The authors, therefore, acknowledge technical and financial support of KAU.
Authors’ Affiliations
References
 Butcher JC: Numerical Methods for Ordinary Differential Equations. 2nd edition. Wiley, Chichester; 2008.
 Butcher JC: Implicit Runge-Kutta processes. Math. Comput. 1964, 18: 50-64. 10.1090/S0025-5718-1964-0159424-9
 Axelsson O: A class of A-stable methods. BIT 1969, 9: 185-199. 10.1007/BF01946812
 Axelsson O: Global integration of differential equations through Lobatto quadrature. BIT 1964, 4: 69-86. 10.1007/BF01939850
 Axelsson O: On the efficiency of a class of A-stable methods. BIT 1974, 14: 279-287. 10.1007/BF01933227
 Axelsson O, Kucherov A: Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 2000, 7: 197-218. 10.1002/1099-1506(200005)7:4<197::AID-NLA194>3.0.CO;2-S
 Varga RS: Functional Analysis and Approximation Theory in Numerical Analysis. SIAM, Philadelphia; 1971.
 Gear CW: Numerical Initial Value Problems in Ordinary Differential Equations. Prentice Hall, New York; 1971.
 Fried I: Optimal gradient minimization scheme for finite element eigenproblems. J. Sound Vib. 1972, 20: 333-342. 10.1016/0022-460X(72)90614-1
 Hairer E, Wanner G: Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. 2nd edition. Springer, Berlin; 1996.
 Axelsson O: Iterative Solution Methods. Cambridge University Press, Cambridge; 1994.
 Hairer E, Lubich Ch, Roche M: The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods. Lecture Notes in Mathematics 1409. Springer, Berlin; 1989.
 Petzold LR: Order results for implicit Runge-Kutta methods applied to differential/algebraic systems. SIAM J. Numer. Anal. 1986, 23(4): 837-852. 10.1137/0723054
 Axelsson O: Error estimates over infinite intervals of some discretizations of evolution equations. BIT 1984, 24: 413-424. 10.1007/BF01934901
 Wanner G: A short proof on nonlinear A-stability. BIT 1976, 16: 226-227. 10.1007/BF01931374
 Frank R, Schneid J, Ueberhuber CW: The concept of B-convergence. SIAM J. Numer. Anal. 1981, 18: 753-780. 10.1137/0718051
 Hundsdorfer W, Verwer JG: Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations. Springer, Berlin; 2003.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.