We consider matrix splitting iteration methods for solving the following system of linear equations:
$$ Au=b,\quad A \in \mathbb{C}^{M\times M}, b\in \mathbb{C}^{M}, $$
(3.1)
where A is a complex symmetric, non-Hermitian positive definite matrix of the form \(A=D+\widetilde{T}\), in which \(D \in \mathbb{R}^{M\times M}\) is a real diagonal matrix with nonnegative diagonal elements and \(\widetilde{T}\in \mathbb{C}^{M\times M}\) is a complex symmetric Toeplitz and non-Hermitian positive definite matrix.
Splitting A with respect to its diagonal and Toeplitz parts D and \(\widetilde{T}\), with a medium shift, we have
$$\begin{aligned} A=M_{\omega }-N_{\omega } \end{aligned}$$
with
$$\begin{aligned} M_{\omega }=\omega I+\widetilde{T} \quad \mbox{and} \quad N_{\omega }=\omega I-D, \end{aligned}$$
(3.2)
where \(\omega =\frac{d_{\min }+d_{\max }}{2}\), with \(d_{\min }\) and \(d_{\max }\) the smallest and largest diagonal elements of D. Thus, the splitting iteration method with medium-splitting (MS) is stated precisely as follows.
Algorithm 3.1
(MS)
Given an initial guess \(u^{(0)}\), for \(k=0,1,\ldots \) , until \(\{u^{(k)}\}\) converges, compute
$$\begin{aligned} (\omega I+\widetilde{T})u^{(k+1)}=(\omega I-D)u^{(k)}+b. \end{aligned}$$
(3.3)
The iteration matrix is given by
$$\begin{aligned} G_{\omega }=(\omega I+\widetilde{T})^{-1}(\omega I-D). \end{aligned}$$
(3.4)
The following theorem establishes the convergence of Algorithm 3.1 under reasonable assumptions.
Theorem 3.1
Let \(A=D+\widetilde{T}=D+T+iI \in \mathbb{C}^{M\times M}\), where \(\widetilde{T}\in \mathbb{C}^{M\times M}\) is a symmetric and non-Hermitian positive definite Toeplitz matrix and \(D=\operatorname{diag}(d_{1},d_{2},\ldots ,d_{M})\in \mathbb{R}^{M\times M}\) is a diagonal matrix with nonnegative elements. Assume that \(d_{\min }\), \(d_{\max }\) and ω are defined by
$$ d_{\min }=\min_{1\leq p\leq M}\{d_{p}\},\qquad d_{\max }=\max_{1\leq p\leq M}\{d_{p}\} \quad\textit{and}\quad \omega =\frac{d_{\min }+d_{\max }}{2}. $$
Then the spectral radius \(\rho (G_{\omega })\) is bounded by
$$ \delta _{\omega }=\frac{\omega -d_{\min }}{\sqrt{(\omega +\lambda _{\min })^{2}+1}}=\frac{d_{\max }-d_{\min }}{2\sqrt{(\omega +\lambda _{\min })^{2}+1}}, $$
where \(\lambda _{\min }\) is the smallest eigenvalue of T. Moreover, it holds that \(\rho (G_{\omega })\leq \delta _{\omega } < 1\); that is, the MS method converges to the unique solution \(u_{*}\) of (3.1).
Proof
By the similarity invariance of the matrix spectrum and (3.4), and noting that \(\omega I+T+iI\) is a normal matrix (so its 2-norm is determined by its eigenvalues), we have
$$ \begin{aligned} \rho (G_{\omega }) &= \rho \bigl((\omega I+ \widetilde{T})^{-1}(\omega I-D) \bigr) \\ &= \rho \bigl((\omega I+T+iI)^{-1}(\omega I-D) \bigr) \\ &\leq \bigl\Vert (\omega I+T+iI)^{-1}(\omega I-D) \bigr\Vert _{2} \\ &\leq \bigl\Vert (\omega I+T+iI)^{-1} \bigr\Vert _{2} \Vert \omega I-D \Vert _{2} \\ & = \max_{\lambda _{i}\in \lambda (T)}\frac{1}{\sqrt{(\omega +\lambda _{i})^{2}+1}} \max_{d_{i}\in \Sigma (D)} \vert \omega -d_{i} \vert \\ & = \frac{\omega -d_{\min }}{\sqrt{(\omega +\lambda _{\min })^{2}+1}} \\ & = \frac{d_{\max }-d_{\min }}{2\sqrt{(\omega +\lambda _{\min })^{2}+1}}. \end{aligned} $$
The upper bound of \(\rho (G_{\omega })\) is obtained.
Note that \(\delta _{\omega }=\frac{\omega -d_{\min }}{\sqrt{(\omega +\lambda _{\min })^{2}+1}} < 1\) is equivalent to
$$\begin{aligned} &\omega ^{2}+d_{\min }^{2}-2\omega d_{\min }< \omega ^{2}+\lambda _{\min }^{2}+2\omega \lambda _{\min }+1 \\ &\quad \Longleftrightarrow \quad \lambda _{\min }^{2}-d_{\min }^{2}+2 \omega (\lambda _{\min }+d_{\min })+1 >0 \\ &\quad \Longleftrightarrow \quad (\lambda _{\min }+d_{\min }) (\lambda _{\min }-d_{\min }+2\omega )+1 >0 \\ &\quad \Longleftrightarrow \quad (\lambda _{\min }+d_{\min }) (\lambda _{\min }+d_{\max })+1 > 0. \end{aligned}$$
Since \(\lambda _{\min }\), \(d_{\min }\) and \(d_{\max }\) are all nonnegative, the last inequality holds, and hence \(\rho (G_{\omega })\leq \delta _{\omega }<1\). Therefore the MS method converges to the unique solution \(u_{*}\) of (3.1). □
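As a purely hypothetical illustration of the bound (the numbers below are ours, not from any experiment in this paper): if \(d_{\min }=0\) and \(d_{\max }=2\), then \(\omega =1\), and if additionally \(\lambda _{\min }=1\), the theorem gives
$$ \delta _{\omega }=\frac{2-0}{2\sqrt{(1+1)^{2}+1}}=\frac{1}{\sqrt{5}}\approx 0.447, $$
so the asymptotic error-reduction factor of the MS iteration per sweep is at most about 0.45.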
In Algorithm 3.1, the matrices \(\omega I+\widetilde{T}\) and \(\omega I-D\) are Toeplitz and diagonal, respectively. To reduce the computing time and improve the efficiency of the MS method, we can employ fast and superfast algorithms for solving the Toeplitz linear system at each iteration.
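To make the sweep (3.3) concrete, here is a minimal Python sketch of Algorithm 3.1, assuming \(\widetilde{T}\) is supplied by its first column t_col (it is symmetric, so the first row equals the first column) and D by its diagonal vector d; the function and variable names are illustrative, not the paper's. The Toeplitz system is solved with SciPy's Levinson-type solver, for which a fast or superfast solver could be substituted.

```python
# Minimal sketch of the MS iteration (Algorithm 3.1) -- illustrative only.
import numpy as np
from scipy.linalg import solve_toeplitz

def ms_iteration(t_col, d, b, tol=1e-8, max_iter=500):
    """Solve (D + T~)u = b, with T~ complex symmetric Toeplitz (first
    column t_col) and D = diag(d), d >= 0, by the medium splitting (3.2)."""
    d = np.asarray(d)
    b = np.asarray(b, dtype=complex)
    omega = 0.5 * (d.min() + d.max())      # the medium shift
    m_col = np.asarray(t_col, dtype=complex).copy()
    m_col[0] += omega                      # first column of M = omega*I + T~
    u = np.zeros_like(b)
    for k in range(max_iter):
        rhs = (omega - d) * u + b          # N_omega u^(k) + b (N is diagonal)
        # symmetric (not Hermitian) Toeplitz: pass the same column and row
        u_new = solve_toeplitz((m_col, m_col), rhs)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new, k + 1            # converged in k+1 sweeps
        u = u_new
    return u, max_iter
```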
The splitting iteration Algorithm 3.1 naturally induces the preconditioning matrix \(M_{\omega }^{-1}\) defined in (3.2) for the coefficient matrix \(A \in \mathbb{C}^{M\times M}\) of the linear system (3.1). It is known that a Toeplitz matrix T can be approximated well by a circulant matrix. To further reduce the computational cost and accelerate the convergence of the MS iteration method, we equivalently transform the symmetric linear system (3.1) by choosing a circulant matrix \(C\in \mathbb{C}^{M\times M}\), obtaining the preconditioned MS (PMS) iteration method. More concretely, the preconditioner is generated from Strang's circulant preconditioner: we take C to be Strang's circulant approximation, obtained by copying the central diagonals of T and bringing them around to complete the circulant structure. When M is odd,
$$ C=\begin{pmatrix} c_{0}^{(\alpha )} & c_{-1}^{(\alpha )} & \cdots & c_{\frac{M-1}{2}}^{(\alpha )} & c_{\frac{M-1}{2}}^{(\alpha )} & \cdots & c_{-1}^{(\alpha )} \\ c_{1}^{(\alpha )} & c_{0}^{(\alpha )} & \ddots & \ddots & c_{\frac{M-1}{2}}^{(\alpha )} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & c_{\frac{M-1}{2}}^{(\alpha )} \\ c_{\frac{M-1}{2}}^{(\alpha )} & \ddots & \ddots & \ddots & \ddots & \ddots & c_{\frac{M-1}{2}}^{(\alpha )} \\ c_{\frac{M-1}{2}}^{(\alpha )} & c_{\frac{M-1}{2}}^{(\alpha )} & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & c_{-1}^{(\alpha )} \\ c_{1}^{(\alpha )} & \cdots & c_{\frac{M-1}{2}}^{(\alpha )} & c_{\frac{M-1}{2}}^{(\alpha )} & \cdots & c_{1}^{(\alpha )} & c_{0}^{(\alpha )} \end{pmatrix}, $$
and when M is even, C is constructed analogously:
$$ C=\begin{pmatrix} c_{0}^{(\alpha )} & c_{-1}^{(\alpha )} & \cdots & c_{\frac{M}{2}}^{(\alpha )} & 0 & c_{\frac{M}{2}}^{(\alpha )} & \cdots & c_{-1}^{(\alpha )} \\ c_{1}^{(\alpha )} & c_{0}^{(\alpha )} & \ddots & \ddots & c_{\frac{M}{2}}^{(\alpha )} & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & c_{\frac{M}{2}}^{(\alpha )} \\ c_{\frac{M}{2}}^{(\alpha )} & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & c_{\frac{M}{2}}^{(\alpha )} \\ c_{\frac{M}{2}}^{(\alpha )} & 0 & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 & \ddots & c_{-1}^{(\alpha )} \\ c_{1}^{(\alpha )} & \cdots & c_{\frac{M}{2}}^{(\alpha )} & 0 & c_{\frac{M}{2}}^{(\alpha )} & \cdots & c_{1}^{(\alpha )} & c_{0}^{(\alpha )} \end{pmatrix}. $$
According to the properties of the coefficients \(c_{k}^{(\alpha )}\), \(k=0, 1, \ldots , M-1\), C is symmetric and strictly diagonally dominant, and hence symmetric positive definite.
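A small sketch of this construction under the same assumptions as above (T symmetric, given by its first column t_col holding the coefficients \(c_{k}^{(\alpha )}\)); the zero wrap-around entry for even M follows the display above, and the function name is ours, not the paper's.

```python
import numpy as np

def strang_first_column(t_col):
    """First column of Strang's circulant approximation C of a symmetric
    Toeplitz matrix with first column t_col: copy the central diagonals
    and wrap them around (c_{M-k} = t_k); for even M the middle entry
    stays zero, as in the display above."""
    M = len(t_col)
    c = np.zeros(M, dtype=complex)
    half = (M - 1) // 2
    c[:half + 1] = t_col[:half + 1]          # central diagonals t_0, ..., t_half
    c[M - half:] = t_col[1:half + 1][::-1]   # wrapped copies c_{M-k} = t_k
    return c
```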
Algorithm 3.2
(PMS)
Given an initial guess \(u^{(0)}\), for \(k=0,1,\ldots \) , until \(\{u^{(k)}\}\) converges, compute
$$\begin{aligned} (\omega I+C+iI)u^{(k+1)}=(\omega I-D+C-T)u^{(k)}+b, \end{aligned}$$
(3.5)
where \(\omega =\frac{d_{\min }+d_{\max }}{2}\) is the medium (midpoint) of \(d_{\min }\) and \(d_{\max }\), the minimum and maximum diagonal elements of D.
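A hedged Python sketch of the PMS sweep (3.5), reusing strang_first_column from the sketch above; it exploits the fact that a circulant matrix is diagonalized by the FFT, so each sweep costs \(O(M \log M)\). All names are illustrative.

```python
import numpy as np

def pms_iteration(t_col, d, b, tol=1e-8, max_iter=500):
    """One possible realization of the PMS iteration (3.5) for
    A = D + T + iI, with T real symmetric Toeplitz (first column t_col)."""
    d = np.asarray(d)
    b = np.asarray(b, dtype=complex)
    M = len(b)
    omega = 0.5 * (d.min() + d.max())
    c_col = strang_first_column(t_col)           # Strang circulant C
    m_col = c_col.copy()
    m_col[0] += omega + 1j                       # first column of omega*I + C + iI
    m_eig = np.fft.fft(m_col)                    # its eigenvalues via FFT

    def circ_mv(col, x):                         # circulant matrix-vector product
        return np.fft.ifft(np.fft.fft(col) * np.fft.fft(x))

    def toep_mv(col, x):                         # Toeplitz product via 2M-circulant
        ext = np.concatenate([col, [0.0], col[:0:-1]])
        return np.fft.ifft(np.fft.fft(ext) * np.fft.fft(x, 2 * M))[:M]

    t_c = np.asarray(t_col, dtype=complex)
    u = np.zeros(M, dtype=complex)
    for k in range(max_iter):
        # right-hand side (omega*I - D + C - T) u^(k) + b
        rhs = (omega - d) * u + circ_mv(c_col, u) - toep_mv(t_c, u) + b
        u_new = np.fft.ifft(np.fft.fft(rhs) / m_eig)   # circulant solve
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new, k + 1
        u = u_new
    return u, max_iter
```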
Remark 3.1
In Algorithm 3.2, the matrix \(\omega I+C+iI\) is circulant, so systems with this matrix can be solved efficiently via fast (FFT-based) and superfast algorithms. In fact, scheme (3.5) can be regarded as a standard stationary iteration method:
$$\begin{aligned} \widetilde{M}_{\omega }u^{(k+1)}= \widetilde{N}_{\omega }u^{(k)}+b, \end{aligned}$$
(3.6)
where \(\widetilde{M}_{\omega }=\omega I+C+iI\) and \(\widetilde{N}_{\omega }=\omega I-D+C-T\). Hence, \(\widetilde{M}_{\omega }^{-1}\) can be considered as a preconditioner for (3.1). Naturally, we have
$$\begin{aligned} \bigl\Vert \widetilde{M}_{\omega }^{-1}A \bigr\Vert _{2} &= \bigl\Vert (\omega I+C+iI)^{-1}A \bigr\Vert _{2} \\ &\leq \bigl\Vert (\omega I+C+iI)^{-1} \bigr\Vert _{2}\cdot \Vert A \Vert _{2} \\ &= \max_{\nu _{p}\in \lambda (C)}\frac{1}{\sqrt{(\omega +\nu _{p})^{2}+1}}\cdot \Vert A \Vert _{2} \\ &= \frac{1}{\sqrt{(\omega +\nu _{\min })^{2}+1}}\cdot \Vert A \Vert _{2} \\ &= \kappa \cdot \Vert A \Vert _{2}, \end{aligned}$$
where \(\kappa =\frac{1}{\sqrt{(\omega +\nu _{\min })^{2}+1}}<1\) and \(\nu _{\min }\) is the smallest eigenvalue of C. Therefore, by this rough estimate, the eigenvalue distribution of the matrix \(\widetilde{M}_{\omega }^{-1}A\) is tighter than that of A; this is verified by the numerical experiments in the next section.
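Since the eigenvalues of a circulant matrix are the FFT of its first column, κ can be evaluated in \(O(M\log M)\); a small illustrative sketch (function name ours):

```python
import numpy as np

def kappa_bound(c_col, omega):
    """Compute kappa = 1/sqrt((omega + nu_min)^2 + 1) from Remark 3.1,
    where nu_min is the smallest eigenvalue of the symmetric circulant C
    (its eigenvalues are the FFT of its first column, real up to rounding)."""
    nu = np.fft.fft(c_col).real
    return 1.0 / np.sqrt((omega + nu.min()) ** 2 + 1.0)
```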
Remark 3.2
Because the circulant preconditioner is an approximation of the Toeplitz matrix, the proof of the convergence of the PMS iteration method is similar to that of the MS iteration method; we therefore omit it here.