Open Access

Asymptotic behavior and a posteriori error estimates for the generalized overlapping domain decomposition method for parabolic equation

Boundary Value Problems 2015, 2015:124

https://doi.org/10.1186/s13661-015-0398-1

Received: 21 February 2015

Accepted: 22 July 2015

Published: 31 July 2015

Abstract

In this paper, a posteriori error estimates for the generalized overlapping domain decomposition method with Dirichlet boundary conditions on the interfaces, for parabolic variational equation with second order boundary value problems, are derived using the semi-implicit-time scheme combined with a finite element spatial approximation. Furthermore a result of asymptotic behavior in uniform norm is given using Benssoussan-Lions’ algorithm.

Keywords

a posteriori error estimates; GODDM; Robin condition; parabolic equation

1 Introduction

The Schwarz alternating method, invented by Hermann Amandus Schwarz in 1890, can be used to solve stationary or evolutionary boundary value problems on domains which consist of two or more overlapping subdomains (see [1–5]). The solution of such problems is approximated by an infinite sequence of functions obtained by solving a sequence of stationary or evolutionary boundary value problems in each of the subdomains. An extensive analysis of the Schwarz alternating method for nonlinear elliptic boundary value problems can be found in [6–9] and the references therein. The effectiveness of Schwarz methods for these problems, especially those in fluid mechanics, has also been demonstrated in many papers; see the proceedings of the annual domain decomposition conferences beginning with [7]. Moreover, a priori error estimates for stationary problems are given in several papers; see for instance [2], in which a variational formulation of the classical Schwarz method is derived. In [9] geometry-related convergence results are obtained. In [6] an accelerated version of the GODDM is treated, and in [7] convergence for simple rectangular or circular geometries is studied; however, no criterion is given there for stopping the iterative process. All these results can also be found in the recent books on domain decomposition methods [10, 11]. Recently, in [12, 13], an improved version of the Schwarz method for highly heterogeneous media has been presented; it uses new optimized interface conditions specially designed to take the heterogeneity between the subdomains into account on the interfaces. A recent overview of the state of the art on domain decomposition methods can be found in two special issues of the journal Computer Methods in Applied Mechanics and Engineering [3, 14, 15].

In general, a priori estimates for stationary problems are not suitable for assessing the quality of the approximate solution on subdomains, since they depend mainly on the exact solution itself, which is unknown. The alternative approach is to use the approximate solution itself to obtain such an estimate. This approach, known as a posteriori estimation, became very popular in the 1990s with finite element methods; see the monographs [16, 17] and the references therein. In [5] an a posteriori estimate for a nonoverlapping domain decomposition algorithm is given, and an a posteriori error analysis for the elliptic case is also used in [18] to determine an optimal value of the penalty parameter for penalty domain decomposition methods and to construct fast solvers.

Quite a few works on maximum-norm error analysis of overlapping nonmatching grid methods for elliptic problems are known in the literature (cf., e.g., [11–14]). To prove the main result of this paper, we proceed as in [12]. More precisely, we develop an approach which combines a geometrical convergence result due to [4, 8, 19] with a lemma estimating the error in the maximum norm between the continuous and discrete Schwarz iterates. The optimal convergence order is then derived using standard finite elements and an \(L^{\infty}\)-error estimate for linear elliptic equations [2].

In recent research, the authors of [20] proved an error analysis in the maximum norm for a class of nonlinear elliptic problems in the context of overlapping nonmatching grids, establishing the optimal \(L^{\infty}\)-error estimate between the discrete Schwarz sequence and the exact solution of the PDE. In [21] the authors derived a posteriori error estimates for the generalized overlapping domain decomposition method (GODDM) with Robin boundary conditions on the interfaces for second order boundary value problems; they showed that the error estimate in the continuous case depends on the differences of the traces of the subdomain solutions on the interfaces after a discretization of the domain by the finite element method. They also used the techniques of residual a posteriori error analysis to obtain an a posteriori error estimate for the discrete solutions on the subdomains.

In this work we apply the derived a posteriori error estimates for the generalized overlapping domain decomposition method (GODDM) to the following evolutionary equation: find \(u\in L^{2} ( 0,T;H_{0}^{1} ( \Omega ) ) \cap C^{2} ( 0,T;H^{-1} ( \Omega ) ) \), the solution of
$$ \left \{ \textstyle\begin{array}{l} \frac{\partial u}{\partial t}-\Delta u+\alpha u=f\quad \text{in }\Sigma, \\ u=0\quad \text{in }\Gamma\times [ 0,T ] , \\ u(\cdot,0)=u_{0}\quad \text{in }\Omega, \end{array}\displaystyle \right . $$
(1.1)
where Σ is a set in \(\mathbb{R}^{2}\times\mathbb{R}\) defined as \(\Sigma=\Omega\times [ 0,T ] \) with \(T<+\infty\), where Ω is a smooth bounded domain of \(\mathbb{R}^{2}\) with boundary Γ.
The function \(\alpha\in L^{\infty} ( \Omega ) \) is assumed to be bounded below by a positive constant; it verifies
$$ \alpha\geq\beta, \qquad \beta>0. $$
(1.2)
f is a regular function that satisfies
$$ f\in L^{2} \bigl( 0,T,L^{2} ( \Omega ) \bigr) \cap C^{1} \bigl( 0,T,H^{-1} ( \Omega ) \bigr) . $$

The symbol \(( \cdot,\cdot ) _{\Omega} \) stands for the inner product in \(L^{2} ( \Omega ) \).

The outline of the paper is as follows: In Section 2 we introduce some necessary notation and give a variational formulation of our model. In Section 3 an a posteriori error estimate is proposed for the convergence of the discretized solution, using the semi-implicit time scheme combined with a finite element method on the subdomains. In Section 4 we associate with the introduced discrete problem a fixed point mapping, which we use to prove the existence of a unique discrete solution. Then in Section 5 an \(H_{0}^{1} ( \Omega ) \)-asymptotic behavior estimate for each subdomain is derived.

2 The continuous problem

Using the Green formula, problem (1.1) can be transformed into the following continuous parabolic variational equation: find \(u\in L^{2} ( 0,T;H_{0}^{1} ( \Omega ) ) \cap C^{2} ( 0,T;H^{-1} ( \Omega ) ) \) such that
$$ \left \{ \textstyle\begin{array}{l} ( u_{t},v ) _{\Omega} +a(u,v)= ( f,v ) _{\Omega } , \quad v\in H_{0}^{1} ( \Omega ), \\ u(\cdot,0)=u_{0},\end{array}\displaystyle \right . $$
(2.1)
where
$$ a(u,v)=\int_{\Omega}\nabla u\cdot\nabla v \,dx+\int _{\Omega}\alpha uv \,dx . $$

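For implementation purposes the bilinear form \(a(\cdot,\cdot)\) can be evaluated discretely. The following sketch (not part of the paper's analysis) approximates \(a(u,v)\) for piecewise linear functions on a uniform one-dimensional grid; the function name `bilinear_a`, the grid, and the crude nodal quadrature for the mass term are illustrative assumptions.

```python
import numpy as np

def bilinear_a(u, v, alpha, h):
    """Approximate a(u, v) = int u'v' dx + int alpha*u*v dx from nodal
    values of piecewise linear functions on a uniform grid of spacing h
    (the gradient term is exact for P1; the mass term uses a crude
    nodal quadrature)."""
    du, dv = np.diff(u) / h, np.diff(v) / h
    grad_term = h * np.sum(du * dv)          # int u' v' dx
    mass_term = h * np.sum(alpha * u * v)    # int alpha u v dx (sketch)
    return grad_term + mass_term

# example: u = v = x on [0, 1] with alpha = 0 gives int (u')^2 dx = 1
x = np.linspace(0.0, 1.0, 101)
value = bilinear_a(x, x, 0.0, 0.01)
```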

2.1 The semi-discrete parabolic variational equation

We discretize problem (2.1) with respect to time by using the semi-implicit scheme. Therefore, we seek a sequence of elements \(u^{k}\in H_{0}^{1} ( \Omega ) \) which approximates \(u ( t_{k} ) \), \(t_{k}=k\Delta t\), with initial data \(u^{0}=u_{0}\).

Thus, we have, for \(k=1,\ldots,n\),
$$ \left \{ \textstyle\begin{array}{l} ( \frac{u^{k}-u^{k-1}}{\Delta t},v ) +a ( u^{k},v ) = ( f ^{k},v ) _{\Omega},\quad \forall v\in H_{0}^{1} ( \Omega ) , \\ u^{0} ( x ) =u_{0}\quad \text{in }\Omega, \qquad u=0\quad \text{on }\partial \Omega, \end{array}\displaystyle \right . $$
(2.2)
which implies
$$ \left \{ \textstyle\begin{array}{l} ( \frac{u^{k}}{\Delta t},v ) +a ( u^{k},v ) = ( f^{k}+\frac{u^{k-1}}{\Delta t},v ), \\ u^{0} ( x ) =u_{0}\quad \text{in }\Omega,\qquad u=0\quad \text{on }\partial \Omega.\end{array}\displaystyle \right . $$
(2.3)
Then the problem (2.3) can be reformulated into the following coercive system of elliptic variational equations:
$$ \left \{ \textstyle\begin{array}{l} b ( u^{k},v ) = ( f^{k}+\lambda u^{k-1},v ) = ( F ( u^{k-1} ) ,v ) , \\ u^{0} ( x ) =u_{0}\quad \text{in }\Omega,\qquad u=0\quad \text{on }\partial \Omega,\end{array}\displaystyle \right . $$
(2.4)
such that
$$ \left \{ \textstyle\begin{array}{l} b ( u^{k},v ) =\lambda ( u^{k},v ) +a ( u^{k},v ) ,\quad u^{k}\in H_{0}^{1} ( \Omega ) , \\ \lambda=\frac{1}{\Delta t}=\frac{n}{T},\qquad \Delta t=\frac{T}{n},\quad k=1,\ldots,n.\end{array}\displaystyle \right . $$
(2.5)
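To make the time-marching concrete, here is a small sketch (not from the paper) of one step of the semi-implicit scheme (2.3) in one space dimension, with a finite-difference Laplacian and homogeneous Dirichlet conditions; the grid size, time step, coefficient `alpha`, initial data, and source are all placeholder choices.

```python
import numpy as np

def semi_implicit_step(u_prev, f_k, alpha, dt, h):
    """One step of scheme (2.3): solve
    (1/dt) u^k - Lap_h u^k + alpha u^k = f^k + (1/dt) u^{k-1}
    on the interior nodes of a 1D grid with zero Dirichlet boundary values."""
    n = len(u_prev)
    lam = 1.0 / dt                              # lambda = 1/dt as in (2.5)
    main = (lam + alpha + 2.0 / h**2) * np.ones(n)
    off = (-1.0 / h**2) * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = f_k + lam * u_prev
    return np.linalg.solve(A, rhs)

# march n steps from placeholder initial data u_0
N, T, n_steps = 50, 1.0, 10
h, dt = 1.0 / (N + 1), T / n_steps
x = np.linspace(h, 1.0 - h, N)
u = np.sin(np.pi * x)            # placeholder u_0
f = np.ones(N)                   # placeholder source f^k (constant in time)
for _ in range(n_steps):
    u = semi_implicit_step(u, f, alpha=0.5, dt=dt, h=h)
```

Since the matrix is an M-matrix, positivity of the data is preserved by each implicit step.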

2.2 The space-continuous generalized overlapping domain decomposition

Let Ω be a bounded domain in \(\mathbb{R}^{2}\) with a piecewise \(C^{1,1}\) boundary ∂Ω.

We split the domain Ω into two overlapping subdomains \(\Omega_{1}\) and \(\Omega_{2}\) such that
$$ \Omega_{1}\cap\Omega_{2}=\Omega_{12},\qquad \partial\Omega _{i}\cap\Omega_{j}=\Gamma_{i},\quad i\neq j\text{ and }i,j=1,2. $$
We need the spaces
$$ V_{i}= \bigl\{ v\in H^{1}(\Omega _{i}):v\vert _{\partial\Omega_{i}\cap\partial\Omega}=0 \bigr\} $$
and
$$ W_{i}=H_{00}^{\frac{1}{2}}( \Gamma_{i})= \{ v\vert_{\Gamma_{i}} : v\in V_{i} \text{ and }v=0\text{ on }\partial\Omega_{i}\backslash\Gamma_{i} \}, $$
(2.6)
which is a subspace of
$$ H^{\frac{1}{2}}(\Gamma_{i})= \bigl\{ \psi\in L^{2}( \Gamma_{i}):\psi=\varphi \vert _{\Gamma_{i}} \text{ for some }\varphi\in V_{i}, i=1,2 \bigr\} , $$
equipped with the norm
$$ \Vert \varphi \Vert _{W_{i}}=\inf_{v\in V_{i},v=\varphi \text{ on }\Gamma_{i}} \Vert v\Vert _{1,\Omega}. $$
(2.7)
We now define the Schwarz sequences associated with (2.4): for \(m=0,1,2,\ldots\) , let \(u_{1}^{k,m+1}\in V_{1}\) be the solution of
$$ \left \{ \textstyle\begin{array}{l} b ( u_{1}^{k,m+1},v ) = ( F ( u_{1}^{k-1,m+1} ) ,v ) _{\Omega_{1}}, \\ u_{1}^{k,m+1}=0\quad \text{on }\partial\Omega_{1}\cap\partial\Omega =\partial\Omega_{1}-\Gamma_{1}, \\ \frac{\partial u_{1}^{k,m+1}}{\partial\eta_{1}}+\alpha _{1}u_{1}^{k,m+1}=\frac{\partial u_{2}^{k,m}}{\partial\eta_{1}}+\alpha_{1}u_{2}^{k,m}\quad \text{on }\Gamma_{1}, \end{array}\displaystyle \right . $$
(2.8)
and \(u_{2}^{k,m+1}\in V_{2}\) the solution of
$$ \left \{ \textstyle\begin{array}{l} b ( u_{2}^{k,m+1},v ) = ( F ( u_{2}^{k-1,m+1} ) ,v ) _{\Omega_{2}}, \quad m=0,1,2,\ldots, \\ u_{2}^{k,m+1}=0\quad \text{on }\partial\Omega_{2}\cap\partial\Omega =\partial\Omega_{2}-\Gamma_{2}, \\ \frac{\partial u_{2}^{k,m+1}}{\partial\eta_{2}}+\alpha _{2}u_{2}^{k,m+1}=\frac{\partial u_{1}^{k,m}}{\partial\eta _{2}}+\alpha _{2}u_{1}^{k,m}\quad \text{on }\Gamma_{2}, \end{array}\displaystyle \right . $$
(2.9)
where \(\eta_{i}\) is the exterior normal to \(\Omega_{i}\) and \(\alpha_{i}\) is a real parameter, \(i=1,2\).

Theorem 1

(cf. [2])

The sequences \(( u_{1}^{k,m+1} ) \), \(( u_{2}^{k,m+1} ) \), \(m\geq0\), produced by the Schwarz alternating method converge geometrically to the solution u of problem (2.1). More precisely, there exist \(k_{1}, k_{2}\in ( 0,1 ) \), which depend only on \(( \Omega_{1},\gamma_{2} ) \) and \(( \Omega_{2},\gamma_{1} ) \), respectively, such that for all \(m\geq0\),
$$ \sup_{\bar{\Omega}_{i}}\bigl\vert u-u_{i}^{k,m+1} \bigr\vert \leq k_{1}^{m}k_{2}^{m}\sup _{\gamma_{1}}\bigl\vert u^{\infty }-u^{0}\bigr\vert , $$
(2.10)
where \(u^{\infty}\) is the asymptotic continuous solution and \(\gamma _{i}=\partial\Omega_{i}\cap\Omega_{j}\), \(i\neq j\), \(i,j=1,2\).

Proof

The Schwarz alternating method converges geometrically to the solution u of the elliptic problem, as proved in [4, 5]; this result was then updated and adapted to a new parabolic bilinear form in [2]. The theorem remains true for the problem introduced in this paper, because problem (2.1) can be reformulated as the system of elliptic variational equations (2.4). □

Also, in [21] the authors proved an error estimate for elliptic variational inequalities in the uniform norm, using a standard nonmatching grid discretization; they obtained the following estimate:
$$ \bigl\Vert u-u_{i}^{m+1}\bigr\Vert _{L^{\infty} ( \bar{\Omega}_{i} ) }\leq Ch^{2}\vert \log h\vert , $$
(2.11)
where C is a constant independent of both h and n.

Remark 1

For the particular equation introduced in this paper, it is noted that (2.11) remains true in the \(H_{0}^{1} ( \Omega ) \)-norm; the proof is very similar to that in [2].

In the next section, our main interest is to obtain the a posteriori error estimate needed to stop the iterative process as soon as the required global precision is reached. Namely, applying the Green formula to the Laplacian in (1.1), with the new boundary conditions of the generalized Schwarz alternating method defined in (2.8), we get
$$\begin{aligned} \bigl( -\Delta u_{1}^{k,m+1},v_{1} \bigr) _{\Omega_{1}} =& \bigl( \nabla u_{1}^{k,m+1},\nabla v_{1} \bigr) _{\Omega_{1}}- \biggl( \frac{\partial u_{1}^{k,m+1}}{\partial\eta_{1}},v_{1} \biggr) _{\partial\Omega _{1}-\Gamma_{1}}- \biggl( \frac{\partial u_{1}^{k,m+1}}{\partial\eta _{1}},v_{1} \biggr)_{\Gamma_{1}} \\ =& \bigl( \nabla u_{1}^{k,m+1},\nabla v_{1} \bigr) _{\Omega_{1}}- \biggl( \frac{\partial u_{1}^{k,m+1}}{\partial\eta_{1}},v_{1} \biggr) _{\Gamma _{1}} \\ =& \bigl( \nabla u_{1}^{k,m+1},\nabla v_{1} \bigr) _{\Omega_{1}}- \biggl( \frac{\partial u_{2}^{k,m}}{\partial\eta_{1}}+\alpha _{1}u_{2}^{k,m}- \alpha_{1}u_{1}^{k,m+1},v_{1} \biggr) _{\Gamma_{1}} \\ =& \bigl( \nabla u_{1}^{k,m+1},\nabla v_{1} \bigr) _{\Omega_{1}}+ \bigl( \alpha_{1}u_{1}^{k,m+1},v_{1} \bigr) _{\Gamma_{1}}- \biggl( \frac {\partial u_{2}^{k,m}}{\partial\eta_{1}}+\alpha_{1}u_{2}^{k,m},v_{1} \biggr) _{\Gamma_{1}}, \end{aligned}$$
thus problem (2.8) is equivalent to finding \(u_{1}^{k,m+1}\in V_{1}\) such that
$$\begin{aligned}& b\bigl(u_{1}^{k,m+1},v_{1}\bigr)+ \bigl( \alpha_{1}u_{1}^{k,m+1},v_{1} \bigr) _{\Gamma_{1}} \\& \quad = \bigl( F\bigl(u^{k-1}\bigr),v_{1} \bigr) _{\Omega_{1}}+ \biggl( \frac{\partial u_{2}^{k,m}}{\partial\eta_{1}}+\alpha _{1}u_{2}^{k,m},v_{1} \biggr) _{\Gamma_{1}},\quad \forall v_{1}\in V_{1}, \end{aligned}$$
(2.12)
and, for (2.9), to finding \(u_{2}^{k,m+1}\in V_{2}\) such that
$$\begin{aligned}& b\bigl(u_{2}^{k,m+1},v_{2}\bigr)+ \bigl( \alpha_{2}u_{2}^{k,m+1},v_{2} \bigr) _{\Gamma _{2}} \\& \quad = \bigl( F\bigl(u^{k-1}\bigr),v_{2} \bigr) _{\Omega_{2}}+ \biggl( \frac{\partial u_{1}^{k,m}}{\partial\eta_{2}}+\alpha _{2}u_{1}^{k,m},v_{2} \biggr) _{\Gamma_{2}},\quad \forall v_{2}\in V_{2}. \end{aligned}$$
(2.13)

3 A posteriori error estimate in the continuous case

Since it is numerically easier to compare the subdomain solutions on the interfaces \(\Gamma_{1}\) and \(\Gamma_{2}\) than on the overlap \(\Omega_{12}\), we introduce two auxiliary problems defined on nonoverlapping subdomains of Ω. This allows us to obtain the a posteriori error estimate by following the steps of Otto and Lube [5]. We obtain these auxiliary problems by coupling each of the problems (2.8) and (2.9) with another problem in a nonoverlapping way over Ω. The auxiliary problems are needed only for the analysis, not for the computation.

To define these auxiliary problems we need to split the domain Ω into two sets of disjoint subdomains: \(( \Omega_{1},\Omega _{3} ) \) and \(( \Omega_{2},\Omega_{4} ) \) such that
$$ \Omega=\Omega_{1}\cup\Omega_{3},\quad \text{with } \Omega_{1}\cap\Omega _{3}=\varnothing,\qquad \Omega= \Omega_{2}\cup\Omega_{4},\quad \text{with } \Omega_{2}\cap\Omega_{4}=\varnothing. $$
Let \((u_{1}^{k,m},u_{2}^{k,m})\) be the solution of problems (2.12) and (2.13); we define the couple \((u_{1}^{k,m},u_{3}^{k,m})\) over \(( \Omega _{1},\Omega_{3} ) \) to be the solution of the following nonoverlapping problems:
$$ \left \{ \textstyle\begin{array}{l} \frac{u_{1}^{k,m+1}-u_{1}^{k-1,m+1}}{\Delta t}-\Delta u_{1}^{k,m+1}=f^{k}\quad \text{in }\Omega_{1},k=1,\ldots,n, \\ u_{1}^{k,m+1}=0\quad \text{on }\partial\Omega_{1}\cap\partial\Omega, \\ \frac{\partial u_{1}^{k,m+1}}{\partial\eta_{1}}+\alpha _{1}u_{1}^{k,m+1}=\frac{\partial u_{2}^{k,m}}{\partial\eta_{1}}+\alpha_{1}u_{2}^{k,m}\quad \text{on }\Gamma_{1} \end{array}\displaystyle \right . $$
(3.1)
and
$$ \left \{ \textstyle\begin{array}{l} \frac{u_{3}^{k,m+1}-u_{3}^{k-1,m+1}}{\Delta t}-\Delta u_{3}^{k,m+1}=f ^{k} \quad \text{in }\Omega_{3}, \\ u_{3}^{k,m+1}=0\quad \text{on }\partial\Omega_{3}\cap\partial\Omega, \\ \frac{\partial u_{3}^{k,m+1}}{\partial\eta_{3}}+\alpha _{3}u_{3}^{k,m+1}=\frac{\partial u_{1}^{k,m}}{\partial\eta _{3}}+\alpha _{3}u_{1}^{k,m}\quad \text{on }\Gamma_{1}.\end{array}\displaystyle \right . $$
(3.2)

One can take \(\epsilon_{1}^{k,m}=u_{2}^{k,m}-u_{3}^{k,m}\) on \(\Gamma_{1}\), the difference on the interface between the overlapping iterate \(u_{2}^{k,m}\) of (2.9) and the nonoverlapping iterate \(u_{3}^{k,m}\) of (3.2). Both the overlapping and the nonoverlapping problems converge (see [5]); that is, \(u_{2}^{k,m}\) tends to \(u_{2}\) and \(u_{3}^{k,m}\) tends to \(u_{3}\), so \(\epsilon_{1}^{k,m}\) tends to zero as m tends to infinity.

By putting
$$ \begin{aligned} &\Lambda_{3}^{k,m}= \frac{\partial u_{2}^{k,m}}{\partial\eta _{1}}+\alpha _{1}u_{2}^{k,m}=\frac{\partial u_{3}^{k,m}}{\partial\eta _{1}}+\alpha _{1}u_{3}^{k,m}+\frac{\partial\epsilon_{1}^{k,m}}{\partial\eta_{1}} + \alpha_{1}\epsilon_{1}^{k,m}, \\ &\Lambda_{1}^{k,m}=\frac{\partial u_{1}^{k,m}}{\partial\eta _{3}}+\alpha _{3}u_{1}^{k,m}, \end{aligned} $$
(3.3)
and using the Green formula, (3.1) and (3.2) can be reformulated to the following system of elliptic variational equations:
$$\begin{aligned}& b_{1}\bigl(u_{1}^{k,m+1},v_{1}\bigr)+ \bigl( \alpha_{1}u_{1}^{k,m+1},v_{1} \bigr) _{\Gamma_{1}} \\& \quad = \bigl( F\bigl(u^{k-1}\bigr),v_{1} \bigr) _{\Omega_{1}}+ \bigl( \Lambda_{3}^{k,m},v_{1} \bigr) _{\Gamma_{1}},\quad \forall v_{1}\in V_{1}, \end{aligned}$$
(3.4)
$$\begin{aligned}& b_{3}\bigl(u_{3}^{k,m+1},v_{3}\bigr)+ \bigl( \alpha_{3}u_{3}^{k,m+1},v_{3} \bigr) _{\Gamma_{1}} \\& \quad = \bigl( F\bigl(u^{k-1}\bigr),v_{3} \bigr) _{\Omega_{3}}+ \bigl( \Lambda_{1}^{k,m},v_{3} \bigr) _{\Gamma_{1}},\quad \forall v_{3}\in V_{3}. \end{aligned}$$
(3.5)
On the other hand by taking
$$ \theta_{1}^{k,m}=\frac{\partial\epsilon_{1}^{k,m}}{\partial\eta _{1}}+ \alpha_{1}\epsilon_{1}^{k,m}, $$
(3.6)
we get
$$\begin{aligned} \Lambda_{3}^{k,m} =&\frac{\partial u_{3}^{k,m}}{\partial\eta_{1}}+ \alpha_{1}u_{3}^{k,m}+\frac{\partial (u_{2}^{k,m}-u_{3}^{k,m})}{\partial \eta_{1}}+ \alpha_{1}\bigl(u_{2}^{k,m}-u_{3}^{k,m} \bigr) \\ =&\frac{\partial u_{3}^{k,m}}{\partial\eta_{1}}+\alpha _{1}u_{3}^{k,m}+\frac{\partial\epsilon_{1}^{k,m}}{\partial\eta_{1}}+\alpha _{1}\epsilon _{1}^{k,m} \\ =&\frac{\partial u_{3}^{k,m}}{\partial\eta_{1}}+\alpha _{1}u_{3}^{k,m}+ \theta_{1}^{k,m}. \end{aligned}$$
(3.7)
Using (3.6) we have
$$\begin{aligned} \Lambda_{3}^{k,m+1} =&\frac{\partial u_{3}^{k,m+1}}{\partial\eta_{1}}+ \alpha_{1}u_{3}^{k,m+1}+\theta_{1}^{k,m+1} \\ =&-\frac{\partial u_{3}^{k,m+1}}{\partial\eta_{3}}+\alpha _{1}u_{3}^{k,m+1}+ \theta_{1}^{k,m+1} \\ =&\alpha_{3}u_{3}^{k,m+1}-\frac{\partial u_{1}^{k,m}}{\partial\eta _{3}}- \alpha_{3}u_{1}^{k,m}+\alpha_{1}u_{3}^{k,m+1}+ \theta_{1}^{k,m+1} \\ =&(\alpha_{1}+\alpha_{3})u_{3}^{k,m+1}- \Lambda_{1}^{k,m}+\theta _{1}^{k,m+1} \end{aligned}$$
(3.8)
and similarly, using (3.3) and the boundary condition in (3.1), we have
$$\begin{aligned} \Lambda_{1}^{k,m+1} =&-\frac{\partial u_{1}^{k,m+1}}{\partial\eta_{1}} + \alpha_{3}u_{1}^{k,m+1} \\ =&\alpha_{1}u_{1}^{k,m+1}-\frac{\partial u_{2}^{k,m}}{\partial\eta _{1}}- \alpha_{1}u_{2}^{k,m}+\alpha_{3}u_{1}^{k,m+1}+ \theta _{3}^{k,m+1} \\ =&(\alpha_{1}+\alpha_{3})u_{1}^{k,m+1}- \Lambda_{3}^{k,m}+\theta _{3}^{k,m+1}. \end{aligned}$$
(3.9)

From this result we can write the following algorithm, which is equivalent to the auxiliary nonoverlapping problems (3.4), (3.5). We need this algorithm and two lemmas to obtain an a posteriori error estimate for this problem.

3.1 Algorithm

The sequences \((u_{1}^{k,m},u_{3}^{k,m})_{m\in\mathbb{N}}\), solutions of (3.4), (3.5), satisfy the following domain decomposition algorithm:
Step 1:: 

\(k=0\).

Step 2:: 

Let \(\Lambda_{i}^{k,0}\in W_{1}^{\ast}\) be an initial value, \(i=1,3\) (\(W_{1}^{\ast}\) is the dual of \(W_{1}\)).

Step 3:: 
Given \(\Lambda_{j}^{k,m}\in W^{\ast}\), solve for \(i,j=1,3\), \(i\neq j\): find \(u_{i}^{k,m+1}\in V_{i}\), the solution of
$$\begin{aligned}& b_{i}\bigl(u_{i}^{k,m+1},v_{i}\bigr)+ \bigl( \alpha_{i}u_{i}^{k,m+1},v_{i} \bigr) _{\Gamma_{i}} \\& \quad = \bigl( F\bigl(u^{k-1,m+1}\bigr),v_{i} \bigr) _{\Omega_{i}}+ \bigl( \Lambda_{j}^{k,m},v_{i} \bigr) _{\Gamma_{i}},\quad \forall v_{i}\in V_{i}. \end{aligned}$$
Step 4:: 
Compute
$$ \theta_{1}^{k,m+1}=\frac{\partial\epsilon _{1}^{k,m+1}}{\partial \eta_{1}}+\alpha_{1} \epsilon_{1}^{k,m+1}. $$
Step 5:: 
Compute the new data \(\Lambda_{i}^{k,m+1}\in W^{\ast}\), for \(i,j=1,3\), \(i\neq j\), from
$$\begin{aligned}& \bigl( \Lambda_{i}^{k,m+1},\varphi \bigr) _{\Gamma_{i}} \\& \quad = \bigl( (\alpha _{i}+\alpha_{j})u_{i}^{k,m+1},\varphi \bigr) _{\Gamma_{i}}- \bigl( \Lambda_{j}^{k,m},\varphi \bigr) _{\Gamma_{i}}+ \bigl( \theta _{j}^{k,m+1},\varphi \bigr) _{\Gamma_{i}},\quad \forall\varphi\in W_{i}. \end{aligned}$$
Step 6:: 

Set \(m=m+1\) and go to Step 3.

Step 7:: 

Set \(k=k+1\) and go to Step 2.
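As an illustration only, Steps 2-6 can be sketched for a one-dimensional stationary analogue \(-u''+cu=f\) on \((0,1)\), split at \(x=0.5\) into two nonoverlapping subintervals with a Robin exchange on the interface. The grid, the parameters \(\alpha_{1}=\alpha_{3}=1\), the tolerance, and the omission of the correction term θ (which vanishes as \(m\rightarrow\infty\)) are simplifying assumptions; the interface mismatch used for stopping anticipates the a posteriori estimates of this section.

```python
import numpy as np

def robin_solve(n, h, c, f, lam_in, alpha, interface_right):
    """Step 3: solve -u'' + c*u = f on a subinterval with a homogeneous
    Dirichlet condition at the outer end and the Robin condition
    du/d(eta) + alpha*u = lam_in at the interface end (one-sided FD)."""
    A = np.zeros((n, n))
    b = np.full(n, float(f))
    for j in range(n):                      # interior second-difference rows
        A[j, j] = 2.0 / h**2 + c
        if j > 0:
            A[j, j - 1] = -1.0 / h**2
        if j < n - 1:
            A[j, j + 1] = -1.0 / h**2
    k = n - 1 if interface_right else 0     # index of the interface node
    A[k, :] = 0.0
    A[k, k] = 1.0 / h + alpha               # (u_k - u_neighbor)/h + alpha*u_k
    A[k, k - 1 if interface_right else k + 1] = -1.0 / h
    b[k] = lam_in
    return np.linalg.solve(A, b)

n, h, c, f = 20, 0.5 / 20, 1.0, 1.0
alpha1 = alpha3 = 1.0
lam1 = lam3 = 0.0                           # Step 2: initial interface data
for m in range(200):                        # Steps 3-6
    u1 = robin_solve(n, h, c, f, lam3, alpha1, interface_right=True)
    u3 = robin_solve(n, h, c, f, lam1, alpha3, interface_right=False)
    gap = abs(u1[-1] - u3[0])               # interface mismatch (estimator)
    # Step 5 update with theta dropped: Lambda_i <- (a1+a3) u_i - Lambda_j
    lam1, lam3 = ((alpha1 + alpha3) * u1[-1] - lam3,
                  (alpha1 + alpha3) * u3[0] - lam1)
    if gap < 1e-10:                         # stop once the traces agree
        break
```

At the fixed point of the Step 5 update the two interface values coincide, so driving `gap` to zero recovers the global solution on both subintervals.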

Lemma 1

Let \(u_{i}^{k}=u_{\Omega_{i}}^{k}\), \(e_{i}^{k,m+1}=u_{i}^{k,m+1}-u_{i}^{k}\), and \(\eta_{i}^{k,m+1}=\Lambda _{i}^{k,m+1}-\Lambda_{i}^{k}\). Then for \(i,j=1,3\), \(i \neq j\), the following relations hold:
$$ b_{i}\bigl(e_{i}^{k,m+1},v_{i} \bigr)+ \bigl( \alpha_{i}e_{i}^{k,m+1},v_{i} \bigr) _{\Gamma_{i}}= \bigl( \eta_{j}^{k,m},v_{i} \bigr) _{\Gamma _{i}},\quad \forall v_{i}\in V_{i} $$
(3.10)
and
$$ \bigl(\eta_{i}^{k,m+1},\varphi \bigr) _{\Gamma_{i}}= \bigl( (\alpha _{i}+\alpha_{j})e_{i}^{k,m+1},\varphi \bigr) _{\Gamma_{i}}- \bigl( \eta _{j}^{k,m},\varphi \bigr) _{\Gamma_{i}}+ \bigl( \theta _{j}^{k,m+1},\varphi \bigr) _{\Gamma_{i}},\quad \forall\varphi\in W_{i}. $$
(3.11)

Proof

1. We have
$$ b_{i}\bigl(u_{i}^{k,m+1},v_{i}\bigr)+ \bigl( \alpha_{i}u_{i}^{k,m+1},v_{i} \bigr) _{\Gamma_{i}}= \bigl( F\bigl(u^{k-1,m+1}\bigr),v_{i} \bigr) _{\Omega _{i}}+ \bigl( \Lambda_{j}^{k,m},v_{i} \bigr) _{\Gamma _{i}},\quad \forall v_{i}\in V_{i} $$
and
$$ b_{i}\bigl(u_{i}^{k},v_{i}\bigr)+ \bigl( \alpha_{i}u_{i}^{k},v_{i} \bigr) _{\Gamma _{i}}= \bigl( F\bigl(u^{k-1,m+1}\bigr),v_{i} \bigr) _{\Omega_{i}}+ \bigl( \Lambda _{j}^{k},v_{i} \bigr) _{\Gamma_{i}},\quad \forall v_{i}\in V_{i}. $$
Since \(b ( \cdot,\cdot ) \) is a bilinear form, subtracting the second equation from the first gives
$$ b_{i}\bigl(u_{i}^{k,m+1}-u_{i}^{k},v_{i} \bigr)+ \bigl( \alpha _{i}\bigl(u_{i}^{k,m+1}-u_{i}^{k}\bigr),v_{i} \bigr) _{\Gamma_{i}}= \bigl( \Lambda _{j}^{k,m}- \Lambda_{j}^{k},v_{i} \bigr) _{\Gamma_{i}},\quad \forall v_{i}\in V_{i} , $$
and so
$$ b_{i}\bigl(e_{i}^{k,m+1},v_{i} \bigr)+ \bigl( \alpha _{i}e_{i}^{k,m+1},v_{i} \bigr) _{\Gamma_{i}}= \bigl( \eta _{j}^{k,m},v_{i} \bigr) _{\Gamma_{i}},\quad \forall v_{i}\in V_{i}. $$
2. We have \(\lim_{m\rightarrow+\infty}\epsilon _{1}^{k,m}=\lim_{m\rightarrow+\infty}\theta_{1}^{k,m}=0\). Then, passing to the limit in (3.8),
$$ \Lambda_{i}^{k}=(\alpha_{1}+\alpha_{3})u_{i}^{k}- \Lambda_{j}^{k}. $$
Therefore
$$\begin{aligned} \eta_{i}^{k,m+1} =&\Lambda_{i}^{k,m+1}- \Lambda_{i}^{k} \\ =&(\alpha_{1}+\alpha_{3})u_{i}^{k,m+1}- \Lambda_{j}^{k,m}+\theta _{j}^{k,m+1}-( \alpha_{1}+\alpha_{3})u_{i}^{k}+ \Lambda_{j}^{k} \\ =&(\alpha_{1}+\alpha_{3}) \bigl(u_{i}^{k,m+1}-u_{i}^{k} \bigr)-\bigl(\Lambda _{j}^{k,m}-\Lambda_{j}^{k} \bigr)+\theta_{j}^{k,m+1}. \end{aligned}$$
 □

Lemma 2

By letting C be a generic constant which has different values at different places, one gets, for \(i,j=1,3\), \(i\neq j\),
$$ \bigl( \eta_{i}^{k,m-1}-\alpha_{i}e_{i}^{k,m},w \bigr) _{\Gamma _{1}}\leqslant C\bigl\Vert e_{i}^{k,m}\bigr\Vert _{1,\Omega_{i}} \Vert w\Vert _{W_{1}} $$
(3.12)
and
$$ \bigl( \alpha_{i}w_{i}+\theta_{1}^{k,m+1},e_{i}^{k,m+1} \bigr) _{\Gamma _{1}}\leqslant C\bigl\Vert e_{i}^{k,m+1}\bigr\Vert _{1,\Omega _{i}}\Vert w\Vert _{W_{1}}. $$
(3.13)

Proof

Using Lemma 1 and the fact that the inverse of the trace mapping \(Tr_{i}^{-1}:W_{1}\rightarrow V_{i}\) is continuous we have for \(i,j=1,3\), \(i\neq j\),
$$\begin{aligned} \bigl( \eta_{i}^{k,m-1}-\alpha_{i}e_{i}^{k,m},w \bigr) _{\Gamma_{i}} =&b_{i}\bigl(e_{i}^{k,m},Tr^{-1}w \bigr) \\ =& \bigl( \nabla e_{i}^{k,m},\nabla Tr^{-1}w \bigr) _{\Omega_{i}} \\ &{}+ \bigl( \alpha e_{i}^{k,m},Tr^{-1}w \bigr) _{\Omega_{i}}+\lambda \bigl( e_{i}^{k,m},Tr^{-1}w \bigr) _{\Omega_{i}} \\ \leqslant&\bigl\vert e_{i}^{k,m}\bigr\vert _{1,\Omega_{i}}\bigl\vert Tr^{-1}w\bigr\vert _{1,\Omega_{i}}+\Vert \alpha \Vert _{\infty }\bigl\Vert e_{i}^{k,m}\bigr\Vert _{0,\Omega_{i}}\bigl\Vert Tr^{-1}w\bigr\Vert _{0,\Omega_{i}} \\ &{}+\vert \lambda \vert \bigl\Vert e_{i}^{k,m}\bigr\Vert _{0,\Omega_{i}}\bigl\Vert Tr^{-1}w\bigr\Vert _{0,\Omega_{i}} \\ \leqslant&C\bigl\Vert e_{i}^{k,m}\bigr\Vert _{1,\Omega_{i}} \Vert w\Vert _{W_{1}}. \end{aligned}$$
For the second estimate, we have
$$\begin{aligned} \bigl( \alpha_{i}w_{i}+\theta_{1}^{k,m+1},e_{i}^{k,m+1} \bigr) _{\Gamma _{i}} \leqslant&\bigl\Vert \alpha_{i}w_{i}+ \theta_{1}^{k,m+1}\bigr\Vert _{0,\Gamma_{1}}\bigl\Vert e_{i}^{k,m+1}\bigr\Vert _{0,\Gamma_{1}} \\ \leqslant& \bigl( \vert \alpha_{i}\vert \Vert w_{i} \Vert _{0,\Gamma_{1}}+\bigl\Vert \theta_{1}^{k,m+1} \bigr\Vert _{0,\Gamma_{1}} \bigr) \bigl\Vert e_{i}^{k,m+1}\bigr\Vert _{0,\Gamma _{1}} \\ \leq&\max\bigl(\vert \alpha_{i}\vert ,\bigl\Vert \theta _{1}^{k,m+1}\bigr\Vert _{0,\Gamma_{1}}\bigr)\Vert w_{i}\Vert _{0,\Gamma_{1}}\bigl\Vert e_{i}^{k,m+1} \bigr\Vert _{0,\Gamma _{1}} \\ \leq&C\bigl\Vert e_{i}^{k,m+1}\bigr\Vert _{0,\Gamma_{1}} \Vert w_{i}\Vert _{0,\Gamma_{1}}\leqslant C\bigl\Vert e_{i}^{k,m+1}\bigr\Vert _{0,\Gamma_{1}}\Vert w_{i} \Vert _{W_{1}}. \end{aligned}$$
Here we used the bound
$$ \vert \alpha_{i}\vert \Vert w_{i}\Vert _{0,\Gamma _{1}}+\bigl\Vert \theta_{1}^{k,m+1}\bigr\Vert _{0,\Gamma _{1}}\leqslant \max\bigl(\vert \alpha_{i}\vert ,\bigl\Vert \theta _{1}^{k,m+1}\bigr\Vert _{0,\Gamma_{1}}\bigr) \Vert w_{i}\Vert _{0,\Gamma_{1}}. $$
 □

Proposition 1

For the sequences \((u_{1}^{k,m},u_{3}^{k,m})_{m\in \mathbb{N}}\), solutions of (3.4), (3.5), we have the following a posteriori error estimation:
$$ \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k}\bigr\Vert _{1,\Omega_{1}}+ \bigl\Vert u_{3}^{k,m}-u_{3}^{k} \bigr\Vert _{3,\Omega_{3}}\leqslant C\bigl\Vert u_{1}^{k,m+1}-u_{3}^{k,m} \bigr\Vert _{W_{1}}. $$

Proof

From (3.10) and (3.11), we have
$$\begin{aligned}& b_{1}\bigl(e_{1}^{k,m+1},v_{1} \bigr)+b_{3}\bigl(e_{3}^{k,m},v_{3} \bigr) \\& \quad = \bigl( \eta_{3}^{k,m}-\alpha_{1}e_{1}^{k,m+1},v_{1} \bigr) _{\Gamma _{1}}+ \bigl( \eta_{1}^{k,m-1}- \alpha_{3}e_{3}^{k,m},v_{3} \bigr) _{\Gamma _{1}} \\& \quad = \bigl( \eta_{3}^{k,m}-\alpha_{1}e_{1}^{k,m+1},v_{1} \bigr) _{\Gamma _{1}}+ \bigl( \eta_{1}^{k,m-1}- \alpha_{3}e_{3}^{k,m},v_{3} \bigr) _{\Gamma_{1}} \\& \qquad {}+ \bigl( \eta_{1}^{k,m-1}-\alpha_{3}e_{3}^{k,m},v_{1} \bigr) _{\Gamma _{1}}- \bigl( \eta_{1}^{k,m-1}- \alpha_{3}e_{3}^{k,m},v_{1} \bigr) _{\Gamma_{1}}. \end{aligned}$$
Thus, we have
$$\begin{aligned}& b_{1}\bigl(e_{1}^{k,m+1},v_{1} \bigr)+b_{3}\bigl(e_{3}^{k,m},v_{3} \bigr) \\& \quad = \bigl( \eta _{3}^{k,m}-\alpha_{1}e_{1}^{k,m+1}+ \eta_{1}^{k,m-1}-\alpha _{3}e_{3}^{k,m},v_{1} \bigr) _{\Gamma_{1}} \\& \qquad {}+ \bigl( \eta_{1}^{k,m-1}-\alpha_{3}e_{3}^{k,m},v_{3}-v_{1} \bigr) _{\Gamma_{1}} \\& \quad = \bigl( ( \alpha_{1}+\alpha_{3} ) e_{3}^{k,m}+\theta _{1}^{k,m}- \alpha_{1}e_{1}^{k,m+1}-\alpha _{3}e_{3}^{k,m},v_{1} \bigr) _{\Gamma_{1}} \\& \qquad {}+ \bigl( \eta_{1}^{k,m-1}-\alpha_{3}e_{3}^{k,m},v_{3}-v_{1} \bigr) _{\Gamma_{1}} \\& \quad = \bigl( \alpha_{1}\bigl(e_{3}^{k,m}-e_{1}^{k,m+1} \bigr)+\theta _{1}^{k,m},v_{1} \bigr) _{\Gamma_{1}}+ \bigl( \eta _{1}^{k,m-1}-\alpha _{3}e_{3}^{k,m},v_{3}-v_{1} \bigr) _{\Gamma_{1}}. \end{aligned}$$
Take \(v_{1}=e_{1}^{k,m+1}\) and \(v_{3}=e_{3}^{k,m}\). Then using \(\frac{1}{2}(a+b)^{2}\leqslant a^{2}+b^{2}\) and Lemma 2, we get
$$\begin{aligned}& \frac{1}{2} \bigl( \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k} \bigr\Vert _{1,\Omega_{1}}+\bigl\Vert u_{3}^{k,m}-u_{3}^{k} \bigr\Vert _{3,\Omega _{3}} \bigr) ^{2} \\& \quad \leqslant\bigl\Vert u_{1}^{k,m+1}-u_{1}^{k} \bigr\Vert _{1,\Omega _{1}}^{2}+\bigl\Vert u_{3}^{k,m}-u_{3}^{k} \bigr\Vert _{3,\Omega_{3}}^{2} \\& \quad \leq\bigl\Vert e_{1}^{k,m+1}\bigr\Vert _{1,\Omega_{1}}^{2}+\bigl\Vert e_{3}^{k,m}\bigr\Vert _{3,\Omega_{3}}^{2} \\& \quad \leq \bigl( \nabla e_{1}^{k,m+1},\nabla e_{1}^{k,m+1} \bigr) _{\Omega _{1}}+ \bigl( e_{1}^{k,m+1},e_{1}^{k,m+1} \bigr) _{\Omega_{1}} \\& \qquad {}+ \bigl( \nabla e_{3}^{k,m},\nabla e_{3}^{k,m} \bigr) _{\Omega _{3}}+ \bigl( e_{3}^{k,m},e_{3}^{k,m} \bigr) _{\Omega_{3}} \\& \quad \leqslant \bigl( \nabla e_{1}^{k,m+1},\nabla e_{1}^{k,m+1} \bigr) _{\Omega_{1}}+\frac{1}{\beta} \bigl( \alpha e_{1}^{k,m+1},e_{1}^{k,m+1} \bigr) _{\Omega_{1}} \\& \qquad {}+ \bigl( \nabla e_{3}^{k,m},\nabla e_{3}^{k,m} \bigr) _{\Omega _{3}}+\frac{1}{\beta} \bigl( \alpha e_{3}^{k,m},e_{3}^{k,m} \bigr) _{\Omega_{3}}. \end{aligned}$$
Then
$$\begin{aligned} \begin{aligned} &\frac{1}{2} \bigl( \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k} \bigr\Vert _{1,\Omega_{1}}+\bigl\Vert u_{3}^{k,m}-u_{3}^{k} \bigr\Vert _{3,\Omega _{3}} \bigr) ^{2} \\ &\quad \leqslant\max\biggl(1,\frac{1}{\beta}\biggr) \bigl( b_{1} \bigl( e_{1}^{k,m+1},e_{1}^{k,m+1} \bigr) +b_{3} \bigl( e_{3}^{k,m},e_{3}^{k,m} \bigr) \bigr) \\ &\quad =\max\biggl(1,\frac{1}{\beta}\biggr) \bigl[ \bigl( \alpha _{1}\bigl(e_{3}^{k,m}-e_{1}^{k,m+1}\bigr)+ \theta_{1}^{k,m},e_{1}^{k,m+1} \bigr) _{\Gamma_{1}} \\ &\qquad {}+ \bigl( \eta_{1}^{k,m-1}-\alpha _{3}e_{3}^{k,m},e_{3}^{k,m}-e_{1}^{k,m+1} \bigr) _{\Gamma_{1}} \bigr] \\ &\quad \leqslant C_{1}\bigl\Vert e_{1}^{k,m+1}\bigr\Vert _{1,\Omega _{1}}\bigl\Vert e_{3}^{k,m}-e_{1}^{k,m+1} \bigr\Vert _{W_{1}}+C_{2}\bigl\Vert e_{3}^{k,m} \bigr\Vert _{3,\Omega_{3}} \bigl\Vert e_{3}^{k,m}-e_{1}^{k,m+1} \bigr\Vert _{W_{1}} \\ &\quad \leqslant\max(C_{1},C_{2}) \bigl[ \bigl\Vert e_{1}^{k,m+1}\bigr\Vert _{1,\Omega_{1}}+\bigl\Vert e_{3}^{k,m}\bigr\Vert _{3,\Omega _{3}} \bigr] \bigl\Vert e_{3}^{k,m}-e_{1}^{k,m+1}\bigr\Vert _{W_{1}},\end{aligned} \end{aligned}$$
thus
$$ \bigl\Vert e_{1}^{k,m+1}\bigr\Vert _{1,\Omega_{1}}+ \bigl\Vert e_{3}^{k,m}\bigr\Vert _{3,\Omega_{3}}\leqslant 2\max(C_{1},C_{2})\bigl\Vert e_{1}^{k,m+1}-e_{3}^{k,m}\bigr\Vert _{W_{1}}. $$
Therefore
$$ \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k}\bigr\Vert _{1,\Omega _{1}}+\bigl\Vert u_{3}^{k,m}-u_{3}^{k} \bigr\Vert _{3,\Omega _{3}}\leqslant2\max(C_{1},C_{2})\bigl\Vert u_{1}^{k,m+1}-u_{3}^{k,m}\bigr\Vert _{W_{1}}. $$
 □

In a similar way, we define another nonoverlapping auxiliary problem over \(( \Omega_{2},\Omega_{4} ) \), and we get the same result.

Proposition 2

For the sequences \((u_{2}^{k,m},u_{4}^{k,m})_{m\in\mathbb{N}}\) we get a similar a posteriori error estimation, as follows:
$$ \bigl\Vert u_{2}^{k,m+1}-u_{2}^{k} \bigr\Vert _{2,\Omega_{2}}+ \bigl\Vert u_{4}^{k,m}-u_{4}^{k} \bigr\Vert _{4,\Omega_{4}}\leqslant C\bigl\Vert u_{2}^{k,m+1}-u_{4}^{k,m} \bigr\Vert _{W_{2}}. $$
(3.14)

Proof

The proof is very similar to the proof of Proposition 1. □

Theorem 2

Let \(u_{i}^{k}=u_{\Omega_{i}}^{k}\). For the sequences \((u_{1}^{k,m},u_{2}^{k,m})_{m\in\mathbb{N}}\), solutions of problems (3.1), (3.2), one has the following result:
$$\begin{aligned}& \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k}\bigr\Vert _{1,\Omega_{1}}+ \bigl\Vert u_{2}^{k,m}-u_{2}^{k} \bigr\Vert _{2,\Omega_{2}} \\& \quad \leqslant C\bigl(\bigl\Vert u_{1}^{k,m+1}-u_{2}^{k,m} \bigr\Vert _{W_{1}}+\bigl\Vert u_{2}^{k,m}-u_{1}^{k,m-1} \bigr\Vert _{W_{2}}+\bigl\Vert e_{1}^{k,m}\bigr\Vert _{W_{1}}+\bigl\Vert e_{2}^{k,m-1}\bigr\Vert _{W_{2}}\bigr). \end{aligned}$$

Proof

We use two nonoverlapping auxiliary problems over \(( \Omega _{1},\Omega _{3} ) \) and over \(( \Omega_{2},\Omega_{4} ) \), respectively. From the previous two propositions, we have
$$\begin{aligned}& \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k}\bigr\Vert _{1,\Omega_{1}}+ \bigl\Vert u_{2}^{k,m}-u_{2}^{k} \bigr\Vert _{2,\Omega_{2}} \\& \quad \leqslant \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k} \bigr\Vert _{1,\Omega _{1}}+\bigl\Vert u_{3}^{k,m}-u_{3}^{k} \bigr\Vert _{3,\Omega_{3}} \\& \qquad {}+\bigl\Vert u_{2}^{k,m}-u_{2}^{k} \bigr\Vert _{2,\Omega_{2}}+ \bigl\Vert u_{4}^{k,m-1}-u_{4}^{k} \bigr\Vert _{4,\Omega_{4}} \\& \quad \leqslant C\bigl\Vert u_{1}^{k,m+1}-u_{3}^{k,m} \bigr\Vert _{W_{1}}+C\bigl\Vert u_{2}^{k,m}-u_{4}^{k,m-1} \bigr\Vert _{W_{2}} \\& \quad \leqslant C\bigl\Vert u_{1}^{k,m+1}-u_{2}^{k,m}+ \epsilon _{1}^{k,m}\bigr\Vert _{W_{1}}+C\bigl\Vert u_{2}^{k,m}-u_{1}^{k,m-1}+ \epsilon_{2}^{k,m-1}\bigr\Vert _{W_{2}}, \\& \bigl\Vert u_{1}^{k,m+1}-u_{1}^{k}\bigr\Vert _{1,\Omega_{1}}+ \bigl\Vert u_{2}^{k,m}-u_{2}^{k} \bigr\Vert _{2,\Omega_{2}} \\& \quad \leqslant C\bigl(\bigl\Vert u_{1}^{k,m+1}-u_{2}^{k,m}\bigr\Vert _{W_{1}}+ \bigl\Vert u_{2}^{k,m}-u_{1}^{k,m-1}\bigr\Vert _{W_{2}} \\& \qquad{} + \bigl\Vert \epsilon_{1}^{k,m}\bigr\Vert _{W_{1}}+\bigl\Vert \epsilon _{2}^{k,m-1}\bigr\Vert _{W_{2}}\bigr). \end{aligned}$$
 □

4 A posteriori error estimate in the discrete case

In this section, we consider the discretization of the variational problems (2.8), (2.9). Let \(\tau_{h}\) be a triangulation of Ω compatible with the domain decomposition and let \(V_{h}\subset H_{0}^{1}\) be the subspace of continuous functions which vanish on ∂Ω; we set
$$ \{ V_{i,h}=V_{h,\Omega_{i}},W_{i,h}=W_{h\Gamma _{i}}, i=1,2 \} , $$
(4.1)
where \(W_{h\Gamma_{i}}\) is a subspace of \(H_{00}^{\frac{1}{2}} (\Gamma_{i})\) which consists of continuous piecewise polynomial functions on \(\Gamma_{i}\) which vanish at the end points of \(\Gamma_{i}\).

4.1 The space discretization

Let Ω be decomposed into triangles, let \(\tau_{h}\) denote the set of all those elements, and let \(h>0\) be the mesh size. We assume that the family \(\tau _{h}\) is regular and quasi-uniform. We consider the usual basis of affine functions \(\varphi_{i}\), \(i=1,\ldots,m ( h ) \), defined by \(\varphi_{i} ( M_{j} ) =\delta_{ij}\), where \(M_{j}\) is a vertex of the considered triangulation.

We first discretize in space, i.e., we approximate the space \(H_{0}^{1}\) by a finite-dimensional subspace \(V^{h}\subset H_{0}^{1}\). In a second step, we discretize the problem with respect to time using the θ-scheme. We thus seek a sequence of elements \(u_{h}^{n}\in V^{h}\) which approximate \(u ( t_{n} ) \), \(t_{n}=n\Delta t\), with initial data \(u_{h}^{0}=u_{0h}\). We then apply the θ-scheme to the semi-discrete approximation for all \(v_{h}\in V^{h}\).
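As an illustration of the θ-scheme update described above, the following sketch (ours, not part of the paper's analysis) applies it to a single scalar mode \(u'+\beta u=f\) of the semi-discrete system; all parameter values are hypothetical:

```python
# theta-scheme for one scalar mode u' + beta*u = f of the semi-discrete system:
# (u^{n+1} - u^n)/dt + beta*(theta*u^{n+1} + (1-theta)*u^n) = f.
# All numerical values below are illustrative assumptions.
beta, theta, dt, f = 1.0, 0.5, 0.1, 2.0
u = 0.0  # initial data u_h^0 = u_{0h}
for n in range(200):
    # solve the implicit relation for u^{n+1}
    u = ((1.0 - beta * (1.0 - theta) * dt) * u + dt * f) / (1.0 + beta * theta * dt)
# u now approximates the steady state f/beta = 2.0
```

For θ=1/2 this is the Crank-Nicolson scheme, while θ=1 gives the implicit Euler scheme.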

Let \(u_{h}^{m+1}\in V_{h}\) be the solution of the discrete problem associated with (3.1), \(u_{i,h}^{m+1}=u_{h\Omega_{i}}^{m+1}\).

We construct the sequences \((u_{i,h}^{n+1,m+1})_{m\in \mathbb{N}}\), \(u_{i,h}^{n+1,m+1}\in V_{i,h}\) (\(i=1,2\)), solutions of the discrete problems associated with (3.1), (3.2).

In a similar manner to the previous section, we introduce two auxiliary problems. We define for \(( \Omega_{1},\Omega_{3} ) \) the following problems:
$$ \left \{ \textstyle\begin{array}{l} b_{1}(u_{1,h}^{k,m+1},v_{1})+ ( \alpha _{1,h}u_{1,h}^{k,m+1},v_{1} ) _{\Gamma_{1}}= ( F(u_{1,h}^{k-1,m+1}),v_{1} ) _{\Omega_{1}}, \\ u_{1,h}^{k,m+1}=0\quad \text{on }\partial\Omega_{1}\cap\partial\Omega, \\ \frac{\partial u_{1,h}^{k,m+1}}{\partial\eta_{1}}+\alpha _{1}u_{1,h}^{k,m+1}=\frac{\partial u_{2,h}^{k,m}}{\partial\eta_{1}}+\alpha_{1}u_{2,h}^{k,m}\quad \text{on }\Gamma_{1} \end{array}\displaystyle \right . $$
(4.2)
and
$$ \left \{ \textstyle\begin{array}{l} b_{3}(u_{3,h}^{k,m+1},v_{3})+ ( \alpha _{3,h}u_{3,h}^{k,m+1},v_{3} ) _{\Gamma_{1}}= ( F(u_{3,h}^{k-1,m+1}),v_{3} ) _{\Omega_{3}}, \\ u_{3,h}^{k,m+1}=0\quad \text{on }\partial\Omega_{3}\cap\partial\Omega, \\ \frac{\partial u_{3,h}^{k,m+1}}{\partial\eta_{3}}+\alpha _{3}u_{3,h}^{k,m+1}=\frac{\partial u_{1,h}^{k,m}}{\partial\eta _{3}}+\alpha _{3}u_{1,h}^{k,m}\quad \text{on }\Gamma_{1}, \end{array}\displaystyle \right . $$
(4.3)
and for \(( \Omega_{2},\Omega_{4} ) \)
$$ \left \{ \textstyle\begin{array}{l} b_{2}(u_{2,h}^{k,m+1},v_{2})+ ( \alpha _{2,h}u_{2,h}^{k,m+1},v_{2} ) _{\Gamma_{2}}= ( F(u_{2,h}^{k-1,m+1}),v_{2} ) _{\Omega_{2}}, \\ u_{2,h}^{k,m+1}=0\quad \text{on }\partial\Omega_{2}\cap\partial\Omega, \\ \frac{\partial u_{2,h}^{k,m+1}}{\partial\eta_{2}}+\alpha _{2}u_{2,h}^{k,m+1}=\frac{\partial u_{1,h}^{k,m}}{\partial\eta_{2}}+\alpha_{2}u_{1,h}^{k,m}\quad \text{on }\Gamma_{2} \end{array}\displaystyle \right . $$
and
$$ \left \{ \textstyle\begin{array}{l} b_{4}(u_{4,h}^{k,m+1},v_{4})+ ( \alpha _{4,h}u_{4,h}^{k,m+1},v_{4} ) _{\Gamma_{2}}= ( F(u_{4,h}^{k-1,m+1}),v_{4} ) _{\Omega_{4}}, \\ u_{4,h}^{k,m+1}=0\quad \text{on }\partial\Omega_{4}\cap\partial \Omega, \\ \frac{\partial u_{4,h}^{k,m+1}}{\partial\eta_{4}}+\alpha _{4}u_{4,h}^{k,m+1}=\frac{\partial u_{2,h}^{k,m}}{\partial\eta _{4}}+\alpha_{4}u_{2,h}^{k,m}\quad \text{on }\Gamma_{2}. \end{array}\displaystyle \right . $$
(4.4)
We can obtain the discrete counterparts of Propositions 1 and 2 by doing almost the same analysis as in the section above (i.e., passing from continuous spaces to discrete subspaces and from continuous sequences to discrete ones). Therefore,
$$ \bigl\Vert u_{1,h}^{k,m+1}-u_{1,h}^{k} \bigr\Vert _{1,\Omega_{1}}+ \bigl\Vert u_{3,h}^{k,m}-u_{3,h}^{k} \bigr\Vert _{3,\Omega_{3}}\leqslant C\bigl\Vert u_{1,h}^{k,m+1}-u_{3,h}^{k,m} \bigr\Vert _{W_{1}} $$
(4.5)
and
$$ \bigl\Vert u_{2,h}^{k,m+1}-u_{2,h}^{k} \bigr\Vert _{2,\Omega_{2}}+ \bigl\Vert u_{4,h}^{k,m}-u_{4,h}^{k} \bigr\Vert _{4,\Omega_{4}}\leqslant C\bigl\Vert u_{2,h}^{k,m+1}-u_{4,h}^{k,m} \bigr\Vert _{W_{2}}. $$
(4.6)
Similarly to the proof of Theorem 2, we get the following discrete estimate:
$$\begin{aligned}& \bigl\Vert u_{1,h}^{k,m+1}-u_{1,h}^{k}\bigr\Vert _{1,\Omega _{1}}+\bigl\Vert u_{2,h}^{k,m}-u_{2,h}^{k} \bigr\Vert _{2,\Omega _{2}} \\& \quad \leqslant C\bigl(\bigl\Vert u_{1,h}^{k,m+1}-u_{2,h}^{k,m} \bigr\Vert _{W_{1}}+\bigl\Vert u_{2,h}^{k,m}-u_{1,h}^{k,m-1} \bigr\Vert _{W_{2}} +\bigl\Vert e_{1,h}^{k,m}\bigr\Vert _{W_{1}}+\bigl\Vert e_{2,h}^{k,m-1}\bigr\Vert _{W_{2}}\bigr). \end{aligned}$$
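The practical value of such a posteriori estimates is that the right-hand side is fully computable, so it can drive a stopping criterion for the Schwarz iteration. The following self-contained Python sketch is our illustration, not the paper's method: a 1D model problem \(-u''=1\), \(u(0)=u(1)=0\), with Dirichlet rather than Robin transmission conditions, and all grid parameters hypothetical. The interface differences play the role of the indicator and stop the loop:

```python
# Multiplicative Schwarz on -u'' = 1, u(0)=u(1)=0, with two overlapping
# subdomains; the computable interface differences serve as the stopping
# indicator, in the spirit of the a posteriori estimate above.

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def local_solve(n_int, h, left, right):
    """Finite-difference solve of -u'' = 1 with Dirichlet data left/right."""
    sub = [-1.0 / h**2] * n_int
    diag = [2.0 / h**2] * n_int
    sup = [-1.0 / h**2] * n_int
    rhs = [1.0] * n_int
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return thomas(sub, diag, sup, rhs)

N, h = 20, 1.0 / 20          # global grid x_j = j*h, j = 0..N
g1, g2 = 12, 8               # interface nodes: Gamma_1 = x_12, Gamma_2 = x_8
u = [0.0] * (N + 1)          # current global iterate (zero initial guess)
tol, indicator, m = 1e-10, 1.0, 0
while indicator > tol and m < 200:
    # subdomain 1 = (0, x_12): boundary datum at x_12 taken from subdomain 2
    u1 = local_solve(g1 - 1, h, 0.0, u[g1])
    ind1 = abs(u1[g2 - 1] - u[g2])        # |u1^{m+1} - u2^m| on Gamma_2
    u[1:g1] = u1
    # subdomain 2 = (x_8, 1): boundary datum at x_8 taken from subdomain 1
    u2 = local_solve(N - g2 - 1, h, u[g2], 0.0)
    ind2 = abs(u2[g1 - g2 - 1] - u[g1])   # |u2^{m+1} - u2^m| on Gamma_1
    u[g2 + 1:N] = u2
    indicator = ind1 + ind2
    m += 1

exact = [x * (1 - x) / 2 for x in (j * h for j in range(N + 1))]
err = max(abs(a - b) for a, b in zip(u, exact))
```

At convergence the iterate agrees with the nodal values of the exact solution \(u(x)=x(1-x)/2\), since the central difference scheme is nodally exact for quadratic solutions.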

5 Asymptotic behavior for the problem

5.1 A fixed point mapping associated with discrete problem

We define for \(i=1,2,3,4\) the following mapping:
$$ \begin{aligned} &T_{h}:V_{i,h} \rightarrow H_{0}^{1} ( \Omega_{i} ), \\ &w_{i}\mapsto T_{h}w_{i}=\xi _{h,i}^{k,m+1}= \partial _{h} \bigl( F ( w_{i} ) \bigr) , \end{aligned} $$
(5.1)
where \(\xi_{h,i}^{k} \) is the solution of the following problem:
$$ \left \{ \textstyle\begin{array}{l} b_{i}(\xi_{i,h}^{k,m+1},v_{i})+ ( \alpha_{i,h}\xi _{i,h}^{k,m+1},v_{i,h} ) _{\Gamma_{i}}= ( F(w_{i}),v_{i,h} ) _{\Omega_{i}}, \\ \xi_{i,h}^{k,m+1}=0\quad \text{on }\partial\Omega_{i}\cap\partial \Omega, \\ \frac{\partial\xi_{i,h}^{k,m+1}}{\partial\eta_{i}}+\alpha_{i}\xi _{i,h}^{k,m+1}=\frac{\partial\xi_{j,h}^{k,m}}{\partial\eta _{i}}+\alpha _{i}\xi_{j,h}^{k,m}\quad \text{on }\Gamma_{i}, i=1,\ldots,4, j=1,2.\end{array}\displaystyle \right . $$
(5.2)

5.2 An iterative discrete algorithm

Choose \(u_{i,h}^{0}=u_{i,h_{0}}\), the solution of the following discrete equation:
$$ b_{i} \bigl( u_{i,h}^{0},v_{h} \bigr) = \bigl( g_{i}^{0},v_{h} \bigr) ,\quad v_{h}\in V_{h}, $$
(5.3)
where \(g_{i}^{0}\) is a regular function.
Now we give the following discrete algorithm:
$$ u_{i,h}^{k,m+1}=T_{h}u_{i,h}^{k-1,m+1}, \quad k=1,\ldots,n, i=1,\ldots,4, $$
where \(u_{i,h}^{k}\) is the solution of the problem (5.2).

Proposition 3

Let \(\xi_{h}^{i,k} \) be a solution of the problem (5.2) with the right-hand side \(F^{i} ( w_{i} ) \) and the boundary condition \(\frac {\partial \xi_{i,h}^{k,m+1}}{\partial\eta_{i}}+\alpha_{i}\xi_{i,h}^{k,m+1}\), \(\tilde{\xi}_{h}^{i,k}\) the solution for \(\tilde{F}^{i}\) and \(\frac {\partial \tilde{\xi}_{i,h}^{k,m+1}}{\partial\eta_{i}}+\alpha_{i}\tilde{\xi}_{i,h}^{k,m+1}\). The mapping \(T_{h}\) is a contraction in \(V_{i,h}\) with the rate of contraction \(\frac{\lambda}{\beta+\lambda}\). Therefore, \(T_{h}\) admits a unique fixed point which coincides with the solution of the problem (5.2).

Proof

We note that
$$ \Vert W\Vert _{H_{0}^{1} ( \Omega_{i} ) }= \Vert W\Vert _{1}. $$
Set
$$ \phi=\frac{1}{\beta+\lambda}\bigl\Vert F ( w_{i} ) -F ( \tilde{w}_{i} ) \bigr\Vert _{1}. $$
Then \(\xi_{i,h}^{k,m+1}+\phi\) is a solution of
$$ \left \{ \textstyle\begin{array}{l} b ( \xi_{i,h}^{k,m+1}+\phi, ( v_{i,h}+\phi ) ) = ( F(w_{i})+\alpha_{i}\phi, ( v_{i,h}+\phi ) ) , \\ \xi_{i,h}^{k,m+1}=0\quad \text{on }\partial\Omega_{i}\cap\partial \Omega, \\ \frac{\partial\xi_{i,h}^{k,m+1}}{\partial\eta_{i}}+\alpha_{i}\xi _{i,h}^{k,m+1}=\frac{\partial\xi_{j,h}^{k,m}}{\partial\eta _{i}}+\alpha _{i}\xi_{j,h}^{k,m}\quad \text{on }\Gamma_{i}, i=1,\ldots,4, j=1,2.\end{array}\displaystyle \right . $$
From assumption (1.2), we have
$$\begin{aligned} F(w_{i}) \leq&F ( \tilde{w}_{i} ) +\bigl\Vert F(w_{i})-F ( \tilde{w}_{i} ) \bigr\Vert _{1} \\ \leq&F ( \tilde{w}_{i} ) +\frac{\alpha}{\beta+\lambda}\bigl\Vert F(w_{i})-F ( \tilde{w}_{i} ) \bigr\Vert _{1} \\ \leq&F ( \tilde{w}_{i} ) +\alpha\phi. \end{aligned}$$
It is clear that if \(F(w_{i})\geq F ( \tilde {w}_{i} ) \) then \(\xi_{i,h}^{k,m+1}\geq\tilde{\xi}_{i,h}^{k,m+1}\). Thus
$$ \xi_{i,h}^{k,m+1}\leq\tilde{\xi}_{i,h}^{k,m+1}+ \phi. $$
Since the roles of \(w_{i}\) and \(\tilde{w}_{i}\) are symmetric, the same argument gives
$$ \tilde{\xi}_{i,h}^{k,m+1}\leq\xi_{i,h}^{k,m+1}+ \phi, $$
which yields
$$\begin{aligned} \bigl\Vert T ( w ) -T ( \tilde{w} ) \bigr\Vert _{1} \leq & \frac{1}{\beta+\lambda}\bigl\Vert F(w_{i})-F ( \tilde {w}_{i} ) \bigr\Vert _{1} \\ =&\frac{1}{\beta+\lambda}\bigl\Vert f^{i}+\lambda w_{i}-f^{i}- \lambda \tilde{w}_{i}\bigr\Vert _{1} \\ \leq&\frac{\lambda}{\beta+\lambda} \Vert w_{i}-\tilde{w}_{i} \Vert _{1}. \end{aligned}$$
 □
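The contraction rate established above can be observed numerically. In the following sketch (our simplification, not from the paper: the bilinear form is replaced by a hypothetical diagonal two-mode model whose smallest eigenvalue is the coercivity constant β, with illustrative values \(\lambda=2\), \(\beta=1\)), iterating \(w\mapsto T_{h}w\) produces an observed contraction factor approaching \(\lambda/(\beta+\lambda)\):

```python
# Diagonal two-mode model of the fixed-point map T_h: w -> xi solving
# b(xi, v) = (F(w), v) with F(w) = f + lam*w.  Mode-wise this reads
# (k_i + lam) * xi_i = f_i + lam * w_i, with k_i >= beta the coercivity
# constant, so the contraction rate is lam / (beta + lam).
lam, beta = 2.0, 1.0
k = [beta, 3.0]              # "eigenvalues" of the bilinear form, min = beta
f = [1.0, 6.0]
fixed = [fi / ki for fi, ki in zip(f, k)]   # T(u) = u  <=>  k_i u_i = f_i

def T(w):
    return [(fi + lam * wi) / (ki + lam) for fi, ki, wi in zip(f, k, w)]

w = [0.0, 0.0]
errs = []
for _ in range(30):
    w = T(w)
    errs.append(max(abs(wi - ui) for wi, ui in zip(w, fixed)))

rate = errs[-1] / errs[-2]   # observed contraction factor, close to 2/3
```

Here the observed `rate` matches the theoretical value \(\lambda/(\beta+\lambda)=2/3\), since the slowest mode is the one attaining the coercivity constant.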

Proposition 4

Under the previous hypotheses and notations, we have the following convergence estimate:
$$ \bigl\Vert u_{i,h}^{n,m+1}-u_{i,h}^{\infty,m+1} \bigr\Vert _{1}\leq \biggl( \frac{1}{1+\beta\theta ( \Delta t ) } \biggr) ^{n} \bigl\Vert u_{i,h}^{\infty,m+1}-u_{i,h_{0}}\bigr\Vert _{1},\quad k=0,\ldots,n, $$
(5.4)
where \(u_{i,h}^{\infty,m+1}\) is the asymptotic continuous solution and \(u_{i,h_{0}}\) the solution of (5.3).

Proof

We have
$$\begin{aligned}& u_{i,h}^{\infty,m+1}=T_{h}u_{i,h}^{\infty,m+1}, \\& \bigl\Vert u_{i,h}^{1,m+1}-u_{i,h}^{\infty,m+1} \bigr\Vert _{1}= \bigl\Vert T_{h}u_{i,h}^{0,m+1}-T_{h}u_{i,h}^{\infty,m+1} \bigr\Vert _{1}\leq \biggl( \frac{1}{1+\beta\theta ( \Delta t ) } \biggr) \bigl\Vert u_{i,h}^{0,m+1}-u_{i,h}^{\infty,m+1}\bigr\Vert _{1}, \end{aligned}$$
and, for \(n+1\), we have
$$ \bigl\Vert u_{i,h}^{n+1,m+1}-u_{i,h}^{\infty,m+1} \bigr\Vert _{1}=\bigl\Vert T_{h}u_{i,h}^{n,m+1}-T_{h}u_{i,h}^{\infty,m+1} \bigr\Vert _{1}\leq \biggl( \frac{1}{1+\beta\theta ( \Delta t ) } \biggr) \bigl\Vert u_{i,h}^{n,m+1}-u_{i,h}^{\infty,m+1}\bigr\Vert _{1}, $$
and hence, by induction,
$$ \bigl\Vert u_{i,h}^{n,m+1}-u_{i,h}^{\infty,m+1}\bigr\Vert _{1}\leq \biggl( \frac{1}{1+\beta\theta ( \Delta t ) } \biggr) ^{n}\bigl\Vert u_{i,h}^{\infty,m+1}-u_{i,h_{0}}\bigr\Vert _{1}. $$
 □
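In scalar form this geometric bound can be checked directly: for the semi-implicit scheme one has \(\lambda=1/(\theta\Delta t)\), so the contraction factor \(\lambda/(\beta+\lambda)\) equals exactly \(1/(1+\beta\theta\Delta t)\). A minimal sketch of ours, with hypothetical parameter values:

```python
# Scalar check of the geometric bound in Proposition 4: with
# lam = 1/(theta*dt), one step of u -> (f + lam*u)/(beta + lam) contracts the
# error by rho = 1/(1 + beta*theta*dt), so after n steps the distance to the
# asymptotic solution u_inf = f/beta is bounded by rho**n * |u_inf - u0|.
beta, theta, dt, f = 1.0, 0.5, 0.1, 3.0
lam = 1.0 / (theta * dt)
u_inf = f / beta
rho = 1.0 / (1.0 + beta * theta * dt)
u0, u = 0.0, 0.0
ok = True
for n in range(1, 51):
    u = (f + lam * u) / (beta + lam)
    ok = ok and abs(u - u_inf) <= rho**n * abs(u_inf - u0) + 1e-12
```

In this scalar model the bound holds with equality at every step, up to floating-point rounding.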

Now we estimate, in the \(H_{0}^{1}\)-norm, the difference between \(u ( T,x ) \), the discrete solution computed at time \(T=n\Delta t\), and \(u^{\infty}\), the asymptotic continuous solution of (2.1).

Theorem 3

Under the previous hypotheses, notations, and results, we have
$$\begin{aligned} \bigl\Vert u_{i,h}^{n,m+1}-u^{\infty}\bigr\Vert _{1} \leq& C\biggl[ \bigl\Vert u_{1,h}^{k,m+1}-u_{2,h}^{k,m} \bigr\Vert _{W_{1}}+\bigl\Vert u_{2,h}^{k,m}-u_{1,h}^{k,m-1} \bigr\Vert _{W_{2}}+\bigl\Vert e_{1,h}^{k,m}\bigr\Vert _{W_{1}} \\ &{}+\bigl\Vert e_{2,h}^{k,m-1}\bigr\Vert _{W_{2}}+ \biggl( \frac {1}{1+\beta \theta ( \Delta t ) } \biggr) ^{n}\biggr] \end{aligned}$$
(5.5)
and
$$ \bigl\Vert u_{i,h}^{n,m+1}-u^{\infty}\bigr\Vert _{1}\leq C \biggl[ h^{2}\vert \log h\vert + \biggl( \frac{1}{1+\beta\theta ( \Delta t ) } \biggr) ^{n} \biggr] . $$
(5.6)

Proof

Using Theorem 2 and Proposition 4, we get (5.5) and using (2.11) and Proposition 4 we get (5.6). □
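Estimate (5.6) balances a spatial term \(h^{2}\vert\log h\vert\) against a geometric term \((1/(1+\beta\theta\Delta t))^{n}\). As an illustration (our sketch, with all constants set to 1 and hypothetical parameter values), one can compute the smallest number of time steps n for which the geometric term is dominated by the spatial one:

```python
# Balancing the two terms of (5.6): find the smallest n with
# rho**n <= h^2 * |log h|, where rho = 1/(1 + beta*theta*dt).
# All constants are hypothetically set to 1 for illustration.
import math

beta, theta, dt = 1.0, 0.5, 0.1
rho = 1.0 / (1.0 + beta * theta * dt)
steps = {}
for h in (0.1, 0.05, 0.025):
    space = h**2 * abs(math.log(h))
    n = math.ceil(math.log(space) / math.log(rho))
    steps[h] = n
```

As expected, the required number of time steps grows only logarithmically as the mesh is refined, since the time error decays geometrically.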

6 Conclusion

In this paper, a posteriori error estimates for the generalized overlapping domain decomposition method with Robin boundary conditions on the interfaces, applied to a parabolic variational equation with second order boundary value problems, were derived using the semi-implicit time scheme combined with a finite element spatial approximation. Furthermore, an asymptotic behavior result in the \(H_{0}^{1}\)-norm was obtained using the Bensoussan-Lions algorithm. In future work, the geometric convergence of both the continuous and discrete error estimates for linear elliptic PDEs corresponding to the Schwarz algorithms will be established, and the results of numerical experiments will be presented to support the theory.

Declarations

Acknowledgements

The authors wish to thank the anonymous referees and the handling editor for their useful remarks and their careful reading of the proofs presented in this paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, College of Science and Arts, Al-Qassim University
(2)
Laboratory of Fundamental and Applied Mathematics, Oran University 1
(3)
Department of Mathematics, Faculty of Sciences, Annaba University
(4)
Department of Mathematics, Faculty of Sciences, El’oued University

References

  1. Badea, L: On the Schwarz alternating method with more than two subdomains for monotone problems. SIAM J. Numer. Anal. 28(1), 179-204 (1991)
  2. Boulaaras, S, Haiour, M: Overlapping domain decomposition methods for elliptic quasi-variational inequalities related to impulse control problem with mixed boundary conditions. Proc. Indian Acad. Sci. Math. Sci. 121(4), 481-493 (2011)
  3. Nataf, F: Recent developments on optimized Schwarz methods. In: Domain Decomposition Methods in Science and Engineering XVI. Lecture Notes in Computational Science and Engineering, vol. 55, pp. 115-125. Springer, Berlin (2007)
  4. Lions, PL: On the Schwarz alternating method. II. Stochastic interpretation and order properties. In: Domain Decomposition Methods (Los Angeles, Calif, 1988), pp. 47-70. SIAM, Philadelphia (1989)
  5. Otto, F-C, Lube, G: A posteriori estimates for a non-overlapping domain decomposition method. Computing 62(1), 27-43 (1999)
  6. Douglas, J Jr., Huang, C-S: An accelerated domain decomposition procedure based on Robin transmission conditions. BIT Numer. Math. 37(3), 678-686 (1997)
  7. Engquist, B, Zhao, H-K: Absorbing boundary conditions for domain decomposition. Appl. Numer. Math. 27(4), 341-365 (1998)
  8. Lions, P-L: On the Schwarz alternating method. I. In: Glowinski, R, Golub, GH, Meurant, GA, Périaux, J (eds.) First International Symposium on Domain Decomposition Methods for Partial Differential Equations, pp. 1-42. SIAM, Philadelphia (1988)
  9. Chan, TF, Hou, TY, Lions, P-L: Geometry related convergence results for domain decomposition algorithms. SIAM J. Numer. Anal. 28(2), 378-391 (1991)
  10. Quarteroni, A, Valli, A: Domain Decomposition Methods for Partial Differential Equations. Clarendon, Oxford (1999)
  11. Toselli, A, Widlund, O: Domain Decomposition Methods - Algorithms and Theory. Springer Series in Computational Mathematics, vol. 34. Springer, Berlin (2005)
  12. Maday, Y, Magoulès, F: Improved ad hoc interface conditions for Schwarz solution procedure tuned to highly heterogeneous media. Appl. Math. Model. 30(8), 731-743 (2006)
  13. Maday, Y, Magoulès, F: A survey of various absorbing interface conditions for the Schwarz algorithm tuned to highly heterogeneous media. In: Domain Decomposition Methods: Theory and Applications. Gakuto International Series: Mathematical Sciences Applications, vol. 25, pp. 65-93. Gakkotosho, Tokyo (2006)
  14. Farhat, C, Le Tallec, P: Vista in domain decomposition methods. Comput. Methods Appl. Mech. Eng. 184(2-4), 143-520 (2000)
  15. Magoulès, F, Rixen, D: Domain decomposition methods: recent advances and new challenges in engineering. Comput. Methods Appl. Mech. Eng. 196(8), 1345-1346 (2007)
  16. Ainsworth, M, Oden, JT: A Posteriori Error Estimation in Finite Element Analysis. Wiley-Interscience, New York (2000)
  17. Verfürth, R: A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques. Wiley-Teubner, Stuttgart (1996)
  18. Bernardi, C, Chacón Rebollo, T, Chacón Vera, E, Franco Coronil, D: A posteriori error analysis for two-overlapping domain decomposition techniques. Appl. Numer. Math. 59(6), 1214-1236 (2009)
  19. Lions, PL: On the Schwarz alternating method. I. In: First International Symposium on Domain Decomposition Methods for Partial Differential Equations (Paris, 1987), pp. 1-42. SIAM, Philadelphia (1988)
  20. Benlarbi, H, Chibi, A-S: A posteriori error estimates for the generalized overlapping domain decomposition methods. J. Appl. Math. 2012, Article ID 947085 (2012)
  21. Boulbrachene, M, Al Farei, Q: Maximum norm error analysis of a nonmatching grids finite element method for linear elliptic PDEs. Appl. Math. Comput. 238(7), 21-29 (2014)

Copyright

© Boulaaras et al. 2015