

Open Access

New approximation methods for solving elliptic boundary value problems via Picard-Mann iterative processes with mixed errors

Boundary Value Problems 2017, 2017:184

https://doi.org/10.1186/s13661-017-0914-6

Received: 8 August 2017

Accepted: 29 November 2017

Published: 13 December 2017

Abstract

In this paper, we introduce and study a class of new Picard-Mann iterative methods with mixed errors for common fixed points of two different nonexpansive and contraction operators. We also give a convergence and stability analysis of the new Picard-Mann iterative approximation and propose numerical examples to show that the new Picard-Mann iteration converges more effectively than the Picard iterative process, the Mann iterative process, the Picard-Mann iterative process due to Khan, and other related iterative processes. Furthermore, as an application, we explore iterative approximation of solutions for an elliptic boundary value problem in Hilbert spaces by using the new Picard-Mann iterative methods with mixed errors for contraction operators.

Keywords

new Picard-Mann approximation; common fixed point; nonexpansion and contraction; elliptic boundary value problem; convergence and stability

MSC

47H10; 54H25; 47H09

1 Introduction

In order to find a weak solution of the following elliptic boundary value problem (the so-called Dirichlet problem):
$$\begin{aligned} \textstyle\begin{cases} -\Delta u=f(x, u),& x \in\Omega,\\ u(x)=0,& x \in\partial\Omega, \end{cases}\displaystyle \end{aligned}$$
(1.1)
where \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain and \(f: \Omega \times\mathbb{R}\rightarrow\mathbb{R}\) is a Carathéodory function, Ayadi et al. [1] proved a new global minimization theorem in Hilbert spaces by using the notion of a nonexpansive potential operator. It is well known that multidimensional dynamical systems are frequently formulated by partial differential equations, which generally depend on both space and time (i.e., parabolic or evolutionary type equations) and are treated with emphasis on various real-world applications in (thermo)mechanics of solids and fluids, electrical devices, engineering, chemistry, biology, etc. (see [2, 3]). However, under suitable conditions, the time-dependent form of a partial differential equation can be rewritten in a time-independent form (see [4, Example 4, p. 161]), and some special cases of the Dirichlet problem (1.1) represent elliptic variational forms of second-order equations (see [2, 5]). Thus, the nonlinear elliptic problem (1.1) has been studied via fixed point index theory, critical point theory, Morse theory, variational inequality theory and so on; see, for example, [4, 5] and the references therein. Boundary value problems are also widely used in physics. In 1997, Marin [6] established necessary and sufficient conditions for the existence and uniqueness of the weak solution to the mixed boundary value problem in the domain of dipolar bodies with voids. Later, Marin and Vlase [7] showed that the existence of internal state variables has no effect on the uniqueness of the solution associated with the mixed initial boundary value problem in thermoelasticity of microstretch bodies (see [7, Theorems 1-3, p. 248]); that work also contains a proof of the uniqueness of the solution and some useful estimates.

In particular, many problems in physics and other applications cannot be formulated as equations but have a more complicated structure, usually that of a so-called complementarity problem, which is equivalent to a variational inequality. Further, the applicability of variational inequality theory, which was initially developed to cope with equilibrium problems (e.g., the Signorini problem, first posed by Antonio Signorini in 1959), has been extended to problems in economics, finance, electrodynamics, mechanics, engineering science, optimization and game theory. Hence, the variational method is very important in optimal control theory, and such generalizations are often needed in the optimal control of elliptic problems. In fact, optimal control problems seek a control that steers the initial state of the controlled object to a desired terminal state while the objective function reaches its maximum or minimum. For more details on variational inequalities in the context of their optimal control, one can refer to [25] and the references therein, and to the following examples.

Example 1.1

([8])

Consider the following optimal boundary control problem of elliptic equation constraints:
$$\begin{aligned} \min J(u), \end{aligned}$$
(1.2)
where the state variable \(y(u)\in V=H^{1}(\Omega)\) (the state space) and the control variable \(u\in U=L^{2} (\partial\Omega)\) (the control space) satisfy
$$\begin{aligned} \textstyle\begin{cases} -\Delta y=f, & x\in\Omega,\\ \frac{\partial y}{\partial\overrightarrow{n}}=u+g,& x\in\Gamma _{N},\\ y=y_{d}, & x\in\Gamma_{D}, \end{cases}\displaystyle \end{aligned}$$
(1.3)
where \(\Omega\subset\mathbb{R}^{2}\) is a bounded convex region with smooth boundary \(\partial\Omega\), \(\Gamma_{N}\) and \(\Gamma_{D}\) are the Neumann and Dirichlet boundaries, respectively, \(\partial\Omega =\Gamma_{N}\cup\Gamma_{D}\), \(\Gamma_{N}\cap\Gamma_{D}=\emptyset \), \(\overrightarrow{n}\) is the unit outward normal vector to \(\partial\Omega\), and f, g and \(y_{d}\) are given functions.
Define the objective function in (1.2) by
$$ J(u)=\frac{1}{2} \biggl\{ \int_{\Omega}{\gamma\bigl\vert y(x)-y_{0}(x)\bigr\vert ^{2}\,dx}+ \int_{\Gamma_{N}}{\alpha\bigl\vert u(x)\bigr\vert ^{2}\,ds} \biggr\} , $$
(1.4)
where \(y_{0}(x)\) is a given target state variable and \(\gamma, \alpha>0\) are two constants, which balance the state variable y against the control variable u. By optimal control theory [9] and the definition of the directional derivative of a functional, solving the optimal problem (1.2) on a convex set U is equivalent to finding a control variable \(u\in U\) such that the following variational inequality holds:
$$ \bigl\langle J^{\prime}(u), v-u\bigr\rangle = \int_{\Gamma_{N}}{(\alpha u+p) (v-u)\,ds}\geq0,\quad\forall v\in U, $$
(1.5)
where p is the dual state variable of y and satisfies the following state equation:
$$\begin{aligned} \textstyle\begin{cases} -\Delta p=\gamma(y-y_{0}), & x\in\Omega, \\ \frac{\partial p}{\partial\overrightarrow{n}}=0, & x\in\Gamma_{N}, \\ p=0, & x\in\Gamma_{D}, \end{cases}\displaystyle \end{aligned}$$
(1.6)
which is the dual problem of (1.3). Inequality (1.5) is called the optimality condition for the optimal control problems (1.2) and (1.3), and it is equivalent to the equation system composed of (1.3), (1.5) and (1.6).

Based on the above analysis of dualization and the optimality condition for the optimal control problems (1.2) and (1.3), Liu and Sun [8] introduced and studied an iterative non-overlapping domain decomposition method for (1.3)-(1.6) and proved convergence of the sequence generated by the iterative method. Furthermore, problems such as (1.2) or (1.3) have been considered by many authors using iterative algorithms based on the penalized gradient projection method, the adaptive finite element method, the edge stabilization Galerkin method, the variational iteration method, etc. See, for example, [9-16] and the references therein. Notably, Zhou and Li [17] pointed out ‘though much achievement has been achieved, application of the variational iteration method to Cauchy problems has not yet been dealt with’.

On the other hand, in order to compare with the Picard, Mann and Ishikawa iterations for approximating fixed points and to solve equation systems, Khan [18] introduced and studied a Picard-Mann hybrid iterative process and showed that it converges faster than the Picard, Mann and Ishikawa iterative processes for contractions. Following the work of Khan [18], by using an up-to-date method for approximating common fixed points of countable families of nonlinear operators, Deng [19] introduced a modified Picard-Mann hybrid iterative algorithm for a sequence of nonexpansive mappings and established strong and weak convergence of the iterative sequence generated by the modified hybrid algorithm in a convex Banach space. Okeke and Abbas [20] introduced and studied Picard-Krasnoselskii hybrid iterations, which converge faster than the Picard, Mann, Krasnoselskii and Ishikawa iterative processes for contractive nonlinear operators, and gave an application to delay differential equations. However, the Picard-Krasnoselskii hybrid iteration is a special case of the Picard-Mann hybrid iterative process due to Khan [18], because \(\alpha_{n}\in(0,1)\) of (1.4) in [18] includes \(\lambda\in(0,1)\) in (1.7) of [20] (see [20, Example 2.2, p. 25]). Jiang et al. [21] proved convergence of Mann iterative sequences for approximating solutions of a higher order nonlinear neutral delay differential equation and illustrated the advantages of the presented results through three examples. However, how to establish error estimates between the approximate solutions and the exact solutions for partial differential equations has not been reported in the literature.

Moreover, Roussel [22] pointed out that equilibria are not always stable. Since stable and unstable equilibria play quite different roles in the dynamics of a system, it is useful to be able to classify equilibrium points based on their stability. Thus, many scholars and researchers have discussed the stability of iterative sequences generated by algorithms for solving the investigated problems; see, for example, [23-27] and the references therein. In particular, stimulated by the work of Bosede and Rhoades [28], Akewe and Okeke [27] obtained stability results for the Picard-Mann hybrid iterative scheme due to Khan [18] for a general class of contractive-like operators introduced by Bosede and Rhoades [28]. However, how does one carry out a stability analysis when the Picard-Mann hybrid iterative scheme due to Khan [18] is generalized to two different nonexpansive and contraction operators and involves errors or mixed errors? This is a significant and challenging research problem.

Motivated and inspired by the above works, we aim in this paper to introduce and study a class of new Picard-Mann iterative methods with mixed errors for common fixed points of two different nonexpansive and contraction operators. Then a convergence and stability analysis of the new Picard-Mann iterative approximation is given. Finally, two numerical examples are presented to verify the effectiveness of the new Picard-Mann iteration, and a new iterative approximation of solutions for an elliptic boundary value problem in Hilbert spaces is investigated by using the new Picard-Mann iterative methods with mixed errors for nonexpansive operators, which differs from the method proposed in [1].

2 New Picard-Mann approximation methods

In this section, we shall introduce and study a class of new Picard-Mann iterative methods with mixed errors for common fixed points of two different nonexpansive and contraction operators and prove convergence and stability of the new Picard-Mann iterative approximation.

We need the following definitions and lemmas for our main results.

Definition 2.1

Let X be a normed space and \(K\subset X\) be a nonempty subset. Then an operator \(T: K\to K\) is said to be
  1. (i)
    nonexpansive if
    $$\begin{aligned} \Vert Tu-Tv\Vert \le \Vert u-v\Vert , \quad \forall u, v\in K; \end{aligned}$$
    (2.1)
     
  2. (ii)
    contraction if there exists a constant \(k\in[0, 1)\) such that
    $$\begin{aligned} \Vert Tu-Tv\Vert \le k\Vert u-v\Vert , \quad \forall u, v\in K. \end{aligned}$$
    (2.2)
     

Remark 2.1

The constant k in Definition 2.1(ii) is called the contraction (Lipschitz) constant of T, and contraction operators are Lipschitzian. If condition (2.2) is only satisfied with \(k=1\), then the operator T is nonexpansive.

Definition 2.2

Let S be a self-map of the normed space X, \(x_{0}\in X\), and let \(x_{n+1}=h(S, x_{n})\) define an iteration procedure which yields a sequence of points \(\{x_{n}\}\subset X\). Suppose that \(\{x\in X: Sx=x\}\neq\emptyset\) and that \(\{x_{n}\}\) converges to a fixed point \(x^{*}\) of S. Let \(\{w_{n}\}\subset X\) and let \(\varepsilon_{n}=\Vert w_{n+1}-h(S, w_{n})\Vert \). If \(\lim_{n\to\infty}\varepsilon_{n}=0\) implies that \(w_{n}\to x^{*}\), then the iteration procedure defined by \(x_{n+1}=h(S, x_{n})\) is said to be S-stable, or stable with respect to S.
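Definition 2.2 can be illustrated for the plain Picard procedure \(x_{n+1}=Sx_{n}\) with a contraction S. In the sketch below, the operator S and the perturbation \(2^{-n}\) are illustrative choices, not taken from the paper:

```python
S = lambda x: 0.5 * x + 1.0       # contraction with constant 1/2; fixed point x* = 2
w = 10.0                          # start of an arbitrary perturbed sequence {w_n}
for n in range(1, 200):
    w_next = S(w) + 1.0 / 2 ** n  # w_{n+1} deviates from h(S, w_n) by eps_n
    eps = abs(w_next - S(w))      # eps_n = ||w_{n+1} - h(S, w_n)|| = 2^{-n} -> 0
    w = w_next
print(w)  # close to x* = 2: the Picard procedure is S-stable in this example
```

Since \(\varepsilon_{n}=2^{-n}\to0\) and the perturbed sequence still reaches the fixed point, this run is consistent with S-stability.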

Lemma 2.1

([29])

Let X be a normed space and C be a nonempty closed convex bounded subset of X. Then each nonexpansive operator \(T: C\to C\) has a fixed point in C.

Lemma 2.2

([30])

Let \(\{a_{n}\}\), \(\{b_{n}\}\), \(\{ c_{n}\}\) be three nonnegative real sequences satisfying
$$\begin{aligned} a_{n+1}\leq ( 1-t_{n} ) a_{n}+t_{n}b_{n}+c_{n}, \end{aligned}$$
(2.3)
where \(t_{n}\in[0, 1]\), \(\sum_{n=0}^{\infty}t_{n}=\infty\), \(\lim_{n\to\infty}b_{n}=0\), \(\sum_{n=0}^{\infty}c_{n} <\infty\). Then \(a_{n}\to0\) (\(n\to\infty\)).
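Lemma 2.2 is easy to check numerically. The following sketch iterates the recursion (2.3) with equality (the worst case the lemma allows); the particular sequences \(t_{n}=1/n\), \(b_{n}=1/n\), \(c_{n}=1/n^{2}\) are illustrative choices satisfying the hypotheses, not taken from the paper:

```python
# Numerical check of Lemma 2.2: a_{n+1} <= (1 - t_n) a_n + t_n b_n + c_n
# with t_n in [0, 1], sum t_n = infinity, b_n -> 0, sum c_n < infinity.
a = 10.0                      # a_0: any nonnegative starting value
for n in range(1, 200001):
    t = 1.0 / n               # sum of t_n diverges
    b = 1.0 / n               # b_n -> 0
    c = 1.0 / n ** 2          # sum of c_n converges
    a = (1 - t) * a + t * b + c   # take equality, the worst admissible case
print(a)                      # small: a_n -> 0, as the lemma predicts
```

With these choices the recursion decays roughly like \(\ln n/n\): convergence is slow but guaranteed, which is the behavior one should expect later from Mann-type damping \(\alpha_{n}=1/n\).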

Now, we establish a class of new Picard-Mann iterations with mixed errors for common fixed points of two different nonlinear operators (in short, (PMMD)) as follows.

Algorithm 2.1

Step 1. Choose \(x_{0}\) in a normed space X.

Step 2. Let
$$\begin{aligned} \textstyle\begin{cases} x_{n+1}=T_{1}y_{n}+h_{n},\\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}T_{2}x_{n}+\alpha_{n}d_{n}+e_{n}, \end{cases}\displaystyle \end{aligned}$$
(2.4)
where \(T_{1}, T_{2}: X\to X\) are two nonlinear operators, and \(h_{n}, d_{n}, e_{n} \in X\) are errors to take into account a possible inexact computation of the operator points.
Step 3. Choose sequences \(\{\alpha_{n}\}\), \(\{h_{n}\}\), \(\{d_{n}\}\) and \(\{e_{n}\}\) such that for \(n\ge0\), \(\{\alpha_{n}\}\subset[0, 1]\) and \(\{h_{n}\}\), \(\{d_{n}\}\), \(\{e_{n}\}\) are three sequences in X satisfying the following conditions P:
  1. (i)

    \(d_{n}=d_{n}^{\prime}+d_{n}^{\prime\prime}\);

     
  2. (ii)

    \(\lim_{n\to\infty} \Vert d_{n}^{\prime} \Vert =0\);

     
  3. (iii)

    \(\sum_{n=0}^{\infty} \Vert h_{n}\Vert <\infty\), \(\sum_{n=0}^{\infty} \Vert d_{n}^{\prime\prime} \Vert <\infty\), \(\sum_{n=0}^{\infty} \Vert e_{n}\Vert <\infty\).

     

Step 4. If \(x_{n+1}\), \(y_{n}\), \(\alpha_{n}\), \(h_{n}\), \(d_{n}\) and \(e_{n}\) satisfy (2.4) to sufficient accuracy, go to Step 5; otherwise, set \(n: =n+1\) and return to Step 2.

Step 5. Let \(\{w_{n}\}\) be any sequence in X and define \(\{ \varepsilon_{n}\}\) by
$$ \textstyle\begin{cases} \varepsilon_{n}=\Vert w_{n+1}- ( T_{1}\xi_{n}+h_{n} ) \Vert , \\ \xi_{n}=(1-\alpha_{n})w_{n}+\alpha_{n} T_{2} w_{n}+\alpha_{n}d_{n}+e_{n}. \end{cases} $$
(2.5)

Step 6. If \(\varepsilon_{n}\), \(w_{n+1}\), \(\xi_{n}\), \(\alpha_{n}\), \(h_{n}\), \(d_{n}\) and \(e_{n}\) satisfy (2.5) to sufficient accuracy, stop; otherwise, set \(n:=n+1\) and return to Step 3.
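As a minimal sketch, the computational core of Algorithm 2.1 (Steps 1-4) may be written as follows; the operators \(T_{1}\), \(T_{2}\) and the error sequences in the demo call are placeholder assumptions chosen to satisfy conditions P, not taken from the paper:

```python
def pmmd(T1, T2, x0, alpha, h, d, e, n_max=100):
    """New Picard-Mann iteration with mixed errors (2.4):
        y_n     = (1 - alpha_n) x_n + alpha_n T2 x_n + alpha_n d_n + e_n,
        x_{n+1} = T1 y_n + h_n.
    alpha, h, d, e are callables n -> value (step sizes and error terms)."""
    x = x0
    for n in range(1, n_max + 1):
        a = alpha(n)
        y = (1 - a) * x + a * T2(x) + a * d(n) + e(n)
        x = T1(y) + h(n)
    return x

# Placeholder demo: T1 is the identity (nonexpansive), T2 is a contraction
# with constant 1/2; their common fixed point is 0.  The error sequences
# are summable, so conditions P hold.
x_star = pmmd(T1=lambda x: x,
              T2=lambda x: 0.5 * x,
              x0=10.0,
              alpha=lambda n: 0.5,
              h=lambda n: 2.0 ** (-n),
              d=lambda n: 1.0 / n ** 2,
              e=lambda n: 3.0 ** (-n))
print(abs(x_star))  # close to the common fixed point 0
```

Steps 5-6 (the stability check via \(\varepsilon_{n}\)) would wrap the same update applied to an arbitrary sequence \(\{w_{n}\}\).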

Remark 2.2

For special choices of the operators \(T_{1}\) and \(T_{2}\), the space X, and the errors \(h_{n}\), \(d_{n}\) and \(e_{n}\) in (2.4), one can obtain a number of known iterations, such as the Picard iterative process, the Mann iterative process, the Picard-Mann iterative process due to Khan [18] and other related iterations. We now list some special cases of iteration (2.4).
  • Special Case I If \(h_{n}=d_{n}=e_{n}=0\), the iterative process (2.4) becomes the following Picard-Mann iteration for two different operators (in short, (PMD)): For any given \(x_{0}\in X\),
    $$\begin{aligned} \textstyle\begin{cases} x_{n+1}=T_{1}y_{n}, \\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}T_{2}x_{n}. \end{cases}\displaystyle \end{aligned}$$
    (2.6)
  • Special Case II When \(T_{1}=T_{2}=T\), for any given \(x_{0}\in X\), iteration (2.4) reduces to the sequence \(\{x_{n}\}\) defined by
    $$\begin{aligned} \textstyle\begin{cases} x_{n+1}=Ty_{n}+h_{n}, \\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}Tx_{n}+\alpha_{n}d_{n}+e_{n}. \end{cases}\displaystyle \end{aligned}$$
    (2.7)
    We note that the iterative processes (PMD) and the Picard-Mann iteration with mixed errors (2.7) (in short, (PMM)) are new and not studied in the literature.
  • Special Case III If \(T_{1}=T_{2}=T\), then (2.6) reduces to
    $$\begin{aligned} \textstyle\begin{cases} x_{n+1}=Ty_{n}, \\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}Tx_{n}, \end{cases}\displaystyle \end{aligned}$$
    (2.8)
    which is the Picard-Mann iterative process (in short, (PM)) studied by Khan [18] when \(\alpha_{n}\in(0, 1)\). We note that (PM) can be obtained from (2.7) if \(h_{n}=d_{n}=e_{n}=0\) for all \(n\ge0\). Further, the iterative process (2.8) reduces to the Picard-Krasnoselskii hybrid iteration studied by Okeke and Abbas [20] when \(\alpha_{n}=\lambda\in(0, 1)\). As Khan [18] pointed out, the iteration (2.8) is independent of the Picard and Mann iterative processes if \(\{\alpha_{n}\}\subset(0, 1)\). However, one can easily see that the iterative process (2.8) reduces to the Picard iteration when \(\alpha_{n}=0\) and to a special case of the Ishikawa iterative process when \(\alpha_{n}=1\).
  • Special Case IV When \(T_{1}=I\), the identity operator, for any given \(x_{0}\in X\), the iteration (PMD) defined by (2.6) can be written as
    $$\begin{aligned} x_{n+1}=(1-\alpha_{n})x_{n}+\alpha_{n}T_{2}x_{n}, \end{aligned}$$
    (2.9)
    which is the Mann iterative process (in short, (MI)) for \(\alpha _{n}\in[0, 1]\).

Based on Lemma 2.1 and the existence of fixed points of contraction operators, in the sequel we will prove convergence and stability of the new Picard-Mann iterative processes with mixed errors generated by Algorithm 2.1.

Theorem 2.1

Let X be a normed space and \(C\subset X\) be a nonempty closed convex bounded subset. Let \(T_{1}: C\to C\) be nonexpansive and \(T_{2}: C\to C\) be a contraction operator with constant \(\theta\in[0, 1)\). Suppose that \(F(T_{1}\cap T_{2}):=\{x\in C: T_{i}x=x, i=1, 2\}\neq\emptyset\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\). Then
  1. (i)
    the iterative sequence \(\{x_{n}\}\) generated by (PMMD) in Algorithm  2.1 converges to \(x^{*}\in F(T_{1}\cap T_{2})\) with convergence rate
    $$\begin{aligned} \vartheta=1-\hat{\alpha}(1-\theta)< 1, \end{aligned}$$
    (2.10)
    where \(\hat{\alpha}=\limsup_{n\to\infty}\alpha_{n}\in(0, 1]\);
     
  2. (ii)
    if, in addition, there exists \(\alpha>0\) such that \(\alpha_{n}\ge\alpha\) for all \(n\geq 0\), then for any sequence \(\{w_{n}\}\subset X\),
    $$\begin{aligned} \lim_{n\to\infty}w_{n}=x^{*} \quad\textit{if and only if}\quad\lim_{n\to\infty}\varepsilon_{n}=0, \end{aligned}$$
    (2.11)
    where \(\varepsilon_{n}\) is defined by (2.5).
     

Proof

It follows from (2.4) that
$$\begin{aligned} &\bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\ &\quad \le\bigl\Vert y_{n}-x^{*}\bigr\Vert +\Vert h_{n}\Vert \\ &\quad \le(1-\alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert +\alpha_{n}\bigl\Vert T_{2} x_{n}-x^{*} \bigr\Vert \\ &\quad \quad{}+\alpha_{n}\bigl(\bigl\Vert d_{n}^{\prime} \bigr\Vert +\bigl\Vert d_{n}^{\prime\prime}\bigr\Vert \bigr)+ \Vert e_{n}\Vert +\Vert h_{n}\Vert \\ &\quad \le(1-\alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert +\alpha _{n}\theta\bigl\Vert x_{n}-x^{*} \bigr\Vert \\ &\quad \quad{}+\alpha_{n}\bigl\Vert d_{n}^{\prime}\bigr\Vert +\bigl(\bigl\Vert d_{n}^{\prime\prime}\bigr\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert \bigr) \\ &\quad =\vartheta_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert +(1-\theta)\alpha _{n}\cdot\frac{1}{1-\theta}\bigl\Vert d_{n}^{\prime}\bigr\Vert \\ &\quad \quad{}+\bigl(\bigl\Vert d_{n}^{\prime\prime}\bigr\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert \bigr), \end{aligned}$$
(2.12)
where \(\vartheta_{n}=1-(1-\theta)\alpha_{n}\). Since \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\), by Lemma 2.2 and (2.12) we conclude that \(\Vert x_{n}-x^{*}\Vert \to 0\) as \(n\to\infty\); that is, the sequence \(\{x_{n}\}\) converges to \(x^{*}\).
Further, by (2.12), we have
$$\begin{aligned} \limsup_{n\to\infty}\vartheta_{n}=1-\hat{\alpha}(1-\theta), \end{aligned}$$
(2.13)
where \(\hat{\alpha}=\limsup_{n\to\infty}\alpha_{n}\).
Next, we prove the conclusion (ii). Since \(0<\alpha\leq\alpha_{n}\), it follows from the proof of inequality (2.12) and (2.5) that
$$\begin{aligned} &\bigl\Vert T_{1}\xi_{n}+h_{n}-x^{*} \bigr\Vert \\ &\quad \le \bigl[ 1-(1-\theta)\alpha_{n} \bigr] \bigl\Vert w_{n}-x^{*}\bigr\Vert +\alpha_{n}\bigl\Vert d_{n}^{\prime}\bigr\Vert +\bigl(\bigl\Vert d_{n}^{\prime\prime}\bigr\Vert +\Vert e_{n}\Vert + \Vert h_{n}\Vert \bigr), \end{aligned}$$
(2.14)
and
$$\begin{aligned} &\bigl\Vert w_{n+1}-x^{*}\bigr\Vert \\ &\quad \le\bigl\Vert T_{1}\xi_{n}+h_{n}-x^{*} \bigr\Vert +\varepsilon _{n} \\ &\quad \le \bigl[ 1-(1-\theta)\alpha_{n} \bigr] \bigl\Vert w_{n}-x^{*}\bigr\Vert +\alpha_{n}\bigl\Vert d_{n}^{\prime}\bigr\Vert +\varepsilon_{n} \\ &\quad \quad{}+\bigl(\bigl\Vert d_{n}^{\prime\prime}\bigr\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert \bigr) \\ &\quad \le \bigl[ 1-(1-\theta)\alpha_{n} \bigr] \bigl\Vert w_{n}-x^{*}\bigr\Vert +(1-\theta)\alpha_{n}\cdot \frac{1}{1-\theta } \biggl( \bigl\Vert d_{n}^{\prime}\bigr\Vert +\frac{\varepsilon _{n}}{\alpha} \biggr) \\ &\quad \quad{}+\bigl(\bigl\Vert d_{n}^{\prime\prime}\bigr\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert \bigr). \end{aligned}$$
(2.15)
Let \(\lim_{n\to\infty}\varepsilon_{n}=0\). Then, by \(\sum_{n=0} ^{\infty}\alpha_{n}=\infty\), Lemma 2.2 and (2.15), we know that \(\lim_{n\to\infty}w_{n}=x^{*}\).
Conversely, if \(\lim_{n\rightarrow\infty}w_{n}=x^{*}\), then it follows from (2.14) and \(\alpha_{n}\le1\) that, for all \(n\ge0\),
$$\begin{aligned} \varepsilon_{n}&=\bigl\Vert w_{n+1}- ( T_{1} \xi_{n}+h_{n} ) \bigr\Vert \\ &\le\bigl\Vert w_{n+1}-x^{*}\bigr\Vert +\bigl\Vert T_{1}\xi _{n}+h_{n}-x^{*}\bigr\Vert \\ &\le\bigl\Vert w_{n+1}-x^{*}\bigr\Vert + \bigl[ 1-(1- \theta)\alpha _{n} \bigr] \bigl\Vert w_{n}-x^{*} \bigr\Vert \\ &\quad{}+\alpha_{n}\bigl\Vert d_{n}^{\prime}\bigr\Vert + \bigl( \bigl\Vert d_{n}^{\prime\prime}\bigr\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert \bigr) \\ &\le\bigl\Vert w_{n+1}-x^{*}\bigr\Vert +\bigl\Vert w_{n}-x^{*}\bigr\Vert + \bigl( \bigl\Vert d_{n}^{\prime}\bigr\Vert +\bigl\Vert d_{n}^{\prime\prime} \bigr\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert \bigr) , \end{aligned}$$
(2.16)
this implies that \(\varepsilon_{n} \to0\) as \(n\to\infty\). This completes the proof. □

Remark 2.3

(i) Since the errors in Algorithm 2.1 arise naturally when inexact computation of operator points is taken into account, the iterative process (2.4) (i.e., (PMMD)) is more realistic than the Picard iteration, the Mann iteration, the Picard-Mann iteration due to Khan [18] and so on. This can be observed visually in the numerical simulations below.

(ii) We note that the stability analysis in Theorem 2.1 has been little discussed in the literature. Akewe and Okeke [27] gave stability theorems for the Picard-Mann hybrid iterative scheme for a general class of contractive-like operators. However, compared with the stability analysis in [27], we use a different method to analyze stability and also extend the applicability of stability results for iterations.

(iii) According to inequality (2.12), one can obtain
$$\begin{aligned} &\bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\ &\quad \le\vartheta_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert +\bigl(\Vert d_{n}\Vert +\Vert e_{n}\Vert + \Vert h_{n}\Vert \bigr) \\ &\quad \le\vartheta_{n}\vartheta_{n-1}\bigl\Vert x_{n-1}-x^{*}\bigr\Vert +\vartheta_{n}\bigl(\Vert d_{n-1}\Vert +\Vert e_{n-1}\Vert +\Vert h_{n-1} \Vert \bigr) \\ &\quad \quad{}+\bigl(\Vert d_{n}\Vert +\Vert e_{n}\Vert + \Vert h_{n}\Vert \bigr) \\ &\quad \le\cdots \\ &\quad \le\prod_{i=1}^{n} \vartheta_{i} \bigl\Vert x_{1}-x^{*}\bigr\Vert +\sum _{k=1}^{n-1}\prod_{i=k+1}^{n} \vartheta_{i}\bigl(\Vert d_{k}\Vert +\Vert e_{k}\Vert +\Vert h_{k}\Vert \bigr) \\ &\quad \quad{}+\bigl(\Vert d_{n}\Vert +\Vert e_{n}\Vert + \Vert h_{n}\Vert \bigr), \end{aligned}$$
(2.17)
where \(\prod_{i=1}^{n} \vartheta_{i}=\vartheta_{1}\cdot\vartheta _{2}\cdot \cdots \cdot\vartheta_{n}\) and \(\vartheta_{i}\) is the same as in (2.12) for all \(i=1,2,\ldots,n\). As a matter of fact, \(\sum_{k=1}^{n-1}\prod_{i=k+1}^{n} \vartheta_{i}(\Vert d_{k}\Vert +\Vert e_{k}\Vert +\Vert h_{k}\Vert )+ (\Vert d_{n}\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert ) =o(\Vert d_{n}\Vert +\Vert e_{n}\Vert +\Vert h_{n}\Vert )\). Hence, the errors in (2.4) can help to adjust the iteration results and thus improve the algorithms by means of this higher-order infinitesimal sequence.

From Theorem 2.1 and Remark 2.1, we have the following result.

Theorem 2.2

Let \(C\subset X\) be a nonempty closed convex bounded subset of a normed space X, and let \(T: C\to C\) be a contraction operator with constant \(\theta\in[0, 1)\). If \(\{\alpha _{n}\}\subset[0, 1]\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\) and \(\{h_{n}\}\), \(\{d_{n}\}\), \(\{e_{n}\}\) are three sequences in X satisfying the conditions P, then
  1. (i)

    the iterative sequence \(\{x_{n}\}\) generated by (2.7) (that is, (PMM)) converges to \(p\in F(T):=\{x\in C: Tx=x\}\) with convergence rate \(\vartheta=1-\hat{\alpha}(1-\theta)<1\), where \(\hat{\alpha }=\limsup_{n\to\infty}\alpha_{n}\in(0, 1]\);

     
  2. (ii)
    if, in addition, there exists \(\alpha>0\) such that \(\alpha_{n}\ge\alpha\) for all \(n\geq 0\), then for any sequence \(\{z_{n}\}\subset X\),
    $$\begin{aligned} \lim_{n\to\infty}z_{n}=p \quad\Longleftrightarrow\quad\lim _{n\to \infty}\epsilon_{n}=0, \end{aligned}$$
    (2.18)
    where \(\epsilon_{n}\) is defined by
    $$\begin{aligned} \textstyle\begin{cases} \epsilon_{n}=\Vert z_{n+1}- ( Ts_{n}+h_{n} ) \Vert , \\ s_{n}=(1-\alpha_{n})z_{n}+\alpha_{n} Tz_{n}+\alpha_{n}d_{n}+e_{n}. \end{cases}\displaystyle \end{aligned}$$
    (2.19)
     

3 Numerical simulations and an application

In order to verify our main results presented in the above section, in this section, we give some numerical simulations and consider approximation of the elliptic boundary value problem (1.1) by using the new Picard-Mann iterative methods with mixed errors for contractive operators.

3.1 Numerical examples

We first give the following examples and their numerical simulations to verify Theorem 2.1 and Remark 2.3(iii) and to display the effectiveness of the new Picard-Mann iterative methods with mixed errors.

Example 3.1

Let \(X=\mathbb{R}\), \(1< k\le\frac {86}{49}\), \(C= [ -1, \frac{5}{2}+\frac{1}{2}\sqrt{\frac{135}{k-1}} ] \), \(T_{1}x=\frac{1}{\pi}\sin(\pi x)+8\) and \(T_{2}x=\sqrt {x^{2}-5x+40}\) for all \(x\in C\), and \(h_{n}=-\frac{5}{407^{n}}\), \(\alpha_{n}=\frac{1}{n}\), \(d_{n}=\frac{1}{n^{2}}+\frac{1}{n^{3}}\) and \(e_{n}=-\frac{14}{n^{7}}\) for \(n\ge1\). It is easy to see that \(T_{1}\) is nonexpansive and \(T_{2}\) is a contraction operator with constant \(\frac{1}{\sqrt{k}}\). In fact, for all \(x, y\in C\), we have
$$\begin{aligned} \Vert T_{1}x-T_{1}y\Vert &=\frac{1}{\pi}\bigl\Vert \sin (\pi x)-\sin(\pi y)\bigr\Vert \\ &\leq \frac{1}{\pi} \Vert \pi x-\pi y\Vert =\Vert x-y\Vert \end{aligned}$$
(3.1)
and
$$\begin{aligned} \Vert T_{2}x-T_{2}y\Vert &=\biggl\Vert \frac {(x-2.5)^{2}-(y-2.5)^{2}}{\sqrt{(x-2.5)^{2}+33.75}+\sqrt {(y-2.5)^{2}+33.75}}\biggr\Vert \\ &=\biggl\Vert \frac{(x-y)[(x-2.5)+(y-2.5)]}{\Vert x-2.5\Vert +\Vert y-2.5\Vert }\biggr\Vert \\ &\quad {}\cdot\frac{\Vert x-2.5\Vert +\Vert y-2.5\Vert }{\sqrt{(x-2.5)^{2}+33.75}+\sqrt{(y-2.5)^{2}+33.75}} \\ &\le\frac{1}{\sqrt{k}} \Vert x-y\Vert . \end{aligned}$$
(3.2)
Further, one can see that \(T_{1}\) is nonexpansive but not a contraction, and \(F(T_{1}\cap T_{2})= \{8\}\neq\emptyset\). Hence, the conditions in Theorem 2.1 and Algorithm 2.1 hold and the sequence \(\{x_{n}\}\) generated by (PMMD) can be rewritten as follows:
$$ (\mathrm{PMMD})\quad \textstyle\begin{cases} x_{n+1}=\frac{1}{\pi}\sin(\pi y_{n})+8-\frac{5}{407^{n}}, \\ y_{n}= ( 1-\frac{1}{n} ) x_{n}+\frac{1}{n}\sqrt {x_{n}^{2}-5x_{n}+40}+\frac{1}{n} ( \frac{1}{n^{2}}+\frac {1}{n^{3}} ) -\frac{14}{n^{7}}. \end{cases} $$
Moreover, the corresponding two special cases are listed as well.
$$\begin{aligned}& (\mathrm{PMD})\quad \textstyle\begin{cases} x_{n+1}= \frac{1}{\pi}\sin(\pi y_{n})+8, \\ y_{n}= ( 1-\frac{1}{n} ) x_{n}+\frac{1}{n}\sqrt{x_{n}^{2}-5x_{n}+40}, \end{cases}\displaystyle \\& (\mathrm{MI}) \quad x_{n+1}= \biggl( 1-\frac{1}{n} \biggr) x_{n}+\frac {1}{n}\sqrt{x_{n}^{2}-5x_{n}+40}. \end{aligned}$$
By Theorem 2.1, the sequence \(\{x_{n}\}\) generated by (PMMD) converges to \(x^{*}=8\). Further, in order to show the effectiveness of the new Picard-Mann iterative methods with mixed errors, the numerical simulation results for the sequences \(\{x_{n}\}\) generated by (PMMD), (PMD) and (MI), computed with MATLAB 7.0, are given in Figure 1 and Table 1; they require 70, more than 200, and more than 200 iterations, respectively.
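For readers who wish to reproduce Table 1, the (PMMD) scheme of Example 3.1 can be coded directly. The sketch below uses Python rather than the MATLAB 7.0 employed in the paper, and the iteration count of 70 follows Table 1:

```python
import math

def T1(x):  # nonexpansive operator of Example 3.1
    return math.sin(math.pi * x) / math.pi + 8.0

def T2(x):  # contraction operator of Example 3.1
    return math.sqrt(x * x - 5.0 * x + 40.0)

x = 25.0  # x_0, the starting value used in Table 1
for n in range(1, 71):
    y = ((1.0 - 1.0 / n) * x + T2(x) / n
         + (1.0 / n) * (1.0 / n ** 2 + 1.0 / n ** 3) - 14.0 / n ** 7)
    x = T1(y) - 5.0 / 407.0 ** n
print(x)  # close to the common fixed point x* = 8
```

The (PMD) and (MI) variants are obtained by dropping the error terms and the outer operator, respectively, as in the displays above.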
Figure 1

Iterative solutions of (PMMD), (PMD) and (MI).

Table 1

A comparison of the iterative processes (PMMD), (PMD) and (MI)

Iteration number    (PMMD)     (PMD)      (MI)
0                  25.0000   25.0000   25.0000
5                   7.9781    7.8794   21.0937
10                  7.9944    7.9102   20.0644
15                  7.9975    7.9249   19.4604
20                  7.9987    7.9340   19.0346
25                  7.9992    7.9403   18.7069
30                  7.9995    7.9451   18.4413
35                  7.9996    7.9489   18.2183
40                  7.9997    7.9519   18.0264
45                  7.9998    7.9545   17.8583
50                  7.9999    7.9567   17.7088
55                  7.9999    7.9585   17.5743
60                  7.9999    7.9602   17.4521
65                  7.9999    7.9617   17.3403
70                  8.0000    7.9630   17.2373
75                  8.0000    7.9642   17.1418
80                  8.0000    7.9652   17.0529
85                  8.0000    7.9662   16.9697
90                  8.0000    7.9671   16.8915
95                  8.0000    7.9679   16.8179
100                 8.0000    7.9687   16.7483
105                 8.0000    7.9694   16.6823

Remark 3.1

If the mixed errors are chosen properly, the iterative process (2.4) performs better than the other algorithms. From Figure 1 and Table 1, it is easy to see that the iterative process (PMMD) is effective and that the sequence \(\{x_{n}\}\) generated by (PMMD) converges much faster.

Next, we verify Theorem 2.2 by the following numerical example.

Example 3.2

Let \(X=\mathbb{R}\), constant \(1< l\le \frac{49}{25}\), \(C= [ -1, 4+2\sqrt{\frac{6}{l-1}} ] \), \(Tx=\sqrt{x^{2}-8x+40}\) for all \(x\in C\), and \(h_{n}=\frac{1}{10^{n}}\), \(\alpha_{n}=\frac{1}{2^{n}}\), \(d_{n}=\frac{1}{n}+\frac{1}{n^{2}}\) and \(e_{n}=-\frac{16}{n^{5}}\) for \(n\ge1\). Then, for all \(x, y\in C\), we have
$$\begin{aligned} \Vert Tx-Ty\Vert &=\biggl\Vert \frac {(x-4)^{2}-(y-4)^{2}}{\sqrt{(x-4)^{2}+24}+\sqrt{(y-4)^{2}+24}}\biggr\Vert \\ &=\biggl\Vert \frac{(x-y)[(x-4)+(y-4)]}{\Vert x-4\Vert +\Vert y-4\Vert }\biggr\Vert \\ &\quad{}\cdot\frac{\Vert x-4\Vert +\Vert y-4\Vert }{\sqrt{(x-4)^{2}+24}+\sqrt{(y-4)^{2}+24}} \\ &\le\frac{1}{\sqrt{l}} \Vert x-y\Vert , \end{aligned}$$
(3.3)
and so T is a contraction operator. Thus, we obtain the following iterative processes as two special cases of (PMMD):
$$\begin{aligned}& (\mathrm{PMM})\quad \textstyle\begin{cases} x_{n+1}= \sqrt{y_{n}^{2}-8y_{n}+40}+\frac{1}{10^{n}},\\ y_{n}= ( 1-\frac{1}{2^{n}} ) x_{n}+\frac{1}{2^{n}}\sqrt {x_{n}^{2}-8x_{n}+40}\\ \hphantom{y_{n}= }{} +\frac{1}{2^{n}} ( \frac{1}{n}+\frac{1}{n^{2}} ) -\frac{16}{n^{5}}, \end{cases}\displaystyle \\& (\mathrm{PM})\quad \textstyle\begin{cases} x_{n+1}= \sqrt{y_{n}^{2}-8y_{n}+40}, \\ y_{n}=(1-\frac{1}{2^{n}})x_{n}+\frac{1}{2^{n}}\sqrt{x_{n}^{2}-8x_{n}+40}. \end{cases}\displaystyle \end{aligned}$$
It follows from Theorem 2.2 that the sequence \(\{x_{n}\}\) generated by (PMM) converges to \(p=5\), which is the unique fixed point of T. Similarly, in order to compare (PMM) with (PM), the numerical simulations are displayed in Figure 2 and Table 2; they require 9 and 13 iterations, respectively. One can clearly see that the acceleration efficiency is 44.44%.
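The comparison between (PMM) and (PM) in Table 2 is likewise straightforward to reproduce. The following Python sketch (the paper's own computations were done in MATLAB) runs both schemes from the starting value \(x_{0}=25\) used in Table 2:

```python
import math

def T(x):  # contraction operator of Example 3.2
    return math.sqrt(x * x - 8.0 * x + 40.0)

def pmm(x, n_max):  # Picard-Mann iteration with mixed errors (PMM)
    for n in range(1, n_max + 1):
        a = 0.5 ** n
        y = ((1.0 - a) * x + a * T(x)
             + a * (1.0 / n + 1.0 / n ** 2) - 16.0 / n ** 5)
        x = T(y) + 0.1 ** n
    return x

def pm(x, n_max):   # Khan's Picard-Mann iteration (PM), no error terms
    for n in range(1, n_max + 1):
        a = 0.5 ** n
        y = (1.0 - a) * x + a * T(x)
        x = T(y)
    return x

print(pmm(25.0, 9))   # close to the fixed point p = 5 after 9 steps
print(pm(25.0, 13))   # (PM) needs about 13 steps for comparable accuracy
```

The first (PMM) step reproduces the value 6.6065 reported in Table 2 for iteration 1.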
Figure 2

Iterative solutions of (PMM) and (PM).

Table 2

A comparison of the iterative processes (PMM) and (PM)

Iteration number    (PMM)      (PM)
0                  25.0000   25.0000
1                   6.6065   19.8945
2                   5.3218   15.8549
3                   5.0625   12.4783
4                   5.0131    9.6469
5                   5.0031    7.4247
6                   5.0008    5.9644
7                   5.0002    5.2762
8                   5.0001    5.0623
9                   5.0000    5.0128
10                  5.0000    5.0026
11                  5.0000    5.0005
12                  5.0000    5.0001
13                  5.0000    5.0000

Remark 3.2

Figure 2 and Table 2 show that the iterative process (PMM) is effective and the sequence \(\{ x_{n}\}\) generated by (PMM) converges faster than that produced by (PM).

3.2 An application to the elliptic boundary value problem

By using the notion of a nonexpansive potential operator, Ayadi et al. [1] proved a new global minimization theorem in Hilbert spaces to find a weak solution of the following elliptic boundary value problem:
$$\begin{aligned} \textstyle\begin{cases} -\Delta u=f(x, u), & x \in\Omega, \\ u(x)=0, & x \in\partial\Omega, \end{cases}\displaystyle \end{aligned}$$
(3.4)
where \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain in n-dimensional real space and \(f, f^{\prime}: \Omega\times\mathbb {R}\rightarrow\mathbb{R}\) are Carathéodory functions; here \(f^{\prime}\) denotes the derivative of f with respect to its second variable.
A weak solution of (3.4) is precisely a solution of the following variational problem:
$$ \textstyle\begin{cases} \int_{\Omega}{\nabla u\cdot\nabla v}\,dx-\int_{\Omega}{f(x, u)\cdot v}\,dx=0, &\forall v\in H_{0}^{1}(\Omega), \\ u(x)\in H_{0}^{1}(\Omega). \end{cases} $$
(3.5)
Let \(\phi: H_{0}^{1}(\Omega)\to\mathbb{R}\) be a nonlinear operator such that
$$ \phi(u)=\frac{1}{2}\Vert u\Vert ^{2}- \int_{\Omega}{F(x, u)}\,dx, \quad F(x, u)= \int_{0}^{u}{f(x, \zeta)}\,d\zeta. $$
(3.6)
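The functional ϕ in (3.6) ties the variational problem (3.5) to a fixed point formulation. Indeed, a standard computation (included here for clarity) gives the Gâteaux derivative
$$ \bigl\langle \phi^{\prime}(u), v\bigr\rangle = \int_{\Omega}{\nabla u\cdot\nabla v}\,dx- \int_{\Omega}{f(x, u)\cdot v}\,dx, \quad\forall v\in H_{0}^{1}(\Omega), $$
so \(u\in H_{0}^{1}(\Omega)\) solves (3.5) if and only if \(\phi^{\prime}(u)=0\), that is, if and only if u is a fixed point of \(T=I-\phi^{\prime}\).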

From Theorem 2.2, we have the following existence results of solutions for problem (3.4).

Theorem 3.1

Let \(\mathbb{R}^{n}\) be an n-dimensional real space and \(\Omega\subset \mathbb{R}^{n}\) be a nonempty bounded domain. Define \(T: C\to C\) by \(T=I-\phi^{\prime}\), where ϕ is determined by (3.6), \(C=[\nu, \omega]=\{u\in H_{0}^{1}(\Omega): \nu(x)\le u(x)\le\omega (x), \forall x\in\Omega\}\), and \(\nu, \omega\in H_{0}^{1}(\Omega )\) are a subsolution and a supersolution of problem (3.5), respectively. If \(F(T):=\{u\in C: Tu=u\}\neq\emptyset\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\), then
(i) the iterative sequence \(\{u_{n}\}\) generated by (2.7) converges to a weak solution \(u^{*}\in F(T)\) of problem (3.4) with convergence rate \(\vartheta=1-\hat{\alpha}(1-\theta)<1\), where \(\hat {\alpha}=\limsup_{n\to\infty}\alpha_{n}\in(0, 1]\) and \(\theta =\sup_{u\in C}\Vert (I^{\prime}-\phi^{\prime\prime})u\Vert \);

(ii) if, in addition, there exists \(\alpha>0\) such that \(\alpha _{n}\ge\alpha\) for all \(n\geq0\), then
$$\begin{aligned} \lim_{n\to\infty}z_{n}=u^{*} \quad \Longleftrightarrow\quad\lim_{n\to\infty}\epsilon_{n}=0, \end{aligned}$$
(3.7)
where \(\epsilon_{n}\) is defined by (2.19) and \(\{z_{n}\}\) is any sequence.

Proof

From the proof of [1, Theorem 6], it follows that \(C\subset H_{0}^{1}(\Omega)\) is a closed convex and bounded subset, and \(\Vert (I^{\prime}-\phi^{\prime\prime})u\Vert <1\) for some \(u\in C\). By the proof of Theorem 4 in [1], we know that T is a contraction operator. Since a contraction operator on a nonempty closed convex subset of a Hilbert space has a unique fixed point by the Banach contraction principle, the results follow from Theorem 2.2. This completes the proof. □

Remark 3.3

In the proof of Theorem 3.1, we employ the new Picard-Mann iterative approximation with mixed errors for contraction operators, which differs from the method proposed in Ayadi et al. [1] for showing that problem (3.4) has a weak solution.
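To make the abstract scheme of Theorem 3.1 concrete, here is a minimal one-dimensional finite-difference sketch. All discretization choices, the test nonlinearity, and the function name are ours, not the paper's; we also use the standard Riesz identification in \(H_{0}^{1}\), under which \(T=I-\phi^{\prime}\) acts as \(u\mapsto(-\Delta)^{-1}f(\cdot, u)\), as a working assumption:

```python
import numpy as np

def solve_bvp_picard_mann(f, m=99, steps=80, alpha=0.5):
    # Hypothetical 1-D illustration: -u'' = f(x, u) on (0, 1), u(0) = u(1) = 0.
    # Discretely, T(u) = A^{-1} f(x, u), where A is the standard
    # second-difference matrix; T is a contraction when the Lipschitz
    # constant of f in u is below the first Dirichlet eigenvalue pi^2.
    h = 1.0 / (m + 1)
    x = np.linspace(h, 1.0 - h, m)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2
    T = lambda u: np.linalg.solve(A, f(x, u))
    u = np.zeros(m)
    for _ in range(steps):
        # error-free Picard-Mann step: y_n = (1-a) u_n + a T u_n, u_{n+1} = T y_n
        y = (1.0 - alpha) * u + alpha * T(u)
        u = T(y)
    return x, u, A
```

For instance, with \(f(x, u)=\frac{1}{2}\sin u + x\) (Lipschitz constant \(\frac{1}{2}<\pi^{2}\)), the iterates converge rapidly and the discrete residual \(Au - f(x, u)\) becomes negligible.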

4 Concluding remarks

In this paper, we introduced a class of new Picard-Mann iterative methods with mixed errors for two different nonlinear operators as follows:
$$\begin{aligned} \textstyle\begin{cases} x_{n+1}=T_{1}y_{n}+h_{n}, \\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}T_{2}x_{n}+\alpha_{n}d_{n}+e_{n}, \end{cases}\displaystyle \end{aligned}$$
(4.1)
where \(T_{1}, T_{2}: X\to X\) are respectively nonexpansive and contraction operators, \(\alpha_{n}\in[0, 1]\), and \(h_{n}, d_{n}, e_{n} \in X\) are error terms that account for possibly inexact computation of the operator values. Iteration (4.1) includes the Picard-Mann iterative process due to Khan [18], the Picard iterative process, the Mann iterative process and other related iterative processes as special cases.
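A minimal Python sketch of the general scheme (4.1) may be written as follows (the function name and signature are ours; `alpha`, `h`, `d` and `e` are callables returning the step sizes and error terms):

```python
def picard_mann_mixed(T1, T2, x0, alpha, h, d, e, steps):
    """One possible realisation of iteration (4.1) on the real line:
        x_{n+1} = T1(y_n) + h(n),
        y_n     = (1 - alpha(n)) * x_n + alpha(n) * T2(x_n)
                  + alpha(n) * d(n) + e(n),
    where T1 is assumed nonexpansive, T2 a contraction, and h, d, e
    model the mixed errors."""
    x = x0
    for n in range(steps):
        a = alpha(n)
        y = (1 - a) * x + a * T2(x) + a * d(n) + e(n)
        x = T1(y) + h(n)
    return x
```

Setting \(h_{n}=d_{n}=e_{n}=0\) with \(T_{1}=T_{2}\) recovers Khan's Picard-Mann process, and additionally taking \(\alpha_{n}\equiv0\) recovers the Picard process for \(T_{1}\).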
Then we gave convergence and stability analysis of the new Picard-Mann iterative approximation and proposed two numerical examples showing that the new Picard-Mann iteration converges more effectively than the Picard iterative process, the Mann iterative process, the Picard-Mann iterative process due to Khan and other related iterative processes. Furthermore, as an application of the new Picard-Mann iterative methods with mixed errors for contraction operators, which differ from the method proposed in [1], we explored iterative approximation of solutions for the following elliptic boundary value problem:
$$\begin{aligned} \textstyle\begin{cases} -\Delta u=f(x, u), & x \in\Omega, \\ u(x)=0, & x \in\partial\Omega, \end{cases}\displaystyle \end{aligned}$$
(4.2)
where \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain, \(f: \Omega \times\mathbb{R}\rightarrow\mathbb{R}\) is a Carathéodory function.

However, can our results be obtained when T is only nonexpansive in Theorem 2.2, or when \(T_{2}\) is also nonexpansive in Theorem 2.1? These are still open questions worth studying further.

Declarations

Acknowledgements

We would like to thank the editors and referees for their valuable comments and suggestions to improve our paper.

Availability of data and materials

Not applicable.

Authors’ information

Mr. T-FL is studying for an MA degree. His research interests focus on the theory and algorithms of nonlinear system optimization and control. H-YL is a professor at Sichuan University of Science & Engineering. He received his doctoral degree from Sichuan University in 2013. His research interests focus on the structure theory and algorithms of operational research and optimization, nonlinear analysis and applications.

Funding

This work was supported by the Scientific Research Project of Sichuan University of Science & Engineering (2017RCL54), the Innovation Fund of Postgraduate, Sichuan University of Science & Engineering (y2016041).

Authors’ contributions

T-FL carried out the proof of the theorems and gave some numerical simulations to show the main results. H-YL conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Mathematics and Statistics, Sichuan University of Science & Engineering, Zigong, P.R. China
(2)
Institute of Nonlinear Physical Science, Sichuan University of Science & Engineering, Zigong, P.R. China

References

  1. Ayadi, S, Moussaoui, T, O’Regan, D: Existence of solutions for an elliptic boundary value problem via a global minimization theorem on Hilbert spaces. Differ. Equ. Appl. 8(3), 385-391 (2016)
  2. Roubíček, T: Nonlinear Partial Differential Equations with Applications, 2nd edn. International Series of Numerical Mathematics, vol. 153. Springer, Basel (2013)
  3. Barbu, V: Analysis and Control of Nonlinear Infinite-Dimensional Systems. Mathematics in Science and Engineering, vol. 190. Academic Press, Boston (1993)
  4. Lan, HY: Variational inequality theory for elliptic inequality systems with Laplacian type operators and related population models: an overview and recent advances. Int. J. Nonlinear Sci. 23(3), 157-169 (2017)
  5. Sofonea, M, Matei, A: Variational Inequalities with Applications: A Study of Antiplane Frictional Contact Problems. Advances in Mechanics and Mathematics, vol. 18. Springer, New York (2009)
  6. Marin, M: On weak solutions in elasticity of dipolar bodies with voids. J. Comput. Appl. Math. 82(1-2), 291-297 (1997)
  7. Marin, M, Vlase, S: Effect of internal state variables in thermoelasticity of microstretch bodies. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa, Ser. Mat. 24(3), 241-257 (2016)
  8. Liu, WY, Sun, TJ: Iterative non-overlapping domain decomposition method for optimal boundary control problems governed by elliptic equations. J. Shandong Univ. Nat. Sci. 51(2), 21-28 (2016) (in Chinese)
  9. Lions, JL: Optimal Control of Systems Governed by Partial Differential Equations. Springer, New York (1971)
  10. Coletsos, J, Kokkinis, B: Optimal control of nonlinear elliptic PDEs—theory and optimization methods. In: Lirkov, I, et al. (eds.) Large-Scale Scientific Computing. Lecture Notes in Comput. Sci., vol. 8353, pp. 81-89. Springer, Heidelberg (2014)
  11. Gong, W, Yan, NN: Adaptive finite element method for elliptic optimal control problems: convergence and optimality. Numer. Math. 135(4), 1121-1170 (2017)
  12. Yan, NN, Zhou, ZJ: A priori and a posteriori error analysis of edge stabilization Galerkin method for the optimal control problem governed by convection-dominated diffusion equation. J. Comput. Appl. Math. 223(1), 198-217 (2009)
  13. Turkyilmazoglu, M: An optimal variational iteration method. Appl. Math. Lett. 24(5), 762-765 (2011)
  14. Beretta, E, Manzoni, A, Ratti, L: A reconstruction algorithm based on topological gradient for an inverse problem related to a semilinear elliptic boundary value problem. Inverse Probl. 33(3), 035010 (2017)
  15. Kogut, PI, Manzo, R, Putchenko, AO: On approximate solutions to the Neumann elliptic boundary value problem with non-linearity of exponential type. Bound. Value Probl. 2016, 208 (2016)
  16. He, JH: Variational iteration method - some recent results and new interpretations. J. Comput. Appl. Math. 207(1), 3-17 (2007)
  17. Zhou, XW, Yao, L: The variational iteration method for Cauchy problems. Comput. Math. Appl. 60(3), 756-760 (2010)
  18. Khan, SH: A Picard-Mann hybrid iterative process. Fixed Point Theory Appl. 2013, 69 (2013)
  19. Deng, WQ: A modified Picard-Mann hybrid iterative algorithm for common fixed points of countable families of nonexpansive mappings. Fixed Point Theory Appl. 2014, 58 (2014)
  20. Okeke, GA, Abbas, M: A solution of delay differential equations via Picard-Krasnoselskii hybrid iterative process. Arab. J. Math. 6(1), 21-29 (2017)
  21. Jiang, GJ, Kwun, YC, Kang, SM: Solvability and Mann iterative approximations for a higher order nonlinear neutral delay differential equation. Adv. Differ. Equ. 2017, 60 (2017)
  22. Roussel, MR: Stability analysis for ODEs. Teaching-Chemistry 5850: Nonlinear Dynamics, Lecture 2. http://people.uleth.ca/~roussel/nld/. Accessed 13 Sept 2005
  23. Liu, QK, Lan, HY: Stable iterative procedures for a class of nonlinear increasing operator equations in Banach spaces. Nonlinear Funct. Anal. Appl. 10(3), 345-358 (2005)
  24. Lan, HY: Stability of iterative processes with errors for a system of nonlinear \((A,\eta)\)-accretive variational inclusions in Banach spaces. Comput. Math. Appl. 56(1), 290-303 (2008)
  25. Anistratov, DY, Cornejo, LR, Jones, JP: Stability analysis of nonlinear two-grid method for multigroup neutron diffusion problems. J. Comput. Phys. 346, 278-294 (2017)
  26. Ashyralyyev, C, Dedeturk, M: Approximation of the inverse elliptic problem with mixed boundary value conditions and overdetermination. Bound. Value Probl. 2015, 51 (2015)
  27. Akewe, H, Okeke, GA: Convergence and stability theorems for the Picard-Mann hybrid iterative scheme for a general class of contractive-like operators. Fixed Point Theory Appl. 2015, 66 (2015)
  28. Bosede, AO, Rhoades, BE: Stability of Picard and Mann iteration for a general class of functions. J. Adv. Math. Stud. 3(2), 1-3 (2010)
  29. Agarwal, RP, O’Regan, D, Sahu, DR: Fixed point theory for Lipschitzian-type mappings with applications. In: Topological Fixed Point Theory and Its Applications, vol. 6. Springer, New York (2009)
  30. Liu, LS: Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 194(1), 114-135 (1995)

Copyright

© The Author(s) 2017