Open Access

An iterative regularization method for an abstract ill-posed biparabolic problem

Boundary Value Problems 2015, 2015:55

https://doi.org/10.1186/s13661-015-0318-4

Received: 8 December 2014

Accepted: 19 March 2015

Published: 28 March 2015

Abstract

In this paper, we are concerned with the problem of approximating a solution of an ill-posed biparabolic problem in the abstract setting. In order to overcome the instability of the original problem, we propose a regularizing strategy based on the Kozlov-Maz’ya iteration method. Finally, convergence results, including explicit convergence rates, are established under a priori bound assumptions on the exact solution.

Keywords

ill-posed problems; biparabolic problem; iterative regularization

MSC

47A52; 65J22

1 Formulation of the problem

Throughout this paper, H denotes a complex separable Hilbert space endowed with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\), and \(\mathcal {L}(H)\) stands for the Banach algebra of bounded linear operators on H.

Let \(A :\mathcal{D}(A)\subset H\longrightarrow H\) be a positive, self-adjoint operator with compact resolvent, so that A has an orthonormal basis of eigenvectors \((\phi_{n})\subset H\) with real eigenvalues \((\lambda_{n})\subset\mathbb{R}_{+}\), i.e.,
$$\begin{aligned}& A\phi_{n}=\lambda_{n}\phi_{n},\quad n\in \mathbb{N}^{*}, \qquad\langle\phi_{i},\phi _{j} \rangle=\delta_{ij}=\left \{ \begin{array}{@{}l@{\quad}l} 1, & \mbox{if } i=j, \\ 0, & \mbox{if } i\neq j, \end{array} \right . \\& 0< \nu\leq\lambda_{1} \leq\lambda_{2} \leq \lambda_{3} \leq\cdots,\quad \lim_{n\to\infty} \lambda_{n} =\infty, \\& \forall h\in H, \quad h=\sum_{n=1}^{\infty}h_{n} \phi_{n},\quad h_{n} =\langle h,\phi_{n}\rangle. \end{aligned}$$
In this paper, we consider the inverse source problem of determining the unknown source term \(u(0)=f\) and the temperature distribution \(u(t)\) for \(0\leq t< T\), in the following biparabolic problem:
$$ \left \{ \begin{array}{@{}l} \mathcal{B}^{2}u= (\frac{d}{dt}+A )^{2}u(t)= u''(t)+2Au'(t)+A^{2}u(t)=0, \quad 0< t< T,\\ u(T)=g,\qquad u_{t}(0)=0, \end{array} \right . $$
(1)
where \(0< T<\infty\) and \(g\in H\) is the given final datum; the element \(f=u(0)\in H\) is unknown.

In [1, 2] Kozlov and Maz’ya proposed an alternating iterative method for solving boundary value problems for general strongly elliptic and formally self-adjoint systems. Since then, the idea of this method has been successfully applied to various classes of ill-posed (elliptic, parabolic, and hyperbolic) problems; see, e.g., [3–7].

In this work we extend this method to our ill-posed biparabolic problem. To the best of our knowledge, the literature devoted to this class of problems is quite scarce, apart from the paper [8]. The study of this case is motivated not only by theoretical interest but also by practical necessity.

It is well known that the classical heat equation does not accurately describe the conduction of heat [9, 10]. Numerous models have been proposed to describe this phenomenon better; among them we cite the biparabolic model proposed in [11], which gives a more adequate mathematical description of heat and diffusion processes than the classical heat equation. For physical motivation and other models we refer the reader to [12–15].

2 Preliminaries and basic results

In this section we present the notation and the functional setting used in this paper and collect some material needed for our analysis.

2.1 Notation

We denote by \(\mathcal{C}(H)\) the set of all closed linear operators densely defined in H. The domain, range, and kernel of a linear operator \(B\in\mathcal{C}(H)\) are denoted as \(\mathcal{D}(B)\), \(R(B)\), and \(N(B)\); the symbols \(\rho(B)\), \(\sigma(B)\), and \(\sigma_{p}(B)\) are used for the resolvent set, spectrum, and point spectrum of B, respectively. If V is a closed subspace of H, we denote by \(\Pi _{V}\) the orthogonal projection from H to V.

For ease of reading, we summarize some well-known facts for non-expansive operators.

Definition 2.1

A linear operator \(M\in\mathcal{L}(H)\) is called non-expansive if
$$\|M\|\leq1. $$

Theorem 2.1

([16], Theorem 2.2)

Let \(M\in\mathcal{L}(H)\) be a positive, self-adjoint operator with \(\|M\| \leq1\). Put \(V_{0} =N(M)\) and \(V_{1} = N(I-M)\). Then we have
$$s\mbox{-}\!\lim_{n\longrightarrow+\infty}M^{n}=\Pi_{V_{1}},\qquad s \mbox{-}\!\lim_{n\longrightarrow+\infty}(I-M)^{n}=\Pi_{V_{0}} $$
i.e.,
$$\forall h\in H,\quad \lim_{n\longrightarrow+\infty}M^{n}h=\Pi _{V_{1}}h,\qquad \lim_{n\longrightarrow+\infty}(I-M)^{n}h= \Pi_{V_{0}}h. $$

For more details of the theory of non-expansive operators, we refer to Krasnosel’skii et al. [17], p.66.

Let us consider the operator equation
$$ S\varphi= (I-M)\varphi=\psi $$
(2)
for non-expansive operators M.

Theorem 2.2

Let M be a linear self-adjoint, positive, and non-expansive operator on H. Let \(\hat{\psi}\in H\) be such that equation (2) with right-hand side \(\hat{\psi}\) has a solution \(\hat{\varphi}\). If 1 is not an eigenvalue of M, i.e., \((I-M)\) is injective (\(V_{1} =N(I-M)=\{0\}\)), then the successive approximations
$$\varphi_{n+1}=M\varphi_{n}+\hat{\psi},\quad n=0,1,2,\ldots, $$
converge to \(\hat{\varphi}\) for any initial data \(\varphi_{0}\in H\).

Proof

From the hypothesis and by virtue of Theorem 2.1, we have
$$ \forall\varphi_{0} \in H,\quad M^{n} \varphi_{0} \longrightarrow\Pi_{V_{1}}\varphi _{0} = \Pi_{\{0\}}\varphi_{0}=0. $$
(3)
By induction with respect to n, it is easily seen that \(\varphi_{n}\) has the explicit form
$$\begin{aligned} \varphi_{n} = & M^{n}\varphi_{0} +\sum _{j=0}^{n-1}M^{j}\hat{\psi} \\ =& M^{n}\varphi_{0} +\bigl(I-M^{n}\bigr) (I-M)^{-1}\hat{\psi} \\ =& M^{n}\varphi_{0} +\bigl(I-M^{n}\bigr)\hat{ \varphi}, \end{aligned}$$
and (3) allows us to conclude that
$$ \hat{\varphi}-\varphi_{n} =M^{n}( \varphi_{0}-\hat{\varphi})\longrightarrow 0,\quad n\longrightarrow \infty. $$
(4)
 □

Remark 2.1

In many situations, ill-posed boundary value problems for partial differential equations can be reduced to Fredholm operator equations of the first kind of the form \(B\varphi=\psi\), where B is a compact, positive, and self-adjoint operator in a Hilbert space H. This equation can be rewritten in the following way:
$$\varphi=(I-\omega B)\varphi+\omega\psi=L\varphi+\omega\psi, $$
where \(L=(I-\omega B)\), and ω is a positive parameter satisfying \(\omega< \frac{1}{\|B\|}\). It is easily seen that the operator L is non-expansive and 1 is not an eigenvalue of L. It follows from Theorem 2.2 that the sequence \(\{\varphi_{n}\}_{n=0}^{\infty}\) converges and \((I-\omega B)^{n}\zeta\longrightarrow0\) for every \(\zeta\in H\) as \(n\longrightarrow\infty\).
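Remark 2.1 can be checked numerically. The following Python/NumPy sketch uses a small symmetric positive definite matrix as a finite-dimensional stand-in for the compact operator B; all numerical values are illustrative assumptions, not taken from the paper. It runs the iteration \(\varphi_{n+1}=(I-\omega B)\varphi_{n}+\omega\psi\) and confirms convergence to the solution of \(B\varphi=\psi\).

```python
import numpy as np

# Finite-dimensional stand-in for a compact, positive, self-adjoint B:
# a diagonal SPD matrix with eigenvalues decreasing toward 0.
eigvals = np.array([1.0, 0.5, 0.25, 0.125])
B = np.diag(eigvals)

phi_true = np.array([1.0, -2.0, 3.0, 0.5])   # target solution of B phi = psi
psi = B @ phi_true

omega = 0.9 / np.linalg.norm(B, 2)           # omega < 1/||B||
L = np.eye(4) - omega * B                    # non-expansive, 1 not an eigenvalue

phi = np.zeros(4)                            # arbitrary initial guess phi_0
for _ in range(2000):
    phi = L @ phi + omega * psi              # phi_{n+1} = L phi_n + omega psi

assert np.allclose(phi, phi_true, atol=1e-6)
```

The contraction factor of the slowest mode is \(1-\omega\lambda_{\min}\), which explains why small eigenvalues of B slow the iteration down.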

3 Ill-posedness of the problem and a conditional stability result

Let us consider the following well-posed problem:
$$ \left \{ \begin{array}{@{}l} \mathcal{B}^{2}w= (\frac{d}{dt}+A )^{2}w(t)=w''(t)+2Aw'(t)+A^{2}w(t)=0, \quad 0< t< T,\\ w(0)=\xi,\qquad w_{t}(0)=0, \end{array} \right . $$
(5)
where \(\xi\in\mathcal{D}(A)\).
Let us denote \(\mathbb{H}^{1}=\mathcal{D}(A)\times H\). Denoting \(U=\bigl ( {\scriptsize\begin{matrix} u_{1} \cr u_{2} \end{matrix}} \bigr )\) we define the norm in \(\mathbb{H}^{1}\) as \(\|U\|_{\mathbb {H}^{1}}^{2}=\|Au_{1}\|^{2}+\|u_{2}\|^{2}\). In this setting, the second-order differential equation (5) may be restated as a first-order system in the Hilbert space \(\mathbb{H}^{1}\) as follows:
$$ W'(t)=\mathcal{A}W(t),\qquad W(0)=\left ( \begin{array}{@{}c@{}} \xi\\ 0 \end{array} \right ), $$
(6)
by setting
$$W(t)=\left ( \begin{array}{@{}c@{}} w_{1}(t) \\ w_{2}(t) \end{array} \right )=\left ( \begin{array}{@{}c@{}} w(t) \\ w'(t) \end{array} \right ), \qquad\mathcal{A}=\left ( \begin{array}{@{}c@{\quad}c@{}} 0 & I \\ -A^{2}& -2A \end{array} \right ), $$
where \(\mathcal{A}\) is a linear unbounded operator with domain \(\mathcal {D}(\mathcal{A})=\mathcal{D}(A^{2})\times\mathcal{D}(A)\).
It is well known that \(\mathcal{A}\) generates a strongly continuous semigroup \(\{\mathcal{T}(t)=e^{t\mathcal{A}} \} _{t\geq0}\) on \(\mathbb{H}^{1}\) ([18], Theorem 2.1); more precisely, \(\mathcal{T}(t)\) is analytic with the following explicit form:
$$ \mathcal{T}(t)Z=e^{t\mathcal{A}}\left ( \begin{array}{@{}c@{}} z_{1}\\ z_{2} \end{array} \right ) =\sum_{n=1}^{\infty}e^{tB_{n}} \left ( \begin{array}{@{}c@{}} \langle z_{1},\phi_{n}\rangle\phi_{n}\\ \langle z_{2},\phi_{n}\rangle\phi_{n} \end{array} \right ),\qquad Z=\left ( \begin{array}{@{}c@{}} z_{1}\\ z_{2} \end{array} \right )\in\mathbb{H}^{1}, $$
(7)
where \(B_{n}=\bigl ( {\scriptsize\begin{matrix} {0} & {1} \cr {-\lambda_{n}^{2}}& {-2\lambda_{n}} \end{matrix}} \bigr )\). By using some techniques of matrix algebra, we can give the form of \(e^{tB_{n}}\) as follows:
$$e^{tB_{n}}=\left ( \begin{array}{@{}c@{\quad}c@{}} e^{-\lambda_{n} t}+\lambda_{n} te^{-\lambda_{n} t} & te^{-\lambda_{n} t}\\ -\lambda_{n}^{2}te^{-\lambda_{n} t} & -\lambda_{n} t e^{-\lambda_{n} t} + e^{-\lambda_{n} t} \end{array} \right ). $$
It follows that
$$ \mathcal{T}(t)Z=\sum_{n=1}^{\infty} \left ( \begin{array}{@{}c@{\quad}c@{}} e^{-\lambda_{n} t}+\lambda_{n} te^{-\lambda_{n} t} & te^{-\lambda_{n} t}\\ -\lambda_{n}^{2}te^{-\lambda_{n}t} & -\lambda_{n} t e^{-\lambda_{n} t} + e^{-\lambda_{n} t} \end{array} \right )\left ( \begin{array}{@{}c@{}} \langle z_{1},\phi_{n}\rangle\phi_{n}\\ \langle z_{2},\phi_{n}\rangle\phi_{n} \end{array} \right ). $$
(8)
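The closed form of \(e^{tB_{n}}\) follows from the splitting \(B_{n}=-\lambda_{n}I+N_{n}\) with \(N_{n}\) nilpotent (\(N_{n}^{2}=0\)), so that \(e^{tB_{n}}=e^{-\lambda_{n}t}(I+tN_{n})\). The Python/NumPy sketch below, with illustrative values of λ and t, verifies the displayed matrix against a truncated Taylor series of the matrix exponential.

```python
import numpy as np

lam, t = 2.0, 0.7                             # illustrative lambda_n and t
B = np.array([[0.0, 1.0], [-lam**2, -2.0*lam]])

# Matrix exponential via truncated Taylor series (plenty of terms for 2x2).
E, term = np.eye(2), np.eye(2)
for k in range(1, 60):
    term = term @ (t * B) / k
    E = E + term

# Closed form from the text:
# e^{tB} = e^{-lam t} [[1 + lam t, t], [-lam^2 t, 1 - lam t]]
closed = np.exp(-lam*t) * np.array([[1 + lam*t, t],
                                    [-lam**2 * t, 1 - lam*t]])
assert np.allclose(E, closed, atol=1e-10)
```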
By using semigroup theory [19], we obtain the existence and uniqueness of a mild solution of problem (6).

Theorem 3.1

For any \(W(0)\in\mathbb{H}^{1}\), problem (6) admits a unique solution \(W\in\mathcal{C}^{1}(]0,+\infty[; \mathbb {H}^{1})\cap\mathcal{C}([0,+\infty[;\mathbb{H}^{1})\cap \mathcal {C}(]0,+\infty[; \mathcal{D}(\mathcal{A}))\), given by
$$ W(t)=\mathcal{T}(t)W(0) =\sum_{n=1}^{\infty} \left ( \begin{array}{@{}c@{\quad}c@{}} e^{-\lambda_{n} t}+\lambda_{n} te^{-\lambda_{n} t} & te^{-\lambda_{n} t}\\ -\lambda_{n}^{2}te^{-\lambda_{n}t} & -\lambda_{n} t e^{-\lambda_{n} t} + e^{-\lambda_{n} t} \end{array} \right )\left ( \begin{array}{@{}c@{}} \langle z_{1},\phi_{n}\rangle\phi_{n}\\ \langle z_{2},\phi_{n}\rangle\phi_{n} \end{array} \right ). $$
(9)
In particular, for \(W(0)=\bigl ( {\scriptsize\begin{matrix} \xi\cr 0 \end{matrix}} \bigr )\) we have
$$ W(t)=\mathcal{T}(t)W(0) =\sum_{n=1}^{\infty} \left ( \begin{array}{@{}c@{\quad}c@{}} e^{-\lambda_{n} t}+\lambda_{n} te^{-\lambda_{n} t} & te^{-\lambda_{n} t}\\ -\lambda_{n}^{2}te^{-\lambda_{n}t} & -\lambda_{n} t e^{-\lambda_{n} t} + e^{-\lambda_{n} t} \end{array} \right )\left ( \begin{array}{@{}c@{}} \langle\xi,\phi_{n}\rangle\phi_{n}\\ 0 \end{array} \right ). $$
(10)

As a consequence of Theorem 3.1, we have the following result.

Corollary 3.1

For any \(\xi\in\mathcal{D}(A)\), problem (5) admits a unique solution
$$\begin{aligned}[b] w\in{}&\mathcal{C}^{2} \bigl(]0,+\infty[;H \bigr)\cap \mathcal{C}^{1} \bigl([0,+\infty[;H \bigr) \cap\mathcal{C} \bigl([0,+ \infty[;\mathcal {D}(A) \bigr) \\ &{}\cap\mathcal{C}^{1} \bigl(]0,+\infty[;\mathcal{D}(A) \bigr) \cap \mathcal{C}^{2} \bigl(]0,+\infty[;\mathcal{D} \bigl(A^{2}\bigr) \bigr) \end{aligned} $$
given by
$$ w(t)=\mathcal{R}(t;A)\xi=(I +t A)e^{-tA}\xi=\sum _{n=1}^{\infty }(1+t\lambda_{n})e^{-t\lambda_{n}} \langle\xi,\phi_{n}\rangle\phi_{n}. $$
(11)
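Formula (11) can be sanity-checked mode by mode: each coefficient \(w_{n}(t)=(1+t\lambda_{n})e^{-t\lambda_{n}}\xi_{n}\) should satisfy \(w''+2\lambda w'+\lambda^{2}w=0\) with \(w(0)=\xi_{n}\) and \(w'(0)=0\). A small finite-difference check in Python, with illustrative λ and ξ:

```python
import numpy as np

lam, xi = 3.0, 1.0
w = lambda t: (1 + lam*t) * np.exp(-lam*t) * xi   # mode solution from (11)

h = 1e-5
# Initial conditions: w(0) = xi and w'(0) = 0 (central difference for w').
assert abs(w(0.0) - xi) < 1e-14
assert abs((w(h) - w(-h)) / (2*h)) < 1e-6

# ODE residual w'' + 2 lam w' + lam^2 w at a few interior times.
for t in [0.1, 0.5, 1.0]:
    w2 = (w(t+h) - 2*w(t) + w(t-h)) / h**2
    w1 = (w(t+h) - w(t-h)) / (2*h)
    assert abs(w2 + 2*lam*w1 + lam**2 * w(t)) < 1e-4
```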

Remark 3.1

It is easy to check that
$$\begin{aligned}& \bigl\| \mathcal{R}(t;A)\bigr\| =\sup_{\lambda\geq\lambda_{1}}(1+t\lambda )e^{-t\lambda} \leq (1+t\lambda_{1})e^{-t\lambda_{1}}, \end{aligned}$$
(12)
$$\begin{aligned}& \sup_{0\leq t\leq T}\bigl\| \mathcal{R}(t;A)\bigr\| =\sup_{0\leq t\leq T}(1+t \lambda_{1})e^{-t\lambda_{1}}=1. \end{aligned}$$
(13)

3.1 Ill-posedness of the problem (1)

Theorem 3.2

Let \(g\in H\). Then the unique formal solution of problem (1) is given by
$$ u(t)=\sum_{n=1}^{\infty} \biggl( \frac{1+t\lambda_{n}}{1+T\lambda _{n}} \biggr)e^{(T-t)\lambda_{n}}\langle g,\phi_{n}\rangle \phi_{n}. $$
(14)
In this case,
$$ f=u(0)=\sum_{n=1}^{\infty} \frac{1}{1+T\lambda_{n}}e^{T\lambda _{n}}\langle g,\phi_{n}\rangle \phi_{n}. $$
(15)

Proof

By using the generalized Fourier method of expansion, the solution of (1) can be written formally in the form
$$ u(t)=\sum_{n=1}^{\infty}u_{n}(t) \phi_{n}, \quad u_{n} =\langle u, \phi _{n} \rangle, $$
(16)
where \(u_{n}(t)=\langle u(t),\phi_{n}\rangle\) is the Fourier coefficient of \(u(t)\).
Substituting \(u(T)=g=\sum_{n=1}^{\infty}g_{n} \phi_{n}\) and (16) into (1), we get the family of second-order ordinary differential equations
$$ \left \{ \begin{array}{@{}l} u_{n}''(t)+2\lambda_{n} u_{n}'(t)+\lambda_{n}^{2} u_{n}=0, \quad 0< t< T,\\ u_{n}(T)=g_{n},\qquad u_{n}'(0)=0. \end{array} \right . $$
(17)
For each fixed n, this differential equation is uniquely solvable and its unique solution is given by
$$u_{n}(t)= \biggl(\frac{1+t\lambda_{n}}{1+T\lambda_{n}} \biggr)e^{(T-t)\lambda _{n}}g_{n} = \sigma(t,\lambda_{n})g_{n}. $$
Finally, the formal solution of the problem (1) takes the form
$$u(t)=\sum_{n=1}^{\infty} \biggl( \frac{1+t\lambda_{n}}{1+T\lambda _{n}} \biggr)e^{(T-t)\lambda_{n}}g_{n}\phi_{n},\quad g_{n}=\langle g,\phi_{n}\rangle. $$
 □
From this representation we see that \(u(t)\) is unstable in \([0,T[\). This follows from the high-frequency limit:
$$\sigma(t,\lambda_{n})= \biggl(\frac{1+t\lambda_{n}}{1+T\lambda_{n}} \biggr)e^{(T-t)\lambda_{n}} \longrightarrow+\infty,\quad n\longrightarrow+\infty. $$

Remark 3.2

• In the classical backward parabolic problem
$$ v_{t}+Av=0,\quad 0< t< T,\qquad v(T)=g, $$
(18)
the unique formal solution is given by
$$ v(t)=\sum_{n=1}^{\infty} \theta_{n}(t,\lambda_{n})\langle g,\phi _{n} \rangle\phi_{n}, $$
(19)
where
$$\theta_{n}(t,\lambda_{n})=e^{(T-t)\lambda_{n}} \longrightarrow+ \infty,\quad n\longrightarrow+\infty. $$
In this case the amplification factors \(\theta_{n}(t,\lambda_{n})=e^{(T-t)\lambda_{n}}\) grow exponentially with the frequency, and the problem is severely ill-posed.
• In the case of the biparabolic model, we have \(\sigma_{n} = r_{n}\theta_{n}\), where
$$r_{n} = \biggl(\frac{1+t\lambda_{n}}{1+T\lambda_{n}} \biggr) $$
is the relaxation coefficient resulting from the hyperbolic character of the biparabolic model.
Observe that
$$ \frac{t}{T}\leq r_{n} \leq\frac{1+t\lambda_{1}}{1+T\lambda_{1}}\leq1 $$
(20)
and
$$ u(t)=R(t)v(t), $$
(21)
where
$$ \bigl\| R(t)\bigr\| =\sup_{n}\{r_{n} \}=r_{1}=\frac{1+t\lambda_{1}}{1+T\lambda_{1}}. $$
(22)
From this remark, we observe that the degree of ill-posedness in the biparabolic model is relaxed compared to the classical parabolic case.
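This comparison is easy to see numerically. With \(T=1\), \(t=1/2\), and the eigenvalues \(\lambda_{n}=n^{2}\) of Section 5 (illustrative choices), the biparabolic factor \(\sigma_{n}=r_{n}\theta_{n}\) stays below the parabolic factor \(\theta_{n}\) while both blow up, and \(r_{n}\) obeys the bound (20):

```python
import numpy as np

T, t = 1.0, 0.5
lam = np.array([1.0, 4.0, 9.0, 16.0, 25.0])   # lambda_n = n^2, n = 1..5

theta = np.exp((T - t) * lam)                  # parabolic amplification
r = (1 + t*lam) / (1 + T*lam)                  # relaxation coefficient r_n
sigma = r * theta                              # biparabolic amplification

assert np.all(sigma <= theta)                  # ill-posedness is relaxed...
assert sigma[-1] > 1e4                         # ...but u(t) is still unstable
# bound (20): t/T <= r_n <= (1 + t*lambda_1)/(1 + T*lambda_1) <= 1
assert np.all(r >= t/T)
assert np.all(r <= (1 + t*lam[0])/(1 + T*lam[0]))
```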

3.2 Conditional stability estimate

We would like to have estimates of the form
$$\bigl\| u(t)\bigr\| \leq\Psi\bigl(\|g\|\bigr), $$
for some function \(\Psi(\cdot)\) which satisfies the condition \(\Psi (s)\longrightarrow0\) as \(s\longrightarrow0\).

Since the problem of determining \(u(t)\) from the knowledge of \(\{ u(T)=g, u'(0)=0\}\) is ill-posed, an estimate such as the above will not be possible unless we restrict the solution \(u(t)\) to a certain source set \(\mathcal{M}\subset H\).

In our model, we will see that we can employ the method of logarithmic convexity to identify this source set:
$$ M_{\rho}=\bigl\{ w(t)\in H : w \mbox{ obeys } (1) \mbox{ and } \bigl\| Aw(0)\bigr\| \leq\rho< \infty\bigr\} . $$
(23)

We recall the following useful result (see, e.g., [12], p.19, [20], p.375).

Theorem 3.3

Let \(v(t)\) be the solution of problem (18). Then the following estimate holds:
$$ \forall t\in[0,T],\quad \bigl\| v(t)\bigr\| \leq \bigl\| v(T)\bigr\| ^{\frac{t}{T}}\bigl\| v(0) \bigr\| ^{\frac{T-t}{T}}. $$
(24)
Now, if we assume that \(u(0)=f=\sum_{n=1}^{\infty}f_{n} \phi_{n}\) satisfies \(\|Au(0)\|^{2}=\sum_{n=1}^{\infty}\lambda_{n}^{2} |f_{n}|^{2}< \infty\), then we have
$$\bigl\| TAu(0)\bigr\| ^{2}=T^{2}\sum_{n=1}^{\infty} \lambda_{n}^{2} |f_{n}|^{2}\leq\sum _{n=1}^{\infty}(1+T\lambda_{n})^{2} |f_{n}|^{2} = \bigl\| (I+TA)u(0)\bigr\| ^{2} $$
and
$$\bigl\| (I+TA)u(0)\bigr\| ^{2}= \sum_{n=1}^{\infty} \biggl( \biggl(\frac{1+T\lambda_{n}}{\lambda _{n}} \biggr)\lambda_{n} \biggr)^{2}|f_{n}|^{2} \leq \biggl(\frac{1+T\lambda_{1}}{\lambda_{1}} \biggr)^{2}\sum_{n=1}^{\infty} \lambda_{n}^{2}|f_{n}|^{2}, $$
which implies that
$$ T \bigl\| Au(0)\bigr\| \leq \bigl\| (I+TA)u(0)\bigr\| \leq \biggl(\frac{1+T\lambda_{1}}{\lambda _{1}} \biggr)\bigl\| Au(0)\bigr\| . $$
(25)
By virtue of the estimate (24) and the formulas
$$\begin{aligned}& v(t)= \exp\bigl((T-t)A\bigr)g= \sum_{n=1}^{\infty}e^{(T-t)\lambda _{n}}g_{n} \phi_{n}, \\& \begin{aligned}[b] u(t)&= R(t)v(t) = (I+tA) (I+TA)^{-1}v(t) \\ &= \sum_{n=1}^{\infty} \biggl(\frac{1+t\lambda_{n}}{1+T\lambda _{n}} \biggr)e^{(T-t)\lambda_{n}}g_{n}\phi_{n} \\ &= \sum_{n=1}^{\infty}r_{n} e^{(T-t)\lambda_{n}}g_{n}\phi_{n}, \end{aligned} \end{aligned}$$
and
$$v(0)=(I+TA)u(0), \qquad v(T)=u(T)=g, $$
we can write
$$ \begin{aligned}[b] \bigl\| u(t)\bigr\| &\leq\bigl\| R(t)\bigr\| \bigl\| v(t)\bigr\| \leq \bigl\| R(t)\bigr\| \bigl(\bigl\| v(0) \bigr\| ^{\frac {T-t}{T}}\bigl\| v(T)\bigr\| ^{\frac{t}{T}} \bigr)\\ & \leq \bigl\| R(t)\bigr\| \bigl(\bigl\| (I+TA)u(0)\bigr\| ^{\frac{T-t}{T}}\bigl\| v(T)\bigr\| ^{\frac {t}{T}} \bigr). \end{aligned} $$
(26)
Combining (26) and (22), we derive the following estimate:
$$ \bigl\| u(t)\bigr\| \leq C(t,T,\lambda_{1}) \bigl\{ \bigl\| Au(0) \bigr\| ^{\frac{T-t}{T}}\|g\| ^{\frac{t}{T}} \bigr\} , $$
(27)
where
$$C(t,T,\lambda_{1})= \biggl(\frac{1+t\lambda_{1}}{1+T\lambda_{1}} \biggr) \biggl( \frac{1+T\lambda_{1}}{\lambda_{1}} \biggr)^{\frac{T-t}{T}} \leq \biggl(\frac{1+T\lambda_{1}}{\lambda_{1}} \biggr)^{\frac {T-t}{T}}=\gamma. $$
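Estimate (27) can be probed per mode: for a single-mode datum \(g=g_{n}\phi_{n}\) one has \(\|u(t)\|=\sigma(t,\lambda_{n})|g_{n}|\) and \(\|Au(0)\|=\lambda_{n}e^{T\lambda_{n}}(1+T\lambda_{n})^{-1}|g_{n}|\). The Python/NumPy sketch below, with illustrative parameters, verifies the Hölder bound on a grid of eigenvalues and times:

```python
import numpy as np

T, lam1, gn = 1.0, 1.0, 0.7                    # illustrative parameters
lam = np.linspace(lam1, 30.0, 300)

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    u_t = (1 + t*lam)/(1 + T*lam) * np.exp((T - t)*lam) * gn   # ||u(t)|| per mode
    rho = lam * np.exp(T*lam)/(1 + T*lam) * gn                  # ||A u(0)|| per mode
    gamma = ((1 + T*lam1)/lam1)**((T - t)/T)
    # Holder continuity (27): ||u(t)|| <= gamma * rho^{(T-t)/T} * ||g||^{t/T}
    assert np.all(u_t <= gamma * rho**((T - t)/T) * gn**(t/T) + 1e-9)
```

At \(t=T\) the bound is attained with equality, which matches the interpolation character of the estimate.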
On the basis \(\{\phi_{n}\}\) we introduce the Hilbert scale \((H^{s})_{s\in \mathbb{R}}\) (resp. \((\mathfrak{E}^{s})_{s\in\mathbb{R}}\)) induced by A as follows:
$$\begin{aligned}& H^{s}= \mathcal{D}\bigl(A^{s}\bigr)=\Biggl\{ h\in H : \|h \|_{H^{s}}^{2}=\sum_{n=1}^{\infty} \lambda_{n}^{2s}\bigl|\langle h,\phi_{n} \rangle\bigr|^{2}< +\infty \Biggr\} , \\& \mathfrak{E}^{s}=\mathcal{D}\bigl(e^{sTA}\bigr)=\Biggl\{ h\in H : \|h\|_{\mathfrak {E}^{s}}^{2}=\sum_{n=1}^{\infty} e^{2Ts\lambda_{n}}\bigl|\langle h,\phi_{n}\rangle\bigr|^{2}<+\infty \Biggr\} . \end{aligned}$$

Remark 3.3

Observe that
$$\begin{aligned}& \begin{aligned}[b] &\forall n\geq1,\quad \biggl(\frac{\lambda_{1}}{1+T\lambda_{n}} \biggr)\leq \biggl(\frac{\lambda_{n}}{1+T\lambda_{n}} \biggr)\\ &\quad \Longrightarrow\quad\frac{1}{1+T\lambda_{n}}\leq \biggl( \frac{1}{\lambda _{1}} \biggr) \biggl(\frac{\lambda_{n}}{1+T\lambda_{n}} \biggr)\\ &\quad\Longrightarrow \quad\bigl\| u(0)\bigr\| ^{2} = \sum_{n=1}^{\infty} \biggl(\frac{1}{1+T\lambda _{n}} \biggr)^{2}e^{2T\lambda_{n}}|g_{n}|^{2} \leq \biggl( \frac{1}{\lambda_{1}} \biggr)^{2}\bigl\| Au(0)\bigr\| ^{2}; \end{aligned} \end{aligned}$$
(28)
$$\begin{aligned}& \begin{aligned}[b] &\forall n\geq1, \quad \biggl(\frac{\lambda_{1}}{1+T\lambda_{1}} \biggr)\leq \biggl(\frac{\lambda_{n}}{1+T\lambda_{n}} \biggr) \leq\frac{1}{T}\\ &\quad\Longrightarrow \quad \biggl(\frac{\lambda_{1}}{1+T\lambda_{1}} \biggr)^{2}\sum _{n=1}^{\infty}e^{2T\lambda_{n}}|g_{n}|^{2} \leq \sum_{n=1}^{\infty} \biggl(\frac{\lambda_{n}}{1+T\lambda_{n}} \biggr)^{2}e^{2T\lambda_{n}}|g_{n}|^{2} \\ &\hphantom{\quad\Longrightarrow \quad\biggl(\frac{\lambda_{1}}{1+T\lambda_{1}} \biggr)^{2} \sum_{n=1}^{\infty}e^{2T\lambda_{n}}|g_{n}|^{2}\ }\leq \biggl(\frac{1}{T} \biggr)^{2}\sum _{n=1}^{\infty}e^{2T\lambda_{n}}|g_{n}|^{2}. \end{aligned} \end{aligned}$$
(29)
Then we deduce that
$$ \bigl\| u(0)\bigr\| +\bigl\| Au(0)\bigr\| < \infty\quad\Longleftrightarrow\quad\bigl\| Au(0)\bigr\| < \infty \quad\Longleftrightarrow\quad\sum_{n=1}^{\infty}e^{2T\lambda _{n}}|g_{n}|^{2}< \infty. $$
(30)

Theorem 3.4

Problem (1) is conditionally well-posed on the set
$$M=\bigl\{ w(t)\in H : \bigl\| Aw(0)\bigr\| < \infty\bigr\} $$
if and only if
$$g\in\mathfrak{E}^{1}=\Biggl\{ h\in H : \sum _{n=1}^{\infty} e^{2T\lambda _{n}}\bigl|\langle h,\phi_{n}\rangle\bigr|^{2}< \infty\Biggr\} . $$
Moreover, if \(u(t)\in M_{\rho}\), then we have the Hölder continuity,
$$ \bigl\| u(t)\bigr\| \leq\Psi\bigl(\|g\|\bigr) =\gamma \rho^{\frac{T-t}{T}} \|g \| ^{\frac{t}{T}}. $$
(31)

4 Regularization by Kozlov-Maz’ya iteration method and error estimates

4.1 Description of the method

The iterative algorithm for solving the ill-posed problem (1) starts by letting \(f_{0}\in H\) be arbitrary. The first approximation \(u^{0}(t)\) is the solution to the direct problem
$$ \left \{ \begin{array}{@{}l} \mathcal{B}^{2}u^{0}(t) = (\frac{d}{dt}+A )^{2}u^{0}(t)=0, \quad 0< t\leq T,\\ u^{0}(0)=f_{0}, \qquad u^{0}_{t}(0)=0. \end{array} \right . $$
(32)
If the pair \((u^{k},f_{k})\) has been constructed, let
$$ (\mathrm{P})_{k+1}:\quad f_{k+1}=f_{k} -\omega \bigl(u^{k}(T)-g\bigr), $$
(33)
where ω is such that
$$0< \omega< \omega^{*}= \frac{1}{\|K\|}=\frac{e^{T\lambda _{1}}}{ ( 1+T\lambda_{1} )}, $$
where
$$\|K\|=\bigl\| \mathcal{R}(T,A)\bigr\| =\sup_{n\geq1}\bigl\{ ( 1+T\lambda _{n} )e^{-T\lambda_{n}}\bigr\} = ( 1+T\lambda_{1} )e^{-T\lambda_{1}}, $$
and \(\mathcal{R}(t,A)\) is the resolving operator associated to the direct well-posed biparabolic problem (5), given by the expression (11).
Finally, we get \(u^{k+1}\) by solving the problem
$$ \left \{ \begin{array}{@{}l} \mathcal{B}^{2}u^{k+1}(t) = (\frac{d}{dt}+A )^{2}u^{k+1}(t)=0, \quad 0< t\leq T,\\ u^{k+1}(0)=f_{k+1}, \qquad u^{k+1}_{t}(0)=0. \end{array} \right . $$
(34)
We set \(G= (I-\omega K)\). If we iterate backwards in \((\mathrm{P})_{k+1}\) we obtain
$$ f_{k} =G^{k} f_{0} +\omega\sum _{i=0}^{k-1}G^{i}g =G^{k}f_{0} +\bigl(I-G^{k}\bigr)K^{-1}g = G^{k}f_{0} + u(0)- G^{k}u(0). $$
(35)
This implies that
$$ f_{k}-u(0)= G^{k}\bigl(f_{0} -u(0) \bigr),\qquad u^{k}(t)-u(t)=\mathcal {R}(t;A)G^{k} \bigl(f_{0} -u(0)\bigr). $$
(36)
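Since \(\mathcal{R}(T;A)\) is diagonal in the eigenbasis, the whole scheme can be prototyped in the spectral domain. The Python/NumPy sketch below, with illustrative eigenvalues and initial datum, iterates \(f_{k+1}=f_{k}-\omega(Kf_{k}-g)\) componentwise and recovers \(u(0)\) from exact data:

```python
import numpy as np

T = 1.0
lam = np.array([1.0, 4.0])                   # illustrative eigenvalues
K = (1 + T*lam) * np.exp(-T*lam)             # u(T) = K u(0), mode by mode

f_true = np.array([1.0, 0.5])                # "unknown" initial datum u(0)
g = K * f_true                               # exact final datum

omega = 0.9 / K.max()                        # 0 < omega < 1/||K||
f = np.zeros_like(g)                         # arbitrary starting guess f_0
for _ in range(3000):
    f = f - omega * (K*f - g)                # f_{k+1} = f_k - omega (u^k(T) - g)

assert np.allclose(f, f_true, atol=1e-8)
```

High-frequency modes, for which \((1+T\lambda_{n})e^{-T\lambda_{n}}\) is tiny, converge very slowly; this is the spectral face of the logarithmic rates established below.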

Proposition 4.1

The operator \(G=(I-\omega K)\) is self-adjoint and non-expansive on H. Moreover, 1 is not an eigenvalue of G.

Proof

The self-adjointness follows from the definition of G. Since \(0<1-\omega(1+T\lambda)e^{-T\lambda}<1\) for \(\lambda\in\sigma(A)\), we have \(\sigma_{p}(G)\subset\,]0,1[\); hence 1 is not an eigenvalue of G. □

Remark 4.1

Let \(k\in\mathbb{N}^{*}\). Then we have
$$ \|G\|=\|I-\omega K\|\leq1 \quad\Longrightarrow\quad\Biggl\Vert \sum _{i=0}^{k-1}G^{i}\Biggr\Vert \leq\sum _{i=0}^{k-1}\bigl\| G^{i}\bigr\| \leq k. $$
(37)
In general, the exact solution \(u(0)=f\in H\) is required to satisfy a so-called source condition [21]; otherwise the convergence of the regularization method can be arbitrarily slow. To obtain a convergence rate, we assume the following source condition:
$$ \bigl(f_{0} -u(0)\bigr)\in\mathcal{D} \bigl(A^{1+\beta}\bigr), \quad\beta>0. $$
(38)

We provide the following lemma which will be used in the proof of convergence estimates.

Lemma 4.1

Let \(\sigma>0\), \(k\geq2\), and let Φ be the real-valued function defined by
$$ \Phi(\lambda)= \bigl(1-\omega(1+T\lambda)e^{-T\lambda} \bigr)^{k} \lambda ^{-\sigma},\quad \lambda\in[\lambda_{1}, \infty[, $$
(39)
where \(\lambda_{1} > 0\) and \(\omega< \omega^{*}=\frac{1}{(1+T\lambda _{1})e^{-T\lambda_{1}}}\). Then we have
$$ \Phi_{\infty}=\sup_{\lambda\geq\lambda_{1}}\Phi(\lambda)\leq C \biggl(\frac{1}{\ln(k)} \biggr)^{\sigma}. $$
(40)

Proof

We have
$$\Phi(\lambda)\leq\hat{\Phi}(\lambda)= \bigl(1-\omega(1+T\lambda _{1})e^{-T\lambda} \bigr)^{k}\lambda^{-\sigma},\quad \lambda\in[\lambda _{1}, \infty[. $$
For notational convenience and simplicity, we denote
$$\begin{aligned}& \mu=T\lambda, \qquad\tau=\omega(1+T\lambda_{1}),\qquad \mu_{1} =T\lambda_{1}, \\& \hat{\Phi}(\lambda)= \bigl(1-\tau e^{-\mu} \bigr)^{k} \bigl(T^{-1}\mu \bigr)^{-\sigma}= T^{\sigma} \bigl(1-\tau e^{-\mu} \bigr)^{k} \mu^{-\sigma}=T^{\sigma } \widetilde{\Phi}(\mu),\quad \mu\in[\mu_{1} , \infty[. \end{aligned}$$
The question now is to show that there exists a positive constant \(\mu ^{*}\) such that \(\widetilde{\Phi}(\mu)= (1-\tau e^{-\mu} )^{k} \mu^{-\sigma}\) is monotonically increasing in \([\mu_{1},\mu^{*}[\) and monotonically decreasing in \(]\mu^{*}, \infty[\). Since \(\widetilde{\Phi }(\mu)\) is continuously differentiable in \([\mu_{1} , \infty[\) and
$$\widetilde{\Phi}(\mu_{1})>0, \qquad \widetilde{\Phi}(\infty)=0,\qquad \widetilde{\Phi}(\mu)\geq0, $$
then the maximum of \(\widetilde{\Phi}(\mu)\) is attained at an interior point, which is a critical point of \(\widetilde{\Phi}(\mu)\). From
$$\frac{d\widetilde{\Phi}(\mu)}{d\mu}= \mu^{-\sigma-1} \bigl(1-\tau e^{-\mu} \bigr)^{k-1} \bigl\{ \tau(k\mu+\sigma )e^{-\mu}-\sigma \bigr\} $$
it follows that a critical point of \(\widetilde{\Phi}(\mu)\) in \(]\mu_{1} , \infty[\) satisfies
$$\tau(k\mu+\sigma)e^{-\mu}-\sigma=0 \quad\Longleftrightarrow\quad(k\mu+\sigma )e^{-\mu}-\frac{\sigma}{\tau}=0. $$
We introduce the auxiliary function
$$D(\mu)=(k\mu+\sigma)e^{-\mu}-\frac{\sigma}{\tau},\quad \mu\in[ \mu_{1} , \infty[. $$
For k sufficiently large, \(D(\mu^{*} =\ln(k))=\frac{k\ln(k)+\sigma }{k}-\frac{\sigma}{\tau}> 0\). For \(a > 1\) and k sufficiently large, we have \(D(\mu^{**} =\ln(k^{a}))=\frac{ak\ln(k)+\sigma}{k^{a}}-\frac {\sigma}{\tau}< 0\). Therefore, there exists \(\hat{k}(a)\) such that
$$\begin{aligned}& D\bigl(\ln(k)\bigr)> 0,\quad \forall k\geq \hat{k}(a), \\& D\bigl(\ln\bigl(k^{a}\bigr)\bigr)< 0,\quad \forall k\geq \hat{k}(a). \end{aligned}$$
Consequently the zero ν of \(D(\mu)\), which is the critical point of \(\widetilde{\Phi}(\mu)\), must lie between \(\mu ^{*}=\ln(k)\) and \(\mu^{**}=\ln(k^{a})\), i.e., \(\nu\in\,]\mu ^{*}, \mu^{**}[\). Now let \(k\geq\max(2, \hat{k}(a))\). Then we have
$$\sup_{\mu\in[\mu_{1}, +\infty[}\widetilde{\Phi}(\mu)=\widetilde {\Phi}(\nu)= \bigl(1-\tau e^{-\nu} \bigr)^{k} \nu^{-\sigma}\leq \nu^{-\sigma} \leq \bigl(\mu^{*} \bigr)^{-\sigma} = \bigl( \ln(k) \bigr)^{-\sigma}. $$
Thus, the upper bound of \(\Phi(\lambda)\) can be estimated as follows:
$$\sup_{\lambda\in[\lambda_{1},+\infty[}\Phi(\lambda)\leq \sup_{\lambda\in[\lambda_{1},+\infty[} \hat{\Phi}(\lambda)= T^{\sigma}\sup_{\mu\in[\mu_{1}, +\infty[}\widetilde{ \Phi}(\mu) \leq T^{\sigma} \bigl(\ln(k) \bigr)^{-\sigma}. $$
 □
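Tracking the constants in the proof gives \(C=T^{\sigma}\) in (40). A quick numerical check of this bound on a truncated grid (Python/NumPy; the parameter values are illustrative assumptions):

```python
import numpy as np

T, lam1, sigma = 1.0, 1.0, 2.0
omega = 0.5 / ((1 + T*lam1) * np.exp(-T*lam1))      # omega < omega*
lam = np.linspace(lam1, 200.0, 400000)              # grid for [lambda_1, infinity)

for k in [10, 100, 1000]:
    Phi = (1 - omega*(1 + T*lam)*np.exp(-T*lam))**k * lam**(-sigma)
    # sup Phi <= T^sigma (ln k)^{-sigma}, as in Lemma 4.1
    assert Phi.max() <= (T / np.log(k))**sigma
```

The maximizer drifts to the right roughly like \(\ln k\), exactly as the auxiliary function \(D(\mu)\) in the proof predicts.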

Now we are in a position to state the main result of this method.

Theorem 4.1

Let \(g\in\mathfrak{E}^{1}\), let ω satisfy \(0< \omega< \omega ^{*}\), and let \(f_{0}\in H^{1}\) be an arbitrary initial element for the iterative procedure suggested above, with \(u^{k}\) the kth approximate solution. Then we have
$$ \sup_{t\in[0,T]} \bigl\| u(t)-u^{k}(t)\bigr\| \longrightarrow0,\quad k\longrightarrow\infty. $$
(41)
Moreover, if \((f_{0} - u(0))\in H^{\sigma}\), \(\sigma= \beta+1\) (\(\beta >0\)), i.e.,
$$\bigl\| f_{0} - u(0)\bigr\| _{H^{\sigma}}^{2}=\sum _{n=1}^{\infty} \lambda_{n}^{2\sigma}\bigl|\bigl\langle f_{0}-u(0),\phi_{n}\bigr\rangle \bigr|^{2}\leq E^{2}, $$
then the rate of convergence of the method is given by
$$ \sup_{t\in[0,T]}\bigl\| u(t)-u^{k}(t)\bigr\| \leq C E \biggl( \frac{1}{\ln(k)} \biggr)^{1+\beta}, \quad k\geq2. $$
(42)

Proof

By virtue of Proposition 4.1, Theorem 2.2, and the estimate (13), it follows immediately that
$$\begin{aligned} \sup_{t\in[0,T]} \bigl\| u(t)-u^{k}(t)\bigr\| = & \sup _{t\in[0,T]}\bigl\| \mathcal {R}(t;A)G^{k}\bigl(f_{0} -u(0)\bigr)\bigr\| \leq \sup_{t\in[0,T]}\bigl\| \mathcal{R}(t;A)\bigr\| \bigl\| G^{k} \bigl(f_{0} -u(0)\bigr)\bigr\| \\ \leq& \bigl\| G^{k}\bigl(f_{0} -u(0)\bigr)\bigr\| \longrightarrow0, \quad k\longrightarrow\infty. \end{aligned}$$
We have
$$\begin{aligned}[b] \bigl\| u(t)-u^{k}(t)\bigr\| ^{2}&= \bigl\| \mathcal{R}(t;A)G^{k} \bigl(f_{0} -u(0)\bigr)\bigr\| ^{2} \\ &\leq \bigl\| \mathcal{R}(t;A)\bigr\| ^{2}\bigl\| G^{k}\bigl(f_{0} -u(0)\bigr)\bigr\| ^{2} \leq \sum_{n=1}^{\infty} \Phi(\lambda_{n})^{2}\lambda_{n}^{2\sigma }\bigl| \bigl\langle f_{0}-u(0),\phi_{n} \bigr\rangle \bigr|^{2} \\ & \leq \Bigl(\sup_{n}\Phi(\lambda_{n}) \Bigr)^{2}\bigl\| f_{0} - u(0)\bigr\| _{H^{\sigma}}^{2} \leq \Bigl(\sup_{n}\Phi(\lambda_{n}) \Bigr)^{2}E^{2}, \end{aligned} $$
and by virtue of Lemma 4.1 (estimate (40)), we conclude the desired estimate. □

Remark 4.2

Under the conditions \((f_{0}-u(0))\in H^{\sigma}\), \(\sigma=1+\beta\), \(\beta>0\), and
$$\sup_{\lambda\geq\lambda_{1}} \bigl\{ \lambda\Phi(\lambda) \bigr\} \leq C \biggl( \frac{1}{\ln(k)} \biggr)^{\beta}, $$
we can write
$$ \bigl\| A\bigl(u(t)-u^{k}(t)\bigr)\bigr\| \leq CE \biggl( \frac{1}{\ln(k)} \biggr)^{\beta}. $$
(43)

Proof

$$\begin{aligned} \bigl\| A\bigl(u(t)-u^{k}(t)\bigr)\bigr\| ^{2} =& \bigl\| A \mathcal{R}(t;A)G^{k}\bigl(f_{0} -u(0)\bigr)\bigr\| ^{2} \\ \leq& \bigl\| \mathcal{R}(t;A)\bigr\| ^{2}\bigl\| AG^{k} \bigl(f_{0} -u(0)\bigr)\bigr\| ^{2} \\ \leq& \sum_{n=1}^{\infty} \bigl\{ \lambda_{n}\Phi(\lambda_{n}) \bigr\} ^{2} \lambda_{n}^{2\sigma}\bigl|\bigl\langle f_{0}-u(0), \phi_{n} \bigr\rangle \bigr|^{2} \\ \leq& \Bigl(\sup_{n} \lambda_{n}\Phi( \lambda_{n}) \Bigr)^{2}\bigl\| f_{0} - u(0) \bigr\| _{H^{\sigma}}^{2} \\ \leq& \Bigl(\sup_{n}\lambda_{n}\Phi( \lambda_{n}) \Bigr)^{2}E^{2} \\ \leq& \biggl\{ CE \biggl(\frac{1}{\ln(k)} \biggr)^{\beta} \biggr\} ^{2}. \end{aligned}$$
 □

Theorem 4.2

Let \(g\in\mathfrak{E}^{1}\), let ω satisfy \(0< \omega< \omega^{*}\), and let \(f_{0}\in H^{1}\) be an arbitrary initial element for the iterative procedure suggested above. Let \(u^{k}\) (resp. \(u_{\delta}^{k}\)) be the kth approximate solution for the exact data g (resp. for the inexact data \(g^{\delta}\)) such that \(\|g-g^{\delta }\|\leq\delta\). Then, under the condition (38), the following inequality holds:
$$\sup_{t\in[0,T]}\bigl\| u(t)-u_{\delta}^{k}(t)\bigr\| \leq C E \biggl( \frac {1}{\ln(k)} \biggr)^{1+\beta}+\varepsilon(k)\delta, $$
where \(\varepsilon(k)=\|\omega\sum_{i=0}^{k-1}(I-\omega K)^{i}\| \leq k\omega\).

Proof

Using (35) and the triangle inequality, we can write
$$\begin{aligned}& f^{k}= G^{k}f_{0} + \omega\sum _{i=0}^{k-1}G^{i}g,\qquad u^{k}(t)=\mathcal{R}(t;A)f^{k}, \end{aligned}$$
(44)
$$\begin{aligned}& f^{k}_{\delta}= G^{k}f_{0} + \omega\sum _{i=0}^{k-1}G^{i}g^{\delta }, \qquad u_{\delta}^{k}(t)=\mathcal{R}(t;A)f^{k}_{\delta}, \\& \bigl\| u(t)-u_{\delta}^{k}(t)\bigr\| =\bigl\| \bigl(u(t)-u^{k}(t) \bigr)+\bigl(u^{k}(t)-u_{\delta }^{k}(t)\bigr)\bigr\| \leq \Delta_{1} +\Delta_{2}, \end{aligned}$$
(45)
where
$$ \Delta_{1} =\bigl\| u(t)-u^{k}(t)\bigr\| \leq \bigl\| u(t)-u^{k}(t)\bigr\| _{\infty}\leq C E \biggl(\frac{1}{\ln(k)} \biggr)^{1+\beta}, \quad k\geq2, $$
(46)
and
$$\begin{aligned} \Delta_{2} =& \bigl\| u^{k}(t)-u_{\delta}^{k}(t) \bigr\| =\bigl\| \mathcal {R}(t;A) \bigl(f^{k}-f^{k}_{\delta} \bigr)\bigr\| =\Biggl\Vert \omega\mathcal{R}(t;A)\sum_{i=0}^{k-1}G^{i} \bigl(g-g^{\delta}\bigr)\Biggr\Vert \\ \leq& \Biggl\Vert \omega\sum_{i=0}^{k-1}G^{i} \bigl(g-g^{\delta}\bigr)\Biggr\Vert \leq\Biggl\Vert \omega\sum _{i=0}^{k-1}G^{i}\Biggr\Vert \delta=\hat {\Delta}_{2}. \end{aligned}$$
By using inequality (37), the quantity \(\hat{\Delta}_{2}\) can be estimated as follows:
$$ \hat{\Delta}_{2} \leq\omega k \delta. $$
(47)
Combining (46) and (47) and taking the supremum with respect to \(t\in[0,T]\) of \(\|u(t)-u_{\delta}^{k}(t)\|\), we obtain the desired bound. □

Remark 4.3

Choosing \(k=k(\delta)\) such that \(\omega k(\delta)\delta \longrightarrow0\) as \(\delta\longrightarrow0\), we obtain
$$\sup_{t\in[0,T]}\bigl\| u^{k(\delta)}(t)-u_{\delta}^{k(\delta)}(t) \bigr\| \longrightarrow 0 \quad\mbox{as } \delta\longrightarrow0. $$
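The two error contributions of Theorem 4.2 can be separated numerically. In the spectral sketch below (Python/NumPy, illustrative values), the distance between the iterates for exact and perturbed data indeed stays below \(\omega k\|g-g^{\delta}\|\):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
lam = np.array([1.0, 4.0, 9.0, 16.0])
K = (1 + T*lam) * np.exp(-T*lam)

f_true = np.array([1.0, -0.5, 0.25, 0.1])
g = K * f_true
g_delta = g + 1e-3 * rng.standard_normal(4)        # perturbed data

omega = 0.9 / K.max()
f, f_d = np.zeros(4), np.zeros(4)
k = 500
for _ in range(k):
    f = f - omega * (K*f - g)
    f_d = f_d - omega * (K*f_d - g_delta)

# Propagated data error: ||f_k - f_k^delta|| <= omega * k * ||g - g^delta||
assert np.linalg.norm(f - f_d) <= omega * k * np.linalg.norm(g - g_delta)
```

For the weakly damped modes the bound is nearly attained, which is why the stopping index \(k(\delta)\) must balance the two terms.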

5 Numerical results

In this section we give a numerical test in one space dimension to show the feasibility and efficiency of the proposed method. The numerical experiments were carried out in Matlab.

We consider the following inverse problem:
$$ \left \{ \begin{array}{@{}l} (\frac{\partial}{\partial t}-\frac{\partial^{2}}{\partial x^{2}} )^{2}u(x,t)=0, \quad x\in(0,\pi), t\in(0,1), \\ u(0,t)=u(\pi,t)=0, \quad t\in(0,1), \\ u(x,1)=g(x),\qquad u_{t}(x,0)=0,\quad x\in[0,\pi], \end{array} \right . $$
(48)
where \(f(x)=u(x,0)\) is the unknown initial condition and \(u(x,1) = g(x)\) is the final condition.
It is easy to check that the operator
$$A= -\frac{\partial^{2}}{\partial x^{2}},\qquad \mathcal {D}(A)=H_{0}^{1}(0, \pi)\cap H^{2}(0,\pi)\subset H=L^{2}(0,\pi), $$
is positive, self-adjoint with compact resolvent (A is diagonalizable).
The eigenpairs \((\lambda_{n}, \phi_{n})\) of A are
$$\lambda_{n} =n^{2}, \qquad\phi_{n}(x)=\sqrt{ \frac{2}{\pi}}\sin(nx), \quad n\in \mathbb{N}^{*}. $$
In this case, (15) takes the form
$$ f(x)=u(x,0)= \frac{2}{\pi}\sum_{n=1}^{+\infty} \frac{1}{1+n^{2}} e^{n^{2}} \biggl(\int_{0}^{\pi}g(x) \sin(nx)\,dx \biggr)\sin(nx). $$
(49)

In the following, we consider an example for which the solution pair \((u(x,t), f(x))\) has an exact expression.

Example

If \(u(x,0)= \phi_{1}(x)=\sqrt{\frac{2}{\pi}}\sin(x)\), then the function
$$u(x,t)=\sum_{n=1}^{\infty}(1+t \lambda_{n})e^{-t\lambda_{n}}\langle \phi_{1}, \phi_{n}\rangle\phi_{n}(x) = (1+t\lambda_{1})e^{-t\lambda_{1}} \phi_{1}(x)= \sqrt{\frac{2}{\pi}} (1+t\lambda_{1})e^{-t\lambda_{1}} \sin(x) $$
is the exact solution of the problem (48). Consequently, the data function is \(g(x) = u(x,1)=\sqrt{\frac{2}{\pi}} \frac{2}{e}\sin(x)\).
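As a sanity check, the series (49) can be evaluated for the data of this example. Because the amplification factor \(e^{n^{2}}/(1+n^{2})\) grows explosively (this is precisely the ill-posedness), the sum must be truncated after a few terms so that rounding errors in the coefficients are not blown up; the truncation level below is our choice, made for illustration. A minimal Python sketch:

```python
import numpy as np

N = 2000
h = np.pi / (N + 1)
x = h * np.arange(1, N + 1)                  # interior grid points of (0, pi)

g = np.sqrt(2/np.pi) * (2/np.e) * np.sin(x)  # exact data g(x) = u(x, 1)

# Truncated series (49): keep only the first few terms, since the factor
# e^{n^2}/(1+n^2) amplifies any coefficient error catastrophically.
f = np.zeros_like(x)
for n in range(1, 5):
    coeff = h * np.sum(g * np.sin(n * x))    # ~ int_0^pi g(x) sin(nx) dx
    f += (2/np.pi) * np.exp(n**2) / (1 + n**2) * coeff * np.sin(n * x)

f_exact = np.sqrt(2/np.pi) * np.sin(x)       # phi_1(x) = u(x, 0)
err = np.max(np.abs(f - f_exact))
print(err)                                    # only truncation/rounding error
```

For this single-mode example the n = 1 term already reproduces \(f=\phi_{1}\); with noisy data the same truncated sum would instead magnify the noise, which motivates the iterative regularization below.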
Kozlov-Maz’ya iteration method. Using the central difference scheme with step length \(h= \frac{\pi}{N+1}\) to approximate the second spatial derivative \(u_{xx}\), we obtain the following semi-discrete problem (a system of ordinary differential equations):
$$ \left \{ \begin{array}{@{}l} (\frac{d}{dt}+\mathbb{A}_{h} )^{2}u(x_{i},t)=0, \quad x_{i} =ih, i=1,\ldots, N, t\in(0,1), \\ u(x_{0}=0,t)=u(x_{N+1}=\pi,t)=0, \quad t\in(0,1), \\ u(x_{i},1)=g(x_{i}), \qquad u_{t}(x_{i},0)=0, \quad x_{i} =ih, i=1,\ldots, N, \end{array} \right . $$
(50)
where \(\mathbb{A}_{h}\) is the discretization matrix stemming from the operator \(A=-\frac{d^{2}}{dx^{2}}\):
$$\mathbb{A}_{h}=\frac{1}{h^{2}} \operatorname{Tridiag}(-1,2,-1)\in \mathcal {M}_{N}(\mathbb{R}) $$
is a symmetric, positive definite matrix. We assume that the grid is fine enough that the discretization error is small compared with the uncertainty δ in the data; in this regime \(\mathbb{A}_{h}\) is a good approximation of the differential operator \(A=-\frac{d^{2}}{dx^{2}}\), whose unboundedness is reflected in the large norm of \(\mathbb{A}_{h}\). The eigenpairs \((\mu_{k}, e_{k})\) of \(\mathbb{A}_{h}\) are given by
$$\mu_{k}=4 \biggl(\frac{N+1}{\pi} \biggr)^{2} \sin^{2} \biggl(\frac{k\pi }{2(N+1)} \biggr),\qquad e_{k}= \biggl( \sin \biggl(\frac{jk\pi}{N+1} \biggr) \biggr)_{j=1}^{N}, \quad k=1,\ldots,N. $$
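These closed-form eigenvalues can be checked against a direct eigensolver. A short sketch (using numpy, with N = 40 as in Table 1):

```python
import numpy as np

N = 40
h = np.pi / (N + 1)

# A_h = (1/h^2) * Tridiag(-1, 2, -1), symmetric positive definite
A_h = (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

k = np.arange(1, N + 1)
mu = 4 * ((N + 1)/np.pi)**2 * np.sin(k*np.pi/(2*(N + 1)))**2  # closed form

# eigvalsh returns eigenvalues in ascending order, matching mu's ordering
gap = np.max(np.abs(np.linalg.eigvalsh(A_h) - mu))
print(gap)  # agreement up to floating-point rounding
```

Note that \(\mu_{1}\approx\lambda_{1}=1\) while \(\mu_{N}=O(h^{-2})\), so the norm of \(\mathbb{A}_{h}\) indeed grows as the grid is refined.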
Adding a normally distributed random perturbation (generated by the Matlab command randn) to the data function, we obtain the noisy vector \(g^{\delta}\):
$$g^{\delta} =g+\varepsilon\operatorname{randn}\bigl(\mathrm{size}(g)\bigr), $$
where ε indicates the noise level of the measurement data and ‘\(\operatorname{randn}(\cdot)\)’ generates arrays of random numbers whose elements are normally distributed with mean 0 and variance 1. ‘\(\operatorname{randn}(\mathrm{size}(g))\)’ returns an array of random entries of the same size as g. The bound δ on the measurement error is computed as the root mean square error (RMSE):
$$\delta=\bigl\| g^{\delta}-g\bigr\| _{*}= \Biggl( \frac{1}{N}\sum _{i=1}^{N} \bigl(g(x_{i})-g^{\delta}(x_{i}) \bigr)^{2} \Biggr)^{1/2}. $$
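In Python, the same perturbation and RMSE computation might look as follows, with numpy's standard_normal playing the role of randn; the seed is an arbitrary choice of ours for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(1)               # fixed seed, arbitrary choice
N = 40
h = np.pi / (N + 1)
x = h * np.arange(1, N + 1)

g = np.sqrt(2/np.pi) * (2/np.e) * np.sin(x)  # exact data from the example
eps = 0.01                                    # noise level
g_delta = g + eps * rng.standard_normal(g.shape)  # g + eps*randn(size(g))

delta = np.sqrt(np.mean((g - g_delta)**2))   # RMSE bound on the data error
print(delta)                                  # close to eps
```

Since the perturbations are i.i.d. with standard deviation ε, the RMSE δ concentrates near ε, which is why ε is referred to as the noise level.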
The discrete iterative approximation of (45) takes the form
$$ f^{\delta}_{k}(x_{j}) =(I-\omega K_{h})^{k} f_{0}(x_{j}) +\omega\sum _{i=0}^{k-1}(I-\omega K_{h})^{i}g^{\delta}(x_{j}), \quad j=1,\ldots,N, $$
(51)
where \(K_{h} = (I_{N}+\mathbb{A}_{h})e^{-\mathbb{A}_{h}}\) and \(\omega< \omega^{*} = \frac{1}{\|K_{h}\|} = \frac{e^{\mu_{1}}}{1+\mu_{1}}\).
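Putting the pieces together, the discrete iteration (51) can be run in a few lines. The sketch below uses the equivalent fixed-point update \(f \leftarrow f + \omega(g^{\delta} - K_{h}f)\) with \(f_{0}=0\), takes ω = 0.83697 as in Table 1, and computes the matrix exponential with scipy's expm; the random seed is our arbitrary choice, so the printed error will differ slightly from the table.

```python
import numpy as np
from scipy.linalg import expm

N = 40
h = np.pi / (N + 1)
x = h * np.arange(1, N + 1)

# discretized operator and K_h = (I + A_h) e^{-A_h}
A_h = (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
K_h = (np.eye(N) + A_h) @ expm(-A_h)

omega = 0.83697                                # relaxation factor of Table 1

g = np.sqrt(2/np.pi) * (2/np.e) * np.sin(x)    # exact data
rng = np.random.default_rng(0)
g_delta = g + 0.01 * rng.standard_normal(N)    # noise level eps = 0.01

f = np.zeros(N)                                # f_0 = 0
for _ in range(5):                             # k = 5 iterations
    f = f + omega * (g_delta - K_h @ f)        # fixed-point form of (51)

f_exact = np.sqrt(2/np.pi) * np.sin(x)         # the sought initial condition
E_r = np.linalg.norm(f - f_exact) / np.linalg.norm(f_exact)
print(E_r)                                      # same order as Table 1
```

The update form is algebraically identical to (51): unrolling \(f_{k+1}=(I-\omega K_{h})f_{k}+\omega g^{\delta}\) from \(f_{0}\) recovers the stated sum.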
Figures 1-4 show the comparison between the exact solution and its computed approximations for different values of N (number of grid points), k (number of iterations), ω (relaxation factor), and ε (noise level), where \(E_{r}(f)= \frac{\|f_{\mathrm{approximate}}-f_{\mathrm{exact}}\|_{*}}{\|f_{\mathrm{exact}}\|_{*}}\) denotes the relative error.
Figure 1 Noise level = 1/100, iter = 5.

Figure 2 Noise level = 1/100, iter = 6.

Figure 3 Noise level = 1/1,000, iter = 5.

Figure 4 Noise level = 1/1,000, iter = 6.

6 Conclusion

The numerical results (Figures 1-4, Table 1) are quite satisfactory: even with the noise level \(\varepsilon =0.01\), the numerical solutions remain in good agreement with the exact solution.
Table 1 Kozlov-Maz’ya method: relative error \(E_{r}(f)\)

N     k     ε       ω          \(E_{r}(f)\)
40    5     0.01    0.83697    0.0523
40    6     0.01    0.83697    0.0612
40    5     0.001   0.83697    0.0054
40    6     0.001   0.83697    0.0162

In this study, a convergent and stable reconstruction of an unknown initial condition has been obtained using the Kozlov-Maz’ya iteration method. Both theoretical and numerical studies have been provided.

Declarations

Acknowledgements

This work is supported by the MESRS of Algeria (CNEPRU Project B01120090003).

Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors’ Affiliations

(1)
Laboratory of Applied Mathematics and Modeling, University 8 Mai 1945 Guelma
(2)
Department of Mathematics, University 8 Mai 1945 Guelma
(3)
Applied Mathematics Laboratory, University Badji Mokhtar Annaba


Copyright

© Lakhdari and Boussetila; licensee Springer. 2015