Two-parameter regularization method for an axisymmetric inverse heat problem

Abstract

In this paper we consider the inverse time problem for the axisymmetric heat equation, which is a severely ill-posed problem. Using the modified quasi-boundary value (MQBV) method with two regularization parameters, one accounting for the error in the measurement process and the other for the regularity of the solution, we regularize this problem and obtain a Hölder-type error estimate over the whole time interval. Numerical results are presented to illustrate the accuracy and efficiency of the method.

1 Introduction

Partial differential equations (PDEs) with various types of boundary conditions are a powerful and useful tool for modeling natural phenomena. For time-dependent phenomena, the equation is usually supplemented by a time condition (an initial or a final condition), which can be regarded as the data. In a time-inverse problem the goal is to reconstruct the whole structure at earlier times from the final data. Such problems were widely investigated by Tikhonov and Arsenin [1] and Glasko [2]. A classical example is the backward heat conduction problem (BHCP): given the information at a specific point of time, say \(t=T\), the goal is to recover the corresponding structure at an earlier time \(t< T\).

The BHCP is difficult to solve since, in general, a solution does not always exist; moreover, even when it exists, it does not depend continuously on the data, which makes numerical computation challenging. The BHCP has been studied by many authors using a variety of methods [3–19]. For the BHCP with a constant coefficient, a rich literature is available. Trong and Tuan [12] used an integral equation method to regularize the backward heat conduction problem and obtained error estimates. In [16], Hao et al. gave a very good approximation for this problem using a non-local boundary value problem method. Later, Hao and Duc [17] applied Tikhonov regularization to approximate the problem in Banach spaces. Tautenhahn [10] established an optimal error estimate for a backward heat equation with constant coefficient. Fu et al. [7] applied a wavelet dual least squares method to a BHCP with constant coefficient.

Most available results on the BHCP concern the heat equation on a rectangle or a box in \(\mathbb {R}^{n}\). In this article, we consider a backward heat equation in an infinitely long cylinder. The physical model is an infinitely long cylinder of radius \(r_{0}\) whose temperature field is axisymmetric and whose surface temperature is kept at zero [18]. The corresponding mathematical model is the following axisymmetric BHCP:

$$ \left \{ \textstyle\begin{array}{l} \frac{{\partial u}}{{\partial t}} = \frac{{{\partial^{2}}u}}{{\partial {r^{2}}}} + \frac{1}{r}\frac{{\partial u}}{{\partial r}},\quad 0 < r < {r_{0}},0 < t < T, \\ u({r_{0}},t) = 0, \quad 0 \le t \le T, \\ u(r,t) \quad \mbox{bounded as }r \to0, 0 \le t \le T, \\ u(r,T) = g(r), \quad 0 \le r \le{r_{0}}, \end{array}\displaystyle \right . $$
(1.1)

where r is the radial coordinate and \(g(r)\) denotes the final temperature of the cylinder. Our goal is to recover the temperature distribution \(u(\cdot,t)\) for \(0 \le t \le T\). As in the constant-coefficient case, the axisymmetric BHCP is ill-posed: a small perturbation in the final data may cause dramatically large errors in the solution, so a regularization method is essential. In [18], under an a priori condition on the solution, a spectral method with a regularizing filter function was used to approximate problem (1.1), giving an error of order \(\varepsilon^{t/T}\).
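
To make the severity of this instability concrete, the following sketch (our own illustration in Python with NumPy/SciPy, not part of the analysis of [18]) evaluates the amplification factor \(e^{(\mu_{n}/r_{0})^{2}(T-t)}\) by which the n-th Fourier–Bessel coefficient of the final data is multiplied when the equation is solved backward; see (2.1) below. The parameter values are illustrative.

```python
import numpy as np
from scipy.special import jn_zeros

# Backward amplification of the n-th data mode: by (2.1), the coefficient
# g_n is multiplied by exp((mu_n / r0)^2 (T - t)) when recovering u(., t).
r0, T, t = 2.0, 1.0, 0.0      # illustrative radius, final time, target time
mu = jn_zeros(0, 10)          # first 10 positive roots of J_0

for n, mun in enumerate(mu, start=1):
    factor = np.exp((mun / r0) ** 2 * (T - t))
    print(f"n = {n:2d}  mu_n = {mun:8.4f}  amplification = {factor:.3e}")
```

Already the tenth mode is amplified by a factor of order \(10^{100}\), so any measurement noise in the high modes destroys the naive backward solution.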

The quasi-boundary value (QBV) method is a well-known method introduced by Showalter (see [3, 4]). Its main idea is to add an appropriate ‘corrector’ to the final data. Using this method, Clark and Oppenheimer [5], and very recently Denche and Bessila [20], regularized the backward problem by replacing the final condition by

$$ u(T) + \varepsilon u(0) = g $$
(1.2)

and

$$ u(T) - \varepsilon u'(0) = g. $$
(1.3)

In the present paper, we apply the QBV method with a modification, the so-called modified quasi-boundary value (MQBV) method, to regularize problem (1.1) and obtain a Hölder-type error estimate. The MQBV method introduces two regularization parameters: the first, ε, accounts for the measurement error, and the second, τ, for the regularity of the solution. In fact, we approximate (1.1) by the following regularized problem:

$$ \left \{ \textstyle\begin{array}{l} \frac{{\partial{u^{\varepsilon,\tau} }}}{{\partial t}}(r,t) - \frac {{{\partial^{2}}{u^{\varepsilon,\tau} }}}{{\partial{r^{2}}}}(r,t) - \frac{1}{r}\frac{{\partial{u^{\varepsilon,\tau} }}}{{\partial r}}(r,t) = 0, \quad r \in(0,r_{0}), t \in(0,T), \\ {u^{\varepsilon,\tau} }({r_{0}},t) = 0, \quad 0 \le t \le T, \\ {u^{\varepsilon,\tau} }(r,t)\quad \mbox{bounded as }r \to0, 0 \le t \le T, \\ {u^{\varepsilon,\tau} }(r,T) = \sum_{n = 1}^{\infty}{{g_{n}} ( {\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T + \tau)}}}}{{\varepsilon{{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T + \tau)}}}}} ){J_{0}} ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} )} , \end{array}\displaystyle \right . $$

where \(\mu_{n}\) denotes the n-th positive root of the Bessel function \(J_{0}(x)\) and \(\tau \ge 0\) is the second regularization parameter. The rest of the paper is organized as follows. In Section 2, we state the backward problem for the axisymmetric heat equation. In Section 3, we formulate the regularized problem and provide an error estimate between the solutions of the two problems. Finally, Section 4 presents numerical examples that illustrate the efficiency of our method.

2 Statement of the problem

Throughout this paper, we denote by \(L^{2}(0,r_{0};r)\) the Hilbert space of Lebesgue measurable functions f with weight r on \([0,r_{0}]\). \(\langle\cdot,\cdot\rangle\) and \(\Vert \cdot \Vert \) denote inner product and norm on \(L^{2}(0,r_{0};r)\), respectively. Specifically, the norm and inner product in \(L^{2}(0,r_{0};r)\) are defined as follows:

$$ \Vert f \Vert : = {\Vert f \Vert _{{L^{2}}(0,r_{0};{r})}} = { \biggl( { \int _{0}^{{r_{0}}} {r{{\bigl\vert {f(r)} \bigr\vert }^{2}}\,dr} } \biggr)^{1/2}},\qquad \langle{f,g} \rangle = \int _{0}^{{r_{0}}} {rf(r)g(r)\,dr} $$

for \(f,g\in L^{2}(0,r_{0};{r})\).
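
For the numerical experiments in Section 4, this weighted norm and inner product can be approximated by one-dimensional quadrature. Here is a minimal sketch (ours; the names `inner`, `norm`, and the radius value are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros

R0 = 2.0  # illustrative cylinder radius

def inner(f, g, r0=R0):
    """<f, g> = int_0^{r0} r f(r) g(r) dr on L^2(0, r0; r)."""
    val, _ = quad(lambda r: r * f(r) * g(r), 0.0, r0)
    return val

def norm(f, r0=R0):
    """||f|| = (int_0^{r0} r |f(r)|^2 dr)^(1/2)."""
    return np.sqrt(inner(f, f, r0))

# Sanity check: the eigenfunctions J_0(mu_n r / r0) of Section 2 are
# orthogonal in this weighted space, so this should print ~0.
mu = jn_zeros(0, 2)
print(inner(lambda r: j0(mu[0] * r / R0), lambda r: j0(mu[1] * r / R0)))
```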

Let us first make precise what we mean by a solution of problem (1.1). By a solution of (1.1) we mean a function \(u(r,t)\) that satisfies (1.1) in the classical sense and that, for every fixed \(t \in[0, T]\), belongs to \(L^{2}(0,r_{0};r)\) as a function of r. In this class of functions, if a solution of problem (1.1) exists, then it is unique (see [21]).

Theorem 1

Cheng and Fu [18]

The original problem (1.1) is equivalent to the following integral equation:

$$ u(r,t) = \sum_{n = 1}^{\infty}{g_{n}} {e^{{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T - t)}} {J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr), $$
(2.1)

where \({J_{0}}(x) = \sum_{m = 0}^{\infty}{\frac{{{{( - 1)}^{m}}}}{{{{(m!)}^{2}}}}} { ( {\frac{x}{2}} )^{2m}}\) and

$$ {g_{n}} = \frac{2}{{r_{0}^{2}J_{1}^{2}({\mu_{n}})}} \int _{0}^{{r_{0}}} rg(r){J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)\,dr, \quad n = 1,2,3,\ldots. $$

Proof

We will find a solution of the form

$$ u(r,t) = P(t)Q(r). $$
(2.2)

By substituting (2.2) into (1.1), for \(0< r\le r_{0}\), \(Q(r)\) must satisfy

$$\begin{aligned}& Q''(r) + \frac{1}{r}Q'(r) + \lambda Q(r) = 0, \end{aligned}$$
(2.3)
$$\begin{aligned}& Q({r_{0}}) = 0, \end{aligned}$$
(2.4)
$$\begin{aligned}& \bigl\vert {Q(0)} \bigr\vert < \infty, \end{aligned}$$
(2.5)

where λ is an unknown constant.

Similarly, for \(0< t\le T\), \(P(t)\) must satisfy

$$ P'(t) + \lambda P(t) = 0. $$
(2.6)

It is well known that the eigenvalues λ of problem (2.3)-(2.5) are nonnegative (see [22]). Moreover, zero is not an eigenvalue: if \(\lambda= 0\), then conditions (2.4)-(2.5) force \(Q(r)\equiv0\). For \(\lambda>0\), following [11], the general solution of equation (2.3) takes the form

$$ Q(r) = {c_{1}} {J_{0}} ( {r\sqrt{\lambda}} ) + {c_{2}} {Y_{0}} ( {r\sqrt{\lambda}} ), $$
(2.7)

where \(J_{0}(z)\) and \(Y_{0}(z)\) denote the Bessel functions of order zero of the first and the second kind, respectively. Since \(\lim_{x \to0} {Y_{0}}(x) = - \infty\), the boundedness condition (2.5) forces \(c_{2}=0\). In addition, the boundary condition \(Q({r_{0}}) = 0\) tells us that

$${c_{1}} {J_{0}} ( {{r_{0}}\sqrt{\lambda}} ) = 0. $$

The roots of \(J_{0}(x)\) form an increasing sequence \(\{ {{\mu_{n}}} \}_{n = 1}^{\infty}\), which satisfies

$$0 < {\mu_{1}} < {\mu_{2}} < \cdots < {\mu_{n}} < \cdots $$

and \(\lim_{n \to\infty} {\mu_{n}} = + \infty\). Thus, the eigenvalues of problem (2.3)-(2.5) form the sequence

$$ {\lambda_{n}} = { \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} \biggr)^{2}}, \quad n = 1,2,3,\ldots $$
(2.8)

and the corresponding eigenfunctions are

$$ {Q_{n}}(r) = {J_{0}} \biggl( { \frac{{{\mu _{n}}}}{{{r_{0}}}}r} \biggr),\quad n = 1,2,3,\ldots. $$
(2.9)

On the other hand, with \(\lambda = \lambda_{n}\), the solution of (2.6) takes the form

$$ {P_{n}}(t) = {a_{n}} {e^{ - {{ ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}t}}. $$
(2.10)

Combining (2.9) and (2.10), the solution of problem (1.1) is represented as

$$ u(r,t) = \sum_{n = 1}^{\infty}{{P_{n}}(t){Q_{n}}(r)} = \sum_{n = 1}^{\infty}{{a_{n}} {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}t}} {J_{0}} \biggl( { \frac {{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} . $$

At \(t=T\), we have

$$ u(r,T) = \sum_{n = 1}^{\infty}{{a_{n}} {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}T}} {J_{0}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}. $$

At the same time,

$$ u(r,T) = g(r) = \sum_{n = 1}^{\infty}{{g_{n}} {J_{0}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}, $$

where

$$\begin{aligned} {g_{n}} &= \frac{{ \langle{g(r),{J_{0}} ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}r} )} \rangle}}{{{{ \Vert {{J_{0}} ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} )} \Vert }^{2}}}} = \frac{{\int _{0}^{{r_{0}}} {rg(r){J_{0}} ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}r} )\,dr} }}{{\int _{0}^{{r_{0}}} {rJ_{0}^{2} ( {\frac{{{\mu_{n}}r}}{{{r_{0}}}}} )\,dr} }} \\ &= \frac{2}{{r_{0}^{2}J_{1}^{2}({\mu_{n}})}} \int _{0}^{{r_{0}}} rg(r){J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)\,dr, \quad n = 1,2,3,\ldots. \end{aligned}$$

It follows that

$$ {a_{n}} = {g_{n}} {e^{{{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}T}}. $$

Then a solution of problem (1.1) is

$$ u(r,t) = \sum_{n = 1}^{\infty}{g_{n}} {e^{{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T - t)}} {J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr). $$
(2.11)

It is clear that the solution \(u(r,t)\) belongs to \(L^{2}(0,r_{0};r)\). Therefore, it is the unique solution of problem (1.1). □

In view of (2.11), the factor \(e^{ ( {\mu_{n}}/{r_{0}} )^{2}(T - t)}\) grows exponentially in n; this exponential growth causes instability in the solution, i.e., the problem is ill-posed and a regularization method is indispensable.
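
The representation (2.1) translates directly into code. The following sketch (ours; the truncation level N and the helper names are illustrative assumptions) computes the coefficients \(g_{n}\) by quadrature and evaluates the truncated series; for small t the exponential factor makes the evaluation blow up numerically, exactly as just described.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

r0, T, N = 2.0, 1.0, 20        # illustrative radius, final time, truncation
mu = jn_zeros(0, N)            # positive roots of J_0

def fourier_bessel_coeffs(g):
    """g_n = 2 / (r0^2 J_1(mu_n)^2) * int_0^{r0} r g(r) J_0(mu_n r / r0) dr."""
    gn = np.empty(N)
    for n in range(N):
        integral, _ = quad(lambda r: r * g(r) * j0(mu[n] * r / r0), 0.0, r0)
        gn[n] = 2.0 * integral / (r0 ** 2 * j1(mu[n]) ** 2)
    return gn

def u_backward(r, t, gn):
    """Truncated series (2.1); the factor exp((mu_n / r0)^2 (T - t)) explodes
    for large n and small t, which is the instability in question."""
    k2 = (mu / r0) ** 2
    return float(np.sum(gn * np.exp(k2 * (T - t)) * j0(mu * r / r0)))
```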

3 Regularization and error estimates

In this paper we consider the following regularized problem:

$$ \left \{ \textstyle\begin{array}{l} \frac{{\partial{u^{\varepsilon,\tau} }}}{{\partial t}}(r,t) - \frac {{{\partial^{2}}{u^{\varepsilon,\tau} }}}{{\partial{r^{2}}}}(r,t) - \frac{1}{r}\frac{{\partial{u^{\varepsilon,\tau} }}}{{\partial r}}(r,t) = 0, \\ {u^{\varepsilon,\tau} }({r_{0}},t) = 0, \quad 0 \le t \le T, \\ {u^{\varepsilon,\tau} }(r,t) \quad \mbox{bounded as }r \to0, \\ {u^{\varepsilon,\tau} }(r,T) = \sum_{n = 1}^{\infty}{{g_{n}} ( {\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}{{\varepsilon{{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}} ){J_{0}} ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}r} )} , \quad 0 \le r \le {r_{0}}. \end{array}\displaystyle \right . $$
(3.1)

We begin with the following lemma.

Lemma 1

Let \(0 \le t \le T\), \(\varepsilon \in (0,T+\tau)\), \(\tau\ge 0\) and \(x>0\). Then the following inequality holds:

$$ \frac{{{e^{ - (t + \tau)x}}}}{{\varepsilon x + {e^{ - (T + \tau )x}}}} \le{\varepsilon^{\frac{{t - T}}{{T + \tau}}}} { \biggl( { \frac {{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}}. $$
(3.2)

Proof

The proof adapts an idea from [14]. For any \(\varepsilon \in (0,T)\), \(x>0\), and \(T>0\), the function

$$ w(x) = \frac{1}{{\varepsilon x + {e^{ - Tx}}}} $$

attains its maximum at \(x = \frac{{\ln(T/\varepsilon)}}{T}\). Therefore,

$$ w(x) = \frac{1}{{\varepsilon x + {e^{ - Tx}}}} \le w \biggl( {\frac {{\ln(T/\varepsilon)}}{T}} \biggr) = \frac{T}{{\varepsilon ( {1 + \ln(T/\varepsilon)} )}}. $$
(3.3)

Then, for any nonnegative number τ, applying (3.3) with T replaced by \(T+\tau\) leads to the following estimate:

$$\begin{aligned} \frac{{{e^{ - (t + \tau)x}}}}{{\varepsilon x + {e^{ - (T + \tau )x}}}} =& \frac{{{e^{ - (t + \tau)x}}}}{{{{ ( {\varepsilon x + {e^{ - (T + \tau)x}}} )}^{\frac{{T - t}}{{T + \tau}}}}{{ ( {\varepsilon x + {e^{ - (T + \tau)x}}} )}^{\frac{{t + \tau }}{{T + \tau}}}}}} \\ \leqslant&\frac{{{e^{ - (t + \tau)x}}}}{{{e^{ - (t + \tau )x}}}}\frac{1}{{{{ ( {\varepsilon x + {e^{ - (T + \tau)x}}} )}^{\frac{{T - t}}{{T + \tau}}}}}} \\ \leqslant&{\varepsilon^{\frac{{t - T}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}}. \end{aligned}$$

The proof is completed. □
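
Inequality (3.2) can be sanity-checked numerically on a grid of x values. A short sketch (ours), using parameter values of the kind employed in Section 4:

```python
import numpy as np

def lhs(x, t, T, tau, eps):
    return np.exp(-(t + tau) * x) / (eps * x + np.exp(-(T + tau) * x))

def rhs(t, T, tau, eps):
    p = (T - t) / (T + tau)   # exponent appearing on both factors in (3.2)
    return eps ** (-p) * ((T + tau) / (1.0 + np.log((T + tau) / eps))) ** p

T, tau, eps = 1.0, 0.3, 1e-3
x = np.linspace(1e-4, 200.0, 200001)
for t in (0.0, 0.5, 1.0):
    assert lhs(x, t, T, tau, eps).max() <= rhs(t, T, tau, eps) * (1 + 1e-12)
print("inequality (3.2) holds at all sampled points")
```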

The following theorem shows the well-posedness of the regularized problem (3.1).

Theorem 2

Let \(g(r) \in{L^{2}}(0,r_{0};{r})\), \(\tau \ge 0\), and \(\varepsilon \in(0,T + \tau)\) be given. Then the regularized problem (3.1) has a unique solution \({u^{\varepsilon,\tau}} \in{C^{2,1}} ( {(0,{r_{0}}) \times (0,T)}; L^{2}(0,r_{0};r) )\), represented by

$$ {u^{\varepsilon,\tau} }(r,t) = \sum_{n = 1}^{\infty}{{g_{n}} \biggl( {\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{{\varepsilon{{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}} \biggr){J_{0}} \biggl( {\frac {{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} . $$
(3.4)

The solution depends continuously on g in \(C ( {[0,T];{L^{2}}(0,r_{0};{r})} )\).

Proof

The proof is divided into two steps. In Step 1, we prove the existence and uniqueness of the solution of the regularized problem (3.1). In Step 2, the stability of the solution is established.

Step 1. We verify that the function \(u^{\varepsilon,\tau}\) given by (3.4) is a solution of the regularized problem (3.1). We have

$${u^{\varepsilon,\tau} }(r,t) = \sum_{n = 1}^{\infty}{{g_{n}} \biggl( {\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{ {\varepsilon{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau )}}}}} \biggr) {J_{0}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}. $$

It follows that

$$\begin{aligned} \frac{{\partial{u^{\varepsilon,\tau} }}}{{\partial t}}(r,t) = & - \sum_{n = 1}^{\infty}{{ \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} \biggr)}^{2}} {g_{n}}\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{{\varepsilon{{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}{J_{0}} \biggl( { \frac{{{\mu _{n}}}}{{{r_{0}}}}r} \biggr) \\ =& \sum_{n = 1}^{\infty}{g_{n}} \frac{{{e^{ - {{ ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{{\varepsilon{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}} \\ &{}\times\biggl( {\frac{{{\mu _{n}}}}{{r{r_{0}}}}{J_{1}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr) - {{ \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} \biggr)}^{2}} {J_{0}} \biggl( {\frac {{{\mu_{n}}}}{{{r_{0}}}}r} \biggr) - \frac{{{\mu _{n}}}}{{r{r_{0}}}}{J_{1}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} \biggr) \\ =&\frac{{{\partial^{2}}{u^{\varepsilon,\tau} }}}{{\partial {r^{2}}}}(r,t) + \frac{1}{r}\frac{{\partial{u^{\varepsilon,\tau} }}}{{\partial r}}(r,t). \end{aligned}$$
(3.5)

On the other hand, for all \(0< r\le r_{0}\), one has

$$ {u^{\varepsilon,\tau} }(r,T) = \sum_{n = 1}^{\infty}{{g_{n}} \biggl( {\frac{{{e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}{{\varepsilon{{ ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}} \biggr){J_{0}} \biggl( {\frac {{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}. $$
(3.6)

It is also clear that \(u^{\varepsilon,\tau} \) belongs to \({{L^{2}}(0,r_{0};{r})}\). Hence, \(u^{\varepsilon,\tau} \) is the unique solution of the regularized problem (3.1).

Step 2. Let \(v^{\varepsilon,\tau} \) and \(w^{\varepsilon,\tau}\) be two solutions of the regularized problem (3.1), corresponding to the data g and h, respectively. Then the following representations hold:

$$\begin{aligned}& {v^{\varepsilon,\tau} }(r,t) = \sum_{n = 1}^{\infty}{{g_{n}}\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{ {\varepsilon{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}} {J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}, \\& {w^{\varepsilon,\tau} }(r,t) = \sum_{n = 1}^{\infty}{h_{n}\frac{{{e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{ {\varepsilon{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}} {J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}. \end{aligned}$$

We obtain

$$\begin{aligned} \bigl\Vert {{v^{\varepsilon,\tau} }(\cdot,t) - {w^{\varepsilon,\tau} }(\cdot,t)} \bigr\Vert =& \Biggl\Vert {\sum_{n = 1}^{\infty}{ ({{g_{n}} - h_{n}}) \biggl( {\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{{\varepsilon {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}} \biggr){J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} } \Biggr\Vert \\ \le& \sup_{n \ge1} \biggl( {\frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t+\tau)}}}}{{\varepsilon{{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu _{n}}}}{{{r_{0}}}}} )}^{2}}(T+\tau)}}}}} \biggr)\Biggl\Vert {\sum _{n = 1}^{\infty}{ ({{g_{n}} - h_{n}}) {J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} } \Biggr\Vert . \end{aligned}$$

Applying Lemma 1 with \(x = ( {\mu_{n}}/{r_{0}} )^{2}\), we get

$$ \bigl\Vert {{v^{\varepsilon,\tau} }(\cdot,t) - {w^{\varepsilon,\tau}}(\cdot,t)} \bigr\Vert \le{\varepsilon^{\frac{{t - T}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau }}}}\Vert {g - h} \Vert . $$

The proof is completed. □
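
Computationally, the representation (3.4) amounts to multiplying each Fourier–Bessel mode of g by a two-parameter filter that remains bounded in n, in contrast to the exact backward factor in (2.11). A sketch (ours; the truncation and names are illustrative, and the coefficients \(g_{n}\) can be computed as in the Section 2 sketch):

```python
import numpy as np
from scipy.special import j0, jn_zeros

r0, T, N = 2.0, 1.0, 20
mu = jn_zeros(0, N)

def u_mqbv(r, t, gn, eps, tau):
    """Evaluate (3.4): mode n of g is multiplied by the MQBV filter
    exp(-k_n^2 (t + tau)) / (eps k_n^2 + exp(-k_n^2 (T + tau))), k_n = mu_n / r0,
    which stays bounded as n grows, unlike the exact backward factor."""
    k2 = (mu / r0) ** 2
    filt = np.exp(-k2 * (t + tau)) / (eps * k2 + np.exp(-k2 * (T + tau)))
    return float(np.sum(gn * filt * j0(mu * r / r0)))
```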

We have thus shown that the regularized problem is well-posed in the sense of Hadamard. In the following, we establish an error estimate between the exact solution and the regularized solution.

Let \(\{\mu_{n}\}\) be the sequence of roots of the Bessel function \(J_{0}(x)\). We define

$$ {\bigl\Vert {u(\cdot,0)} \bigr\Vert } = \Biggl\Vert {\sum _{n = 1}^{\infty}{{{ \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} \biggr)}^{2}} {e^{{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}\tau }} {u_{n}}(0){J_{0}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} } \Biggr\Vert , $$
(3.7)

where

$$ {u_{n}}(0) = \frac{2}{{r_{0}^{2}J_{1}^{2}({\mu_{n}})}} \int _{0}^{{r_{0}}} ru(r,0){J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)\,dr, \quad n = 1,2,3,\ldots. $$
(3.8)

Theorem 3

The error estimate in the case of exact data

Let \(g(r) \in{L^{2}}(0,r_{0};{r})\), \(\tau\ge0\), and \(\varepsilon \in(0,T + \tau)\) be given. Suppose that problem (1.1) has a unique solution u such that \({\Vert {u(\cdot,0)} \Vert } \le C\) in the sense of (3.7), where C is a positive constant. Then the following estimate holds for all \(t \in[0,T)\):

$$ \bigl\Vert {u(\cdot,t) - {u^{\varepsilon,\tau}}(\cdot,t)} \bigr\Vert \le C{\varepsilon^{\frac{{t + \tau}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}}, $$
(3.9)

where \(u^{\varepsilon,\tau}\) is the unique solution of (3.1).

Proof

Suppose that problem (1.1) has a unique solution u. Then u is represented by

$$ u(r,t) = \sum_{n = 1}^{\infty}{{g_{n}} {e^{{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T - t)}} {J_{0}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)}. $$

Since \({u_{n}}(0) = {g_{n}}{e^{{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}T}}\), combining this with (3.4) we get

$$\begin{aligned} \bigl\Vert {u(r,t) - {u^{\varepsilon,\tau} }(r,t)} \bigr\Vert ={}& \Biggl\Vert { \sum_{n = 1}^{\infty}{{g_{n}} \biggl( {{e^{{{ ( {\frac {{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T - t)}} - \frac{{{e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(t + \tau)}}}}{{\varepsilon {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T + \tau)}}}}} \biggr){J_{0}} \biggl( { \frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} } \Biggr\Vert \\ \le{}& {\varepsilon^{\frac{{t + \tau}}{{T + \tau}}}} { \biggl( {\frac {{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}} \\ &{}\times\Biggl\Vert {\sum_{n = 1}^{\infty}{{g_{n}} {{ \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} \biggr)}^{2}} {e^{{{ ( {\frac{{{\mu_{n}}}}{{{r_{0}}}}} )}^{2}}(T + \tau )}} {J_{0}} \biggl( {\frac{{{\mu_{n}}}}{{{r_{0}}}}r} \biggr)} } \Biggr\Vert \\ \le{}& C{\varepsilon^{\frac{{t + \tau}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}}. \end{aligned}$$

 □

Theorem 4

Error estimates in the case of non-exact data

Let \(\tau\ge0\) and \(\varepsilon \in(0,T + \tau)\) be given. Let u be the unique solution of problem (1.1) corresponding to the data g. Suppose that \(g^{\varepsilon}\) is measured data satisfying

$$\bigl\Vert {g - {g^{\varepsilon}}} \bigr\Vert \le \varepsilon. $$

Then there exists an approximate solution \(U^{\varepsilon,\tau}\), which corresponds to the noisy data \(g^{\varepsilon}\), satisfying

$$ \bigl\Vert {{U^{\varepsilon,\tau}}(\cdot,t) - u(\cdot,t)} \bigr\Vert \le D{\varepsilon^{\frac{{t + \tau}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}} $$
(3.10)

for every \(t \in[0,T]\), where C is as in Theorem 3 and \(D = 1+C\).

Proof

Let \(U^{\varepsilon,\tau}\) be the solution of the regularized problem (3.1) corresponding to the data \(g^{\varepsilon}\), and let \(u^{\varepsilon,\tau}\) be the solution of (3.1) corresponding to the data g. Let \(u(r,t)\) be the exact solution. By the triangle inequality, one has

$$\bigl\Vert {{U^{\varepsilon,\tau} }(r,t) - u(r,t)} \bigr\Vert \le \bigl\Vert {{U^{\varepsilon,\tau} }(r,t) - {u^{\varepsilon,\tau} }(r,t)} \bigr\Vert + \bigl\Vert {u(r,t) - {u^{\varepsilon,\tau} }(r,t)} \bigr\Vert . $$

Combining the results from Theorem 2 and Theorem 3, for every \(t \in [0,T]\), we get

$$\begin{aligned} \bigl\Vert {{U^{\varepsilon,\tau}}(r,t) - u(r,t)} \bigr\Vert \le& { \varepsilon^{\frac{{t + \tau}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}} \\ &{}+ C{\varepsilon^{\frac{{t + \tau }}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau }}}} \\ \le& D{\varepsilon^{\frac{{t + \tau}}{{T + \tau}}}} { \biggl( {\frac{{T + \tau}}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{{T - t}}{{T + \tau}}}}. \end{aligned}$$

The proof is completed. □

Remark 1

At the initial time \(t=0\), the error estimate is

$${\varepsilon^{\frac{\tau}{{T + \tau}}}} { \biggl( {\frac{{T + \tau }}{{ ( {1 + \ln((T + \tau)/\varepsilon)} )}}} \biggr)^{\frac{T}{{T + \tau}}}}, $$

which improves on the result in Theorem 3.3 of [18]. However, this improved estimate comes at the cost of a stronger a priori assumption, which is a weak point of this paper. In future work, the authors will try to weaken the a priori assumption while keeping the same order of error estimate.

4 A numerical illustration

In this section, we present numerical tests to illustrate the theoretical results of the previous section. Consider the following problem:

$$ \left \{ \textstyle\begin{array}{l} \frac{{\partial u}}{{\partial t}} = \frac{{{\partial^{2}}u}}{{\partial {r^{2}}}} + \frac{1}{r}\frac{{\partial u}}{{\partial r}}, \quad 0 < r < {r_{0}}, 0 < t < T, \\ u({r_{0}},t) = 0, \quad 0 \le t \le T, \\ u(r,t) \quad \mbox{bounded as }r \to0, \\ u(r,T) = g^{ex}(r), \quad 0 \le r \le{r_{0}}, \end{array}\displaystyle \right . $$
(4.1)

where the exact data is given by

$$ {g^{ex}}(r) = e^{T}{J_{0}} \biggl( { \frac {{{\mu_{1}}}}{{{r_{0}}}}r} \biggr). $$

From (2.1), the exact solution is

$$ {u^{ex}}(r,t) = e^{T}{e^{{{ ( {\frac {{{\mu_{1}}}}{{{r_{0}}}}} )}^{2}}(T - t)}} {J_{0}} \biggl( {\frac{{{\mu _{1}}}}{{{r_{0}}}}r} \biggr). $$
(4.2)

Due to errors in the measurement process, the measured data is noisy and given by

$$ {g^{\varepsilon}}(r) = e^{T}{J_{0}} \biggl( { \frac{{{\mu_{1}}}}{{{r_{0}}}}r} \biggr) + \sum_{p = 1}^{{P_{0}}} {\varepsilon{a_{p}} {J_{0}} \biggl( {\frac{{{\mu_{p}}}}{{{r_{0}}}}r} \biggr)}, $$
(4.3)

where \(P_{0}\) is a random natural number and \(a_{p}\) is a finite sequence of random numbers drawn from a normal distribution with mean 0 and variance \(A^{2}\), so that the error in the measurement process is bounded by ε: \(\Vert {{g^{\varepsilon}} - g} \Vert \le \varepsilon\). The regularized solution corresponding to the data \(g^{\varepsilon}\), obtained from (3.4), is

$$\begin{aligned} {u^{\varepsilon,\tau}}(r,t) =& \frac{{{e^{T - {{ ( {\frac{{{\mu_{1}}}}{{{r_{0}}}}} )}^{2}}(t + \tau )}}}}{{\varepsilon{{ ( {\frac{{{\mu_{1}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{1}}}}{{{r_{0}}}}} )}^{2}}(T + \tau )}}}}{J_{0}} \biggl( { \frac{{{\mu_{1}}}}{{{r_{0}}}}r} \biggr) \\ &{}+ \sum_{p = 1}^{{P_{0}}} {{a_{p}} \biggl( {\frac{{\varepsilon{e^{ - {{ ( {\frac{{{\mu_{p}}}}{{{r_{0}}}}} )}^{2}}(t + \tau )}}}}{{\varepsilon{{ ( {\frac{{{\mu_{p}}}}{{{r_{0}}}}} )}^{2}} + {e^{ - {{ ( {\frac{{{\mu_{p}}}}{{{r_{0}}}}} )}^{2}}(T + \tau )}}}}} \biggr){J_{0}} \biggl( {\frac{{{\mu_{p}}}}{{{r_{0}}}}r} \biggr)} . \end{aligned}$$
(4.4)
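
A sketch of this experiment (ours). Note that raw normal draws \(a_{p}\) need not satisfy \(\Vert g^{\varepsilon}-g \Vert \le\varepsilon \) by themselves, so the snippet rescales them explicitly, using the identity \(\Vert J_{0}(\mu_{p} r/r_{0})\Vert ^{2} = (r_{0}^{2}/2)J_{1}^{2}(\mu_{p})\) together with orthogonality:

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

rng = np.random.default_rng(0)          # fixed seed for reproducibility
r0, T, P0, A = 2.0, 1.0, 1000, 10.0     # A^2 = 100, as in the text
mu = jn_zeros(0, P0)

a = rng.normal(0.0, A, size=P0)         # raw coefficients a_p
# By orthogonality, ||sum_p a_p J0(mu_p r/r0)||^2 = (r0^2/2) sum_p a_p^2 J1(mu_p)^2.
pert_norm = np.sqrt(0.5 * r0 ** 2 * np.sum(a ** 2 * j1(mu) ** 2))
a /= pert_norm                          # now ||g_eps - g|| = eps exactly

def g_exact(r):
    return np.exp(T) * j0(mu[0] * r / r0)

def g_noisy(r, eps):
    return g_exact(r) + eps * np.sum(a * j0(mu * r / r0))
```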

For each point in time, we evaluate the ‘relative error’ between the exact solution and the regularized solution, defined by

$$ \operatorname{RE}(\varepsilon,t) = \frac{{\Vert {{u^{\varepsilon,\tau}}(\cdot,t) - {u^{ex}}(\cdot,t)} \Vert }}{{\Vert {{u^{ex}}(\cdot,t)} \Vert }}. $$
(4.5)

The relative error captures the discrepancy between the exact and approximate solutions better than the absolute error does: when the exact solution takes large values, the absolute difference alone says little about the goodness of fit, whereas the relative error measures the quality of the approximation independently of scale.
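
In code, (4.5) is a ratio of two weighted \(L^{2}\) quadratures. A sketch (ours; `u_re` and `u_ex` stand for any regularized and exact solution callables):

```python
import numpy as np
from scipy.integrate import quad

def relative_error(u_re, u_ex, t, r0=2.0):
    """RE(eps, t) = ||u_re(., t) - u_ex(., t)|| / ||u_ex(., t)|| in L^2(0, r0; r)."""
    num, _ = quad(lambda r: r * (u_re(r, t) - u_ex(r, t)) ** 2, 0.0, r0)
    den, _ = quad(lambda r: r * u_ex(r, t) ** 2, 0.0, r0)
    return np.sqrt(num / den)
```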

Fix \(T=1\), \(r_{0}=2\), \(P_{0} = 1\text{,}000\), and \(A^{2} = 100\). We consider two situations.

Situation 1: Here we focus on the regularization parameter ε. Fix \(\tau=0.3\) and let \({\varepsilon_{1}} = 10^{-1} \), \({\varepsilon_{2}} = 10^{-2}\), \({\varepsilon_{3}} = 10^{-3}\). Figures 1 and 2 show the exact solution and the regularized solutions for these values of ε.

Figure 1
The exact solution (a) and regularized solution with \(\pmb{\varepsilon_{1}=10^{-1}}\) , \(\pmb{\tau=0.3}\) (b).

Figure 2
The approximate solution with \(\pmb{\tau=0.3}\) and \(\pmb{\varepsilon_{2}=10^{-2}}\) (a) and \(\pmb{\varepsilon_{3}=10^{-3}}\) (b).

At the initial time \(t=0\), the solutions are compared in Figure 3.

Figure 3
At time \(\pmb{t=0}\) : exact solution (black) and regularized solution with \(\pmb{\tau=0.3}\) and \(\pmb{\varepsilon_{1}=10^{-3}}\) (blue), \(\pmb{\varepsilon_{2}=10^{-5}}\) (green), \(\pmb{\varepsilon_{3}=10^{-7}}\) (red).

In addition to the figures, Table 1 summarizes the error and the relative error of the estimates.

Table 1 The error and relative error of method in this paper with \(\pmb{\tau=0.3}\) and various values of  ε

Remark 2

Figures 1, 2, 3 and Table 1 show clearly that, as the measurement error ε becomes smaller, the regularized solution gets closer to the exact one. Note that the values of \(a_{p}\) range from −307.896 to 290.9509 in this situation.

Situation 2: In this situation, we focus on the regularization parameter τ. Fix \(\varepsilon= 10^{-3}\) and let \(\tau_{1}=0\), \(\tau_{2}=1\) (see Figure 4). At the initial time \(t=0\), the solutions are compared in Figure 5.

Figure 4
The regularized solution with \(\pmb{\varepsilon= 10^{-3}}\) and \(\pmb{\tau_{1}=0}\) (a) and \(\pmb{\tau_{2}=1}\) (b).

Figure 5
At time \(\pmb{t=0}\) : exact solution (black) and regularized solution with \(\pmb{\varepsilon= 10^{-3}}\) and \(\pmb{\tau=0}\) (blue), \(\pmb{\tau=0.3}\) (green) and \(\pmb{\tau=1}\) (red).

Remark 3

Figures 4 and 5 agree with the theoretical results of Section 3: the regularized solution with a higher value of τ is closer to the exact one. The parameter τ is useful when a more accurate approximation is desired but the measurement process cannot be improved, or improving it is very expensive. In such cases, τ improves the error without any additional measurement cost (as can be seen in Figure 5). Note that the values of \(a_{p}\) range from −309.433 to 384.8187 in this situation.
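
Putting the earlier sketches together, the following illustrative driver (ours; it reuses the hypothetical objects `r0`, `T`, `mu`, `a` from the noisy-data sketch and `relative_error` from the sketch after (4.5)) reproduces the qualitative trend of Situation 2 by sweeping τ at the initial time with ε fixed:

```python
import numpy as np
from scipy.special import j0

eps = 1e-3
gn = eps * a.copy()      # Fourier-Bessel coefficients of g_eps: eps * a_p ...
gn[0] += np.exp(T)       # ... plus e^T on the first mode, as in (4.3)

def u_re(r, t, tau):
    """Regularized solution (4.4), evaluated mode by mode."""
    k2 = (mu / r0) ** 2
    filt = np.exp(-k2 * (t + tau)) / (eps * k2 + np.exp(-k2 * (T + tau)))
    return float(np.sum(gn * filt * j0(mu * r / r0)))

def u_ex(r, t):
    """Exact solution (4.2)."""
    return np.exp(T) * np.exp((mu[0] / r0) ** 2 * (T - t)) * j0(mu[0] * r / r0)

# Quadrature over 1,000 oscillatory modes is slow; reduce P0 for a quick run.
for tau in (0.0, 0.3, 1.0):
    re0 = relative_error(lambda r, t: u_re(r, t, tau), u_ex, 0.0)
    print(f"tau = {tau:3.1f}:  RE(eps, 0) = {re0:.4f}")
```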

References

  1. Tikhonov, AN, Arsenin, VY: Solutions of Ill-Posed Problems. V.H. Winston, Washington (1977)

  2. Glasko, VB: Inverse Problems of Mathematical Physics. AIP, New York (1984)

  3. Showalter, RE: The final value problem for evolution equations. J. Math. Anal. Appl. 47, 563-572 (1974)

  4. Showalter, RE: Quasi-reversibility of first and second order parabolic evolution equations. In: Improperly Posed Boundary Value Problems, pp. 76-84. Pitman, London (1975)

  5. Clark, GW, Oppenheimer, SF: Quasireversibility methods for non-well posed problems. Electron. J. Differ. Equ. 1994, 8 (1994)

  6. Fu, CL, Qian, Z, Shi, R: A modified method for a backward heat conduction problem. Appl. Math. Comput. 185, 564-573 (2007)

  7. Fu, CL, Xiong, XT, Qian, Z: Fourier regularization for a backward heat equation. J. Math. Anal. Appl. 331, 472-480 (2007)

  8. Hao, DN, Duc, NV: Stability results for the heat equation backward in time. J. Math. Anal. Appl. 353, 627-641 (2009)

  9. Muniz, BW: A comparison of some inverse methods for estimating the initial condition of the heat equation. J. Comput. Appl. Math. 103, 145-163 (1999)

  10. Tautenhahn, U: Optimality for ill-posed problems under general source conditions. Numer. Funct. Anal. Optim. 19, 377-398 (1998)

  11. Abramowitz, M, Stegun, IA: Handbook of Mathematical Functions. Dover, New York (1972)

  12. Trong, DD, Tuan, NH: Regularization and error estimate for the nonlinear backward heat problem using a method of integral equation. Nonlinear Anal. 71, 4167-4176 (2009)

  13. Trong, DD, Quan, PH, Khanh, TV, Tuan, NH: A nonlinear case of the 1-D backward heat problem: regularization and error estimate. Z. Anal. Anwend. 26, 231-245 (2007)

  14. Trong, DD, Tuan, NH: Regularization and error estimate for the nonlinear backward heat problem using a method of integral equation. Nonlinear Anal., Theory Methods Appl. 71, 4167-4176 (2009)

  15. Tuan, NH, Hoa, NV: Determination temperature of a backward heat equation with time-dependent coefficients. Math. Slovaca 62, 937-948 (2012)

  16. Hao, DN, Duc, NV, Lesnic, D: Regularization of parabolic equations backward in time by a non-local boundary value problem method. IMA J. Appl. Math. 75, 291-315 (2010)

  17. Hao, DN, Duc, NV: Regularization of backward parabolic equations in Banach spaces. J. Inverse Ill-Posed Probl. 20, 745-763 (2012)

  18. Cheng, W, Fu, CL: A spectral method for an axisymmetric backward heat equation. Inverse Probl. Sci. Eng. 17, 1081-1093 (2009)

  19. Cheng, W, Fu, CL, Qin, FJ: Regularization and error estimate for a spherically symmetric backward heat equation. J. Inverse Ill-Posed Probl. 19, 369-377 (2011)

  20. Denche, M, Bessila, K: A modified quasi-boundary value method for ill-posed problems. J. Math. Anal. Appl. 301, 419-426 (2005)

  21. Evans, LC: Partial Differential Equations. Am. Math. Soc., Providence (1998)

  22. Abramowitz, M, Stegun, IA: Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington (1964)


Acknowledgements

The authors would like to thank Professor Dang Duc Trong and Associate Professor Nguyen Huy Tuan for the great support during their undergraduate period.

Author information

Correspondence to Ngo Van Hoa.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final version of the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Hoa, N.V., Khanh, T.Q. Two-parameter regularization method for an axisymmetric inverse heat problem. Bound Value Probl 2017, 25 (2017). https://doi.org/10.1186/s13661-017-0750-8
