# On an inverse problem in the parabolic equation arising from groundwater pollution problem

## Abstract

In this paper, we consider an inverse problem of determining a source term in a parabolic equation from measured data given at a later time. In general, this problem is ill-posed; therefore the Tikhonov regularization method, with both a priori and a posteriori parameter choice rules, is proposed to solve it. On the theoretical side, an error estimate between the exact solution and its regularized solution is obtained. Numerical experiments have been carried out to estimate the errors between the regularized solutions and the exact solution. The numerical results show that the a posteriori parameter choice rule yields a better convergence speed than the a priori parameter choice rule.

## 1 Introduction

Groundwater is crucial to human beings, the environment and the economy, because a large portion of drinking water comes from groundwater, and it is extracted for commercial, industrial and irrigation uses. Groundwater also sustains stream flow during dry periods and plays an important role in the function of streams, wetlands and other aquatic environments. Therefore, protecting the safety and security of groundwater is essential for the environment and communities. In recent years, mathematical models have become an efficient tool to study groundwater systems, whereby there are two notable approaches to groundwater modeling, namely the forward and backward approaches. The former predicts unknown quantities at a later time from previously given conditions by solving appropriate governing equations, while the latter determines unknown physical parameters which could not be observed at a previous time. Most groundwater models are distributed parameter models, where the parameters used in the modeling equations are not directly obtained from physical observations, but from trial-and-error and graphical fitting techniques. If large errors are present in the mathematical model structure, model parameters, sink/source terms or boundary conditions, the model cannot produce accurate results. To deal with this issue, the inverse problem of parameter identification has been applied. In groundwater applications, such as finding a previous pollution source intensity from observations of the pollutant concentration at a later time, or designing the final state of melting and freezing processes, it is necessary to reconstruct a pollution/heat source at any given time from the final state data. The groundwater inverse problem has been studied since the mid-1970s by McLaughlin (1975), Yeh (1986), Kuiper (1986), Carrera (1987), Ginn and Cushman (1990) and Sun (1994), etc. (see [1–6]).
Some remarkable results in this research area are due to McLaughlin and Townley (1996) [7] and Poeter and Hill (1997) [8]. Taking into account solute diffusion, the flow and the self-purifying function of the watershed system, the concentration of pollution $$u(x,t)$$ at any time in a watershed is described by the following one-dimensional linear parabolic equation:

$$\frac{\partial u}{\partial t}-\eta\frac{\partial^{2}u}{\partial x^{2}}+\nu\frac{\partial u}{\partial x}+\gamma u=P (x,t ), \quad x\in\Omega,t>0,$$
(1.1)

where $$\Omega\subset\mathbb{R}$$ is the spatial domain under study, η is the diffusion coefficient, ν is the mean velocity of water in the watershed, γ is the self-purifying function of the watershed, and $$P (x,t )$$ is the source term causing the pollution function $$u (x,t )$$. By setting

$$w (x,t )=u (x,t )\exp \biggl( \biggl(\frac{\nu^{2}}{4\eta}+ \gamma \biggr)t-\frac{\nu}{2\eta}x \biggr)$$

and

$$F (x,t )=P (x,t )\exp \biggl( \biggl(\frac {\nu^{2}}{4\eta}+\gamma \biggr)t- \frac{\nu}{2\eta}x \biggr),$$

equation (1.1) becomes

$$\frac{\partial w}{\partial t}-\eta\frac{\partial^{2}w}{\partial x^{2}}=F (x,t ).$$

This equation is the well-known parabolic heat equation; it has been investigated for heat sources depending either on time only [9–11] or on space only [12–16]. There are few studies on the identification of a source term depending on both time and space, even in the case of a separable form of $$F(x,t)$$, i.e., $$F(x,t)=\varphi(t)f(x)$$, where $$\varphi (t )$$ is a given function. For instance, Hasanov [17] identified the heat source in the separable form $$F(x,t)=F(x)H(t)$$ for the heat conduction equation $$u_{t}=(k(x)u_{x})_{x}+F(x)H(t)$$ with a space-dependent coefficient, using a variational method. However, there are still limited results in the case where the diffusion coefficient η depends on time.
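The change of variables above can be checked symbolically. The following sketch (assuming `sympy` is available) applies the heat operator to $w = uE$ with the common exponential factor $E$ shared by $w$ and $F$, and verifies that it reproduces the left-hand side of (1.1):

```python
import sympy as sp

x, t = sp.symbols('x t')
eta, nu, gamma = sp.symbols('eta nu gamma', positive=True)
u = sp.Function('u')(x, t)

# the common exponential factor shared by w and F
E = sp.exp((nu**2/(4*eta) + gamma)*t - nu*x/(2*eta))
w = u*E

# (heat operator applied to w) / E should equal the original operator on u
heat_w = sp.diff(w, t) - eta*sp.diff(w, x, 2)
P_expr = sp.diff(u, t) - eta*sp.diff(u, x, 2) + nu*sp.diff(u, x) + gamma*u

residual = sp.simplify(heat_w/E - P_expr)
print(residual)  # 0: the substitution reduces (1.1) to w_t - eta*w_xx = F
```

The vanishing residual confirms that $w_{t}-\eta w_{xx}=F(x,t)$ holds for an arbitrary smooth $u$.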

In this study, we consider the equation for groundwater pollution as follows:

$$\frac{\partial u}{\partial t}-\frac{\partial}{\partial x} \biggl(a (t )\frac{\partial u}{\partial x} \biggr)= \varphi (t )f (x ),\quad (x,t )\in (0,\pi )\times (0,T ),$$
(1.2)

with initial and final conditions

$$u (x,0 )=0, \qquad u (x,T )=g (x ),\quad x\in (0,\pi ),$$
(1.3)

and boundary conditions

$$u (0,t )=u (\pi,t )=0.$$
(1.4)

Here, $$a (t )>0$$ is a temporal dependent diffusion coefficient, $$g (x )$$ and $$\varphi (t )$$ are given functions. An objective of this study is to determine the source term $$f (x )$$ from the noisy observed data set of $$\varphi (t )$$ and $$g (x )$$.

Let $$\Vert \cdot \Vert$$ and $$\langle\cdot,\cdot \rangle$$ denote the norm and the inner product in $$L^{2} (0,\pi )$$, respectively. We take an orthonormal basis of $$L^{2} (0,\pi )$$ satisfying the boundary condition (1.4); in particular, the basis functions $$\sqrt{\frac{2}{\pi}}\sin (nx )$$ for $$n\in\mathbb{N}$$ satisfy that condition. Then, by an elementary calculation, problem (1.2) under conditions (1.3) and (1.4) can be transformed into the following corresponding problem:

\begin{aligned}& \frac{d}{dt} \bigl\langle u (x,t ),\sin (nx ) \bigr\rangle +n^{2}a (t ) \bigl\langle u (x,t ),\sin (nx ) \bigr\rangle =\varphi (t ) \bigl\langle f (x ),\sin (nx ) \bigr\rangle ,\quad t\in (0,T ), \end{aligned}
(1.5)
\begin{aligned}& \bigl\langle u (x,0 ),\sin (nx ) \bigr\rangle =0,\qquad \bigl\langle u (x,T ), \sin (nx ) \bigr\rangle = \bigl\langle g (x ),\sin (nx ) \bigr\rangle . \end{aligned}
(1.6)

By setting $$A (t )=\int_{0}^{t}a (s )\, ds$$ and solving the ordinary differential equation (1.5) with conditions (1.6), we obtain

$$\bigl\langle f (x ),\sin (nx ) \bigr\rangle =e^{n^{2}A (T )} \biggl(\int _{0}^{T}e^{n^{2}A (t )}\varphi (t )\,dt \biggr)^{-1} \bigl\langle g (x ),\sin (nx ) \bigr\rangle ,$$
(1.7)

$$f (x )=\sum_{n=1}^{\infty}e^{n^{2}A (T )} \biggl(\int_{0}^{T}e^{n^{2}A (t )}\varphi (t ) \,dt \biggr)^{-1}g_{n}\sin (nx ),$$
(1.8)

where $$g_{n}=\frac{2}{\pi} \langle g (x ),\sin (nx ) \rangle$$.
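For concreteness, the Fourier multiplier in (1.8) can be tabulated for the hypothetical choices $a(t)=1$, $\varphi(t)=1$ and $T=1$, so that $A(t)=t$ and the multiplier reduces to $n^{2}/(1-e^{-n^{2}})$:

```python
import numpy as np

# Multiplier mapping g_n to f_n in (1.7)-(1.8) for the hypothetical choices
# a(t) = 1, phi(t) = 1, T = 1:
#   e^{n^2 T} / int_0^T e^{n^2 t} dt = n^2 / (1 - e^{-n^2 T}).
T = 1.0
n = np.arange(1, 6)
multiplier = n**2 / (1.0 - np.exp(-n**2 * T))

# the multiplier grows like n^2, so noise in g_n is amplified without bound
print(multiplier)
```

This unbounded growth is precisely why the reconstruction of $f$ from noisy data is unstable and requires regularization.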

Note that $$e^{n^{2}A (T )}$$ grows rather quickly as n becomes large. Thus, the exact data function $$g (x )$$ must be such that $$g_{n}$$ decays at least as fast as $$e^{n^{2}A (T )}$$ grows. However, in applications the input data $$g (x )$$ obtained from observations will never be exact, due to measurement errors. We assume that the observed data functions of $$g (x )$$ and $$\varphi (t )$$ are $$g_{\epsilon} (x )\in L^{2} (0,\pi )$$ and $$\varphi_{\epsilon} (t )\in L^{2} (0,T )$$, respectively, and that they satisfy

$$\Vert g_{\epsilon}-g\Vert \le\epsilon,\qquad \Vert \varphi_{\epsilon}-\varphi \Vert \le\epsilon,$$
(1.9)

where $$\epsilon>0$$ represents a noise from observations.

The aim of this paper is to determine a conditional stability and provide the revised generalized Tikhonov regularization method. In addition, the stability estimate between the regularized solution and the exact solution is obtained. For implementation of this method, we impose an a priori bound on the data

$$\Vert f\Vert _{H^{k} (0,\pi )}\le M,\quad k\ge0,$$
(1.10)

where $$M\ge0$$ is a constant, and $$\Vert \cdot \Vert _{H^{k} (0,\pi )}$$ denotes the norm in the Sobolev space $$H^{k} (0,\pi )$$ of order k, which can be naturally defined in terms of Fourier series whose coefficients decay rapidly; namely

$$H^{k} (0,\pi ):= \bigl\{ f\in L^{2} (0,\pi ):\Vert f \Vert _{H^{k} (0,\pi )}< \infty \bigr\} ,$$
(1.11)

equipped with the norm

$$\Vert f\Vert _{H^{k} (0,\pi )}=\sqrt{\sum _{n=1}^{\infty} \bigl(1+n^{2} \bigr)^{k}f_{n}^{2}},$$

where $$f_{n}= \langle f,X_{n} \rangle$$, with $$X_{n}=\sqrt{\frac{2}{\pi}}\sin (nx )$$, is the Fourier coefficient of f.
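As an illustration, the norm (1.11) can be evaluated by truncating the series; the coefficient sequence below is hypothetical, chosen only to show the effect of the weight $(1+n^{2})^{k}$:

```python
import numpy as np

def sobolev_norm(f_coeffs, k):
    """H^k(0, pi) norm from Fourier sine coefficients f_n = <f, X_n>,
    following (1.11): ||f||_{H^k} = sqrt(sum_n (1 + n^2)^k f_n^2)."""
    n = np.arange(1, len(f_coeffs) + 1)
    return np.sqrt(np.sum((1.0 + n**2)**k * np.asarray(f_coeffs)**2))

# hypothetical coefficient sequence f_n = 1/n^3 (decays fast enough for H^2)
f_coeffs = 1.0 / np.arange(1, 1000)**3
print(sobolev_norm(f_coeffs, 0))  # the L^2 norm (k = 0)
print(sobolev_norm(f_coeffs, 2))  # the stronger H^2 norm
```

Note that for $k=0$ the definition reduces, up to the weight $(1+n^{2})^{0}=1$, to the usual $L^{2}$ norm of the coefficient sequence.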

As a regularization method, the Tikhonov method has been used to solve ill-posed problems in a number of publications. However, most previous works focus on an a priori choice of the regularization parameter. There is usually a defect in any a priori method: the a priori choice of the regularization parameter depends on the a priori bound M of the unknown solution. In practice, the a priori bound M cannot be known exactly, and working with a wrong constant M may lead to a poor regularized solution. In this paper, we mainly consider an a posteriori choice of the regularization parameter. Using the discrepancy principle, we provide a new a posteriori parameter choice rule.

The outline of this paper is as follows. In Section 2, a conditional stability result is introduced. A Tikhonov regularization and its convergence under an a priori parameter choice rule are presented in Section 3. Similarly, another Tikhonov regularization and its convergence under an a posteriori parameter choice rule are described in Section 4. In Section 5, we present two numerical examples implemented with the proposed regularization methods; the numerical results are compared with the exact solutions.

## 2 Conditional stability

Let $$a,\varphi,\varphi_{\epsilon}: [0,T ]\to\mathbb{R}$$ be continuous functions. We suppose that there exist constants $$B_{1},B_{2},C_{1},C_{2},D_{1},D_{2}>0$$ such that

$$B_{1}\le\varphi (t )\le B_{2}, \qquad C_{1}\le \varphi _{\epsilon} (t )\le C_{2},\qquad D_{1}\le a (t ) \le D_{2}.$$
(2.1)

Hereafter, let us set

$$A (t )-A (T )=B (t ), \qquad \Phi (n,h )=\int_{0}^{T}e^{n^{2}B (t )}h (t )\,dt,$$
(2.2)

where h stands for either $$\varphi$$ or $$\varphi_{\epsilon}$$. Then we obtain the following conditional stability result.

### Lemma 2.1

For any continuous function h satisfying $$E_{1}\le h (t )\le E_{2}$$ on $$[0,T ]$$, we have

$$\bigl(\Phi (n,h ) \bigr)^{k}\le \begin{cases} E_{2}^{k}n^{-2k}D_{1}^{-k} (1-e^{-n^{2}D_{2}T} )^{k}, & k\ge 0, \\ E_{1}^{k}n^{-2k}D_{2}^{-k} (1-e^{-D_{1}T} )^{k}, & k< 0, \end{cases}$$
(2.3)

for $$n\in\mathbb{N}$$.

### Proof

The proof follows from elementary calculations. □
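Since the proof is omitted, the bounds in (2.3) can be sanity-checked numerically. The sketch below uses a hypothetical configuration ($a(t)=1.5$ with $D_{1}=1$, $D_{2}=2$; $h(t)=2$ with $E_{1}=1$, $E_{2}=3$; $T=1$) and verifies the $k\ge0$ and $k<0$ bounds at the level of $\Phi(n,h)$ itself:

```python
import numpy as np

# Numeric sanity check of Lemma 2.1 for a hypothetical configuration:
# a(t) = 1.5 (so D1 = 1 <= a <= 2 = D2), h(t) = 2 (so E1 = 1 <= h <= 3 = E2).
T, D1, D2, E1, E2 = 1.0, 1.0, 2.0, 1.0, 3.0
t = np.linspace(0.0, T, 4001)
B = 1.5*(t - T)                                    # B(t) = A(t) - A(T)

for n in range(1, 11):
    y = np.exp(n**2 * B) * 2.0                     # integrand of Phi(n, h)
    Phi = np.sum(0.5*(y[1:] + y[:-1])*np.diff(t))  # trapezoid rule
    upper = E2*(1.0 - np.exp(-n**2*D2*T))/(n**2*D1)   # bound used for k >= 0
    lower = E1*(1.0 - np.exp(-D1*T))/(n**2*D2)        # bound used for k < 0
    assert lower <= Phi <= upper
print("Lemma 2.1 bounds hold for n = 1..10")
```

Raising the two-sided bound on $\Phi(n,h)$ to the power $k$ (with the inequality reversing for $k<0$) recovers (2.3).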

### Theorem 2.1

If there exists $$M\ge0$$ such that $$\Vert f\Vert _{H^{k} (0,\pi )}\le M$$, then

$$\Vert f\Vert \le \biggl(\frac{D_{2}}{B_{1} (1-e^{-D_{1}T} )} \biggr)^{\frac{k}{k+2}}M^{\frac{2}{k+2}} \Vert g\Vert ^{\frac{k}{k+2}}.$$
(2.4)

### Proof

Using Hölder's inequality, we first have

\begin{aligned} \Vert f\Vert ^{2} = & \sum_{n=1}^{\infty} \biggl(\int_{0}^{T}e^{n^{2}B (t )}\varphi (t ) \,dt \biggr)^{-2}g_{n}^{\frac{4}{k+2}}g_{n}^{\frac{2k}{k+2}} \\ \le& \Biggl[\sum_{n=1}^{\infty} \biggl(\int _{0}^{T}e^{n^{2}B (t )}\varphi (t )\,dt \biggr)^{- (k+2 )}g_{n}^{2} \Biggr]^{\frac{2}{k+2}} \Biggl(\sum_{n=1}^{\infty }g_{n}^{2} \Biggr)^{\frac{k}{k+2}}. \end{aligned}
(2.5)

Then, from (1.7), the inequality becomes

$$\Vert f\Vert ^{2}\le \Biggl[\sum_{n=1}^{\infty} \biggl(\int_{0}^{T}e^{n^{2}B (t )}\varphi (t ) \,dt \biggr)^{-k}f_{n}^{2} \Biggr]^{\frac{2}{k+2}} \Vert g \Vert ^{\frac{2k}{k+2}}.$$
(2.6)

We pay attention to the integral on the right-hand side by direct estimate and computation. From (2.1), we thus get

\begin{aligned} \Vert f\Vert ^{2} \le& \biggl(\frac{D_{2}^{k}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{k}} \biggr)^{\frac {2}{k+2}} \Biggl(\sum_{n=1}^{\infty}n^{2k}f_{n}^{2} \Biggr)^{\frac {2}{k+2}}\Vert g\Vert ^{\frac{2k}{k+2}} \\ \le& \biggl[\frac{D_{2}}{B_{1} (1-e^{-D_{1}T} )} \biggr]^{\frac{2k}{k+2}}\Vert f\Vert _{H^{k} (0,\pi )}^{\frac{4}{k+2}}\Vert g\Vert ^{\frac{2k}{k+2}}. \end{aligned}
(2.7)

Hence, the theorem is proved. □

## 3 Tikhonov regularization under an a priori parameter choice rule

We define a linear operator $$K:L^{2} (0,\pi )\to L^{2} (0,\pi )$$ as follows:

$$Kf (x )=\sum_{n=1}^{\infty} \langle f,X_{n} \rangle\int_{0}^{T}e^{n^{2}B (t )}\varphi (t )\, dt\, X_{n} (x )=\int_{0}^{\pi}k (x,\xi )f (\xi )\, d\xi ,$$
(3.1)

where $$k (x,\xi )=\sum_{n=1}^{\infty}\int_{0}^{T}e^{n^{2}B (t )}\varphi (t )\, dt\, X_{n} (x )X_{n} (\xi )$$. Since $$k (x,\xi )=k (\xi,x )$$, K is self-adjoint. Next, we prove its compactness. Consider the finite rank operators $$K_{m}$$ defined by

$$K_{m}f (x )=\sum_{n=1}^{m} \langle f,X_{n} \rangle\int_{0}^{T}e^{n^{2}B (t )}\varphi (t )\, dt\, X_{n} (x ).$$
(3.2)

Then, from (3.1) and (3.2), we have

$$\Vert K_{m}f-Kf\Vert ^{2}=\sum _{n=m+1}^{\infty} \biggl(\int_{0}^{T}e^{n^{2}B (t )}\varphi (t ) \,dt \biggr)^{2}f_{n}^{2}\le \frac{B_{2}^{2}}{m^{4}D_{1}^{2}} \sum_{n=m+1}^{\infty}f_{n}^{2} \le\frac {B_{2}^{2}}{m^{4}D_{1}^{2}}\Vert f\Vert ^{2}.$$
(3.3)

Therefore, $$\Vert K_{m}-K\Vert \to0$$ in the operator norm of $$\mathcal{L} (L^{2} (0,\pi );L^{2} (0,\pi ) )$$ as $$m\to\infty$$; hence K, being the limit of finite rank operators, is compact. Next, the singular values of this linear self-adjoint compact operator are

$$\sigma_{n}=\int_{0}^{T}e^{n^{2}B (t )}\varphi (t ) \,dt,$$
(3.4)

and the corresponding eigenfunctions $$X_{n}$$ form an orthonormal basis of $$L^{2} (0,\pi )$$. From (3.1), the inverse source problem introduced above can be formulated as the operator equation

$$(Kf ) (x )=g (x ).$$
(3.5)
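The decay of the singular values is what makes (3.5) ill-posed. Under the hypothetical choices $a(t)=1$, $\varphi(t)=1$ and $T=1$ (so $B(t)=t-T$), the singular values have the closed form $\sigma_{n}=(1-e^{-n^{2}})/n^{2}$:

```python
import numpy as np

# Singular values (3.4) for the hypothetical choice a(t) = 1, phi(t) = 1,
# T = 1, so B(t) = t - T and sigma_n = (1 - e^{-n^2}) / n^2 exactly.
T = 1.0
n = np.arange(1, 21)
sigma = (1.0 - np.exp(-n**2 * T)) / n**2

# sigma_n -> 0 like n^{-2}, so K^{-1} is unbounded:
# dividing data by sigma_n amplifies high-frequency noise
print(sigma[0], sigma[-1])
```

Because $\sigma_{n}\to0$, the inverse $K^{-1}$ is unbounded, and small perturbations of $g$ produce large perturbations of $f$.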

In general, such a problem is ill-posed, therefore we aim at solving it by using the Tikhonov regularization method, i.e., to minimize the following quantity in $$L^{2} (0,\pi )$$:

$$\Vert Kf-g\Vert ^{2}+\mu^{2}\Vert f \Vert ^{2}.$$
(3.6)

Applying Theorem 2.12 in [18], expression (3.6) attains its minimum at $$f_{\mu}$$, which satisfies

$$K^{*}Kf_{\mu} (x )+\mu^{2}f_{\mu} (x )=K^{*}g (x ).$$
(3.7)

Due to singular value decomposition for a compact self-adjoint operator, we have

$$f_{\mu} (x )=\sum_{n=1}^{\infty} \biggl(\mu^{2}+ \biggl(\int_{0}^{T}e^{n^{2}B (t )} \varphi (t )\,dt \biggr)^{2} \biggr)^{-1}\int _{0}^{T}e^{n^{2}B (t )}\varphi (t )\,dt \langle g,X_{n} \rangle X_{n} (x ).$$
(3.8)

If the given data are noisy, we set

$$f_{\mu}^{\epsilon} (x )=\sum_{n=1}^{\infty} \biggl(\mu ^{2}+ \biggl(\int_{0}^{T}e^{n^{2}B (t )} \varphi_{\epsilon } (t )\,dt \biggr)^{2} \biggr)^{-1}\int _{0}^{T}e^{n^{2}B (t )}\varphi_{\epsilon} (t )\,dt \langle g_{\epsilon},X_{n} \rangle X_{n} (x ).$$
(3.9)

From (2.2), (3.8) and (3.9), we get

\begin{aligned}& f_{\mu} (x ) = \sum_{n=1}^{\infty} \frac{\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}} \langle g,X_{n} \rangle X_{n} (x ), \end{aligned}
(3.10)
\begin{aligned}& f_{\mu}^{\epsilon} (x ) = \sum_{n=1}^{\infty} \frac {\Phi (n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \langle g_{\epsilon },X_{n} \rangle X_{n} (x ). \end{aligned}
(3.11)
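The effect of the Tikhonov filter in (3.11) can be illustrated in coefficient space. The sketch below is a minimal example under the hypothetical choices $a(t)=1$, $\varphi(t)=1$, $T=1$, so that $\Phi(n,\varphi)=(1-e^{-n^{2}})/n^{2}$, with a single-mode source and synthetic noise:

```python
import numpy as np

# Minimal sketch of the Tikhonov filter (3.11) in coefficient space, under
# the hypothetical choices a(t) = 1, phi(t) = 1, T = 1, so that
# Phi(n, phi) = (1 - e^{-n^2})/n^2.
N = 50
n = np.arange(1, N + 1)
Phi = (1.0 - np.exp(-n**2))/n**2

rng = np.random.default_rng(0)
f_true = np.where(n == 1, 1.0, 0.0)       # Fourier coefficients of the source
g = Phi*f_true                            # exact data coefficients g_n
eps = 1e-4
g_eps = g + eps*rng.standard_normal(N)    # noisy data, ||g_eps - g|| = O(eps)

mu = 1e-2
f_mu = Phi/(mu**2 + Phi**2)*g_eps         # regularized coefficients (3.11)
f_naive = g_eps/Phi                       # unregularized inversion of (1.7)

print(np.linalg.norm(f_mu - f_true))      # small: filtered error
print(np.linalg.norm(f_naive - f_true))   # large: noise amplified like n^2
```

The filter factor $\Phi/(\mu^{2}+\Phi^{2})$ is bounded by $1/(2\mu)$, which is exactly the bound used for the term $A_{2}$ in (3.18) below.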

In this work, we will deduce an error estimate for $$\Vert f-f_{\mu}^{\epsilon} \Vert$$ and show convergence rate under a suitable choice of regularization parameters. It is clear that the entire error can be decomposed into the bias and noise propagation as follows:

$$\bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert \le \Vert f-f_{\mu }\Vert +\bigl\Vert f_{\mu}-f_{\mu}^{\epsilon} \bigr\Vert .$$
(3.12)

We first give the error bound for the noise term.

### Lemma 3.1

Assume that $$\Vert g-g_{\epsilon} \Vert \le\epsilon$$ and $$\Vert \varphi-\varphi_{\epsilon} \Vert _{L^{2} [0,T ]}\le\epsilon$$. Then the regularized solution depends continuously on the given data; moreover, we have the following estimate:

$$\bigl\Vert f_{\mu}-f_{\mu}^{\epsilon}\bigr\Vert \le \frac{ \Vert f\Vert (\mu^{2}+D_{1}^{-2}B_{2}C_{2} )}{4\mu ^{2}T^{2}B_{1}C_{1}}\Vert \varphi-\varphi_{\epsilon} \Vert _{L^{2} [0,T ]}+ \frac{1}{2\mu} \Vert g-g_{\epsilon} \Vert .$$
(3.13)

### Proof

We notice that

\begin{aligned} f_{\mu}-f_{\mu}^{\epsilon} = & \sum _{n=1}^{\infty} \biggl(\frac {\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}X_{n}- \frac{\Phi (n,\varphi_{\epsilon } )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}}g_{n}X_{n} \biggr) \\ &{} +\sum_{n=1}^{\infty} \biggl( \frac{\Phi (n,\varphi _{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon } ) )^{2}}g_{n}X_{n}-\frac{\Phi (n,\varphi _{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon } ) )^{2}}g_{n}^{\epsilon}X_{n} \biggr) \\ = & A_{1}+A_{2}. \end{aligned}
(3.14)

We divide the estimate into two steps.

Step 1. Estimate $$\Vert A_{1}\Vert$$:

\begin{aligned} \Vert A_{1}\Vert ^{2} \le& \sum _{n=1}^{\infty} \biggl[\frac{\mu^{2}\Phi (n,\vert \varphi-\varphi_{\epsilon} \vert )+\Phi (n,\varphi )\Phi (n,\varphi _{\epsilon} )\Phi (n,\vert \varphi-\varphi_{\epsilon }\vert )}{ [\mu^{2}+ (\Phi (n,\varphi ) )^{2} ] [\mu^{2}+ (\Phi (n,\varphi _{\epsilon} ) )^{2} ]} \biggr]^{2} g_{n}^{2} \\ \le& \sum_{n=1}^{\infty} \biggl[ \frac{ (\mu^{2}+\Phi (n,\varphi )\Phi (n,\varphi_{\epsilon} ) )\Phi (n,\vert \varphi-\varphi_{\epsilon} \vert )}{4\mu^{2}\Phi (n,\varphi )\Phi (n,\varphi _{\epsilon} )} \biggr]^{2} g_{n}^{2}. \end{aligned}
(3.15)

Notice that

\begin{aligned} &\Phi (n,\varphi )\Phi (n,\varphi_{\epsilon} )\le B_{2}C_{2} \biggl(\int_{0}^{T}e^{n^{2}B (t )}\,dt \biggr)^{2}, \\ &\Phi \bigl(n,\vert \varphi-\varphi_{\epsilon} \vert \bigr)\le \Vert \varphi-\varphi_{\epsilon} \Vert _{L^{2} [0,T ]} \biggl( \int_{0}^{T}e^{2n^{2}B (t )}\,dt \biggr)^{\frac{1}{2}}. \end{aligned}
(3.16)

It follows that

\begin{aligned} \Vert A_{1}\Vert \le& \frac{\Vert \varphi -\varphi_{\epsilon} \Vert _{L^{2} [0,T ]} (\mu ^{2}+B_{2}C_{2} (\int_{0}^{T}e^{n^{2}B (t )}\,dt )^{2} )}{4\mu^{2}T^{2}B_{1}C_{1}}\Vert f\Vert \\ \le& \frac{\Vert \varphi-\varphi_{\epsilon} \Vert _{L^{2} [0,T ]} (\mu ^{2}+n^{-4}D_{1}^{-2}B_{2}C_{2} (1-e^{-n^{2}D_{2}T} )^{2} )}{4\mu^{2}T^{2}B_{1}C_{1}}\Vert f\Vert \\ \le& \frac{\Vert f\Vert (\mu ^{2}+D_{1}^{-2}B_{2}C_{2} )}{4\mu^{2}T^{2}B_{1}C_{1}}\Vert \varphi-\varphi_{\epsilon} \Vert _{L^{2} [0,T ]}. \end{aligned}
(3.17)

Step 2. Estimate $$\Vert A_{2}\Vert$$:

$$\Vert A_{2}\Vert \le\sqrt{\sum _{n=1}^{\infty}\frac { (\Phi (n,\varphi_{\epsilon} ) )^{2}}{\mu ^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}}\bigl\vert g_{n}-g_{n}^{\epsilon}\bigr\vert ^{2}}\le \frac{1}{2\mu }\Vert g-g_{\epsilon} \Vert .$$
(3.18)

Combining (3.17) and (3.18), the proof is completed. □

In order to obtain the boundedness of the bias, we usually need some a priori conditions. By Tikhonov's theorem, the operator $$K^{-1}$$ restricted to the image of a compact set is continuous. Thus, we assume that f lies in a compact subset of $$L^{2} (0,\pi )$$. Hereafter, we assume that $$\Vert f\Vert _{H^{2k} (0,\pi )}\le M$$ for $$k>0$$.

### Lemma 3.2

If the a priori bound holds, then

$$\Vert f-f_{\mu} \Vert \le \begin{cases} \max \{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \} M\mu^{\frac{k}{2}} ,&0< k\le2, \\ \max \{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \} M\mu ,&k>2. \end{cases}$$
(3.19)

### Proof

From (1.8) and (3.8), we deduce that

\begin{aligned} \Vert f-f_{\mu} \Vert ^{2} \le& \sum _{n=1}^{\infty }\frac{\mu^{4}}{ (\Phi (n,\varphi ) )^{2} (\mu^{2}+ (\Phi (n,\varphi ) )^{2} )^{2}}g_{n}^{2} \\ \le& \sum_{n=1}^{\infty}P (n ) \frac{ (1+n^{2} )^{2k}g_{n}^{2}}{ (\Phi (n,\varphi ) )^{2}}, \end{aligned}
(3.20)

where

$$P (n )=\frac{\mu^{4}}{ (\mu^{2}+ (\Phi (n,\varphi ) )^{2} )^{2}} \bigl(1+n^{2} \bigr)^{-2k}.$$
(3.21)

Next, we estimate $$P (n )$$. Without loss of generality, we assume that $$\mu^{-\frac{1}{4}}$$ is not an integer, therefore, the right-hand side of (3.20) can be divided into the sum of $$A_{3}$$ and $$A_{4}$$ as follows:

$$A_{3}=\sum_{n=1}^{n_{0}}P (n ) \frac{ (1+n^{2} )^{2k}g_{n}^{2}}{ (\Phi (n,\varphi ) )^{2}}, \qquad A_{4}=\sum_{n=n_{0}+1}^{\infty}P (n )\frac { (1+n^{2} )^{2k}g_{n}^{2}}{ (\Phi (n,\varphi ) )^{2}},$$
(3.22)

where $$n_{0}\le\mu^{-\frac{1}{4}}\le n_{0}+1$$. In the term $$A_{3}$$, we have

$$P (n )\le\frac{\mu^{4}n^{8}D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}} \bigl(1+n^{2} \bigr)^{-2k}\le \frac{D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}}\mu^{4}n^{8-4k}.$$
(3.23)

For $$0< k\le2$$, we deduce that

$$P (n )\le\frac{D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}}\mu^{k+2}\le\frac {D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}}\mu ^{k}.$$
(3.24)

For $$k>2$$, it yields

$$P (n )\le\frac{D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}}\mu^{4}.$$
(3.25)

In addition, we observe in the term $$A_{4}$$ that

$$P (n )\le \bigl(1+n^{2} \bigr)^{-2k}\le\mu^{k}.$$
(3.26)

From (3.24)-(3.26), we thus obtain

$$P (n )\le \begin{cases} \max \{ 1,\frac{D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}} \} \mu^{k}, & 0< k\le2, \\ \max \{ 1,\frac{D_{2}^{4}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{4}} \} \mu^{2}, & k>2. \end{cases}$$
(3.27)

Hence, by using the above assumption, we conclude that

$$\Vert f-f_{\mu} \Vert \le \begin{cases} \max \{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \} M\mu^{\frac{k}{2}}, & 0< k\le2, \\ \max \{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \} M\mu, & k>2. \end{cases}$$
(3.28)

□

### Theorem 3.1

Assuming that the a priori condition and the noise assumption hold, the following estimates are obtained.

(a) If $$0< k\le2$$ and we choose $$\mu= (\frac{\epsilon }{M} )^{\frac{1}{k+2}}$$, then

$$\bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert \le P \bigl[\Vert f\Vert \bigl(\epsilon^{\frac {2}{k+2}}+D_{1}^{-2}B_{2}C_{2}M^{\frac{2}{k+2}} \bigr)\epsilon ^{\frac{k}{k+2}}+\epsilon^{\frac{k+1}{k+2}}+\epsilon^{\frac {1}{k+2}} \bigr],$$
(3.29)

where P is a constant depending on T, $$B_{1}$$, $$C_{1}$$, $$D_{1}$$, $$D_{2}$$, and M (introduced in Section 2).

(b) If $$k>2$$ and we choose $$\mu= (\frac{\epsilon }{M} )^{\frac{1}{2}}$$, then

$$\bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert \le Q\epsilon \biggl[\Vert f\Vert \biggl(1+\frac {D_{1}^{-2}B_{2}C_{2}}{M}\epsilon \biggr)+1 \biggr],$$
(3.30)

where Q is a constant depending on T, $$B_{1}$$, $$C_{1}$$, $$D_{1}$$, $$D_{2}$$, and M.

### Proof

The proof follows easily from Lemma 3.1 and Lemma 3.2. Indeed, for $$0< k\le2$$, using $$\mu= (\frac {\epsilon}{M} )^{\frac{1}{k+2}}$$ we have

\begin{aligned} \bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert \le& \frac{ \Vert f\Vert }{4T^{2}B_{1}C_{1}} \biggl(1+\frac {D_{1}^{-2}B_{2}C_{2}}{\mu^{2}} \biggr)\Vert \varphi-\varphi _{\epsilon} \Vert _{L^{2} [0,T ]}+\frac{1}{2\mu }\Vert g-g_{\epsilon} \Vert \\ &{} +\max \biggl\{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \biggr\} M\mu^{\frac {k}{2}} \\ \le& \frac{\Vert f\Vert }{4T^{2}B_{1}C_{1}} \bigl(\epsilon^{\frac{2}{k+2}}+D_{1}^{-2}B_{2}C_{2}M^{\frac {2}{k+2}} \bigr)\epsilon^{\frac{k}{k+2}}+\frac{1}{2}M^{\frac {1}{k+2}} \epsilon^{\frac{k+1}{k+2}} \\ &{} +\max \biggl\{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \biggr\} M\epsilon^{\frac{1}{k+2}}. \end{aligned}
(3.31)

Besides, for $$k>2$$, choosing $$\mu= (\frac{\epsilon}{M} )^{\frac{1}{2}}$$ will lead to the following estimate:

\begin{aligned} \bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert \le& \frac{ \Vert f\Vert }{4T^{2}B_{1}C_{1}} \biggl(1+\frac {D_{1}^{-2}B_{2}C_{2}}{M}\epsilon \biggr)\epsilon+ \frac {1}{2}M^{\frac{1}{2}}\epsilon^{\frac{1}{2}} \\ &{} +\max \biggl\{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \biggr\} M^{\frac{1}{2}}\epsilon ^{\frac{1}{2}}. \end{aligned}
(3.32)

□
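The two parameter choices of Theorem 3.1 are easy to evaluate in practice once $\epsilon$ and $M$ are fixed; the values below are hypothetical:

```python
# A priori parameter choice of Theorem 3.1 (hypothetical M and eps):
# mu = (eps/M)^{1/(k+2)} for 0 < k <= 2, and mu = (eps/M)^{1/2} for k > 2.
def mu_apriori(k, eps, M):
    return (eps/M)**(1.0/(k + 2)) if 0 < k <= 2 else (eps/M)**0.5

M, eps = 1.0, 1e-3
print(mu_apriori(1.0, eps, M))  # k = 1: 0.001^(1/3) = 0.1
print(mu_apriori(3.0, eps, M))  # k = 3: 0.001^(1/2) ~ 0.0316
```

Note the dependence on the usually unknown bound M, which is the defect of the a priori rule discussed in the Introduction and the motivation for Section 4.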

### Remark 3.1

If the time-dependent coefficient $$a(t)$$ is not perturbed, the corresponding inverse problem can be simplified by the change of variables

$$\tau= \int_{0}^{t} a(s)\,ds=A(t), \qquad u(x,t)=w(x,\tau).$$
(3.33)

Now, equation (1.2) becomes

$$w_{\tau}(x,\tau)- w_{xx}(x,\tau)={\biggl. \frac{\varphi(t)}{ a(t)} \biggr|_{t=A^{-1}(\tau)}} f(x).$$

Since $$\frac{\varphi(t)}{ a(t)} |_{t=A^{-1}(\tau)}$$ is known, all existing results for identifying the source in the heat equation with constant coefficients are applicable. However, if $$a(t)$$ is perturbed as $$a_{\epsilon}(t)$$, the inverse source problem becomes more complicated than (1.2). To the best of our knowledge, it is difficult to use the variable τ as in (3.33) for solving the inverse source problem when $$a(t)$$ is perturbed. Therefore, we solve (1.2) directly and state the result for the case where $$a(t)$$ is perturbed. Indeed, in the following theorem, we introduce a regularized solution $$F_{\mu}^{\epsilon}$$ and obtain convergence rates under the a priori parameter choice rule. The case of the a posteriori parameter choice rule can be proven similarly and is not described here for reasons of length.
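When $a(t)$ is known, the change of variables (3.33) is straightforward to carry out numerically, since $A$ is strictly increasing and can be inverted by interpolation. A minimal sketch with the hypothetical coefficient $a(t)=1+t/2$ (so $A(t)=t+t^{2}/4$):

```python
import numpy as np

# Sketch of the change of variables (3.33) for a hypothetical coefficient
# a(t) = 1 + t/2, so A(t) = t + t^2/4 is strictly increasing and A^{-1}
# can be tabulated numerically.
t = np.linspace(0.0, 1.0, 2001)
a = 1.0 + 0.5*t
# cumulative trapezoid rule for A(t) = int_0^t a(s) ds
A = np.concatenate(([0.0], np.cumsum(0.5*(a[1:] + a[:-1])*np.diff(t))))

tau = 0.5
t_star = np.interp(tau, A, t)          # numerical A^{-1}(tau)
print(t_star, t_star + t_star**2/4)    # second value recovers tau = 0.5
```

When only a perturbed $a_{\epsilon}$ is available, the tabulated $A_{\epsilon}$ inherits the perturbation, which is why Theorem 3.2 below treats the perturbed problem directly instead.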

### Theorem 3.2

Assume that the source term $$f \in H^{2}(0,\pi)$$ and there exists a positive number M such that

$$\|f\|_{H^{2}(0,\pi)} \le M.$$

Suppose that the term a is noised by the perturbed data $$a_{\epsilon}\in C([0,T])$$ in such a way that

$$\|a_{\epsilon}-a\|_{C([0,T])}=\sup_{0 \le t \le T} \bigl\vert a_{\epsilon}(t)-a(t)\bigr\vert \le\epsilon.$$

Then we construct a regularized solution $$F_{\mu}^{\epsilon}$$ satisfying

$$\lim_{\epsilon\to0} \bigl\Vert F_{\mu}^{\epsilon}-f \bigr\Vert =0,$$

where

$$F_{\mu}^{\epsilon} (x )=\sum_{n=1}^{\infty} \biggl(\mu ^{2}+ \biggl(\int_{0}^{T}e^{n^{2}B_{\epsilon}(t )} \varphi _{\epsilon} (t )\,dt \biggr)^{2} \biggr)^{-1}\int _{0}^{T}e^{n^{2}B_{\epsilon}(t )}\varphi_{\epsilon} (t )\,dt \langle g_{\epsilon},X_{n} \rangle X_{n} (x )$$
(3.34)

and

$$B_{\epsilon}(t)=A_{\epsilon}(t)-A_{\epsilon}(T), \quad A_{\epsilon}(t) = \int_{0}^{t} a_{\epsilon}(s)\,ds.$$

Moreover, by choosing $$\mu=\epsilon^{\frac{1}{3}}$$, we have the following estimate:

$$\bigl\Vert F_{\mu}^{\epsilon}-f\bigr\Vert \le Q_{2} \epsilon^{\frac{1}{3}},$$

where $$Q_{2}$$ is a constant depending on T, $$B_{1}$$, $$B_{2}$$, $$C_{1}$$, $$C_{2}$$, $$D_{1}$$, $$D_{2}$$, $$D_{3}$$, $$D_{4}$$, and M (introduced in Section 2).

### Proof

Since $$a_{\epsilon}\in C([0,T])$$, there exist two positive numbers $$D_{3}$$, $$D_{4}$$ such that

$$D_{3} \le a_{\epsilon}(t) \le D_{4}.$$

First, we denote

$$\Phi_{\epsilon}(n,h ) = \int_{0}^{T}e^{n^{2}B_{\epsilon}(t )}h (t )\,dt,$$

then

$$F_{\mu}^{\epsilon} (x ) = \sum_{n=1}^{\infty} \frac {\Phi_{\epsilon}(n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi _{\epsilon}(n,\varphi_{\epsilon} ) )^{2}} \langle g_{\epsilon},X_{n} \rangle X_{n} (x ).$$

Defining the functions $$G_{\mu}^{\epsilon} (x )$$, $$R_{\mu }^{\epsilon} (x )$$ by

\begin{aligned}& G_{\mu}^{\epsilon} (x ) = \sum_{n=1}^{\infty} \frac {\Phi_{\epsilon}(n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi _{\epsilon}(n,\varphi_{\epsilon} ) )^{2}} \langle g,X_{n} \rangle X_{n} (x ), \\& R_{\mu}^{\epsilon} (x ) = \sum_{n=1}^{\infty} \frac {\Phi (n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \langle g,X_{n} \rangle X_{n} (x ), \end{aligned}

it is clear that

$$\bigl\Vert F_{\mu}^{\epsilon}-f\bigr\Vert \le\bigl\Vert F_{\mu}^{\epsilon}-G_{\mu}^{\epsilon}\bigr\Vert + \bigl\Vert G_{\mu}^{\epsilon}-R_{\mu}^{\epsilon}\bigr\Vert + \bigl\Vert R_{\mu}^{\epsilon}-f_{\mu}\bigr\Vert + \Vert f_{\mu}-f\Vert .$$
(3.35)

It is obvious that $$\|R_{\mu}^{\epsilon}-f_{\mu}\| = \|A_{1}\|$$, which is given in (3.14), and using (3.17) we obtain the following inequality:

$$\bigl\| R_{\mu}^{\epsilon}-f_{\mu}\bigr\| =\|A_{1}\| \le\frac{\Vert f \Vert (\mu^{2}+D_{1}^{-2}B_{2}C_{2} )}{4\mu ^{2}T^{2}B_{1}C_{1}}\Vert \varphi-\varphi_{\epsilon} \Vert _{L^{2} [0,T ]} \le\frac{\Vert f\Vert (\mu^{2}+D_{1}^{-2}B_{2}C_{2} )}{4\mu^{2}T^{2}B_{1}C_{1}} \epsilon.$$
(3.36)

We estimate $$\|F_{\mu}^{\epsilon}-G_{\mu}^{\epsilon}\|$$ as follows:

$$\bigl\Vert F_{\mu}^{\epsilon}-G_{\mu}^{\epsilon}\bigr\Vert \le\sqrt{\sum_{n=1}^{\infty } \frac { (\Phi_{\epsilon}(n,\varphi_{\epsilon} ) )^{2}}{\mu^{2}+ (\Phi_{\epsilon}(n,\varphi_{\epsilon } ) )^{2}}\bigl\vert g_{n}-g_{n}^{\epsilon} \bigr\vert ^{2}}\le\frac {1}{2\mu} \Vert g-g_{\epsilon} \Vert \le\frac{\epsilon }{2\mu} .$$
(3.37)

The term $$\|G_{\mu}^{\epsilon}-R_{\mu}^{\epsilon}\|$$ is bounded in the following manner:

\begin{aligned} \begin{aligned}[b] \bigl\| G_{\mu}^{\epsilon}-R_{\mu}^{\epsilon}\bigr\| ^{2} &=\sum_{n=1}^{\infty} \biggl[ \frac {\Phi_{\epsilon}(n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi _{\epsilon}(n,\varphi_{\epsilon} ) )^{2}} -\frac {\Phi (n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr]^{2} g_{n}^{2} \\ &\le \sum_{n=1}^{\infty} \frac{ [ \mu^{2}+ \Phi (n,\varphi_{\epsilon} ) \Phi_{\epsilon}(n,\varphi _{\epsilon } ) ]^{2} [ \Phi (n,\varphi_{\epsilon} )-\Phi_{\epsilon}(n,\varphi_{\epsilon} ) ]^{2} }{ [ \mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2} ]^{2} [ \mu^{2}+ (\Phi_{\epsilon}(n,\varphi _{\epsilon } ) )^{2} ]^{2} }g_{n}^{2} \\ &\le \sum_{n=1}^{\infty} \frac{ [ \mu^{2}+ \Phi (n,\varphi_{\epsilon} ) \Phi_{\epsilon}(n,\varphi _{\epsilon } ) ]^{2} [ \Phi (n,\varphi_{\epsilon} )-\Phi_{\epsilon}(n,\varphi_{\epsilon} ) ]^{2} }{ 16 \mu ^{4} |\Phi (n,\varphi_{\epsilon} ) |^{2} |\Phi _{\epsilon}(n,\varphi_{\epsilon} ) |^{2}}g_{n}^{2}. \end{aligned} \end{aligned}
(3.38)

Since $$B(t) \le-D_{1} (T-t) \le0$$, $$B_{\epsilon}(t) \le-D_{3}(T-t) \le0$$, and from (2.1) and (2.2), we have

$$\Phi (n,\varphi_{\epsilon} ) \Phi_{\epsilon}(n,\varphi _{\epsilon} ) = \int_{0}^{T}e^{n^{2}B_{\epsilon}(t )} \varphi_{\epsilon} (t )\,dt \int_{0}^{T}e^{n^{2}B (t )} \varphi (t )\,dt \le B_{2}C_{2} T^{2}$$
(3.39)

and

\begin{aligned} \Phi (n,\varphi_{\epsilon} ) \Phi_{\epsilon}(n,\varphi _{\epsilon} ) \ge& \int_{0}^{T} e^{-n^{2}D_{3}(T-t)} \varphi (t ) \,dt \int_{0}^{T} e^{-n^{2}D_{1}(T-t)} \varphi_{\epsilon} (t ) \,dt \\ = &\frac{B_{2}C_{2}}{D_{1}D_{3}}\frac{ (1-e^{-n^{2}D_{3}T}) (1-e^{-n^{2}D_{1}T}) }{n^{4} }. \end{aligned}
(3.40)

In addition, the term $$g_{n}^{2}$$ is bounded by

\begin{aligned} g_{n}^{2}&= \biggl( \int_{0}^{T}e^{n^{2}B (t )} \varphi (t )\,dt \biggr)^{2} f_{n}^{2} \\ &\le \biggl( \int_{0}^{T}e^{-D_{2} n^{2} (T-t)}\varphi (t )\,dt \biggr)^{2} f_{n}^{2} \\ &= \frac{B_{2}^{2} (1-e^{-n^{2}D_{2}T})^{2} f_{n}^{2} }{ D_{2}^{2}n^{4}}. \end{aligned}
(3.41)

It is easy to see, by the mean value theorem, that for all $$c, d\in\mathbb{R}$$,

$$\bigl\vert e^{c}-e^{d}\bigr\vert \le\max\bigl\{ {|c-d|} {e^{c}}, {|c-d|} {e^{d}} \bigr\} .$$

Using this inequality, we obtain

\begin{aligned} \bigl| \Phi (n,\varphi_{\epsilon} )-\Phi_{\epsilon}(n,\varphi_{\epsilon} ) \bigr| =& \biggl| \int_{0}^{T} \bigl(e^{n^{2}B_{\epsilon}(t )}-e^{n^{2}B (t )} \bigr) \varphi_{\epsilon}(t )\,dt\biggr| \\ \le& C_{2} \int_{0}^{T} n^{2} \bigl|B_{\epsilon}(t) - B(t)\bigr| \max \bigl( e^{n^{2}B_{\epsilon}(t )}, e^{n^{2}B (t )} \bigr)\,dt \\ \le& C_{2} n^{2} \int_{0}^{T} \biggl(\int_{t}^{T} \bigl|a_{\epsilon}(s)-a(s)\bigr| \,ds \biggr) \max \bigl( e^{n^{2}B_{\epsilon}(t )}, e^{n^{2}B (t )} \bigr)\,dt \\ \le& \epsilon C_{2}T n^{2} \max \biggl( \int _{0}^{T}e^{n^{2}B (t )} \,dt, \int _{0}^{T}e^{n^{2}B_{\epsilon}(t )}\,dt \biggr) \\ \le& \epsilon C_{2}T n^{2} \max \biggl( \int _{0}^{T}e^{-D_{2} n^{2} (T-t)} \,dt, \int _{0}^{T} e^{-D_{4} n^{2} (T-t)} \,dt \biggr) \\ \le& \epsilon C_{2}T n^{2} \max \biggl( \frac{ 1-e^{-n^{2}D_{2}T} }{D_{2} n^{2} } , \frac{ 1-e^{-n^{2}D_{4}T} }{D_{4}n^{2} } \biggr) \\ \le&\frac{\epsilon C_{2}T }{ \min (D_{2}, D_{4})}. \end{aligned}
(3.42)

Combining (3.38), (3.39), (3.40), (3.41) and (3.42), we get

\begin{aligned} \bigl\Vert G_{\mu}^{\epsilon}-R_{\mu}^{\epsilon}\bigr\Vert ^{2} \le& \frac{\epsilon^{2} C_{2}^{2} D_{1}^{2} D_{3}^{2} T^{2} (\mu^{2}+ B_{2}C_{2} T^{2} )^{2} }{16 \mu^{4} \min(D_{2}^{2}, D_{4}^{2}) C_{2}^{2}D_{2}^{2}} \\ &{}\times\sum _{n=1}^{\infty} n^{4} \frac{ (1-e^{-n^{2}D_{2}T})^{2} }{ (1-e^{-n^{2}D_{1}T})^{2}(1-e^{-n^{2}D_{3}T})^{2} } f_{n}^{2} \\ \le& \frac{\epsilon^{2} C_{2}^{2} D_{1}^{2} D_{3}^{2} T^{2} (\mu^{2}+ B_{2}C_{2} T^{2} )^{2} }{16 \mu^{4} \min(D_{2}^{2}, D_{4}^{2}) C_{2}^{2}D_{2}^{2}} \\ &{}\times \frac{1}{ (1-e^{-D_{1}T})^{2}(1-e^{-D_{3}T})^{2} } \|f\|_{H^{2}(0,\pi)}^{2} . \end{aligned}
(3.43)

This implies that

$$\bigl\Vert G_{\mu}^{\epsilon}-R_{\mu}^{\epsilon}\bigr\Vert \le Q_{1} \frac{\epsilon}{\mu^{2}} \|f\| _{H^{2}(0,\pi)} \le Q_{1}M \frac{\epsilon}{\mu^{2}},$$
(3.44)

where

$$Q_{1}= \frac{ C_{2} D_{1} D_{3} T (\mu^{2}+ B_{2}C_{2} T^{2} ) }{4 \min (D_{2}, D_{4}) C_{2}D_{2}} \frac{1}{ (1-e^{-D_{1}T})(1-e^{-D_{3}T}) }.$$

Combining (3.35), (3.36), (3.37), (3.44) and Lemma 3.2, we get

\begin{aligned} \bigl\Vert F_{\mu}^{\epsilon}-f\bigr\Vert \le& \bigl\Vert F_{\mu}^{\epsilon}-G_{\mu}^{\epsilon}\bigr\Vert + \bigl\Vert G_{\mu}^{\epsilon}-R_{\mu}^{\epsilon}\bigr\Vert + \bigl\Vert R_{\mu}^{\epsilon}-f_{\mu}\bigr\Vert + \Vert f_{\mu}-f\Vert \\ \le& \frac{\epsilon}{2\mu}+ Q_{1}M \frac{\epsilon}{\mu^{2}}+ \frac {\Vert f\Vert (\mu^{2}+D_{1}^{-2}B_{2}C_{2} )}{4\mu ^{2}T^{2}B_{1}C_{1}} \epsilon \\ &{}+\max \biggl\{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \biggr\} M\mu. \end{aligned}

By choosing $$\mu=\epsilon^{\frac{1}{3}}$$ and noting that $$\|f\| \le \|f\| _{H^{2}(0,\pi)} \le M$$, we obtain

\begin{aligned} \bigl\Vert F_{\mu}^{\epsilon}-f\bigr\Vert \le& \frac{ \epsilon^{\frac{2}{3}} }{2}+ Q_{1}M \epsilon^{\frac{1}{3}}+ \frac{M (\mu^{2}+D_{1}^{-2}B_{2}C_{2} )}{4T^{2}B_{1}C_{1}} \epsilon^{\frac{1}{3}} \\ &{}+\max \biggl\{ 1,\frac {D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \biggr\} M \epsilon^{\frac{1}{3}} \\ \le& Q_{2} \epsilon^{\frac{1}{3}}, \end{aligned}

where

$$Q_{2}= \frac{ \epsilon^{\frac{1}{3}} }{2}+ Q_{1}M + \frac{M (\mu ^{2}+D_{1}^{-2}B_{2}C_{2} )}{4T^{2}B_{1}C_{1}} + \max \biggl\{ 1,\frac{D_{2}^{2}}{ [B_{1} (1-e^{-D_{1}T} ) ]^{2}} \biggr\} M.$$

The proof is completed. □

## 4 Tikhonov regularization under a posteriori parameter choice rule

In this section, we consider an a posteriori regularization parameter choice based on Morozov's discrepancy principle (see [19, 20]). First, we introduce the following lemma.

### Lemma 4.1

Set $$\rho (\mu )=\Vert Kf_{\mu }^{\epsilon}-g_{\epsilon} \Vert$$ and assume that $$0<\epsilon<\Vert g_{\epsilon }\Vert$$, then the following results hold:

(a) $$\rho (\mu )$$ is a continuous function;

(b) $$\rho (\mu )\to0$$ as $$\mu\to0$$;

(c) $$\rho (\mu )\to \Vert g_{\epsilon }\Vert$$ as $$\mu\to\infty$$;

(d) $$\rho (\mu )$$ is a strictly increasing function.

### Proof

All results are derived from

$$\rho (\mu )=\sqrt{\sum_{n=1}^{\infty} \biggl(\frac{\mu ^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}} \biggr)^{2} \bigl(g_{n}^{\epsilon} \bigr)^{2}}.$$
(4.1)

□

Let us define a function $$H (y )$$ as follows:

$$H (y )= \begin{cases} y^{y} (1-y )^{1-y} ,&y\in (0,1 ), \\ 1 ,&y\in \{ 0,1 \} . \end{cases}$$
(4.2)

It is clear that $$0< H (y )\le1$$, and we have

$$\sup_{x>0}\frac{x^{y}}{1+x}=H (y ), \quad y\in [0,1 ].$$
(4.3)
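For completeness, identity (4.3) is a short calculus exercise: for $$y\in (0,1 )$$,

$$\frac{d}{dx} \biggl(\frac{x^{y}}{1+x} \biggr)=\frac{x^{y-1} (y- (1-y )x )}{ (1+x )^{2}},$$

so the supremum is attained at $$x^{*}=\frac{y}{1-y}$$, where

$$\frac{ (x^{*} )^{y}}{1+x^{*}}= \biggl(\frac{y}{1-y} \biggr)^{y} (1-y )=y^{y} (1-y )^{1-y}=H (y ),$$

while for $$y\in \{ 0,1 \}$$ the supremum equals 1, attained only in the limit $$x\to0^{+}$$ or $$x\to\infty$$, respectively.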

### Lemma 4.2

Choose $$\tau>1$$ such that $$0<\tau\epsilon<\Vert g_{\epsilon }\Vert$$, then there exists a unique regularization parameter $$\mu>0$$ such that $$\Vert Kf_{\mu}^{\epsilon}-g_{\epsilon} \Vert =\tau \epsilon$$. Moreover, if the a priori condition with $$k\in (0,1 ]$$ and the noise assumptions hold, we have the following inequality:

$$\frac{\epsilon}{\mu^{k+1}}\le\frac{P}{\tau-1}H \biggl(\frac {1-k}{2} \biggr)M,$$
(4.4)

where P is a constant depending on k, T, $$B_{2}$$, $$C_{1}$$, $$D_{1}$$, $$D_{2}$$.

### Proof

The uniqueness of the regularization parameter $$\mu>0$$ follows from Lemma 4.1. We thus only need to prove the inequality. First, we notice that

\begin{aligned} \tau\epsilon = & \sqrt{\sum_{n=1}^{\infty} \biggl(\frac{\mu ^{2}}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr)^{2} \bigl(g_{n}^{\epsilon} \bigr)^{2}} \\ \le& \sqrt{\sum_{n=1}^{\infty} \biggl(\frac{\mu^{2}}{\mu ^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr)^{2} \bigl(g_{n}-g_{n}^{\epsilon} \bigr)^{2}} \\ &{} +\sqrt{\sum_{n=1}^{\infty} \biggl(\frac{\mu^{2}\Phi (n,\varphi )}{ (\mu^{2}+ (\Phi (n,\varphi _{\epsilon} ) )^{2} ) (1+n^{2} )^{k}} \biggr)^{2}\frac{ (1+n^{2} )^{2k} (g_{n} )^{2}}{ (\Phi (n,\varphi ) )^{2}}}. \end{aligned}
(4.5)

Since $${ \frac{\mu^{2}}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}}\le1}$$, setting

$$K (n )=\frac{\mu^{2}\Phi (n,\varphi )}{ (\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2} ) (1+n^{2} )^{k}},$$
(4.6)

we then have

$$\tau\epsilon\le\epsilon+\sqrt{\sum_{n=1}^{\infty}K^{2} (n )\frac{ (1+n^{2} )^{2k} (g_{n} )^{2}}{ (\Phi (n,\varphi ) )^{2}}}.$$
(4.7)

Now, we estimate $$K (n )$$ as follows:

\begin{aligned} K (n ) \le& \frac{ (\frac{\mu}{\Phi (n,\varphi_{\epsilon} )} )^{1-k}\Phi (n,\varphi ) (\Phi (n,\varphi_{\epsilon} ) )^{k-3}}{ ( (\frac{\mu}{\Phi (n,\varphi_{\epsilon } )} )^{2}+1 ) (1+n^{2} )^{k}}\mu ^{k+1} \\ \le& H \biggl(\frac{1-k}{2} \biggr)\mu ^{k+1}B_{2}C_{1}D_{1}^{-1}D_{2}^{3-k} \frac{ (1-e^{-D_{1}T} )^{k-3}}{n^{2 (k-2 )} (1+n^{2} )^{k}} \\ \le& H \biggl(\frac{1-k}{2} \biggr)\mu ^{k+1}B_{2}C_{1}D_{1}^{-1}D_{2}^{3-k} \bigl(1-e^{-D_{1}T} \bigr)^{k-3}. \end{aligned}
(4.8)

Therefore, combining (4.7) and (4.8), we conclude that

$$\tau\epsilon\le\epsilon+B_{2}C_{1}D_{1}^{-1}D_{2}^{3-k} \bigl(1-e^{-D_{1}T} \bigr)^{k-3}H \biggl(\frac{1-k}{2} \biggr)\mu ^{k+1}M,$$
(4.9)

which gives the desired result. □

### Theorem 4.1

Assume that the a priori condition and the noise assumptions hold, and there exists $$\tau>1$$ such that $$0<\tau\epsilon< \Vert g_{\epsilon} \Vert$$. Then we choose a unique regularization parameter $$\mu>0$$ such that

$$\bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert \le \begin{cases} \epsilon^{\frac{k}{k+1}}P, & 0< k\le1, \\ \epsilon^{\frac{1}{2}}Q, & k>1, \end{cases}$$
(4.10)

where the constants P and Q depend on T, μ, k, τ, $$B_{1}$$, $$B_{2}$$, $$C_{1}$$, $$C_{2}$$, $$D_{1}$$, $$D_{2}$$ and M.

### Proof

For $$0< k\le1$$, we have

\begin{aligned} \bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert = & \sum _{n=1}^{\infty}\frac{1}{\Phi (n,\varphi )} \biggl[g_{n}- \frac{ (\Phi (n,\varphi ) )^{2}}{\mu ^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon } \biggr] \\ &{} +\sum_{n=1}^{\infty} \biggl[ \frac{\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}-\frac {\Phi (n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr]g_{n}^{\epsilon}. \end{aligned}
(4.11)

It follows that

\begin{aligned} \bigl\Vert f-f_{\mu}^{\epsilon}\bigr\Vert ^{2} \le& 2\sum_{n=1}^{\infty}\frac{1}{ (\Phi (n,\varphi ) )^{2}} \biggl[g_{n}-\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon} \biggr]^{2} \\ &{} +2\sum_{n=1}^{\infty} \biggl( \biggl[ \frac{\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}-\frac{\Phi (n,\varphi_{\epsilon} )}{\mu ^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr]g_{n}^{\epsilon} \biggr)^{2}. \end{aligned}
(4.12)

We set $$K_{1}$$ and $$K_{2}$$ as follows:

\begin{aligned}& K_{1} (n )=\frac{1}{ (\Phi (n,\varphi ) )^{2}} \biggl[g_{n}- \frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon} \biggr]^{\frac{2}{k+1}} \biggl[g_{n}-\frac { (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon} \biggr]^{\frac{2k}{k+1}}, \end{aligned}
(4.13)
\begin{aligned}& K_{2} (n )= \biggl( \biggl[\frac{\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}-\frac {\Phi (n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr]g_{n}^{\epsilon } \biggr)^{2}. \end{aligned}
(4.14)

Afterwards, we estimate $$\Vert f-f_{\mu}^{\epsilon} \Vert$$ by considering the following inequalities. First, we see that

$$\sum_{n=1}^{\infty}K_{1} (n )\le L_{1}^{\frac {k}{k+1}}L_{2}^{\frac{1}{k+1}},$$
(4.15)

where

\begin{aligned}& L_{1}=\sum_{n=1}^{\infty} \biggl(g_{n}-\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon} \biggr)^{2}, \end{aligned}
(4.16)
\begin{aligned}& L_{2}=\sum_{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigr)^{-2 (k+1 )} \biggl(g_{n}-\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon} \biggr)^{2}. \end{aligned}
(4.17)

Now, we estimate $$L_{1}$$ and $$L_{2}$$ as follows:

\begin{aligned}& L_{1}^{\frac{1}{2}}\le\sqrt{\sum _{n=1}^{\infty} \bigl(g_{n}-g_{n}^{\epsilon} \bigr)^{2}}+\sqrt{\sum_{n=1}^{\infty} \biggl(g_{n}^{\epsilon}-\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}g_{n}^{\epsilon} \biggr)^{2}}\le (1+\tau )\epsilon, \end{aligned}
(4.18)
\begin{aligned}& L_{2}^{\frac{1}{2}}\le\sqrt{\sum _{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigr)^{-2 (k+1 )}g_{n}^{2}}+\sqrt {\sum _{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigr)^{-2 (k+1 )} \biggl(\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}} \biggr)^{2} \bigl(g_{n}^{\epsilon} \bigr)^{2}}. \end{aligned}
(4.19)

Denoting by $$L_{3}$$ and $$L_{4}$$ the two terms of the sum on the right-hand side of (4.19), we now continue to estimate these terms by the following direct computation:

\begin{aligned} L_{3} \le& \sqrt{\sum_{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigl(1+n^{2} \bigr) \bigr)^{-2k} \frac{ (1+n^{2} )^{2k}g_{n}^{2}}{ (\Phi (n,\varphi ) )^{2}}} \\ \le& \sqrt{\sum_{n=1}^{\infty} \biggl(\frac {n^{2}D_{2}}{B_{1} (1-e^{-D_{1}T} ) (1+n^{2} )} \biggr)^{2k}\frac{ (1+n^{2} )^{2k}g_{n}^{2}}{ (\Phi (n,\varphi ) )^{2}}} \\ \le& \biggl(\frac{D_{2}}{B_{1} (1-e^{-D_{1}T} )} \biggr)^{k}M \end{aligned}
(4.20)

and

\begin{aligned} L_{4} \le& \sqrt{\sum_{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigr)^{-2 (k+1 )} \biggl(\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}} \biggr)^{2} \bigl(g_{n}-g_{n}^{\epsilon} \bigr)^{2}} \\ &{} +\sqrt{\sum_{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigr)^{-2 (k+1 )} \biggl(\frac{ (\Phi (n,\varphi ) )^{2}}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}} \biggr)^{2} (g_{n} )^{2}}. \end{aligned}
(4.21)

In (4.21), we denote the two terms on the right-hand side by $$L_{5}$$ and $$L_{6}$$. Then, using Lemma 4.2 and (4.20), we estimate them as follows:

\begin{aligned}& L_{5} \le \sqrt{\sum_{n=1}^{\infty} \mu^{-2 (k+1 )} \biggl(\frac{ (\frac{\mu}{\Phi (n,\varphi )} )^{k+1}}{ (\frac{\mu}{\Phi (n,\varphi )} )^{2}+1} \biggr)^{2} \bigl(g_{n}-g_{n}^{\epsilon} \bigr)^{2}} \\& \hphantom{L_{5}} \le \mu^{- (k+1 )}H \biggl(\frac{k+1}{2} \biggr) \epsilon \\& \hphantom{L_{5}} \le \frac{P}{\tau-1}H \biggl(\frac{1-k}{2} \biggr)H \biggl(\frac {k+1}{2} \biggr)M \\& \hphantom{L_{5}} \le \frac{P}{\tau-1} \biggl(H \biggl(\frac{1-k}{2} \biggr) \biggr)^{2}M, \end{aligned}
(4.22)
\begin{aligned}& L_{6}\le\sqrt{\sum_{n=1}^{\infty} \bigl(\Phi (n,\varphi ) \bigr)^{-2 (k+1 )} (g_{n} )^{2}} \le \biggl(\frac{D_{2}}{B_{1} (1-e^{-D_{1}T} )} \biggr)^{k}M. \end{aligned}
(4.23)

Then from (4.16)-(4.23) we have

$$\sum_{n=1}^{\infty}K_{1} (n )\le \bigl[ (1+\tau )\epsilon \bigr]^{\frac{2k}{k+1}} \biggl[2 \biggl(\frac {D_{2}}{B_{1} (1-e^{-D_{1}T} )} \biggr)^{k}M+\frac{P}{\tau -1} \biggl(H \biggl(\frac{1-k}{2} \biggr) \biggr)^{2}M \biggr]^{\frac {2}{k+1}}.$$
(4.24)

Next, we estimate the term $$K_{2} (n )$$. Recalling (3.15)-(3.17), we obtain

\begin{aligned} \sqrt{\sum_{n=1}^{\infty}K_{2} (n )} \le& \sqrt{\sum_{n=1}^{\infty} \biggl( \biggl[\frac{\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}-\frac {\Phi (n,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr] \bigl(g_{n}^{\epsilon}-g_{n} \bigr) \biggr)^{2}} \\ &{} +\sqrt{\sum_{n=1}^{\infty} \biggl( \biggl[\frac{\Phi (n,\varphi )}{\mu^{2}+ (\Phi (n,\varphi ) )^{2}}-\frac{\Phi (n,\varphi_{\epsilon} )}{\mu ^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \biggr]g_{n} \biggr)^{2}} \\ \le& \Vert \varphi_{\epsilon}-\varphi \Vert _{L^{2} [0,T ]} \bigl(P \Vert g_{\epsilon}-g \Vert +\Vert f\Vert \bigr). \end{aligned}
(4.25)

Combining (4.12), (4.24) and (4.25), the first part of (4.10) with $$0< k\le1$$ is deduced. Furthermore, the second part of (4.10) with $$k>1$$ follows by embedding $$H^{2k}$$ into $$H^{1}$$. □

## 5 Numerical examples

In order to estimate the errors between the proposed regularized solution and its exact solution, numerical experiments have been carried out. Two numerical examples corresponding to $$k=T=1$$ are introduced in this section. The first example considers a situation where a in equation (1.2) is a constant and the function f is obtained from an exact data function. The second example considers a non-constant a, with f obtained from observation data of g and φ.

The pair $$(g_{\epsilon},\varphi_{\epsilon} )$$, defined below, serves as the measured data with a random noise:

\begin{aligned}& g_{\epsilon} (\cdot ) = g (\cdot ) \biggl(1+\frac {\epsilon\cdot\operatorname{rand} (\cdot )}{\Vert g\Vert } \biggr), \end{aligned}
(5.1)
\begin{aligned}& \varphi_{\epsilon} (\cdot ) = \varphi (\cdot )+\epsilon\cdot \operatorname{rand} (\cdot ), \end{aligned}
(5.2)

where $$\operatorname{rand}(\cdot) \in(-1,1)$$ is a uniformly distributed random number. Hence, we can easily verify the following inequalities:

$$\| g - g_{\epsilon} \| \leq\epsilon$$

and

$$\|\varphi- \varphi_{\epsilon} \| \leq\epsilon.$$
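As an illustration (not part of the paper), the noisy data (5.1)-(5.2) can be generated as in the following Python sketch; the function name `add_noise` and the choice of NumPy's uniform generator are our assumptions:

```python
import numpy as np

def add_noise(g, phi, eps, seed=None):
    """Perturb exact data samples as in (5.1)-(5.2).

    g, phi : arrays of samples of g(x) and phi(t); eps : noise level.
    rand(.) is drawn uniformly from (-1, 1), as in the paper.
    (Function name and RNG choice are illustrative assumptions.)
    """
    rng = np.random.default_rng(seed)
    g_eps = g * (1.0 + eps * rng.uniform(-1.0, 1.0, g.shape) / np.linalg.norm(g))
    phi_eps = phi + eps * rng.uniform(-1.0, 1.0, phi.shape)
    return g_eps, phi_eps
```

Since $$|\operatorname{rand}(\cdot)|<1$$ pointwise, one checks directly that $$\|g-g_{\epsilon}\|\le\epsilon$$ and that $$|\varphi-\varphi_{\epsilon}|\le\epsilon$$ at every grid point.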

In addition, we can take the regularization parameter for the a priori parameter choice rule $$\mu= (\frac{\epsilon}{M} )^{\frac {1}{3}}$$, where M serves as the a priori bound and is computed as $$\Vert f\Vert _{H^{2} (0,\pi )}$$. The absolute and relative errors between the regularized and exact solutions are estimated. The regularized solutions are defined by

\begin{aligned}& f_{\mu}^{\epsilon} (x )=\frac{2}{\pi}\sum _{n=1}^{N}\frac{\Phi (n,\varphi_{\epsilon} )}{\mu ^{2}+ (\Phi (n,\varphi_{\epsilon} ) )^{2}} \bigl\langle g_{\epsilon} (x ),\sin (nx ) \bigr\rangle \sin (nx ), \end{aligned}
(5.3)
\begin{aligned}& \Phi (n,\varphi_{\epsilon} )=\int_{0}^{1}e^{n^{2} (A (t )-A (1 ) )} \varphi_{\epsilon} (t )\,dt, \end{aligned}
(5.4)

where N is the truncation number; $$N =1\text{,}000$$ is chosen in the examples.
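The truncated series (5.3)-(5.4) can be evaluated as in the sketch below (our own, with the trapezoidal rule standing in for the exact integrals; all names are illustrative):

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule (avoids relying on np.trapz/np.trapezoid naming)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def regularized_solution(x, t, a_vals, phi_eps, g_eps, xg, mu, N=1000):
    """Tikhonov-regularized source f_mu^eps(x) from (5.3)-(5.4).

    a_vals, phi_eps : samples of a(t) and phi_eps(t) on the time grid t;
    g_eps : samples of g_eps on the spatial grid xg over (0, pi).
    Trapezoidal quadrature is an assumption of this sketch.
    """
    # A(t) = int_0^t a(s) ds, accumulated with the trapezoidal rule
    A = np.concatenate(([0.0], np.cumsum(0.5 * (a_vals[1:] + a_vals[:-1]) * np.diff(t))))
    f = np.zeros_like(x)
    for n in range(1, N + 1):
        # Phi(n, phi_eps) = int_0^1 exp(n^2 (A(t) - A(1))) phi_eps(t) dt, cf. (5.4)
        Phi = _trapz(np.exp(n**2 * (A - A[-1])) * phi_eps, t)
        gn = _trapz(g_eps * np.sin(n * xg), xg)  # <g_eps, sin(nx)>
        f += (2.0 / np.pi) * Phi / (mu**2 + Phi**2) * gn * np.sin(n * x)
    return f
```

As a sanity check under the forward relation $$g_{n}=\Phi (n,\varphi )f_{n}$$: for noise-free data with $$a(t)=1$$, $$\varphi(t)=1$$ and $$g(x)=(1-e^{-1})\sin x$$, the source is $$f(x)=\sin x$$, which the sketch recovers for small μ.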

In general, the whole numerical procedure is shown in the following steps.

Step 1. Choose L and K to generate temporal and spatial discretizations such that

\begin{aligned}& x_{j}=j\Delta x,\quad \Delta x=\frac{\pi}{K}, j= \overline{0,K}, \end{aligned}
(5.5)
\begin{aligned}& t_{i}=i\Delta t,\quad \Delta t=\frac{1}{L}, i= \overline{0,L}. \end{aligned}
(5.6)

Obviously, higher values of L and K provide a more stable and accurate numerical calculation; in the following examples $$L=K=100$$ is sufficient.

Step 2. Setting $$f_{\mu}^{\epsilon} (x_{j} )=f_{\mu,j}^{\epsilon}$$ and $$f (x_{j} )=f_{j}$$, we construct two vectors containing all discrete values of $$f_{\mu}^{\epsilon}$$ and f denoted by $$\Lambda _{\mu}^{\epsilon}$$ and Ψ, respectively, as shown below:

\begin{aligned}& \Lambda_{\mu}^{\epsilon}= \bigl[ \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} f_{\mu,0}^{\epsilon} & f_{\mu,1}^{\epsilon} & \cdots & f_{\mu,K-1}^{\epsilon} & f_{\mu,K}^{\epsilon} \end{array}\bigr] \in \mathbb{R}^{K+1}, \end{aligned}
(5.7)
\begin{aligned}& \Psi= [ \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} f_{0} & f_{1} & \cdots & f_{K-1} & f_{K} \end{array}] \in\mathbb{R}^{K+1}. \end{aligned}
(5.8)

Step 3. Estimate the errors between the exact and regularized solutions.

Absolute error estimation:

$$E_{1} = \sqrt{\frac{1}{K+1}\sum _{j=0}^{K}\bigl\vert f_{\mu }^{\epsilon} (x_{j} )-f (x_{j} )\bigr\vert ^{2}}.$$
(5.9)

Relative error estimation:

$$E_{2} = \frac{\sqrt{\sum_{j=0}^{K}\vert f_{\mu}^{\epsilon } (x_{j} )-f (x_{j} )\vert ^{2}}}{\sqrt{\sum_{j=0}^{K}\vert f (x_{j} )\vert ^{2}}}.$$
(5.10)
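The two error measures (5.9)-(5.10) translate directly into code; this helper (its name is ours) is a minimal sketch:

```python
import numpy as np

def error_estimates(f_reg, f_exact):
    """Absolute (5.9) and relative (5.10) errors over the K+1 grid points."""
    diff = np.asarray(f_reg) - np.asarray(f_exact)
    e1 = np.sqrt(np.mean(diff**2))                        # E1: root-mean-square error
    e2 = np.linalg.norm(diff) / np.linalg.norm(f_exact)   # E2: relative l2 error
    return e1, e2
```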

### 5.1 ExampleÂ 1

As mentioned above, in this example we consider the coefficient a as a constant, and f is an exact data function. In particular, we consider problem (1.2)-(1.4) in the following form:

$$\begin{cases} u_{t}-u_{xx}=2^{-1} (e^{t}-1 )\sin2x; & (x,t )\in (0,\pi )\times (0,1 ), \\ u (x,0 )=0, \qquad u (x,1 )=10^{-1} (e-1 )\sin2x; & x\in [0,\pi ], \\ u (0,t )=u (\pi,t )=0; & t\in [0,1 ]. \end{cases}$$
(5.11)

This implies that $$a (t )=1$$, $$\varphi (t )=e^{t}-1$$, $$g (x )=10^{-1} (e-1 )\sin2x$$ and $$f (x )=2^{-1}\sin2x$$.

It is easy to see that $$u (x,t )=10^{-1} (e^{t}-1 )\sin2x$$ is the unique solution of the problem. Next, we establish the regularized solution according to the composite Simpson's rule:

\begin{aligned}& f_{\mu}^{\epsilon} (x )=\frac{e-1}{10} \biggl(1+ \frac {\epsilon\cdot\operatorname{rand} (\cdot )}{\sqrt{\pi} \Vert g\Vert } \biggr)\frac{\Phi (2,\varphi_{\epsilon} )}{\mu^{2}+ (\Phi (2,\varphi_{\epsilon} ) )^{2}}\sin2x, \end{aligned}
(5.12)
\begin{aligned}& \Phi (2,\varphi_{\epsilon} )=\frac{1}{3L} \Biggl[h (t_{0} )+2\sum_{i=1}^{\frac{L}{2}-1}h (t_{2i} )+4 \sum_{i=1}^{\frac{L}{2}}h (t_{2i-1} )+h (t_{L} ) \Biggr], \end{aligned}
(5.13)
\begin{aligned}& h (t_{i} )=e^{4 (t_{i}-1 )} \bigl(\varphi (t_{i} )+ \epsilon\cdot\bigl\vert \operatorname{rand} (t_{i} )\bigr\vert \bigr). \end{aligned}
(5.14)

In practice, it is very difficult to obtain the value of M without having an exact solution. We thus try $$M=1\text{,}000$$, leading to $$\mu _{1}=\frac{\epsilon^{\frac{1}{3}}}{10}$$ for the a priori parameter choice rule, and $$\mu_{2}=\frac {\epsilon^{\frac{9}{20}}}{40}$$ for the a posteriori parameter choice rule based on (4.9) with $$\tau=1.5$$.
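The composite Simpson evaluation (5.13)-(5.14) for Example 1 (where $$a(t)=1$$, so the exponent is $$4(t-1)$$ for $$n=2$$) might look as follows; the sketch is shown for a noise-free φ for clarity, and the function name is ours:

```python
import numpy as np

def simpson_phi2(phi, L=100):
    """Composite Simpson's rule for Phi(2, phi) of Example 1, with
    h(t) = exp(4 (t - 1)) * phi(t) on [0, 1]; L must be even.
    phi : callable giving the (possibly noisy) data phi(t)."""
    t = np.linspace(0.0, 1.0, L + 1)          # t_i = i / L
    h = np.exp(4.0 * (t - 1.0)) * phi(t)
    # weights: 1 at the endpoints, 4 at odd nodes, 2 at interior even nodes
    return (h[0] + 4.0 * np.sum(h[1:-1:2]) + 2.0 * np.sum(h[2:-1:2]) + h[-1]) / (3.0 * L)
```

For the exact data $$\varphi (t )=e^{t}-1$$, the integral being approximated is $$\int_{0}^{1}e^{4 (t-1 )} (e^{t}-1 )\,dt=e^{-4} [\frac{e^{5}-1}{5}-\frac{e^{4}-1}{4} ]$$, which the rule reproduces to high accuracy at $$L=100$$.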

### 5.2 ExampleÂ 2

Similar to the first example, except that here the coefficient a is not a constant but time-dependent, $$a (t )=2t+1$$, so that $$A (t )-A (1 )=t^{2}+t-2$$; we choose

$$g (x )=e^{3}\sum_{m=1}^{3}\sin (mx ),\qquad \varphi (t )=1.$$
(5.15)

Thus, the exact solution is obtained by

\begin{aligned}& f (x )=\frac{2e^{3}}{\pi}\sum_{n=1}^{1\text{,}000} \frac{1}{\Phi (n,1 )}\sum_{m=1}^{3} \bigl\langle \sin (mx ),\sin (nx ) \bigr\rangle \sin (nx ), \end{aligned}
(5.16)
\begin{aligned}& \Phi (n,1 )=\frac{1}{n^{2}} \bigl(1-e^{-2n^{2}} \bigr). \end{aligned}
(5.17)

Unlike in the first example, from the analytical solution we can obtain the estimate $$\Vert f\Vert _{H^{2} (0,\pi )}<5\text{,}500$$, which implies $$\mu_{1}= (\frac{\epsilon}{5\text{,}500} )^{\frac{1}{3}}$$ for the a priori parameter choice rule. Afterwards, based on (4.9) with $$\tau=1.1$$, we can compute the regularization parameter for the a posteriori parameter choice rule, $$\mu_{2}=\frac {\epsilon^{\frac{1}{2}}}{1\text{,}100}$$. Therefore, the regularized solution can be computed by

$$f_{\mu}^{\epsilon} (x )=e^{3} \biggl(1+ \frac{\epsilon \cdot\operatorname{rand} (\cdot )}{\sqrt{\pi} \Vert g \Vert } \biggr) \sum_{n=1}^{3} \frac{n^{2} (1-e^{-2n^{2}} ) (1+\epsilon\cdot\operatorname{rand} (\cdot ) )}{n^{4}\mu ^{2}+ ( (1+\epsilon\cdot\operatorname{rand} (\cdot ) ) (1-e^{-2n^{2}} ) )^{2}}\sin (nx ).$$
(5.18)

Tables 1 and 2 show the absolute and relative error estimates between the exact solution and its regularized solution for both the a priori and the a posteriori parameter choice rules in the numerical examples. In the first example, as shown in Table 1, when the coefficient a is constant and f is an exact data function, the convergence speed of the two parameter choice rule methods is quite similar and slow as ϵ tends to 0. In contrast, in the second example, shown in Table 2, when the coefficient a is time-dependent and f is obtained from the measured data, the convergence speed of the a posteriori parameter choice rule is better than that of the a priori parameter choice rule (by second order) as ϵ tends to 0.

In addition, Figures 1 and 2 show a comparison between the exact solution and its regularized solution for the a priori and the a posteriori parameter choice rules in the first example, respectively. The regularized solution oscillates strongly around the exact solution when ϵ is around 0.1 for both parameter choice rule methods; nevertheless, it converges to the exact solution as ϵ tends to 0. In the second example, Figures 3 and 4 show the same tendency as in the first example for both methods.

## 6 Conclusion

In this study, we solved problem (1.2)-(1.4) to recover the unknown source term in a parabolic equation with a time-dependent coefficient (i.e., an inhomogeneous source) by two methods, the a priori and the a posteriori parameter choice rules.

In the theoretical results, we obtained error estimates for both methods based on an a priori condition. The numerical results show that the regularized solutions converge to the exact solutions, and that the a posteriori parameter choice rule method outperforms the a priori parameter choice rule method in terms of convergence speed.

## References

1. McLaughlin, D: Investigation of alternative procedures for estimating groundwater basin parameters. Water Res. Eng., Walnut Creek, Calif. (1975)

2. Yeh, WW-G: Review of parameter identification procedures in groundwater hydrology: the inverse problem. Water Resour. Res. 22(2), 95-108 (1986)

3. Carrera, J: State of the art of the inverse problem applied to the flow and solute transport problem. In: Groundwater Flow and Quality Modeling. NATO ASI Ser. (1987)

4. Ginn, TR, Cushman, JH: Inverse methods for subsurface flow: a critical review of stochastic techniques. Stoch. Hydrol. Hydraul. 4, 1-26 (1990)

5. Kuiper, L: A comparison of several methods for solution of the inverse problem in two-dimensional steady state groundwater flow modeling. Water Resour. Res. 22(5), 705-714 (1986)

6. Sun, NZ: Inverse Problems in Groundwater Modeling. Kluwer Academic, Norwell (1994)

7. McLaughlin, D, Townley, LR: A reassessment of the groundwater inverse problem. Water Resour. Res. 32(5), 1131-1161 (1996)

8. Poeter, EP, Hill, MC: Inverse models: a necessary next step in groundwater modeling. Groundwater 35(2), 250-260 (1997)

9. Farcas, A, Lesnic, D: The boundary-element method for the determination of a heat source dependent on one variable. J. Eng. Math. 54, 375-388 (2006)

10. Johansson, T, Lesnic, D: Determination of a spacewise dependent heat source. J. Comput. Appl. Math. 209, 66-80 (2007)

11. Yang, F, Fu, CL: Two regularization methods for identification of the heat source depending only on spatial variable for the heat equation. J. Inverse Ill-Posed Probl. 17(8), 815-830 (2009)

12. Atmadja, J, Bagtzoglou, AC: Marching-jury backward beam equation and quasi-reversibility methods for hydrologic inversion: application to contaminant plume spatial distribution recovery. Water Resour. Res. 39, 1038-1047 (2003)

13. Cheng, W, Fu, CL: Identifying an unknown source term in a spherically symmetric parabolic equation. Appl. Math. Lett. 26, 387-391 (2013)

14. Savateev, EG: On problems of determining the source function in a parabolic equation. J. Inverse Ill-Posed Probl. 3, 83-102 (1995)

15. Yang, F, Fu, CL: A simplified Tikhonov regularization method for determining the heat source. Appl. Math. Model. 34, 3286-3299 (2010)

16. Yang, F, Fu, CL: A mollification regularization method for the inverse spatial-dependent heat source problem. J. Comput. Appl. Math. 255, 555-567 (2014)

17. Hasanov, A: Identification of spacewise and time dependent source terms in 1D heat conduction equation from temperature measurement at a final time. Int. J. Heat Mass Transf. 55, 2069-2080 (2012)

18. Kirsch, A: An Introduction to the Mathematical Theory of Inverse Problems, 2nd edn. Applied Mathematical Sciences, vol. 120. Springer, Berlin (2011)

19. Scherzer, O: The use of Morozov's discrepancy principle for Tikhonov regularization for solving nonlinear ill-posed problems. Computing 51, 45-60 (1993)

20. Colton, D, Piana, M, Potthast, R: A simple method using Morozov's discrepancy principle for solving inverse scattering problems. Inverse Probl. 13, 1477-1493 (1997)

## Acknowledgements

This research is supported by the Seoul National University Research Grant. The authors would also like to thank the editors and anonymous reviewers for their very valuable constructive comments to improve this manuscript.

## Author information

Authors

### Corresponding author

Correspondence to Huy Tuan Nguyen.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

## Rights and permissions

Reprints and permissions

Nguyen, V.T., Nguyen, H.T., Tran, T.B. et al. On an inverse problem in the parabolic equation arising from groundwater pollution problem. Bound Value Probl 2015, 67 (2015). https://doi.org/10.1186/s13661-015-0319-3