# Generalized Tikhonov regularization method for an inverse boundary value problem of the fractional elliptic equation

## Abstract

This paper studies an inverse boundary value problem for a fractional elliptic equation of Tricomi–Gellerstedt–Keldysh type and obtains a conditional stability result. To restore the continuous dependence of the solution on the measurement data, a generalized Tikhonov regularization method is constructed based on an analysis of the ill-posedness. Under both the a priori and the a posteriori selection rules for the regularization parameter, corresponding Hölder type convergence results are obtained. On this basis, the performance of the generalized Tikhonov method is verified through numerical examples, which show that the method performs well on the problem under consideration.

## 1 Introduction

In 1993, Podlubny proposed the concept of fractional calculus, which provided a theoretical foundation for the study of fractional differential equations [1]. In 2000, Kilbas et al. introduced the concept of fractional elliptic equations in their study of fractional calculus [2]. In recent years, fractional elliptic equations have been widely applied in various scientific fields, for example, subsonic and transonic aerodynamics [3], blow-up dynamics [4], and microscopic Fermi liquids [5]. Driven by these applications, the study of particular classes of fractional elliptic equations has gradually deepened, such as the Tricomi equation [6], the Gellerstedt equation [7], and the Keldysh equation [8]. At the same time, scholars have carried out systematic theoretical work, as seen in references [9–15], which provides a foundation for understanding and applying these equations. Based on this research, this paper studies fractional elliptic equations of Tricomi–Gellerstedt–Keldysh type:

$$\textstyle\begin{cases} D_{x}^{2 \alpha}u(x, y)-x^{2\beta}Lu(x, y)=0, &(x, y) \in [0, \infty ) \times \Omega , \\ u(x, y)=0,&(x, y) \in [0,\infty )\times \partial \Omega , \\ u(0, y)=f(y),&y \in \Omega , \\ \lim_{x \rightarrow \infty} u(x, y)=0,&y \in \Omega , \end{cases}$$
(1.1)

where $$\alpha \in (\frac{1}{2}, 1 ]$$, $$\beta >-\alpha$$, $$D_{x}^{2 \alpha}=\partial _{0+, x}^{\alpha} \partial _{0+, x}^{ \alpha}$$, $$\Omega \subset {\mathbb{R}}^{N}(N \geq 1)$$ is a bounded connected domain with smooth boundary ∂Ω, and $$\partial _{0+, x}^{\alpha}$$ represents the α-order ($$0<\alpha \leq 1$$) Caputo fractional derivative with respect to the variable x [1]:

$$\partial _{0+, x}^{\alpha} u(x, y)=\frac{1}{\Gamma (1-\alpha )} \int _{0}^{x}(x-s)^{- \alpha} \partial _{s} u(s, y)\,ds,$$
(1.2)

and $$L:H^{2}(\Omega ) \cap H_{0}^{1}(\Omega ) \rightarrow L^{2}(\Omega )$$ is a symmetric uniformly elliptic operator. In addition, let $$\varphi _{n}$$ and $$\lambda _{n}$$ respectively represent the orthonormal eigenfunctions and eigenvalues of the operator L, such that $$L \varphi _{n}=\lambda _{n} \varphi _{n}$$, $$n=1,2, \ldots$$ , and $$\lambda _{n}$$ satisfy $$0<\lambda _{1} \leq \lambda _{2} \leq \lambda _{3} \leq \cdots$$ , $$\lim_{n \rightarrow \infty}\lambda _{n}=+\infty$$.

In reference [16], the authors studied the well-posedness of problem (1.1), including the existence, uniqueness, and stability of generalized solutions. This paper considers the inverse boundary value problem of problem (1.1)

$$\textstyle\begin{cases} D_{x}^{2 \alpha} u(x, y)-x^{2 \beta} L u(x, y)=0,&(x, y) \in [0, \infty ) \times \Omega , \\ u(x, y)=0, &(x, y) \in [0, \infty ) \times \partial \Omega , \\ u(T, y)=g(y),& y \in \Omega , \\ \lim_{x \rightarrow \infty} u(x, y)=0,& y \in \Omega . \end{cases}$$
(1.3)

The inverse problem is the ill-posed problem of determining unknown data from measurements, i.e., recovering the boundary data $$f(y)=u(0, y)$$ from the data $$g(y)=u(T, y)$$ ($$0< T<+\infty$$), which contains measurement error and satisfies

$$\bigl\Vert g^{\delta}-g \bigr\Vert _{L^{2}(\Omega )} \leq \delta ,$$
(1.4)

where $$g^{\delta}$$ denotes the measurement data and $$\delta >0$$ denotes the measurement error bound.

This inverse problem is ill-posed: small perturbations of the data may cause arbitrarily large deviations in the solution. In 2021, the special case $$\beta =0$$ of the inverse problem (1.3) was studied in [17] by an iterative approach. In 2024, [18] used a similar regularization method to study the inverse boundary value problem (1.3). Building on this state of the art, this paper investigates the conditional stability, regularization theory, and numerical algorithms for the inverse problem (1.3). The ill-posedness is overcome by a regularization method, i.e., by adding a regularization term. The generalized Tikhonov regularization method is a commonly used approach for dealing with ill-posed inverse problems and can be applied to problems of different types, see [19–23].

This paper is organized as follows. In Sect. 2, some necessary preliminary results are given. Section 3 establishes the ill-posedness and conditional stability of the inverse problem. In Sect. 4, a generalized Tikhonov regularization method to overcome the ill-posedness is constructed. In Sect. 5, the corresponding Hölder type convergence results are proved under the a priori and the a posteriori selection rules for the regularization parameter. In Sect. 6, the simulation performance of the generalized Tikhonov regularization is verified through numerical examples, which show that the method is feasible for solving the inverse problem considered here.

## 2 Preliminaries

### Definition 2.1

[24] The definition of the three-parameter Mittag-Leffler function is as follows:

$$E_{\alpha , m, l}(z)=\sum_{n=0}^{\infty} \Biggl( \prod_{k=1}^{n} \frac{\Gamma (1+\alpha ((k-1) m+l))}{\Gamma (1+\alpha ((k-1) m+l+1))} \Biggr) z^{n},\quad z \in \mathbb{C},$$
(2.1)

where $$\alpha , m>0$$ and $$l>-\frac{1}{\alpha}$$.
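As a quick numerical illustration, the series (2.1) can be evaluated directly by accumulating the gamma-ratio products. This truncated-series sketch is reliable only for small to moderate |z|; a convenient sanity check is that for $$\alpha =1$$, $$m=1$$, $$l=0$$ the product reduces to $$1/n!$$, so the definition recovers the exponential function:

```python
from math import gamma

def ml3(alpha, m, l, z, n_terms=80):
    """Truncated series (2.1) for the three-parameter Mittag-Leffler
    function E_{alpha,m,l}(z); the n = 0 term is the empty product, 1."""
    total, prod = 1.0, 1.0
    for n in range(1, n_terms):
        j = n - 1  # the factor appended at term n uses k = n, i.e. (k-1) = n - 1
        prod *= gamma(1 + alpha * (j * m + l)) / gamma(1 + alpha * (j * m + l + 1))
        total += prod * z**n
    return total
```

For large negative arguments the truncated series suffers from cancellation, so an asymptotic or integral representation would be preferable in practice.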

### Lemma 2.2

[25] For $$\alpha \in (0,1)$$ and $$z \geq 0$$, the following inequality holds:

$$\frac{1}{1+\Gamma (1-\alpha ) z} \leq E_{\alpha , m, m-1}(-z) \leq \frac{1}{1+\frac{\Gamma (1+(m-1) \alpha )}{\Gamma (1+m \alpha )} z} .$$
(2.2)

### Lemma 2.3

Based on Lemma 2.2, by setting $$z=\sqrt{\lambda _{n}} T^{\alpha +\beta}$$ and $$m=1+\frac{\beta}{\alpha}$$, it can be established that

$$\frac{\eta _{1}}{\sqrt{\lambda _{n}}} \leq E_{\alpha , 1+ \frac{\beta}{\alpha},\frac{\beta}{\alpha}} \bigl(-\sqrt{\lambda _{n}} T^{ \alpha +\beta} \bigr) \leq \frac{\eta _{2}}{\sqrt{\lambda _{n}}},\quad \lambda _{n} \geq \lambda _{1}>0,$$
(2.3)

where $$\eta _{1}= \frac{1}{\lambda _{1}^{-\frac{1}{2}}+\Gamma (1-\alpha ) T^{\alpha +\beta}}$$, $$\eta _{2}= \frac{1}{C_{\alpha , 1+\frac{\beta}{\alpha}}T^{\alpha +\beta}}$$, and $$C_{\alpha , 1+\frac{\beta}{\alpha}}= \frac{\Gamma (1+\beta )}{\Gamma (1+\alpha +\beta )}$$.

### Lemma 2.4

Let $$s \geq \lambda _{1}>0$$ and $$p>0$$. Then the following inequality is satisfied:

$$A(s)=\frac{\eta _{2} \sqrt{s}}{\eta _{1}^{2}+\mu s^{p+1}} \leq c_{1} \mu ^{-\frac{1}{2 p+2}},$$
(2.4)

where $$c_{1}=c_{1} (p, \eta _{1}, \eta _{2} )>0$$ is independent of μ and s.

### Proof

From $$\lim_{s \rightarrow 0} A(s)=\lim_{s \rightarrow \infty} A(s)=0$$, it can be inferred that $$A(s)$$ attains its maximum at an interior point. Setting $$A^{\prime} (s_{0} )=0$$ gives $$s_{0}= (\frac{\eta _{1}^{2}}{(2p+1)\mu} )^{\frac{1}{p+1}}>0$$, and it can be shown that

$$A(s) \leq A (s_{0} ) = \frac{\eta _{2}(2 p+1)^{\frac{2 p+1}{2 p+2}}}{(2 p+2) \eta _{1}^{\frac{2 p+1}{p+1}}} \mu ^{-\frac{1}{2 p+2}}=c_{1} (p, \eta _{1}, \eta _{2} ) \mu ^{-\frac{1}{2 p+2}}.$$
(2.5)

□

### Lemma 2.5

Let $$s \geq \lambda _{1}>0$$ and $$p>0$$. Then the following inequality holds:

$$B(s)=\frac{\mu s^{\frac{p}{2}+1}}{\eta _{1}^{2}+\mu s^{p+1}} \leq c_{2} \mu ^{\frac{p}{2 p+2}},$$
(2.6)

where $$c_{2}=c_{2} (p, \eta _{1} )>0$$ is independent of μ, s, and $$\eta _{2}$$.

### Proof

As $$\lim_{s \rightarrow 0} B(s)=\lim_{s \rightarrow \infty} B(s)=0$$, the function $$B(s)$$ also attains a maximum. Setting $$B^{\prime} (s_{0} )=0$$ yields $$s_{0}= [\frac{(p+2) \eta _{1}^{2}}{p \mu} ]^{ \frac{1}{p+1}}>0$$, then

$$B(s) \leq B (s_{0} )= \frac{p^{\frac{p}{2 p+2}}(p+2)^{\frac{p+2}{2 p+2}}}{(2 p+2) \eta _{1}^{\frac{ p}{p+1}}} \mu ^{\frac{p}{2 p+2}}=c_{2} (p, \eta _{1} ) \mu ^{ \frac{p}{2 p+2}}.$$
(2.7)

□

### Lemma 2.6

Let $$s \geq \lambda _{1}>0$$ and $$p>0$$. Then it can be derived that

$$C(s)= \frac{\mu \eta _{2} s^{\frac{p+1}{2}}}{\eta _{1}^{2}+\mu s^{p+1}} \leq c_{3} \mu ^{\frac{1}{2}},$$
(2.8)

where $$c_{3}=c_{3} (\eta _{1}, \eta _{2} )>0$$ is independent of changes in μ, s, and p.

### Proof

As $$\lim_{s \rightarrow 0} C(s)=\lim_{s \rightarrow \infty} C(s)=0$$, it is evident that $$C(s)$$ attains a maximum. Setting $$C^{\prime} (s_{0} )=0$$, we obtain $$s_{0}= [\frac{\eta _{1}^{2}}{\mu} ]^{\frac{1}{p+1}}>0$$, and it follows that

$$C(s) \leq C (s_{0} )=\frac{\eta _{2}}{2 \eta _{1}} \mu ^{ \frac{1}{2}}=c_{3} (\eta _{1}, \eta _{2} ) \mu ^{ \frac{1}{2}}.$$
(2.9)

□
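The three bounds of Lemmas 2.4–2.6 can be checked numerically by brute force: with the constants $$c_{1}$$, $$c_{2}$$, $$c_{3}$$ written out explicitly from the proofs, the suprema of A, B, C over a fine grid of s should never exceed them. The parameter values in the test below (η₁, η₂, p, μ) are arbitrary illustrative choices:

```python
import numpy as np

def check_lemma_bounds(eta1, eta2, p, mu, lam1=1.0):
    """Verify A(s) <= c1*mu^{-1/(2p+2)}, B(s) <= c2*mu^{p/(2p+2)},
    C(s) <= c3*mu^{1/2} on a grid s >= lam1 (Lemmas 2.4-2.6)."""
    # Explicit constants from (2.5), (2.7), (2.9)
    c1 = eta2 * (2*p + 1)**((2*p + 1)/(2*p + 2)) / ((2*p + 2) * eta1**((2*p + 1)/(p + 1)))
    c2 = p**(p/(2*p + 2)) * (p + 2)**((p + 2)/(2*p + 2)) / ((2*p + 2) * eta1**(p/(p + 1)))
    c3 = eta2 / (2 * eta1)
    s = np.geomspace(lam1, 1e8, 400_000)
    den = eta1**2 + mu * s**(p + 1)
    A = eta2 * np.sqrt(s) / den
    B = mu * s**(p/2 + 1) / den
    C = mu * eta2 * s**((p + 1)/2) / den
    tol = 1e-12
    return (A.max() <= c1 * mu**(-1/(2*p + 2)) + tol
            and B.max() <= c2 * mu**(p/(2*p + 2)) + tol
            and C.max() <= c3 * np.sqrt(mu) + tol)
```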

## 3 The ill-posedness and conditional stability of inverse problem

Define the space

$$D \bigl(L^{q} \bigr)= \Biggl\{ \psi \in L^{2}(\Omega ); \sum_{n=1}^{ \infty} \lambda _{n}^{2q} \bigl\lvert (\psi , \varphi _{n} ) \bigr\rvert ^{2}< \infty \Biggr\} ,$$
(3.1)

where $$(\cdot , \cdot )$$ denotes the inner product in $$L^{2}(\Omega )$$, then $$D (L^{q} ) \subset L^{2}(\Omega )$$ is a Hilbert space with the norm given by [26]

$$\Vert \psi \Vert _{D (L^{q} )}= \Biggl(\sum_{n=1}^{\infty} \lambda _{n}^{2 q}\bigl\lvert (\psi , \varphi _{n} )\bigr\rvert ^{2} \Biggr)^{\frac{1}{2}}.$$
(3.2)

According to reference [16], for $$f\in D (L^{1} )$$, the generalized solution of the forward problem (1.1) can be expressed as

$$u(x, y)=\sum_{n=1}^{\infty} f_{n} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \bigl(-\sqrt{\lambda _{n}} x^{\alpha +\beta} \bigr) \varphi _{n}(y).$$
(3.3)

Let $$x=T$$, then

$$g(y)=u(T, y)=\sum_{n=1}^{\infty} f_{n} E_{\alpha , 1+ \frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \bigl(-\sqrt{\lambda _{n}} T^{\alpha +\beta} \bigr) \varphi _{n}(y).$$
(3.4)

Denoting $$g_{n}= (g(y), \varphi _{n} )$$, the following result can be obtained:

$$g_{n}=f_{n} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \bigl(-\sqrt{\lambda _{n}} T^{\alpha +\beta} \bigr),$$
(3.5)

therefore

$$f(y)=\sum_{n=1}^{\infty} \frac{1}{E_{\alpha , 1+\frac{\beta}{\alpha},\frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )} g_{n} \varphi _{n}(y).$$
(3.6)

Subsequently, the operator equation can be derived by using equation (3.4)

$$(K f) (y)= \int k(\xi , y) f(\xi )\,d\xi =g(y),$$
(3.7)

where $$K:L^{2}(\Omega ) \rightarrow L^{2}(\Omega )$$ is a Fredholm integral operator of the first kind, and k represents the kernel function defined by

$$k(\xi , y)=\sum_{n=1}^{\infty} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \bigl(-\sqrt{\lambda _{n}} T^{\alpha +\beta} \bigr) \varphi _{n}(y) \varphi _{n}(\xi ).$$
(3.8)

It is easy to see that $$k(\xi , y)=k(y, \xi )$$, so the operator K is self-adjoint. Moreover, the singular values of K can be determined from equation (3.4) as $$\sigma _{n}=E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )$$, $$n=1,2, \ldots$$ . In this paper, the notation $$\|\cdot \|$$ is employed to represent the $$L^{2}$$-norm.

### Theorem 3.1

[27] The operator K defined in (3.7) is a compact operator.

As stated in Theorem 3.1, K is a compact operator. Therefore, the inverse problem (1.3) is ill-posed, meaning that the solution does not depend continuously on the measurement data. To establish the conditional stability of the inverse problem, an a priori condition that constrains the range of the exact solution is necessary. Under this a priori assumption on the exact solution, the conditional stability result for the inverse problem (1.3) is presented next.

### Theorem 3.2

[27] Let f satisfy the a priori bound condition

$$\Vert f \Vert _{D (L^{\frac{p}{2}} )}= \Biggl(\sum_{n=1}^{\infty} \lambda _{n}^{p}\bigl\lvert (f, \varphi _{n} ) \bigr\rvert ^{2} \Biggr)^{\frac{1}{2}} \leq E, \quad p>0, E>0,$$
(3.9)

then the following stability estimate holds:

$$\Vert f \Vert \leq C E^{\frac{1}{p+1}} \Vert g \Vert ^{\frac{p}{p+1}},\quad p>0,$$
(3.10)

where $$C=\eta _{1}^{-\frac{p}{p+1}}$$ is a constant depending on p and $$\eta _{1}$$.

## 4 The generalized Tikhonov regularization method

In this section, based on the idea of [19], a generalized Tikhonov regularization solution for the inverse problem (1.3) is constructed. The regularization solution is defined as the unique minimizer of the following functional:

$$J_{\mu}(f)=\min_{f \in L^{2}(\Omega )} \bigl\{ \bigl\Vert K f-g^{\delta} \bigr\Vert _{L^{2}(\Omega )}^{2}+\mu \Vert f \Vert _{D (L^{\frac{p}{2}} )}^{2} \bigr\} ,$$
(4.1)

where $$p>0$$ is a constant, μ denotes the regularization parameter, $$g^{\delta}$$ denotes the measurement data, and $$\delta >0$$ denotes the measurement error bound. Let $$f_{\mu}^{\delta}$$ be the regularization solution of the inverse problem. Through the basic calculation and combining the first-order necessary condition, it follows that $$f_{\mu}^{\delta}$$ satisfies the Euler equation

$$\bigl(K^{*} K+\mu L^{p} \bigr) f_{\mu}^{\delta}=K^{*} g^{\delta},$$
(4.2)

then by utilizing the singular decomposition of the self-adjoint compact operator K, the generalized Tikhonov regularization solution for the inverse problem can be derived, which is expressed as follows:

$$f_{\mu}^{\delta}(y)=\sum_{n=1}^{\infty} \frac{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n}^{\delta} \varphi _{n}(y),\quad p>0,$$
(4.3)

here, $$g_{n}^{\delta}= (g^{\delta}(y), \varphi _{n}(y) )$$. Meanwhile, the regularization solution with exact data $$g(y)$$ can be represented as

$$f_{\mu}(y)=\sum_{n=1}^{\infty} \frac{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n} \varphi _{n}(y),\quad p>0,$$
(4.4)

and $$g_{n}= (g(y), \varphi _{n}(y) )$$.
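In coefficient space, (4.3) is a filtered version of the exact reconstruction (3.6): each coefficient $$g_{n}^{\delta}$$ is multiplied by $$\sigma _{n}/(\sigma _{n}^{2}+\mu \lambda _{n}^{p})$$ instead of $$1/\sigma _{n}$$. The sketch below uses the model $$\sigma _{n}=1/\sqrt{\lambda _{n}}$$ with $$\lambda _{n}=n^{2}$$, an assumption consistent with the decay rate guaranteed by Lemma 2.3 (the true $$\sigma _{n}$$ are Mittag-Leffler values); it shows that the regularized factor stays bounded while naive inversion amplifies the n-th noise mode by a factor of order n:

```python
import numpy as np

# Model setup: lam_n = n**2 (as in Sect. 6) and singular values
# sigma_n = 1/sqrt(lam_n), an assumption consistent with Lemma 2.3.
n = np.arange(1, 101, dtype=float)
lam = n**2
sigma = 1.0 / np.sqrt(lam)
mu, p = 1e-4, 1.0

naive_factor = 1.0 / sigma                          # exact inversion (3.6): grows like n
tikhonov_factor = sigma / (sigma**2 + mu * lam**p)  # regularized filter (4.3)
```

For every mode the regularized factor is at most the naive one, and its maximum is controlled by $$c_{1}\mu ^{-\frac{1}{2p+2}}$$ (Lemma 2.4) rather than growing without bound.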

## 5 Convergence estimates for the a priori and a posteriori rules

### 5.1 An a priori selection rule

In this section, the regularization parameter is selected by an a priori rule and a convergence analysis for the generalized Tikhonov regularization method is provided. The exact solution and the generalized Tikhonov regularization solution are given in equations (3.6) and (4.3), respectively.

### Theorem 5.1

Assume that the a priori condition (3.9) is satisfied and that the exact data g and the measurement data $$g^{\delta}$$ satisfy (1.4).

If $$p>0$$ and the regularization parameter is chosen as $$\mu = (\frac{\delta}{E} )^{2}$$, then the following convergence estimate holds:

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq (c_{1}+c_{2} ) \delta ^{\frac{p}{p+1}} E^{\frac{1}{p+1}},$$
(5.1)

where $$c_{1}$$, $$c_{2}$$ are defined in Lemmas 2.4 and 2.5.

### Proof

According to the triangle inequality, it is known that

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq \bigl\Vert f_{\mu}^{ \delta}(y)-f_{\mu}(y) \bigr\Vert + \bigl\Vert f_{\mu}(y)-f(y) \bigr\Vert =I_{1}+I_{2}.$$
(5.2)

Firstly, $$I_{1}$$ is estimated. Using Lemma 2.4 and (1.4), it can be expressed as

\begin{aligned} I_{1}&= \bigl\Vert f_{\mu}^{\delta}(y)-f_{\mu}(y) \bigr\Vert \\ & = \Biggl\Vert \sum_{n=1}^{\infty} \frac{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \bigl(g_{n}^{\delta}-g_{n} \bigr) \varphi _{n}(y) \Biggr\Vert \\ & \leq \delta \sup_{n} \frac{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \\ & \leq \delta \sup_{n} \frac{\eta _{2} \sqrt{\lambda _{n}}}{\eta _{1}^{2}+\mu \lambda _{n}^{p+1}} \\ & \leq \delta c_{1} (p, \eta _{1}, \eta _{2} ) \mu ^{- \frac{1}{2 p+2}}. \end{aligned}
(5.3)

Next, an estimate is made for $$I_{2}$$. Utilizing Lemma 2.5 and (3.9), it can be derived that

\begin{aligned} I_{2} & = \bigl\Vert f_{\mu}(y)-f(y) \bigr\Vert \\ & = \Biggl\Vert \sum_{n=1}^{\infty} \biggl( \frac{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n}- \frac{1}{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )} g_{n} \biggr) \varphi _{n}(y) \Biggr\Vert \\ & = \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} f_{n} \varphi _{n}(y) \Biggr\Vert \\ &= \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{\frac{p}{2}}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \lambda _{n}^{\frac{p}{2}} f_{n} \varphi _{n}(y) \Biggr\Vert \\ &\leq E \sup_{n} \frac{\mu \lambda _{n}^{\frac{p}{2}}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \\ &\leq E \sup_{n} \frac{\mu \lambda _{n}^{\frac{p}{2}+1}}{\eta _{1}^{2}+\mu \lambda _{n}^{p+1}} \\ &\leq c_{2} E \mu ^{\frac{p}{2 p+2}}. \end{aligned}
(5.4)

By combining inequalities (5.3) and (5.4), we obtain

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq c_{1} \delta \mu ^{- \frac{1}{2 p+2}}+c_{2} E \mu ^{\frac{p}{2 p+2}}.$$
(5.5)

Next, the regularized parameter μ is chosen as

$$\mu = \biggl(\frac{\delta}{E} \biggr)^{2},$$
(5.6)

then, from (5.5) and (5.6), the convergence result can be derived

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq (c_{1}+c_{2} ) \delta ^{\frac{p}{p+1}} E^{\frac{1}{p+1}}.$$
(5.7)

This completes the proof of this theorem. □
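The estimate (5.1) can be checked in a small synthetic experiment in which the model singular values $$\sigma _{n}=1/\sqrt{\lambda _{n}}$$ make $$\eta _{1}=\eta _{2}=1$$ in Lemma 2.3 exactly, and the constants $$c_{1}$$, $$c_{2}$$ are written out from Lemmas 2.4 and 2.5. All concrete values below (N, p, δ, the coefficients $$f_{n}$$) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
n = np.arange(1, N + 1, dtype=float)
lam = n**2
sigma = 1.0 / np.sqrt(lam)   # model singular values: eta1 = eta2 = 1 (assumption)
p = 1.0

f = n**-2.0                                   # hypothetical exact coefficients
E = np.sqrt(np.sum(lam**p * f**2))            # a priori bound (3.9), computed exactly
g = sigma * f                                 # exact data coefficients, (3.5)

delta = 1e-3
e = rng.standard_normal(N)
g_delta = g + delta * e / np.linalg.norm(e)   # noise scaled so ||g_delta - g|| = delta

mu = (delta / E)**2                           # a priori choice (5.6)
f_mu = sigma / (sigma**2 + mu * lam**p) * g_delta  # regularized solution (4.3)

# Constants of Lemmas 2.4 and 2.5 with eta1 = eta2 = 1
c1 = (2*p + 1)**((2*p + 1)/(2*p + 2)) / (2*p + 2)
c2 = p**(p/(2*p + 2)) * (p + 2)**((p + 2)/(2*p + 2)) / (2*p + 2)
err = np.linalg.norm(f_mu - f)
bound = (c1 + c2) * delta**(p/(p + 1)) * E**(1/(p + 1))
```

The measured error stays below the theoretical bound $$(c_{1}+c_{2})\delta ^{\frac{p}{p+1}}E^{\frac{1}{p+1}}$$, as the per-mode inequalities in the proof guarantee.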

### 5.2 An a posteriori selection rule

In this subsection, an a posteriori choice rule for the regularization parameter μ is considered, primarily based on Morozov’s discrepancy principle [28]. Using the result of conditional stability in Theorem 3.2, a convergence result of Hölder type is obtained for the regularization method.

Let $$0<\tau \delta < \Vert g^{\delta} \Vert$$; the regularization parameter μ is then selected by solving the following equation:

$$\bigl\Vert K f_{\mu}^{\delta}(y)-g^{\delta}(y) \bigr\Vert =\tau \delta .$$
(5.8)

### Lemma 5.2

Let $$\theta (\mu )= \Vert K f_{\mu}^{\delta}(y)-g^{\delta}(y) \Vert$$, then the following results are true:

1. (1)

$$\theta (\mu )$$ is a continuous function;

2. (2)

$$\lim_{\mu \rightarrow 0} \theta (\mu )=0$$;

3. (3)

$$\lim_{\mu \rightarrow \infty} \theta (\mu )= \Vert g^{\delta} \Vert$$;

4. (4)

For $$\mu \in (0, \infty )$$, $$\theta (\mu )$$ is a strictly increasing function.

### Proof

This lemma can be proven by writing

$$\theta (\mu )= \Biggl(\sum_{n=1}^{\infty} \biggl( \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \biggr)^{2} \bigl(g_{n}^{\delta} \bigr)^{2} \Biggr)^{\frac{1}{2}}.$$
(5.9)

□
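Since θ(μ) is continuous and strictly increasing with range $$(0, \Vert g^{\delta} \Vert )$$, equation (5.8) has a unique solution whenever $$0<\tau \delta < \Vert g^{\delta} \Vert$$, and it can be found by bisection on a logarithmic scale. The sketch below evaluates θ via the representation (5.9) with the model values $$\sigma _{n}=1/n$$ and $$\lambda _{n}^{p}=n^{2}$$ (illustrative assumptions, p = 1):

```python
import numpy as np

def theta(mu, sigma, lam_p, g_delta):
    """theta(mu) = ||K f_mu^delta - g^delta||, computed via (5.9)."""
    w = mu * lam_p / (sigma**2 + mu * lam_p)
    return np.sqrt(np.sum((w * g_delta)**2))

def choose_mu(sigma, lam_p, g_delta, tau, delta, lo=1e-16, hi=1e16, iters=200):
    """Solve theta(mu) = tau*delta by log-scale bisection; valid because
    theta is continuous and strictly increasing (Lemma 5.2)."""
    assert 0 < tau * delta < np.linalg.norm(g_delta)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # geometric midpoint
        if theta(mid, sigma, lam_p, g_delta) < tau * delta:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```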

To obtain a convergence estimate, the following conclusions are required.

### Lemma 5.3

For fixed $$\tau >1$$, the regularization parameter μ determined by the discrepancy principle (5.8) satisfies

$$\mu ^{-\frac{1}{2 p+2}}\leq \biggl(\frac{c_{3}}{\tau -1} \biggr)^{ \frac{1}{p+1}} \biggl( \frac{E}{\delta} \biggr)^{\frac{1}{p+1}}.$$
(5.10)

### Proof

From the a posteriori choice rule (5.8),

\begin{aligned} \tau \delta ={}& \Biggl\Vert \sum _{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n}^{\delta} \varphi _{n}(y) \Biggr\Vert \\ \leq{}& \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \bigl(g_{n}^{\delta}-g_{n} \bigr) \varphi _{n}(y) \Biggr\Vert \\ &{}+ \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n} \varphi _{n}(y) \Biggr\Vert \\ \leq{}& \delta + \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{\frac{p}{2}}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \bigl(- \sqrt{\lambda _{n}} T^{\alpha +\beta} \bigr) \lambda _{n}^{ \frac{p}{2}} f_{n} \varphi _{n}(y) \Biggr\Vert \\ \leq {}&\delta +E \sup_{n} \frac{\mu \lambda _{n}^{\frac{p}{2}}}{ (\frac{\eta _{1}}{\sqrt{\lambda _{n}}} )^{2}+\mu \lambda _{n}^{p}} \frac{\eta _{2}}{\sqrt{\lambda _{n}}} \\ \leq {}&\delta +E \sup_{s} \frac{\mu \eta _{2} s^{\frac{p+1}{2}}}{\eta _{1}^{2}+\mu s^{p+1}} . \end{aligned}
(5.11)

On the other hand, from Lemma 2.6 we can obtain that

$$\tau \delta \leq \delta +E c_{3} \mu ^{\frac{1}{2}}.$$
(5.12)

After a straightforward calculation, it follows that

$$\mu ^{-\frac{1}{2 p+2}}\leq \biggl(\frac{c_{3}}{\tau -1} \biggr)^{ \frac{1}{p+1}} \biggl( \frac{E}{\delta} \biggr)^{\frac{1}{p+1}}.$$
(5.13)

□

### Theorem 5.4

Assume that the a priori condition (3.9) is satisfied and that the exact data g and the measurement data $$g^{\delta}$$ satisfy (1.4).

If $$p>0$$, then the following convergence estimate holds:

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq \biggl(c_{1} \biggl( \frac{c_{3}}{\tau -1} \biggr)^{\frac{1}{p+1}}+ \biggl( \frac{\tau +1}{\eta _{1}} \biggr)^{\frac{p}{p+1}} \biggr) E^{ \frac{1}{p+1}} \delta ^{\frac{p}{p+1}},$$
(5.14)

where $$c_{1}$$, $$c_{3}$$ are given in Lemmas 2.4 and 2.6.

### Proof

According to the triangle inequality,

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq \bigl\Vert f_{\mu}^{ \delta}(y)-f_{\mu}(y) \bigr\Vert + \bigl\Vert f_{\mu}(y)-f(y) \bigr\Vert =I_{3}+I_{4}.$$
(5.15)

Firstly, $$I_{3}$$ is estimated using Lemma 5.3:

$$I_{3}= \bigl\Vert f_{\mu}^{\delta}(y)-f_{\mu}(y) \bigr\Vert \leq c_{1} \delta \mu ^{-\frac{1}{2p+2}} \leq c_{1} \biggl(\frac{c_{3}}{\tau -1} \biggr)^{\frac{1}{p+1}} E^{\frac{1}{p+1}} \delta ^{ \frac{p}{p+1}}.$$
(5.16)

On the other hand, an estimation of $$I_{4}$$ can be deduced as follows:

\begin{aligned} I_{4} ={}& \bigl\Vert f_{\mu}(y)-f(y) \bigr\Vert \\ ={}& \Biggl\Vert \sum_{n = 1}^{ \infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} f_{n} \varphi _{n}(y) \Biggr\Vert \\ ={}& \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \frac{f_{n}}{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )} \varphi _{n}(y) \Biggr\Vert \\ ={}& \Biggl\Vert \sum_{n=1}^{\infty} \biggl[ \frac{\mu \lambda _{n}^{p} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} f_{n} \varphi _{n}(y) \biggr]^{\frac{p}{p+1}} \\ & {}\times \biggl[ \frac{\mu \lambda _{n}^{p} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \frac{f_{n}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{p+1}} \varphi _{n}(y) \biggr]^{\frac{1}{p+1}} \Biggr\Vert . \end{aligned}

By Hölder inequality, it follows that

\begin{aligned} I_{4}\leq{}& \Biggl\Vert \sum _{n=1}^{\infty} \frac{\mu \lambda _{n}^{p} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} f_{n} \varphi _{n}(y) \Biggr\Vert ^{\frac{p}{p+1}} \\ &{}\times \Biggl\Vert \sum _{n=1}^{\infty} \frac{\mu \lambda _{n}^{p} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} )}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \frac{f_{n}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{p+1}} \varphi _{n}(y) \Biggr\Vert ^{\frac{1}{p+1}} \\ ={}& \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n} \varphi _{n}(y) \Biggr\Vert ^{\frac{p}{p+1}} \\ &{}\times \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \frac{f_{n}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{p}} \varphi _{n}(y) \Biggr\Vert ^{\frac{1}{p+1}}. \end{aligned}

From (1.4), (5.8), (3.9) one can get that

\begin{aligned} I_{4} \leq& \Biggl( \Biggl\Vert \sum _{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} \bigl(g_{n}-g_{n}^{\delta} \bigr) \varphi _{n}(y) \Biggr\Vert \\ & {}+ \Biggl\Vert \sum_{n = 1}^{\infty} \frac{\mu \lambda _{n}^{p}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} (-\sqrt{\lambda _{n}} T^{\alpha +\beta} ) )^{2}+\mu \lambda _{n}^{p}} g_{n}^{\delta} \varphi _{n}(y) \Biggr\Vert \Biggr)^{\frac{p}{p+1}} \Biggl\Vert \sum _{n = 1}^{\infty} \biggl(\frac{\sqrt{\lambda _{n}}}{\eta _{1}} \biggr)^{p} f_{n} \varphi _{n}(y) \Biggr\Vert ^{\frac{1}{p+1}} \\ \leq &(\delta +\tau \delta )^{\frac{p}{p+1}}\eta _{1}^{-\frac{p}{p+1}} E^{\frac{1}{p+1}} \\ =& \biggl(\frac{\tau +1}{\eta _{1}} \biggr)^{\frac{p}{p+1}} E^{ \frac{1}{p+1}} \delta ^{\frac{p}{p+1}} . \end{aligned}
(5.17)

According to (5.15), (5.16), and (5.17), the estimate is obtained

$$\bigl\Vert f_{\mu}^{\delta}(y)-f(y) \bigr\Vert \leq \biggl(c_{1} \biggl( \frac{c_{3}}{\tau -1} \biggr)^{\frac{1}{p+1}}+ \biggl( \frac{\tau +1}{\eta _{1}} \biggr)^{\frac{p}{p+1}} \biggr) E^{ \frac{1}{p+1}} \delta ^{\frac{p}{p+1}}.$$

The proof of this theorem is completed. □

## 6 Numerical experiments

In this section, the effectiveness of the generalized Tikhonov regularization method is demonstrated through numerical examples. Let $$\Omega = (0, \pi )$$, $$L=-\frac{\partial ^{2}}{\partial y^{2}}: H^{2}(0, \pi ) \cap H_{0}^{1}(0, \pi ) \rightarrow L^{2}(0,\pi )$$, $$\lambda _{n}=n^{2}$$, and $$\varphi _{n}(y)= \sqrt{\frac{2}{\pi}} \sin (n y)$$ for $$n=1,2, \ldots$$ . Generally, the a priori bound E is hard to obtain accurately. Therefore, the numerical results are presented solely under the a posteriori selection rule for the regularization parameter.

### 6.1 Expression of solution

This paper considers the following forward problem:

$$\textstyle\begin{cases} D_{x}^{2 \alpha} u(x, y)+x^{2 \beta} u_{y y}(x, y) = 0,&(x, y) \in [0, \infty ) \times (0, \pi ), \\ u(x, 0) = u(x, \pi ) = 0,&x \in [0,\infty ), \\ u(0, y) = f(y),&y\in (0,\pi ), \\ \lim_{x \rightarrow \infty} u(x, y)=0,&y\in (0,\pi ). \end{cases}$$
(6.1)

For the given function $$f(y)$$, the solution of problem (6.1) can be written as

$$u(x, y)=\sum_{n=1}^{\infty} f_{n} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \bigl(-\sqrt{\lambda _{n}} x^{\alpha +\beta} \bigr) \varphi _{n}(y).$$
(6.2)

Let $$T=1$$, the exact data is given as

$$g(y)=u(1, y)=\sqrt{\frac{2}{\pi}} \sum_{n=1}^{ \infty} f_{n} E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) \sin (n y),$$
(6.3)

here, the coefficient $$f_{n}= (f(y), \varphi _{n}( y) )$$ is calculated by the composite trapezoidal rule. The measurement data $$g^{\delta}$$ are generated by adding random noise to g:

$$g^{\delta}=g+\varepsilon \cdot g \cdot \bigl(2 \cdot \operatorname{rand}( M+1)-1\bigr).$$
(6.4)

The measurement error bound is calculated by $$\delta =\varepsilon \|g\|_{l^{2}}$$, where $$\|g\|_{l^{2}}= \sqrt{\frac{1}{M+1} \sum_{i=1}^{ M+1} (g_{i} )^{2}}$$. Based on (4.3), the regularization solution of (6.1) is computed by

$$f_{\mu}^{\delta}(y) = \sqrt{\frac{2}{\pi}} \sum _{n=1}^{ \infty} \frac{E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n)}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2}+\mu n^{2 p}} g_{n}^{\delta} \sin (n y) .$$
(6.5)
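A minimal sketch of the data generation (6.4) and of the error bound $$\delta =\varepsilon \|g\|_{l^{2}}$$; the grid size M, the noise level ε, and the exact data vector g are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

M, eps = 100, 0.01
y = np.linspace(0.0, np.pi, M + 1)
g = np.sin(y)                                    # hypothetical exact data

# (6.4): componentwise uniform noise in [-eps*g_i, eps*g_i]
g_delta = g + eps * g * (2 * rng.random(M + 1) - 1)

# delta = eps * ||g||_{l^2} with the discrete norm used in Sect. 6
delta = eps * np.sqrt(np.sum(g**2) / (M + 1))
```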

To assess the sensitivity of the numerical results, $$[0, \pi ]$$ is evenly divided into M parts, and the relative root mean square error is calculated by

$$\epsilon _{1}(f)= \frac{\sqrt{\frac{1}{M+1} \sum_{i=1}^{M+1}\lvert f (y_{i} )-f_{\mu}^{\delta} (y_{i} )\rvert ^{2} }}{\sqrt{\frac{1}{M+1} \sum_{i=1}^{M+1}\lvert f (y_{i} )\rvert ^{2}}}.$$
(6.6)
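The error measure (6.6) translates directly into code, the common $$\frac{1}{M+1}$$ factors cancelling in the ratio:

```python
import numpy as np

def rel_rmse(f_exact, f_reg):
    """Relative root mean square error epsilon_1(f) of (6.6)."""
    f_exact = np.asarray(f_exact, dtype=float)
    f_reg = np.asarray(f_reg, dtype=float)
    return np.sqrt(np.mean((f_exact - f_reg)**2) / np.mean(f_exact**2))
```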

From (4.1), it is known that

$$J_{\mu} \bigl(f^{k} \bigr)=\min_{f^{k} \in L^{2}(\Omega )} \bigl\{ \bigl\Vert K f^{k}-g^{\delta} \bigr\Vert ^{2}+\mu \bigl\Vert f^{k} \bigr\Vert _{D (L^{\frac{p}{2}} )}^{2} \bigr\} ,$$
(6.7)

let

$$J \bigl(f^{k} \bigr)= \bigl\Vert K f^{k}-g^{\delta} \bigr\Vert ^{2}+\mu \bigl\Vert f^{k} \bigr\Vert _{D (L^{\frac{p}{2}} )}^{2} ,$$
(6.8)

so

$$J_{\mu} \bigl(f^{k} \bigr)=\min_{f^{k} \in L^{2}(\Omega )} J \bigl(f^{k} \bigr).$$
(6.9)

The conjugate gradient method is employed to find the minimizer of $$J (f^{k} )$$, and its fundamental principle can be outlined as follows.

Given the initial guess $$f^{0}$$, assume that the iterate $$f^{k}$$ of the k-th step is known. Then an appropriate step size $$r_{k}>0$$ is selected, and $$f^{k}$$ is updated by

$$f^{k+1}=f^{k}+r_{k} d_{k},$$
(6.10)

where the iteration direction $$d_{k}$$ is

$$d_{k}= \textstyle\begin{cases} -J^{\prime} (f^{k} ), & \text{if } k= 0, \\ -J^{\prime} (f^{k} )+s_{k} d_{k-1}, &\text{if } k>0, \end{cases}$$
(6.11)

with $$s_{k}= \frac{ \Vert J^{\prime} (f^{k} ) \Vert ^{2}}{ \Vert J^{\prime} (f^{k-1} ) \Vert ^{2}}$$, $$r_{k}=\arg \min_{r \geq 0} J (f^{k}+r d_{k} )$$.

To obtain $$f^{k+1}$$, the next step involves finding $$d_{k}$$ and $$r_{k}$$. Based on (6.8), it is known that

$$J(f)= \bigl\Vert Kf-g^{\delta} \bigr\Vert ^{2}+\mu \Vert f \Vert _{D (L^{ \frac{p}{2}} )}^{2},$$
(6.12)

so

$$\frac{\partial J(f)}{\partial f}=2 K^{*} \bigl(K f-g^{\delta} \bigr)+2 \mu L^{p} f,$$
(6.13)

substituting $$f^{k}$$ into equation (6.13) yields

\begin{aligned} J^{\prime} \bigl(f^{k} \bigr)&=2 K^{*} \bigl(K f^{k}-g^{\delta} \bigr)+2 \mu L^{p} f^{k} \\ &=2\sum_{n=1}^{\infty} \bigl[ \bigl( \bigl(E_{\alpha , 1+ \frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) \bigr)^{2}+\mu {n}^{2p} \bigr) f_{n}^{k}-E_{\alpha , 1+ \frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) g_{n}^{\delta} \bigr] \varphi _{n}(y). \end{aligned}
(6.14)

According to (6.8), the following can be derived:

$$J \bigl(f^{k}+r d_{k} \bigr)= \bigl\Vert K \bigl(f^{k}+r d_{k} \bigr)-g^{ \delta} \bigr\Vert ^{2}+\mu \bigl\Vert f^{k}+r d_{k} \bigr\Vert _{D (L^{ \frac{p}{2}} )}^{2},$$
(6.15)

so

$$\frac{\partial J (f^{k}+r d_{k} )}{\partial r}=2 \bigl({K} d_{k}, K f^{k}-g^{\delta} \bigr)+2 \mu \bigl(L^{p} f^{k}, d_{k} \bigr)+2r \Vert K d_{k} \Vert ^{2}+2r\mu \bigl(L^{p} d_{k}, d_{k} \bigr).$$

Setting $$\frac{\partial J (f^{k}+r d_{k} )}{\partial r}=0$$, the step size $$r_{k}$$ is determined as

\begin{aligned} r_{k} & =- \frac{ ({K} d_{k}, K f^{k}-g^{\delta} )+\mu (L^{p} f^{k}, d_{k} )}{ \Vert K d_{k} \Vert ^{2}+\mu (L^{p} d_{k}, d_{k} )} \\ & =-\sum_{n=1}^{\infty} \frac{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2} f_{n}^{k}-E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) g_{n}^{\delta}+\mu {n}^{2p} f_{n}^{k}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2} \Vert d_{k} \Vert ^{2}+\mu n^{2p} \Vert d_{k} \Vert ^{2}} d_{k} \varphi _{n}(y). \end{aligned}

The iterative steps of the conjugate gradient method for the numerical reconstruction of the unknown boundary value $$f(y)$$ can be summarized as follows:

Step 1: Let $$k=0$$, select the initial guess $$f^{0}$$ and the error accuracy $$\epsilon >0$$;

Step 2: Calculate the initial iterative direction $$d_{0}=-J^{\prime} (f^{0} )$$;

Step 3: Compute the initial step size

$$r_{0}=-\sum_{n=1}^{\infty} \frac{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2} f_{n}^{0}-E_{\alpha , 1+\frac{\beta}{\alpha} , \frac{\beta}{\alpha}}(-n) g_{n}^{\delta}+\mu {n}^{2p} f_{n}^{0}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2} \Vert d_{0} \Vert ^{2}+\mu {n}^{2p} \Vert d_{0} \Vert ^{2}} d_{0} \varphi _{n}(y).$$

Update $$f^{1}=f^{0}+r_{0} d_{0}$$;

Step 4: For $$k=1,2, \ldots$$ , calculate the conjugate direction

$$d_{k}=-J^{\prime} \bigl(f^{k} \bigr)+s_{k} d_{k-1},$$
(6.16)

with $$s_{k}= \frac{ \Vert J^{\prime} (f^{k} ) \Vert ^{2}}{ \Vert J^{\prime} (f^{k-1} ) \Vert ^{2}}$$.

Compute the step size

$$r_{k}=-\sum_{n=1}^{\infty} \frac{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2} f_{n}^{k}-E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) g_{n}^{\delta}+\mu {n}^{2p} f_{n}^{k}}{ (E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n) )^{2} \Vert d_{k} \Vert ^{2}+\mu {n}^{2p} \Vert d_{k} \Vert ^{2}} d_{k} \varphi _{n}(y).$$

Update $$f^{k+1}=f^{k}+r_{k} d_{k}$$; if $$\Vert J^{\prime} (f^{k+1} ) \Vert \leq \epsilon$$, output $$f^{k+1}$$ and stop;

Step 5: Set $$k \leftarrow k+1$$ and return to Step 4.
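As a concrete illustration, the iteration above can be sketched in Python for the diagonal (spectral) setting, where $$K$$ acts componentwise through the eigenvalues $$k_{n}=E_{\alpha , 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}}(-n)$$ and $$L^{p}$$ through $$n^{2p}$$. This is a minimal sketch, not the author's production code: all names are illustrative, the operators are passed as plain lists of eigenvalues, and the minimization of (6.8) is carried out as standard conjugate gradient on the equivalent normal equations $$(K^{2}+\mu L^{p})f=Kg^{\delta}$$.

```python
import math

def cg_tikhonov(k, g_delta, mu, p, tol=1e-12, max_iter=100):
    """Minimize J(f) = ||Kf - g||^2 + mu*||f||_{D(L^{p/2})}^2 in the eigenbasis.
    K and L^p are diagonal: (Kf)_n = k[n]*f[n], (L^p f)_n = (n+1)^(2p)*f[n]
    (0-based index, so n+1 plays the role of the paper's n = 1, 2, ...).
    Conjugate gradient on A f = b with A = K^2 + mu*L^p (diagonal, SPD), b = K g_delta."""
    N = len(k)
    A = [k[n] ** 2 + mu * (n + 1) ** (2 * p) for n in range(N)]
    b = [k[n] * g_delta[n] for n in range(N)]
    f = [0.0] * N                       # initial guess f^0 = 0
    r = b[:]                            # residual r^0 = b - A f^0, i.e. -J'(f^0)/2
    d = r[:]                            # initial direction d_0 (Step 2)
    rho = sum(x * x for x in r)
    for _ in range(max_iter):
        if math.sqrt(rho) <= tol:       # stopping rule ||J'(f^k)|| <= epsilon
            break
        Ad = [A[n] * d[n] for n in range(N)]
        alpha = rho / sum(d[n] * Ad[n] for n in range(N))   # exact line search r_k
        f = [f[n] + alpha * d[n] for n in range(N)]         # f^{k+1} = f^k + r_k d_k
        r = [r[n] - alpha * Ad[n] for n in range(N)]
        rho_new = sum(x * x for x in r)
        s = rho_new / rho               # Fletcher-Reeves coefficient s_k
        d = [r[n] + s * d[n] for n in range(N)]             # direction update (6.11)
        rho = rho_new
    return f
```

Because $$A$$ is diagonal here, the minimizer has the closed form $$f_{n}=k_{n}g_{n}^{\delta}/(k_{n}^{2}+\mu n^{2p})$$, which the iteration reproduces to machine precision and which serves as a convenient correctness check.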

To perform a sensitivity analysis of the numerical results, $$[0, \pi ]$$ is divided evenly into $$M$$ parts, and the relative root mean square error is calculated by

$$\epsilon _{2}(f)= \frac{\sqrt{\frac{1}{M+1} \sum_{i=1}^{M+1}\lvert f (y_{i} )-f^{k+1} (y_{i} )\rvert ^{2} }}{\sqrt{\frac{1}{M+1} \sum_{i=1}^{M+1}\lvert f (y_{i} )\rvert ^{2}}}.$$
(6.17)
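The error measures (6.6) and (6.17) share the same form and can be computed with a short helper; the function name below is illustrative, and the grid values are assumed to be sampled at the same $$M+1$$ points for both arguments.

```python
import math

def relative_rmse(f_exact, f_approx):
    """Relative root mean square error, as in (6.6) and (6.17):
    sqrt(mean((f - f_approx)^2)) / sqrt(mean(f^2)) over the grid points."""
    m = len(f_exact)
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(f_exact, f_approx)) / m)
    den = math.sqrt(sum(a * a for a in f_exact) / m)
    return num / den
```

Note that the $$\frac{1}{M+1}$$ factors cancel in the quotient, so the result is just the ratio of the Euclidean norms of the error and of the exact data.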

### 6.3 Numerical examples

This section verifies the effectiveness of the generalized Tikhonov regularization method using two numerical methods on two examples: a smooth function and a piecewise function.

In Examples 1 and 2, let $$M=100$$, $$n=1,2, \ldots , 50$$, $$\tau =1.01$$, and $$\varepsilon = 0.01$$. When computing from the expression of the regularized solution, the regularization parameter μ is selected by the a posteriori rule (5.9). When computing with the conjugate gradient method, μ is selected by the a priori rule in (6.1), and $$\epsilon =0.001$$ is taken.

### Example 1

Consider the case where the solution is a smooth function

$$f(y)=\frac{\sqrt{3}}{3 \pi} \bigl(y^{3}-1 \bigr) \sin (3 y),\quad 0 \leq y \leq \pi .$$

For $$\alpha =0.7,0.9$$, $$\beta =0.1,0.9$$, $$p=1,2$$, and $$\varepsilon = 0.1,0.01,0.001$$, the simulation results for the exact and generalized Tikhonov regularized solutions are shown in Figs. 1–4, respectively. The simulation results of the two numerical methods are shown in Fig. 5.

According to Figs. 1–3, α, β, and p have little effect on the regularized solution. As shown in Fig. 4, the simulation effect is slightly worse when $$\varepsilon = 0.1$$ and better when $$\varepsilon = 0.01$$ and $$\varepsilon = 0.001$$. Overall, the proposed method is stable and feasible.

As shown in Fig. 5, when the exact solution is a smooth function, the difference between the results obtained from the expression of the solution and those obtained by the conjugate gradient method is not significant.

The effects of α and β on the numerical results are shown in Tables 1 and 2, respectively.

According to Tables 1 and 2, it is evident that as α increases, both $$\epsilon _{1}(f)$$ and $$\epsilon _{2}(f)$$ increase, with $$\epsilon _{2}(f)$$ increasing more rapidly. When β increases, both $$\epsilon _{1}(f)$$ and $$\epsilon _{2}(f)$$ increase, and the impact on $$\epsilon _{2}(f)$$ is slightly greater.

### Example 2

Consider the case where the solution is a piecewise function

$$f(y)=\textstyle\begin{cases} y, & 0 \leq y \leq \frac{\pi}{3}, \\ y \sin (2 y-\frac{\pi}{6} ), & \frac{\pi}{3}< y \leq \frac{2 \pi}{3}, \\ y-\pi , & \frac{2 \pi}{3}< y \leq \pi . \end{cases}$$
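The piecewise exact solution can be coded directly from the formula above; as a quick sanity check, it is continuous at the two break points $$y=\frac{\pi}{3}$$ and $$y=\frac{2\pi}{3}$$ but not differentiable there, which is what makes this example a harder test for the regularization method. The function name is illustrative.

```python
import math

def f_exact(y):
    """Piecewise exact solution of Example 2 on [0, pi]."""
    if y <= math.pi / 3:
        return y
    elif y <= 2 * math.pi / 3:
        return y * math.sin(2 * y - math.pi / 6)
    else:
        return y - math.pi
```

At $$y=\frac{\pi}{3}$$ the middle branch gives $$\frac{\pi}{3}\sin \frac{\pi}{2}=\frac{\pi}{3}$$, matching the first branch, and at $$y=\frac{2\pi}{3}$$ it gives $$\frac{2\pi}{3}\sin \frac{7\pi}{6}=-\frac{\pi}{3}$$, matching the third branch, so the function is indeed continuous.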

For $$\alpha =0.7,0.9$$, $$\beta =0.1,0.9$$, $$p=1,2$$, and $$\varepsilon = 0.1,0.01,0.001$$, the simulation results for the exact and generalized Tikhonov regularized solutions are shown in Figs. 6–9, respectively. The simulation results of the two numerical methods are shown in Fig. 10.

According to Figs. 6–8, α, β, and p have little effect on the regularized solution. As shown in Fig. 9, when $$\varepsilon = 0.001$$, the regularized and exact solutions essentially coincide at the nonsmooth points. When $$\varepsilon = 0.01$$, the two are not significantly different at the nonsmooth points. When $$\varepsilon = 0.1$$, the simulation effect at the nonsmooth points is not as good as in the smooth case.

From Fig. 10, it can be seen that when the exact solution is nonsmooth, the regularized solution obtained from the expression of the solution is better than that obtained by the conjugate gradient method, especially at the nonsmooth points, where the difference is clearly visible.

The effects of α and β on the numerical results are shown in Tables 3 and 4, respectively.

According to Tables 3 and 4, as α increases, $$\epsilon _{1}(f)$$ keeps increasing, while $$\epsilon _{2}(f)$$ fluctuates but shows an overall upward trend; when β increases, $$\epsilon _{1}(f)$$ also increases, though the effect on it is smaller than on $$\epsilon _{2}(f)$$, which changes slightly more.

## 7 Conclusions

This paper considers an inverse boundary value problem for a fractional elliptic equation of Tricomi–Gellerstedt–Keldysh type. First, a conditional stability result of Hölder type is established for the inverse problem. Then, based on the ill-posedness analysis, a generalized Tikhonov regularization method is proposed to restore the continuous dependence of the solution on the noisy data. The corresponding convergence results of Hölder type are derived under both the a priori and a posteriori selection rules for the regularization parameter. Numerical results show that the method is stable and feasible for solving the considered problem. Given the novelty of the inverse problem addressed here and its applications across various scientific fields, future work will further explore the regularization theory and numerical algorithms for this problem.

## Data Availability

No datasets were generated or analysed during the current study.


## References

1. Podlubny, I.: Fractional Differential Equations. Mathematics in Science and Engineering, pp. 1–340. Academic Press, San Diego (1999)

2. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)

3. Bers, L.: Mathematical Aspects of Subsonic and Transonic Gas Dynamics. Wiley, New York (1958)

4. Chen, W., Lucente, S., Palmieri, A.: Nonexistence of global solutions for generalized Tricomi equations with combined nonlinearity. Nonlinear Anal., Real World Appl. 61(2), 103354 (2021)

5. Oguri, A., Teratani, Y., Tsutsumi, K., Sakano, R.: Current noise and Keldysh vertex function of an Anderson impurity in the Fermi-liquid regime (2021)

6. Tricomi, F.: Sulle equazioni lineari alle derivate parziali di tipo misto (1923)

7. Gellerstedt, S.: Sur un problème aux limites pour une équation linéaire aux dérivées partielles du second ordre de type mixte. Uppsala University (1935)

8. Keldysh, M.V.: On some cases of degenerate elliptic equations on the boundary of a domain. Dokl. Akad. Nauk SSSR 77, 181–183 (1951)

9. Galstyan, A.: Global existence for the one-dimensional semilinear Tricomi-type equations (2022)

10. Dubey, S., Dubey, V.P., Singh, J., Alshehri, A.M., Kumar, D.: Computational study of a local fractional Tricomi equation occurring in fractal transonic flow. J. Comput. Nonlinear Dyn. 17(8), 081006 (2022)

11. Baishemirov, Z., Berdyshev, A., Ryskan, A.: A solution of a boundary value problem with mixed conditions for a four-dimensional degenerate elliptic equation. Mathematics 10 (2022)

12. Weber, B.: Regularity and a Liouville theorem for a class of boundary-degenerate second order equations. J. Differ. Equ. 281, 459–502 (2021)

13. Algazin, O.D.: Exact solution to the Dirichlet problem for degenerating on the boundary elliptic equation of Tricomi-Keldysh type in the half-space (2016)

14. Zhang, K.: Nonexistence of global weak solutions of nonlinear Keldysh type equation with one derivative term. Adv. Math. Phys. 2018, 1–7 (2018)

15. Baltaeva, U., Torres, P.J.: Analog of the Gellerstedt problem for a loaded equation of the third order. Math. Methods Appl. Sci. (2019)

16. Ruzhansky, M., Torebek, B.T., Turmetov, B.K.: Well-posedness of Tricomi-Gellerstedt-Keldysh-type fractional elliptic problems (2021)

17. Roumaissa, S., Nadjib, B., Faouzia, R., Abderafik, B.: Iterative regularization method for an abstract ill-posed generalized elliptic equation. Asian-Eur. J. Math. 14(05), 1–27 (2021)

18. Djemoui, S., Meziani, M.S.E., Nadjib, B.: The conditional stability and an iterative regularization method for a fractional inverse elliptic problem of Tricomi-Gellerstedt-Keldysh type. Math. Model. Anal. 29, 23–45 (2024)

19. Zhang, H., Zhang, X.: Generalized Tikhonov method for the final value problem of time-fractional diffusion equation. Int. J. Comput. Math. 94(1/4), 66–78 (2017)

20. Ma, Y.K., Prakash, P., Deiveegan, A.: Generalized Tikhonov methods for an inverse source problem of the time-fractional diffusion equation. Chaos Solitons Fractals 108, 39–48 (2018)

21. Zhang, H., Zhang, X.: Generalized Tikhonov-type regularization method for the Cauchy problem of a semi-linear elliptic equation. Numer. Algorithms 81 (2018)

22. Deiveegan, A., Nieto, J.J., Prakash, P.: The revised generalized Tikhonov method for the backward time-fractional diffusion equation. J. Appl. Anal. Comput. 9(1), 45–56 (2019)

23. Zhang, H., Zhang, X.: Solving the Riesz–Feller space-fractional backward diffusion problem by a generalized Tikhonov method. Adv. Differ. Equ. 2020(1), 390 (2020)

24. Kilbas, A.A., Saigo, M.: On solution of integral equation of Abel-Volterra type. Differ. Integral Equ. 8(5), 993–1011 (1995)

25. Boudabsa, L., Simon, T., Vallois, P.: Fractional extreme distributions (2019)

26. Xiong, X., Xue, X.: A fractional Tikhonov regularization method for identifying a space-dependent source in the time-fractional diffusion equation. Appl. Math. Comput. 349, 292–303 (2019)

27. Zhang, X., Zhang, H.: Fractional Tikhonov regularization method for an inverse boundary value problem of the fractional elliptic equation. Acta Math. Sci. Ser. A (2024) [in Chinese]

28. Morozov, V.A.: Methods for Solving Incorrectly Posed Problems. Springer, New York (1984)

## Acknowledgements

The author would like to thank the reviewers for their careful reading and valuable suggestions that improved the quality of this paper.


## Author information


### Contributions

This paper is completed by myself.

### Corresponding author

Correspondence to Xiao Zhang.

## Ethics declarations

Not applicable.

### Consent for publication

The author agrees to publish this paper in Boundary Value Problems.

### Competing interests

The authors declare no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions


Zhang, X. Generalized Tikhonov regularization method for an inverse boundary value problem of the fractional elliptic equation. Bound Value Probl 2024, 80 (2024). https://doi.org/10.1186/s13661-024-01887-7