Expanding the applicability of Lavrentiev regularization methods for ill-posed problems
© Argyros et al.; licensee Springer. 2013
Received: 29 January 2013
Accepted: 18 April 2013
Published: 7 May 2013
In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, with expanding the applicability of this method by weakening the popular Lipschitz-type hypotheses considered in earlier studies such as (Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 26:35-48, 2005; Bakushinskii and Smirnova in Nonlinear Anal. 64:1255-1261, 2006; Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 28:13-25, 2007; Jin in Math. Comput. 69:1603-1623, 2000; Mahale and Nair in ANZIAM J. 51:191-217, 2009). Numerical examples are given to show that our convergence criteria are weaker, and our error analysis tighter at lower computational cost, than in the corresponding works cited above.
MSC: 65F22, 65J15, 65J22, 65M30, 47A52.
Keywords: Lavrentiev regularization method; Hilbert space; ill-posed problems; stopping index; Fréchet derivative; source function; boundary value problem
for all .
where is the regularization parameter and is an initial guess for the solution .
where and is a sequence of positive real numbers satisfying as . It is important to stop the iteration at an appropriate step, say , and show that is well defined for and as (see ).
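Since the displays for the method did not survive extraction, the following is only a minimal sketch of Lavrentiev regularization in the linear case, with all names and values hypothetical: the ill-posed equation with a monotone operator A is replaced by the well-posed regularized equation (A + alpha*I) x = y_delta + alpha*x0.

```python
import numpy as np

# Hypothetical sketch: Lavrentiev regularization for a linear,
# positive semi-definite (monotone) operator A on R^n.  The regularized
# system (A + alpha*I) x = y_delta + alpha*x0 replaces the ill-posed
# system A x = y.  All names here are illustrative, not from the paper.

def lavrentiev(A, y_delta, alpha, x0):
    """Solve (A + alpha I) x = y_delta + alpha x0."""
    n = A.shape[0]
    return np.linalg.solve(A + alpha * np.eye(n), y_delta + alpha * x0)

# Mildly ill-conditioned monotone operator (symmetric PSD, tiny eigenvalue).
A = np.diag([1.0, 1e-8])
x_true = np.array([1.0, 1.0])
y = A @ x_true
x0 = np.zeros(2)

x_alpha = lavrentiev(A, y, 1e-4, x0)
# The well-conditioned component is recovered; the nearly-null component
# is damped toward the initial guess x0 instead of being amplified.
```

The damping of the small-eigenvalue component is exactly the stabilizing effect that the regularization parameter provides.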
There exists such that for all ;
- (2)There exists such that(1.6)
- (3), where
In , Mahale and Nair, motivated by the work of Qi-Nian Jin  on an iteratively regularized Gauss-Newton method, considered an alternative stopping criterion which not only ensures convergence, but also yields an order optimal error estimate under a general source condition on . Moreover, the condition that they imposed on is weaker than (1.6).
In the present paper, we are motivated by . In particular, we expand the applicability of the method (1.5) by weakening one of the major hypotheses in  (see Assumption 2.1(2) in the next section).
In Section 2, we consider some basic assumptions required throughout the paper. Section 3 deals with the stopping rule and the result that establishes the existence of the stopping index. In Section 4, we prove results for the iterations based on the exact data and, in Section 5, the error analysis for the noisy data case is proved. The main order optimal result using the a posteriori stopping rule is provided in Section 6.
2 Basic assumptions and some preliminary results
We use the following assumptions to prove the results in this paper.
There exists such that and is Fréchet differentiable.
- (2)There exists such that, for all , and , there exists an element, say , satisfying
for all .
for all .
Clearly, Assumption 2.2 implies Assumption 2.1(2) with , but not necessarily vice versa. Note that holds in general and can be arbitrarily large [16–20]. Indeed, there are many classes of operators satisfying Assumption 2.1(2), but not Assumption 2.2 (see the numerical examples at the end of this study). Moreover, if is sufficiently smaller than K, which can happen since can be arbitrarily large, then the results obtained in this study provide a tighter error analysis than the one in .
Finally, note that the computation of the constant K is more expensive than the computation of .
We need the auxiliary results based on Assumption 2.1.
This completes the proof. □
for all . This completes the proof. □
for all ;
- (3)there exists with such that(2.2)
Next, we assume a condition on the sequence considered in (1.5).
Assumption 2.6 (, Assumption 2.6)
for a constant .
Note that the condition (2.3) on is weaker than (1.6) considered by Bakushinskii and Smirnova  (see ). In fact, if (1.6) is satisfied, then (2.3) is also satisfied with , but the converse need not be true (see ). Further, note that for these choices of , is bounded, whereas as . Assumption 2.1(2) is used in the literature for the regularization of many nonlinear ill-posed problems (see [4, 7, 8, 13, 21]).
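Since the displays (1.6) and (2.3) were lost in extraction, the following is only a hypothetical illustration of the kind of regularization sequence typically used in this setting: a geometrically decaying sequence alpha_n = alpha0 * q**n tends to zero while the ratio of consecutive terms stays bounded by a constant mu = 1/q.

```python
# Hypothetical illustration (the exact conditions (1.6) and (2.3) were lost
# in extraction): a geometrically decaying regularization sequence
# alpha_n = alpha0 * q**n with 0 < q < 1 tends to 0, while the consecutive
# ratio alpha_n / alpha_{n+1} = 1/q is bounded by the constant mu = 1/q.
alpha0, q = 1.0, 0.5
alphas = [alpha0 * q**n for n in range(30)]
ratios = [alphas[n] / alphas[n + 1] for n in range(29)]
mu = 1 / q   # uniform bound on the consecutive ratios
```

Sequences of this shape are the standard way to satisfy a bounded-ratio condition on the regularization parameters while still driving them to zero.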
3 Stopping rule
for all , where and .
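The stopping rules studied in this section are of discrepancy type. As a hedged illustration (the exact rule of this section was lost in extraction; the operator, update step, and the values of tau and n_max below are hypothetical), one stops at the first index whose residual falls below a fixed multiple of the noise level:

```python
import numpy as np

# Generic discrepancy-type a posteriori stopping rule (a sketch, not the
# exact rule of the paper): iterate until the residual ||F(x_n) - y_delta||
# first drops below tau * delta, where delta is the noise level.

def stop_index(F, iterate_step, x0, y_delta, delta, tau=1.5, n_max=100):
    """Return (n, x_n) for the first n with ||F(x_n) - y_delta|| <= tau*delta."""
    x = x0
    for n in range(n_max + 1):
        if np.linalg.norm(F(x) - y_delta) <= tau * delta:
            return n, x
        x = iterate_step(x)
    return n_max, x

# Toy linear example: F(x) = A x, solved by a damped fixed-point iteration.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 1.0])                       # exact data for x* = (1, 1)
delta = 1e-3
y_delta = y + delta * np.array([1.0, 0.0])     # noisy data
step = lambda x: x - 0.3 * (A @ x - y_delta)   # contractive update
n_stop, x_stop = stop_index(lambda x: A @ x, step, np.zeros(2), y_delta, delta)
```

Stopping early in this way prevents the iteration from fitting the noise once the residual is already at the noise level.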
The following technical lemma from  is used to prove some of the results of this paper.
Lemma 3.1 (, Lemma 3.1)
Let and be such that and . Let be non-negative real numbers such that and . Then for all .
The rest of the results in this paper can be proved along the same lines as those of the proof in . In order for us to make the paper as self-contained as possible, we present the proof of one of them, and for the proof of the rest, we refer the reader to .
Theorem 3.2 (, Theorem 3.2)
for all . In particular, if , then we have for all .
Therefore, we have , where . This completes the proof. □
4 Error bound for the case of noise-free data
for all .
We show that each is well defined and belongs to for . For this, we make use of the following lemma.
Lemma 4.1 (, Lemma 4.1)
for all .
Theorem 4.2 (, Theorem 4.2)
for all .
Lemma 4.3 (, Lemma 4.3)
The following corollary follows from Lemma 4.3 by taking . We show that this particular case of Lemma 4.3 is better suited for our later results.
Corollary 4.4 (, Corollary 4.4)
Theorem 4.5 (, Theorem 4.5)
Let the assumptions of Lemma 4.3 hold. If is chosen such that , then .
Lemma 4.6 (, Lemma 4.6)
Remark 4.7 (, Remark 4.7)
It can be seen that (4.7) is satisfied if .
Now, if we take , that is, in Lemma 4.6, then it takes the following form.
Lemma 4.8 (, Lemma 4.8)
5 Error analysis with noisy data
The first result in this section gives an error estimate for under Assumption 2.5, where .
Lemma 5.1 (, Lemma 5.1)
If we take in Lemma 5.1, then we obtain the following corollary as a particular case. We make use of it in the error analysis below.
Corollary 5.2 (, Corollary 5.2)
Lemma 5.3 (, Lemma 5.3)
with and κ as in Lemma 5.1.
Theorem 5.4 (, Theorem 5.4)
where , with and κ as in Lemma 4.8 and Corollary 5.2, respectively, and , as in Lemma 5.3.
6 Order optimal result with an a posteriori stopping rule
In this section, we show the convergence as and also give an optimal error estimate for .
Theorem 6.1 (, Theorem 6.1)
where with ξ as in Theorem 5.4 and is defined as , .
From (6.4), . Now, using (6.5) and (6.6), we get . This completes the proof. □
7 Numerical examples
We provide two numerical examples, where .
for all . Using (7.2), (7.3), Assumptions 2.1(2), 2.2 for , we get .
Next, we provide an example where can be arbitrarily large.
where , and are the given parameters. Note that . Then it can easily be seen that, for sufficiently large and sufficiently small, can be arbitrarily large.
We now present two examples where Assumption 2.2 is not satisfied, but Assumption 2.1(2) is satisfied.
for all , where f is a given continuous function satisfying for all , λ is a real number and the kernel G is continuous and positive in .
where and . Then Assumption 2.1(2) holds for sufficiently small λ.
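As a hedged numerical illustration (the concrete operator of this example was lost in extraction; the kernel, the nonlinearity, and the value of lam below are hypothetical stand-ins), an integral operator with a continuous positive kernel G can be discretized by the composite midpoint rule, and the resulting perturbation stays small for small lam:

```python
import numpy as np

# Hypothetical sketch: discretize a Hammerstein-type operator
#     F(x)(t) = x(t) - lam * \int_0^1 G(t, s) x(s)^2 ds
# with a continuous positive kernel G via the composite midpoint rule.

def make_F(G, lam, m=50):
    s = (np.arange(m) + 0.5) / m       # midpoint nodes on [0, 1]
    w = 1.0 / m                        # uniform quadrature weight
    K = G(s[:, None], s[None, :])      # kernel matrix G(t_i, s_j)
    def F(x):
        return x - lam * w * (K @ x**2)
    return F, s

G = lambda t, s: np.exp(-np.abs(t - s))   # a continuous positive kernel
F, nodes = make_F(G, lam=0.1)
residual = F(np.ones_like(nodes))
# For small lam the integral term lam * \int G(t, s) ds stays well below 1,
# which is the mechanism that makes a condition like Assumption 2.1(2)
# hold for sufficiently small lam.
```

The same discretization pattern applies to any kernel that is continuous and positive on the unit square.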
In the following remarks, we compare our results with the corresponding ones in .
Remark 7.6 Note that the results in  were shown using Assumption 2.2, whereas we used the weaker Assumption 2.1(2) in this paper. Next, our result, Proposition 2.3, was shown with replacing K. Therefore, if (see Example 7.3), then our result is tighter. Proposition 2.4 was shown with replacing K. Then, if , our result is tighter. Theorem 3.2 was shown with replacing 2K. Hence, if , our result is tighter. Similar observations in our favor hold for Lemma 4.1, Theorem 4.2 and the rest of the results in .
where is a known continuous operator. Since , we can compute in Assumption 2.1(2) without actually knowing . Returning to Example 7.1, we see that we can set .
Dedicated to Professor Hari M Srivastava.
This paper was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170).
- Binder A, Engl HW, Groetsch CW, Neubauer A, Scherzer O: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 1994, 55: 215-235. doi:10.1080/00036819408840301
- Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems. Kluwer, Dordrecht; 1996.
- Engl HW, Kunisch K, Neubauer A: Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1989, 5: 523-540. doi:10.1088/0266-5611/5/4/007
- Jin Q, Hou ZY: On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1997, 13: 815-827. doi:10.1088/0266-5611/13/3/016
- Jin Q, Hou ZY: On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 1999, 83: 139-159.
- Scherzer O, Engl HW, Kunisch K: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 1993, 30: 1796-1838. doi:10.1137/0730091
- Tautenhahn U: Lavrentiev regularization of nonlinear ill-posed problems. Vietnam J. Math. 2004, 32: 29-41.
- Tautenhahn U: On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 2002, 18: 191-207. doi:10.1088/0266-5611/18/1/313
- Bakushinskii A, Smirnova A: Iterative regularization and generalized discrepancy principle for monotone operator equations. Numer. Funct. Anal. Optim. 2007, 28: 13-25. doi:10.1080/01630560701190315
- Mahale P, Nair MT: Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 2009, 51: 191-217. doi:10.1017/S1446181109000418
- Bakushinskii A, Smirnova A: On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2005, 26: 35-48. doi:10.1081/NFA-200051631
- Bakushinskii A, Smirnova A: A posteriori stopping rule for regularized fixed point iterations. Nonlinear Anal. 2006, 64: 1255-1261. doi:10.1016/j.na.2005.06.031
- Jin Q: On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 2000, 69: 1603-1623. doi:10.1090/S0025-5718-00-01199-6
- Mahale P, Nair MT: General source conditions for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2007, 28: 111-126. doi:10.1080/01630560701189929
- Semenova EV: Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 2010, 4: 444-454.
- Argyros IK: Convergence and Application of Newton-Type Iterations. Springer, New York; 2008.
- Argyros IK: Approximating solutions of equations using Newton's method with a modified Newton's method iterate as a starting point. Rev. Anal. Numér. Théor. Approx. 2007, 36: 123-138.
- Argyros IK: A semilocal convergence for directional Newton methods. Math. Comput. 2011, 80: 327-343.
- Argyros IK, Hilout S: Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28: 364-387. doi:10.1016/j.jco.2011.12.003
- Argyros IK, Cho YJ, Hilout S: Numerical Methods for Equations and Its Applications. CRC Press, New York; 2012.
- Tautenhahn U, Jin Q: Tikhonov regularization and a posteriori rule for solving nonlinear ill-posed problems. Inverse Probl. 2003, 19: 1-21. doi:10.1088/0266-5611/19/1/301