Open Access

Expanding the applicability of Lavrentiev regularization methods for ill-posed problems

Boundary Value Problems 2013, 2013:114

DOI: 10.1186/1687-2770-2013-114

Received: 29 January 2013

Accepted: 18 April 2013

Published: 7 May 2013

Abstract

In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, with expanding the applicability of this method by weakening the popular Lipschitz-type hypotheses considered in earlier studies such as (Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 26:35-48, 2005; Bakushinskii and Smirnova in Nonlinear Anal. 64:1255-1261, 2006; Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 28:13-25, 2007; Jin in Math. Comput. 69:1603-1623, 2000; Mahale and Nair in ANZIAM J. 51:191-217, 2009). Numerical examples are given to show that our convergence criteria are weaker and our error analysis tighter, at lower computational cost, than in the corresponding works cited above.

MSC: 65F22, 65J15, 65J22, 65M30, 47A52.

Keywords

Lavrentiev regularization method; Hilbert space; ill-posed problems; stopping index; Fréchet derivative; source function; boundary value problem

1 Introduction

In this paper, we are interested in obtaining a stable approximate solution for a nonlinear ill-posed operator equation of the form
$$F(x) = y,$$
(1.1)
where $F : D(F) \subseteq X \to X$ is a monotone operator and $X$ is a Hilbert space. We denote the inner product and the corresponding norm on $X$ by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $U(x, r)$ stand for the open ball in $X$ with center $x \in X$ and radius $r > 0$. Note that $F$ is a monotone operator if it satisfies the relation
$$\langle F(x_1) - F(x_2),\, x_1 - x_2\rangle \ge 0$$
(1.2)

for all $x_1, x_2 \in D(F)$.

We assume, throughout this paper, that $y^\delta \in X$ is the available noisy data with
$$\|y - y^\delta\| \le \delta,$$
(1.3)
and (1.1) has a solution $\hat x$. Since (1.1) is ill-posed, its solution need not depend continuously on the data, i.e., small perturbations in the data can cause large deviations in the solution, so regularization methods are used [1-8]. Since $F$ is monotone, Lavrentiev regularization is used to obtain a stable approximate solution of (1.1): the approximate solution is obtained as a solution of the equation
$$F(x) + \alpha(x - x_0) = y^\delta,$$
(1.4)

where $\alpha > 0$ is the regularization parameter and $x_0$ is an initial guess for the solution $\hat x$.
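As a simple illustration (ours, not from the original analysis), consider the linear case $F(x) = Ax$ with $A$ a bounded positive semi-definite operator, which is monotone by (1.2). Then (1.4) can be solved explicitly:
$$Ax + \alpha(x - x_0) = y^\delta \quad\Longleftrightarrow\quad x_\alpha^\delta = (A + \alpha I)^{-1}(y^\delta + \alpha x_0),$$
so the parameter $\alpha$ shifts the spectrum of $A$ away from zero, and the inverse is applied stably even when $A$ itself is not continuously invertible.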

In [9], Bakushinskii and Smirnova proposed an iterative method
$$x_{k+1}^\delta = x_k^\delta - (\alpha_k I + A_{k,\delta})^{-1}\bigl[(F(x_k^\delta) - y^\delta) + \alpha_k(x_k^\delta - x_0)\bigr], \qquad x_0^\delta = x_0,$$
(1.5)

where $A_{k,\delta} := F'(x_k^\delta)$ and $(\alpha_k)$ is a sequence of positive real numbers satisfying $\alpha_k \to 0$ as $k \to \infty$. It is important to stop the iteration at an appropriate step, say $k = k_\delta$, and to show that $x_k^\delta$ is well defined for $0 < k \le k_\delta$ and that $x_{k_\delta}^\delta \to \hat x$ as $\delta \to 0$ (see [10]).
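In finite dimensions, one step of (1.5) is just a regularized linear solve. The following is a minimal sketch on a toy monotone problem; the operator, the noise model and the geometric choice of $\alpha_k$ are our own illustrative assumptions, not data from the paper.

```python
import numpy as np

def F(x):                        # a toy monotone operator on R^n (gradient of a convex function)
    return x + 0.1 * x**3

def F_prime(x):                  # its Frechet derivative A_{k,delta} = F'(x_k^delta)
    return np.diag(1.0 + 0.3 * x**2)

n = 5
x_hat = np.linspace(0.1, 0.5, n)             # "exact" solution, used only to manufacture data
y = F(x_hat)
delta = 1e-3
rng = np.random.default_rng(0)
y_delta = y + delta * rng.standard_normal(n) / np.sqrt(n)   # noisy data, ||y - y_delta|| ~ delta

x0 = np.zeros(n)                             # initial guess
x = x0.copy()
alpha0, q = 1.0, 0.5                         # alpha_k = alpha0 * q**k; satisfies (2.3) with mu = 1/q
for k in range(15):
    alpha_k = alpha0 * q**k
    A_k = F_prime(x)                         # iteration (1.5): one regularized Newton-type step
    x = x - np.linalg.solve(A_k + alpha_k * np.eye(n),
                            (F(x) - y_delta) + alpha_k * (x - x0))

print("error:", np.linalg.norm(x - x_hat))
```

In practice the loop would be terminated by a stopping rule such as (3.1) below rather than after a fixed number of steps.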

In [9, 11, 12], Bakushinskii and Smirnova chose the stopping index k δ by requiring it to satisfy
$$\|F(x_{k_\delta}^\delta) - y^\delta\| \le 2\tau\delta < \|F(x_k^\delta) - y^\delta\|$$
for $k = 0, 1, \ldots, k_\delta - 1$ and $\tau > 1$. In fact, they showed that $x_{k_\delta}^\delta \to \hat x$ as $\delta \to 0$ under the following assumptions:
(1) there exists $L > 0$ such that $\|F'(x) - F'(y)\| \le L\|x - y\|$ for all $x, y \in D(F)$;

(2) there exists $p > 0$ such that
$$\frac{\alpha_k - \alpha_{k+1}}{\alpha_k\alpha_{k+1}} \le p$$
(1.6)
for all $k \in \mathbb{N}$;
(3) $\dfrac{(2 + L\sigma)\|x_0 - \hat x\|}{td} \le \dfrac{\sigma}{2}$ and $\dfrac{\|x_0 - \hat x\|}{td} \le \alpha_0$, where
$$\sigma := \frac{\tau - 1}{2}, \qquad t := p\alpha_0 + 1, \qquad d = 2\bigl(t\|x_0 - \hat x\| + p\sigma\bigr).$$

However, no error estimate was given in [9] (see [10]).

In [10], Mahale and Nair, motivated by the work of Qi-Nian Jin [13] on an iteratively regularized Gauss-Newton method, considered an alternative stopping criterion which not only ensures convergence, but also yields an order optimal error estimate under a general source condition on $\hat x - x_0$. Moreover, the condition that they imposed on $\{\alpha_k\}$ is weaker than (1.6).

In the present paper, we are motivated by [10]. In particular, we expand the applicability of the method (1.5) by weakening one of the major hypotheses in [10] (see Assumption 2.1(2) in the next section).

In Section 2, we state the basic assumptions required throughout the paper. Section 3 deals with the stopping rule and the result that establishes the existence of the stopping index. In Section 4, we prove results for the iterations based on exact data and, in Section 5, we present the error analysis for the noisy data case. The main order optimal result using the a posteriori stopping rule is given in Section 6.

2 Basic assumptions and some preliminary results

We use the following assumptions to prove the results in this paper.

Assumption 2.1
(1) There exists $r > 0$ such that $U(\hat x, r) \subseteq D(F)$ and $F : U(\hat x, r) \to X$ is Fréchet differentiable.

(2) There exists $K_0 > 0$ such that, for all $u_\theta = u + \theta(\hat x - u) \in U(\hat x, r)$, $\theta \in [0, 1]$, and $v \in X$, there exists an element, say $\phi(\hat x, u_\theta, v) \in X$, satisfying
$$[F'(\hat x) - F'(u_\theta)]v = F'(u_\theta)\phi(\hat x, u_\theta, v), \qquad \|\phi(\hat x, u_\theta, v)\| \le K_0\|v\|\,\|\hat x - u_\theta\|$$
for all $u_\theta \in U(\hat x, r)$ and $v \in X$.

(3) $\|(F'(u) + \alpha I)^{-1}F'(u_\theta)\| \le 1$ for all $u, u_\theta \in U(\hat x, r)$ and $\alpha > 0$.

(4) $\|(F'(u) + \alpha I)^{-1}\| \le \dfrac{1}{\alpha}$ for all $u \in U(\hat x, r)$ and $\alpha > 0$.

The condition (2) in Assumption 2.1 weakens the popular hypotheses given in [10, 14] and [15].

Assumption 2.2 There exists a constant $K > 0$ such that, for all $x, u \in U(\hat x, r)$ and $v \in X$, there exists an element denoted by $P(x, u, v) \in X$ satisfying
$$[F'(x) - F'(u)]v = F'(u)P(x, u, v), \qquad \|P(x, u, v)\| \le K\|v\|\,\|x - u\|.$$

Clearly, Assumption 2.2 implies Assumption 2.1(2) with $K_0 = K$, but not necessarily vice versa. Note that $K_0 \le K$ holds in general and that $K/K_0$ can be arbitrarily large [16-20]. Indeed, there are many classes of operators satisfying Assumption 2.1(2) but not Assumption 2.2 (see the numerical examples at the end of this study). Moreover, if $K_0$ is sufficiently smaller than $K$, which can happen since $K/K_0$ can be arbitrarily large, then the results obtained in this study provide a tighter error analysis than the one in [10].

Finally, note that the computation of the constant $K$ is more expensive than the computation of $K_0$.

We need the following auxiliary results based on Assumption 2.1.

Proposition 2.3 For any $u \in U(\hat x, r)$ and $\alpha > 0$,
$$\bigl\|(F'(u) + \alpha I)^{-1}\bigl[F(\hat x) - F(u) - F'(u)(\hat x - u)\bigr]\bigr\| \le \frac{3K_0}{2}\|\hat x - u\|^2.$$
Proof Using the fundamental theorem of integration, for any $u \in U(\hat x, r)$, we get
$$F(\hat x) - F(u) = \int_0^1 F'(u + t(\hat x - u))(\hat x - u)\,dt.$$
Hence, by Assumption 2.1(2),
$$\begin{aligned} F(\hat x) - F(u) - F'(u)(\hat x - u) &= \int_0^1\bigl[F'(u + t(\hat x - u)) - F'(\hat x) + F'(\hat x) - F'(u)\bigr](\hat x - u)\,dt \\ &= \int_0^1 F'(\hat x)\bigl[\phi(u + t(\hat x - u), \hat x, \hat x - u) - \phi(u, \hat x, \hat x - u)\bigr]\,dt. \end{aligned}$$
Then, by (2) and (3) in Assumption 2.1 and the inequality $\|(F'(u) + \alpha I)^{-1}F'(\hat x)\| \le 1$ (Assumption 2.1(3) with $u_\theta = \hat x$), we obtain in turn
$$\begin{aligned} \bigl\|(F'(u) + \alpha I)^{-1}\bigl[F(\hat x) - F(u) - F'(u)(\hat x - u)\bigr]\bigr\| &\le \int_0^1\bigl(\|\phi(u + t(\hat x - u), \hat x, \hat x - u)\| + \|\phi(u, \hat x, \hat x - u)\|\bigr)\,dt \\ &\le \int_0^1 K_0\|\hat x - u\|^2 t\,dt + K_0\|\hat x - u\|^2 = \frac{3K_0}{2}\|\hat x - u\|^2. \end{aligned}$$

This completes the proof. □

Proposition 2.4 For any $u \in U(\hat x, r)$ and $\alpha > 0$,
$$\bigl\|\alpha\bigl[(F'(\hat x) + \alpha I)^{-1} - (F'(u) + \alpha I)^{-1}\bigr]\bigr\| \le K_0\|\hat x - u\|.$$
(2.1)
Proof Let $T_{\hat x, u} = \alpha\bigl((F'(\hat x) + \alpha I)^{-1} - (F'(u) + \alpha I)^{-1}\bigr)$. Then, by Assumption 2.1(2), we have
$$\begin{aligned} \|T_{\hat x, u}v\| &= \bigl\|\alpha(F'(\hat x) + \alpha I)^{-1}\bigl(F'(u) - F'(\hat x)\bigr)(F'(u) + \alpha I)^{-1}v\bigr\| \\ &= \bigl\|(F'(\hat x) + \alpha I)^{-1}F'(\hat x)\,\phi\bigl(u, \hat x, \alpha(F'(u) + \alpha I)^{-1}v\bigr)\bigr\| \le K_0\|\hat x - u\|\,\|v\| \end{aligned}$$

for all $v \in X$. This completes the proof. □

Assumption 2.5 There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(\hat x)\|$ satisfying
(1) $\lim_{\lambda \to 0}\varphi(\lambda) = 0$;

(2) $\sup_{\lambda \ge 0}\dfrac{\alpha\varphi(\lambda)}{\lambda + \alpha} \le \varphi(\alpha)$ for all $\alpha \in (0, a]$;

(3) there exists $v \in X$ with $\|v\| \le 1$ such that
$$\hat x - x_0 = \varphi(F'(\hat x))v.$$
(2.2)

Next, we assume a condition on the sequence $\{\alpha_k\}$ considered in (1.5).

Assumption 2.6 ([10], Assumption 2.6)

The sequence $\{\alpha_k\}$ of positive real numbers is such that
$$1 \le \frac{\alpha_k}{\alpha_{k+1}} \le \mu, \qquad \lim_{k \to \infty}\alpha_k = 0$$
(2.3)

for a constant $\mu > 1$.

Note that the condition (2.3) on $\{\alpha_k\}$ is weaker than (1.6) considered by Bakushinskii and Smirnova [9] (see [10]). In fact, if $\{\alpha_k\}$ satisfies (1.6), then it also satisfies (2.3) with $\mu = p\alpha_0 + 1$, but the converse need not be true, as the example below shows (see also [10]). Further, note that for such choices of $\{\alpha_k\}$, the ratio $\alpha_k/\alpha_{k+1}$ remains bounded, whereas $(\alpha_k - \alpha_{k+1})/(\alpha_k\alpha_{k+1}) \to \infty$ as $k \to \infty$. Condition (2) in Assumption 2.1 is used in the literature for the regularization of many nonlinear ill-posed problems (see [4, 7, 8, 13, 21]).
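For instance (a standard choice, not prescribed by the paper), the geometric sequence $\alpha_k = \alpha_0 q^k$ with $0 < q < 1$ satisfies (2.3) with $\mu = 1/q$, while
$$\frac{\alpha_k - \alpha_{k+1}}{\alpha_k\alpha_{k+1}} = \frac{1 - q}{\alpha_0 q^{k+1}} \longrightarrow \infty \quad (k \to \infty),$$
so no finite $p$ can satisfy (1.6) for this sequence.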

3 Stopping rule

Let $c_0 > 4$ and choose $k_\delta$ to be the first non-negative integer such that $x_k^\delta$ in (1.5) is defined for each $k \in \{0, 1, 2, \ldots, k_\delta\}$ and
$$\alpha_{k_\delta}\bigl\|(A_{k_\delta,\delta} + \alpha_{k_\delta}I)^{-1}\bigl(F(x_{k_\delta}^\delta) - y^\delta\bigr)\bigr\| \le c_0\delta.$$
(3.1)
In the following, we establish the existence of such a $k_\delta$. First, we consider the positive integer $N \in \mathbb{N}$ satisfying
$$\alpha_N \le \frac{(c - 1)\delta}{\|x_0 - \hat x\|} < \alpha_k$$
(3.2)

for all $k \in \{0, 1, \ldots, N - 1\}$, where $c > 1$ and $\alpha_0 > (c - 1)\delta/\|x_0 - \hat x\|$.
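In a finite-dimensional computation, rule (3.1) can be checked directly along the iterates. The sketch below is illustrative; the function names and the sample value of $c_0$ are our assumptions (the rule itself only requires $c_0 > 4$, while Theorem 5.4 below asks for $c_0 > 94/3$).

```python
import numpy as np

def stopping_index(xs, alphas, F, F_prime, y_delta, delta, c0=32.0):
    """Return the first k with alpha_k * ||(A_k + alpha_k I)^{-1}(F(x_k) - y_delta)|| <= c0*delta,
    i.e., the a posteriori rule (3.1); return None if no iterate qualifies."""
    for k, (x, a) in enumerate(zip(xs, alphas)):
        A = F_prime(x)                                   # A_k = F'(x_k^delta)
        r = np.linalg.solve(A + a * np.eye(A.shape[0]), F(x) - y_delta)
        if a * np.linalg.norm(r) <= c0 * delta:
            return k
    return None
```

Here `xs` and `alphas` would be the iterates and regularization parameters produced by (1.5), as in the sketch of Section 1.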

The following technical lemma from [10] is used to prove some of the results of this paper.

Lemma 3.1 ([10], Lemma 3.1)

Let $a > 0$ and $b \ge 0$ be such that $4ab \le 1$ and $\theta := \bigl(1 - \sqrt{1 - 4ab}\bigr)/(2a)$. Let $\theta_1, \ldots, \theta_n$ be non-negative real numbers such that $\theta_{k+1} \le a\theta_k^2 + b$ and $\theta_0 \le \theta$. Then $\theta_k \le \theta$ for all $k = 1, 2, \ldots, n$.
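A quick numeric illustration of the lemma (the values of $a$ and $b$ are arbitrary test choices):

```python
import math

a, b = 1.0, 0.2                                   # 4ab = 0.8 <= 1
theta = (1 - math.sqrt(1 - 4 * a * b)) / (2 * a)  # theta is a fixed point of t -> a*t**2 + b
t = theta                                         # worst admissible start: theta_0 = theta
for k in range(50):
    t = a * t * t + b                             # the recurrence of Lemma 3.1 (with equality)
    assert t <= theta + 1e-12                     # iterates never escape [0, theta]
print("all iterates stayed below theta =", theta)
```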

The rest of the results in this paper can be proved along the same lines as the proofs in [10]. To keep the paper as self-contained as possible, we present the proof of one of them and refer the reader to [10] for the proofs of the rest.

Theorem 3.2 ([10], Theorem 3.2)

Let (1.2), (1.3), (2.3) and Assumption 2.1 be satisfied. Let N be as in (3.2) for some $c > 1$ with $6cK_0\|x_0 - \hat x\|/(c - 1) \le 1$. Then $x_k^\delta$ is defined iteratively for each $k \in \{0, 1, \ldots, N\}$ and
$$\|x_k^\delta - \hat x\| \le \frac{2c\|x_0 - \hat x\|}{c - 1}$$
(3.3)
for all $k \in \{0, 1, \ldots, N\}$. In particular, if $r > 2c\|x_0 - \hat x\|/(c - 1)$, then $x_k^\delta \in U(\hat x, r)$ for $k \in \{0, 1, \ldots, N\}$. Moreover,
$$\alpha_N\bigl\|(A_{N,\delta} + \alpha_N I)^{-1}\bigl(F(x_N^\delta) - y^\delta\bigr)\bigr\| \le c_0\delta$$
(3.4)

for $c_0 := \frac{7}{3}c + 1$.

Proof We show (3.3) by induction. It is obvious that (3.3) holds for $k = 0$. Now, assume that (3.3) holds for some $k \in \{0, 1, \ldots, N - 1\}$. Then it follows from (1.5) that
$$\begin{aligned} x_{k+1}^\delta - \hat x &= x_k^\delta - \hat x - (A_{k,\delta} + \alpha_k I)^{-1}\bigl[F(x_k^\delta) - y^\delta + \alpha_k(x_k^\delta - x_0)\bigr] \\ &= (A_{k,\delta} + \alpha_k I)^{-1}\bigl((A_{k,\delta} + \alpha_k I)(x_k^\delta - \hat x) - \bigl[F(x_k^\delta) - y^\delta + \alpha_k(x_k^\delta - x_0)\bigr]\bigr) \\ &= (A_{k,\delta} + \alpha_k I)^{-1}\bigl[A_{k,\delta}(x_k^\delta - \hat x) + y^\delta - F(x_k^\delta) + \alpha_k(x_0 - \hat x)\bigr] \\ &= \alpha_k(A_{k,\delta} + \alpha_k I)^{-1}(x_0 - \hat x) + (A_{k,\delta} + \alpha_k I)^{-1}(y^\delta - y) \\ &\quad + (A_{k,\delta} + \alpha_k I)^{-1}\bigl[F(\hat x) - F(x_k^\delta) + A_{k,\delta}(x_k^\delta - \hat x)\bigr]. \end{aligned}$$
(3.5)
Using (1.3), the estimates $\|(A_{k,\delta} + \alpha_k I)^{-1}\| \le 1/\alpha_k$, $\|(A_{k,\delta} + \alpha_k I)^{-1}A_{k,\delta}\| \le 1$ and Proposition 2.3, we have
$$\bigl\|\alpha_k(A_{k,\delta} + \alpha_k I)^{-1}(x_0 - \hat x) + (A_{k,\delta} + \alpha_k I)^{-1}(y^\delta - y)\bigr\| \le \|x_0 - \hat x\| + \frac{\delta}{\alpha_k}$$
and
$$\bigl\|(A_{k,\delta} + \alpha_k I)^{-1}\bigl[F(\hat x) - F(x_k^\delta) + A_{k,\delta}(x_k^\delta - \hat x)\bigr]\bigr\| \le \frac{3K_0}{2}\|x_k^\delta - \hat x\|^2.$$
Thus we have
$$\|x_{k+1}^\delta - \hat x\| \le \|x_0 - \hat x\| + \frac{\delta}{\alpha_k} + \frac{3K_0}{2}\|x_k^\delta - \hat x\|^2.$$
But, by (3.2), $\delta/\alpha_k \le \|x_0 - \hat x\|/(c - 1)$ and so
$$\|x_{k+1}^\delta - \hat x\| \le \frac{c\|x_0 - \hat x\|}{c - 1} + \frac{3K_0}{2}\|x_k^\delta - \hat x\|^2,$$
which leads to the recurrence relation
$$\theta_{k+1} \le a\theta_k^2 + b,$$
where
$$\theta_k = \|x_k^\delta - \hat x\|, \qquad a = \frac{3K_0}{2}, \qquad b = \frac{c\|x_0 - \hat x\|}{c - 1}.$$
From the hypothesis of the theorem, we have $4ab = \frac{6cK_0\|x_0 - \hat x\|}{c - 1} \le 1$. It is obvious that
$$\theta_0 = \|x_0 - \hat x\| \le \theta := \frac{1 - \sqrt{1 - 4ab}}{2a} = \frac{2b}{1 + \sqrt{1 - 4ab}} \le 2b = \frac{2c\|x_0 - \hat x\|}{c - 1}.$$
Hence, by Lemma 3.1, we get
$$\|x_k^\delta - \hat x\| \le \theta \le \frac{2c\|x_0 - \hat x\|}{c - 1}$$
(3.6)

for all $k \in \{0, 1, \ldots, N\}$. In particular, if $r > 2c\|x_0 - \hat x\|/(c - 1)$, then $x_k^\delta \in U(\hat x, r)$ for all $k \in \{0, 1, \ldots, N\}$.

Next, let $\gamma = \alpha_N\bigl\|(A_{N,\delta} + \alpha_N I)^{-1}\bigl(F(x_N^\delta) - y^\delta\bigr)\bigr\|$. Then, using the estimates
$$\bigl\|\alpha_N(A_{N,\delta} + \alpha_N I)^{-1}\bigr\| \le 1, \qquad \bigl\|\alpha_N(A_{N,\delta} + \alpha_N I)^{-1}A_{N,\delta}\bigr\| \le \alpha_N$$
and Proposition 2.3, we have
$$\begin{aligned} \gamma &\le \delta + \alpha_N\bigl\|(A_{N,\delta} + \alpha_N I)^{-1}\bigl(F(x_N^\delta) - y + A_{N,\delta}(x_N^\delta - \hat x) - A_{N,\delta}(x_N^\delta - \hat x)\bigr)\bigr\| \\ &= \delta + \alpha_N\bigl\|(A_{N,\delta} + \alpha_N I)^{-1}\bigl[F(x_N^\delta) - F(\hat x) - A_{N,\delta}(x_N^\delta - \hat x) + A_{N,\delta}(x_N^\delta - \hat x)\bigr]\bigr\| \\ &\le \delta + \alpha_N\Bigl[\frac{3K_0\|x_N^\delta - \hat x\|^2}{2} + \|x_N^\delta - \hat x\|\Bigr] = \delta + \alpha_N\|x_N^\delta - \hat x\|\Bigl[1 + \frac{3K_0\|x_N^\delta - \hat x\|}{2}\Bigr] \\ &\le \delta + \frac{2\alpha_N c\|x_0 - \hat x\|}{c - 1}\Bigl[1 + \frac{3K_0 c\|x_0 - \hat x\|}{c - 1}\Bigr] \le \delta + 2c\delta\Bigl[1 + \frac{1}{6}\Bigr] = \Bigl(\frac{7c}{3} + 1\Bigr)\delta. \end{aligned}$$
(3.7)

Therefore, we have $\alpha_N\|(A_{N,\delta} + \alpha_N I)^{-1}(F(x_N^\delta) - y^\delta)\| \le c_0\delta$, where $c_0 := \frac{7}{3}c + 1$. This completes the proof. □

4 Error bound for the case of noise-free data

Let
$$x_{k+1} = x_k - (A_k + \alpha_k I)^{-1}\bigl[F(x_k) - y + \alpha_k(x_k - x_0)\bigr]$$
(4.1)

for all $k \ge 0$, where $A_k := F'(x_k)$.

We show that each $x_k$ is well defined and belongs to $U(\hat x, r)$ for $r > 2\|x_0 - \hat x\|$. For this, we make use of the following lemma.

Lemma 4.1 ([10], Lemma 4.1)

Let Assumption 2.1 hold. Suppose that, for all $k \in \{0, 1, \ldots, n\}$, $x_k$ in (4.1) is well defined and $\rho_k := \bigl\|\alpha_k(A_k + \alpha_k I)^{-1}(x_0 - \hat x)\bigr\|$ for some $n \in \mathbb{N}$. Then we have
$$\rho_k - \frac{3K_0\|x_k - \hat x\|^2}{2} \le \|x_{k+1} - \hat x\| \le \rho_k + \frac{3K_0\|x_k - \hat x\|^2}{2}$$
(4.2)

for all $k \in \{0, 1, \ldots, n\}$.

Theorem 4.2 ([10], Theorem 4.2)

Let Assumption 2.1 hold. If $6K_0\|x_0 - \hat x\| \le 1$ and $r > 2\|x_0 - \hat x\|$, then, for all $k \in \mathbb{N}$, the iterates $x_k$ in (4.1) are well defined and
$$\|x_k - \hat x\| \le \frac{2\|x_0 - \hat x\|}{1 + \sqrt{1 - 6K_0\|x_0 - \hat x\|}} \le 2\|x_0 - \hat x\|$$
(4.3)

for all $k \in \mathbb{N}$.

Lemma 4.3 ([10], Lemma 4.3)

Let Assumptions 2.1 and 2.6 hold and let $r > 2\|x_0 - \hat x\|$. Assume that $\|A\| \le \eta\alpha_0$, where $A := F'(\hat x)$, and $4\mu(1 + \eta^{-1})K_0\|x_0 - \hat x\| \le 1$ for some $\eta$ with $0 < \eta < 1$. Then, for all $k \in \mathbb{N}$, we have
$$\frac{1}{(1 + \eta)\mu}\|x_k - \hat x\| \le \bigl\|\alpha_k(A_k + \alpha_k I)^{-1}(x_0 - \hat x)\bigr\| \le \frac{1}{1 - \eta}\|x_k - \hat x\|$$
(4.4)
and
$$\frac{1 - \eta}{(1 + \eta)\mu}\|x_k - \hat x\| \le \|x_{k+1} - \hat x\| \le \Bigl(\frac{1}{1 - \eta} + \frac{\eta}{(1 + \eta)\mu}\Bigr)\|x_k - \hat x\|.$$
(4.5)

The following corollary follows from Lemma 4.3 by taking $\eta = 1/3$; this particular case of Lemma 4.3 is better suited for our later results.

Corollary 4.4 ([10], Corollary 4.4)

Let Assumptions 2.1 and 2.6 hold and let $r > 2\|x_0 - \hat x\|$. Assume that $\|A\| \le \alpha_0/3$ and $16\mu K_0\|x_0 - \hat x\| \le 1$. Then, for all $k \in \mathbb{N}$, we have
$$\frac{3}{4\mu}\|x_k - \hat x\| \le \bigl\|\alpha_k(A + \alpha_k I)^{-1}(x_0 - \hat x)\bigr\| \le \frac{3}{2}\|x_k - \hat x\|$$
(4.6)
and
$$\frac{\|x_k - \hat x\|}{2\mu} \le \|x_{k+1} - \hat x\| \le 2\|x_k - \hat x\|.$$

Theorem 4.5 ([10], Theorem 4.5)

Let the assumptions of Lemma 4.3 hold. If $x_0$ is chosen such that $x_0 - \hat x \in N(F'(\hat x))^\perp$, then $\lim_{k \to \infty}x_k = \hat x$.

Lemma 4.6 ([10], Lemma 4.6)

Let the assumptions of Lemma 4.3 hold for η satisfying
$$\Bigl(1 - \frac{1 - \eta}{(1 + \eta)\mu}\Bigr)\Bigl[1 + \frac{(2\mu - 1)\eta}{\eta + 2\mu}\Bigr] + 2\eta < \frac{4}{3}.$$
(4.7)
Then, for all $k, l \in \mathbb{N}\cup\{0\}$ with $k \le l$, we have
$$\|x_l - \hat x\| \le c_\eta\Bigl[\|x_k - \hat x\| + \frac{\bigl\|\alpha_l(A + \alpha_l I)^{-1}(F(x_l) - y)\bigr\|}{\alpha_k}\Bigr],$$
where
$$c_\eta := (1 - b_\eta)^{-1}\max\Bigl\{\mu,\ 1 + \frac{(3\varepsilon + 1)\eta}{4(1 - \eta)}\Bigr\}, \qquad b_\eta := \frac{(3\varepsilon + 1)\eta}{1 - \eta} + \frac{3\varepsilon a}{4}, \qquad \varepsilon := \frac{1 - \sqrt{1 - a}}{\sqrt{a}}, \qquad a := \frac{\eta}{(1 + \eta)\mu}.$$

Remark 4.7 ([10], Remark 4.7)

It can be seen that (4.7) is satisfied if $\eta \le 1/3 + 1/24$.

Now, if we take $\eta = 1/3$, that is, $K_0\|x_0 - \hat x\|\mu \le 1/16$, in Lemma 4.6, then it takes the following form.

Lemma 4.8 ([10], Lemma 4.8)

Let the assumptions of Lemma 4.3 hold with $\eta = 1/3$. Then, for all $l \ge k \ge 0$, we have
$$\|x_l - \hat x\| \le c_{1/3}\Bigl[\|x_k - \hat x\| + \frac{\bigl\|\alpha_l(A + \alpha_l I)^{-1}(F(x_l) - y)\bigr\|}{\alpha_k}\Bigr],$$
where
$$c_{1/3} = \Bigl[\frac{8\mu - (8\mu + 1)3\varepsilon}{16\mu}\Bigr]^{-1}\max\Bigl\{\mu,\ 1 + \frac{3\varepsilon + 1}{8}\Bigr\}, \qquad \varepsilon := \frac{\sqrt{4\mu}}{4\mu + \sqrt{4\mu(4\mu - 1)}}.$$

5 Error analysis with noisy data

The first result in this section gives an error estimate for $\|x_k^\delta - x_k\|$ under Assumption 2.1, where $k = 0, 1, 2, \ldots, N$.

Lemma 5.1 ([10], Lemma 5.1)

Let Assumption 2.1 hold and let $K_0\|x_0 - \hat x\| \le 1/m$, where $m > (7 + \sqrt{73})/2$, and let N be the integer satisfying (3.2) with
$$c > \frac{m^2 - 4m - 6}{m^2 - 7m - 6}.$$
Then, for all $k \in \{0, 1, \ldots, N\}$, we have
$$\|x_k^\delta - x_k\| \le \frac{\delta}{(1 - \kappa)\alpha_k},$$
(5.1)
where
$$\kappa := \frac{1}{m}\Bigl(4 + \frac{3c}{c - 1} + \frac{6}{m}\Bigr).$$

If we take m = 8 in Lemma 5.1, then we get the following corollary as a particular case. We make use of it in the error analysis that follows.

Corollary 5.2 ([10], Corollary 5.2)

Let Assumption 2.1 hold and let $16K_0\|x_0 - \hat x\| \le 1$. Let N be the integer defined by (3.2) with $c > 13$. Then, for all $k \in \{0, 1, \ldots, N\}$, we have
$$\|x_k^\delta - x_k\| \le \frac{\delta}{(1 - \kappa)\alpha_k},$$
where
$$\kappa := \frac{31c - 19}{32(c - 1)}.$$

Lemma 5.3 ([10], Lemma 5.3)

Let the assumptions of Lemma 5.1 hold. Then we have
$$\alpha_{k_\delta}\bigl\|(A + \alpha_{k_\delta}I)^{-1}\bigl(F(x_{k_\delta}) - y\bigr)\bigr\| \le c_1\delta.$$
Moreover, if $k_\delta > 0$, then, for all $0 \le k < k_\delta$, we have
$$\alpha_k\bigl\|(A + \alpha_k I)^{-1}\bigl(F(x_k) - y\bigr)\bigr\| \ge c_2\delta,$$
where
$$c_1 = \Bigl(1 + \frac{2cK_0\|x_0 - \hat x\|}{c - 1}\Bigr)\Bigl(c_0 + \frac{2 - \kappa}{1 - \kappa} + \frac{3K_0\mu\|x_0 - \hat x\|}{2(1 - \kappa)^2(c - 1)}\Bigr), \qquad c_2 = \frac{c_0 - \dfrac{2 - \kappa}{1 - \kappa} - \dfrac{3K_0\|x_0 - \hat x\|}{2(1 - \kappa)^2(c - 1)}}{1 + \dfrac{2cK_0\|x_0 - \hat x\|}{c - 1}}$$

with $c_0 = \frac{7}{3}c + 1$ and κ as in Lemma 5.1.

Theorem 5.4 ([10], Theorem 5.4)

Let Assumptions 2.1 and 2.6 hold. If $16K_0\mu\|x_0 - \hat x\| \le 1$ and the integer $k_\delta$ is chosen according to the stopping rule (3.1) with $c_0 > \frac{94}{3}$, then we have
$$\|x_{k_\delta}^\delta - \hat x\| \le \xi\inf\Bigl\{\|x_k - \hat x\| + \frac{\delta}{\alpha_k} : k \ge 0\Bigr\},$$
(5.2)

where $\xi = \max\bigl\{2\mu\varrho,\ c_{1/3}c_1 + \frac{1}{1 - \kappa},\ c\bigr\}$ and $\varrho := 1 + \frac{\mu(1 + 3K_0\|x_0 - \hat x\|)}{c_2(1 - \kappa)}$, with $c_{1/3}$ and κ as in Lemma 4.8 and Corollary 5.2, respectively, and $c_1$, $c_2$ as in Lemma 5.3.

6 Order optimal result with an a posteriori stopping rule

In this section, we show the convergence $x_{k_\delta}^\delta \to \hat x$ as $\delta \to 0$ and also give an order optimal error estimate for $\|x_{k_\delta}^\delta - \hat x\|$.

Theorem 6.1 ([10], Theorem 6.1)

Let the assumptions of Theorem 5.4 hold and let $k_\delta$ be the integer chosen by (3.1). If $x_0$ is chosen such that $x_0 - \hat x \in N(F'(\hat x))^\perp$, then we have $\lim_{\delta \to 0}x_{k_\delta}^\delta = \hat x$. Moreover, if Assumption 2.5 is satisfied, then we have
$$\|x_{k_\delta}^\delta - \hat x\| \le \tilde{\xi}\mu\,\psi^{-1}(\delta),$$

where $\tilde{\xi} := 8\mu\xi/3$ with ξ as in Theorem 5.4 and $\psi : (0, \varphi(a)] \to (0, a\varphi(a)]$ is defined by $\psi(\lambda) := \lambda\varphi^{-1}(\lambda)$, $\lambda \in (0, \varphi(a)]$.

Proof From (4.6) and (5.2), we get
$$\|x_{k_\delta}^\delta - \hat x\| \le \xi_0\inf\Bigl\{\bigl\|\alpha_k(A + \alpha_k I)^{-1}(x_0 - \hat x)\bigr\| + \frac{\delta}{\alpha_k} : k = 0, 1, \ldots\Bigr\},$$
(6.1)
where $\xi_0 = \frac{4\mu}{3}\max\bigl\{2\mu\bigl(1 + \frac{\mu(1 + 3K_0\|x_0 - \hat x\|)}{c_2(1 - \kappa)}\bigr),\ c_{1/3}c_1 + \frac{1}{1 - \kappa},\ c\bigr\} = \frac{4\mu}{3}\xi$. Now, we choose an integer $m_\delta$ such that $m_\delta = \max\{k : \alpha_k \ge \sqrt{\delta}\}$. Then we have
$$\|x_{k_\delta}^\delta - \hat x\| \le \xi_0\Bigl[\bigl\|\alpha_{m_\delta}(A + \alpha_{m_\delta}I)^{-1}(x_0 - \hat x)\bigr\| + \frac{\delta}{\alpha_{m_\delta}}\Bigr].$$
(6.2)
Note that $\frac{\delta}{\alpha_{m_\delta}} \le \sqrt{\delta}$, so $\frac{\delta}{\alpha_{m_\delta}} \to 0$ as $\delta \to 0$. Therefore, by (6.2), to show that $x_{k_\delta}^\delta \to \hat x$ as $\delta \to 0$, it is enough to prove that $\|\alpha_{m_\delta}(A + \alpha_{m_\delta}I)^{-1}(x_0 - \hat x)\| \to 0$ as $\delta \to 0$. Observe that, for $w \in R(F'(\hat x))$, i.e., $w = F'(\hat x)u$ for some $u \in X$, we have $\|\alpha_{m_\delta}(A + \alpha_{m_\delta}I)^{-1}w\| \le \alpha_{m_\delta}\|u\| \to 0$ as $\delta \to 0$. Now, since $R(F'(\hat x))$ is a dense subset of $N(F'(\hat x))^\perp$, it follows that $\|\alpha_{m_\delta}(A + \alpha_{m_\delta}I)^{-1}(x_0 - \hat x)\| \to 0$ as $\delta \to 0$. Using Assumption 2.5, we get
$$\bigl\|\alpha_k(A + \alpha_k I)^{-1}(x_0 - \hat x)\bigr\| \le \varphi(\alpha_k).$$
(6.3)
So, by (6.1) and (6.3), we obtain
$$\|x_{k_\delta}^\delta - \hat x\| \le \xi_0\inf\Bigl\{\varphi(\alpha_k) + \frac{\delta}{\alpha_k} : k = 0, 1, \ldots\Bigr\}.$$
(6.4)
Choose $\hat k_\delta$ such that
$$\varphi(\alpha_{\hat k_\delta})\alpha_{\hat k_\delta} \le \delta < \varphi(\alpha_k)\alpha_k \quad\text{for } k = 0, 1, \ldots, \hat k_\delta - 1.$$
(6.5)
This also implies that
$$\psi\bigl(\varphi(\alpha_{\hat k_\delta})\bigr) \le \delta < \psi\bigl(\varphi(\alpha_k)\bigr) \quad\text{for } k = 0, 1, \ldots, \hat k_\delta - 1.$$
(6.6)

From (6.4), $\|x_{k_\delta}^\delta - \hat x\| \le \xi_0\bigl\{\varphi(\alpha_{\hat k_\delta}) + \frac{\delta}{\alpha_{\hat k_\delta}}\bigr\}$. Now, using (6.5) and (6.6), we get
$$\|x_{k_\delta}^\delta - \hat x\| \le \frac{2\xi_0\delta}{\alpha_{\hat k_\delta}} \le \frac{2\xi_0\mu\delta}{\alpha_{\hat k_\delta - 1}} \le 2\xi_0\mu\,\psi^{-1}(\delta) = \tilde{\xi}\mu\,\psi^{-1}(\delta).$$
This completes the proof. □

7 Numerical examples

We provide two numerical examples where $K_0 < K$.

Example 7.1 Let $X = \mathbb{R}$, $D(F) = \overline{U(0, 1)}$, $\hat x = 0$ and define the function F on D(F) by
$$F(x) = e^x - 1.$$
(7.1)
Then, using (7.1) and Assumptions 2.1(2) and 2.2, we get
$$K_0 = e - 1 < K = e.$$
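A quick numeric sanity check of these constants (our reading, stated as an assumption: in the scalar case, Assumption 2.1(2) at $\hat x = 0$ gives $\phi = (e^{-u} - 1)v$, so $K_0 = \sup_{0 < |u| \le 1}|e^{-u} - 1|/|u|$, while $K$ is taken as the Lipschitz constant of $F'(x) = e^x$ on $\overline{U(0, 1)}$):

```python
import numpy as np

# Central constant K0: sup |exp(-u) - 1| / |u| over 0 < |u| <= 1; attained at u = -1.
u = np.linspace(-1.0, 1.0, 100001)
u = u[u != 0.0]
K0 = np.max(np.abs(np.expm1(-u)) / np.abs(u))

# Lipschitz constant of F'(x) = exp(x) on [-1, 1]: sup |e^x - e^u| / |x - u| = e.
x = np.linspace(-1.0, 1.0, 1001)
num = np.abs(np.exp(x)[:, None] - np.exp(x)[None, :])
den = np.abs(x[:, None] - x[None, :]) + np.eye(x.size)  # avoid 0/0 on the diagonal
K = np.max(num / den)

print(K0, "~ e - 1 =", np.e - 1)    # ~1.7183
print(K,  "~ e     =", np.e)        # ~2.7183
```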
Example 7.2 Let $X = C[0, 1]$ (the space of continuous functions defined on $[0, 1]$, equipped with the max norm) and $D(F) = \overline{U(0, 1)}$. Define the operator F on D(F) by
$$F(h)(x) = h(x) - 5\int_0^1 x\theta\,h(\theta)^3\,d\theta.$$
(7.2)
Then the Fréchet derivative is given by
$$F'(h)[u](x) = u(x) - 15\int_0^1 x\theta\,h(\theta)^2 u(\theta)\,d\theta$$
(7.3)

for all $u \in D(F)$. Using (7.2), (7.3) and Assumptions 2.1(2), 2.2 for $\hat x = 0$, we get $K_0 = 7.5 < K = 15$. Indeed, since $\max_{x \in [0,1]}\int_0^1 x\theta\,d\theta = 1/2$ and $|h_1(\theta)^2 - h_2(\theta)^2| \le 2\|h_1 - h_2\|$ on the unit ball, we obtain $K = 15\cdot\frac{1}{2}\cdot 2 = 15$, whereas for the central condition at $\hat x = 0$ it suffices to use $\|h\|^2 \le \|h - \hat x\|$ on the unit ball, giving $K_0 = 15/2 = 7.5$.

Next, we provide an example where $K/K_0$ can be arbitrarily large.

Example 7.3 Let $X = D(F) = \mathbb{R}$, $\hat x = 0$ and define the function F on D(F) by
$$F(x) = d_0 x - d_1\sin 1 + d_1\sin e^{d_2 x},$$
(7.4)

where $d_0$, $d_1$ and $d_2$ are given parameters. Note that $F(\hat x) = F(0) = 0$. Then it can easily be seen that, for $d_2$ sufficiently large and $d_1$ sufficiently small, $K/K_0$ can be arbitrarily large.

We now present two examples where Assumption 2.2 is not satisfied, but Assumption 2.1(2) is satisfied.

Example 7.4 Let $X = D(F) = \mathbb{R}$, $\hat x = 0$ and define the function F on D(F) by
$$F(x) = \frac{x^{1 + \frac{1}{i}}}{1 + \frac{1}{i}} + c_1 x = \frac{i}{i + 1}x^{\frac{i+1}{i}} + c_1 x,$$
(7.5)
where $c_1$ is a real parameter and $i > 2$ is an integer. Then $F'(x) = x^{1/i} + c_1$ is not Lipschitz on D(F). Hence Assumption 2.2 is not satisfied. However, the central Lipschitz condition in Assumption 2.1(2) holds for $K_0 = 1$. We also have $F(\hat x) = 0$. Indeed, we have
$$\|F'(x) - F'(\hat x)\| = \bigl|x^{1/i} - \hat x^{1/i}\bigr| = \frac{|x - \hat x|}{\hat x^{\frac{i-1}{i}} + \cdots + x^{\frac{i-1}{i}}},$$
and so
$$\|F'(x) - F'(\hat x)\| \le K_0|x - \hat x|.$$
Example 7.5 We consider the integral equation
$$u(s) = f(s) + \lambda\int_a^b G(s, t)\,u(t)^{1 + \frac{1}{n}}\,dt$$
(7.6)

for all $n \in \mathbb{N}$, where f is a given continuous function satisfying $f(s) > 0$ for all $s \in [a, b]$, λ is a real number and the kernel G is continuous and positive on $[a, b] \times [a, b]$.

For example, when $G(s, t)$ is the Green kernel, the corresponding integral equation is equivalent to the boundary value problem
$$u'' = \lambda u^{1 + \frac{1}{n}}, \qquad u(a) = f(a), \qquad u(b) = f(b).$$
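A minimal fixed-point (Picard) sketch for solving (7.6) numerically; the interval, kernel, $f$, λ and $n$ below are our illustrative assumptions (a small |λ| keeps the map contractive):

```python
import numpy as np

m, n, lam = 201, 2, 0.1
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1)); w[0] = w[-1] = 0.5 / (m - 1)   # trapezoid weights
G = np.exp(-np.abs(s[:, None] - s[None, :]))                  # a continuous positive kernel
f = 1.0 + 0.5 * s                                             # f > 0 on [0, 1]

u = f.copy()                        # Picard iteration: u <- f + lam * int G u^(1+1/n)
for _ in range(200):
    u_new = f + lam * G @ (w * u**(1.0 + 1.0 / n))
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

residual = np.max(np.abs(u - f - lam * G @ (w * u**(1.0 + 1.0 / n))))
print("fixed-point residual:", residual)
```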
These types of problems have been considered in [16-20]. Equations of the form (7.6) generalize equations of the form
$$u(s) = \int_a^b G(s, t)\,u(t)^n\,dt,$$
(7.7)
which were studied in [16-20].
which was studied in [1620]. Instead of (7.6), we can try to solve the equation F ( u ) = 0 , where
F : Ω C [ a , b ] C [ a , b ] , Ω = { u C [ a , b ] : u ( s ) 0 , s [ a , b ] }
and
F ( u ) ( s ) = u ( s ) f ( s ) λ a b G ( s , t ) u ( t ) 1 + 1 / n d t .
The norm we consider is the max-norm. The derivative F′ is given by
$$F'(u)v(s) = v(s) - \lambda\Bigl(1 + \frac{1}{n}\Bigr)\int_a^b G(s, t)\,u(t)^{1/n}v(t)\,dt$$
for all $v \in C[a, b]$. First of all, we notice that F′ does not satisfy a Lipschitz-type condition on Ω. Let us consider, for instance, $[a, b] = [0, 1]$, $G(s, t) = 1$ and $y(t) = 0$. Then we have $F'(y)v(s) = v(s)$ and
$$\|F'(x) - F'(y)\| = |\lambda|\Bigl(1 + \frac{1}{n}\Bigr)\int_0^1 x(t)^{1/n}\,dt.$$
If F′ were Lipschitz, then we would have
$$\|F'(x) - F'(y)\| \le L_1\|x - y\|$$
or, equivalently, the inequality
$$\int_0^1 x(t)^{1/n}\,dt \le L_2\max_{s \in [0, 1]}x(s)$$
(7.8)
would hold for all $x \in \Omega$ and for some constant $L_2$. But this is not true. Consider, for example, the functions
$$x_j(t) = \frac{t}{j}$$
for all $j \ge 1$ and $t \in [0, 1]$. If these are substituted into (7.8), then we have
$$\frac{1}{j^{1/n}(1 + 1/n)} \le \frac{L_2}{j} \quad\Longleftrightarrow\quad j^{1 - \frac{1}{n}} \le L_2\Bigl(1 + \frac{1}{n}\Bigr)$$
for all $j \ge 1$. This inequality is not true when $j \to \infty$. Therefore, Assumption 2.2 is not satisfied in this case. However, Assumption 2.1(2) holds. To show this, let $\hat x(t) = f(t)$ and $\gamma = \min_{s \in [a, b]}f(s) > 0$. Then, for all $v \in \Omega$, we have
$$\bigl\|[F'(x) - F'(\hat x)]v\bigr\| = |\lambda|\Bigl(1 + \frac{1}{n}\Bigr)\max_{s \in [a, b]}\Bigl|\int_a^b G(s, t)\bigl(x(t)^{1/n} - f(t)^{1/n}\bigr)v(t)\,dt\Bigr| \le |\lambda|\Bigl(1 + \frac{1}{n}\Bigr)\max_{s \in [a, b]}\int_a^b G_n(s, t)\,dt,$$
where
$$G_n(s, t) = \frac{G(s, t)\,|x(t) - f(t)|}{x(t)^{\frac{n-1}{n}} + x(t)^{\frac{n-2}{n}}f(t)^{\frac{1}{n}} + \cdots + f(t)^{\frac{n-1}{n}}}\,\|v\|.$$
Hence it follows that
$$\bigl\|[F'(x) - F'(\hat x)]v\bigr\| \le \frac{|\lambda|(1 + 1/n)}{\gamma^{\frac{n-1}{n}}}\max_{s \in [a, b]}\int_a^b G(s, t)\,dt\,\|x - \hat x\|\,\|v\| \le K_0\|x - \hat x\|\,\|v\|,$$

where $K_0 = \frac{|\lambda|(1 + 1/n)}{\gamma^{(n-1)/n}}N$ and $N = \max_{s \in [a, b]}\int_a^b G(s, t)\,dt$. Thus Assumption 2.1(2) holds for sufficiently small λ.

In the following remarks, we compare our results with the corresponding ones in [10].

Remark 7.6 Note that the results in [10] were shown using Assumption 2.2, whereas we used the weaker Assumption 2.1(2) in this paper. Next, our result, Proposition 2.3, was shown with $3K_0$ replacing K; therefore, if $3K_0 < K$ (see Example 7.3), then our result is tighter. Proposition 2.4 was shown with $K_0$ replacing K; hence, if $K_0 < K$, our result is tighter. Theorem 3.2 was shown with $6K_0$ replacing $2K$; hence, if $3K_0 < K$, our result is tighter. Similar observations in our favor can be made for Lemma 4.1, Theorem 4.2 and the rest of the results in [10].

Remark 7.7 The results obtained here can also be realized for operators F satisfying an autonomous differential equation of the form
$$F'(x) = P(F(x)),$$

where $P : X \to X$ is a known continuous operator. Since $F'(\hat x) = P(F(\hat x)) = P(0)$, we can compute $K_0$ in Assumption 2.1(2) without actually knowing $\hat x$. Returning to Example 7.1, since $F'(x) = e^x = (e^x - 1) + 1 = F(x) + 1$, we can set $P(x) = x + 1$.

Declarations

Acknowledgements

Dedicated to Professor Hari M Srivastava.

This paper was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170).

Authors’ Affiliations

(1)
Department of Mathematical Sciences, Cameron University
(2)
Department of Mathematics Education and the RINS, Gyeongsang National University
(3)
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka

References

  1. Binder A, Engl HW, Groetsch CW, Neubauer A, Scherzer O: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 1994, 55: 215-235. doi:10.1080/00036819408840301
  2. Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems. Kluwer, Dordrecht; 1993.
  3. Engl HW, Kunisch K, Neubauer A: Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1989, 5: 523-540. doi:10.1088/0266-5611/5/4/007
  4. Jin Q, Hou ZY: On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1997, 13: 815-827. doi:10.1088/0266-5611/13/3/016
  5. Jin Q, Hou ZY: On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 1990, 83: 139-159.
  6. Scherzer O, Engl HW, Kunisch K: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 1993, 30: 1796-1838. doi:10.1137/0730091
  7. Tautenhahn U: Lavrentiev regularization of nonlinear ill-posed problems. Vietnam J. Math. 2004, 32: 29-41.
  8. Tautenhahn U: On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 2002, 18: 191-207. doi:10.1088/0266-5611/18/1/313
  9. Bakushinskii A, Smirnova A: Iterative regularization and generalized discrepancy principle for monotone operator equations. Numer. Funct. Anal. Optim. 2007, 28: 13-25. doi:10.1080/01630560701190315
  10. Mahale P, Nair MT: Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 2009, 51: 191-217. doi:10.1017/S1446181109000418
  11. Bakushinskii A, Smirnova A: On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2005, 26: 35-48. doi:10.1081/NFA-200051631
  12. Bakushinskii A, Smirnova A: A posteriori stopping rule for regularized fixed point iterations. Nonlinear Anal. 2006, 64: 1255-1261. doi:10.1016/j.na.2005.06.031
  13. Jin Q: On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 2000, 69: 1603-1623. doi:10.1090/S0025-5718-00-01199-6
  14. Mahale P, Nair MT: General source conditions for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2007, 28: 111-126. doi:10.1080/01630560701189929
  15. Semenova EV: Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 2010, 4: 444-454.
  16. Argyros IK: Convergence and Application of Newton-Type Iterations. Springer, New York; 2008.
  17. Argyros IK: Approximating solutions of equations using Newton’s method with a modified Newton’s method iterate as a starting point. Rev. Anal. Numér. Théor. Approx. 2007, 36: 123-138.
  18. Argyros IK: A semilocal convergence for directional Newton methods. Math. Comput. 2011, 80: 327-343.
  19. Argyros IK, Hilout S: Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28: 364-387. doi:10.1016/j.jco.2011.12.003
  20. Argyros IK, Cho YJ, Hilout S: Numerical Methods for Equations and Its Applications. CRC Press, New York; 2012.
  21. Tautenhahn U, Jin Q: Tikhonov regularization and a posteriori rule for solving nonlinear ill-posed problems. Inverse Probl. 2003, 19: 1-21. doi:10.1088/0266-5611/19/1/301

Copyright

© Argyros et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.