Open Access

Positive solutions for a system of fractional integral boundary value problem

Boundary Value Problems 2013, 2013:256

https://doi.org/10.1186/1687-2770-2013-256

Received: 9 March 2013

Accepted: 8 November 2013

Published: 25 November 2013

Abstract

In this paper, we discuss the existence of positive solutions for the system of fractional integral boundary value problems

$$\begin{cases} D_{0+}^{\alpha}u_i(t)+f_i\bigl(t,u_1(t),u_2(t)\bigr)=0, & 0<t<1,\ i=1,2,\\ u_i(0)=u_i'(0)=0,\qquad u_i(1)=\int_0^1 u_i(t)\,d\eta(t), \end{cases}$$

where $\alpha\in(2,3]$ is a real number, $D_{0+}^{\alpha}$ is the standard Riemann-Liouville fractional derivative of order $\alpha$, and $f_i\in C([0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+},\mathbb{R})$, $i=1,2$. Here $\int_0^1 u_i(t)\,d\eta(t)$ denotes the Riemann-Stieltjes integral, where $\eta(t)$ is a function of bounded variation. By virtue of some inequalities associated with Green's function, and without assuming the nonnegativity of $f_i$, we use the fixed point index theory to establish our main results. In addition, a square function and its inverse function are used to characterize the coupling behavior of $f_i$, so that $f_i$ are allowed to grow superlinearly and sublinearly.

MSC: 34B10, 34B18, 34A34, 45G15, 45M20.

Keywords

fractional integral boundary value problem; positive solution; fixed point index; superlinear and sublinear growth

1 Introduction

In this paper, we study the existence of positive solutions for the system of fractional integral boundary value problems

$$\begin{cases} D_{0+}^{\alpha}u_i(t)+f_i\bigl(t,u_1(t),u_2(t)\bigr)=0, & 0<t<1,\ i=1,2,\\ u_i(0)=u_i'(0)=0,\qquad u_i(1)=\int_0^1 u_i(t)\,d\eta(t), \end{cases}$$
(1.1)

where $\alpha\in(2,3]$ is a real number, $D_{0+}^{\alpha}$ is the standard Riemann-Liouville fractional derivative of order $\alpha$, and $f_i\in C([0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+},\mathbb{R})$, $i=1,2$. Here $\int_0^1 u_i(t)\,d\eta(t)$ denotes the Riemann-Stieltjes integral, where $\eta$ is right continuous on $[0,1)$, left continuous at $t=1$, and nondecreasing on $[0,1]$, with $\eta(0)=0$.

The subject of multi-point nonlocal boundary value problems, initiated by Il'in and Moiseev [1], has been addressed by many authors. Multi-point boundary conditions appear in certain problems of thermodynamics, elasticity, and wave propagation; see [2] and the references therein. For example, the vibrations of a guy wire of uniform cross-section, composed of N parts of different densities, can be set up as a multi-point boundary value problem (see [3]), and many problems in the theory of elastic stability can be handled by the method of multi-point problems (see [4]). On the other hand, the Riemann-Stieltjes integral $\int_0^1 u(s)\,d\eta(s)$, where $\eta$ is of bounded variation (that is, $d\eta$ can be a signed measure), includes multi-point boundary value problems and integral boundary value problems as special cases, as illustrated below. That is why many authors are particularly interested in Riemann-Stieltjes integral boundary value problems.
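For a concrete illustration (a standard special case; the particular choices here are ours rather than examples given in this paper), let $\eta$ be the step function $\eta(t)=\sum_{j=1}^{m}\beta_j\chi_{[\xi_j,1]}(t)$ with $0<\xi_1<\cdots<\xi_m<1$, where $\chi_{[\xi_j,1]}$ denotes the characteristic function of $[\xi_j,1]$. Then
$$\int_0^1 u(s)\,d\eta(s)=\sum_{j=1}^{m}\beta_j u(\xi_j),$$
so the boundary condition $u(1)=\int_0^1 u(s)\,d\eta(s)$ reduces to the multi-point condition $u(1)=\sum_{j=1}^{m}\beta_j u(\xi_j)$, while taking instead $\eta(t)=t$ gives the integral condition $u(1)=\int_0^1 u(s)\,ds$.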

Meanwhile, the modeling capabilities of fractional differential equations in engineering, science, economics, and other fields have resulted in a rapid development of the theory of fractional differential equations in the last few decades; see the recent books [5-9]. This may explain why the last few decades have witnessed an ever-growing interest in the research of such problems, with many papers in this direction published. Recently, a number of papers have dealt with the existence of solutions (or positive solutions) of nonlinear fractional differential equations by means of techniques of nonlinear analysis (fixed point theorems, Leray-Schauder theory, the upper and lower solution method, etc.); for example, see [10-18] and the references therein.

However, to the best of our knowledge, there are only a few papers dealing with systems of fractional boundary value problems. In [13] and [18], Bai and Su, respectively, considered the existence of solutions for systems of fractional differential equations and obtained some excellent results. Motivated by the works mentioned above, in this paper we discuss the existence of positive solutions for the system of fractional integral boundary value problems (1.1). It is interesting that a square function and its inverse function are used to characterize the coupling behavior of $f_i$, so that $f_i$ are allowed to grow superlinearly and sublinearly.

2 Preliminaries

We first recall some definitions and fundamental facts of fractional calculus theory, which can be found in [5-9].

Definition 2.1 (see [7, 8], [[6], pp.36-37])

The Riemann-Liouville fractional derivative of order $\alpha>0$ of a continuous function $f:(0,+\infty)\to\mathbb{R}$ is given by
$$D_{0+}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\Bigl(\frac{d}{dt}\Bigr)^{n}\int_0^t\frac{f(s)}{(t-s)^{\alpha-n+1}}\,ds,$$

where $n=[\alpha]+1$ and $[\alpha]$ denotes the integer part of the number $\alpha$, provided that the right-hand side is pointwise defined on $(0,+\infty)$.

Definition 2.2 (see [[6], Definition 2.1])

The Riemann-Liouville fractional integral of order $\alpha>0$ of a function $f:(0,+\infty)\to\mathbb{R}$ is given by
$$I_{0+}^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}f(s)\,ds,$$

provided that the right-hand side is pointwise defined on $(0,+\infty)$.
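As a quick numerical illustration of Definition 2.2 (the parameter value and the test function below are our own choices, not taken from this paper), one can check the classical identity $I_{0+}^{\alpha}t=t^{\alpha+1}/\Gamma(\alpha+2)$ by direct quadrature:

```python
# Hypothetical sanity check of Definition 2.2 with alpha = 2.5 and f(t) = t;
# for this f the fractional integral has the closed form t^(alpha+1)/Gamma(alpha+2).
from math import gamma
from scipy.integrate import quad

alpha = 2.5  # any value in (2, 3], chosen only for illustration

def rl_integral(f, t, alpha):
    """Evaluate (I_{0+}^alpha f)(t) by quadrature of the defining integral."""
    value, _ = quad(lambda s: (t - s) ** (alpha - 1) * f(s), 0.0, t)
    return value / gamma(alpha)

for t in (0.3, 0.7, 1.0):
    print(t, rl_integral(lambda s: s, t, alpha), t ** (alpha + 1) / gamma(alpha + 2))
    # the last two columns should agree up to quadrature error
```

The same quadrature-based routine can be used to explore other test functions; it is only a sketch and makes no claim about efficiency.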

From the definition of the Riemann-Liouville derivative, we can obtain the following statement.

Lemma 2.1 (see [11])

Let $\alpha>0$. If we assume $u\in C(0,1)\cap L(0,1)$, then the fractional differential equation $D_{0+}^{\alpha}u(t)=0$ has the unique solution
$$u(t)=c_1t^{\alpha-1}+c_2t^{\alpha-2}+\cdots+c_Nt^{\alpha-N},\quad c_i\in\mathbb{R},\ i=1,2,\ldots,N,$$

where $N$ is the smallest integer greater than or equal to $\alpha$.

Lemma 2.2 (see [11])

Assume that $u\in C(0,1)\cap L(0,1)$ has a fractional derivative of order $\alpha>0$ that belongs to $C(0,1)\cap L(0,1)$. Then
$$I_{0+}^{\alpha}D_{0+}^{\alpha}u(t)=u(t)+c_1t^{\alpha-1}+c_2t^{\alpha-2}+\cdots+c_Nt^{\alpha-N}\quad\text{for some }c_i\in\mathbb{R},\ i=1,2,\ldots,N,$$

where $N$ is the smallest integer greater than or equal to $\alpha$.

In what follows, we need to consider the following fractional integral boundary value problem:
$$\begin{cases} D_{0+}^{\alpha}u(t)+h\bigl(t,u(t)\bigr)=0, & 0<t<1,\\ u(0)=u'(0)=0,\qquad u(1)=\int_0^1 u(t)\,d\eta(t), \end{cases}$$
(2.1)

We then present Green's function for (2.1) and study its properties. In this paper, we always assume that the following two conditions are satisfied:

(H0) $\kappa_0:=1-\int_0^1 t^{\alpha-1}\,d\eta(t)>0$.

(H1) $h\in C([0,1]\times\mathbb{R}^{+},\mathbb{R})$ is bounded from below, i.e., there is a positive constant $M$ such that $h(t,u)\ge-M$ for all $(t,u)\in[0,1]\times\mathbb{R}^{+}$.

Lemma 2.3 Let (H0) and (H1) hold. Then problem (2.1) is equivalent to the integral equation
$$u(t)=\int_0^1 G(t,s)h\bigl(s,u(s)\bigr)\,ds,$$
where
$$G(t,s)=H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 H(\tau,s)\,d\eta(\tau),$$
(2.2)
and
$$H(t,s):=\frac{1}{\Gamma(\alpha)}\begin{cases} t^{\alpha-1}(1-s)^{\alpha-1}-(t-s)^{\alpha-1}, & 0\le s\le t\le1,\\ t^{\alpha-1}(1-s)^{\alpha-1}, & 0\le t\le s\le1. \end{cases}$$
(2.3)
Proof By Lemmas 2.1 and 2.2, we can reduce the equation in problem (2.1) to the equivalent integral equation
$$u(t)=-I_{0+}^{\alpha}h(t)+c_1t^{\alpha-1}+c_2t^{\alpha-2}+c_3t^{\alpha-3}=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+c_1t^{\alpha-1}+c_2t^{\alpha-2}+c_3t^{\alpha-3},$$
(2.4)
where $c_i$ ($i=1,2,3$) are constants and we write $h(s)$ for $h(s,u(s))$ for brevity. By $u(0)=0$, we get $c_3=0$. Thus,
$$u(t)=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+c_1t^{\alpha-1}+c_2t^{\alpha-2}.$$
(2.5)
Differentiating (2.5), we have
$$u'(t)=-\frac{\alpha-1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-2}h(s)\,ds+c_1(\alpha-1)t^{\alpha-2}+c_2(\alpha-2)t^{\alpha-3}.$$
(2.6)
By (2.6) and $u'(0)=0$, we have $c_2=0$. Then
$$u(t)=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+c_1t^{\alpha-1}.$$
(2.7)
From $u(1)=\int_0^1 u(t)\,d\eta(t)$, we arrive at
$$u(1)=-\frac{1}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}h(s)\,ds+c_1=\int_0^1 u(t)\,d\eta(t),$$
and thus
$$c_1=\frac{1}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}h(s)\,ds+\int_0^1 u(t)\,d\eta(t).$$
Therefore, we obtain from (2.7)
$$u(t)=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+\frac{t^{\alpha-1}}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}h(s)\,ds+t^{\alpha-1}\int_0^1 u(\tau)\,d\eta(\tau)=\int_0^1 H(t,s)h(s)\,ds+t^{\alpha-1}\int_0^1 u(\tau)\,d\eta(\tau),$$
(2.8)
where $H(t,s)$ is defined by (2.3). From (2.8), we have
$$\int_0^1 u(t)\,d\eta(t)=\int_0^1 d\eta(t)\int_0^1 H(t,s)h(s)\,ds+\int_0^1 t^{\alpha-1}\,d\eta(t)\cdot\int_0^1 u(t)\,d\eta(t),$$
(2.9)
and by (H0) we find
$$\int_0^1 u(t)\,d\eta(t)=\kappa_0^{-1}\int_0^1 d\eta(t)\int_0^1 H(t,s)h(s)\,ds.$$
(2.10)
Combining (2.8) and (2.10), we see that
$$u(t)=\int_0^1 H(t,s)h(s)\,ds+t^{\alpha-1}\kappa_0^{-1}\int_0^1 d\eta(\tau)\int_0^1 H(\tau,s)h(s)\,ds=\int_0^1 G(t,s)h(s)\,ds,$$
(2.11)

where $G(t,s)$ is determined by (2.2). This completes the proof. □
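The construction in Lemma 2.3 is easy to test numerically. The sketch below uses illustrative data of our own choosing ($\alpha=2.5$, $\eta(t)=t/2$, $h\equiv1$), so that $d\eta(t)=\frac12\,dt$ and (H0) holds with $\kappa_0=1-\frac{1}{2\alpha}$; it builds $G$ from (2.2)-(2.3) and checks that $u(t)=\int_0^1G(t,s)\,ds$ satisfies $u(0)=0$ and the Riemann-Stieltjes condition $u(1)=\frac12\int_0^1u(t)\,dt$.

```python
# Illustrative check of Lemma 2.3 (our own data: alpha = 2.5, eta(t) = t/2, h = 1).
from math import gamma
from scipy.integrate import quad

alpha = 2.5
kappa0 = 1.0 - 0.5 / alpha          # (H0): 1 - int_0^1 t^(alpha-1) d eta(t)

def H(t, s):
    """Kernel H(t, s) from (2.3)."""
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 1)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def G(t, s):
    """Green's function (2.2); here int_0^1 H(tau, s) d eta(tau) = 0.5 * int_0^1 H(tau, s) dtau."""
    stieltjes = 0.5 * quad(lambda tau: H(tau, s), 0.0, 1.0)[0]
    return H(t, s) + t ** (alpha - 1) * stieltjes / kappa0

def u(t):
    """Candidate solution of (2.1) with h = 1."""
    return quad(lambda s: G(t, s), 0.0, 1.0)[0]

ts = [i / 100 for i in range(101)]
us = [u(t) for t in ts]
trap = sum(0.5 * (us[i] + us[i + 1]) * 0.01 for i in range(100))
print(us[0])                    # u(0): should be ~0
print(us[-1], 0.5 * trap)       # u(1) versus 0.5 * int_0^1 u(t) dt: should agree
```

This is only a sanity sketch under the stated hypothetical data; the equivalence itself is established by the lemma above.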

Lemma 2.4 (see [[10], Lemma 3.2])

For any $(t,s)\in[0,1]\times[0,1]$, let
$$k(t):=t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\tau^{\alpha-1}(1-\tau)\,d\eta(\tau),\qquad \varphi(t):=\frac{t(1-t)^{\alpha-1}}{\Gamma(\alpha)},\qquad \mu:=(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)>0.$$
Then the following two inequalities are satisfied:

(i) $k(t)\varphi(s)\le G(t,s)\le\mu\varphi(s)$;

(ii) $H(t,s)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-t)$.
Proof (i) For $s\le t$, we have $1-s\ge1-t$; then
$$\begin{aligned}\Gamma(\alpha)H(t,s)&=t^{\alpha-1}(1-s)^{\alpha-1}-(t-s)^{\alpha-1}=(\alpha-1)\int_{t-s}^{t-ts}x^{\alpha-2}\,dx\\&\le(\alpha-1)(t-ts)^{\alpha-2}\bigl((t-ts)-(t-s)\bigr)=(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}s(1-t)\\&\le(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}s(1-s)\le(\alpha-1)s(1-s)^{\alpha-1}.\end{aligned}$$
(2.12)
On the other hand, for $t\le s$, since $\alpha>2$, we have
$$\Gamma(\alpha)H(t,s)=t^{\alpha-1}(1-s)^{\alpha-1}\le(\alpha-1)t^{\alpha-1}(1-s)^{\alpha-1}=(\alpha-1)t^{\alpha-2}\,t(1-s)^{\alpha-1}\le(\alpha-1)t^{\alpha-2}s(1-s)^{\alpha-1}\le(\alpha-1)s(1-s)^{\alpha-1}.$$
Consequently,
$$\Gamma(\alpha)G(t,s)=\Gamma(\alpha)H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\Gamma(\alpha)H(\tau,s)\,d\eta(\tau)\le(\alpha-1)s(1-s)^{\alpha-1}+\kappa_0^{-1}t^{\alpha-1}\int_0^1(\alpha-1)s(1-s)^{\alpha-1}\,d\eta(\tau)\le(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)s(1-s)^{\alpha-1}=\Gamma(\alpha)\mu\varphi(s).$$
Moreover, for $s\le t$, note that $(t-s)^{\alpha-2}\le(t-ts)^{\alpha-2}$, $(1-s)^{\alpha-2}\ge(1-s)^{\alpha-1}$, and $t^{\alpha-2}\ge t^{\alpha-1}$; then we find
$$\begin{aligned}\Gamma(\alpha)H(t,s)&=t^{\alpha-1}(1-s)^{\alpha-1}-(t-s)^{\alpha-1}=(t-ts)^{\alpha-2}(t-ts)-(t-s)^{\alpha-2}(t-s)\\&\ge(t-ts)^{\alpha-2}(t-ts)-(t-ts)^{\alpha-2}(t-s)=t^{\alpha-2}(1-s)^{\alpha-2}s(1-t)\ge t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}.\end{aligned}$$
On the other hand, for $t\le s$, we have
$$\Gamma(\alpha)H(t,s)=t^{\alpha-1}(1-s)^{\alpha-1}\ge t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}.$$
Therefore, we get
$$\Gamma(\alpha)G(t,s)=\Gamma(\alpha)H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\Gamma(\alpha)H(\tau,s)\,d\eta(\tau)\ge t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}+\kappa_0^{-1}t^{\alpha-1}\int_0^1\tau^{\alpha-1}(1-\tau)s(1-s)^{\alpha-1}\,d\eta(\tau)=s(1-s)^{\alpha-1}\Bigl[t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\tau^{\alpha-1}(1-\tau)\,d\eta(\tau)\Bigr]=\Gamma(\alpha)k(t)\varphi(s).$$

(ii) If $t\le s$, since $\alpha>2$, we have $(1-s)^{\alpha-1}\le(1-t)^{\alpha-1}\le1-t$ and
$$H(t,s)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-s)^{\alpha-1}\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-t).$$
For $s\le t$, we have $1-s\ge1-t$; then by (2.12) we get
$$H(t,s)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}s(1-t)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}t(1-t)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-t).$$

This completes the proof. □

Lemma 2.5 Let
$$\kappa_1:=\frac{\alpha\Gamma(\alpha+1)}{\Gamma(2\alpha+2)}+\frac{\kappa_0^{-1}\Gamma(\alpha+1)\int_0^1\tau^{\alpha-1}(1-\tau)\,d\eta(\tau)}{\Gamma(2\alpha+1)}\quad\text{and}\quad\kappa_2:=\frac{(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)}{\Gamma(\alpha+2)}.$$
Then the following inequality holds:
$$\kappa_1\varphi(s)\le\int_0^1 G(t,s)\varphi(t)\,dt\le\kappa_2\varphi(s),\quad s\in[0,1].$$
(2.13)
Proof By (i) of Lemma 2.4, we have
$$\Bigl[\frac{\alpha\Gamma(\alpha+1)}{\Gamma(2\alpha+2)}+\frac{\kappa_0^{-1}\Gamma(\alpha+1)\int_0^1\tau^{\alpha-1}(1-\tau)\,d\eta(\tau)}{\Gamma(2\alpha+1)}\Bigr]\varphi(s)=\int_0^1\Bigl[t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\tau^{\alpha-1}(1-\tau)\,d\eta(\tau)\Bigr]\varphi(s)\varphi(t)\,dt\le\int_0^1 G(t,s)\varphi(t)\,dt\le\int_0^1(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)\varphi(s)\varphi(t)\,dt=\frac{(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)}{\Gamma(\alpha+2)}\varphi(s),$$

where the first and last equalities follow from the Beta-function evaluations $\int_0^1 t^{\alpha}(1-t)^{\alpha}\,dt=\frac{\Gamma(\alpha+1)^2}{\Gamma(2\alpha+2)}$, $\int_0^1 t^{\alpha}(1-t)^{\alpha-1}\,dt=\frac{\Gamma(\alpha+1)\Gamma(\alpha)}{\Gamma(2\alpha+1)}$, and $\int_0^1\varphi(t)\,dt=\frac{1}{\Gamma(\alpha)}\int_0^1 t(1-t)^{\alpha-1}\,dt=\frac{1}{\Gamma(\alpha+2)}$. Thus we obtain (2.13), as claimed. This completes the proof. □
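Both Lemma 2.4(i) and the estimate (2.13) can be probed numerically. The sketch below (again with the illustrative data $\alpha=2.5$, $\eta(t)=t/2$; these choices are ours, not the paper's) evaluates $k$, $\varphi$, $\mu$, $\kappa_1$, $\kappa_2$ and checks the two-sided bounds on a grid and at a few sample points.

```python
# Illustrative check of Lemma 2.4(i) and of (2.13) (our own data: alpha = 2.5, eta(t) = t/2).
from math import gamma, inf
from scipy.integrate import quad

alpha = 2.5
eta1 = 0.5                                   # eta(1)
kappa0 = 1.0 - 0.5 / alpha                   # 1 - int_0^1 t^(alpha-1) d eta(t)
c_eta = 0.5 * (1 / alpha - 1 / (alpha + 1))  # int_0^1 tau^(alpha-1)(1 - tau) d eta(tau)
mu = (alpha - 1) * (1 + eta1 / kappa0)

def H(t, s):
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 1)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def G(t, s):
    return H(t, s) + t ** (alpha - 1) * 0.5 * quad(lambda tau: H(tau, s), 0.0, 1.0)[0] / kappa0

def k(t):
    return t ** (alpha - 1) * (1 - t) + t ** (alpha - 1) * c_eta / kappa0

def phi(t):
    return t * (1 - t) ** (alpha - 1) / gamma(alpha)

# Lemma 2.4(i): k(t)*phi(s) <= G(t, s) <= mu*phi(s) on a grid.
margin_low = margin_up = inf
for i in range(1, 40):
    for j in range(1, 40):
        t, s = i / 40, j / 40
        g = G(t, s)
        margin_low = min(margin_low, g - k(t) * phi(s))
        margin_up = min(margin_up, mu * phi(s) - g)
print(margin_low, margin_up)   # both should be >= 0 (up to rounding)

# (2.13): kappa1*phi(s) <= int_0^1 G(t, s)*phi(t) dt <= kappa2*phi(s).
kappa1 = alpha * gamma(alpha + 1) / gamma(2 * alpha + 2) \
    + gamma(alpha + 1) * c_eta / (kappa0 * gamma(2 * alpha + 1))
kappa2 = mu / gamma(alpha + 2)
for s in (0.1, 0.5, 0.9):
    integral = quad(lambda t: G(t, s) * phi(t), 0.0, 1.0)[0]
    print(kappa1 * phi(s) <= integral <= kappa2 * phi(s))   # expect True at each sample
```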

Let
$$E:=C[0,1],\qquad\|u\|:=\max_{t\in[0,1]}|u(t)|,\qquad P:=\{u\in E:u(t)\ge0,\ t\in[0,1]\}.$$

Then $(E,\|\cdot\|)$ is a real Banach space and $P$ is a cone on $E$.

The norm on $E\times E$ is defined by $\|(u,v)\|:=\|u\|+\|v\|$ for $(u,v)\in E\times E$. Note that $E\times E$ is a real Banach space under this norm, and $P\times P$ is a positive cone on $E\times E$.

By Lemma 2.3, system (1.1) is equivalent to the system of nonlinear Hammerstein integral equations
$$u_i(t)=\int_0^1 G(t,s)f_i\bigl(s,u_1(s),u_2(s)\bigr)\,ds,\quad i=1,2,$$
(2.14)

where $G(t,s)$ is defined by (2.2).

Lemma 2.6 (i) If $u(t)$ is a positive solution of (2.1), then $u(t)+w(t)$ is a positive solution of the following differential equation:
$$\begin{cases}-D_{0+}^{\alpha}u=F\bigl(t,u(t)-w(t)\bigr),\\ u(0)=u'(0)=0,\qquad u(1)=\int_0^1 u(t)\,d\eta(t),\end{cases}$$
(2.15)
where
$$F(t,x):=\begin{cases}\tilde h(t,x), & t\in[0,1],\ x\ge0,\\ \tilde h(t,0), & t\in[0,1],\ x<0,\end{cases}$$
the function $\tilde h(t,x)=h(t,x)+M$, $\tilde h:[0,1]\times\mathbb{R}^{+}\to\mathbb{R}^{+}$ is continuous, and
$$w(t):=M\int_0^1 G(t,s)\,ds,\quad t\in[0,1].$$
(2.16)

(ii) If $u^{*}(t)$ is a solution of (2.15) and $u^{*}(t)\ge w(t)$, $t\in[0,1]$, then $u(t)=u^{*}(t)-w(t)$ is a positive solution of (2.1).

Proof If $u(t)$ is a positive solution of (2.1), then
$$\begin{cases}-D_{0+}^{\alpha}u=h\bigl(t,u(t)\bigr),\\ u(0)=u'(0)=0,\qquad u(1)=\int_0^1 u(t)\,d\eta(t).\end{cases}$$
By a simple computation, we easily get $u(0)+w(0)=u'(0)+w'(0)=0$, $u(1)+w(1)=\int_0^1\bigl(u(t)+w(t)\bigr)\,d\eta(t)$ and
$$D_{0+}^{\alpha}\bigl(u(t)+w(t)\bigr)+F\bigl(t,u(t)\bigr)=D_{0+}^{\alpha}u(t)+D_{0+}^{\alpha}w(t)+h\bigl(t,u(t)\bigr)+M=D_{0+}^{\alpha}w(t)+M=D_{0+}^{\alpha}\Bigl(M\int_0^1 G(t,s)\,ds\Bigr)+M=-M+M=0,$$

i.e., $u(t)+w(t)$ satisfies (2.15). Therefore, (i) holds, as claimed. Similarly, it is easy to prove that (ii) is also satisfied. This completes the proof. □

By Lemma 2.3, we see that (2.15) is equivalent to the integral equation
$$u(t)=\int_0^1 G(t,s)F\bigl(s,u(s)-w(s)\bigr)\,ds=:(Tu)(t),$$
(2.17)

where $G(t,s)$ is determined by (2.2). Clearly, the continuity and nonnegativity of $G$ and $F$ imply that $T:P\to P$ is a completely continuous operator.

Lemma 2.7 Put $P_1:=\{u\in P:u(t)\ge\mu^{-1}k(t)\|u\|\text{ for }t\in[0,1]\}$. Then $T(P)\subset P_1$, where $\mu$ and $T$ are defined by Lemma 2.4 and (2.17), respectively.

Proof By (i) of Lemma 2.4, we easily find
$$\int_0^1 k(t)\varphi(s)F\bigl(s,u(s)-w(s)\bigr)\,ds\le(Tu)(t)=\int_0^1 G(t,s)F\bigl(s,u(s)-w(s)\bigr)\,ds\le\int_0^1(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)\varphi(s)F\bigl(s,u(s)-w(s)\bigr)\,ds,$$
and thus
$$(Tu)(t)\ge\frac{k(t)}{(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)}\int_0^1(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)\varphi(s)F\bigl(s,u(s)-w(s)\bigr)\,ds\ge\frac{k(t)}{(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)}\|Tu\|.$$

This completes the proof. □
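The cone inclusion can also be sanity-checked numerically. The sketch below uses the same illustrative data as before ($\alpha=2.5$, $\eta(t)=t/2$), with an arbitrary nonnegative density standing in for $F(s,u(s)-w(s))$, and verifies the inequality $(Tu)(t)\ge\mu^{-1}k(t)\|Tu\|$ on a grid.

```python
# Illustrative check of Lemma 2.7 (our own data: alpha = 2.5, eta(t) = t/2,
# and F(s, u(s) - w(s)) replaced by the arbitrary nonnegative density 1 + s).
from math import gamma
from scipy.integrate import quad

alpha = 2.5
kappa0 = 1.0 - 0.5 / alpha
c_eta = 0.5 * (1 / alpha - 1 / (alpha + 1))
mu = (alpha - 1) * (1 + 0.5 / kappa0)

def H(t, s):
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 1)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def G(t, s):
    return H(t, s) + t ** (alpha - 1) * 0.5 * quad(lambda tau: H(tau, s), 0.0, 1.0)[0] / kappa0

def k(t):
    return t ** (alpha - 1) * (1 - t) + t ** (alpha - 1) * c_eta / kappa0

def Tu(t):
    return quad(lambda s: G(t, s) * (1 + s), 0.0, 1.0)[0]

ts = [i / 50 for i in range(51)]
values = [Tu(t) for t in ts]
norm = max(values)                      # grid approximation of ||Tu||
print(min(v - k(t) / mu * norm for t, v in zip(ts, values)))   # should be >= 0
```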

In this paper, we assume that $f_i$ ($i=1,2$) satisfy the following condition:

(H2) $f_i(t,x,y)\in C([0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+},\mathbb{R})$ and there is a positive constant $M$ such that $f_i(t,x,y)\ge-M$ for all $(t,x,y)\in[0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+}$.

By (H2) and Lemma 2.6, (2.14) is transformed into the following system of integral equations:
$$u_i(t)=\int_0^1 G(t,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds,$$
(2.18)
where
$$F_i(t,x,y):=\begin{cases}\tilde f_i(t,x,y), & t\in[0,1],\ x,y\ge0,\\ \tilde f_i(t,0,0), & t\in[0,1],\ x,y<0,\end{cases}$$

the function $\tilde f_i(t,x,y)=f_i(t,x,y)+M$, $\tilde f_i\in C([0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+},\mathbb{R}^{+})$, and $w(t)$ is given by (2.16). By Lemma 2.6, we know that if $(u_1^{*},u_2^{*})$ is a solution of (2.18) and $u_i^{*}(t)\ge w(t)$, $t\in[0,1]$, then $(u_1,u_2)=(u_1^{*}-w,u_2^{*}-w)$ is a positive solution of (1.1).

Define the operator $A$ as follows:
$$A(u_1,u_2)(t):=\bigl(A_1(u_1,u_2),A_2(u_1,u_2)\bigr)(t),$$
(2.19)
where
$$A_i(u_1,u_2)(t)=\int_0^1 G(t,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds.$$

It is obvious that $A_i\ (i=1,2):P\times P\to P$ and $A:P\times P\to P\times P$ are completely continuous operators. Clearly, $(u_1^{*}-w,u_2^{*}-w)\in P\times P$ is a positive solution of (1.1) if and only if $(u_1^{*},u_2^{*})\in(P\times P)\setminus\{0\}$ is a fixed point of $A$ with $u_i^{*}\ge w$, $i=1,2$.

The following two lemmas on the fixed point index play an important role in our proofs.

Lemma 2.8 ([19])

Let $\Omega\subset E$ be a bounded open set, and let $A:\overline{\Omega}\cap P\to P$ be a completely continuous operator. If there exists $v_0\in P\setminus\{0\}$ such that $v-Av\ne\lambda v_0$ for all $v\in\partial\Omega\cap P$ and $\lambda\ge0$, then $i(A,\Omega\cap P,P)=0$.

Lemma 2.9 ([19])

Let $\Omega\subset E$ be a bounded open set with $0\in\Omega$. Suppose that $A:\overline{\Omega}\cap P\to P$ is a completely continuous operator. If $v\ne\lambda Av$ for all $v\in\partial\Omega\cap P$ and $0\le\lambda\le1$, then $i(A,\Omega\cap P,P)=1$.

3 The existence of positive solutions for (1.1)

We list the assumptions on $F_i$ ($i=1,2$) used in this section.

(H3) There are $c>0$ and $\xi_i>0$, $i=1,2$, satisfying $\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^{2}>1$ such that
$$F_1(t,x,y)\ge\xi_1\sqrt{y}-c,\qquad F_2(t,x,y)\ge\xi_2x^{2}-c,\quad(t,x,y)\in[0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+}.$$
(H4) There exists $Q(t):[0,1]\to(-\infty,+\infty)$ such that
$$F_i(t,x,y)\le Q(t),\quad t\in[0,1],\ i=1,2,\ x,y\in\bigl[0,M\mu\Gamma^{-1}(\alpha)(\alpha-1)\bigr],\qquad\int_0^1\varphi(s)Q(s)\,ds<M\Gamma^{-1}(\alpha)(\alpha-1).$$
(H5) There are $c>0$ and $\xi_i>0$, $i=3,4$, satisfying $2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^{2}\kappa_2^{2}<1$ such that
$$F_1(t,x,y)\le\xi_3y^{2}+c,\qquad F_2(t,x,y)\le\xi_4\sqrt{x}+c,\quad(t,x,y)\in[0,1]\times\mathbb{R}^{+}\times\mathbb{R}^{+}.$$
(H6) There exist $Q:[0,1]\to(-\infty,+\infty)$, $\theta\in(0,\frac12)$, and $t_0\in[\theta,1-\theta]$ such that
$$F_i(t,x,y)\ge Q(t),\quad t\in[\theta,1-\theta],\ i=1,2,\ x,y\in\bigl[0,M\mu\Gamma^{-1}(\alpha)(\alpha-1)\bigr],\qquad\int_\theta^{1-\theta}k(t_0)\varphi(s)Q(s)\,ds\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1).$$
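To give a feel for the growth conditions (the concrete numbers below come from the illustrative data $\alpha=2.5$, $\eta(t)=t/2$ used in the earlier sketches, not from the paper), one can tabulate the constants entering (H3) and (H5), as stated above, and read off the threshold that $\xi_1\xi_2^{1/2}$ must exceed and the bound that $\xi_3\xi_4^{2}$ must stay below:

```python
# Tabulating the constants that appear in (H3) and (H5) for the illustrative
# data alpha = 2.5, eta(t) = t/2 (our own choices; the paper works abstractly).
from math import gamma

alpha = 2.5
eta1 = 0.5                                   # eta(1)
kappa0 = 1.0 - 0.5 / alpha                   # (H0)
c_eta = 0.5 * (1 / alpha - 1 / (alpha + 1))  # int_0^1 tau^(alpha-1)(1-tau) d eta(tau)
mu = (alpha - 1) * (1 + eta1 / kappa0)
kappa1 = alpha * gamma(alpha + 1) / gamma(2 * alpha + 2) \
    + gamma(alpha + 1) * c_eta / (kappa0 * gamma(2 * alpha + 1))
kappa2 = mu / gamma(alpha + 2)

print("mu     =", mu)
print("kappa1 =", kappa1)
print("kappa2 =", kappa2)
# (H3) requires xi1 * xi2**0.5 > mu**0.5 * gamma(alpha)**(-0.5) / kappa1**2:
print("(H3) threshold for xi1*xi2^(1/2):", mu ** 0.5 * gamma(alpha) ** (-0.5) / kappa1 ** 2)
# (H5) requires xi3 * xi4**2 < gamma(alpha) / (2 * mu * kappa2**2):
print("(H5) bound for xi3*xi4^2:        ", gamma(alpha) / (2 * mu * kappa2 ** 2))
```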

We adopt the convention in the sequel that $c_1,c_2,\ldots$ stand for different positive constants. We also denote $B_\rho:=\{u\in E:\|u\|<\rho\}$ for $\rho>0$.

Theorem 3.1 Suppose that (H2)-(H4) hold. Then (1.1) has at least one positive solution.

Proof By Lemma 2.6, it suffices to find a fixed point $(u_1,u_2)$ of $A$ satisfying $u_i(t)\ge w(t)$, $t\in[0,1]$. By Lemma 2.7, for any $u_i\in P_1$ and $t\in[0,1]$, noting (ii) of Lemma 2.4, together with
$$\int_0^1 G(t,s)\,ds=\int_0^1\Bigl(H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 H(\tau,s)\,d\eta(\tau)\Bigr)ds\le\Gamma^{-1}(\alpha)(\alpha-1)\int_0^1\Bigl(t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\tau^{\alpha-1}(1-\tau)\,d\eta(\tau)\Bigr)ds=\Gamma^{-1}(\alpha)(\alpha-1)k(t),$$
we have
$$u_i(t)-w(t)=u_i(t)-M\int_0^1 G(t,s)\,ds\ge u_i(t)-M\Gamma^{-1}(\alpha)(\alpha-1)k(t)\ge u_i(t)-M\mu\Gamma^{-1}(\alpha)(\alpha-1)\frac{u_i(t)}{\|u_i\|}=u_i(t)\Bigl(1-\frac{M\mu\Gamma^{-1}(\alpha)(\alpha-1)}{\|u_i\|}\Bigr),\quad i=1,2.$$
(3.1)

Therefore, $\|u_i\|\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1)$ leads to $u_i(t)\ge w(t)$, $t\in[0,1]$.

In what follows, we first show that there exists a sufficiently large $R>M\mu\Gamma^{-1}(\alpha)(\alpha-1)$ such that the following claim holds:
$$(u_1,u_2)\ne A(u_1,u_2)+\lambda(\psi,\psi),\quad(u_1,u_2)\in\partial B_R\cap(P\times P),\ \lambda\ge0,$$
(3.2)
where $\psi\in P\setminus\{0\}$ is a given function. Indeed, if the claim is false, then there exist $(u,v)\in\partial B_R\cap(P\times P)$ and $\lambda\ge0$ such that $(u,v)=A(u,v)+\lambda(\psi,\psi)$; hence $u\ge A_1(u,v)$ and $v\ge A_2(u,v)$. In view of (H3) and the definition of $A_i$ ($i=1,2$), we get
$$u(t)\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)-w(s)}\,ds-c_1\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)}\,ds-\int_0^1 G(t,s)\xi_1\sqrt{w(s)}\,ds-c_1\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)}\,ds-c_2,$$
(3.3)
and
$$v(s)\ge\int_0^1 G(s,\tau)\xi_2\bigl(u(\tau)-w(\tau)\bigr)^{2}\,d\tau-c_1.$$
(3.4)
By the concavity of the square root function, we have by (3.4)
$$\begin{aligned}\sqrt{v(s)}&\ge\sqrt{v(s)+c_1}-\sqrt{c_1}\ge\sqrt{\int_0^1 G(s,\tau)\xi_2\bigl(u(\tau)-w(\tau)\bigr)^{2}\,d\tau}-\sqrt{c_1}\\&=\sqrt{\int_0^1\mu\Gamma^{-1}(\alpha)\xi_2\cdot\mu^{-1}\Gamma(\alpha)G(s,\tau)\bigl(u(\tau)-w(\tau)\bigr)^{2}\,d\tau}-\sqrt{c_1}\\&\ge\int_0^1\mu^{-1}\Gamma(\alpha)G(s,\tau)\sqrt{\mu\Gamma^{-1}(\alpha)\xi_2}\,\bigl(u(\tau)-w(\tau)\bigr)\,d\tau-\sqrt{c_1}\\&\ge\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\xi_2^{\frac12}\int_0^1 G(s,\tau)u(\tau)\,d\tau-c_3.\end{aligned}$$
(3.5)
Combining (3.3) and (3.5), we easily find
$$u(t)\ge\int_0^1 G(t,s)\xi_1\Bigl[\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\xi_2^{\frac12}\int_0^1 G(s,\tau)u(\tau)\,d\tau-c_3\Bigr]ds-c_2\ge\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\int_0^1\!\!\int_0^1 G(t,s)G(s,\tau)u(\tau)\,d\tau\,ds-c_4.$$
(3.6)
Multiplying both sides of the above by $\varphi(t)$, integrating over $[0,1]$, and using Lemma 2.5, we obtain
$$\int_0^1 u(t)\varphi(t)\,dt\ge\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^{2}\int_0^1 u(t)\varphi(t)\,dt-c_5,$$
(3.7)
and thus
$$\int_0^1 u(t)\varphi(t)\,dt\le\frac{c_5}{\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^{2}-1}.$$
(3.8)
Noting Lemma 2.7, we obtain
$$\mu^{-1}\kappa_1\|u\|=\int_0^1\mu^{-1}k(t)\|u\|\varphi(t)\,dt\le\int_0^1 u(t)\varphi(t)\,dt\le\frac{c_5}{\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^{2}-1}.$$
(3.9)
Hence,
$$\|u\|\le\frac{\mu c_5}{\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^{3}-\kappa_1}:=N_1.$$
(3.10)
On the other hand, noting (3.3), together with the concavity of the square root function, we arrive at
$$\|u\|+c_2\ge u(t)+c_2\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)}\,ds\ge\frac{\xi_1}{\sqrt{\|v\|}}\int_0^1 G(t,s)v(s)\,ds.$$
(3.11)
Multiplying both sides of the above by $\varphi(t)$, integrating over $[0,1]$, and using Lemma 2.5 and Lemma 2.7, we obtain
$$\Gamma^{-1}(\alpha+2)\bigl(\|u\|+c_2\bigr)=\int_0^1\bigl(\|u\|+c_2\bigr)\varphi(t)\,dt\ge\frac{\xi_1\kappa_1}{\sqrt{\|v\|}}\int_0^1 v(t)\varphi(t)\,dt\ge\frac{\xi_1\kappa_1}{\sqrt{\|v\|}}\int_0^1\mu^{-1}k(t)\|v\|\varphi(t)\,dt=\mu^{-1}\xi_1\kappa_1^{2}\sqrt{\|v\|}.$$
(3.12)
Consequently,
$$\|v\|\le\bigl[\Gamma^{-1}(\alpha+2)(N_1+c_2)\mu\xi_1^{-1}\kappa_1^{-2}\bigr]^{2}.$$
(3.13)
Taking $R>\max\bigl\{N_1,\,M\mu\Gamma^{-1}(\alpha)(\alpha-1),\,\bigl[\Gamma^{-1}(\alpha+2)(N_1+c_2)\mu\xi_1^{-1}\kappa_1^{-2}\bigr]^{2}\bigr\}$, we obtain a contradiction with $(u,v)\in\partial B_R\cap(P\times P)$. As a result, (3.2) is true, and Lemma 2.8 implies
$$i\bigl(A,B_R\cap(P\times P),P\times P\bigr)=0.$$
(3.14)
On the other hand, by (H4), we have, for $i=1,2$,
$$A_i(u_1,u_2)(t)=\int_0^1 G(t,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds\le\int_0^1\mu\varphi(s)Q(s)\,ds<M\mu\Gamma^{-1}(\alpha)(\alpha-1)=N$$
for any $(t,u_1,u_2)\in[0,1]\times\overline{B}_N\times\overline{B}_N$, where $N=M\mu\Gamma^{-1}(\alpha)(\alpha-1)$, from which we obtain
$$\|A(u_1,u_2)\|<\|(u_1,u_2)\|,\quad(u_1,u_2)\in\partial B_N\cap(P\times P).$$
This leads to
$$(u_1,u_2)\ne\lambda A(u_1,u_2),\quad(u_1,u_2)\in\partial B_N\cap(P\times P),\ \lambda\in[0,1].$$
(3.15)
Now Lemma 2.9 implies
$$i\bigl(A,B_N\cap(P\times P),P\times P\bigr)=1.$$
(3.16)
Combining (3.14) and (3.16) gives
$$i\bigl(A,(B_R\setminus\overline{B}_N)\cap(P\times P),P\times P\bigr)=0-1=-1.$$

Therefore the operator $A$ has at least one fixed point in $(B_R\setminus\overline{B}_N)\cap(P\times P)$. Equivalently, (1.1) has at least one positive solution. This completes the proof. □

Theorem 3.2 Suppose that (H2), (H5), and (H6) hold. Then (1.1) has at least one positive solution.

Proof We first show that there exists a sufficiently large $R>M\mu\Gamma^{-1}(\alpha)(\alpha-1)$ such that the following claim holds:
$$(u_1,u_2)\ne\lambda A(u_1,u_2),\quad(u_1,u_2)\in\partial B_R\cap(P\times P),\ \lambda\in[0,1].$$
(3.17)
If the claim is false, then there exist $(u,v)\in\partial B_R\cap(P\times P)$ and $\lambda\in[0,1]$ such that $(u,v)=\lambda A(u,v)$. Therefore, $u\le A_1(u,v)$ and $v\le A_2(u,v)$. In view of (H5), we have
$$u(t)\le\int_0^1 G(t,s)\bigl[\xi_3\bigl(v(s)-w(s)\bigr)^{2}+c\bigr]\,ds\le\int_0^1 G(t,s)\xi_3v^{2}(s)\,ds+\int_0^1 G(t,s)\xi_3w^{2}(s)\,ds+c\int_0^1 G(t,s)\,ds\le\int_0^1 G(t,s)\xi_3v^{2}(s)\,ds+c_1,$$
(3.18)
and
$$v(s)\le\int_0^1 G(s,\tau)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\,d\tau.$$
(3.19)
By (3.19), the convexity of the square function enables us to obtain
$$\begin{aligned}v^{2}(s)&\le\Bigl(\int_0^1\mu^{-1}\Gamma(\alpha)G(s,\tau)\,\mu\Gamma^{-1}(\alpha)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\,d\tau\Bigr)^{2}\le\int_0^1\mu^{-1}\Gamma(\alpha)G(s,\tau)\Bigl(\mu\Gamma^{-1}(\alpha)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\Bigr)^{2}\,d\tau\\&\le\mu\Gamma^{-1}(\alpha)\int_0^1 G(s,\tau)\bigl[2\xi_4^{2}\bigl(u(\tau)-w(\tau)\bigr)+2c^{2}\bigr]\,d\tau\le2\mu\Gamma^{-1}(\alpha)\xi_4^{2}\int_0^1 G(s,\tau)u(\tau)\,d\tau+c_6.\end{aligned}$$
(3.20)
We find from (3.18) and (3.20) that
$$u(t)\le\int_0^1 G(t,s)\xi_3\Bigl[2\mu\Gamma^{-1}(\alpha)\xi_4^{2}\int_0^1 G(s,\tau)u(\tau)\,d\tau+c_6\Bigr]\,ds+c_1\le2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^{2}\int_0^1\!\!\int_0^1 G(t,s)G(s,\tau)u(\tau)\,d\tau\,ds+c_7.$$
(3.21)
Multiplying both sides of the above by $\varphi(t)$, integrating over $[0,1]$, and using Lemma 2.5, we obtain
$$\int_0^1 u(t)\varphi(t)\,dt\le2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^{2}\kappa_2^{2}\int_0^1 u(t)\varphi(t)\,dt+c_8.$$
(3.22)
Noting Lemma 2.7, we obtain
$$\int_0^1\mu^{-1}k(t)\|u\|\varphi(t)\,dt\le\int_0^1 u(t)\varphi(t)\,dt\le\frac{c_8}{1-2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^{2}\kappa_2^{2}},$$
(3.23)
and hence
$$\|u\|\le\frac{\mu c_8}{\kappa_1-2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^{2}\kappa_1\kappa_2^{2}}:=N_2.$$
(3.24)
Multiplying both sides of (3.19) by $\varphi(s)$, integrating over $[0,1]$, using Lemma 2.5 and Lemma 2.7, and noting (3.24), we obtain
$$\mu^{-1}\kappa_1\|v\|\le\int_0^1 v(s)\varphi(s)\,ds\le\kappa_2\int_0^1\varphi(\tau)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\,d\tau\le\kappa_2\int_0^1\varphi(\tau)\bigl[\xi_4\sqrt{N_2}+c\bigr]\,d\tau=\Gamma^{-1}(\alpha+2)\kappa_2\bigl(\xi_4\sqrt{N_2}+c\bigr).$$
(3.25)
Consequently,
$$\|v\|\le\mu\Gamma^{-1}(\alpha+2)\kappa_1^{-1}\kappa_2\bigl(\xi_4\sqrt{N_2}+c\bigr).$$
(3.26)
Taking $R>\max\bigl\{N_2,\,M\mu\Gamma^{-1}(\alpha)(\alpha-1),\,\mu\Gamma^{-1}(\alpha+2)\kappa_1^{-1}\kappa_2\bigl(\xi_4\sqrt{N_2}+c\bigr)\bigr\}$, we obtain a contradiction with $(u,v)\in\partial B_R\cap(P\times P)$. As a result, (3.17) is true, and we have from Lemma 2.9 that
$$i\bigl(A,B_R\cap(P\times P),P\times P\bigr)=1.$$
(3.27)
On the other hand, by (H6), we have, for $i=1,2$,
$$A_i(u_1,u_2)(t_0)=\int_0^1 G(t_0,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds\ge\int_\theta^{1-\theta}k(t_0)\varphi(s)Q(s)\,ds\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1),$$
and thus $\|A_i(u_1,u_2)\|\ge A_i(u_1,u_2)(t_0)\ge\|u_i\|$ for any $(t,u_1,u_2)\in[0,1]\times\overline{B}_N\times\overline{B}_N$ ($N=M\mu\Gamma^{-1}(\alpha)(\alpha-1)$). This yields
$$(u_1,u_2)\ne A(u_1,u_2)+\lambda(\psi,\psi),\quad(u_1,u_2)\in\partial B_N\cap(P\times P),\ \lambda\ge0.$$
Lemma 2.8 gives
$$i\bigl(A,B_N\cap(P\times P),P\times P\bigr)=0.$$
(3.28)
Combining (3.27) and (3.28) gives
$$i\bigl(A,(B_R\setminus\overline{B}_N)\cap(P\times P),P\times P\bigr)=1-0=1.$$

Therefore the operator $A$ has at least one fixed point in $(B_R\setminus\overline{B}_N)\cap(P\times P)$. Equivalently, (1.1) has at least one positive solution. This completes the proof. □

Declarations

Acknowledgements

The author is grateful to anonymous referees for their constructive comments and suggestions, which led to the improvement of the original manuscript. Research supported by the NNSF-China (No. 11202084), NSFC-Tian Yuan Special Foundation (No. 11226116), Natural Science Foundation of Jiangsu Province of China for Young Scholar (No. BK2012109), the China Scholarship Council (No. 201208320435).

Authors’ Affiliations

(1)
School of Science, Jiangnan University

References

  1. Il'in VA, Moiseev EI: Nonlocal boundary value problem of the first kind for a Sturm-Liouville operator in its differential and finite difference aspects. Differ. Equ. 1987, 23: 803-810.
  2. Ma R: Multiple positive solutions for nonlinear m-point boundary value problems. Appl. Math. Comput. 2004, 148: 249-262. doi:10.1016/S0096-3003(02)00843-3
  3. Moshinsky M: Sobre los problemas de condiciones a la frontera en una dimension de caracteristicas discontinuas. Bol. Soc. Mat. Mexicana 1950, 7: 1-25.
  4. Timoshenko S: Theory of Elastic Stability. McGraw-Hill, New York; 1961.
  5. Miller K, Ross B: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York; 1993.
  6. Samko S, Kilbas A, Marichev O: Fractional Integrals and Derivatives: Theory and Applications. Gordon & Breach, Yverdon; 1993.
  7. Podlubny I: Fractional Differential Equations. Mathematics in Science and Engineering 198. Academic Press, New York; 1999.
  8. Kilbas A, Srivastava H, Trujillo J: Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies 204. Elsevier, Amsterdam; 2006.
  9. Lakshmikantham V, Leela S, Vasundhara Devi J: Theory of Fractional Dynamic Systems. Cambridge Academic Publishers, Cambridge; 2009.
  10. Yuan C: Multiple positive solutions for (n-1, 1)-type semipositone conjugate boundary value problems of nonlinear fractional differential equations. Electron. J. Qual. Theory Differ. Equ. 2010, 36: 1-12.
  11. Bai Z, Lü H: Positive solutions for boundary value problem of nonlinear fractional differential equation. J. Math. Anal. Appl. 2005, 311: 495-505. doi:10.1016/j.jmaa.2005.02.052
  12. Bai C: Impulsive periodic boundary value problems for fractional differential equation involving Riemann-Liouville sequential fractional derivative. J. Math. Anal. Appl. 2011, 384: 211-231. doi:10.1016/j.jmaa.2011.05.082
  13. Bai C, Fang J: The existence of a positive solution for a singular coupled system of nonlinear fractional differential equations. Appl. Math. Comput. 2004, 150(3): 611-621. doi:10.1016/S0096-3003(03)00294-7
  14. Benchohra M, Hamani S: The method of upper and lower solutions and impulsive fractional differential inclusions. Nonlinear Anal. 2009, 3: 433-440.
  15. El-Shahed M: Positive solutions for boundary value problems of nonlinear fractional differential equation. Abstr. Appl. Anal. 2007, 2007: Article ID 10368.
  16. Khan R: Existence and approximation of solutions to three-point boundary value problems for fractional differential equations. Electron. J. Qual. Theory Differ. Equ. 2011, 58: 1-8.
  17. Staněk S: The existence of positive solutions of singular fractional boundary value problems. Comput. Math. Appl. 2011, 62: 1379-1388. doi:10.1016/j.camwa.2011.04.048
  18. Su X: Boundary value problem for a coupled system of nonlinear fractional differential equations. Appl. Math. Lett. 2009, 22: 64-69. doi:10.1016/j.aml.2008.03.001
  19. Guo D, Lakshmikantham V: Nonlinear Problems in Abstract Cones. Academic Press, Orlando; 1988.

Copyright

© Wang; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.