Open Access

Multiple positive solutions for first-order impulsive integral boundary value problems on time scales

Boundary Value Problems 2011, 2011:12

DOI: 10.1186/1687-2770-2011-12

Received: 10 March 2011

Accepted: 15 August 2011

Published: 15 August 2011

Abstract

In this paper, we first present a class of first-order nonlinear impulsive integral boundary value problems on time scales. Then, using the well-known Guo-Krasnoselskii fixed point theorem and the Leggett-Williams fixed point theorem, we establish criteria for the existence of at least one, at least two, and at least three positive solutions of the problem under consideration. Finally, examples are presented to illustrate the main results.

MSC: 34B10; 34B37; 34N05.

Keywords

integral boundary value problem; fixed point; multiple solutions; time scale

1 Introduction

Continuous and discrete systems are both important in theory and in applications. The theory of time scales, introduced by Stefan Hilger in order to unify continuous and discrete analysis, has received a great deal of attention. It is therefore meaningful to study dynamic systems on time scales, which unify differential and difference systems.

In recent years, a great deal of work has been done on the existence of solutions for boundary value problems on time scales. For the background and known results, we refer the reader to some recent contributions [1–5] and the references therein. At the same time, boundary value problems for impulsive differential equations and impulsive difference equations have received much attention [6–12], since such equations model many real-world phenomena in physics, biology, engineering, etc.; see [13–15] and the references therein.

In [16], Sun studied the first-order boundary value problem on time scales
$$x^{\Delta}(t) = f(x(\sigma(t))), \quad t \in [0, T]_{\mathbb{T}}, \qquad x(0) = \beta x(\sigma(T)),$$
(1.1)

where 0 < β < 1. By means of the twin fixed point theorem due to Avery and Henderson, some existence criteria for at least two positive solutions were established.

Tian and Ge [17] studied the first-order three-point boundary value problem on time scales
$$x^{\Delta}(t) + p(t)x(\sigma(t)) = f(t, x(\sigma(t))), \quad t \in [0, T]_{\mathbb{T}}, \qquad x(0) - \alpha x(\xi) = \beta x(\sigma(T)).$$
(1.2)

Using several fixed point theorems, the existence of at least one positive solution and of multiple positive solutions was obtained.

However, apart from BVPs for differential and difference equations, that is, for the particular time scales 𝕋 = ℝ or 𝕋 = ℤ, there are few papers dealing with multi-point boundary value problems with more than three points for first-order systems on time scales. In addition, problems with integral boundary conditions arise naturally in thermal conduction problems [18], semiconductor problems [19], and hydrodynamic problems [20]. In the continuous case, since integral boundary value problems include two-point, three-point, ..., n-point boundary value problems as special cases, such problems for continuous systems have received more and more attention, and many results have been worked out during the past ten years; see [21–27] for more details. To the best of the authors' knowledge, up to the present, there is no paper concerning boundary value problems with integral boundary conditions on time scales. This paper aims to fill that gap in the literature.

In this paper, we are concerned with the following first-order nonlinear impulsive integral boundary value problem on time scales:
$$\begin{cases} x^{\Delta}(t) + p(t)x(\sigma(t)) = f(t, x(\sigma(t))), & t \in J := [0, T]_{\mathbb{T}} \setminus \{t_1, t_2, \ldots, t_m\},\\ \Delta x(t_i) = x(t_i^+) - x(t_i^-) = I_i(x(t_i)), & i = 1, 2, \ldots, m,\\ \alpha x(0) - \beta x(\sigma(T)) = \int_0^{\sigma(T)} g(s)x(s)\,\Delta s, \end{cases}$$
(1.3)

where 𝕋 is a time scale, that is, a nonempty closed subset of ℝ with the topology and ordering inherited from ℝ; 0 and T are points in 𝕋; the interval [0, T]_𝕋 := [0, T] ∩ 𝕋 has finitely many right-scattered points; f ∈ C([0, σ(T)]_𝕋 × [0, +∞), [0, +∞)); p ∈ C([0, σ(T)]_𝕋, ℝ) is regressive with p ∈ ℛ⁺; I_i ∈ C([0, +∞), [0, +∞)) for 1 ≤ i ≤ m; g is a nonnegative integrable function on [0, σ(T)]_𝕋 such that Γ := α − β e_p(0, σ(T)) − ∫_0^{σ(T)} g(s) e_p(0, s) Δs > 0, where e_p(0, σ(T)) is the exponential function on the time scale 𝕋, which will be introduced in the next section; t_i ∈ [0, T]_𝕋 (1 ≤ i ≤ m) with 0 < t_1 < · · · < t_m < T; and, for each i = 1, 2, …, m, x(t_i^+) = lim_{h→0^+} x(t_i + h) and x(t_i^−) = lim_{h→0^−} x(t_i + h) denote the right and left limits of x(t) at t = t_i, with x(t_i^−) = x(t_i).

Remark 1.1. Let 𝕋_rs = {θ_1, θ_2, …, θ_q} denote the set of right-scattered points in the interval [0, T]_𝕋, with 0 ≤ θ_1 < · · · < θ_q ≤ T, and set σ(θ_0) = 0 and θ_{q+1} = T by convention. By the basic concepts and time scale calculus formulae in the book by Bohner and Peterson [28], we have
$$\int_0^{\sigma(T)} g(s)x(s)\,\Delta s = \sum_{k=0}^{q}\int_{\sigma(\theta_k)}^{\theta_{k+1}} g(s)x(s)\,\Delta s + \sum_{k=1}^{q+1}\int_{\theta_k}^{\sigma(\theta_k)} g(s)x(s)\,\Delta s = \sum_{k=0}^{q}\int_{\sigma(\theta_k)}^{\theta_{k+1}} g(s)x(s)\,ds + \sum_{k=1}^{q+1}\mu(\theta_k)g(\theta_k)x(\theta_k).$$
(1.4)
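The decomposition (1.4) can be made concrete numerically: over each closed subinterval of the time scale the delta integral is an ordinary Riemann integral, and each right-scattered point θ contributes μ(θ)·f(θ). The sketch below is an illustration only, not part of the paper; the helper name `delta_integral`, the trapezoid rule, and the test data are our own choices.

```python
def delta_integral(f, pieces, n=10_000):
    """Delta integral of f over a time scale given as a union of closed
    real intervals `pieces` (sorted, disjoint), following formula (1.4):
    ordinary integrals on the continuous parts plus mu(theta) * f(theta)
    at each right-scattered point theta (the right endpoint of every
    piece except the last)."""
    total = 0.0
    for (a, b), nxt in zip(pieces, pieces[1:] + [None]):
        # continuous part: trapezoidal rule on [a, b]
        h = (b - a) / n
        total += h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)
        if nxt is not None:
            mu = nxt[0] - b          # graininess: sigma(b) - b
            total += mu * f(b)       # jump contribution mu(theta) * f(theta)
    return total
```

For example, on the time scale [0, 1] ∪ [2, 3] with f(s) = s, the two continuous parts contribute 0.5 and 2.5, and the right-scattered point θ = 1 (with μ(1) = 1) contributes 1, so the delta integral over [0, 3] is 4.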

The main purpose of this paper is to establish sufficient conditions for the existence of at least one, two, or three positive solutions of BVP (1.3), using the Guo-Krasnoselskii and Leggett-Williams fixed point theorems, respectively.

For convenience, we introduce the following notation:
$$\max f_0 = \lim_{x \to 0}\max_{t \in [0,\sigma(T)]_{\mathbb{T}}}\frac{f(t,x)}{x}, \quad \min f_0 = \lim_{x \to 0}\min_{t \in [0,\sigma(T)]_{\mathbb{T}}}\frac{f(t,x)}{x}, \quad I_i^0 = \lim_{x \to 0}\frac{I_i(x)}{x},$$
$$\max f_{\infty} = \lim_{x \to \infty}\max_{t \in [0,\sigma(T)]_{\mathbb{T}}}\frac{f(t,x)}{x}, \quad \min f_{\infty} = \lim_{x \to \infty}\min_{t \in [0,\sigma(T)]_{\mathbb{T}}}\frac{f(t,x)}{x}, \quad I_i^{\infty} = \lim_{x \to \infty}\frac{I_i(x)}{x},$$

where i = 1, 2,..., m.

This paper is organized as follows. In Section 2, some basic definitions and lemmas on time scales are introduced without proofs. In Section 3, some useful lemmas are established. In particular, Green's function for BVP (1.3) is established. We prove the main results in Sections 4-6.

2 Preliminaries

In this section, we recall some basic definitions and lemmas that will be used in what follows. For the details of the calculus on time scales, we refer to the books by Bohner and Peterson [28, 29].

Definition 2.1. [28] A time scale 𝕋 is an arbitrary nonempty closed subset of the real numbers ℝ with the topology and ordering inherited from ℝ. The forward and backward jump operators σ, ρ : 𝕋 → 𝕋 and the graininess μ : 𝕋 → [0, +∞) are defined, respectively, by
$$\sigma(t) := \inf\{s \in \mathbb{T} : s > t\}, \quad \rho(t) := \sup\{s \in \mathbb{T} : s < t\}, \quad \mu(t) := \sigma(t) - t.$$

In this definition, we put inf ∅ = sup 𝕋 (i.e., σ(t) = t if 𝕋 has a maximum t) and sup ∅ = inf 𝕋 (i.e., ρ(t) = t if 𝕋 has a minimum t). A point t ∈ 𝕋 is called left-dense, left-scattered, right-dense, or right-scattered if ρ(t) = t, ρ(t) < t, σ(t) = t, or σ(t) > t, respectively. Points that are simultaneously right-dense and left-dense are called dense. If 𝕋 has a left-scattered maximum m_1, define 𝕋^k = 𝕋 − {m_1}; otherwise, set 𝕋^k = 𝕋. If 𝕋 has a right-scattered minimum m_2, define 𝕋_k = 𝕋 − {m_2}; otherwise, set 𝕋_k = 𝕋.
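For a finite time scale, Definition 2.1 can be evaluated directly. The sketch below is an illustration, not part of the paper; the helper name `jump_operators` and the sample points are our own choices.

```python
import bisect

def jump_operators(points):
    """Forward jump sigma, backward jump rho, and graininess mu for a
    FINITE time scale given as a list of real points (Definition 2.1).
    By the conventions inf(empty) = sup T and sup(empty) = inf T,
    sigma(max T) = max T and rho(min T) = min T."""
    pts = sorted(points)

    def sigma(t):                      # inf{s in T : s > t}
        i = bisect.bisect_right(pts, t)
        return pts[i] if i < len(pts) else pts[-1]

    def rho(t):                        # sup{s in T : s < t}
        i = bisect.bisect_left(pts, t)
        return pts[i - 1] if i > 0 else pts[0]

    def mu(t):                         # graininess: sigma(t) - t
        return sigma(t) - t

    return sigma, rho, mu
```

For instance, on 𝕋 = {0, 1, 3} the point 1 is right-scattered with σ(1) = 3 and μ(1) = 2, while 3 is the maximum, so σ(3) = 3 and μ(3) = 0.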

Definition 2.2. [28] A function f : 𝕋 → ℝ is rd-continuous provided it is continuous at each right-dense point in 𝕋 and has a finite left-sided limit at each left-dense point in 𝕋. The set of rd-continuous functions f : 𝕋 → ℝ will be denoted by C_rd(𝕋) = C_rd(𝕋, ℝ).

Definition 2.3. [28] If f : 𝕋 → ℝ is a function and t ∈ 𝕋^k, then the delta derivative of f at the point t is defined to be the number f^Δ(t) (provided it exists) with the property that for each ε > 0 there is a neighborhood U of t such that
$$|f(\sigma(t)) - f(s) - f^{\Delta}(t)[\sigma(t) - s]| \le \varepsilon|\sigma(t) - s| \quad \text{for all } s \in U.$$
Definition 2.4. [28] For a function f : 𝕋 → ℝ (the range ℝ of f may be replaced by any Banach space), if f is continuous at t and t is right-scattered, the (delta) derivative is given by
$$f^{\Delta}(t) = \frac{f(\sigma(t)) - f(t)}{\sigma(t) - t}.$$
If t is not right-scattered, then the derivative is defined by
$$f^{\Delta}(t) = \lim_{s \to t}\frac{f(\sigma(t)) - f(s)}{\sigma(t) - s} = \lim_{s \to t}\frac{f(t) - f(s)}{t - s},$$

provided this limit exists.

Definition 2.5. [28] If F^Δ(t) = f(t), then we define the delta integral by
$$\int_a^t f(s)\,\Delta s = F(t) - F(a).$$
Definition 2.6. [28] A function p : 𝕋 → ℝ is said to be regressive provided 1 + μ(t)p(t) ≠ 0 for all t ∈ 𝕋^k, where μ(t) = σ(t) − t is the graininess function. The set of all regressive rd-continuous functions f : 𝕋 → ℝ is denoted by ℛ, while ℛ⁺ = {f ∈ ℛ : 1 + μ(t)f(t) > 0 for all t ∈ 𝕋}. Let p ∈ ℛ. The exponential function is defined by
$$e_p(t, s) = \exp\left(\int_s^t \xi_{\mu(\tau)}(p(\tau))\,\Delta\tau\right),$$

where ξ_h(z) is the so-called cylinder transformation, given by ξ_h(z) = Log(1 + zh)/h for h ≠ 0 and ξ_0(z) = z.
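On a purely discrete (isolated) time scale the defining integral collapses to a finite product, since exp(μ(τ) ξ_{μ(τ)}(p(τ))) = 1 + μ(τ)p(τ) at every right-scattered point. The sketch below is an illustration, not part of the paper; the helper name `exp_p` and the test data are our own choices.

```python
def exp_p(p, pts, t, s):
    """e_p(t, s) on a purely discrete time scale `pts` (a sorted list of
    isolated points): iterating e_p(sigma(tau), s) = (1 + mu(tau)p(tau)) e_p(tau, s)
    gives the product of (1 + mu(tau) p(tau)) over tau in [s, t).
    Assumes s <= t and that both lie in pts."""
    val = 1.0
    k = pts.index(s)
    while pts[k] < t:
        mu = pts[k + 1] - pts[k]      # graininess at pts[k]
        val *= 1.0 + mu * p(pts[k])
        k += 1
    return val
```

On 𝕋 = ℤ with constant p, this reduces to e_p(t, s) = (1 + p)^(t − s); the semigroup identity e_p(t, s) e_p(s, r) = e_p(t, r) of Lemma 2.1(4) can be checked directly with it.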

Lemma 2.1. [28] Let p, q ∈ ℛ. Then
(1) e_0(t, s) ≡ 1 and e_p(t, t) ≡ 1;
(2) e_p(σ(t), s) = (1 + μ(t)p(t)) e_p(t, s);
(3) 1/e_p(t, s) = e_{⊖p}(t, s), where (⊖p)(t) = −p(t)/(1 + μ(t)p(t));
(4) e_p(t, s) e_p(s, r) = e_p(t, r);
(5) e_p^Δ(·, s) = p e_p(·, s).
Lemma 2.2. [28] Assume that f, g : 𝕋 → ℝ are delta differentiable at t ∈ 𝕋^k. Then
$$(fg)^{\Delta}(t) = f^{\Delta}(t)g(t) + f(\sigma(t))g^{\Delta}(t) = f(t)g^{\Delta}(t) + f^{\Delta}(t)g(\sigma(t)).$$
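The product rule of Lemma 2.2 can be verified pointwise on 𝕋 = ℤ, where the delta derivative is the forward difference. A minimal sketch (illustrative helper names, not part of the paper):

```python
def delta(f):
    """Delta derivative on T = Z, where sigma(t) = t + 1 and mu(t) = 1,
    so Definition 2.4 reduces to the forward difference."""
    return lambda t: f(t + 1) - f(t)

def product_rule_holds(f, g, t):
    """Check Lemma 2.2 pointwise: (fg)^Delta(t) = f^Delta(t) g(t) + f(sigma(t)) g^Delta(t)."""
    lhs = delta(lambda u: f(u) * g(u))(t)
    rhs = delta(f)(t) * g(t) + f(t + 1) * delta(g)(t)
    return lhs == rhs
```

With f(t) = t² and g(t) = t, both sides equal 3t² + 3t + 1, which is the forward difference of t³.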
Lemma 2.3. [28] Let a ∈ 𝕋^k, b ∈ 𝕋, and assume that f : 𝕋 × 𝕋^k → ℝ is continuous at (t, t), where t ∈ 𝕋^k with t > a. Also assume that f^Δ(t, ·) is rd-continuous on [a, σ(t)]. Suppose that for each ε > 0 there exists a neighborhood U of t, independent of τ ∈ [a, σ(t)], such that
$$|f(\sigma(t), \tau) - f(s, \tau) - f^{\Delta}(t, \tau)(\sigma(t) - s)| \le \varepsilon|\sigma(t) - s| \quad \text{for all } s \in U,$$
where f^Δ denotes the derivative of f with respect to the first variable. Then
(1) g(t) := ∫_a^t f(t, τ) Δτ implies g^Δ(t) = ∫_a^t f^Δ(t, τ) Δτ + f(σ(t), t);
(2) h(t) := ∫_t^b f(t, τ) Δτ implies h^Δ(t) = ∫_t^b f^Δ(t, τ) Δτ − f(σ(t), t).

3 Foundational lemmas

In this section, we first introduce some background definitions and fixed point theorems in Banach spaces, then establish basic lemmas that are crucial in the proofs of the main results.

We define PC = {x : [0, σ(T)]_𝕋 → ℝ | x(t) is piecewise continuous with discontinuities of the first kind only at the points {t_i : 1 ≤ i ≤ m} ⊂ [0, σ(T)]_𝕋 and left-continuous at each of these points}, equipped with the norm ||x|| = sup_{t ∈ [0, σ(T)]_𝕋} |x(t)|; then PC is a Banach space.

Definition 3.1. A function x is said to be a positive solution of problem (1.3) if x ∈ PC satisfies problem (1.3) and x(t) > 0 for all t ∈ [0, σ(T)]_𝕋.

Definition 3.2. Let X be a real Banach space. A nonempty set K ⊂ X is called a cone of X if it satisfies the following conditions:
(1) x ∈ K and λ ≥ 0 imply λx ∈ K;
(2) x ∈ K and −x ∈ K imply x = 0.

Every cone K ⊂ X induces an ordering ≤ in X, given by x ≤ y if and only if y − x ∈ K.

Definition 3.3. An operator is called completely continuous if it is continuous and maps bounded sets into precompact sets.

Lemma 3.1. (Guo-Krasnoselskii [30]) Let X be a Banach space and K ⊂ X a cone in X. Assume that Ω_1, Ω_2 are bounded open subsets of X with 0 ∈ Ω_1 ⊂ Ω̄_1 ⊂ Ω_2, and let Φ : K ∩ (Ω̄_2 \ Ω_1) → K be a completely continuous operator such that either
(1) ||Φx|| ≤ ||x|| for x ∈ K ∩ ∂Ω_1, and ||Φx|| ≥ ||x|| for x ∈ K ∩ ∂Ω_2; or
(2) ||Φx|| ≥ ||x|| for x ∈ K ∩ ∂Ω_1, and ||Φx|| ≤ ||x|| for x ∈ K ∩ ∂Ω_2.
Then Φ has at least one fixed point in K ∩ (Ω̄_2 \ Ω_1).

Lemma 3.2. Suppose h ∈ C([0, σ(T)]_𝕋, ℝ) and ν_i ∈ ℝ. Then x is a solution of
$$x(t) = \int_0^{\sigma(T)} G(t, s)h(s)\,\Delta s + \sum_{i=1}^{m} G(t, t_i)\nu_i, \quad t \in [0, \sigma(T)]_{\mathbb{T}},$$
(3.1)
where
$$G(t, s) = \begin{cases} \Gamma^{-1} e_p(s, t)\left[\alpha - \int_0^{\sigma(s)} g(r)e_p(0, r)\,\Delta r\right], & 0 \le s < t \le \sigma(T),\\ \Gamma^{-1} e_p(s, t)\left[\beta e_p(0, \sigma(T)) + \int_{\sigma(s)}^{\sigma(T)} g(r)e_p(0, r)\,\Delta r\right], & 0 \le t \le s \le \sigma(T), \end{cases}$$
if and only if x is a solution of the boundary value problem
$$\begin{cases} x^{\Delta}(t) + p(t)x(\sigma(t)) = h(t), & t \in J := [0, T]_{\mathbb{T}} \setminus \{t_1, t_2, \ldots, t_m\},\\ \Delta x(t_i) = x(t_i^+) - x(t_i^-) = \nu_i, & i = 1, 2, \ldots, m,\\ \alpha x(0) - \beta x(\sigma(T)) = \int_0^{\sigma(T)} g(s)x(s)\,\Delta s. \end{cases}$$
(3.2)
Proof. Assume that x(t) is a solution of (3.2). By the first equation in (3.2), we have
$$\left(x(t)e_p(t, 0)\right)^{\Delta} = h(t)e_p(t, 0).$$
(3.3)
If t ∈ [0, t_1], integrating (3.3) from 0 to t, we get
$$x(t)e_p(t, 0) = x(0) + \int_0^t e_p(s, 0)h(s)\,\Delta s;$$
letting t → t_1^−, we have
$$x(t_1^-)e_p(t_1, 0) = x(0) + \int_0^{t_1} e_p(s, 0)h(s)\,\Delta s,$$
then
$$x(t_1^+)e_p(t_1, 0) = x(0) + \int_0^{t_1} e_p(s, 0)h(s)\,\Delta s + \nu_1 e_p(t_1, 0).$$
Now, let t ∈ (t_1, t_2]; integrating (3.3) from t_1 to t, we obtain
$$x(t)e_p(t, 0) = x(t_1^+)e_p(t_1, 0) + \int_{t_1}^{t} e_p(s, 0)h(s)\,\Delta s = x(0) + \int_0^{t} e_p(s, 0)h(s)\,\Delta s + \nu_1 e_p(t_1, 0).$$
For t ∈ (t_k, t_{k+1}], repeating the above process, we get
$$x(t)e_p(t, 0) = x(0) + \int_0^{t} e_p(s, 0)h(s)\,\Delta s + \sum_{0 < t_i < t}\nu_i e_p(t_i, 0),$$
that is
$$x(t) = x(0)e_p(0, t) + \int_0^{t} e_p(s, t)h(s)\,\Delta s + \sum_{0 < t_i < t}\nu_i e_p(t_i, t).$$
It follows from αx(0) − βx(σ(T)) = ∫_0^{σ(T)} g(s)x(s) Δs that
$$\begin{aligned} x(0) = {} & \Gamma^{-1}\Bigg[\beta\int_0^{\sigma(T)} e_p(s,\sigma(T))h(s)\,\Delta s + \int_0^{\sigma(T)} g(s)\int_0^{s} e_p(r,s)h(r)\,\Delta r\,\Delta s \\ & \quad + \beta\sum_{i=1}^{m}\nu_i e_p(t_i,\sigma(T)) + \int_0^{\sigma(T)} g(s)\sum_{0<t_i<s}\nu_i e_p(t_i,s)\,\Delta s\Bigg] \\ = {} & \Gamma^{-1}\Bigg[\beta\int_0^{\sigma(T)} e_p(s,\sigma(T))h(s)\,\Delta s + \int_0^{\sigma(T)}\left(\int_0^{\sigma(T)} g(r)e_p(s,r)\,\Delta r\right)h(s)\,\Delta s \\ & \quad - \int_0^{\sigma(T)}\left(\int_0^{\sigma(s)} g(r)e_p(s,r)\,\Delta r\right)h(s)\,\Delta s + \sum_{i=1}^{m}\nu_i\left(\int_{t_i}^{\sigma(T)} g(s)e_p(t_i,s)\,\Delta s + \beta e_p(t_i,\sigma(T))\right)\Bigg], \end{aligned}$$
where Γ⁻¹ = [α − β e_p(0, σ(T)) − ∫_0^{σ(T)} g(s)e_p(0, s) Δs]⁻¹. Then
$$\begin{aligned} x(t) = {} & \Gamma^{-1}e_p(0,t)\Bigg[\beta\int_0^{\sigma(T)} e_p(s,\sigma(T))h(s)\,\Delta s + \int_0^{\sigma(T)}\left(\int_0^{\sigma(T)} g(r)e_p(s,r)\,\Delta r\right)h(s)\,\Delta s \\ & \quad - \int_0^{\sigma(T)}\left(\int_0^{\sigma(s)} g(r)e_p(s,r)\,\Delta r\right)h(s)\,\Delta s + \sum_{i=1}^{m}\nu_i\left(\int_{t_i}^{\sigma(T)} g(s)e_p(t_i,s)\,\Delta s + \beta e_p(t_i,\sigma(T))\right)\Bigg] \\ & + \int_0^{t} e_p(s,t)h(s)\,\Delta s + \sum_{0<t_i<t}\nu_i e_p(t_i,t) \\ = {} & \int_0^{\sigma(T)} G(t,s)h(s)\,\Delta s + \sum_{i=1}^{m} G(t,t_i)\nu_i. \end{aligned}$$
(3.4)

This means that if x is a solution of (3.2) then x satisfies (3.1).

On the other hand, if x satisfies (3.1), we have
$$x(t) = \int_0^{\sigma(T)} G(t, s)h(s)\,\Delta s + \sum_{i=1}^{m} G(t, t_i)\nu_i, \quad t \in [0, \sigma(T)]_{\mathbb{T}}.$$
Then
$$x(t)e_p(t, 0) = \int_0^{\sigma(T)} H(s)h(s)\,\Delta s + \sum_{i=1}^{m} H(t_i)\nu_i, \quad t \in [0, \sigma(T)]_{\mathbb{T}},$$
(3.5)
where
$$H(s) = \begin{cases} \Gamma^{-1} e_p(s, 0)\left[\alpha - \int_0^{\sigma(s)} g(r)e_p(0, r)\,\Delta r\right], & 0 \le s < t \le \sigma(T),\\ \Gamma^{-1} e_p(s, 0)\left[\beta e_p(0, \sigma(T)) + \int_{\sigma(s)}^{\sigma(T)} g(r)e_p(0, r)\,\Delta r\right], & 0 \le t \le s \le \sigma(T). \end{cases}$$
Notice that
$$\begin{aligned} \left[\int_0^{\sigma(T)} H(s)h(s)\,\Delta s\right]^{\Delta} = {} & \Gamma^{-1}\left[\int_0^{t} e_p(s,0)\left(\alpha - \int_0^{\sigma(s)} g(r)e_p(0,r)\,\Delta r\right)h(s)\,\Delta s\right]^{\Delta} \\ & + \Gamma^{-1}\left[\int_{t}^{\sigma(T)} e_p(s,0)\left(\beta e_p(0,\sigma(T)) + \int_{\sigma(s)}^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right)h(s)\,\Delta s\right]^{\Delta} \\ = {} & \Gamma^{-1}e_p(t,0)\left(\alpha - \int_0^{\sigma(t)} g(r)e_p(0,r)\,\Delta r\right)h(t) \\ & - \Gamma^{-1}e_p(t,0)\left(\beta e_p(0,\sigma(T)) + \int_{\sigma(t)}^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right)h(t) \\ = {} & e_p(t,0)h(t). \end{aligned}$$
Similarly,
$$\left[\sum_{i=1}^{m} H(t_i)\nu_i\right]^{\Delta} = 0.$$
Hence, we get from (3.5) that
$$\left(x(t)e_p(t, 0)\right)^{\Delta} = h(t)e_p(t, 0),$$
that is
$$x^{\Delta}(t) + p(t)x(\sigma(t)) = h(t), \quad t \in J.$$
Finally, we can obtain from (3.1) that
$$x(t_k^+) - x(t_k^-) = \nu_k, \quad k = 1, 2, \ldots, m,$$
and
$$\begin{aligned} \alpha x(0) - \beta x(\sigma(T)) = {} & \alpha\left[\int_0^{\sigma(T)} G(0,s)h(s)\,\Delta s + \sum_{i=1}^{m} G(0,t_i)\nu_i\right] - \beta\left[\int_0^{\sigma(T)} G(\sigma(T),s)h(s)\,\Delta s + \sum_{i=1}^{m} G(\sigma(T),t_i)\nu_i\right] \\ = {} & \alpha\left[\int_0^{\sigma(T)} \Gamma^{-1}e_p(s,0)\left(\beta e_p(0,\sigma(T)) + \int_{\sigma(s)}^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right)h(s)\,\Delta s \right. \\ & \quad \left. + \sum_{i=1}^{m}\Gamma^{-1}e_p(t_i,0)\left(\beta e_p(0,\sigma(T)) + \int_{\sigma(t_i)}^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right)\nu_i\right] \\ & - \beta\left[\int_0^{\sigma(T)} \Gamma^{-1}e_p(s,\sigma(T))\left(\alpha - \int_0^{\sigma(s)} g(r)e_p(0,r)\,\Delta r\right)h(s)\,\Delta s \right. \\ & \quad \left. + \sum_{i=1}^{m}\Gamma^{-1}e_p(t_i,\sigma(T))\left(\alpha - \int_0^{\sigma(t_i)} g(r)e_p(0,r)\,\Delta r\right)\nu_i\right] \\ = {} & \int_0^{\sigma(T)} g(s)\left[\int_0^{\sigma(T)} G(s,r)h(r)\,\Delta r + \sum_{i=1}^{m} G(s,t_i)\nu_i\right]\Delta s = \int_0^{\sigma(T)} g(s)x(s)\,\Delta s. \end{aligned}$$

The proof of this lemma is complete. ■

Lemma 3.3. Let G(t, s) be defined as in Lemma 3.2. Then the following properties hold:
(1) G(t, s) > 0 for all t, s ∈ [0, σ(T)]_𝕋;
(2) A ≤ G(t, s) ≤ B for all t, s ∈ [0, σ(T)]_𝕋, where
$$A = \Gamma^{-1}\beta e_p^2(0, \sigma(T)), \qquad B = \Gamma^{-1} e_p(\sigma(T), 0)\left[\alpha + \beta e_p(0, \sigma(T)) + \int_0^{\sigma(T)} g(s)e_p(0, s)\,\Delta s\right].$$
Proof. Since Γ = α − β e_p(0, σ(T)) − ∫_0^{σ(T)} g(s)e_p(0, s) Δs > 0, it is clear that (1) holds. We now show that (2) holds.
$$\begin{aligned} G(t,s) & = \begin{cases} \Gamma^{-1}e_p(s,t)\left[\alpha - \int_0^{\sigma(s)} g(r)e_p(0,r)\,\Delta r\right], & 0 \le s < t \le \sigma(T), \\ \Gamma^{-1}e_p(s,t)\left[\beta e_p(0,\sigma(T)) + \int_{\sigma(s)}^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right], & 0 \le t \le s \le \sigma(T), \end{cases} \\ & \ge \begin{cases} \Gamma^{-1}e_p(s,0)e_p(0,t)\left[\alpha - \int_0^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right], & 0 \le s < t \le \sigma(T), \\ \Gamma^{-1}e_p(s,0)e_p(0,t)\,\beta e_p(0,\sigma(T)), & 0 \le t \le s \le \sigma(T), \end{cases} \\ & \ge \begin{cases} \Gamma^{-1}e_p(0,\sigma(T))\left[\alpha - \int_0^{\sigma(T)} g(r)e_p(0,r)\,\Delta r\right], & 0 \le s < t \le \sigma(T), \\ \Gamma^{-1}\beta e_p^2(0,\sigma(T)), & 0 \le t \le s \le \sigma(T), \end{cases} \\ & \ge \Gamma^{-1}\beta e_p^2(0,\sigma(T)) =: A. \end{aligned}$$

Hence, the left-hand inequality in (2) holds, and the right-hand inequality is straightforward. The proof is complete. ■
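The bounds of Lemma 3.3 can be spot-checked numerically. The sketch below is an illustration only: the time scale 𝕋 = ℤ with [0, T]_𝕋 = {0, …, 3}, the constant p = 1, and the data α = 1, β = 0.1, g ≡ 0 are our own choices (any data with Γ > 0 would do); with g ≡ 0 the integral terms in G vanish and e_p(t, s) = 2^(t − s).

```python
def check_green_bounds():
    """Numeric spot-check of Lemma 3.3 on T = Z with sigma(T) = 4,
    constant p = 1, g = 0, alpha = 1, beta = 0.1 (illustrative values).
    Returns (bounds_hold, A, B)."""
    sig_T = 4
    alpha, beta = 1.0, 0.1

    def ep(t, s):                       # e_p(t, s) = (1 + p)^(t - s) on Z with p = 1
        return 2.0 ** (t - s)

    Gamma = alpha - beta * ep(0, sig_T)  # g = 0 kills the integral term
    assert Gamma > 0                     # required by the problem setting
    A = beta * ep(0, sig_T) ** 2 / Gamma
    B = ep(sig_T, 0) * (alpha + beta * ep(0, sig_T)) / Gamma

    def G(t, s):                         # Green's function of Lemma 3.2 with g = 0
        if s < t:
            return ep(s, t) * alpha / Gamma
        return ep(s, t) * beta * ep(0, sig_T) / Gamma

    grid = range(sig_T + 1)
    ok = all(A <= G(t, s) <= B for t in grid for s in grid)
    return ok, A, B
```

Running the check confirms A ≤ G(t, s) ≤ B on the whole grid; note that A is a crude lower bound, not a sharp one.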

Define an operator Φ : PCPC by
$$(\Phi x)(t) = \int_0^{\sigma(T)} G(t, s)f(s, x(\sigma(s)))\,\Delta s + \sum_{i=1}^{m} G(t, t_i)I_i(x(t_i)).$$

By Lemma 3.2, the fixed points of Φ are solutions of problem (1.3).

Lemma 3.4. The operator Φ : PCPC is completely continuous.

Proof. We first show that Φ : PC → PC is continuous. Let {x_n}_{n=1}^∞ be a sequence such that lim_{n→∞} x_n = x in PC. Then
$$\begin{aligned} |(\Phi x_n)(t) - (\Phi x)(t)| & = \left|\int_0^{\sigma(T)} G(t, s)[f(s, x_n(\sigma(s))) - f(s, x(\sigma(s)))]\,\Delta s + \sum_{i=1}^{m} G(t, t_i)[I_i(x_n(t_i)) - I_i(x(t_i))]\right| \\ & \le B\int_0^{\sigma(T)} |f(s, x_n(\sigma(s))) - f(s, x(\sigma(s)))|\,\Delta s + B\sum_{i=1}^{m} |I_i(x_n(t_i)) - I_i(x(t_i))|. \end{aligned}$$

Since f(t, x) and I_i(x) (1 ≤ i ≤ m) are continuous in x, we have |(Φx_n)(t) − (Φx)(t)| → 0 uniformly in t, which leads to ||Φx_n − Φx|| → 0 as n → ∞. That is, Φ : PC → PC is continuous.

Next, we show that Φ : PC → PC is a compact operator in two steps.

Let U ⊂ PC be a bounded set.

First, we show that {Φx : x ∈ U} is bounded. For any x ∈ U, we have
$$|(\Phi x)(t)| = \left|\int_0^{\sigma(T)} G(t, s)f(s, x(\sigma(s)))\,\Delta s + \sum_{i=1}^{m} G(t, t_i)I_i(x(t_i))\right| \le B\int_0^{\sigma(T)} |f(s, x(\sigma(s)))|\,\Delta s + B\sum_{i=1}^{m} |I_i(x(t_i))|.$$

In virtue of the continuity of f(t, x) and I_i(x) (1 ≤ i ≤ m), we conclude from the above inequality that {Φx : x ∈ U} is bounded.

Second, we show that {Φx : x ∈ U} is a set of equicontinuous functions. For any x, y ∈ U,
$$\begin{aligned} |(\Phi x)(t) - (\Phi y)(t)| & = \left|\int_0^{\sigma(T)} G(t, s)[f(s, x(\sigma(s))) - f(s, y(\sigma(s)))]\,\Delta s + \sum_{i=1}^{m} G(t, t_i)[I_i(x(t_i)) - I_i(y(t_i))]\right| \\ & \le B\int_0^{\sigma(T)} |f(s, x(\sigma(s))) - f(s, y(\sigma(s)))|\,\Delta s + B\sum_{i=1}^{m} |I_i(x(t_i)) - I_i(y(t_i))|. \end{aligned}$$

In virtue of the continuity of f(t, x) and I_i(x) (1 ≤ i ≤ m), the right-hand side tends to zero uniformly as |x − y| → 0. Consequently, {Φx : x ∈ U} is a set of equicontinuous functions.

By the Arzelà-Ascoli theorem on time scales [31], {Φx : x ∈ U} is a relatively compact set. So Φ maps bounded sets into relatively compact sets, and hence Φ is a compact operator.

From the above three steps, it follows that Φ : PC → PC is completely continuous. The proof is complete. ■

Let K = {x ∈ PC : x(t) ≥ δ||x||, t ∈ [0, σ(T)]_𝕋}, where δ = A/B ∈ (0, 1). It is not difficult to verify that K is a cone in PC.

Lemma 3.5. Φ maps K into K.

Proof. Obviously, Φ(K) ⊂ PC. For any x ∈ K, we have
$$(\Phi x)(t) = \int_0^{\sigma(T)} G(t, s)f(s, x(\sigma(s)))\,\Delta s + \sum_{i=1}^{m} G(t, t_i)I_i(x(t_i)) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)), \quad t \in [0, \sigma(T)]_{\mathbb{T}},$$
which implies
$$\|\Phi x\| \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)).$$
Therefore,
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + A\sum_{i=1}^{m} I_i(x(t_i)) = \frac{A}{B}\left[B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i))\right] \ge \delta\|\Phi x\|.$$

Hence, Φ(K) K. The proof is complete. ■

4 Existence of at least one positive solution

In this section, we will state and prove our main result about the existence of at least one positive solution of problem (1.3).

Theorem 4.1. Assume that one of the following conditions is satisfied:

(H1) max f_0 = 0, min f_∞ = ∞, and I_i^0 = 0, i = 1, 2, …, m; or

(H2) max f_∞ = 0, min f_0 = ∞, and I_i^∞ = 0, i = 1, 2, …, m.

Then, problem (1.3) has at least one positive solution.

Proof. First, assume that (H1) holds. In this case, since max f_0 = 0 and I_i^0 = 0, i = 1, 2, …, m, for ε ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r_1 such that
$$f(t, x) \le \varepsilon x \quad \text{and} \quad I_i(x) \le \varepsilon x \quad \text{for all } x \in (0, r_1],\ i = 1, 2, \ldots, m.$$
In view of min f_∞ = ∞, for M ≥ (Aσ(T)δ)⁻¹ there exists a constant r_2 > r_1/δ such that
$$f(t, x) \ge Mx \quad \text{for all } x \in [\delta r_2, \infty).$$

Let Ω_i = {x ∈ PC : ||x|| < r_i}, i = 1, 2.

On the one hand, if x ∈ K ∩ ∂Ω_1, we have
$$(\Phi x)(t) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)) \le B\int_0^{\sigma(T)} \varepsilon\|x\|\,\Delta s + Bm\varepsilon\|x\| \le B\sigma(T)\varepsilon r_1 + Bm\varepsilon r_1 \le r_1 = \|x\|,$$
which yields
$$\|\Phi x\| \le \|x\| \quad \text{for all } x \in K \cap \partial\Omega_1.$$
(4.1)
On the other hand, if x ∈ K ∩ ∂Ω_2, we have
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + A\sum_{i=1}^{m} I_i(x(t_i)) \ge A\int_0^{\sigma(T)} Mx(s)\,\Delta s \ge A\sigma(T)M\delta\|x\| = A\sigma(T)M\delta r_2 \ge r_2 = \|x\|,$$
which implies
$$\|\Phi x\| \ge \|x\| \quad \text{for all } x \in K \cap \partial\Omega_2.$$
(4.2)

Therefore, by (4.1), (4.2), and Lemma 3.1, Φ has a fixed point in K ∩ (Ω̄_2 \ Ω_1).

Next, assume that (H2) holds. In this case, since max f_∞ = 0 and I_i^∞ = 0, i = 1, 2, …, m, for ε′ ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r_3 such that
$$f(t, x) \le \varepsilon' x \quad \text{and} \quad I_i(x) \le \varepsilon' x \quad \text{for all } x \in [\delta r_3, \infty),\ i = 1, 2, \ldots, m.$$
In view of min f_0 = ∞, for M′ ≥ (Aσ(T)δ)⁻¹ there exists a positive constant r_4 < δr_3 such that
$$f(t, x) \ge M' x \quad \text{for all } x \in (0, r_4].$$

Let Ω_i = {x ∈ PC : ||x|| < r_i}, i = 3, 4.

On the one hand, if x ∈ K ∩ ∂Ω_3, we have
$$(\Phi x)(t) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)) \le B\int_0^{\sigma(T)} \varepsilon'\|x\|\,\Delta s + Bm\varepsilon'\|x\| \le B\sigma(T)\varepsilon' r_3 + Bm\varepsilon' r_3 \le r_3 = \|x\|,$$
which yields
$$\|\Phi x\| \le \|x\| \quad \text{for all } x \in K \cap \partial\Omega_3.$$
(4.3)
On the other hand, if x ∈ K ∩ ∂Ω_4, we have
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s \ge A\int_0^{\sigma(T)} M' x(s)\,\Delta s \ge A\sigma(T)M'\delta\|x\| = A\sigma(T)M'\delta r_4 \ge r_4 = \|x\|,$$
which implies
$$\|\Phi x\| \ge \|x\| \quad \text{for all } x \in K \cap \partial\Omega_4.$$
(4.4)

Hence, from (4.3), (4.4), and Lemma 3.1, we conclude that Φ has a fixed point in K ∩ (Ω̄_3 \ Ω_4); that is, problem (1.3) has at least one positive solution. The proof is complete. ■

5 Existence of at least two positive solutions

In this section, we will state and prove our main results about the existence of at least two positive solutions to problem (1.3).

Theorem 5.1. Assume that the following conditions hold.

(H3) min f_0 = +∞, min f_∞ = +∞.

(H4) There exists a positive constant R such that f(t, x) < R/(2Bσ(T)) for all 0 < x ≤ R.

(H5) I_i(x) < x/(2Bm) for all x ∈ (0, ∞), i = 1, 2, …, m.

Then, problem (1.3) has at least two positive solutions.

Proof. Let Ω_R = {x ∈ PC : ||x|| < R}. From (H4) and (H5), for x ∈ K ∩ ∂Ω_R we get
$$(\Phi x)(t) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)) < B\sigma(T)\frac{R}{2B\sigma(T)} + Bm\frac{R}{2Bm} = R = \|x\|.$$
So
$$\|\Phi x\| < \|x\| \quad \text{for all } x \in K \cap \partial\Omega_R.$$
(5.1)
Since min f_0 = +∞, for M ≥ (Aσ(T)δ)⁻¹ there exists a positive constant R_1 < δR such that
$$f(t, x) \ge Mx \quad \text{for all } x \in (0, R_1].$$
Let Ω_{R_1} = {x ∈ PC : ||x|| < R_1}. For any x ∈ K ∩ ∂Ω_{R_1}, we have
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s \ge A\int_0^{\sigma(T)} Mx(s)\,\Delta s \ge A\sigma(T)M\delta\|x\| = A\sigma(T)M\delta R_1 \ge R_1 = \|x\|.$$
Hence,
$$\|\Phi x\| \ge \|x\| \quad \text{for all } x \in K \cap \partial\Omega_{R_1}.$$
(5.2)
Similarly, since min f_∞ = +∞, for M′ ≥ (Aσ(T)δ)⁻¹ there exists a positive constant R_2 > R/δ such that
$$f(t, x) \ge M' x \quad \text{for all } x \in [\delta R_2, \infty).$$
Let Ω_{R_2} = {x ∈ PC : ||x|| < R_2}. For any x ∈ K ∩ ∂Ω_{R_2}, we have
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s \ge A\int_0^{\sigma(T)} M' x(s)\,\Delta s \ge A\sigma(T)M'\delta\|x\| = A\sigma(T)M'\delta R_2 \ge R_2 = \|x\|.$$
Hence,
$$\|\Phi x\| \ge \|x\| \quad \text{for all } x \in K \cap \partial\Omega_{R_2}.$$
(5.3)

Inequalities (5.1) and (5.2) imply that Φ has at least one fixed point in K ∩ (Ω̄_R \ Ω_{R_1}), which is a positive solution of problem (1.3). Moreover, (5.1) and (5.3) imply that Φ has at least one fixed point in K ∩ (Ω̄_{R_2} \ Ω_R), which is another positive solution of problem (1.3). Therefore, problem (1.3) has at least two positive solutions x_1 and x_2 satisfying 0 < R_1 ≤ ||x_1|| < R < ||x_2|| ≤ R_2. The proof is complete. ■

Theorem 5.2. Assume that the following conditions hold.

(H6) max f_0 = 0, max f_∞ = 0, I_i^0 = 0, I_i^∞ = 0, i = 1, 2, …, m.

(H7) There exists a positive constant r such that f(t, x) > r/(Aσ(T)) for all 0 < x ≤ r.

Then problem (1.3) has at least two positive solutions.

Proof. Let Ω_r = {x ∈ PC : ||x|| < r}. From (H7), for x ∈ K ∩ ∂Ω_r we get
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s > A\sigma(T)\frac{r}{A\sigma(T)} = r = \|x\|.$$
So
$$\|\Phi x\| > \|x\| \quad \text{for all } x \in K \cap \partial\Omega_r.$$
(5.4)
Since max f_0 = 0 and I_i^0 = 0, i = 1, 2, …, m, for ε ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r_1 < δr such that
$$f(t, x) \le \varepsilon x \quad \text{and} \quad I_i(x) \le \varepsilon x \quad \text{for all } x \in (0, r_1],\ i = 1, 2, \ldots, m.$$
Let Ω_{r_1} = {x ∈ PC : ||x|| < r_1}. For any x ∈ K ∩ ∂Ω_{r_1}, we have
$$(\Phi x)(t) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)) \le (B\sigma(T) + Bm)\varepsilon r_1 \le r_1 = \|x\|.$$
Hence,
$$\|\Phi x\| \le \|x\| \quad \text{for all } x \in K \cap \partial\Omega_{r_1}.$$
(5.5)
Similarly, since max f_∞ = 0 and I_i^∞ = 0, i = 1, 2, …, m, for ε′ ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r_2 > r/δ such that
$$f(t, x) \le \varepsilon' x \quad \text{and} \quad I_i(x) \le \varepsilon' x \quad \text{for all } x \in [\delta r_2, \infty),\ i = 1, 2, \ldots, m.$$
Let Ω_{r_2} = {x ∈ PC : ||x|| < r_2}. For any x ∈ K ∩ ∂Ω_{r_2}, we have
$$(\Phi x)(t) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)) \le (B\sigma(T) + Bm)\varepsilon' r_2 \le r_2 = \|x\|.$$
Hence,
$$\|\Phi x\| \le \|x\| \quad \text{for all } x \in K \cap \partial\Omega_{r_2}.$$
(5.6)

Inequalities (5.4) and (5.5) imply that Φ has at least one fixed point in K ∩ (Ω̄_r \ Ω_{r_1}), which is a positive solution of problem (1.3). Moreover, (5.4) and (5.6) imply that Φ has at least one fixed point in K ∩ (Ω̄_{r_2} \ Ω_r), which is another positive solution of problem (1.3). Therefore, problem (1.3) has at least two positive solutions x_1 and x_2 satisfying 0 < r_1 ≤ ||x_1|| < r < ||x_2|| ≤ r_2. The proof is complete. ■

Similar to Theorems 5.1 and 5.2, one can easily obtain the following corollary:

Corollary 5.1. Assume that (H7) and the following conditions hold.

(H8) max f_0 = 0, max f_∞ = 0, I_i^0 = 0, i = 1, 2, …, m.

(H9) There exists a positive constant d such that I_i(x) ≤ x/(2Bm) for all x ≥ d, i = 1, 2, …, m.

Then, problem (1.3) has at least two positive solutions.

6 Existence of at least three positive solutions

In this section, we state and prove our multiplicity result on positive solutions of problem (1.3) via the Leggett-Williams fixed point theorem. For the reader's convenience, we first recall the Leggett-Williams fixed point theorem.

Let E be a real Banach space with cone K. A map α : K → [0, +∞) is said to be a nonnegative continuous concave functional on K if α is continuous and
$$\alpha(tx + (1 - t)y) \ge t\alpha(x) + (1 - t)\alpha(y)$$
for all x, y ∈ K and t ∈ [0, 1]. Let a, b be two numbers such that 0 < a < b, and let α be a nonnegative continuous concave functional on K. We define the following convex sets:
$$K_a = \{x \in K : \|x\| < a\} \quad \text{and} \quad K(\alpha, a, b) = \{x \in K : a \le \alpha(x),\ \|x\| \le b\}.$$
Lemma 6.1. (Leggett-Williams fixed point theorem [32]) Let Φ : K̄_c → K̄_c be completely continuous and α a nonnegative continuous concave functional on K such that α(x) ≤ ||x|| for all x ∈ K̄_c. Suppose that there exist 0 < d < a < b ≤ c such that
(1) {x ∈ K(α, a, b) : α(x) > a} ≠ ∅, and α(Φx) > a for all x ∈ K(α, a, b);
(2) ||Φx|| < d for all ||x|| ≤ d;
(3) α(Φx) > a for all x ∈ K(α, a, c) with ||Φx|| > b.
Then Φ has at least three fixed points x_1, x_2, x_3 in K̄_c satisfying ||x_1|| < d, a < α(x_2), ||x_3|| > d, and α(x_3) < a.

Theorem 6.1. Assume that there exist numbers d, a, and c with 0 < d < a < a/δ < c such that
$$\max_{t \in [0,\sigma(T)]_{\mathbb{T}}} f(t, x) < \frac{d}{2B\sigma(T)}, \quad I_i(x) < \frac{d}{2Bm}, \quad i = 1, 2, \ldots, m, \quad x \in (0, d],$$
(6.1)
$$\max_{t \in [0,\sigma(T)]_{\mathbb{T}}} f(t, x) < \frac{c}{2B\sigma(T)}, \quad I_i(x) < \frac{c}{2Bm}, \quad i = 1, 2, \ldots, m, \quad x \in (0, c],$$
(6.2)
$$\min_{t \in [0,\sigma(T)]_{\mathbb{T}}} f(t, x) > \frac{a}{2A\sigma(T)}, \quad I_i(x) > \frac{a}{2Am}, \quad i = 1, 2, \ldots, m, \quad x \in [a, a/\delta].$$
(6.3)

Then, problem (1.3) has at least three positive solutions.

Proof. For x ∈ K, we define
$$\alpha(x) = \min_{t \in [0,\sigma(T)]_{\mathbb{T}}} x(t).$$

It is easy to verify that α is a nonnegative continuous concave functional on K with α(x) ≤ ||x|| for all x ∈ K.

We first claim that if there exists a positive constant r such that max_{t ∈ [0,σ(T)]_𝕋} f(t, x) < r/(2Bσ(T)) and I_i(x) < r/(2Bm), i = 1, 2, …, m, for x ∈ (0, r], then Φ : K̄_r → K_r.

Indeed, if x ∈ K̄_r, then
$$(\Phi x)(t) \le B\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + B\sum_{i=1}^{m} I_i(x(t_i)) < B\sigma(T)\frac{r}{2B\sigma(T)} + Bm\frac{r}{2Bm} = r.$$

Thus, ||Φx|| < r, that is, Φx ∈ K_r. Hence, if (6.1) or (6.2) holds, then Φ maps K̄_d into K_d or K̄_c into K_c, respectively, so condition (2) of Lemma 6.1 holds.

Let b = a/δ. Next, we show that {x ∈ K(α, a, b) : α(x) > a} ≠ ∅ and that α(Φx) > a for x ∈ K(α, a, b). In fact, a < (1 + δ)a/(2δ) < a/δ, so the constant function x ≡ (1 + δ)a/(2δ) belongs to {x ∈ K(α, a, b) : α(x) > a}.

Since (6.3) holds, for x ∈ K(α, a, b) we obtain
$$(\Phi x)(t) \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + A\sum_{i=1}^{m} I_i(x(t_i)) > A\sigma(T)\frac{a}{2A\sigma(T)} + Am\frac{a}{2Am} = a.$$

So α(Φx) > a for all x ∈ K(α, a, b), and condition (1) of Lemma 6.1 holds.

Finally, suppose x ∈ K(α, a, c) with ||Φx|| > b = a/δ. Then we have
$$\alpha(\Phi x) = \min_{t \in [0,\sigma(T)]_{\mathbb{T}}}\left[\int_0^{\sigma(T)} G(t, s)f(s, x(\sigma(s)))\,\Delta s + \sum_{i=1}^{m} G(t, t_i)I_i(x(t_i))\right] \ge A\int_0^{\sigma(T)} f(s, x(\sigma(s)))\,\Delta s + A\sum_{i=1}^{m} I_i(x(t_i)) \ge \frac{A}{B}\left[\int_0^{\sigma(T)} G(t, s)f(s, x(\sigma(s)))\,\Delta s + \sum_{i=1}^{m} G(t, t_i)I_i(x(t_i))\right] = \frac{A}{B}(\Phi x)(t)$$
for all t ∈ [0, σ(T)]_𝕋. Thus,
$$\alpha(\Phi x) \ge \frac{A}{B}\max_{t \in [0,\sigma(T)]_{\mathbb{T}}}(\Phi x)(t) = \frac{A}{B}\|\Phi x\| > \frac{A}{B}\cdot\frac{a}{\delta} = a.$$
To sum up, all the conditions of Lemma 6.1 are satisfied. Hence, Φ has at least three fixed points; that is, problem (1.3) has at least three positive solutions x_1, x_2, x_3 such that
$$\|x_1\| < d, \quad a < \min_{t \in [0,\sigma(T)]_{\mathbb{T}}} x_2(t), \quad \|x_3\| > d, \quad \min_{t \in [0,\sigma(T)]_{\mathbb{T}}} x_3(t) < a.$$

The proof is complete. ■

7 Examples

In this section, we give some examples to illustrate our main results.

Example 7.1. Take 𝕋 = ⋃_{n=0}^{∞}[2n, 2n + 1]. We consider the following IBVP on 𝕋:
$$\begin{cases} x^{\Delta}(t) + p(t)x(\sigma(t)) = f(t, x(\sigma(t))), & t \in [0, 3]_{\mathbb{T}},\ t \ne \tfrac{1}{2},\\ x(\tfrac{1}{2}^+) - x(\tfrac{1}{2}^-) = I(x(\tfrac{1}{2})),\\ \alpha x(0) - \beta x(4) = \int_0^4 g(s)x(s)\,\Delta s, \end{cases}$$
(7.1)
where T = 3, p(t) = t, f(t, x(σ(t))) = (t + 1)(x(σ(t)))², I(x) = x³, α = 1, β = 1/2, and
g(t) = t, t ∈ [0, 1]