Open Access

Eigenvalues of complementary Lidstone boundary value problems

Boundary Value Problems 2012, 2012:49

DOI: 10.1186/1687-2770-2012-49

Received: 9 December 2011

Accepted: 24 April 2012

Published: 24 April 2012

Abstract

We consider the following complementary Lidstone boundary value problem

$$(-1)^m y^{(2m+1)}(t) = \lambda F(t, y(t), y'(t)), \quad t \in (0, 1), \qquad y(0) = 0, \quad y^{(2k-1)}(0) = y^{(2k-1)}(1) = 0, \quad 1 \le k \le m,$$

where λ > 0. The values of λ are characterized so that the boundary value problem has a positive solution. Moreover, we derive explicit intervals of λ such that for any λ in the interval, the existence of a positive solution of the boundary value problem is guaranteed. Some examples are also included to illustrate the results obtained. Note that the nonlinear term F depends on y' and this derivative dependence is seldom investigated in the literature.

AMS Subject Classification: 34B15.

Keywords

eigenvalues; positive solutions; complementary Lidstone boundary value problems

1 Introduction

In this article, we shall consider the complementary Lidstone boundary value problem
$$(-1)^m y^{(2m+1)}(t) = \lambda F(t, y(t), y'(t)), \quad t \in (0, 1), \qquad y(0) = 0, \quad y^{(2k-1)}(0) = y^{(2k-1)}(1) = 0, \quad 1 \le k \le m,$$
(1.1)

where m ≥ 1, λ > 0, and F is continuous at least in the interior of the domain of interest. Note that the nonlinear term F involves a derivative of the dependent variable; such derivative dependence is seldom studied in the literature, as most research articles on boundary value problems consider nonlinear terms that involve y only.

We are interested in the existence of a positive solution of (1.1). By a positive solution y of (1.1), we mean a nontrivial $y \in C^{(2m+1)}(0, 1)$ satisfying (1.1) with y(t) ≥ 0 for t ∈ (0, 1). If, for a particular λ, the boundary value problem (1.1) has a positive solution y, then λ is called an eigenvalue and y a corresponding eigenfunction of (1.1). We shall denote the set of eigenvalues of (1.1) by E, i.e.,
$$E = \{ \lambda > 0 \mid (1.1) \text{ has a positive solution} \}.$$

The focus of this article is the eigenvalue problem; as such, we shall characterize the values of λ for which the boundary value problem (1.1) has a positive solution. To be specific, we shall establish criteria for E to contain an interval, and for E to be an interval (which may be either bounded or unbounded). In addition, explicit subintervals of E are derived.

Complementary Lidstone interpolation and boundary value problems were introduced very recently in [1] and studied by Agarwal et al. [2, 3], who consider an odd order ((2m + 1)th order) differential equation together with boundary data at the odd order derivatives
$$y(0) = a_0, \qquad y^{(2k-1)}(0) = a_k, \quad y^{(2k-1)}(1) = b_k, \quad 1 \le k \le m.$$
(1.2)
The boundary conditions (1.2) are known as complementary Lidstone boundary conditions; they naturally complement the Lidstone boundary conditions [4–7], which involve even order derivatives. To be precise, the Lidstone boundary value problem comprises an even order (2m th order) differential equation and the Lidstone boundary conditions
$$y^{(2k)}(0) = a_k, \qquad y^{(2k)}(1) = b_k, \quad 0 \le k \le m-1.$$
(1.3)

There is a vast literature on Lidstone interpolation and boundary value problems. Lidstone interpolation has a long history dating from 1929, when Lidstone [8] introduced a generalization of Taylor's series that approximates a given function in the neighborhood of two points instead of one. Further characterization can be found in [9–16]. More research on Lidstone interpolation as well as Lidstone splines appears in [1, 17–23]. On the other hand, the Lidstone boundary value problem and several of its particular cases have been the subject of numerous investigations; see [4, 18, 24–37] and the references cited therein. It is noted that in most of these studies the nonlinear terms considered do not involve derivatives of the dependent variable; only a handful of articles [30, 31, 34, 35] tackle nonlinear terms that involve even order derivatives. Our study of the complementary Lidstone boundary value problem (1.1), where F depends on a derivative, therefore extends and complements the rich literature on boundary value problems, and in particular on Lidstone boundary value problems.

The plan of the article is as follows. In Section 2, we shall state a fixed point theorem due to Krasnosel'skii [38] and develop some inequalities for a certain Green's function which are needed later. The characterization of the set E is presented in Section 3. Finally, in Section 4, we establish explicit subintervals of E.

2 Preliminaries

Theorem 2.1. [38] Let B be a Banach space, and let $C \subset B$ be a cone. Assume $\Omega_1, \Omega_2$ are open subsets of B with $0 \in \Omega_1$, $\bar{\Omega}_1 \subset \Omega_2$, and let $S : C \cap (\bar{\Omega}_2 \setminus \Omega_1) \to C$ be a completely continuous operator such that either

(a) $\|Sy\| \le \|y\|$ for $y \in C \cap \partial\Omega_1$, and $\|Sy\| \ge \|y\|$ for $y \in C \cap \partial\Omega_2$; or

(b) $\|Sy\| \ge \|y\|$ for $y \in C \cap \partial\Omega_1$, and $\|Sy\| \le \|y\|$ for $y \in C \cap \partial\Omega_2$.

Then, S has a fixed point in $C \cap (\bar{\Omega}_2 \setminus \Omega_1)$.

To tackle the complementary Lidstone boundary value problem (1.1), let us review certain attributes of the Lidstone boundary value problem. Let g m (t, s) be the Green's function of the Lidstone boundary value problem
$$x^{(2m)}(t) = 0, \quad t \in (0, 1), \qquad x^{(2k)}(0) = x^{(2k)}(1) = 0, \quad 0 \le k \le m-1.$$
(2.1)
The Green's function g_m(t, s) can be expressed as [4, 5]
$$g_m(t, s) = \int_0^1 g(t, u)\, g_{m-1}(u, s)\, du,$$
(2.2)
where
$$g_1(t, s) = g(t, s) = \begin{cases} t(s-1), & 0 \le t \le s \le 1, \\ s(t-1), & 0 \le s \le t \le 1. \end{cases}$$
(2.3)
Further, it is known that
$$|g_m(t, s)| = (-1)^m g_m(t, s) \quad \text{and} \quad g_m(t, s) = g_m(s, t), \quad (t, s) \in (0, 1) \times (0, 1).$$
(2.4)
We also have the inequality
$$\frac{2}{\pi^2} \sin \pi t \le t(1-t) \le \frac{1}{\pi} \sin \pi t, \quad t \in [0, 1].$$
(2.5)
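Since (2.5) is used repeatedly below, a quick numerical sanity check may be helpful; the following Python sketch (the function name is ours) verifies both bounds of (2.5) on a uniform grid of [0, 1].

```python
import math

def check_sine_bounds(n=1000):
    """Spot-check (2.5): (2/pi^2) sin(pi t) <= t(1-t) <= (1/pi) sin(pi t) on [0, 1]."""
    for i in range(n + 1):
        t = i / n
        s = math.sin(math.pi * t)
        # both inequalities, with a small tolerance for floating point noise
        if not (2 / math.pi**2 * s <= t * (1 - t) + 1e-12
                and t * (1 - t) <= s / math.pi + 1e-12):
            return False
    return True

print(check_sine_bounds())  # True
```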

The following two lemmas give upper and lower bounds for |g_m(t, s)|; they play an important role in the subsequent development.

Lemma 2.1. For (t, s) ∈ [0, 1] × [0, 1], we have
$$|g_m(t, s)| \le \frac{1}{\pi^{2m-1}} \sin \pi s.$$
(2.6)
Proof. For (t, s) ∈ [0, 1] × [0, 1], it is clear from (2.3) that
$$|g(t, s)| \le s(1-s).$$
(2.7)
Using (2.7), (2.4), and (2.5) in (2.2) yields, for (t, s) ∈ [0, 1] × [0, 1],
$$|g_m(t, s)| = \int_0^1 |g(t, u)|\, |g_{m-1}(u, s)|\, du \le \int_0^1 |g_{m-1}(u, s)|\, u(1-u)\, du \le \frac{1}{\pi} \int_0^1 |g_{m-1}(s, u)| \sin \pi u\, du.$$
(2.8)
By induction, we can show that
$$\int_0^1 |g_m(t, s)| \sin \pi s\, ds = \frac{1}{\pi^{2m}} \sin \pi t, \quad t \in [0, 1].$$
(2.9)

Now (2.6) is immediate by applying (2.9) to (2.8).
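For m = 1, identity (2.9) can be confirmed numerically, since g₁ = g is explicit in (2.3); the sketch below (helper names ours) compares a composite Simpson approximation of the left-hand side of (2.9) with sin(πt)/π², splitting the integral at the kink s = t so that each piece is smooth.

```python
import math

def g1_abs(t, s):
    # |g_1(t, s)| from (2.3): t(1 - s) for t <= s, and s(1 - t) for s <= t
    return t * (1 - s) if t <= s else s * (1 - t)

def simpson(f, a, b, n=1000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def lhs_2_9(t):
    # int_0^1 |g_1(t, s)| sin(pi s) ds, split at s = t
    f = lambda s: g1_abs(t, s) * math.sin(math.pi * s)
    return simpson(f, 0.0, t) + simpson(f, t, 1.0)

# (2.9) with m = 1 predicts sin(pi t)/pi^2
for t in (0.2, 0.5, 0.9):
    assert abs(lhs_2_9(t) - math.sin(math.pi * t) / math.pi**2) < 1e-9
```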

Lemma 2.2. Let δ ∈ (0, 1/2) be given. For (t, s) ∈ [δ, 1-δ] × [0, 1], we have
$$|g_m(t, s)| \ge \frac{2\delta}{\pi^{2m}} \sin \pi s.$$
(2.10)
Proof. For (t, s) ∈ [δ, 1-δ] × [0, 1], from (2.3) we find
$$|g(t, s)| \ge \begin{cases} \delta(1-s), & t \le s, \\ [1-(1-\delta)]\, s, & s \le t \end{cases} \;\ge\; \delta s(1-s).$$
(2.11)
Then, using (2.11), (2.4), and (2.5) in (2.2), we get, for (t, s) ∈ [δ, 1-δ] × [0, 1],
$$|g_m(t, s)| = \int_0^1 |g(t, u)|\, |g_{m-1}(u, s)|\, du \ge \delta \int_0^1 |g_{m-1}(u, s)|\, u(1-u)\, du \ge \frac{2\delta}{\pi^2} \int_0^1 |g_{m-1}(s, u)| \sin \pi u\, du,$$

which, in view of (2.9), gives (2.10) immediately.

Remark 2.1. The bounds in Lemmas 2.1 and 2.2 are sharper than those given in the literature [4, 5, 35, 37].
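For m = 1, the bounds of Lemmas 2.1 and 2.2 can be verified directly from the explicit g₁ in (2.3); the grid check below (function name ours) tests the upper bound on [0, 1] × [0, 1] and the lower bound for t ∈ [δ, 1-δ].

```python
import math

def g1_abs(t, s):
    # |g_1(t, s)| from (2.3)
    return t * (1 - s) if t <= s else s * (1 - t)

def check_green_bounds(delta=0.25, n=200):
    for i in range(n + 1):
        t = i / n
        for j in range(n + 1):
            s = j / n
            # Lemma 2.1 with m = 1: |g_1(t, s)| <= (1/pi) sin(pi s)
            if g1_abs(t, s) > math.sin(math.pi * s) / math.pi + 1e-12:
                return False
            # Lemma 2.2 with m = 1: |g_1(t, s)| >= (2*delta/pi^2) sin(pi s) on [delta, 1-delta]
            if delta <= t <= 1 - delta:
                if g1_abs(t, s) < 2 * delta / math.pi**2 * math.sin(math.pi * s) - 1e-12:
                    return False
    return True

print(check_green_bounds())  # True
```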

3 Eigenvalues of (1.1)

To tackle (1.1), we first consider the initial value problem
$$y'(t) = x(t), \quad t \in (0, 1), \qquad y(0) = 0,$$
(3.1)
whose solution is simply
$$y(t) = \int_0^t x(s)\, ds.$$
(3.2)
Taking into account (3.1) and (3.2), the complementary Lidstone boundary value problem (1.1) reduces to the Lidstone boundary value problem
$$(-1)^m x^{(2m)}(t) = \lambda F\left(t, \int_0^t x(s)\, ds,\, x(t)\right), \quad t \in (0, 1), \qquad x^{(2k-2)}(0) = x^{(2k-2)}(1) = 0, \quad 1 \le k \le m.$$
(3.3)
If (3.3) has a positive solution x*, then by virtue of (3.2), $y^*(t) = \int_0^t x^*(s)\, ds$ is a positive solution of (1.1). Hence, the existence of a positive solution of the complementary Lidstone boundary value problem (1.1) follows from the existence of a positive solution of the Lidstone boundary value problem (3.3). It is clear that an eigenvalue of (3.3) is also an eigenvalue of (1.1); thus
$$E = \{ \lambda > 0 \mid (1.1) \text{ has a positive solution} \} = \{ \lambda > 0 \mid (3.3) \text{ has a positive solution} \}.$$

With the lemmas developed in Section 2 and a technique to handle the nonlinear term F, we shall study the eigenvalue problem (1.1) via (3.3).

For easy reference, we list below the conditions that are used later. In these conditions, f, α, and β are continuous functions with f : (0, ∞) × (0, ∞) → (0, ∞) and α, β : (0, 1) → [0, ∞).

(A1) f is nondecreasing in each of its arguments, i.e., for u, u₁, u₂, v, v₁, v₂ ∈ (0, ∞) with u₁ ≤ u₂ and v₁ ≤ v₂, we have
$$f(u_1, v) \le f(u_2, v) \quad \text{and} \quad f(u, v_1) \le f(u, v_2);$$
(A2) for t ∈ (0, 1) and u, v ∈ (0, ∞),
$$\alpha(t) f(u, v) \le F(t, u, v) \le \beta(t) f(u, v);$$

(A3) α(t) is not identically zero on any nondegenerate subinterval of (0, 1), and there exists a₀ ∈ (0, 1] such that α(t) ≥ a₀β(t) for all t ∈ (0, 1);

(A4) $0 < \int_0^1 \beta(t) \sin \pi t\, dt < \infty$;

(A5) for t ∈ (0, 1) and u, u₁, u₂, v, v₁, v₂ ∈ (0, ∞) with u₁ ≤ u₂ and v₁ ≤ v₂, we have
$$F(t, u_1, v) \le F(t, u_2, v) \quad \text{and} \quad F(t, u, v_1) \le F(t, u, v_2).$$
We shall consider the Banach space B = C[0, 1] equipped with the norm
$$\|x\| = \sup_{t \in [0, 1]} |x(t)|, \quad x \in B.$$
For a given δ ∈ (0, 1/2), let the cone C_δ be defined by
$$C_\delta = \left\{ x \in B \;\middle|\; x(t) \ge 0,\ t \in [0, 1];\ \min_{t \in [\delta, 1-\delta]} x(t) \ge \gamma \|x\| \right\}$$
where $\gamma = \frac{2\delta}{\pi}\, a_0$ (a₀ is defined in (A3)). Further, let
$$C_\delta(M) = \{ x \in C_\delta \mid \|x\| \le M \}.$$
Let the operator $S : C_\delta \to B$ be defined by
$$Sx(t) = \lambda \int_0^1 (-1)^m g_m(t, s)\, F\left(s, \int_0^s x(\tau)\, d\tau,\, x(s)\right) ds = \lambda \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x(\tau)\, d\tau,\, x(s)\right) ds, \quad t \in [0, 1].$$
(3.4)

To obtain a positive solution of (3.3), we shall seek a fixed point of the operator S in the cone C δ .
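As a small illustration of the cone C_δ (the sample functions and the choice a₀ = 1 are ours), the sketch below checks the defining inequalities on a grid: x(t) = sin πt belongs to C_δ for δ = 1/4, while a function vanishing inside [δ, 1-δ] does not.

```python
import math

def in_cone(x, delta, a0=1.0, n=1000):
    """Grid check of the cone C_delta: x >= 0 on [0, 1] and
    min over [delta, 1-delta] of x >= gamma * ||x||, gamma = (2*delta/pi)*a0."""
    gamma = 2 * delta / math.pi * a0
    grid = [i / n for i in range(n + 1)]
    vals = [x(t) for t in grid]
    if min(vals) < -1e-12:          # nonnegativity on [0, 1]
        return False
    norm = max(abs(v) for v in vals)
    inner = min(v for t, v in zip(grid, vals) if delta <= t <= 1 - delta)
    return inner >= gamma * norm - 1e-12

print(in_cone(lambda t: math.sin(math.pi * t), 0.25))   # True
print(in_cone(lambda t: abs(t - 0.5), 0.25))            # False: vanishes at t = 1/2
```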

Further, we define the operators $U, V : C_\delta \to B$ by
$$Ux(t) = \lambda \int_0^1 |g_m(t, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds$$
and
$$Vx(t) = \lambda \int_0^1 |g_m(t, s)|\, \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds.$$
If (A2) holds, then
$$Ux(t) \le Sx(t) \le Vx(t), \quad t \in [0, 1].$$
(3.5)

Lemma 3.1. Let (A1)-(A4) hold. Then, the operator S is compact on the cone C δ .

Proof. Let us consider the case when α(t) is unbounded in a deleted right neighborhood of 0 and also in a deleted left neighborhood of 1. Clearly, β(t) is also unbounded near 0 and 1.

For n ∈ {1, 2, 3, ...}, let $\alpha_n, \beta_n : [0, 1] \to [0, \infty)$ be defined by
$$\alpha_n(t) = \begin{cases} \alpha\left(\frac{1}{n+1}\right), & 0 \le t \le \frac{1}{n+1}, \\ \alpha(t), & \frac{1}{n+1} \le t \le \frac{n}{n+1}, \\ \alpha\left(\frac{n}{n+1}\right), & \frac{n}{n+1} \le t \le 1, \end{cases}$$
and
$$\beta_n(t) = \begin{cases} \beta\left(\frac{1}{n+1}\right), & 0 \le t \le \frac{1}{n+1}, \\ \beta(t), & \frac{1}{n+1} \le t \le \frac{n}{n+1}, \\ \beta\left(\frac{n}{n+1}\right), & \frac{n}{n+1} \le t \le 1. \end{cases}$$
Also, we define the operators $U_n, V_n : C_\delta \to B$ by
$$U_n x(t) = \lambda \int_0^1 |g_m(t, s)|\, \alpha_n(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds$$
and
$$V_n x(t) = \lambda \int_0^1 |g_m(t, s)|\, \beta_n(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds.$$
It is standard that for each n, both U_n and V_n are compact operators on C_δ. Let M > 0 and x ∈ C_δ(M). For t ∈ [0, 1], we get
$$|V_n x(t) - Vx(t)| \le \int_0^1 |g_m(t, s)|\, |\beta_n(s) - \beta(s)|\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds = \int_0^{\frac{1}{n+1}} |g_m(t, s)|\, |\beta_n(s) - \beta(s)|\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds + \int_{\frac{n}{n+1}}^1 |g_m(t, s)|\, |\beta_n(s) - \beta(s)|\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds.$$
By the monotonicity of f (see (A1)), we have
$$f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \le f\left(\int_0^1 M\, d\tau,\, M\right) = f(M, M).$$
(3.6)
Coupling this with Lemma 2.1, it follows that
$$|V_n x(t) - Vx(t)| \le f(M, M) \left[ \int_0^{\frac{1}{n+1}} \frac{1}{\pi^{2m-1}} \left|\beta\left(\tfrac{1}{n+1}\right) - \beta(s)\right| \sin \pi s\, ds + \int_{\frac{n}{n+1}}^1 \frac{1}{\pi^{2m-1}} \left|\beta\left(\tfrac{n}{n+1}\right) - \beta(s)\right| \sin \pi s\, ds \right].$$

The integrability of β(t) sin πt (see (A4)) ensures that V n converges uniformly to V on C δ (M). Hence, V is compact on C δ . By a similar argument, we see that U n converges uniformly to U on C δ (M) and therefore U is also compact on C δ . It follows immediately from inequality (3.5) that the operator S is compact on C δ .

Remark 3.1. From the proof of Lemma 3.1, we see that if the functions α and β are continuous on the closed interval [0, 1], then conditions (A1) and (A4) are not needed in Lemma 3.1.

The first result shows that E contains an interval.

Theorem 3.1. Let (A1)-(A4) hold. Then, there exists c > 0 such that the interval (0, c] ⊆ E.

Proof. Let M > 0 be given. Define
$$c = M \left[ f(M, M) \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s) \sin \pi s\, ds \right]^{-1}.$$
(3.7)
Let λ ∈ (0, c]. We shall prove that $S(C_\delta(M)) \subseteq C_\delta(M)$. Let x ∈ C_δ(M). First, we shall show that Sx ∈ C_δ. It is clear from (3.5) that
$$Sx(t) \ge \lambda \int_0^1 |g_m(t, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \ge 0, \quad t \in [0, 1].$$
(3.8)
Further, from (3.5) and Lemma 2.1 we get
$$Sx(t) \le \lambda \int_0^1 |g_m(t, s)|\, \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds, \quad t \in [0, 1],$$
which leads to
$$\|Sx\| \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds.$$
(3.9)
Now, applying (3.5), Lemma 2.2, (A3), and (3.9) successively, we find, for t ∈ [δ, 1-δ],
$$Sx(t) \ge \lambda \int_0^1 |g_m(t, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_0^1 \frac{2\delta}{\pi^{2m}}\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds \ge \lambda \int_0^1 \frac{2\delta}{\pi^{2m}}\, a_0\, \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds \ge \frac{2\delta}{\pi}\, a_0\, \|Sx\| = \gamma \|Sx\|.$$
Therefore,
$$\min_{t \in [\delta, 1-\delta]} Sx(t) \ge \gamma \|Sx\|.$$
(3.10)

Inequalities (3.8) and (3.10) imply that Sx ∈ C_δ.

Next, we shall verify that ‖Sx‖ ≤ M. For this, an application of (3.5), Lemma 2.1, (3.6), and (3.7) provides
$$Sx(t) \le c \int_0^1 |g_m(t, s)|\, \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \le c\, f(M, M) \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s) \sin \pi s\, ds = M, \quad t \in [0, 1],$$
or equivalently
$$\|Sx\| \le M.$$

Hence, $S(C_\delta(M)) \subseteq C_\delta(M)$. Also, standard arguments yield that S is completely continuous. By Schauder's fixed point theorem, S has a fixed point in C_δ(M). Clearly, this fixed point is a positive solution of (3.3), and therefore λ is an eigenvalue of (3.3). Since λ ∈ (0, c] is arbitrary, it follows immediately that (0, c] ⊆ E.

Remark 3.2. From the proof of Theorem 3.1, we see that (A2) and (A3) lead to $S : C_\delta \to C_\delta$.

Theorem 3.2. Let (A1)-(A5) hold. Suppose that λ* ∈ E. Then, for any λ ∈ (0, λ*), we have λ ∈ E, i.e., (0, λ*] ⊆ E.

Proof. Let x* be an eigenfunction corresponding to the eigenvalue λ*. Thus, we have
$$x^*(t) = Sx^*(t) = \lambda^* \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x^*(\tau)\, d\tau,\, x^*(s)\right) ds, \quad t \in [0, 1].$$
(3.11)
Define
$$K^* = \{ x \in B \mid 0 \le x(t) \le x^*(t),\ t \in [0, 1] \}.$$
Let λ ∈ (0, λ*) and x ∈ K*. Using (A5), we get
$$0 \le Sx(t) = \lambda \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \le \lambda^* \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x^*(\tau)\, d\tau,\, x^*(s)\right) ds = Sx^*(t), \quad t \in [0, 1],$$

where the last equality follows from (3.11). This immediately implies that the operator S maps K* into K*. Moreover, the operator S is continuous and completely continuous. Schauder's fixed point theorem guarantees that S has a fixed point in K*, which is a positive solution of (3.3). Hence, λ is an eigenvalue, i.e., λ ∈ E.

The following result shows that E is an interval.

Corollary 3.1. Let (A1)-(A5) hold. If E ≠ ∅, then E is an interval.

Proof. Suppose E is not an interval. Then, there exist $\lambda_0, \lambda_0' \in E$ with λ₀ < λ₀′, and some $\tau \in (\lambda_0, \lambda_0')$ with τ ∉ E. However, this is not possible, as Theorem 3.2 guarantees that τ ∈ E. Hence, E is an interval.

The following two results give the upper and lower bounds of an eigenvalue in terms of some parameters of the corresponding eigenfunction.

Theorem 3.3. Let (A1) and (A2) hold. Assume that m is odd. Let λ be an eigenvalue of (3.3) and x ∈ C_δ be a corresponding eigenfunction. If $x^{(i)}(0) = b_i$, i = 1, 3, ..., 2m-1, where $b_{2m-1} > 0$, then λ satisfies
$$M_1 \le \lambda \le M_2$$
(3.12)
where
$$M_1 = \max_{0 \le k \le m-1} \left[ \sum_{i=k}^{m-1} \frac{b_{2i+1}}{(2(i-k)+1)!} \right] \left[ f(D, D) \int_0^1 \frac{(1-s)^{2(m-k)-1}}{(2(m-k)-1)!}\, \beta(s)\, ds \right]^{-1}, \qquad M_2 = \min_{0 \le k \le m-1} \left[ \sum_{i=k}^{m-1} \frac{b_{2i+1}}{(2(i-k)+1)!} \right] \left[ f(0, 0) \int_0^1 \frac{(1-s)^{2(m-k)-1}}{(2(m-k)-1)!}\, \alpha(s)\, ds \right]^{-1},$$
and
$$D = \max_{t \in [0, 1]} \sum_{i=0}^{m-1} \frac{b_{2i+1}\, t^{2i+1}}{(2i+1)!}.$$

Proof. For n {1, 2, 3, ...}, we define f n = f * ω n , where ω n is a standard mollifier [25] such that f n is Lipschitz and converges uniformly to f.

For a fixed n, let λ_n be an eigenvalue and x_n(t), with $x_n^{(i)}(0) = b_i$, i = 1, 3, ..., 2m-1, be a corresponding eigenfunction of the following boundary value problem
$$(-1)^m x_n^{(2m)}(t) = \lambda_n F_n\left(t, \int_0^t x_n(s)\, ds,\, x_n(t)\right), \quad t \in [0, 1],$$
(3.13)
$$x_n^{(2i)}(0) = x_n^{(2i)}(1) = 0, \quad 0 \le i \le m-1,$$
(3.14)
where F_n converges uniformly to F, and for u, v ∈ (0, ∞),
$$\alpha_n(t) f_n(u, v) \le F_n(t, u, v) \le \beta_n(t) f_n(u, v), \quad t \in (0, 1)$$
(3.15)

(see the proof of Lemma 3.1 for the definitions of α n (t) and β n (t)).

It is clear that x_n(t) is the unique solution of the initial value problem (3.13),
$$x_n^{(i)}(0) = 0, \quad i = 0, 2, \ldots, 2m-2; \qquad x_n^{(i)}(0) = b_i, \quad i = 1, 3, \ldots, 2m-1.$$
(3.16)
First, we shall establish an upper bound for x_n. Since
$$(-1)^m x_n^{(2m)}(t) = \lambda_n F_n\left(t, \int_0^t x_n(s)\, ds,\, x_n(t)\right) \ge \lambda_n \alpha_n(t)\, f_n\left(\int_0^t x_n(s)\, ds,\, x_n(t)\right) \ge 0,$$
we see that $x_n^{(2m-1)}(t)$ is nonincreasing, and hence
$$x_n^{(2m-1)}(t) \le x_n^{(2m-1)}(0) = b_{2m-1}, \quad t \in [0, 1].$$
(3.17)
In view of the initial conditions (3.16) and also (3.17), we find
$$x_n^{(2m-2)}(t) = \int_0^t x_n^{(2m-1)}(s)\, ds \le b_{2m-1}\, t, \quad t \in [0, 1].$$
(3.18)
Next, an application of (3.18) gives
$$x_n^{(2m-3)}(t) = b_{2m-3} + \int_0^t x_n^{(2m-2)}(s)\, ds \le b_{2m-3} + b_{2m-1} \frac{t^2}{2!}, \quad t \in [0, 1].$$
By repeating the process, we get
$$x_n(t) \le b_1 t + b_3 \frac{t^3}{3!} + \cdots + b_{2m-1} \frac{t^{2m-1}}{(2m-1)!} \le D, \quad t \in [0, 1].$$
(3.19)
By the monotonicity of f_n, we have
$$f_n\left(\int_0^t x_n(s)\, ds,\, x_n(t)\right) \le f_n\left(\int_0^1 D\, ds,\, D\right) = f_n(D, D)$$
and
$$f_n\left(\int_0^t x_n(s)\, ds,\, x_n(t)\right) \ge f_n\left(\int_0^t 0\, ds,\, 0\right) = f_n(0, 0).$$
Coupling this with (3.13) and (3.15), it follows that
$$\lambda_n \alpha_n(t) f_n(0, 0) \le (-1)^m x_n^{(2m)}(t) \le \lambda_n \beta_n(t) f_n(D, D), \quad t \in [0, 1].$$
(3.20)
Once again using the initial conditions (3.16), repeated integration of (3.20) from 0 to t provides
$$\phi_{1,k}(t) \le x_n^{(2k)}(t) \le \phi_{2,k}(t), \quad t \in [0, 1], \quad 0 \le k \le m-1,$$
(3.21)
where
$$\phi_{1,k}(t) = \sum_{i=k}^{m-1} \frac{b_{2i+1}\, t^{2(i-k)+1}}{(2(i-k)+1)!} - \lambda_n f_n(D, D) \int_0^t \frac{(t-s)^{2(m-k)-1}}{(2(m-k)-1)!}\, \beta_n(s)\, ds$$
and
$$\phi_{2,k}(t) = \sum_{i=k}^{m-1} \frac{b_{2i+1}\, t^{2(i-k)+1}}{(2(i-k)+1)!} - \lambda_n f_n(0, 0) \int_0^t \frac{(t-s)^{2(m-k)-1}}{(2(m-k)-1)!}\, \alpha_n(s)\, ds.$$
In order to satisfy the boundary conditions $x_n^{(2k)}(1) = 0$, 0 ≤ k ≤ m-1, from inequality (3.21) it is necessary that
$$\phi_{1,k}(1) \le 0 \quad \text{and} \quad \phi_{2,k}(1) \ge 0, \quad 0 \le k \le m-1.$$
This readily implies
$$M_{1,n} \le \lambda_n \le M_{2,n}$$
(3.22)
where
$$M_{1,n} = \max_{0 \le k \le m-1} \left[ \sum_{i=k}^{m-1} \frac{b_{2i+1}}{(2(i-k)+1)!} \right] \left[ f_n(D, D) \int_0^1 \frac{(1-s)^{2(m-k)-1}}{(2(m-k)-1)!}\, \beta_n(s)\, ds \right]^{-1}$$
and
$$M_{2,n} = \min_{0 \le k \le m-1} \left[ \sum_{i=k}^{m-1} \frac{b_{2i+1}}{(2(i-k)+1)!} \right] \left[ f_n(0, 0) \int_0^1 \frac{(1-s)^{2(m-k)-1}}{(2(m-k)-1)!}\, \alpha_n(s)\, ds \right]^{-1}.$$
From (3.20) it is observed (by using the initial conditions (3.16) and repeated integration) that $\{x_n^{(i)}\}_{n=1}^\infty$, 0 ≤ i ≤ 2m-1, is a uniformly bounded sequence on [0, 1]. Thus, there exists a subsequence, which can be relabeled as $\{x_n\}_{n=1}^\infty$, that converges uniformly (in fact, in $C^{(2m-1)}$-norm) to some x on [0, 1]. We note that each x_n(t) can be expressed as
$$x_n(t) = \lambda_n \int_0^1 |g_m(t, s)|\, F_n\left(s, \int_0^s x_n(\tau)\, d\tau,\, x_n(s)\right) ds, \quad t \in [0, 1].$$
(3.23)
Since $\{\lambda_n\}_{n=1}^\infty$ is a bounded sequence (from (3.22)), there is a subsequence, which can be relabeled as $\{\lambda_n\}_{n=1}^\infty$, that converges to some λ. Then, letting n → ∞ in (3.23) yields
$$x(t) = \lambda \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x(\tau)\, d\tau,\, x(s)\right) ds, \quad t \in [0, 1].$$

This means that x(t) is an eigenfunction of (3.3) corresponding to the eigenvalue λ. Further, $x^{(i)}(0) = b_i$, i = 1, 3, ..., 2m-1, and inequality (3.12) follows from (3.22) immediately.
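The bounds M₁ and M₂ can pin λ down exactly in degenerate cases. In the toy instance m = 1, f ≡ 1, α = β ≡ 1, b₁ = 1 (our choices, not from the paper), both M₁ and M₂ reduce to the single k = 0 term and equal 2, while solving -x″ = λ, x(0) = x(1) = 0, x′(0) = 1 directly gives x(t) = t - λt²/2, so x(1) = 0 forces λ = 2:

```python
import math

# Assumed toy instance: m = 1, f = 1 constant, alpha = beta = 1, b_1 = 1.
D = 1.0                      # D = max over [0,1] of b_1 * t
f_DD = f_00 = 1.0            # f(D, D) = f(0, 0) = 1 for constant f
integral = 0.5               # int_0^1 (1 - s) ds
M1 = (1.0 / math.factorial(1)) / (f_DD * integral)   # only the k = 0 term survives
M2 = (1.0 / math.factorial(1)) / (f_00 * integral)

# Direct solution of -x'' = lam, x(0) = x(1) = 0, x'(0) = 1:
lam = 2.0
x = lambda t: t - lam * t * t / 2        # x(t) = t(1 - t) when lam = 2
assert x(0.0) == 0.0 and x(1.0) == 0.0
assert all(x(i / 100) >= 0 for i in range(101))      # positivity on [0, 1]
assert M1 <= lam <= M2
print(M1, lam, M2)  # 2.0 2.0 2.0
```

In this degenerate case the eigenvalue interval [M₁, M₂] of (3.12) collapses to the single point λ = 2.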

Theorem 3.4. Let (A1)-(A4) hold. Let λ be an eigenvalue of (3.3) and x ∈ C_δ be a corresponding eigenfunction. Further, let ‖x‖ = p. Then,
$$\lambda \ge \frac{p}{f(p, p)} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1}$$
(3.24)
and
$$\lambda \le \frac{p}{f\left(\gamma p \left(\frac{1}{2} - \delta\right),\, \gamma p\right)} \left[ \int_{1/2}^{1-\delta} |g_m(t_1, s)|\, \alpha(s)\, ds \right]^{-1}$$
(3.25)

where t1 is any number in (0, 1) such that x(t1) ≠ 0.

Proof. Let t₀ ∈ [0, 1] be such that
$$p = \|x\| = x(t_0).$$
Then, using (3.5), Lemma 2.1, and the monotonicity of f, we find
$$p = x(t_0) = Sx(t_0) \le \lambda \int_0^1 |g_m(t_0, s)|\, \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f\left(\int_0^1 p\, d\tau,\, p\right) \sin \pi s\, ds = \frac{\lambda\, f(p, p)}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds,$$

which gives (3.24) readily.

Next, we employ (3.5), the monotonicity of f, and the fact that $\min_{t \in [\delta, 1-\delta]} x(t) \ge \gamma p$ to get
$$p \ge x(t_1) \ge \lambda \int_0^1 |g_m(t_1, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_{1/2}^{1-\delta} |g_m(t_1, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_{1/2}^{1-\delta} |g_m(t_1, s)|\, \alpha(s)\, f\left(\int_\delta^{1/2} x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_{1/2}^{1-\delta} |g_m(t_1, s)|\, \alpha(s)\, f\left(\int_\delta^{1/2} \gamma p\, d\tau,\, \gamma p\right) ds = \lambda\, f\left(\gamma p\left(\tfrac{1}{2} - \delta\right),\, \gamma p\right) \int_{1/2}^{1-\delta} |g_m(t_1, s)|\, \alpha(s)\, ds,$$

from which (3.25) is immediate.

The following result gives the criteria for E to be a bounded/unbounded interval.

Theorem 3.5. Define
$$W_B = \left\{ f \;\middle|\; \frac{u}{f(u, u)} \text{ is bounded for } u \in (0, \infty) \right\}, \quad W_0 = \left\{ f \;\middle|\; \lim_{u \to \infty} \frac{u}{f(u, u)} = 0 \right\}, \quad W_\infty = \left\{ f \;\middle|\; \lim_{u \to \infty} \frac{u}{f(u, u)} = \infty \right\}.$$

(a) Let (A1)-(A5) hold. If f ∈ W_B, then E = (0, c) or (0, c] for some c ∈ (0, ∞).

(b) Let (A1)-(A5) hold. If f ∈ W₀, then E = (0, c] for some c ∈ (0, ∞).

(c) Let (A1)-(A4) hold. If f ∈ W_∞, then E = (0, ∞).
Proof. (a) This is immediate from (3.25) and Corollary 3.1.

(b) Since W₀ ⊂ W_B, it follows from Case (a) that E = (0, c) or (0, c] for some c ∈ (0, ∞). In particular,
$$c = \sup E.$$
Let $\{\lambda_n\}_{n=1}^\infty$ be a monotonically increasing sequence in E which converges to c, and let $\{x_n\}_{n=1}^\infty$ be a corresponding sequence of eigenfunctions in the context of (3.3). Further, let p_n = ‖x_n‖. Then, (3.25) together with f ∈ W₀ implies that no subsequence of $\{p_n\}_{n=1}^\infty$ can diverge to infinity. Thus, there exists R > 0 such that p_n ≤ R for all n, so $\{x_n\}_{n=1}^\infty$ is uniformly bounded. This implies that there is a subsequence of $\{x_n\}_{n=1}^\infty$, relabeled as the original sequence, which converges uniformly to some x, where x(t) ≥ 0 for t ∈ [0, 1]. Clearly, we have Sx_n = x_n, i.e.,
$$x_n(t) = \lambda_n \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x_n(\tau)\, d\tau,\, x_n(s)\right) ds, \quad t \in [0, 1].$$
(3.26)
Since x_n converges to x and λ_n converges to c, letting n → ∞ in (3.26) yields
$$x(t) = c \int_0^1 |g_m(t, s)|\, F\left(s, \int_0^s x(\tau)\, d\tau,\, x(s)\right) ds, \quad t \in [0, 1].$$
Hence, c is an eigenvalue with corresponding eigenfunction x, i.e., c = sup E ∈ E. This completes the proof for Case (b).
(c) Let λ > 0 be fixed. Choose ε > 0 so that
$$\frac{\lambda}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \le \frac{1}{\varepsilon}.$$
(3.27)
By definition, if f ∈ W_∞, then there exists M = M(ε) > 0 such that
$$f(u, u) < \varepsilon u, \quad u \ge M.$$
(3.28)
We shall prove that $S(C_\delta(M)) \subseteq C_\delta(M)$. Let x ∈ C_δ(M). As in the proof of Theorem 3.1, we have (3.8) and (3.10), and so Sx ∈ C_δ. Thus, it remains to show that ‖Sx‖ ≤ M. Using (3.5), Lemma 2.1, (3.6), (3.28), and (3.27), we find, for t ∈ [0, 1],
$$Sx(t) \le \lambda \int_0^1 |g_m(t, s)|\, \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \le \lambda\, f(M, M) \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s) \sin \pi s\, ds \le \lambda\, \varepsilon M \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s) \sin \pi s\, ds \le M.$$

It follows that ‖Sx‖ ≤ M, and hence $S(C_\delta(M)) \subseteq C_\delta(M)$. Also, S is continuous and completely continuous. Schauder's fixed point theorem guarantees that S has a fixed point in C_δ(M). Clearly, this fixed point is a positive solution of (3.3), and therefore λ is an eigenvalue of (3.3). Since λ > 0 is arbitrary, we have proved that E = (0, ∞).

Example 3.1. Consider the complementary Lidstone boundary value problem
$$y^{(5)}(t) = \lambda \left( \frac{t^5}{10} + \frac{t^4}{4} - t^3 + \frac{t^2}{4} + \frac{t}{2} + 2 \right)^{-q} \left( \frac{y + y'}{2} + 2 \right)^q, \quad t \in (0, 1), \qquad y(0) = y'(0) = y'''(0) = y'(1) = y'''(1) = 0,$$
(3.29)

where λ > 0 and q ≥ 0.

Here, m = 2 and
$$F(t, y, y') = \left( \frac{t^5}{10} + \frac{t^4}{4} - t^3 + \frac{t^2}{4} + \frac{t}{2} + 2 \right)^{-q} \left( \frac{y + y'}{2} + 2 \right)^q.$$

Clearly, F(t, u, v) is nondecreasing in u and v; thus (A5) is satisfied.

Choose
$$\alpha(t) = \beta(t) = \left( \frac{t^5}{10} + \frac{t^4}{4} - t^3 + \frac{t^2}{4} + \frac{t}{2} + 2 \right)^{-q}$$
and
$$f(u, v) = \left( \frac{u + v}{2} + 2 \right)^q.$$

We see that (A1)-(A4) are satisfied.

Case 1. 0 ≤ q < 1. Clearly, f ∈ W_∞. It follows from Theorem 3.5(c) that E = (0, ∞). As an example, when λ = 24, the boundary value problem (3.29) has the positive solution $y(t) = \frac{t^5}{5} - \frac{t^4}{2} + \frac{t^2}{2}$.
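The claim for λ = 24 can be verified symbolically: along this y, the quantity (y + y′)/2 + 2 coincides with the polynomial t⁵/10 + t⁴/4 - t³ + t²/4 + t/2 + 2, so F ≡ 1 for every q, while y⁽⁵⁾ = 24. A sympy sketch:

```python
import sympy as sp

t = sp.symbols("t")
y = t**5 / 5 - t**4 / 2 + t**2 / 2
P = t**5 / 10 + t**4 / 4 - t**3 + t**2 / 4 + t / 2 + 2

# (y + y')/2 + 2 equals P, hence F = P**(-q) * P**q = 1 for every q
assert sp.expand((y + sp.diff(y, t)) / 2 + 2 - P) == 0

# boundary conditions of (3.29): y(0) = y'(0) = y'''(0) = y'(1) = y'''(1) = 0
yp, yppp = sp.diff(y, t), sp.diff(y, t, 3)
assert y.subs(t, 0) == 0
assert yp.subs(t, 0) == 0 and yp.subs(t, 1) == 0
assert yppp.subs(t, 0) == 0 and yppp.subs(t, 1) == 0

# with F = 1, the equation y^(5) = lambda * F holds exactly for lambda = 24
assert sp.diff(y, t, 5) == 24
print("lambda = 24 verified")
```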

Case 2. q = 1. Here, f ∈ W_B. By Theorem 3.5(a), the set E is an open or a half-closed interval. Further, the solution exhibited in Case 1 is independent of q, so λ = 24 ∈ E here as well, and Theorem 3.2 then shows that E contains the interval (0, 24].

Case 3. q > 1. Clearly, f ∈ W₀. By Theorem 3.5(b), the set E is a half-closed interval. Again, as in Case 2, we note that (0, 24] ⊆ E.

4 Eigenvalue intervals

In this section, we shall establish explicit subintervals of E. Here, the functions α and β in (A2)-(A4) are assumed to be continuous on the closed interval [0, 1]. Hence, noting Remark 3.1, we shall not require conditions (A1) and (A4) to show the compactness of the operator S. For the function f in (A2), we define
$$\bar{f}_0 = \limsup_{u, v \to 0^+} \frac{f(u, v)}{v}, \quad \underline{f}_0 = \liminf_{u, v \to 0^+} \frac{f(u, v)}{v}, \quad \bar{f}_\infty = \limsup_{u, v \to \infty} \frac{f(u, v)}{v}, \quad \underline{f}_\infty = \liminf_{u, v \to \infty} \frac{f(u, v)}{v}.$$
Let δ ∈ (0, 1/2) be given. Define $t^*, \hat{t} \in [0, 1]$ by
$$\int_{1/2}^{1-\delta} |g_m(t^*, s)|\, \alpha(s)\, ds = \sup_{t \in [0, 1]} \int_{1/2}^{1-\delta} |g_m(t, s)|\, \alpha(s)\, ds, \qquad \int_\delta^{1-\delta} |g_m(\hat{t}, s)|\, \alpha(s)\, ds = \sup_{t \in [0, 1]} \int_\delta^{1-\delta} |g_m(t, s)|\, \alpha(s)\, ds.$$
(4.1)
Theorem 4.1. Let (A2)-(A4) hold. Then, λ ∈ E if λ satisfies
$$\frac{1}{\underline{f}_\infty\, \gamma} \left[ \int_{1/2}^{1-\delta} |g_m(t^*, s)|\, \alpha(s)\, ds \right]^{-1} < \lambda < \frac{1}{\bar{f}_0} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1}.$$
(4.2)
Proof. We shall use Theorem 2.1. Let λ satisfy (4.2), and let ε > 0 be such that
$$\frac{1}{(\underline{f}_\infty - \varepsilon)\gamma} \left[ \int_{1/2}^{1-\delta} |g_m(t^*, s)|\, \alpha(s)\, ds \right]^{-1} \le \lambda \le \frac{1}{\bar{f}_0 + \varepsilon} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1}.$$
(4.3)
First, we pick p > 0 so that
$$f(u, v) \le (\bar{f}_0 + \varepsilon)\, v, \quad 0 < u \le p, \quad 0 < v \le p.$$
(4.4)
Let x ∈ C_δ be such that ‖x‖ = p. Note that for s ∈ [0, 1],
$$\int_0^s x(\tau)\, d\tau \le \int_0^1 p\, d\tau = p.$$
Then, using (3.5), Lemma 2.1, (4.4), and (4.3) successively, we find, for t ∈ [0, 1],
$$Sx(t) \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, (\bar{f}_0 + \varepsilon)\, x(s) \sin \pi s\, ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, (\bar{f}_0 + \varepsilon)\, \|x\| \sin \pi s\, ds \le \|x\|.$$
Hence,
$$\|Sx\| \le \|x\|.$$
(4.5)

If we set Ω₁ = {x ∈ B | ‖x‖ < p}, then (4.5) holds for $x \in C_\delta \cap \partial\Omega_1$.

Next, let q > 0 be such that
$$f(u, v) \ge (\underline{f}_\infty - \varepsilon)\, v, \quad u \ge q, \quad v \ge q.$$
(4.6)
Let x ∈ C_δ be such that
$$\|x\| = \max\left\{ p + 1,\ \frac{q}{\gamma},\ \frac{q}{\gamma}\left(\frac{1}{2} - \delta\right)^{-1} \right\} = \max\left\{ p + 1,\ \frac{q}{\gamma}\left(\frac{1}{2} - \delta\right)^{-1} \right\} \equiv q_0.$$
It is clear that
$$x(s) \ge \gamma \|x\| \ge q, \quad s \in \left[\tfrac{1}{2},\, 1-\delta\right], \qquad \int_\delta^{1/2} x(\tau)\, d\tau \ge \int_\delta^{1/2} \gamma \|x\|\, d\tau = \left(\tfrac{1}{2} - \delta\right)\gamma \|x\| \ge q.$$
(4.7)
Then, an application of (3.5), (4.7), and (4.6) gives, for t ∈ [0, 1],
$$Sx(t) \ge \lambda \int_{1/2}^{1-\delta} |g_m(t, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_{1/2}^{1-\delta} |g_m(t, s)|\, \alpha(s)\, f\left(\int_\delta^{1/2} x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_{1/2}^{1-\delta} |g_m(t, s)|\, \alpha(s)\, (\underline{f}_\infty - \varepsilon)\, x(s)\, ds \ge \lambda \int_{1/2}^{1-\delta} |g_m(t, s)|\, \alpha(s)\, (\underline{f}_\infty - \varepsilon)\, \gamma \|x\|\, ds.$$
Taking the supremum over t ∈ [0, 1] on both sides and using (4.3) then provides (see (4.1) for the definition of t*)
$$\|Sx\| \ge \lambda \left[ \int_{1/2}^{1-\delta} |g_m(t^*, s)|\, \alpha(s)\, ds \right] (\underline{f}_\infty - \varepsilon)\, \gamma \|x\| \ge \|x\|.$$
Therefore, if we set Ω₂ = {x ∈ B | ‖x‖ < q₀}, then for $x \in C_\delta \cap \partial\Omega_2$ we have
$$\|Sx\| \ge \|x\|.$$
(4.8)

Now that we have obtained (4.5) and (4.8), it follows from Remark 3.2 and Theorem 2.1 that S has a fixed point $x \in C_\delta \cap (\bar{\Omega}_2 \setminus \Omega_1)$ such that p ≤ ‖x‖ ≤ q₀. Obviously, this x is a positive solution of (3.3), and hence λ ∈ E.   □

Theorem 4.2. Let (A2)-(A4) hold. Then, λ ∈ E if λ satisfies
$$\frac{1}{\underline{f}_0\, \gamma} \left[ \int_\delta^{1-\delta} |g_m(\hat{t}, s)|\, \alpha(s)\, ds \right]^{-1} < \lambda < \frac{1}{\bar{f}_\infty} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1}.$$
(4.9)
Proof. We shall apply Theorem 2.1 again. Let λ satisfy (4.9), and let ε > 0 be such that
$$\frac{1}{(\underline{f}_0 - \varepsilon)\gamma} \left[ \int_\delta^{1-\delta} |g_m(\hat{t}, s)|\, \alpha(s)\, ds \right]^{-1} \le \lambda \le \frac{1}{\bar{f}_\infty + \varepsilon} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1}.$$
(4.10)
First, we choose r > 0 so that
$$f(u, v) \ge (\underline{f}_0 - \varepsilon)\, v, \quad 0 < u \le r, \quad 0 < v \le r.$$
(4.11)
Let x ∈ C_δ be such that ‖x‖ = r. Then, on using (3.5), (4.11), and (4.10) successively, we have, for t ∈ [0, 1],
$$Sx(t) \ge \lambda \int_0^1 |g_m(t, s)|\, \alpha(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) ds \ge \lambda \int_0^1 |g_m(t, s)|\, \alpha(s)\, (\underline{f}_0 - \varepsilon)\, x(s)\, ds \ge \lambda \int_\delta^{1-\delta} |g_m(t, s)|\, \alpha(s)\, (\underline{f}_0 - \varepsilon)\, \gamma \|x\|\, ds.$$
Taking the supremum over t ∈ [0, 1] on both sides and using (4.10) then yields (see (4.1) for the definition of $\hat{t}$)
$$\|Sx\| \ge \lambda \left[ \int_\delta^{1-\delta} |g_m(\hat{t}, s)|\, \alpha(s)\, ds \right] (\underline{f}_0 - \varepsilon)\, \gamma \|x\| \ge \|x\|.$$

Hence, if we set Ω₁ = {x ∈ B | ‖x‖ < r}, then (4.8) holds for $x \in C_\delta \cap \partial\Omega_1$.

Next, pick w > 0 such that
$$f(u, v) \le (\bar{f}_\infty + \varepsilon)\, v, \quad u \ge w, \quad v \ge w.$$
(4.12)

We shall consider two cases: when f is bounded and when f is unbounded.

Case 1. Suppose that f is bounded. Then, there exists some M > 0 such that
$$f(u, v) \le M, \quad u, v \in (0, \infty).$$
(4.13)
Let x ∈ C_δ be such that
$$\|x\| = \max\left\{ r + 1,\ \frac{\lambda M}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right\} \equiv w_0.$$
From (3.5), Lemma 2.1, and (4.13), it is clear, for t ∈ [0, 1], that
$$Sx(t) \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, M \sin \pi s\, ds \le w_0 = \|x\|.$$

Hence, (4.5) holds.

Case 2. Suppose that f is unbounded. Then, there exists w₀ > max{r + 1, w} such that
$$f(u, v) \le f(w_0, w_0), \quad 0 < u \le w_0, \quad 0 < v \le w_0.$$
(4.14)
Let x ∈ C_δ be such that ‖x‖ = w₀. Then, applying (3.5), Lemma 2.1, (4.14), (4.12), and (4.10) successively gives, for t ∈ [0, 1],
$$Sx(t) \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f\left(\int_0^s x(\tau)\, d\tau,\, x(s)\right) \sin \pi s\, ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, f(w_0, w_0) \sin \pi s\, ds \le \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, (\bar{f}_\infty + \varepsilon)\, w_0 \sin \pi s\, ds = \lambda \int_0^1 \frac{1}{\pi^{2m-1}} \beta(s)\, (\bar{f}_\infty + \varepsilon)\, \|x\| \sin \pi s\, ds \le \|x\|.$$

Thus, (4.5) follows immediately.

In both Cases 1 and 2, if we set Ω₂ = {x ∈ B | ‖x‖ < w₀}, then (4.5) holds for $x \in C_\delta \cap \partial\Omega_2$.

Now that we have obtained (4.8) and (4.5), it follows from Remark 3.2 and Theorem 2.1 that S has a fixed point $x \in C_\delta \cap (\bar{\Omega}_2 \setminus \Omega_1)$ such that r ≤ ‖x‖ ≤ w₀. It is clear that this x is a positive solution of (3.3), and hence λ ∈ E.

Remark 4.1. In (4.2) and (4.9), although t* and $\hat{t}$ can be computed from (4.1), we can circumvent the computation by giving further bounds. Indeed, applying Lemma 2.2 we find
$$\int_{1/2}^{1-\delta} |g_m(t^*, s)|\, \alpha(s)\, ds \ge \sup_{t \in [\delta, 1-\delta]} \int_{1/2}^{1-\delta} |g_m(t, s)|\, \alpha(s)\, ds \ge \frac{2\delta}{\pi^{2m}} \int_{1/2}^{1-\delta} \alpha(s) \sin \pi s\, ds$$
and
$$\int_\delta^{1-\delta} |g_m(\hat{t}, s)|\, \alpha(s)\, ds \ge \sup_{t \in [\delta, 1-\delta]} \int_\delta^{1-\delta} |g_m(t, s)|\, \alpha(s)\, ds \ge \frac{2\delta}{\pi^{2m}} \int_\delta^{1-\delta} \alpha(s) \sin \pi s\, ds.$$

The following corollary is immediate from Theorems 4.1, 4.2 and Remark 4.1.

Corollary 4.1. Let (A2)-(A4) hold. Then,
$$E \supseteq \left( \frac{1}{\underline{f}_\infty\, \gamma} \left[ \int_{1/2}^{1-\delta} |g_m(t^*, s)|\, \alpha(s)\, ds \right]^{-1},\ \frac{1}{\bar{f}_0} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1} \right) \supseteq \left( \frac{1}{\underline{f}_\infty} \left[ \frac{2\gamma\delta}{\pi^{2m}} \int_{1/2}^{1-\delta} \alpha(s) \sin \pi s\, ds \right]^{-1},\ \frac{1}{\bar{f}_0} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1} \right)$$
and
$$E \supseteq \left( \frac{1}{\underline{f}_0\, \gamma} \left[ \int_\delta^{1-\delta} |g_m(\hat{t}, s)|\, \alpha(s)\, ds \right]^{-1},\ \frac{1}{\bar{f}_\infty} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1} \right) \supseteq \left( \frac{1}{\underline{f}_0} \left[ \frac{2\gamma\delta}{\pi^{2m}} \int_\delta^{1-\delta} \alpha(s) \sin \pi s\, ds \right]^{-1},\ \frac{1}{\bar{f}_\infty} \left[ \frac{1}{\pi^{2m-1}} \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1} \right).$$

Remark 4.2. If f is superlinear (i.e., $\bar{f}_0 = 0$ and $\underline{f}_\infty = \infty$) or sublinear (i.e., $\underline{f}_0 = \infty$ and $\bar{f}_\infty = 0$), then we conclude from Corollary 4.1 that E = (0, ∞), i.e., the boundary value problem (3.3) (or (1.1)) has a positive solution for any λ > 0.

Example 4.1. Consider the complementary Lidstone boundary value problem
$$y^{(5)}(t) = \lambda \left[ a\left( t^5 - \frac{5t^4}{2} + \frac{5t^2}{2} \right) + b\left( 5t^4 - 10t^3 + 5t \right) + c \right]^{-r} \left( a y + b y' + c \right)^r, \quad t \in (0, 1), \qquad y(0) = y'(0) = y'''(0) = y'(1) = y'''(1) = 0,$$
(4.15)

where λ, a, b, c > 0 and r ≤ 1.

Here, m = 2. It is clear that (A2)-(A4) are satisfied with
$$\alpha(t) = \beta(t) = \left[ a\left( t^5 - \frac{5t^4}{2} + \frac{5t^2}{2} \right) + b\left( 5t^4 - 10t^3 + 5t \right) + c \right]^{-r} \quad \text{and} \quad f(u, v) = (au + bv + c)^r.$$

Case 1. r < 1. It is clear that f is sublinear. Therefore, by Remark 4.2, the boundary value problem (4.15) has a positive solution for any λ > 0. In fact, we note that when λ = 120, (4.15) has the positive solution $y(t) = t^5 - \frac{5t^4}{2} + \frac{5t^2}{2}$.

Case 2. r = 1, a = b = 0.5, c = 10. Here, $\underline{f}_0 = \infty$ and $\bar{f}_\infty = 1$. It follows from Corollary 4.1 that
$$E \supseteq \left( 0,\ \pi^3 \left[ \int_0^1 \beta(s) \sin \pi s\, ds \right]^{-1} \right) = (0,\ 528.99).$$

Once again, we note that when λ = 120 ∈ (0, 528.99), the boundary value problem (4.15) has the positive solution $y(t) = t^5 - \frac{5t^4}{2} + \frac{5t^2}{2}$.
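Both assertions of Case 2 can be checked with a short sympy sketch (ours): ay + by′ + c reproduces the bracket in (4.15), so F ≡ 1 and λ = 120 is indeed an eigenvalue, while numerical quadrature reproduces the endpoint π³(∫₀¹ β(s) sin πs ds)⁻¹:

```python
import sympy as sp

t, s = sp.symbols("t s")
a = b = sp.Rational(1, 2)
c, r = 10, 1                                  # Case 2 parameters

y = t**5 - sp.Rational(5, 2) * t**4 + sp.Rational(5, 2) * t**2
bracket = a * y + b * (5 * t**4 - 10 * t**3 + 5 * t) + c

# a*y + b*y' + c equals the bracket, so F = bracket**(-r) * bracket**r = 1
assert sp.expand(a * y + b * sp.diff(y, t) + c - bracket) == 0
# boundary conditions and the ODE: y^(5) = 120 = lambda * F with lambda = 120
assert y.subs(t, 0) == 0
assert sp.diff(y, t).subs(t, 0) == 0 and sp.diff(y, t).subs(t, 1) == 0
assert sp.diff(y, t, 3).subs(t, 0) == 0 and sp.diff(y, t, 3).subs(t, 1) == 0
assert sp.diff(y, t, 5) == 120

# right endpoint of the eigenvalue interval from Corollary 4.1
beta = bracket.subs(t, s) ** (-r)
I = sp.Integral(beta * sp.sin(sp.pi * s), (s, 0, 1)).evalf()
bound = float(sp.pi**3 / I)
print(round(bound, 2))  # close to the paper's 528.99
```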

Declarations

Authors’ Affiliations

(1)
Department of Mathematics, Texas A&M University - Kingsville
(2)
Department of Mathematics, Faculty of Science, King Abdulaziz University
(3)
School of Electrical and Electronic Engineering, Nanyang Technological University


Copyright

© Agarwal and Wong; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.