
Existence and multiplicity of positive solutions to a fourth-order impulsive integral boundary value problem with deviating argument

Abstract

In this paper, we study the existence of multiple positive solutions for a fourth-order impulsive differential equation with integral boundary conditions and a deviating argument. The main tool is the Avery-Peterson fixed point theorem. An example demonstrating the main results is also given.

1 Introduction

The theory and applications of fourth-order ordinary differential equations form an active area of investigation, mainly because a fourth-order boundary value problem (BVP for short) describes the deformation of an elastic beam. Owing to their various applications in physics, engineering, and material mechanics, fourth-order differential equation BVPs have received a great deal of attention. For more information, see [1–8].

Meanwhile, BVPs with integral boundary conditions constitute a very interesting and significant class of problems, since they include two-point, three-point, multi-point, and nonlocal BVPs as special cases. Therefore, fourth-order BVPs with integral boundary conditions [9–15] have received increasing attention in recent years. In particular, we mention some recent results.

In [12], Ma studied the following fourth-order BVP:

$$\left \{ \textstyle\begin{array}{l} u^{(4)}(t)=h(t)f(t,u), \quad 0< t< 1, \\ u(0)=u(1)=\int_{0}^{1}p(s)u(s)\,ds, \\ u''(0)=u''(1)=\int_{0}^{1}q(s)u(s)\,ds, \end{array}\displaystyle \right . $$

where \(p, q \in L^{1}[0,1]\), and h and f are continuous. By applying the fixed point index theory in cones, the existence of at least one symmetric positive solution was obtained.

In 2015, the authors of [13] investigated the fourth-order differential equation with integral boundary conditions,

$$\left \{ \textstyle\begin{array}{l} y^{(4)}(t)=\omega(t)F(t,y(t),y''(t)),\quad 0< t< 1, \\ y(0)=y(1)=\int_{0}^{1}h(s)y(s)\,ds, \\ ay''(0)-by'''(0)=\int_{0}^{1}g(s)y''(s)\,ds, \\ ay''(1)+by'''(1)=\int_{0}^{1}g(s)y''(s)\,ds. \end{array}\displaystyle \right . $$

By using a novel technique and fixed point theories, they showed the existence and multiplicity of positive solutions.

Unlike [12] and [13], the author of [14] considered a class of fourth-order differential equations with an advanced or delayed argument,

$$x^{(4)}(t)=h(t)f \bigl(t,x(t),x \bigl(\alpha(t) \bigr) \bigr),\quad t \in(0,1), $$

subject to the boundary conditions

$$\left \{ \textstyle\begin{array}{l} x(0)=\gamma x'(0)-\int_{0}^{1}g(s)x(s)\,ds, \\ x(1)=\beta x(\eta), \qquad x''(0)=x''(1)=0, \end{array}\displaystyle \right . $$

or

$$\left \{ \textstyle\begin{array}{l} x(0)=\beta x(\eta), \qquad x''(0)=x''(1)=0, \\ x(1)=\gamma x'(0)-\int_{0}^{1}g(s)x(s)\,ds. \end{array}\displaystyle \right . $$

The existence of multiple positive solutions is obtained by using a fixed point theorem due to Avery and Peterson.

In addition to deviating arguments, the authors of [15] studied the following fourth-order impulsive BVP:

$$\left \{ \textstyle\begin{array}{l} (\phi_{p} (y''(t) ) )''=\lambda\omega (t)f (t,y (\alpha(t) ) ), \quad t\in(0,1)\setminus\{t_{1}, t_{2},\ldots, t_{n}\}, \\ \triangle y'_{t_{k}}=-\mu I_{k} (t_{k},y(t_{k}) ), \quad k=1, 2, \ldots, n, \\ ay(0)-by'(0)=\int_{0}^{1}g(s)y(s)\,ds, \\ ay(1)+by'(1)=\int_{0}^{1}g(s)y(s)\,ds, \\ \phi_{p} (y''(0) )=\phi_{p} (y''(1) )=\int _{0}^{1}h(t)\phi_{p} (y''(t) )\,dt. \end{array}\displaystyle \right . $$

The boundary conditions above are special Sturm-Liouville integral boundary conditions, since \(ay(0)-by'(0)=ay(1)+by'(1)=\int _{0}^{1}g(s)y(s)\,ds\). Several existence and multiplicity results were derived by using inequality techniques and fixed point theories. For more research on impulsive differential equation BVPs, see [16–19] and the references therein.

Motivated by the results mentioned above, we investigate a fourth-order impulsive differential equation with Sturm-Liouville integral boundary conditions and a deviating argument,

$$ x^{(4)}(t)=h(t)f\bigl(t,x(t),x \bigl(\alpha(t)\bigr) \bigr),\quad t\in J_{0}, $$
(1.1)

subject to

$$ \left \{ \textstyle\begin{array}{l} x(0)=x(1)=\int_{0}^{1}g_{0}(s)x(s)\,ds, \\ \triangle x'_{t_{k}}=-I_{k} (t_{k}, x(t_{k}) ), \quad k=1, 2, \ldots, m, \\ x''(0)-\xi x'''(0)=\int_{0}^{1}g_{1}(s)x''(s)\,ds, \\ x''(1)+\eta x'''(1)=\int_{0}^{1}g_{2}(s)x''(s)\,ds, \end{array}\displaystyle \right . $$
(1.2)

where \(\xi,\eta>0\). Compared with [13–15], in this paper (1.2) contains general Sturm-Liouville integral boundary conditions, in which \(g_{1}(s)\) and \(g_{2}(s)\) may be two different functions in \(L^{1}[0,1]\). In this case, we have to establish a more complicated expression for the operator T and to find suitable lower and upper bounds for the Green's functions (Lemma 2.2 and Lemma 2.4). Further, by using the fixed point theorem due to Avery and Peterson, we obtain the existence and multiplicity of positive solutions.

In (1.2), \(t_{k}\) (\(k=1, 2, \ldots, m\)) are fixed points with \(0=t_{0}< t_{1}< t_{2}<\cdots<t_{m}<t_{m+1}=1\), \(\triangle x'_{t_{k}}=x'({t_{k}}^{+})-x'({t_{k}}^{-})\), where \(x'({t_{k}}^{+})\) and \(x'({t_{k}}^{-})\) denote the right-hand and left-hand limits of \(x'(t)\) at \(t=t_{k}\), respectively.

Throughout the paper, we always assume that \(J=[0,1]\), \(J_{0}=(0,1) \backslash\{t_{1}, t_{2}, \ldots, t_{m}\}\), \(\mathbb{R}^{+}=[0,+\infty)\), \(J_{k}=(t_{k}, t_{k+1}]\), \(k=1, 2, \ldots, m\). \(\alpha: J\rightarrow J\) is continuous and:

(H1):

\(f\in C (J\times\mathbb{R}^{+}\times\mathbb{R}^{+}, \mathbb{R}^{+})\), with \(f(t,u,v)>0\) for \(t\in J\), \(u>0\), and \(v>0\);

(H2):

h is a nonnegative continuous function defined on \((0,1)\); h is not identically zero on any subinterval of \(J_{0}\);

(H3):

\(I_{k}:J\times\mathbb{R}^{+}\rightarrow\mathbb {R}^{+}\) is continuous with \(I_{k}(t,u)>0\) (\(k=1,2,\ldots,m\)) for all \(t\in J\) and \(u >0\);

(H4):

\(g_{0}, g_{1}, g_{2} \in L^{1}[0,1]\) are nonnegative and \(\gamma=\int_{0}^{1}g_{0}(s)\,ds \in(0,1)\).

2 Expression and properties of Green’s function

For \(v(t)\in C(J)\), we consider the equation

$$ x^{(4)}(t)=v(t), \quad 0< t< 1, $$
(2.1)

with boundary conditions (1.2).

We shall reduce BVP (2.1) and (1.2) to two second-order problems. To this end, first, by means of the transformation

$$ x''(t)=-y(t), $$
(2.2)

we convert problem (2.1) and (1.2) into

$$ \left \{ \textstyle\begin{array}{l} y''(t)=-v(t), \\ y(0)-\xi y'(0)=\int_{0}^{1} g_{1}(s)y(s)\,ds, \\ y(1)+\eta y'(1)=\int_{0}^{1} g_{2}(s)y(s)\,ds, \end{array}\displaystyle \right . $$
(2.3)

and

$$ \left \{ \textstyle\begin{array}{l} x''(t)=-y(t), \\ x(0)=x(1)=\int_{0}^{1}g_{0}(s)x(s)\,ds, \\ \triangle x'_{t_{k}}=- I_{k}, \quad k=1, 2, \ldots, m. \end{array}\displaystyle \right . $$
(2.4)

Lemma 2.1

Let \(\lambda(t)\) be the solution of \(\lambda''(t)=0\), \(\lambda(0)=\xi\), \(\lambda'(0)=1\), and let \(\mu(t)\) be the solution of \(\mu''(t)=0\), \(\mu(1)=\eta\), \(\mu'(1)=-1\). Then \(\lambda(t)\) is strictly increasing on J with \(\lambda(t)>0\) on \((0,1]\), and \(\mu(t)\) is strictly decreasing on J with \(\mu(t)>0\) on \([0,1)\). Moreover, for any \(v(t)\in C(J)\), BVP (2.3) has a unique solution given by

$$ y(t)=(Fv) (t)+A(v)\lambda(t)+B(v)\mu(t), $$
(2.5)

where

$$\begin{aligned}& (Fv) (t)= \int_{0}^{1} G_{1}(t,s)v(s) \,ds, \end{aligned}$$
(2.6)
$$\begin{aligned}& G_{1}(t,s)= \textstyle\begin{cases} \frac{1}{\rho}(\eta+1-t)(s+\xi), & 0\leq s\leq t\leq1,\\ \frac{1}{\rho}(\eta+1-s)(t+\xi), & 0\leq t\leq s\leq1, \end{cases}\displaystyle \end{aligned}$$
(2.7)
$$\begin{aligned}& \rho=1+\xi+\eta, \qquad \lambda(t)=t+\xi,\qquad \mu(t)=\eta +1-t, \end{aligned}$$
(2.8)

and

$$ A(v)=\frac{1}{\Delta} \begin{vmatrix} \alpha[Fv] & \rho-\alpha[\mu] \\ \beta[Fv] & -\beta[\mu] \end{vmatrix} ,\qquad B(v)=\frac{1}{\Delta} \begin{vmatrix} -\alpha[\lambda] & \alpha[Fv] \\ \rho-\beta[\lambda] & \beta[Fv] \end{vmatrix} , $$
(2.9)

where

$$ \begin{aligned} &\Delta= \begin{vmatrix} -\alpha[ \lambda] & \rho-\alpha[\mu] \\ \rho-\beta[\lambda] & -\beta[\mu] \end{vmatrix} , \\ &\alpha[v]= \int_{0}^{1} g_{1}(s)v(s) \,ds ,\qquad \beta[v]= \int_{0}^{1} g_{2}(s)v(s) \,ds. \end{aligned} $$
(2.10)

Proof

Since λ and μ are two linearly independent solutions of the equation \(y''(t)=0\), we know that any solution of \(y''(t)=-v(t)\) can be represented in the form (2.5).

It is easy to check that the function defined by (2.5) is a solution of (2.3) if A and B are as in (2.9), respectively.

Now we show that the function defined by (2.5) is a solution of (2.3) only if A and B are as in (2.9), respectively.

Let \(y(t)=(Fv)(t)+A\lambda(t)+B\mu(t)\) be a solution of (2.3), then we have

$$\begin{aligned}& y(t)= \int_{0}^{t} \frac{1}{\rho}(\eta+1-t) (s+\xi)v(s) \,ds+ \int_{t}^{1} \frac{1}{\rho}(\eta+1-s) (t+\xi)v(s) \,ds+A\lambda(t)+B\mu(t), \\& y'(t)=- \int_{0}^{t} \frac{1}{\rho}(s+\xi)v(s)\,ds+ \int_{t}^{1} \frac {1}{\rho}(\eta+1-s)v(s)\,ds+A \lambda'(t)+B\mu'(t), \end{aligned}$$

and

$$y''(t)=-\frac{1}{\rho}(t+\xi)v(t)- \frac{1}{\rho}(\eta +1-t)v(t)+A\lambda''(t)+B \mu''(t). $$

Thus, by (2.8), we can obtain

$$y''=-v(t). $$
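Here we have used that \(\lambda''(t)=\mu''(t)=0\) and that, by (2.8),

$$(t+\xi)+(\eta+1-t)=1+\xi+\eta=\rho, $$

so the two terms involving \(v(t)\) combine to give \(-v(t)\).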

Since

$$\begin{aligned}& y(0)= \int_{0}^{1} \frac{\xi}{\rho}(\eta+1-s)v(s)\,ds+A \lambda(0)+B\mu(0), \\& y'(0)= \int_{0}^{1} \frac{1}{\rho}(\eta+1-s)v(s)\,ds+A \lambda '(0)+B\mu'(0), \end{aligned}$$

we have

$$ A \biggl(- \int_{0}^{1} g_{1}(s)\lambda(s)\,ds \biggr)+B \biggl(\rho- \int_{0}^{1} g_{1}(s)\mu(s)\,ds \biggr)= \int_{0}^{1} g_{1}(s) (Fv) (s)\,ds. $$
(2.11)

Since

$$\begin{aligned}& y(1)= \int_{0}^{1} \frac{\eta}{\rho}(s+\xi)v(s)\,ds+A \lambda(1)+B\mu(1), \\& y'(1)=- \int_{0}^{1} \frac{1}{\rho}(s+\xi)v(s)\,ds+A \lambda '(1)+B\mu'(1), \end{aligned}$$

we have

$$ A \biggl(\rho- \int_{0}^{1} g_{2}(s)\lambda(s)\,ds \biggr)+B \biggl(- \int_{0}^{1} g_{2}(s)\mu(s)\,ds \biggr)= \int_{0}^{1} g_{2}(s) (Fv) (s)\,ds. $$
(2.12)

From (2.11) and (2.12), we get

$$ \begin{pmatrix} -\alpha[\lambda] & \rho-\alpha[\mu] \\ \rho-\beta[\lambda] & -\beta[\mu] \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} \alpha[Fv] \\ \beta[Fv] \end{pmatrix} , $$

which implies that A and B satisfy (2.9), respectively. □

Assume that

(H5):

\(\Delta<0\), \(\alpha[\mu]<\rho\), \(\beta[\lambda]<\rho\).

Lemma 2.2

Denote \(e_{1}(t)=G_{1}(t,t)\), \(\hat{e}_{1}(t)=\frac{1}{\rho}(1-t)(t+\xi)\), for \(t\in J\). Let \(\kappa[v]=\int_{0}^{1} e_{1}(s)v(s)\,ds\) and \(\hat{\kappa}[v]=\int_{0}^{1} \hat{e}_{1}(s)v(s)\,ds\), for \(v \in C(J, \mathbb{R}^{+})\). If (H5) is satisfied, then the following results are true:

  (1) \(\hat{e}_{1}(s)\hat{e}_{1}(t)\leq G_{1}(t,s)\leq e_{1}(s)\), for \(t,s \in J\);

  (2) \(0\leq\underline{A} \hat{\kappa}[v]\leq A(v)\leq\overline{A} \kappa[v]\), for \(v \in C(J, \mathbb{R}^{+})\);

  (3) \(0\leq\underline{B} \hat{\kappa}[v] \leq B(v)\leq\overline{B} \kappa[v]\), for \(v \in C(J, \mathbb{R}^{+})\),

where

$$\begin{aligned}& \overline{A}=\frac{1}{\Delta} \begin{vmatrix} \alpha[1] & \rho-\alpha[\mu] \\ \beta[1] & -\beta[\mu] \end{vmatrix} ,\qquad \overline{B}= \frac{1}{\Delta} \begin{vmatrix} -\alpha[\lambda] & \alpha[1] \\ \rho-\beta[\lambda] & \beta[1] \end{vmatrix} , \\& \underline{A}=\frac{1}{\Delta} \begin{vmatrix} \alpha[\hat{e}_{1}] & \rho-\alpha[\mu] \\ \beta[\hat{e}_{1}] & -\beta[\mu] \end{vmatrix} ,\qquad \underline{B}=\frac{1}{\Delta} \begin{vmatrix} -\alpha[ \lambda] & \alpha[\hat{e}_{1}] \\ \rho-\beta[\lambda] & \beta[\hat{e}_{1}] \end{vmatrix} . \end{aligned}$$

Proof

Now we show that (1) is true. Obviously, \(G_{1}(t,s)\leq e_{1}(s) \) for \(t,s\in J\).

In fact, \(\hat{e}_{1}(s)\hat{e}_{1}(t)=\frac{1}{\rho^{2}}(1-s)(s+\xi)(1-t)(t+\xi)\), for \(s, t\in J\).

For \(0\leq s\leq t\leq1\), we notice that

$$\frac{\hat{e}_{1}(s)\hat{e}_{1}(t)}{G_{1}(t,s)}=\frac{(1-s)(1-t)(t+\xi )}{\rho(\eta+1-t)}= \frac{(1-s)(1-t)(t+\xi)}{(1+\xi+\eta)(\eta+1-t)}. $$

It is easy to see that \(1-s\leq1\), \(1-t\leq\eta+1-t\), \(t+\xi\leq 1+\xi+\eta\), for \(s, t\in J\), \(\xi, \eta>0\), which implies

$$\frac{(1-s)(1-t)(t+\xi)}{(1+\xi+\eta)(\eta+1-t)}\leq1. $$

Hence, we have

$$\hat{e}_{1}(s)\hat{e}_{1}(t)\leq G_{1}(t,s),\quad \mbox{for } 0\leq s\leq t\leq1. $$

Similarly, we can obtain

$$\hat{e}_{1}(s)\hat{e}_{1}(t)\leq G_{1}(t,s),\quad \mbox{for } 0\leq t\leq s\leq1. $$

In the following we show (2) and (3) hold. In view of (H5), for \(v \in C(J, \mathbb{R}^{+})\), we have

$$\begin{aligned}& A(v)=-\frac{1}{\Delta} \bigl( \alpha[Fv] \beta[\mu]+\beta[Fv]\bigl(\rho - \alpha[\mu]\bigr) \bigr) \\& \hphantom{A(v)}\leq-\frac{1}{\Delta} \bigl( \alpha[1] \beta[\mu]+\beta[1]\bigl( \rho -\alpha[\mu]\bigr) \bigr)\kappa[v] =\overline{A} \kappa[v] , \\& A(v) =-\frac{1}{\Delta} \bigl( \alpha[Fv] \beta[\mu]+\beta[Fv]\bigl(\rho - \alpha[\mu]\bigr) \bigr) \\& \hphantom{A(v)}\geq-\frac{1}{\Delta} \bigl( \alpha[\hat{e}_{1}] \beta[ \mu]+\beta [\hat{e}_{1}]\bigl(\rho-\alpha[\mu]\bigr) \bigr) \hat{ \kappa}[v]=\underline {A} \hat{\kappa}[v]. \end{aligned}$$

In the same way, we have \(B(v)\leq\overline{B} \kappa[v]\), \(B(v)\geq\underline{B}\hat{\kappa}[v]\), for \(v \in C(J,\mathbb {R}^{+})\). □

Analogously to Lemma 2.1 in [12], we obtain the following result; we omit the proof.

Lemma 2.3

If (H4) holds, for any \(y \in C(J)\), the problem (2.4) has a unique solution x expressed in the form

$$ x(t)= \int_{0}^{1}H(t,s)y(s)\,ds+\sum _{k=1}^{m} H(t,t_{k})I_{k} , $$
(2.13)

where

$$\begin{aligned}& H(t,s)=G_{2}(t,s)+\frac{1}{1-\gamma} \int_{0}^{1}G_{2}(s,\tau)g _{0}(\tau) \,d\tau, \end{aligned}$$
(2.14)
$$\begin{aligned}& G_{2}(t,s)= \textstyle\begin{cases} t(1-s), & 0\leq t\leq s\leq1, \\ s(1-t), & 0\leq s\leq t\leq1. \end{cases}\displaystyle \end{aligned}$$
(2.15)
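For orientation, we sketch where (2.14) comes from (the full argument parallels [12]): \(G_{2}(t,s)\) is the Green's function of \(-x''(t)=y(t)\), \(x(0)=x(1)=0\), so the solution of (2.4) has the form \(x(t)=c+\int_{0}^{1}G_{2}(t,s)y(s)\,ds+\sum_{k=1}^{m}G_{2}(t,t_{k})I_{k}\) with \(c=x(0)=x(1)\). Substituting this into \(c=\int_{0}^{1}g_{0}(s)x(s)\,ds\) and using \(\gamma=\int_{0}^{1}g_{0}(s)\,ds<1\) gives

$$c=\frac{1}{1-\gamma} \int_{0}^{1} \biggl( \int_{0}^{1}G_{2}(s,\tau)y(\tau)\,d\tau+\sum_{k=1}^{m}G_{2}(s,t_{k})I_{k} \biggr)g_{0}(s)\,ds, $$

which, by the symmetry \(G_{2}(t,s)=G_{2}(s,t)\), yields exactly the second term of \(H(t,s)\) in (2.14).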

From (2.14) and (2.15), we can prove that \(H(t,s)\) and \(G_{2}(t,s)\) have the following properties.

Lemma 2.4

Let \(H(t,s)\), \(G_{2}(t,s)\) be given as in Lemma  2.3. Assume that (H4) holds, then the following results are true:

  (1) \(e_{2}(s)e_{2}(t)\leq G_{2}(t,s) \leq e_{2}(s)\), for \(t, s \in J\);

  (2) \(\frac{\Gamma}{1-\gamma}e_{2}(s)\leq H(t,s) \leq\frac {1}{1-\gamma}e_{2}(s)\), for \(t, s \in J\),

where

$$\Gamma= \int_{0}^{1}e_{2}(s)g_{0}(s) \,ds, \qquad e_{2}(t)=G_{2}(t,t),\quad \textit{for } t \in J. $$

Proof

It is easy to see that (1) holds. In the following, we prove that (2) is satisfied:

$$\begin{aligned} H(t,s)&=G_{2}(t,s)+\frac{1}{1-\gamma} \int_{0}^{1} G_{2}(s,\tau)g_{0}( \tau )\,d\tau \\ &\leq e_{2}(s)+\frac{1}{1-\gamma} \int_{0}^{1} e_{2}(s)g_{0}( \tau)\,d\tau \\ &=\frac{1}{1-\gamma}e_{2}(s), \quad \mbox{for } t, s\in J, \end{aligned}$$

and

$$\begin{aligned} H(t,s) &\geq e_{2}(s)e_{2}(t)+\frac{1}{1-\gamma} \int_{0}^{1} e_{2}(s)e_{2}(\tau )g_{0}(\tau)\,d\tau \\ &= e_{2}(s) \biggl[e_{2}(t)+\frac{1}{1-\gamma} \int_{0}^{1} e_{2}(\tau )g_{0}( \tau)\,d\tau \biggr] \\ &\geq\frac{\Gamma}{1-\gamma}e_{2}(s), \quad \mbox{for } t, s\in J. \end{aligned}$$

 □

Lemma 2.5

Assume that (H1)-(H5) hold. Then, for any \(v\in C(J)\), problem (2.1) and (1.2) has a unique solution x given by

$$ x(t)= \int_{0}^{1} H(t,s) \bigl[(Fv) (s)+A(v) \lambda(s)+B(v)\mu (s) \bigr] \,ds +\sum_{k=1}^{m} H(t,t_{k})I_{k} . $$
(2.16)

Lemma 2.6

Assume that (H3)-(H5) hold. Then, for any \(v\in C(J,\mathbb{R}^{+})\), the unique solution x of problem (2.1) and (1.2) satisfies \(x(t)\geq0\) on J.

Proof

By Lemma 2.4, we can obtain \(H(t,s)\geq0 \) for \(t, s \in J\). Hence, from Lemma 2.5, combining with Lemma 2.2 and (H3), we can obtain

$$x(t) \geq0, \quad \mbox{for } t \in J. $$

This completes the proof. □

3 Background materials and definitions

Now we present some prerequisite definitions from the theory of cones in Banach spaces.

Definition 3.1

Let E be a real Banach space. A nonempty convex closed set \(P\subset E \) is said to be a cone if

  (i) \(ku\in P \) for all \(u\in P\) and all \(k\geq0\), and

  (ii) \(u, -u\in P \) implies \(u=0\).

Definition 3.2

On a cone P of a real Banach space E, a map Λ is said to be a nonnegative continuous concave functional if \(\Lambda :P\rightarrow\mathbb{R}^{+} \) is continuous and

$$\Lambda \bigl(tx+(1-t)y \bigr)\geq t \Lambda(x)+(1-t)\Lambda(y), $$

for all \(x, y \in P \) and \(t\in J\).

At the same time, on a cone P of a real Banach space E, a map φ is said to be a nonnegative continuous convex functional if φ: \(P\rightarrow\mathbb{R}^{+}\) is continuous and

$$\varphi \bigl(tx+(1-t)y \bigr)\leq t \varphi(x) + (1-t)\varphi(y), $$

for all \(x,y \in P \) and \(t\in J\).

Definition 3.3

An operator is called completely continuous if it is continuous and maps bounded sets into pre-compact sets.

Let φ and Θ be nonnegative continuous convex functionals on P, Λ be a nonnegative continuous concave functional on P, and Ψ be a nonnegative continuous functional on P. Then for positive numbers a, b, c, and d, we define the following sets:

$$\begin{aligned}& P(\varphi,d)=\bigl\{ x\in P : \varphi (x)< d \bigr\} , \\& P(\varphi,\Lambda,b,d)=\bigl\{ x\in P:b\leq\Lambda(x),\varphi(x)\leq d \bigr\} , \\& P(\varphi,\Theta,\Lambda,b,c,d)=\bigl\{ x\in P:b\leq\Lambda(x),\Theta(x)\leq c, \varphi(x)\leq d \bigr\} , \end{aligned}$$

and

$$R(\varphi,\Psi,a,d)=\bigl\{ x\in P:a\leq\Psi(x),\varphi(x)\leq d \bigr\} . $$

We will make use of the following fixed point theorem of Avery and Peterson to establish multiple positive solutions to problem (1.1) and (1.2).

Theorem 3.1

(See [20])

Let P be a cone in a real Banach space E. Let φ and Θ be nonnegative continuous convex functionals on P, Λ be a nonnegative continuous concave functional on P, and Ψ be a nonnegative continuous functional on P satisfying \(\Psi(kx)\leq k\Psi(x)\) for \(0 \leq{k} \leq1 \), such that for some positive numbers M and d,

$$\Lambda(x)\leq\Psi(x) \quad \textit{and}\quad \lVert{x}\lVert\leq M\varphi(x), $$

for all \(x\in\overline{P(\varphi,d)}\). Suppose

$$T:\overline{P(\varphi,d)}\rightarrow\overline{P(\varphi,d)}, $$

is completely continuous and there exist positive numbers a, b, and c with \(a< b\) such that

(S1):

\(\{x\in P(\varphi,\Theta,\Lambda,b,c,d):\Lambda(x)>b\} \neq \emptyset\) and \(\Lambda(Tx)>b\) for \(x\in P(\varphi,\Theta ,\Lambda,b,c,d)\);

(S2):

\(\Lambda(Tx)>b\) for \(x\in P(\varphi,\Lambda,b,d)\) with \(\Theta(Tx)>c\);

(S3):

\(0 \notin R(\varphi,\Psi,a,d)\) and \(\Psi(Tx)< a\) for \(x\in R(\varphi,\Psi,a,d)\) with \(\Psi(x)=a\).

Then T has at least three fixed points \(x_{1}, x_{2}, x_{3} \in\overline{P(\varphi,d)}\), such that

$$\begin{aligned}& \varphi(x_{i})\leq d, \quad \textit{for } i=1,2,3, \\& b< \Lambda(x_{1}), \qquad a< \Psi(x_{2})\quad \textit{with } \Lambda(x_{2})< b, \end{aligned}$$

and

$$\Psi(x_{3})< a. $$

4 Existence result for the case of \(\alpha(t) \geq t\) on J

The function \(h(t)\) in (1.1) satisfies (H2). We introduce the notation

$$l_{1}=\kappa[h]= \int_{0}^{1}e_{1}(s)h(s)\,ds, \qquad \hat{l}_{1}= \int _{0}^{1}\hat{e}_{1}(s)h(s)\,ds. $$

Let \(X=C(J,\mathbb{R})\) be our Banach space with the maximum norm \(\lVert{x}\lVert=\max_{t\in J} \lvert x(t) \rvert\).

Set

$$ \begin{aligned} &P=\bigl\{ x\in X:x \mbox{ is nonnegative, concave and } x(t)\geq \Gamma\lVert{x}\lVert, t\in J\bigr\} , \\ &\overline{P_{r}}=\bigl\{ x \in P:\lVert{x}\lVert\leq r\bigr\} , \end{aligned} $$
(4.1)

where Γ is defined as in Lemma 2.4. We define the nonnegative continuous concave functional \(\Lambda=\Lambda_{1}\) on P by

$$\Lambda_{1}(x)=\min_{t\in[\delta,1]} \bigl\lvert x(t) \bigr\lvert , $$

where \(\delta\in(0,1)\) is such that \(0<\delta<1-\delta<1\). Set \(J_{\delta_{1}}=[\delta,1]\).

Note that \(\Lambda_{1}(x) \leq\lVert{x}\lVert\) for \(x\in P\). Put \(\Psi(x)=\Theta (x)=\varphi(x)=\lVert{x}\lVert\); then \(\Lambda_{1}(x)\leq\Psi(x)\) and \(\lVert{x}\lVert\leq M\varphi(x)\) with \(M=1\), as required in Theorem 3.1.

Theorem 4.1

Let assumptions (H1)-(H5) hold and \(\alpha(t) \geq t \) on J. In addition, we assume that there exist positive constants a, b, c, d, ω, L with \(a< b\) such that

$$ \begin{aligned} &\omega> \frac{1}{1-\gamma} \Biggl[\frac{l_{1}}{6}+ \biggl(\frac{1}{12}+\frac {\xi}{6} \biggr)l_{1}\overline{A} + \biggl(\frac{1}{12}+\frac{\eta}{6} \biggr)l_{1}\overline{B}+ \sum_{k=1}^{m} t_{k}(1-t_{k}) \Biggr], \\ &0< L< \frac{\Gamma}{(1-\gamma)} \Biggl[ \biggl(\frac{1}{30}+\frac{\xi }{12} \biggr)\hat{l}_{1} + \biggl(\frac{1}{12}+\frac{\xi}{6} \biggr)\hat{l}_{1}\underline {A}+ \biggl(\frac{1}{12} + \frac{\eta}{6} \biggr)\hat{l}_{1}\underline{B}+\sum _{k=1}^{m} t_{k}(1-t_{k}) \Biggr], \end{aligned} $$
(4.2)

and

(A1):

\(f(t, u, v)\leq\frac{d}{\omega}\), for \((t, u, v)\in J \times[0,d] \times[0,d]\), \(I_{k}(t,u)\leq\frac {d}{\omega}\), for \((t,u)\in J_{k} \times[0,d]\);

(A2):

\(f(t, u, v)\geq\frac{b}{L}\), for \((t, u, v)\in J_{\delta_{1}} \times[b,\frac{b}{\Gamma}]\times[b,\frac{b}{\Gamma}]\), \(I_{k}(t,u)\geq\frac{b}{L}\), for \((t,u)\in (J_{\delta_{1}}\cap J_{k}) \times[b,\frac{b}{\Gamma}]\);

(A3):

\(f(t, u, v)\leq\frac{a}{\omega}\), for \((t, u, v)\in J \times[0,a] \times[0,a]\), \(I_{k}(t,u)\leq\frac {a}{\omega}\), for \((t,u)\in J_{k} \times[0,a]\).

Then problem (1.1) and (1.2) has at least three positive solutions \(x_{1}\), \(x_{2}\), \(x_{3} \) satisfying \(\|x_{i}\|\leq d \), \(i=1, 2, 3\), and

$$b\leq\Lambda_{1}(x_{1}), \qquad a< \lVert x_{2} \lVert \quad \textit{with } \Lambda_{1}(x_{2})< b $$

and

$$\lVert x_{3}\lVert< a. $$

Proof

For any \(x\in C(J,\mathbb{R}^{+})\), we define operator T by

$$\begin{aligned} (Tx) (t) =& \int_{0}^{1} H(t,s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s) +B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr] \,ds \\ &{}+\sum_{k=1}^{m} H(t,t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) , \end{aligned}$$
(4.3)

where \(hf_{x}(s)=h(s)f (s,x(s),x(\alpha(s)) )\). Indeed, \(T:X \rightarrow X\). Problem (1.1) and (1.2) has a solution x if and only if x solves the operator equation \(x=Tx\).

We shall prove that T has at least three fixed points by verifying that T satisfies the conditions of the Avery-Peterson fixed point theorem (Theorem 3.1).

From the definition of T, we can obtain

$$ (Tx)''(t) =- \bigl(F \bigl(hf_{x} \bigr) \bigr) (t)-A \bigl(hf_{x} \bigr)\lambda(t) -B \bigl(hf_{x} \bigr) \mu(t). $$
(4.4)

In view of (H1), (H2), Lemma 2.1, and Lemma 2.2, we have

$$(Tx)''(t) \leq0, \quad \mbox{for } t\in J. $$

So Tx is concave on J. From (4.3) and (4.4), combining with Lemma 2.4 and (H3), we can obtain

$$(Tx) (t) \geq0, \quad \mbox{for }t\in J. $$

Noting (2) in Lemma 2.4, it follows that

$$\begin{aligned} \lVert Tx \lVert =&\max_{t\in J} \Biggl\{ \int_{0}^{1} H(t,s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda (s)+B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr] \,ds \\ &{}+\sum_{k=1}^{m}H(t,t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr\} \\ \leq& \frac{1}{1-\gamma} \Biggl\{ \int_{0}^{1} e_{2}(s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s)+B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr]\,ds \\ &{}+\sum_{k=1}^{m} e_{2}(t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr\} . \end{aligned}$$
(4.5)

On the other hand, from the properties of \(H(t,s)\), we have

$$\begin{aligned} (Tx) (t) =& \int_{0}^{1} H(t,s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s)+B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr] \,ds \\ &{}+\sum_{k=1}^{m} H(t,t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \\ \geq& \frac{\Gamma}{1-\gamma} \Biggl\{ \int_{0}^{1} e_{2}(s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s)+B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr]\,ds \\ &{}+\sum_{k=1}^{m} e_{2}(t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr\} \\ \geq&\Gamma\lVert Tx \lVert. \end{aligned}$$
(4.6)

This proves that \(TP\subset P\).

Now we prove that the operator \(T: P\rightarrow P\) is completely continuous. Let \(x\in\overline{P_{r}}\), then \(\lVert x\lVert\leq r\). Note that f and \(I_{k}\) are continuous, hence bounded on \(J\times[0,r]\times[0,r]\) and \(J\times[0,r]\), respectively. Combining this with (4.5), there exists a constant \(K>0\) such that \(\lVert Tx \lVert\leq K\) for all \(x\in\overline{P_{r}}\). This proves that \(T\overline{P_{r}}\) is uniformly bounded. On the other hand, for \(\tau_{1}, \tau_{2} \in J \) there exists a constant \(L_{1} >0\) such that

$$\bigl\vert (Tx) (\tau_{1})-(Tx) (\tau_{2}) \bigr\vert \leq L_{1} \vert \tau_{1}-\tau_{2} \vert . $$

This shows that \(T\overline{P_{r}}\) is equicontinuous on J. By the Arzelà-Ascoli theorem, T is completely continuous.

Let \(x\in\overline{P(\varphi,d)}\), so \(0\leq x(t)\leq d\), \(t\in J\), and \(\lVert x \lVert\leq d\). Note that also \(0\leq x (\alpha (t) )\leq d\), \(t\in J\) because \(0 \leq t \leq\alpha(t) \leq1\) on J. Hence

$$\varphi(Tx)=\lVert Tx \lVert=\max_{t\in J} \bigl\lvert (Tx) (t) \bigr\lvert = \max_{t\in J} (Tx) (t). $$

By (4.2), Lemma 2.2, Lemma 2.4, and (A1), we have

$$\begin{aligned} \varphi(Tx) =&\max_{t\in J} \Biggl\{ \int_{0}^{1} H(t,s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s) +B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr] \,ds \\ &{}+\sum_{k=1}^{m}H(t,t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr\} \\ \leq&\frac{1}{1-\gamma} \Biggl[ \int_{0}^{1} \int _{0}^{1}e_{2}(s)e_{1}( \tau)h(\tau)f_{x}(\tau)\,d\tau \,ds+ \int_{0}^{1} e_{2}(s)A \bigl(hf_{x}(s)\bigr) (s+\xi) \,ds \\ &{}+ \int_{0}^{1} e_{2}(s)B \bigl(hf_{x}(s)\bigr) (\eta+1-s) \,ds+ \sum _{k=1}^{m} e_{2}(t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr] \\ \leq&\frac{d}{\omega(1-\gamma)} \Biggl[ \int_{0}^{1} \int _{0}^{1}e_{2}(s)e_{1}( \tau)h(\tau)\,d\tau \,ds + \overline{A} \int_{0}^{1}\kappa[h]e_{2}(s) (s+\xi)\,ds \\ &{}+\overline{B} \int_{0}^{1}\kappa[h]e_{2}(s) (\eta+1-s) \,ds + \sum_{k=1}^{m} t_{k}(1-t_{k}) \Biggr] \\ =& \frac{d}{\omega(1-\gamma)} \Biggl[\frac{l_{1}}{6}+ \biggl(\frac {1}{12}+ \frac{\xi}{6} \biggr)l_{1}\overline{A} + \biggl(\frac{1}{12}+ \frac{\eta}{6} \biggr)l_{1}\overline{B}+\sum _{k=1}^{m} t_{k}(1-t_{k}) \Biggr] \\ < & d. \end{aligned}$$

This shows that \(T :\overline{P(\varphi,d)}\rightarrow\overline {P(\varphi,d)}\).

To check condition (S1) we choose

$$x(t)=\frac{1}{2} \biggl(b+\frac{b}{\Gamma} \biggr),\quad t\in J. $$

Then

$$\lVert x \lVert= \frac{b(\Gamma+1)}{2\Gamma}< \frac{b}{\Gamma}, $$

so

$$\Lambda_{1}(x)=\min_{t\in[\delta,1]}x(t)=\frac{b(\Gamma +1)}{2\Gamma}>b= \frac{b}{\Gamma} \Gamma\geq\Gamma\lVert x \lVert. $$

This proves that

$$\biggl\{ x\in P\biggl(\varphi, \Theta, \Lambda_{1}, b, \frac{b}{\Gamma}, d\biggr) :b< \Lambda_{1}(x) \biggr\} \neq\emptyset. $$

Let \(b\leq x(t)\leq\frac{b}{\Gamma}\) for \(t\in[\delta,1]\). Since \(\delta\leq t \leq\alpha(t) \leq1\) on \([\delta, 1]\), it follows that \(b \leq x (\alpha(t) )\leq\frac{b}{\Gamma}\) on \([\delta,1]\). Moreover,

$$\Lambda_{1} (Tx)=\min_{t\in[\delta,1]} (Tx) (t). $$

By (4.2), Lemma 2.2, Lemma 2.4, and (A2), we have

$$\begin{aligned} \Lambda_{1} (Tx) =&\min_{t\in[\delta,1]} \Biggl\{ \int_{0}^{1} H(t,s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s) +B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr] \,ds \\ &{}+\sum_{k=1}^{m} H(t,t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr\} \\ \geq&\frac{\Gamma}{1-\gamma} \Biggl[ \int_{0}^{1} \int _{0}^{1}e_{2}(s)\hat{e}_{1}(s) \hat{e}_{1}(\tau)h(\tau)f_{x}(\tau)\,d\tau \,ds \\ &{}+ \int_{0}^{1} e_{2}(s)A \bigl(hf_{x}(s)\bigr) (s+\xi) \,ds \\ &{}+ \int_{0}^{1} e_{2}(s)B \bigl(hf_{x}(s)\bigr) (\eta+1-s) \,ds+ \sum _{k=1}^{m} e_{2}(t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr] \\ \geq&\frac{b\Gamma}{L(1-\gamma)} \Biggl[ \int_{0}^{1} \int _{0}^{1}e_{2}(s)\hat{e}_{1}(s) \hat{e}_{1}(\tau)h(\tau)\,d\tau \,ds + \underline{A} \int_{0}^{1}\hat{\kappa}[h]e_{2}(s) (s+ \xi)\,ds \\ &{}+\underline{B} \int_{0}^{1}\hat{\kappa}[h]e_{2}(s) ( \eta+1-s)\,ds + \sum_{k=1}^{m} t_{k}(1-t_{k}) \Biggr] \\ =& \frac{b\Gamma}{L(1-\gamma)} \Biggl[ \biggl(\frac{1}{30}+\frac{\xi }{12} \biggr)\hat{l}_{1}+ \biggl(\frac{1}{12}+\frac{\xi}{6} \biggr) \hat{l}_{1}\underline{A} \\ &{}+ \biggl(\frac{1}{12} +\frac{\eta}{6} \biggr)\hat{l}_{1}\underline{B}+\sum_{k=1}^{m} t_{k}(1-t_{k}) \Biggr] \\ >& b. \end{aligned}$$

This proves that condition (S1) holds.

Now we need to prove that condition (S2) is satisfied. Take

$$x \in P(\varphi, \Lambda_{1}, b, d) \quad \mbox{with } \lVert Tx \lVert>\frac{b}{\Gamma}=c. $$

Then

$$\Lambda_{1}(Tx)=\min_{t\in[\delta, 1]} (Tx) (t)\geq\Gamma \lVert Tx \lVert >\Gamma\frac{b}{\Gamma}=b. $$

So condition (S2) holds.

We finally show that condition (S3) also holds. Since \(\Psi(0)=0< a\), we have \(0\notin R(\varphi,\Psi,a,d)\). Suppose that \(x \in R(\varphi,\Psi,a,d)\) with \(\Psi(x)=\lVert x \lVert=a\).

Similarly, by (4.2), Lemma 2.2, Lemma 2.4, and (A3), we have

$$\begin{aligned} \Psi(Tx) =&\lVert Tx \lVert=\max_{t \in J} (Tx) (t) \\ =&\max_{t\in J} \Biggl\{ \int_{0}^{1} H(t,s) \bigl[F \bigl(hf_{x}(s) \bigr)+A \bigl(hf_{x}(s) \bigr)\lambda(s) +B \bigl(hf_{x}(s) \bigr)\mu(s) \bigr] \,ds \\ &{} + \sum_{k=1}^{m} H(t,t_{k})I_{k} \bigl(t_{k},x(t_{k}) \bigr) \Biggr\} \\ \leq&\frac{a}{\omega} \Biggl\{ \frac{1}{1-\gamma} \Biggl[\frac {l_{1}}{6}+ \biggl(\frac{1}{12}+\frac{\xi}{6} \biggr)l_{1}\overline{A} + \biggl(\frac{1}{12}+\frac{\eta}{6} \biggr)l_{1}\overline{B}+ \sum_{k=1}^{m} t_{k}(1-t_{k}) \Biggr] \Biggr\} \\ < &a. \end{aligned}$$

This proves that condition (S3) is satisfied.

Thus, by Theorem 3.1, there exist at least three positive solutions \(x_{1} \), \(x_{2}\), \(x_{3}\) of problem (1.1) and (1.2) such that \(\lVert x_{i} \lVert\leq d \) for \(i= 1, 2, 3\),

$$b \leq\min_{t\in[\delta, 1]} x_{1}(t), \qquad a< \lVert x_{2} \lVert \quad \mbox{with } \min_{t\in[\delta, 1]} x_{2}(t)< b, $$

and \(\lVert x_{3} \lVert < a\). This ends the proof. □

Example

We consider the following BVP:

$$ \left \{ \textstyle\begin{array}{l} x^{(4)}(t)=h(t)f (x (\alpha(t) ) ), \quad t\in J_{0}, \\ x(0)=x(1)=\int_{0}^{1}\frac{s}{2}x(s)\,ds, \qquad \triangle x'_{t_{1}}=-I_{1} (t_{1}, x(t_{1}) ), \\ x''(0)-\frac{1}{2}x'''(0)=\int_{0}^{1}sx''(s)\,ds,\qquad x''(1)+\frac{1}{2}x'''(1)=\int_{0}^{1}s^{2}x''(s)\,ds, \end{array}\displaystyle \right . $$
(4.7)

with \(\alpha\in C(J,J)\), \(h(t)=Dt\), \(\alpha(t)\geq t\), \(\xi=\eta=\frac{1}{2}\), \(\rho=2\), and \(t_{1}=\frac{1}{12}\). It follows that \(\mu(t)=\frac{3}{2}-t\), \(\lambda(t)=t+\frac{1}{2}\), for \(t\in J\), and

$$f(v)=I_{1}(t,v)= \textstyle\begin{cases} \frac{v^{2}}{250}, & 0\leq v\leq 1, \\ \frac{1}{250}(748v-747), & 1\leq v\leq\frac{3}{2}, \\ \frac{13}{69}v+\frac{28}{23}, & \frac {3}{2}\leq v\leq36, \\ 8, & v\geq36. \end{cases} $$

Note that \(f\in C(\mathbb{R}^{+},\mathbb{R}^{+})\). As a function α we can take, for example, \(\alpha(t)=\sqrt{t}\).
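Indeed, f is continuous at the junction points, since the adjacent pieces agree there:

$$f(1)=\frac{1}{250}=\frac{748\cdot1-747}{250},\qquad f \biggl(\frac{3}{2} \biggr)=\frac{748\cdot\frac{3}{2}-747}{250}=\frac{3}{2}=\frac{13}{69}\cdot\frac{3}{2}+\frac{28}{23},\qquad f(36)=\frac{13}{69}\cdot36+\frac{28}{23}=8. $$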

In this case we have \(\gamma=\frac{1}{4}\), \(\Delta=-\frac{85}{36}\), \(\overline{A}=\frac{47}{170}\), \(\overline{B}=\frac{71}{170}\), \(\underline{A}=\frac{269}{6\text{,}800}\), \(\underline{B}=\frac{457}{6\text{,}800}\), \(l_{1}=\frac{11}{48}D\), \(\hat{l}_{1}=\frac{1}{12}D\), and \(\Gamma=\frac{1}{24}\). Let \(a=1\), \(b=\frac{3}{2}\), \(c=36\), \(d\geq2\text{,}000\), \(D=2\text{,}784\), and take \(\omega=250\), \(L=1\). Then all assumptions of Theorem 4.1 hold, so BVP (4.7) has at least three positive solutions.
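As a quick sanity check, the constants above can be reproduced symbolically from their defining formulas in Section 2 and Section 4. The following sketch assumes SymPy is available; the variable names are ours, chosen for this example only. It also evaluates the right-hand sides of the two inequalities in (4.2) to confirm that \(\omega=250\) and \(L=1\) are admissible.

```python
# A sketch verifying the constants of Example (4.7) symbolically; assumes SymPy
# is installed. Variable names are ours and not taken from the paper.
from sympy import symbols, integrate, Matrix, Rational

s = symbols('s')
xi = eta = Rational(1, 2)
rho = 1 + xi + eta                        # rho = 1 + xi + eta = 2
g0, g1, g2 = s / 2, s, s**2               # weight functions of (4.7)
t1, D = Rational(1, 12), 2784             # impulse point and h(t) = D*t

lam = s + xi                              # lambda(s)
mu  = eta + 1 - s                         # mu(s)
e1  = mu * lam / rho                      # e1(s) = G1(s, s)
e1h = (1 - s) * lam / rho                 # \hat{e}_1(s)
e2  = s * (1 - s)                         # e2(s) = G2(s, s)

alpha = lambda v: integrate(g1 * v, (s, 0, 1))   # alpha[v]
beta  = lambda v: integrate(g2 * v, (s, 0, 1))   # beta[v]

gamma = integrate(g0, (s, 0, 1))                 # gamma = 1/4
Gamma = integrate(e2 * g0, (s, 0, 1))            # Gamma = 1/24
Delta = Matrix([[-alpha(lam), rho - alpha(mu)],
                [rho - beta(lam), -beta(mu)]]).det()            # -85/36

A_bar = Matrix([[alpha(1), rho - alpha(mu)],
                [beta(1), -beta(mu)]]).det() / Delta            # 47/170
B_bar = Matrix([[-alpha(lam), alpha(1)],
                [rho - beta(lam), beta(1)]]).det() / Delta      # 71/170
A_low = Matrix([[alpha(e1h), rho - alpha(mu)],
                [beta(e1h), -beta(mu)]]).det() / Delta          # 269/6800
B_low = Matrix([[-alpha(lam), alpha(e1h)],
                [rho - beta(lam), beta(e1h)]]).det() / Delta    # 457/6800

l1  = integrate(e1 * D * s, (s, 0, 1))           # l1 = 11*D/48 = 638
l1h = integrate(e1h * D * s, (s, 0, 1))          # hat{l}_1 = D/12 = 232

# Right-hand sides of the two inequalities in (4.2)
rhs_omega = (l1 / 6 + (Rational(1, 12) + xi / 6) * l1 * A_bar
             + (Rational(1, 12) + eta / 6) * l1 * B_bar
             + t1 * (1 - t1)) / (1 - gamma)
rhs_L = Gamma / (1 - gamma) * ((Rational(1, 30) + xi / 12) * l1h
        + (Rational(1, 12) + xi / 6) * l1h * A_low
        + (Rational(1, 12) + eta / 6) * l1h * B_low
        + t1 * (1 - t1))

print(gamma, Gamma, Delta, A_bar, B_bar, A_low, B_low, l1, l1h)
print(float(rhs_omega), 250 > rhs_omega)         # omega = 250 exceeds the bound
print(float(rhs_L), 1 < rhs_L)                   # L = 1 lies below the bound
```

With these data the script returns the rational constants listed above, and the two bounds evaluate to approximately 240.3 and 1.20, consistent with the choices \(\omega=250\) and \(L=1\).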

5 Existence result for the case of \(\alpha(t) \leq t\) on J

The cone P is defined as in (4.1). We define the nonnegative continuous concave functional \(\Lambda=\Lambda_{2}\) on P by

$$\Lambda_{2}(x)=\min_{t\in[0,1-\delta]} \bigl\lvert x(t) \bigr\lvert , $$

where \(\delta\in(0,1)\) is such that \(0<\delta<1-\delta<1\). Set \(J_{\delta_{2}}=[0,1-\delta]\). By an argument similar to the proof of Theorem 4.1, we obtain the following result.

Theorem 5.1

Let assumptions (H1)-(H5) hold and \(\alpha(t) \leq t \) on J. In addition, we assume that there exist positive constants a, b, c, d, ω, L with \(a< b\) such that (4.2) holds and

(A1):

\(f(t, u, v)\leq\frac{d}{\omega}\), for \((t, u, v)\in J \times[0,d] \times[0,d]\), \(I_{k}(t,u)\leq\frac {d}{\omega}\), for \((t,u)\in J_{k} \times[0,d]\);

(A2):

\(f(t, u, v)\geq\frac{b}{L}\), for \((t, u, v)\in J_{\delta_{2}} \times[b,\frac{b}{\Gamma}]\times[b,\frac{b}{\Gamma}]\), \(I_{k}(t,u)\geq\frac{b}{L}\), for \((t,u)\in (J_{\delta_{2}}\cap J_{k}) \times[b,\frac{b}{\Gamma}]\);

(A3):

\(f(t, u, v)\leq\frac{a}{\omega}\), for \((t, u, v)\in J \times[0,a] \times[0,a]\), \(I_{k}(t,u)\leq\frac {a}{\omega}\), for \((t,u)\in J_{k} \times[0,a]\).

Then problem (1.1) and (1.2) has at least three positive solutions \(x_{1}\), \(x_{2}\), \(x_{3} \) satisfying \(\|x_{i}\|\leq d \), \(i=1, 2, 3\), and

$$b\leq\Lambda_{2}(x_{1}), \qquad a< \lVert x_{2} \lVert \quad \textit{with } \Lambda_{2}(x_{2})< b, $$

and

$$\lVert x_{3}\lVert< a. $$

References

  1. Sun, J, Wang, X: Monotone positive solutions for an elastic beam equation with nonlinear boundary conditions. Math. Probl. Eng. 2011, Article ID 609189 (2011)

  2. Yao, Q: Positive solutions of nonlinear beam equations with time and space singularities. J. Math. Anal. Appl. 374, 681-692 (2011)

  3. O’Regan, D: Solvability of some fourth (and higher) order singular boundary value problems. J. Math. Anal. Appl. 161, 78-116 (1991)

  4. Zhang, X: Existence and iteration of monotone positive solutions for an elastic beam equation with a corner. Nonlinear Anal. 10, 2097-2103 (2009)

  5. Bonanno, G, Bella, BD: A boundary value problem for fourth-order elastic beam equations. J. Math. Anal. Appl. 343, 1166-1176 (2008)

  6. Cabada, A, Tersian, S: Existence and multiplicity of solutions to boundary value problems for fourth-order impulsive differential equations. Bound. Value Probl. 2014, 105 (2014)

  7. Amster, P, Alzate, PPC: A shooting method for a nonlinear beam equation. Nonlinear Anal. 68, 2072-2078 (2008)

  8. Zhao, Y, Huang, L, Zhang, Q: Existence results for an impulsive Sturm-Liouville boundary value problems with mixed double parameters. Bound. Value Probl. 2015, 150 (2015)

  9. Kang, P, Wei, Z, Xu, J: Positive solutions to fourth-order singular boundary value problems with integral boundary conditions in abstract spaces. Appl. Math. Comput. 206, 245-256 (2008)

  10. Zhang, X, Feng, M: Positive solutions for classes of multi-parameter fourth-order impulsive differential equations with one-dimensional singular p-Laplacian. Bound. Value Probl. 2014, 112 (2014)

  11. Lv, X, Wang, L, Pei, M: Monotone positive solution of a fourth-order BVP with integral boundary conditions. Bound. Value Probl. 2015, 172 (2015)

  12. Ma, H: Symmetric positive solutions for nonlocal boundary value problems of fourth order. Nonlinear Anal. 68, 645-651 (2008)

  13. Zhang, X, Feng, M: Positive solutions of singular beam equations with the bending term. Bound. Value Probl. 2015, 84 (2015)

  14. Jankowski, T: Positive solutions for fourth-order differential equations with deviating arguments and integral boundary conditions. Nonlinear Anal. 73, 1289-1299 (2010)

  15. Feng, M, Qiu, J: Multi-parameter fourth order impulsive integral boundary value problems with one-dimensional m-Laplacian and deviating arguments. J. Inequal. Appl. 2015, 64 (2015)

  16. Afrouzi, GA, Hadjian, A, Radulescu, VD: Variational approach to fourth-order impulsive differential equations with two control parameters. Results Math. 65, 371-384 (2014)

  17. Sun, JT, Chen, HB, Yang, L: Variational methods to fourth-order impulsive differential equations. J. Appl. Math. Comput. 35, 232-340 (2011)

  18. Jankowski, T: Positive solutions for second order impulsive differential equations involving Stieltjes integral conditions. Nonlinear Anal. 74, 3775-3785 (2011)

  19. Ding, W, Wang, Y: New result for a class of impulsive differential equation with integral boundary conditions. Commun. Nonlinear Sci. Numer. Simul. 18, 1095-1105 (2013)

  20. Avery, RI, Peterson, AC: Three positive fixed points of nonlinear operators on ordered Banach spaces. Comput. Math. Appl. 42, 313-322 (2001)

Acknowledgements

This work was supported by the Chinese Universities Scientific Fund (Project No. 2016 LX002). The authors would like to thank the anonymous referees for their helpful comments and suggestions, which led to the improvement of the presentation and quality of this work.

Author information

Corresponding author

Correspondence to Huihui Pang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Dou, J., Zhou, D. & Pang, H. Existence and multiplicity of positive solutions to a fourth-order impulsive integral boundary value problem with deviating argument. Bound Value Probl 2016, 166 (2016). https://doi.org/10.1186/s13661-016-0674-8
