
# Existence of solutions for second-order three-point integral boundary value problems at resonance

Boundary Value Problems 2013, 2013:197

https://doi.org/10.1186/1687-2770-2013-197

• Accepted: 13 August 2013

## Abstract

A class of second-order three-point integral boundary value problems at resonance is investigated in this paper. Using the intermediate value theorem, we obtain a sufficient condition for the existence of a solution. An example is given to demonstrate our main results.

MSC: 34B10, 34B16, 34B18.

## Keywords

• integral boundary value problem
• resonance
• fixed point theorem
• intermediate value theorem

## 1 Introduction

We are interested in the existence of the solutions for the following second-order three-point integral boundary value problems at resonance:
${u}^{″}\left(t\right)+f\left(t,u\left(t\right)\right)=0,\phantom{\rule{1em}{0ex}}0\le t\le 1,$
(1.1)
$u\left(0\right)=0,\phantom{\rule{2em}{0ex}}u\left(1\right)=\alpha {\int }_{0}^{\eta }u\left(s\right)\phantom{\rule{0.2em}{0ex}}ds,$
(1.2)

where $\eta \in \left(0,1\right)$, $\frac{1}{2}\alpha {\eta }^{2}=1$ and $f\in C\left(\left[0,1\right]×R,R\right)$.
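The condition $\frac{1}{2}\alpha {\eta }^{2}=1$ is what makes the problem resonant: the linearized problem ${u}^{″}=0$ with (1.2) then admits the nontrivial solution $u\left(t\right)=t$. A minimal numerical sketch of this fact, assuming the illustrative values $\alpha =8$, $\eta =\frac{1}{2}$ (the same values used in the example of Section 4):

```python
# Resonance: with (1/2)*alpha*eta**2 = 1, u(t) = t satisfies u'' = 0,
# u(0) = 0 and u(1) = alpha * int_0^eta s ds.  The values alpha = 8,
# eta = 1/2 are illustrative (they reappear in the Section 4 example).
eta = 0.5
alpha = 2.0 / eta**2                      # enforces (1/2)*alpha*eta^2 = 1

# trapezoid approximation of alpha * int_0^eta s ds (exact for a linear integrand)
n = 1000
xs = [eta * i / n for i in range(n + 1)]
integral = sum((xs[i + 1] - xs[i]) * (xs[i] + xs[i + 1]) / 2 for i in range(n))

assert abs(alpha - 8.0) < 1e-12
assert abs(alpha * integral - 1.0) < 1e-9  # u(1) = 1 is reproduced by the BC
```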

In the last few decades, many authors have studied the multi-point boundary value problems for linear and nonlinear ordinary differential equations by using various methods, such as Leray-Schauder fixed point theorem, coincidence degree theory, Krasnosel’skii fixed point theorem, the shooting method and Leggett-Williams fixed point theorem. We refer the readers to  and references therein. Also, there are a lot of papers dealing with the resonant case for multi-point boundary value problems, see .

In , Infante and Zima studied the existence of solutions for the following n-point boundary value problem with resonance:
${u}^{″}\left(t\right)+f\left(t,u\left(t\right)\right)=0,\phantom{\rule{1em}{0ex}}0\le t\le 1,$
(1.3)
${u}^{\prime }\left(0\right)=0,\phantom{\rule{2em}{0ex}}u\left(1\right)=\sum _{i=0}^{n-2}{\alpha }_{i}u\left({\eta }_{i}\right),$
(1.4)

where $0<{\eta }_{i}<1$ and ${\sum }_{i=0}^{n-2}{\alpha }_{i}=1$. Using the Leggett-Williams norm-type theorem, they obtained the existence of a positive solution for problem (1.3)-(1.4).

Problem (1.1)-(1.2) with $0<\eta <1$ and $0<\frac{1}{2}\alpha {\eta }^{2}<1$ was studied by Tariboon and Sitthiwirattham in . They obtained the existence of at least one positive solution. In this paper, we are interested in the existence of the solution for problem (1.1)-(1.2) under the condition $\frac{1}{2}\alpha {\eta }^{2}=1$, which is a resonant case.

In this paper, using some properties of the Green function $G\left(t,s\right)$ and the intermediate value theorem, we establish a sufficient condition for the existence of solutions of problem (1.1)-(1.2).

The rest of the paper is organized as follows. The main results for problem (1.1)-(1.2) under the condition $\frac{1}{2}\alpha {\eta }^{2}=1$, together with some preparatory lemmas, are given in Section 2. We prove our main result in Section 3, and finally an example is given in Section 4 to illustrate our result.

## 2 Some lemmas and main results

In this section, we first introduce some lemmas which will be useful in the proof of our main results.

Let $\mathrm{\Omega }=C\left[0,1\right]$ be equipped with the norm
$\parallel u\parallel =\underset{0\le t\le 1}{sup}|u\left(t\right)|;$

then Ω is a Banach space.

Lemma 2.1 (Nonlinear alternative of Leray-Schauder)

Let X be a Banach space with $C\subset X$ closed and convex. Assume that U is a relatively open subset of C with $0\in U$ and $T:\overline{U}\to C$ is completely continuous. Then either
1. (i)

T has a fixed point in $\overline{U}$, or

2. (ii)

there exist $u\in \partial U$ and $\gamma \in \left(0,1\right)$ with $u=\gamma Tu$.

Lemma 2.2 Problem (1.1)-(1.2) is equivalent to the following integral equation:
$u\left(t\right)={\int }_{0}^{1}G\left(t,s\right)f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds+u\left(1\right)t,$
(2.1)
where
$G\left(t,s\right)=\frac{1}{\alpha -2}\left\{\begin{array}{cc}-\left(\alpha -2\right)\left(t-s\right)+\alpha t\left(1-s\right)-\alpha t{\left(\eta -s\right)}^{2},\hfill & 0\le s\le min\left\{t,\eta \right\}\le 1;\hfill \\ -\left(\alpha -2\right)\left(t-s\right)+\alpha t\left(1-s\right),\hfill & \eta \le s\le t\le 1;\hfill \\ \alpha t\left(1-s\right)-\alpha t{\left(\eta -s\right)}^{2},\hfill & t\le s\le \eta ;\hfill \\ \alpha t\left(1-s\right),\hfill & max\left\{t,\eta \right\}\le s\le 1.\hfill \end{array}$
(2.2)
Proof Assume that $u\left(t\right)$ is a solution of problem (1.1)-(1.2), then it satisfies the following integral equation:
$u\left(t\right)=-{\int }_{0}^{t}\left(t-s\right)f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds+{C}_{1}+{C}_{2}t,$
(2.3)
where ${C}_{1}$, ${C}_{2}$ are constants. By the boundary value condition (1.2), we obtain
$\begin{array}{r}{C}_{1}=0,\\ {C}_{2}=\frac{\alpha }{\alpha -2}\left\{{\int }_{0}^{1}\left(1-s\right)f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{0}^{\eta }{\left(\eta -s\right)}^{2}f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds\right\}+u\left(1\right).\end{array}$
(2.4)
Combining (2.3) with (2.4), we have
$\begin{array}{rcl}u\left(t\right)& =& \frac{1}{\alpha -2}\left\{-{\int }_{0}^{t}\left(\alpha -2\right)\left(t-s\right)f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}\alpha t\left(1-s\right)f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds\\ -{\int }_{0}^{\eta }\alpha t{\left(\eta -s\right)}^{2}f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds\right\}+u\left(1\right)t.\end{array}$
(2.5)

According to (2.5) it is easy to see that (2.1) holds.

On the other hand, if $u\left(t\right)$ is a solution of equation (2.1), then differentiating both sides of (2.1) twice shows that $u\left(t\right)$ is also a solution of problem (1.1)-(1.2).

Therefore, problem (1.1)-(1.2) is equivalent to the integral equation (2.1) with the function $G\left(t,s\right)$ defined in (2.2). The proof is completed. □
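The representation (2.1) can be spot-checked numerically. In the sketch below we assume $\alpha =8$, $\eta =\frac{1}{2}$ and the manufactured solution $u\left(t\right)=16{t}^{3}-21{t}^{2}+t$, chosen so that $u\left(0\right)=0$ and $u\left(1\right)=8{\int }_{0}^{1/2}u\left(s\right)\phantom{\rule{0.2em}{0ex}}ds$, with $f\left(t\right)=-{u}^{″}\left(t\right)=42-96t$; the right-hand side of (2.1) is evaluated by composite Simpson quadrature, splitting at the kinks of G:

```python
# Check of Lemma 2.2 on a manufactured solution (alpha = 8, eta = 1/2):
# u(t) = 16t^3 - 21t^2 + t satisfies u(0) = 0 and u(1) = 8*int_0^{1/2} u ds,
# with f(t) = -u''(t) = 42 - 96t.  We verify (2.1): u(t) = int_0^1 G(t,s)f(s)ds + u(1)*t.
alpha, eta = 8.0, 0.5

def G(t, s):
    """Green function (2.2)."""
    if s <= min(t, eta):
        g = -(alpha - 2)*(t - s) + alpha*t*(1 - s) - alpha*t*(eta - s)**2
    elif eta <= s <= t:
        g = -(alpha - 2)*(t - s) + alpha*t*(1 - s)
    elif t <= s <= eta:
        g = alpha*t*(1 - s) - alpha*t*(eta - s)**2
    else:  # max(t, eta) <= s
        g = alpha*t*(1 - s)
    return g / (alpha - 2)

def f(s):
    return 42.0 - 96.0*s

def simpson(h, a, b, n=200):
    """Composite Simpson rule (exact here: the integrand is cubic in s)."""
    tot, dx = h(a) + h(b), (b - a)/n
    for k in range(1, n):
        tot += (4 if k % 2 else 2) * h(a + k*dx)
    return tot * dx / 3

u_exact = lambda t: 16*t**3 - 21*t**2 + t
u1 = u_exact(1.0)                 # = -4
for t in (0.2, 0.5, 0.8):
    # split at the kinks s = t and s = eta so each Simpson piece is smooth
    pts = sorted({0.0, t, eta, 1.0})
    val = sum(simpson(lambda s: G(t, s)*f(s), a, b) for a, b in zip(pts, pts[1:]))
    assert abs(val + u1*t - u_exact(t)) < 1e-9
```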

Lemma 2.3 For any $\left(t,s\right)\in \left[0,1\right]×\left[0,1\right]$, $G\left(t,s\right)$ is continuous, and $G\left(t,s\right)>0$ for any $\left(t,s\right)\in \left(0,1\right)×\left(0,1\right)$.

Proof The continuity of $G\left(t,s\right)$ for any $\left(t,s\right)\in \left[0,1\right]×\left[0,1\right]$ is obvious. Let
${g}_{1}\left(t,s\right)=-\left(\alpha -2\right)\left(t-s\right)+\alpha t\left(1-s\right)-\alpha t{\left(\eta -s\right)}^{2},\phantom{\rule{1em}{0ex}}0\le s\le min\left\{t,\eta \right\}\le 1.$
Here we only need to prove that ${g}_{1}\left(t,s\right)>0$ for $0\le s\le min\left\{t,\eta \right\}\le 1$; the rest of the proof is similar. From the definition of ${g}_{1}\left(t,s\right)$, $0<\eta <1$ and the resonant condition $\frac{1}{2}\alpha {\eta }^{2}=1$, we have
$\begin{array}{rcl}{g}_{1}\left(t,s\right)& =& -\left(\alpha -2\right)\left(t-s\right)+\alpha t\left(1-s\right)-\alpha t{\left(\eta -s\right)}^{2}=2\left(t-s\right)+\alpha s\left(1-t\right)-\alpha t{\left(\eta -s\right)}^{2}\\ \ge & 2\left(t-s\right)+\alpha s\left(1-t\right)-\alpha t\left(1-s\right)=2\left(t-s\right)+\alpha \left(s-t\right)\\ >& 2\left(t-s\right)+2\left(s-t\right)=0\end{array}$

for $0\le s\le min\left\{t,\eta \right\}\le 1$. The proof is completed. □

Let
${G}^{\ast }\left(t,s\right)={t}^{-1}G\left(t,s\right).$
(2.6)
Then
${G}^{\ast }\left(t,s\right)=\frac{1}{\alpha -2}\left\{\begin{array}{cc}-\left(\alpha -2\right)\left(t-s\right){t}^{-1}+\alpha \left(1-s\right)-\alpha {\left(\eta -s\right)}^{2},\hfill & 0\le s\le min\left\{t,\eta \right\}\le 1;\hfill \\ -\left(\alpha -2\right)\left(t-s\right){t}^{-1}+\alpha \left(1-s\right),\hfill & \eta \le s\le t\le 1;\hfill \\ \alpha \left(1-s\right)-\alpha {\left(\eta -s\right)}^{2},\hfill & t\le s\le \eta ;\hfill \\ \alpha \left(1-s\right),\hfill & max\left\{t,\eta \right\}\le s\le 1.\hfill \end{array}$
(2.7)
Thus, problem (1.1)-(1.2) is equivalent to the following integral equation:
$u\left(t\right)={\int }_{0}^{1}t{G}^{\ast }\left(t,s\right)f\left(s,u\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds+u\left(1\right)t.$
(2.8)

By a simple computation, the new Green function ${G}^{\ast }\left(t,s\right)$ has the following properties.

Lemma 2.4 For any $\left(t,s\right)\in \left[0,1\right]×\left[0,1\right]$, ${G}^{\ast }\left(t,s\right)$ is continuous, and ${G}^{\ast }\left(t,s\right)>0$ for any $\left(t,s\right)\in \left(0,1\right)×\left(0,1\right)$. Furthermore,
$\underset{t\to 0}{lim}{G}^{\ast }\left(t,s\right):={G}^{\ast }\left(0,s\right)=\frac{1}{\alpha -2}\left\{\begin{array}{cc}\alpha \left(1-s\right)-\alpha {\left(\eta -s\right)}^{2},\hfill & 0\le s\le \eta ;\hfill \\ \alpha \left(1-s\right),\hfill & \eta \le s\le 1.\hfill \end{array}$
(2.9)
Lemma 2.5 For any $s\in \left(0,1\right)$, ${G}^{\ast }\left(t,s\right)$ is nonincreasing with respect to $t\in \left[0,1\right]$, and for any $s\in \left[0,1\right]$, $\frac{\partial {G}^{\ast }\left(t,s\right)}{\partial t}\le 0$, and $\frac{\partial {G}^{\ast }\left(t,s\right)}{\partial t}=0$ for $t\in \left[0,s\right]$. That is, ${G}^{\ast }\left(1,s\right)\le {G}^{\ast }\left(t,s\right)\le {G}^{\ast }\left(s,s\right)$, where
${G}^{\ast }\left(t,s\right)\le {G}^{\ast }\left(s,s\right)=\frac{1}{\alpha -2}\left\{\begin{array}{cc}\alpha \left(1-s\right)-\alpha {\left(\eta -s\right)}^{2},\hfill & 0\le s\le \eta ;\hfill \\ \alpha \left(1-s\right),\hfill & \eta \le s\le 1\hfill \end{array}$
(2.10)
and
${G}^{\ast }\left(t,s\right)\ge {G}^{\ast }\left(1,s\right)=\frac{1}{\alpha -2}\left\{\begin{array}{cc}2\left(1-s\right)-\alpha {\left(\eta -s\right)}^{2},\hfill & 0\le s\le \eta ;\hfill \\ 2\left(1-s\right),\hfill & \eta \le s\le 1.\hfill \end{array}$
(2.11)
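Lemmas 2.3-2.5 can be spot-checked on a grid. The sketch below assumes the illustrative values $\alpha =8$, $\eta =\frac{1}{2}$; the function name `Gstar` is our own:

```python
# Spot-check of Lemmas 2.3-2.5 (illustrative values alpha = 8, eta = 1/2):
# G*(t,s) > 0 inside the square, G* is nonincreasing in t, and
# G*(1,s) <= G*(t,s) <= G*(s,s).
alpha, eta = 8.0, 0.5

def Gstar(t, s):
    # (2.7); for s >= t the (t-s)/t term is absent, which also covers
    # t = 0 via the limit (2.9)
    g = alpha * (1 - s) - (alpha * (eta - s)**2 if s <= eta else 0.0)
    if t > s:
        g -= (alpha - 2) * (t - s) / t
    return g / (alpha - 2)

N = 50
ts = [i / N for i in range(N + 1)]
for j in range(1, N):                     # s ranges over the open interval (0,1)
    s = j / N
    col = [Gstar(t, s) for t in ts]
    assert all(v > 0 for v in col)                                # positivity
    assert all(col[i] >= col[i + 1] - 1e-12 for i in range(N))    # nonincreasing in t
    assert max(col) <= Gstar(s, s) + 1e-12                        # upper bound (2.10)
    assert min(col) >= Gstar(1.0, s) - 1e-12                      # lower bound (2.11)
```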
Let
$u\left(t\right)=w\left(t\right)t.$
(2.12)
Then $u\left(1\right)=w\left(1\right)$, and equation (2.8) gives
$w\left(t\right)={\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,sw\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}ds+w\left(1\right).$
(2.13)
Now we let
$y\left(t\right)=w\left(t\right)-w\left(1\right).$
(2.14)
Then $y\left(1\right)=w\left(1\right)-w\left(1\right)=0$, and equation (2.13) gives
$y\left(t\right)={\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left(y\left(s\right)+w\left(1\right)\right)\right)\phantom{\rule{0.2em}{0ex}}ds.$
(2.15)
We replace $w\left(1\right)$ by any real number μ, then (2.15) can be rewritten as
$y\left(t\right)={\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left(y\left(s\right)+\mu \right)\right)\phantom{\rule{0.2em}{0ex}}ds.$
(2.16)
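The chain of substitutions (2.12)-(2.16) can be verified on a concrete example. In the sketch below we assume $\alpha =8$, $\eta =\frac{1}{2}$ and reuse the manufactured solution $u\left(t\right)=16{t}^{3}-21{t}^{2}+t$ of ${u}^{″}+f\left(t\right)=0$, $f\left(t\right)=42-96t$, which satisfies both conditions in (1.2); then $y\left(t\right)=u\left(t\right)/t-u\left(1\right)=16{t}^{2}-21t+5$ and $\mu =u\left(1\right)=-4$, and (2.16) should hold (here f does not depend on u, so $f\left(s,s\left(y+\mu \right)\right)=f\left(s\right)$):

```python
# Check of the reduction (2.12)-(2.16) on a manufactured solution
# (alpha = 8, eta = 1/2): y(t) = u(t)/t - u(1) = 16t^2 - 21t + 5 should
# satisfy y(t) = int_0^1 G*(t,s) f(s) ds with f(t) = 42 - 96t.
alpha, eta = 8.0, 0.5

def Gstar(t, s):
    """G*(t,s) from (2.7); for s >= t this also gives the t -> 0 limit (2.9)."""
    g = alpha * (1 - s) - (alpha * (eta - s)**2 if s <= eta else 0.0)
    if t > s:
        g -= (alpha - 2) * (t - s) / t
    return g / (alpha - 2)

f = lambda s: 42.0 - 96.0*s          # this f does not depend on u
y = lambda t: 16*t**2 - 21*t + 5     # y(t) = u(t)/t - u(1)

def simpson(h, a, b, n=200):
    """Composite Simpson rule; exact here since the integrand is cubic in s."""
    tot, dx = h(a) + h(b), (b - a)/n
    for k in range(1, n):
        tot += (4 if k % 2 else 2)*h(a + k*dx)
    return tot*dx/3

for t in (0.25, 0.5, 0.75):
    pts = sorted({0.0, t, eta, 1.0})  # split at the kinks s = t and s = eta
    rhs = sum(simpson(lambda s: Gstar(t, s)*f(s), a, b) for a, b in zip(pts, pts[1:]))
    assert abs(rhs - y(t)) < 1e-9
```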
To present our result, we assume that $f\left(t,u\right)$ satisfies the following:
1. (H)
$f\left(t,u\right)\in C\left(\left[0,1\right]×R,R\right)$ and there exist two positive continuous functions $m\left(t\right),n\left(t\right)\in C\left(\left[0,1\right],{R}_{+}\right)$ such that
$|f\left(t,tu\right)|\le m\left(t\right)+n\left(t\right){|u|}^{p},\phantom{\rule{1em}{0ex}}t\in \left[0,1\right],$
(2.17)

where $0\le p\le 1$. Furthermore,
$\underset{u\to ±\mathrm{\infty }}{lim}f\left(t,tu\right)=±\mathrm{\infty }$
(2.18)

for any $t\in \left(0,1\right)$.

Our results are the following theorems.

Theorem 2.1 Assume that (H) holds. If
${\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds<1,$
(2.19)
then problem (1.1)-(1.2) has at least one solution, where
${G}^{\ast }\left(s,s\right)=\frac{1}{\alpha -2}\left\{\begin{array}{cc}\alpha \left(1-s\right)-\alpha {\left(\eta -s\right)}^{2},\hfill & 0\le s\le \eta ;\hfill \\ \alpha \left(1-s\right),\hfill & \eta \le s\le 1.\hfill \end{array}$
(2.20)
We define an operator T on the set Ω as follows:
$Ty\left(t\right)={\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left(y\left(s\right)+\mu \right)\right)\phantom{\rule{0.2em}{0ex}}ds.$
(2.21)

Lemma 2.6 Assume that $f\in C\left(\left[0,1\right]×R,R\right)$ and (2.19) hold. Then the operator T is completely continuous in Ω.

Proof It is not difficult to check that T maps Ω into itself. Next, we divide the proof into three steps.

Step 1. $Ty\left(t\right)$ is continuous with respect to $y\left(t\right)\in \mathrm{\Omega }$.

Suppose that $\left\{{y}_{n}\left(t\right)\right\}$ is a sequence in Ω converging to $y\left(t\right)\in \mathrm{\Omega }$. Since $f\left(t,ty\right)$ is continuous with respect to $y\in R$, and since, by Lemma 2.4, ${G}^{\ast }\left(t,s\right)$ is uniformly continuous on $\left[0,1\right]×\left[0,1\right]$, for any positive number ε there exists an integer N such that, for $n>N$,
$\parallel f\left(t,t\left({y}_{n}\left(t\right)+\mu \right)\right)-f\left(t,t\left(y\left(t\right)+\mu \right)\right)\parallel \le \frac{\epsilon }{{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)\phantom{\rule{0.2em}{0ex}}ds}.$
(2.22)
It follows from (2.21) and (2.22) that
$\begin{array}{rcl}\parallel \left(T{y}_{n}\right)\left(t\right)-\left(Ty\right)\left(t\right)\parallel & =& \parallel {\int }_{0}^{1}{G}^{\ast }\left(t,s\right)\left\{f\left(s,s\left({y}_{n}\left(s\right)+\mu \right)\right)-f\left(s,s\left(y\left(s\right)+\mu \right)\right)\right\}\phantom{\rule{0.2em}{0ex}}ds\parallel \\ \le & \parallel {\int }_{0}^{1}{G}^{\ast }\left(t,s\right)\phantom{\rule{0.2em}{0ex}}ds\parallel \parallel f\left(s,s\left({y}_{n}\left(s\right)+\mu \right)\right)-f\left(s,s\left(y\left(s\right)+\mu \right)\right)\parallel \\ \le & \epsilon .\end{array}$

Thus the operator T is continuous in Ω.

Step 2. T maps a bounded set in Ω into a bounded set.

Assume that $D\subset \mathrm{\Omega }$ is a bounded set with $\parallel y\left(t\right)\parallel \le r$ for any $y\in D$. Then we have from (2.17) and (2.21) that
$\begin{array}{rcl}\parallel \left(Ty\right)\left(t\right)\parallel & =& \parallel {\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left(y\left(s\right)+\mu \right)\right)\phantom{\rule{0.2em}{0ex}}ds\parallel \\ \le & {\int }_{0}^{1}{G}^{\ast }\left(t,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}{G}^{\ast }\left(t,s\right)n\left(s\right){|y\left(s\right)+\mu |}^{p}\phantom{\rule{0.2em}{0ex}}ds\\ \le & {\int }_{0}^{1}{G}^{\ast }\left(t,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}{G}^{\ast }\left(t,s\right)n\left(s\right)ds{\left(\parallel y\left(s\right)\parallel +\parallel \mu \parallel \right)}^{p}\\ \le & {\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{\left(r+\parallel \mu \parallel \right)}^{p}:=L.\end{array}$
(2.23)

This implies that the operator T maps a bounded set into a bounded set in Ω.

Step 3. T is equicontinuous in Ω.

It suffices to show that for any $y\left(t\right)\in D$ and any $0<{t}_{1}<{t}_{2}<1$, $|Ty\left({t}_{1}\right)-Ty\left({t}_{2}\right)|\to 0$ as ${t}_{1}\to {t}_{2}$. There are the following three possible cases:

Case (i) ${t}_{1}<{t}_{2}\le \eta$;

Case (ii) ${t}_{1}<\eta <{t}_{2}$;

Case (iii) $\eta \le {t}_{1}<{t}_{2}$.

We only need to consider case (i) because the proofs of the other two are similar. Since D is bounded, there exists $M>0$ such that $|f\left(s,s\left(y\left(s\right)+\mu \right)\right)|\le M$ for all $y\in D$ and $s\in \left[0,1\right]$. From (2.21), for any $y\in D$, we have
$|Ty\left({t}_{1}\right)-Ty\left({t}_{2}\right)|\le {\int }_{0}^{1}|{G}^{\ast }\left({t}_{1},s\right)-{G}^{\ast }\left({t}_{2},s\right)||f\left(s,s\left(y\left(s\right)+\mu \right)\right)|\phantom{\rule{0.2em}{0ex}}ds\le M{\int }_{0}^{1}|{G}^{\ast }\left({t}_{1},s\right)-{G}^{\ast }\left({t}_{2},s\right)|\phantom{\rule{0.2em}{0ex}}ds\to 0$
as ${t}_{1}\to {t}_{2}$, by the uniform continuity of ${G}^{\ast }\left(t,s\right)$ on $\left[0,1\right]×\left[0,1\right]$.
Because of Step 1 to Step 3, it follows that the operator T is completely continuous in Ω. The proof is completed. □

Lemma 2.7 Assume that $f\in C\left(\left[0,1\right]×R,R\right)$ and (2.17) and (2.19) hold. Then the integral equation (2.16) has at least one solution for any real number μ.

Proof We only need to show that the possible solutions of $y=\gamma Ty$, $\gamma \in \left(0,1\right)$, are a priori bounded. Set
$r=max\left\{1,\frac{{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{|\mu |}^{p}}{1-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds}\right\},$
(2.24)
and define a set $K\subset \mathrm{\Omega }$ as follows:
$K=\left\{y\in \mathrm{\Omega }\mid \parallel y\left(t\right)\parallel \le r\right\}.$

To use Lemma 2.1 to prove the existence of a fixed point of the operator T, we need to show that the second alternative of Lemma 2.1 cannot occur.

In fact, assume that there exist $y\in \partial K$ with $\parallel y\left(t\right)\parallel =r$ and $\gamma \in \left(0,1\right)$ such that $y=\gamma Ty$. It follows that
$|y\left(t\right)|=\gamma |\left(Ty\right)\left(t\right)|=\gamma |{\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left(y\left(s\right)+\mu \right)\right)\phantom{\rule{0.2em}{0ex}}ds|$
and
$\begin{array}{rcl}\parallel y\left(t\right)\parallel & \le & \gamma {\int }_{0}^{1}{G}^{\ast }\left(s,s\right)|f\left(s,s\left(y\left(s\right)+\mu \right)\right)|\phantom{\rule{0.2em}{0ex}}ds\\ & <& {\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{|\mu |}^{p}+{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{r}^{p}\\ & \le & {\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{|\mu |}^{p}+{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\phantom{\rule{0.2em}{0ex}}r\\ & \le & r.\end{array}$
(2.25)
Here we use the inequality ${\left(a+b\right)}^{p}\le {a}^{p}+{b}^{p}$ for $a,b\ge 0$ and $0\le p\le 1$, together with ${r}^{p}\le r$ since $r\ge 1$.
Obviously, (2.25) contradicts our assumption that $\parallel y\left(t\right)\parallel =r$. Therefore, by Lemma 2.1, it follows that T has a fixed point $y\in \overline{K}$. Hence, the integral equation (2.16) has at least one solution $y\left(t\right)$. The proof is completed. □

## 3 The proof of Theorem 2.1

In this section, we prove Theorem 2.1 by using Lemmas 2.5-2.7 and the intermediate value theorem.

Proof of Theorem 2.1 From the right-hand side of (2.16), we know that its solution depends continuously on the parameter μ. So, we just need to find μ such that $y\left(1\right)=0$, which implies that $u\left(1\right)=\mu$.

We rewrite (2.16) for any given real number μ as follows:
${y}_{\mu }\left(t\right)={\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left({y}_{\mu }\left(s\right)+\mu \right)\right)\phantom{\rule{0.2em}{0ex}}ds,\phantom{\rule{1em}{0ex}}t\in \left[0,1\right].$
(3.1)
From (3.1), define
$L\left(\mu \right):={y}_{\mu }\left(1\right)={\int }_{0}^{1}{G}^{\ast }\left(1,s\right)f\left(s,s\left({y}_{\mu }\left(s\right)+\mu \right)\right)\phantom{\rule{0.2em}{0ex}}ds.$
(3.2)

Obviously, ${y}_{\mu }\left(1\right)$ depends continuously on the parameter μ. Our aim is to find ${\mu }^{\ast }$ such that ${y}_{{\mu }^{\ast }}\left(1\right)=0$; to this end, it suffices to prove that ${lim}_{\mu \to \mathrm{\infty }}L\left(\mu \right)=\mathrm{\infty }$ and ${lim}_{\mu \to -\mathrm{\infty }}L\left(\mu \right)=-\mathrm{\infty }$.

Firstly, we prove that ${lim}_{\mu \to \mathrm{\infty }}L\left(\mu \right)=\mathrm{\infty }$. On the contrary, we suppose that ${\underline{lim}}_{\mu \to \mathrm{\infty }}L\left(\mu \right)<\mathrm{\infty }$. Then there exists a sequence $\left\{{\mu }_{n}\right\}$ with ${lim}_{n\to \mathrm{\infty }}{\mu }_{n}=\mathrm{\infty }$ such that the sequence $\left\{L\left({\mu }_{n}\right)\right\}$ is bounded. Notice that the function $f\left(t,ty\right)$ is continuous with respect to $t\in \left[0,1\right]$ and $y\in R$. So, it is impossible to have
$f\left(t,t\left({y}_{{\mu }_{n}}\left(t\right)+{\mu }_{n}\right)\right)\ge 0,\phantom{\rule{1em}{0ex}}t\in \left[0,1\right],$
(3.3)
as ${\mu }_{n}$ is large enough. Indeed, assume that (3.3) is true. Then by (3.1) and Lemma 2.4 we have
${y}_{{\mu }_{n}}\left(t\right)\ge 0,\phantom{\rule{1em}{0ex}}t\in \left[0,1\right].$
(3.4)
Thus we get that
$\underset{{\mu }_{n}\to \mathrm{\infty }}{lim}\left({y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}\right)=\mathrm{\infty },\phantom{\rule{1em}{0ex}}s\in \left[0,1\right].$
(3.5)
Since we have from (H) that
$\underset{{\mu }_{n}\to \mathrm{\infty }}{lim}f\left(s,s\left({y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}\right)\right)=\mathrm{\infty },\phantom{\rule{1em}{0ex}}s\in \left(0,1\right),$
(3.6)
by (3.2), (3.5) and (3.6), we have
$\begin{array}{rcl}\underset{{\mu }_{n}\to \mathrm{\infty }}{lim}{y}_{{\mu }_{n}}\left(1\right)& =& \underset{{\mu }_{n}\to \mathrm{\infty }}{lim}{\int }_{0}^{1}{G}^{\ast }\left(1,s\right)f\left(s,s\left({y}_{{\mu }_{n}}+{\mu }_{n}\right)\right)\phantom{\rule{0.2em}{0ex}}ds\\ \ge & \underset{{\mu }_{n}\to \mathrm{\infty }}{lim}{\int }_{\frac{1}{4}}^{\frac{3}{4}}{G}^{\ast }\left(1,s\right)f\left(s,s\left({y}_{{\mu }_{n}}+{\mu }_{n}\right)\right)\phantom{\rule{0.2em}{0ex}}ds\\ =& \mathrm{\infty },\end{array}$
(3.7)

which contradicts the boundedness of $\left\{L\left({\mu }_{n}\right)\right\}$.
Now, for large ${\mu }_{n}$, we define
${I}_{n}=\left\{t\in \left[0,1\right]\mid f\left(t,t\left({y}_{{\mu }_{n}}+{\mu }_{n}\right)\right)<0\right\}.$

Then ${I}_{n}$ is not empty.

Secondly, we divide the set ${I}_{n}$ into set ${\stackrel{˜}{I}}_{n}$ and set ${\stackrel{ˆ}{I}}_{n}$ as follows:
$\begin{array}{c}{\stackrel{˜}{I}}_{n}=\left\{t\in {I}_{n}\mid {y}_{{\mu }_{n}}+{\mu }_{n}>0\right\},\hfill \\ {\stackrel{ˆ}{I}}_{n}=\left\{t\in {I}_{n}\mid {y}_{{\mu }_{n}}+{\mu }_{n}\le 0\right\}.\hfill \end{array}$

Obviously, we get that ${\stackrel{˜}{I}}_{n}\cap {\stackrel{ˆ}{I}}_{n}=\mathrm{\varnothing }$ and ${\stackrel{˜}{I}}_{n}\cup {\stackrel{ˆ}{I}}_{n}={I}_{n}$. Moreover, we have from (H) that ${\stackrel{ˆ}{I}}_{n}$ is not empty for ${\mu }_{n}$ large enough.

From (H) again, the function $f\left(t,tu\right)$ is bounded below by a constant for $t\in \left[0,1\right]$ and $u\in \left[0,\mathrm{\infty }\right)$. Thus, there exists a constant $M<0$, independent of t and ${\mu }_{n}$, such that
$f\left(t,t\left({y}_{{\mu }_{n}}\left(t\right)+{\mu }_{n}\right)\right)\ge M,\phantom{\rule{1em}{0ex}}t\in {\stackrel{˜}{I}}_{n}.$
(3.8)
Let
$j\left({\mu }_{n}\right)=\underset{t\in {I}_{n}}{min}{y}_{{\mu }_{n}}\left(t\right).$
From the definitions of ${\stackrel{˜}{I}}_{n}$ and ${\stackrel{ˆ}{I}}_{n}$, we have
$j\left({\mu }_{n}\right)=\underset{t\in {\stackrel{ˆ}{I}}_{n}}{min}{y}_{{\mu }_{n}}\left(t\right)=-{\parallel {y}_{{\mu }_{n}}\left(t\right)\parallel }_{{\stackrel{ˆ}{I}}_{n}},$
and it follows that $j\left({\mu }_{n}\right)\to -\mathrm{\infty }$ as ${\mu }_{n}\to \mathrm{\infty }$ (since if $j\left({\mu }_{n}\right)$ is bounded below by a constant as ${\mu }_{n}\to \mathrm{\infty }$, then (3.7) holds). Therefore, we can choose ${\mu }_{{n}_{1}}$ large enough such that
$j\left({\mu }_{n}\right)<\frac{M{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds}{1-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds}$
(3.9)
for $n>{n}_{1}$. From (H), (3.1), (3.8) and (3.9) and the definitions of ${\stackrel{˜}{I}}_{n}$ and ${\stackrel{ˆ}{I}}_{n}$, for any ${\mu }_{n}>{\mu }_{{n}_{1}}$, we have
$\begin{array}{rcl}{y}_{{\mu }_{n}}\left(t\right)& =& {\int }_{0}^{1}{G}^{\ast }\left(t,s\right)f\left(s,s\left({y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}\right)\right)\phantom{\rule{0.2em}{0ex}}ds\\ & \ge & {\int }_{{I}_{n}}{G}^{\ast }\left(s,s\right)f\left(s,s\left({y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}\right)\right)\phantom{\rule{0.2em}{0ex}}ds\\ & \ge & {\int }_{{\stackrel{˜}{I}}_{n}}{G}^{\ast }\left(s,s\right)f\left(s,s\left({y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}\right)\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{{\stackrel{ˆ}{I}}_{n}}{G}^{\ast }\left(s,s\right)\left(-m\left(s\right)-n\left(s\right){|{y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}|}^{p}\right)\phantom{\rule{0.2em}{0ex}}ds\\ & \ge & M{\int }_{{\stackrel{˜}{I}}_{n}}{G}^{\ast }\left(s,s\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{{\stackrel{ˆ}{I}}_{n}}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{{\stackrel{ˆ}{I}}_{n}}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{\parallel {y}_{{\mu }_{n}}\left(s\right)+{\mu }_{n}\parallel }_{{\stackrel{ˆ}{I}}_{n}}^{p},\end{array}$
from which it follows that
$\begin{array}{rcl}{y}_{{\mu }_{n}}\left(t\right)& \ge & M{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\\ -{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds{\parallel {y}_{{\mu }_{n}}\left(t\right)\parallel }_{{I}_{n}}^{p}\\ \ge & M{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\\ +{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}dsj\left({\mu }_{n}\right),\phantom{\rule{1em}{0ex}}t\in {I}_{n},\end{array}$
which implies that
$j\left({\mu }_{n}\right)\ge \frac{M{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)\phantom{\rule{0.2em}{0ex}}ds-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)m\left(s\right)\phantom{\rule{0.2em}{0ex}}ds}{1-{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds}.$

This contradicts (3.9). Thus, we have proved that ${lim}_{\mu \to \mathrm{\infty }}L\left(\mu \right)=\mathrm{\infty }$. By a similar method, we can also prove that ${lim}_{\mu \to -\mathrm{\infty }}L\left(\mu \right)=-\mathrm{\infty }$.

Notice that $L\left(\mu \right)$ is continuous with respect to $\mu \in \left(-\mathrm{\infty },\mathrm{\infty }\right)$. It follows from the intermediate value theorem that there exists ${\mu }^{\ast }\in \left(-\mathrm{\infty },\mathrm{\infty }\right)$ such that $L\left({\mu }^{\ast }\right)=0$, that is, $y\left(1\right)={y}_{{\mu }^{\ast }}\left(1\right)=0$, so the second boundary condition in (1.2) is satisfied. The proof is completed. □
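The intermediate value argument can be imitated numerically on the example of Section 4 ($\alpha =8$, $\eta =\frac{1}{2}$, $f\left(t,u\right)={t}^{2}+\frac{1}{2}u$). The sketch below (our own discretization; the grid size and iteration counts are arbitrary choices) solves (3.1) by Picard iteration, which converges here because ${\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\le \frac{1}{9}<1$ makes the integral operator a contraction, and then locates ${\mu }^{\ast }$ with $L\left({\mu }^{\ast }\right)=0$ by bisection:

```python
# Numerical realization of the intermediate-value argument for the Section 4
# example (alpha = 8, eta = 1/2, f(t,u) = t^2 + u/2).
alpha, eta = 8.0, 0.5
N = 80
grid = [i / N for i in range(N + 1)]
wt = [(0.5 if k in (0, N) else 1.0) / N for k in range(N + 1)]  # trapezoid weights

def Gstar(t, s):
    # (2.7); the s >= t branch also covers the t -> 0 limit (2.9)
    g = alpha * (1 - s) - (alpha * (eta - s)**2 if s <= eta else 0.0)
    if t > s:
        g -= (alpha - 2) * (t - s) / t
    return g / (alpha - 2)

K = [[wt[k] * Gstar(t, grid[k]) for k in range(N + 1)] for t in grid]

def L(mu, iters=25):
    """L(mu) = y_mu(1), where y_mu solves the discretized (3.1) by Picard iteration."""
    y = [0.0] * (N + 1)
    for _ in range(iters):  # contraction: int G*(s,s) n(s) ds <= 1/9 < 1
        fy = [grid[k]**2 + 0.5 * grid[k] * (y[k] + mu) for k in range(N + 1)]
        y = [sum(a * b for a, b in zip(row, fy)) for row in K]
    return y[N]

Lneg, Lpos = L(-100.0), L(100.0)
assert Lneg < 0 < Lpos            # the two limits established in the proof

a, b = -100.0, 100.0              # bisection = the intermediate value theorem
for _ in range(40):
    m = 0.5 * (a + b)
    a, b = (m, b) if L(m) < 0 else (a, m)
mu_star = 0.5 * (a + b)
assert abs(L(mu_star)) < 1e-6
```

Since this particular f is linear in u, $L\left(\mu \right)$ is affine and the root is unique; the bisection is kept only to mirror the proof.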

## 4 Example

In this section, we give an example to illustrate our main result.

Example Consider the boundary value problem
${u}^{″}+{t}^{2}+\frac{1}{2}u\left(t\right)=0,$
(4.1)
$u\left(0\right)=0,\phantom{\rule{2em}{0ex}}u\left(1\right)=8{\int }_{0}^{\frac{1}{2}}u\left(s\right)\phantom{\rule{0.2em}{0ex}}ds,$
(4.2)
where
$\alpha =8,\phantom{\rule{2em}{0ex}}\eta =\frac{1}{2},\phantom{\rule{2em}{0ex}}f\left(t,u\right)={t}^{2}+\frac{1}{2}u.$
So, we have
$\frac{1}{2}\alpha {\eta }^{2}=1$
and
$f\left(t,tu\right)={t}^{2}+\frac{t}{2}u.$
Now we take
$m\left(t\right)={t}^{2},\phantom{\rule{2em}{0ex}}n\left(t\right)=\frac{t}{2},\phantom{\rule{2em}{0ex}}p=1.$
It is easy to check that
$\underset{u\to ±\mathrm{\infty }}{lim}f\left(t,tu\right)=±\mathrm{\infty },\phantom{\rule{1em}{0ex}}t\in \left(0,1\right)$
and
$\begin{array}{rcl}{\int }_{0}^{1}{G}^{\ast }\left(s,s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds& \le & \frac{1}{\alpha -2}{\int }_{0}^{1}\alpha \left(1-s\right)n\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\\ =& \frac{1}{6}{\int }_{0}^{1}8\left(1-s\right)\frac{s}{2}\phantom{\rule{0.2em}{0ex}}ds\\ =& \frac{1}{9}<1.\end{array}$

Thus the conditions of Theorem 2.1 are satisfied. Therefore problem (4.1)-(4.2) has at least one nontrivial solution.
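As an independent cross-check that does not use the Green function, the sketch below solves (4.1)-(4.2) by linear shooting: writing $u=w+Cv$ with ${v}^{″}=-\frac{1}{2}v$, $v\left(0\right)=0$, ${v}^{\prime }\left(0\right)=1$ and ${w}^{″}=-{t}^{2}-\frac{1}{2}w$, $w\left(0\right)=0$, ${w}^{\prime }\left(0\right)=0$, the constant C is fixed by the integral boundary condition (all names and step sizes are our own choices). This works here because the shooting determinant $v\left(1\right)-8{\int }_{0}^{1/2}v\phantom{\rule{0.2em}{0ex}}ds$ is nonzero for the full linear equation:

```python
# Independent check of the example: linear shooting for (4.1)-(4.2).
N = 2000
h = 1.0 / N

def rk4(rhs, y0, yp0):
    """Integrate y'' = rhs(t, y) on [0,1] with RK4; return y at the grid points."""
    t, y, v = 0.0, y0, yp0
    out = [y]
    for _ in range(N):
        k1y, k1v = v, rhs(t, y)
        k2y, k2v = v + 0.5*h*k1v, rhs(t + 0.5*h, y + 0.5*h*k1y)
        k3y, k3v = v + 0.5*h*k2v, rhs(t + 0.5*h, y + 0.5*h*k2y)
        k4y, k4v = v + h*k3v, rhs(t + h, y + h*k3y)
        y += h*(k1y + 2*k2y + 2*k3y + k4y)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += h
        out.append(y)
    return out

v = rk4(lambda t, y: -0.5*y, 0.0, 1.0)           # homogeneous solution
w = rk4(lambda t, y: -t*t - 0.5*y, 0.0, 0.0)     # particular solution

def integral_half(u):
    """Trapezoid value of int_0^{1/2} u(s) ds on the uniform grid."""
    m = N // 2
    return h*(0.5*u[0] + sum(u[1:m]) + 0.5*u[m])

# choose C so that u(1) = 8 * int_0^{1/2} u(s) ds
C = (8*integral_half(w) - w[N]) / (v[N] - 8*integral_half(v))
u = [w[i] + C*v[i] for i in range(N + 1)]

assert abs(u[0]) < 1e-12                          # u(0) = 0
assert abs(u[N] - 8*integral_half(u)) < 1e-9      # integral boundary condition
assert max(abs(x) for x in u) > 1e-3              # the solution is nontrivial
```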

## Declarations

### Acknowledgements

The work was partially supported by the Natural Science Foundation of Hunan Province (No. 13JJ3074), the Foundation of Science and Technology of Hengyang city (No. J1) and the Scientific Research Foundation for Returned Scholars of University of South China (No. 2012XQD43).

## Authors’ Affiliations

(1)
School of Nuclear Science and Technology, School of Mathematics and Physics, University of South China, Hengyang, 421001, P.R. China
