The eigenvalue characterization for the constant sign Green’s functions of $(k,n-k)$ problems
Boundary Value Problems, volume 2016, Article number: 44 (2016)
Abstract
This paper is devoted to the study of the sign of the Green’s function related to a general linear nth-order operator $T_{n}[M]$, depending on a real parameter M, coupled with the $(k,n-k)$ boundary value conditions.
If the operator $T_{n}[\bar{M}]$ is disconjugate for a given M̄, we describe the interval of values of the real parameter M for which the Green’s function has constant sign.
One of the extremes of the interval is given by the first eigenvalue of the operator $T_{n}[\bar{M}]$ satisfying the $(k,n-k)$ conditions.
The other extreme is related to the minimum (maximum) of the first eigenvalues of the $(k-1,n-k+1)$ and $(k+1,n-k-1)$ problems.
Moreover, if $n-k$ is even (odd), the Green’s function cannot be nonpositive (nonnegative).
To illustrate the applicability of the obtained results, we calculate the parameter intervals of constant sign Green’s functions for particular operators. Our method avoids the need to calculate the expression of the Green’s function.
We finish the paper by presenting a particular equation which shows that the disconjugacy hypothesis on the operator $T_{n}[\bar{M}]$ for a given M̄ cannot be removed.
Introduction
It is very well known that the validity of the method of lower and upper solutions, coupled with the monotone iterative techniques [1, 2], is equivalent to the constant sign of the Green’s function related to the linear part of the studied problem [3, 4]. Moreover, by means of the celebrated Krasnosel’skiĭ contraction/expansion fixed point theorem [5], nonexistence, existence, and multiplicity results are derived from the construction of suitable cones on Banach spaces. Such a construction uses adequate properties of the Green’s function, one of them being its constant sign [6–9]. Recently, the combination of the two previous methods has proved to be a useful tool to ensure the existence of solutions [10–14].
Bearing in mind the power of this constant sign property, we will describe the interval of parameters for which the Green’s function related to the general linear nth-order equation
$t \in I \equiv[a,b]$, coupled with the so-called $(k,n-k)$ two-point boundary value conditions:
$1 \le k\le n-1$, has constant sign on its square of definition $I \times I$.
The main hypothesis consists in assuming that there is a real parameter M̄ for which the operator $T_{n}[\bar{M}]$ is disconjugate on I.
An exhaustive study of the general theory and the fundamental properties of disconjugacy is compiled in the classical book of Coppel [15]. Different sufficient criteria to ensure the disconjugacy character of the linear operator $T_{n}[0]$ have been developed in the literature; we refer to [16, 17]. Sufficient conditions for particular cases have been obtained in [18–20] and, more recently, in [21]. We mention that the operator $u^{(n)}(t)+a_{1}(t) u^{(n-1)}(t)$ is always disconjugate on I (see [15] for details); in particular, the results presented here are valid for the operator $u^{(n)}(t)+M u(t)$.
As has been shown in [15], the disconjugacy character implies the constant sign of the Green’s function $g_{M}$ related to problem (1)-(2). However, as we will see in the paper, the converse is not true in general: there are real parameters M for which the Green’s function has constant sign but equation (1) is not disconjugate. In other words, disconjugacy is only a sufficient condition for the constant sign of the Green’s function related to problem (1)-(2).
In fact, from the disconjugacy character of the operator $T_{n}[\bar{M}]$ on I, it is shown in [15] that the Green’s function $g_{M}$ satisfies a suitable condition, stronger than its constant sign. Such a condition implies the one introduced in Section 1.8 of [3]. So, following the results given in that reference, we conclude that the set of parameters M for which $g_{M}$ has constant sign is an interval $H_{T}$. Moreover, if $n-k$ is even, then the maximum of $H_{T}$ is the opposite of the biggest negative eigenvalue of problem (1)-(2); when $n-k$ is odd, the minimum of $H_{T}$ is the opposite of the least positive eigenvalue of such a problem.
Thus, the difficulty remains in the characterization of the other extreme of the interval $H_{T}$. In this case, as is shown in Section 1.8 of [3], such an extreme is not an eigenvalue of the considered problem, so attaining its exact value is not immediate. In practical situations it is necessary to obtain the expression of the Green’s function, which is, in general, a difficult matter. We point out that this problem is not restricted to the $(k,n-k)$ boundary conditions; the difficulty in obtaining the non-eigenvalue extreme remains for any kind of linear conditions [22, 23]. In [24], provided the operator $T_{n}[M]$ has constant coefficients, a computer algorithm has been developed that calculates the exact expression of a Green’s function coupled with two-point boundary value conditions. However, such an expression is often too complicated to manage, and describing the interval $H_{T}$ is really very difficult in practical situations. In fact, there is no direct method of construction for nonconstant coefficients.
We mention that the disconjugacy theory has been used in [25] to obtain the values for which the Green’s functions related to the third-order operators $u'''+M u^{(i)}$, $i=0,1,2$, coupled with conditions $(1,2)$ and $(2,1)$, have constant sign. A similar procedure has been performed in [26] for the fourth-order operator $u^{(4)}+M u$, coupled with conditions $(2,2)$, and, more recently, in [27] with conditions $(1,3)$ and $(3,1)$. In all these situations the interval of disconjugacy is obtained and then, by means of the expression of the Green’s function, it is proved that such an interval is optimal. As we have mentioned above, this coincidence holds only in particular cases such as the ones treated in these papers; in general, the intervals of disconjugacy and of constant sign Green’s functions do not coincide for the nth-order operator $T_{n}[M]$.
For this reason, in this work we give a general characterization of the regular extreme of the constant sign interval $H_{T}$ by means of spectral theory. We will show that it is an eigenvalue of the same operator $T_{n}[M]$, but related to different two-point boundary value conditions. In fact, if $n-k$ is even, it will be the minimum of the two least positive eigenvalues related to conditions $(k-1,n-k+1)$ and $(k+1,n-k-1)$; it will be the maximum of the two biggest negative eigenvalues of such problems when $n-k$ is odd. So we obtain a characterization for the general operator $T_{n}[M]$ and avoid the need to calculate the Green’s function and to study the dependence of its sign on the real parameter M.
We note that if the operator $T_{n}[M]$ has constant coefficients, then to obtain the corresponding eigenvalues we need only calculate the determinant of the matrix of coefficients of a linear homogeneous algebraic system. Numerical methods are also valid for the nonconstant case.
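To make this determinant approach concrete, the following is our own numerical sketch (not part of the paper): for the constant-coefficient operator $T_{4}[0]u=u^{(4)}$ on the assumed interval $I=[0,1]$ with the $(2,2)$ conditions $u(0)=u'(0)=u(1)=u'(1)=0$, we impose the boundary conditions on a basis of exponential/trigonometric solutions and locate the first zero of the resulting $4\times4$ determinant by bisection.

```python
import numpy as np

def char_det(lam):
    """For u'''' = lam*u (lam > 0) the characteristic roots are +-r and
    +-i*r with r = lam**0.25, giving the real basis
    exp(rt), exp(-rt), cos(rt), sin(rt).  Imposing the (2,2) conditions
    u(0)=u'(0)=u(1)=u'(1)=0 on a combination of these yields a linear
    homogeneous algebraic system; eigenvalues are the zeros of its
    determinant."""
    r = lam ** 0.25
    er, emr = np.exp(r), np.exp(-r)
    A = np.array([
        [1.0,    1.0,     1.0,          0.0],          # u(0)  = 0
        [r,     -r,       0.0,          r],            # u'(0) = 0
        [er,     emr,     np.cos(r),    np.sin(r)],    # u(1)  = 0
        [r*er,  -r*emr,  -r*np.sin(r),  r*np.cos(r)],  # u'(1) = 0
    ])
    return np.linalg.det(A)

# Bisection on the first sign change of the determinant in (1, 600).
lo, hi = 1.0, 600.0
assert char_det(lo) * char_det(hi) < 0
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    if char_det(lo) * char_det(mid) < 0:
        hi = mid
    else:
        lo = mid
lam1 = 0.5 * (lo + hi)
```

For this classical clamped problem the determinant reduces to $\cos r\cosh r=1$, whose first positive root is $r\approx4.7300$, so the computed least positive eigenvalue is $\lambda_{1}=r^{4}\approx500.56$.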
It is important to mention that, as a consequence of the obtained results, denoting by $g_{M}$ the Green’s function related to problem (1)-(2), we conclude that $(-1)^{n-k} g_{M}(t,s)$ cannot be negative on $I \times I$ for all $M \in\mathbb{R}$.
The paper is organized as follows. In the preliminary Section 2 we introduce the fundamental concepts that are needed in the development of the paper. Section 3 is devoted to the proof of the main result, in which the regular extreme is obtained via spectral theory. In Section 4 some particular cases are considered to show the applicability of the obtained results. The last section introduces an example showing that the disconjugacy hypothesis in the main result cannot be removed.
Preliminaries
In this section, for convenience of the reader, we introduce the fundamental tools in the theory of disconjugacy and Green’s functions that will be used in the development of further sections.
Definition 2.1
Let $a_{k}\in C^{n-k}(I)$ for $k=1,\ldots,n$. The nth-order linear differential equation (1) is said to be disconjugate on an interval I if every nontrivial solution has fewer than n zeros on I, multiple zeros being counted according to their multiplicity.
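For example (a standard second-order fact, added here for illustration and not taken from the paper), for $T_{2}[M]u = u'' + Mu$ the definition can be checked directly:

```latex
% Disconjugacy of u'' + M u = 0 on I = [a,b]: every nontrivial
% solution must have fewer than 2 zeros on I.
%
%  * If M <= 0, a nontrivial solution vanishes at most once, so the
%    equation is disconjugate on every compact interval.
%  * If M > 0, every nontrivial solution has the form
%        u(t) = A \sin(\sqrt{M}\,(t - t_0)),
%    with consecutive zeros at distance \pi/\sqrt{M}.
\[
  u'' + M u = 0 \ \text{is disconjugate on } [a,b]
  \iff
  M < \left( \frac{\pi}{b-a} \right)^{2}.
\]
```

At the borderline value $M=(\pi/(b-a))^{2}$ the solution $\sin(\sqrt{M}(t-a))$ vanishes at both endpoints, so disconjugacy is lost.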
Definition 2.2
The functions $u_{1},\ldots, u_{n} \in C^{n}(I)$ are said to form a Markov system on the interval I if the n Wronskians
are positive throughout I.
The following result about this concept is in Chapter 3 of [15].
Theorem 2.3
The linear differential equation (1) has a Markov fundamental system of solutions on the compact interval I if, and only if, it is disconjugate on I.
In order to introduce the concept of the Green’s function related to the nth-order scalar problem (1)-(2), we consider the following equivalent first-order vectorial problem:
with $x(t) \in\mathbb{R}^{n}$, $A(t), B, C\in\mathcal{M}_{n\times n}$, defined by
Here $I_{j}$, $j=1, \ldots,n-1$, is the $j \times j$ identity matrix.
Definition 2.4
We say that G is a Green’s function for problem (4) if it satisfies the following properties:

(G1)
$G\equiv(G_{i,j})_{i,j\in\{1,\ldots,n\} }\colon (I\times I)\backslash \lbrace(t,t) , t\in I \rbrace \rightarrow\mathcal{M}_{n\times n}$.

(G2)
G is a $C^{1}$ function on the triangles $\lbrace(t,s)\in\mathbb{R}^{2} , a\leq s< t\leq b \rbrace$, and $\lbrace(t,s)\in\mathbb{R}^{2} , a\leq t < s\leq b \rbrace$.

(G3)
For all $i\neq j$ the scalar functions $G_{i,j}$ have a continuous extension to $I\times I$.

(G4)
For all $s\in(a,b)$, the following equality holds:
$$\frac{\partial}{\partial t} G(t,s)=A(t) G(t,s) \quad \text{for all } t\in I\backslash \lbrace s \rbrace. $$ 
(G5)
For all $s\in(a,b)$ and $i\in \lbrace 1,\ldots , n \rbrace$, the following equalities are fulfilled:
$$\lim_{t\rightarrow s^{+}}G_{i,i}(t,s)=\lim_{t\rightarrow s^{-}}G_{i,i}(s,t)=1+ \lim_{t\rightarrow s^{+}}G_{i,i}(s,t)=1+\lim_{t\rightarrow s^{-}}G_{i,i}(t,s) . $$ 
(G6)
For all $s\in(a,b)$, the function $t\rightarrow G(t,s)$ satisfies the boundary conditions
$$B G(a,s)+C G(b,s)=0 . $$
Remark 2.5
In the previous definition, item (G5) can be modified to obtain the characterization of the lateral limits for $s=a$ and $s=b$ as follows:
It is very well known that the Green’s function related to this problem obeys the following expression ([3], Section 1.4):
where $g_{M}(t,s)$ is the scalar Green’s function related to problem (1)-(2).
Using Definition 2.4 we can deduce the properties fulfilled by $g_{M}(t,s)$. In particular, $g_{M}\in C^{n-2}(I \times I)$ and it is a $C^{n}$ function on the triangles $a\le s < t \le b$ and $a\le t < s \le b$. Moreover, it satisfies, as a function of t, the two-point boundary value conditions (2) and solves equation (1) whenever $t \neq s$.
We also mention a result which appears in Chapter 3, Section 6 of [15] and connects disconjugacy with the sign of the Green’s function related to problem (1)-(2).
Lemma 2.6
If the linear differential equation (1) is disconjugate and $g_{M}(t,s)$ is the Green’s function related to problem (1)-(2), then
where $p(t)=(t-a)^{k} (t-b)^{n-k}$.
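As an elementary illustration of this sign structure (our own example, assuming $I=[0,1]$): for $n=2$, $k=1$, the operator $T_{2}[0]u=u''$ with conditions $u(0)=u(1)=0$ has the classical Green’s function

```latex
\[
g_{0}(t,s) =
\begin{cases}
  s\,(t-1), & 0 \le s \le t \le 1,\\[2pt]
  t\,(s-1), & 0 \le t \le s \le 1,
\end{cases}
\qquad
p(t) = t\,(t-1).
\]
% g_0 <= 0 on the square, consistent with n-k = 1 odd, while for each
% fixed s in (0,1) the quotient
%   g_0(t,s)/p(t) = s/t          (s <= t),
%   g_0(t,s)/p(t) = (s-1)/(t-1)  (t <= s),
% extends to a strictly positive continuous function of t on [0,1].
```

Both branches of the quotient tend to 1 as $t\to s$, so the extension is continuous, matching the behavior used later for $g_{\bar{M}}(t,s)/p(t)$.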
Remark 2.7
We mention that in previous lemma, by means of the expression
we are denoting
and
Moreover, due to the regularity of the function $g_{M}$, we see that there is a positive constant K such that the following properties hold for all $s \in(a,b)$:
and
We note that such properties imply the following inequalities:
The adjoint of the operator $T_{n}[M]$ is given by the following expression, see for details Section 1.4 of [3] or Chapter 3, Section 5 of [15]:
and its domain of definition is
In our case, because of the boundary conditions (2), we can express the domain of the operator $T_{n}[M]$, $D(T_{n}[M])$, as
so we can replace equation (8) with
In order to simplify the previous expression, we choose a function $u\in C^{n}(I)$ satisfying
Realizing that $a_{0}=1$, we conclude that every function $v\in D(T_{n}^{*}[M])$ must satisfy $v(b)=0$.
Moreover, if we now choose a function in $C^{n}(I)$ that satisfies
we conclude that any function $v\in D(T_{n}^{*}[M])$ has to satisfy
Since $a_{1}\in C^{n-1}(I)$ and $v(b)=0$, we conclude that $v'(b)=0$.
Repeating this process we conclude that the domain of the adjoint operator is given by
The next result appears in Chapter 3, Theorem 9 of [15].
Theorem 2.8
Equation (1) is disconjugate on an interval I if, and only if, the adjoint equation, $T_{n}^{*}[M] y(t)=0$ is disconjugate on I.
We denote by $g_{M}^{*}(t,s)$ the Green’s function of the adjoint operator $T_{n}^{*}[M]$.
In Section 1.4 of [3] the following relationship is proved:
Defining now the following operator:
we deduce, from the previous expression, that
Obviously, Theorem 2.8 remains true for the operator $\widehat {T}_{n}[(-1)^{n} M]$.
Definition 2.9
The operator $T_{n}[M]$ is said to be inverse positive (inverse negative) on $X_{k}$ if every function $u \in X_{k}$ such that $T_{n}[M] u \ge0$ in I satisfies $u\geq0$ ($u\leq0$) on I.
The next results are proved in Section 1.6 and Section 1.8 of [3].
Theorem 2.10
The operator $T_{n}[M]$ is inverse positive (inverse negative) on $X_{k}$ if, and only if, the Green’s function related to problem (1)-(2) is nonnegative (nonpositive) on its square of definition.
Theorem 2.11
Let $M_{1}, M_{2}\in\mathbb{R}$, and suppose that operators $T_{n}[M_{j}]$, $j=1,2$, are invertible in $X_{k}$. Let $g_{j}$, $j=1,2$, be Green’s functions related to operators $T_{n}[M_{j}]$, and suppose that both functions have the same constant sign on $I \times I$. Then, if $M_{1}< M_{2}$, $g_{2}\leq g_{1}$ on $I \times I$.
In the sequel, we introduce two conditions on $g_{M}(t,s)$, which will be used in the paper.
 ($\mathrm{P}_{g}$):

Suppose that there is a continuous function $\phi(t)>0$ for all $t\in(a,b)$ and $k_{1}, k_{2}\in\mathcal{L}^{1}(I)$, such that $0< k_{1}(s)< k_{2}(s)$ for a.e. $s\in I$, satisfying
$$\phi(t) k_{1}(s)\leq g_{M}(t,s)\leq\phi(t) k_{2}(s) \quad \text{for a.e. } (t,s)\in I \times I . $$
 ($\mathrm{N}_{g}$):

Suppose that there is a continuous function $\phi(t)>0$ for all $t\in(a,b)$ and $k_{1}, k_{2}\in\mathcal{L}^{1}(I)$, such that $k_{1}(s)< k_{2}(s)<0$ for a.e. $s\in I$, satisfying
$$\phi(t) k_{1}(s)\leq g_{M}(t,s)\leq\phi(t) k_{2}(s) \quad \text{for a.e. }(t,s)\in I \times I . $$
Finally, we introduce the following sets, which particularize $H_{T}$:
Note that, using Theorem 2.11, we can affirm that the two previous sets are real intervals.
The next results describe one of the extremes of the two previous intervals (see Theorems 1.8.31 and 1.8.23 of [3]).
Theorem 2.12
Let $\bar{M}\in\mathbb{R}$ be fixed. If the operator $T_{n}[\bar{M}]$ is invertible in $X_{k}$ and its related Green’s function satisfies condition ($\mathrm{P}_{g}$), then the following statements hold:

There exists $\lambda_{1}>0$, the least eigenvalue in absolute value of the operator $T_{n}[\bar{M}]$ in $X_{k}$. Moreover, there exists a nontrivial constant sign eigenfunction corresponding to the eigenvalue $\lambda_{1}$.

The Green’s function related to the operator $T_{n}[M]$ is nonnegative on $I\times I$ for all $M\in(\bar{M}-\lambda_{1},\bar{M}]$.

The Green’s function related to the operator $T_{n}[M]$ cannot be nonnegative on $I\times I$ for any $M<\bar{M}-\lambda_{1}$.

If there is $M\in\mathbb{R}$ for which the Green’s function related to the operator $T_{n}[M]$ is nonpositive on $I\times I$, then $M<\bar{M}-\lambda_{1}$.
Theorem 2.13
Let $\bar{M}\in\mathbb{R}$ be fixed. If the operator $T_{n}[\bar{M}]$ is invertible in $X_{k}$ and its related Green’s function satisfies condition ($\mathrm{N}_{g}$), then the following statements hold:

There exists $\lambda_{2}<0$, the least eigenvalue in absolute value of the operator $T_{n}[\bar{M}]$ in $X_{k}$. Moreover, there exists a nontrivial constant sign eigenfunction corresponding to the eigenvalue $\lambda_{2}$.

The Green’s function related to the operator $T_{n}[M]$ is nonpositive on $I\times I$ for all $M\in[\bar{M},\bar{M}-\lambda_{2})$.

The Green’s function related to the operator $T_{n}[M]$ cannot be nonpositive on $I\times I$ for any $M>\bar{M}-\lambda_{2}$.

If there is $M\in\mathbb{R}$ for which the Green’s function related to the operator $T_{n}[M]$ is nonnegative on $I\times I$, then $M>\bar{M}-\lambda_{2}$.
Main result
This section is devoted to the proof of the eigenvalue characterization of the sets $P_{T}$ and $N_{T}$. Such a result is stated in the following theorem.
Theorem 3.1
Let $\bar{M}\in\mathbb{R}$ be such that the equation $T_{n}[\bar{M}] u(t)=0$ is disconjugate on I. Then the following properties are fulfilled:
If $n-k$ is even and $2\leq k \le n-1$, then the operator $T_{n}[M]$ is inverse positive on $X_{k}$ if, and only if, $M\in(\bar{M}-\lambda _{1},\bar {M}-\lambda_{2}]$, where:

$\lambda_{1}>0$ is the least positive eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{k}$.

$\lambda_{2}<0$ is the maximum of:

$\lambda_{2}'<0$, the biggest negative eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{k-1}$.

$\lambda_{2}''<0$, the biggest negative eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{k+1}$.

If $k=1$ and n is odd, then the operator $T_{n}[M]$ is inverse positive on $X_{1}$ if, and only if, $M\in(\bar{M}-\lambda_{1},\bar {M}-\lambda_{2}]$, where:

$\lambda_{1}>0$ is the least positive eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{1}$.

$\lambda_{2}<0$ is the biggest negative eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{2}$.
If $n-k$ is odd and $2\leq k\leq n-2$, then the operator $T_{n}[M]$ is inverse negative on $X_{k}$ if, and only if, $M\in[\bar{M}-\lambda _{2},\bar {M}-\lambda_{1})$, where:

$\lambda_{1}<0$ is the biggest negative eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{k}$.

$\lambda_{2}>0$ is the minimum of:

$\lambda_{2}'>0$, the least positive eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{k-1}$.

$\lambda_{2}''>0$, the least positive eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{k+1}$.

If $k=1$ and $n>2$ is even, then the operator $T_{n}[M]$ is inverse negative on $X_{1}$ if, and only if, $M\in[\bar{M}-\lambda_{2},\bar {M}-\lambda_{1})$, where:

$\lambda_{1}<0$ is the biggest negative eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{1}$.

$\lambda_{2}>0$ is the least positive eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{2}$.
If $k=n-1$ and $n>2$, then the operator $T_{n}[M]$ is inverse negative on $X_{n-1}$ if, and only if, $M\in[\bar{M}-\lambda_{2},\bar {M}-\lambda _{1})$, where:

$\lambda_{1}<0$ is the biggest negative eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{n-1}$.

$\lambda_{2}>0$ is the least positive eigenvalue of the operator $T_{n}[\bar{M}]$ in $X_{n-2}$.
If $n=2$, then the operator $T_{2}[M]$ is inverse negative on $X_{1}$ if, and only if, $M\in(-\infty,\bar{M}-\lambda_{1})$, where:

$\lambda_{1}<0$ is the biggest negative eigenvalue of the operator $T_{2}[\bar{M}]$ in $X_{1}$.
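The first case of the theorem can be illustrated numerically. The following is our own shooting-method sketch (not part of the paper), assuming $T_{4}[M]u=u^{(4)}+Mu$ on $I=[0,1]$ with $k=2$, for which $\bar{M}=0$ is a valid choice since $u^{(4)}$ is disconjugate: we approximate $\lambda_{1}$ (least positive eigenvalue in $X_{2}$) and $\lambda_{2}',\lambda_{2}''$ (biggest negative eigenvalues in $X_{1}$ and $X_{3}$), so that the predicted inverse positivity interval is $(-\lambda_{1},-\lambda_{2}]$.

```python
import numpy as np

def det_bc(lam, k, n_steps=200):
    """Shooting determinant for u'''' = lam*u on [0,1] with (k,4-k)
    conditions u(0)=...=u^{(k-1)}(0)=0, u(1)=...=u^{(3-k)}(1)=0.
    The 4-k basis solutions with canonical initial data e_{k+1},...,e_4
    already satisfy the k conditions at t=0; lam is an eigenvalue exactly
    when the matrix of their first 4-k derivatives at t=1 is singular."""
    def f(y):
        return np.array([y[1], y[2], y[3], lam * y[0]])  # y = (u,u',u'',u''')
    h = 1.0 / n_steps
    rows = []
    for j in range(k, 4):
        y = np.zeros(4)
        y[j] = 1.0
        for _ in range(n_steps):                         # classical RK4
            k1 = f(y); k2 = f(y + 0.5 * h * k1)
            k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
            y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        rows.append(y[:4 - k])                           # u(1),...,u^{(3-k)}(1)
    return np.linalg.det(np.array(rows))

def nearest_eigenvalue(k, direction, step=50.0, tol=1e-6, bound=2e4):
    """From lam = 0, walk in the given direction (+1/-1) until the shooting
    determinant changes sign, then refine the eigenvalue by bisection."""
    a, fa = 0.0, det_bc(0.0, k)
    b = direction * step
    while det_bc(b, k) * fa > 0:
        a, b = b, b + direction * step
        if abs(b) > bound:
            raise RuntimeError("no sign change found within the bound")
    fa = det_bc(a, k)
    while abs(b - a) > tol:
        m = 0.5 * (a + b)
        if det_bc(m, k) * fa > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

lam1 = nearest_eigenvalue(2, +1)    # least positive eigenvalue in X_2
lam2p = nearest_eigenvalue(1, -1)   # biggest negative eigenvalue in X_1
lam2pp = nearest_eigenvalue(3, -1)  # biggest negative eigenvalue in X_3
lam2 = max(lam2p, lam2pp)
```

Here $\lambda_{1}\approx500.56$ is the classical clamped-beam eigenvalue ($\mu^{4}$ with $\cos\mu\cosh\mu=1$), and, since $u^{(4)}$ on $[0,1]$ is invariant under the reflection $t\mapsto1-t$, which exchanges the $(1,3)$ and $(3,1)$ conditions, the computed values $\lambda_{2}'$ and $\lambda_{2}''$ should coincide.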
In order to prove this result, we divide the proof into several subsections.
Decomposition of the operator $T_{n}[\bar{M}]$
We are interested in writing the operator $T_{n}[\bar{M}]$ as a composition of suitable operators of order $h \le n$. Such an expression allows us to control the values of such operators at the extremes a and b of the interval.
We recall the following result proved in Chapter 3 of [15].
Theorem 3.2
The linear differential equation (1) has a Markov system of solutions if, and only if, the operator $T_{n}[M]$ has a representation
where $v_{k}>0$ on I and $v_{k}\in C^{n-k+1}(I)$ for $k=1,\ldots,n$.
It is obvious that for any real parameter M, denoting $\lambda =M-\bar {M}$, we can rewrite the operator $T_{n}[M]$ as follows:
If we assume that the equation $T_{n}[\bar{M}] u(t)=0$ is disconjugate on I, because of Theorems 2.3 and 3.2, we can express $T_{n}[\bar{M}]$ as
where $T_{k}$ are constructed as
with $v_{k}>0$ on I, $v_{k}\in C^{n-k+1}(I)$, for $k=1,\ldots,n$.
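To fix ideas, here is a concrete instance of such a factorization (our own elementary example, not taken from the paper). For $T_{2}[0]u = u'' + u$ on $I = [0,c]$ with $c < \pi/2$, the Markov system $u_{1} = \cos t$, $u_{2} = \sin t$ has positive Wronskians $W(u_{1}) = \cos t$ and $W(u_{1},u_{2}) = 1$ on I, and one admissible factorization with positive factors reads:

```latex
\[
  u''(t) + u(t)
  \;=\;
  \frac{1}{\cos t}\,
  \frac{d}{dt}\!\left(
    \cos^{2} t \,
    \frac{d}{dt}\!\left( \frac{u(t)}{\cos t} \right)
  \right),
  \qquad t \in [0,c],\ c < \frac{\pi}{2}.
\]
% Expanding:  d/dt(u/cos t) = (u' cos t + u sin t)/cos^2 t, so the inner
% bracket equals u' cos t + u sin t, whose derivative is (u'' + u) cos t;
% dividing by cos t recovers u'' + u.  All factors are positive on I.
```

This identity can be checked by direct differentiation, as indicated in the comment.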
Let us see now that $T_{h} u(t)$ is given as a linear combination of $u(t), u'(t),\ldots, u^{(h)}(t)$ with the form
where $p_{h_{i}}\in C^{n-h}(I)$.
Indeed, we are going to prove this equality by induction.
For $h=1$,
Assume, by induction hypothesis, that equation (15) is satisfied for some $h\in \lbrace1,\ldots,n-1 \rbrace$; therefore
which clearly has the form of equation (15).
Finally, taking into account boundary conditions (2) and the regularity of the functions $p_{h_{i}}$, we conclude that
Moreover,
So, from the positivity of $v_{h}$ on I, $h \in\{1, \ldots,n\}$, we see that $T_{k} u(a)$ and $u^{(k)}(a)$ have the same sign. The same property holds for $T_{n-k} u(b)$ and $u^{(n-k)}(b)$.
Expression of the matrix Green’s function
This subsection is devoted to expressing, as functions of $g_{M}(t,s)$, the functions $g_{1}(t,s), \ldots, g_{n-1}(t,s)$, defined in (6) as the first row components of the Green’s function of the vectorial system (4).
By studying the adjoint operator as in Section 1.3 of [3], we know that the related Green’s function of the adjoint operator $G^{*}$ satisfies $G^{*}(t,s)=G^{T} (s,t)$. Moreover, the following equality holds:
So, we can transform the previous equality in
Hence
or, equivalently,
Using this equality, we are going to prove by induction the following:
Here $\alpha_{i}^{j}(s)$ are functions of $a_{1}(s) ,\ldots,a_{j}(s)$ and of their derivatives up to order $j-1$, and follow the recurrence formula
Using equality (18), we deduce that the terms of the Green’s matrix in position $(1,i)$, $i=1,\ldots,n$, satisfy the following equality:
where $g_{M}(t,s)\equiv g_{n}(t,s)$.
If we take $i=n$ in equation (24) we deduce
which gives us equation (19) for $j=1$.
Assume now that equalities (19)-(23) hold for some given $j\in\{1,\ldots,n-2\}$. Let us see that they also hold for $j+1$. We have
Now, we can express the Green’s matrix related to problem (4), $G(t,s)$, as
If the coefficients $a_{1}(s),\ldots,a_{n-1}(s), a_{n}(s)$ are constants, $a_{1},\ldots,a_{n-1},a_{n}$, we can solve the recurrence (20)-(23) explicitly and deduce that
So, we see that
and we can rewrite $G(t,s)$ as
In particular, if $T_{n}[M] u(t)\equiv u^{(n)}(t)+M u(t)$ we conclude that
so the Green’s matrix, $G(t,s)$, is given by the expression
Remark 3.3
We note that in the general case it is possible to obtain some of the components of system (20)-(23). We have
Proof of the main results
Now we proceed with the proof of the main result, Theorem 3.1. To this end, we divide the proof into several steps.
We begin with a lemma.
Lemma 3.4
Let $\bar{M}\in\mathbb{R}$ be such that $T_{n}[\bar{M}] u(t)=0$ is disconjugate on I. Then the following properties are fulfilled:

If $n-k$ is even, then $T_{n}[\bar{M}]$ is an inverse positive operator on $X_{k}$ and its related Green’s function, $g_{\bar{M}}(t,s)$, satisfies ($\mathrm{P}_{g}$).

If $n-k$ is odd, then $T_{n}[\bar{M}]$ is an inverse negative operator on $X_{k}$ and its related Green’s function satisfies ($\mathrm{N}_{g}$).
Proof
By Lemma 2.6 and Remark 2.7 we see that for all $s\in (a,b)$ the function $\frac{g_{\bar{M}}(t,s)}{p(t)}$ can be extended to a strictly positive and continuous function in I, thus
Since $g_{\bar{M}}$ is a continuous function in $I \times I$, we see that $k_{1}$ and $k_{2}$ are continuous functions too.
If $n-k$ is even, we take $\phi(t)=p(t)$ and condition ($\mathrm{P}_{g}$) is trivially fulfilled.
If $n-k$ is odd, we take $\phi(t)=-p(t)$ and, multiplying equation (26) by −1, condition ($\mathrm{N}_{g}$) holds immediately. □
First, notice that, as a direct corollary of the previous lemma, the assertion for $\lambda_{1}$ in Theorem 3.1 follows directly from Theorems 2.12 and 2.13.
Now, we are going to prove the assertion in Theorem 3.1 concerning $\lambda_{2}$.
The proof will be done in several steps. First we show that, if $n-k$ is even, the Green’s function changes sign for all $M>\bar{M}-\lambda_{2}$, and, when $n-k$ is odd, for all $M<\bar{M}-\lambda_{2}$.
After that, we prove that this estimate is optimal in both situations.
To make the paper more readable, throughout the proofs of this subsection it will be assumed that $n-k$ is even. The arguments for $n-k$ odd will be pointed out at the end of the subsection.
 Step 1.:

Behavior of the Green’s function on a neighborhood of $s=a$ and $s=b$.
First, we construct two functions that will characterize the values of $M\in\mathbb{R}$ for which the Green’s function oscillates, or not, on a neighborhood of $s=a$ and $s=b$.
In order to do that, we denote the Green’s function related to problem (1)-(2) as follows:
Since $g_{M}(t,s)$ is a Green’s function,
where $g_{M}(t,s)$ is acting as a function of t.
Therefore, differentiating the previous expression, we deduce that
In particular, we can define the functions
Because of the relation between $g_{M}(t,s)$ and $g^{*}_{M}(t,s)$, shown in (10), and taking into account the boundary conditions of the adjoint operator, it is not difficult to deduce that
So, we are interested in knowing the values of M for which the functions $u(t)$ and $v(t)$ oscillate on I. Such a property guarantees that the Green’s function oscillates in a neighborhood of $s=a$ or $s=b$ for those values. Moreover, it provides an upper bound for the set of parameters where the Green’s function does not oscillate.
 Step 1.1.:

Boundary conditions of $v(t)$.
Because of equality (27) we know that $T_{n}[M] v(t)=0$ on $(a,b]$. In this step we are going to see which boundary conditions the function v satisfies.
We see that $G(t,s)$ as it appears on (25) is the Green’s matrix related to the vectorial problem (4). Using the expressions of matrices B and C given by (5), if we consider the first row of resultant matrix, we obtain for $s\in (a,b)$ the following expression:
Thus, when $k>1$, none of the previous elements belongs to the diagonal of the matrix Green’s function. Since it has discontinuities only at its diagonal entries (see Definition 2.4), by taking the limit as s tends to a, we deduce that the previous equalities hold for $g_{M}^{2}(a,a)$, i.e.,
so, we conclude that
hence $v(a)=0$.
Analogously, since we do not reach any diagonal element, we deduce that $v'(a)=\cdots=v^{(k2)}(a)=0$.
Let us see what happens for $v^{(k1)}(a)$ with $k>1$. We arrive at the following system written as a function of $g_{M}^{1}(t,s)$:
This system remains true for $s=a$, and because of the continuity of the Green’s matrix at $t=s$ on the nondiagonal elements and the break which is produced on its diagonal, we arrive at the following system for $g_{M}^{2}(a,a)$:
hence
and
Obviously, taking $k=1$, the same argument shows that $v(a)=(-1)^{n-1}$.
To see the boundary conditions at $t=b$, we have the following system for $s\in(a,b)$, written as a function of $g_{M}^{2}(t,s)$:
hence
By continuity, this is satisfied at $s=a$, so
Using (25) and (5), since there is no jump in this case, it is immediate to verify that $v'(b)=\cdots=v^{(n-k-1)}(b)=0$.
As a consequence, v is the unique solution of the following problem, which we denote by ($\mathrm{P}_{v}$):
Remark 3.5
We note that, to attain the previous expression, we have not used any disconjugacy hypothesis on the operator $T_{n}[M]$. Moreover, the proof is valid for $n-k$ even or odd. In other words, the function v solves problem ($\mathrm{P}_{v}$) for any linear operator defined in (1) and any $k\in\{1, \ldots,n-1\}$.
We know, since $g_{\bar{M}}(t,s)$ has constant sign on $I \times I$ (see Lemma 3.4), that if $M=\bar{M}$ the function v must have constant sign on I.
 Step 1.2.:

If v has constant sign on I, then it cannot have any zero in $(a,b)$.
We are now going to see that, while $v(t)$ has constant sign on I, it cannot have any zero in $(a,b)$; so the sign change comes in at $t=a$ or $t=b$.
In order to do that, we are going to consider the decomposition of the operator $T_{n}[M]$ made in Section 3.1.
Since $nk$ is even, using Lemma 3.4, we know that the operator $T_{n}[\bar{M}+\lambda] $ is, for $\lambda=0$, inverse positive on $X_{k}$. So, the characterization of $\lambda<0$ follows from Theorem 2.12.
For $\lambda>0$, $v\in C^{n}(I)$ is a solution of a linear differential equation, hence it is only allowed to have a finite number of zeros on I. Therefore, if $v(t)\geq0$, we have $v(t)>0$ for all $t\in I\backslash \lbrace t_{0},\ldots,t_{\ell}\rbrace$. In particular $v(t)>0$ for a.e. $t\in I$. Thus
As we have shown in Section 3.1, we know that
Since for every $k=1,\ldots,n$, $v_{k}\in C^{n-k+1}(I)$ and $v_{k}(t)>0$ on I, we conclude that $\frac{1}{v_{n}(t)} T_{n-1} v(t)$ must be decreasing on I.
Therefore, since $v_{n}(t)>0$ on I, we see that $T_{n-1}v(t)$ can vanish at most once in I.
Arguing by recurrence, we see that $T_{0} v(t)=v(t)$ can have at most n zeros on I (multiple zeros being counted according to their multiplicity) while $v(t)\geq0$.
On the other hand, because of the boundary conditions (30)-(32), we know that v vanishes $n-1$ times at a and b; hence it cannot have a double zero in $(a,b)$. This implies that the sign change cannot come from $(a,b)$.
 Step 1.3:

Sign change of v at $t=a$ and $t=b$.
We are now going to see that the sign change cannot come from a neighborhood of $t=a$.
Since $n-k$ is even, as we have proved before, $v^{(k-1)}(a)=1>0$ for all $M\in\mathbb{R}$, which implies, since $v(a)=\cdots =v^{(k-2)}(a)=0$, that $v(t)={g^{2}_{M}}_{s^{n-k}}(t,a)$ is always positive in a neighborhood of $t=a$. So the following property is verified:
Using Step 1.2, we see that v keeps constant sign on I while $v^{(n-k)}(b)=0$ is not satisfied, i.e., while an eigenvalue of $T_{n}[\bar{M}]$ on $X_{k-1}$ is not attained.
Equivalently, if $M\in[\bar{M},\bar{M}-\lambda_{2}']$, then $g_{M}(t,s)$ satisfies the following property:
Moreover, by Theorem 2.11, we deduce that $g_{M}(t,s)$ oscillates in $I \times I$ for all $M>\bar{M}-\lambda_{2}'$.
If $k=1$, in particular we see that $v(a)=1>0$. Since we have seen in Step 1.2 that, while v has constant sign on I, it cannot have any zero in $(a,b)$, the sign change could only occur if $v^{(n-1)}(b)=0$, which would imply that v has a zero of multiplicity n at $t=b$; this is impossible for a nontrivial solution of a linear differential equation. Then $g_{M}(t,s)$ satisfies (35) for every $M\geq \bar{M}$.
 Step 1.4.:

Study of the function u.
In order to analyze the behavior of the Green’s function on a left neighborhood of $s=b$, we work now with the function u defined in (28).
Using the same arguments as for v, we conclude that u is the unique solution of the following problem, which we denote as ($\mathrm{P}_{u}$):
As in Remark 3.5, we see that this property depends neither on the disconjugacy of the operator $T_{n}[M]$ nor on whether $n-k$ is even or odd.
Using arguments analogous to those used for v, we can prove that the sign change cannot occur in the open interval $(a,b)$.
Moreover, from the condition $u^{(n-k-1)}(b)=(-1)^{k-1}$, a sign change of u cannot appear at b.
So u has constant sign on I until $u^{(k)}(a)=0$ is satisfied, i.e., while an eigenvalue of $T_{n}[\bar{M}]$ on $X_{k+1}$ is not attained; equivalently, while $M\in[\bar{M},\bar{M}-\lambda_{2}'']$.
Thus we see that if M is on that interval, the Green’s function satisfies the following property:
But once $M>\bar{M}-\lambda_{2}''$, the Green’s function oscillates in $I \times I$.
As a consequence of Step 1, we deduce that the interval $(\bar{M}-\lambda _{1},\bar{M}-\lambda_{2}]$ cannot be enlarged. Moreover, we have also proved that the Green’s function satisfies properties (35) and (39) for all M in that interval.
 Step 2.:

Behavior of the Green’s function on a neighborhood of $t=a$ and $t=b$.
Now, let us see what happens in a neighborhood of $t=a$ and $t=b$. In order to do that, we are going to use the operator $\widehat {T}_{n}[(-1)^{n} \bar{M}]$ defined in (11) and the relation between $g_{M}(t,s)$ and $\hat{g}_{(-1)^{n} M}(t,s)$ given in (12).
Arguing as in Step 1, we will obtain the values of the real parameter M for which $\hat{g}_{(-1)^{n} M}(t,s)$ is of constant sign on a neighborhood of $s=a$ and $s=b$ for every fixed $t\in(a,b)$. Once this is done, we will be able to apply such a property to the behavior of $g_{M}(t,s)$ on a neighborhood of $t=a$ or $t=b$.
The analogous problem for the operator $\widehat{T}_{n}[(-1)^{n} M]$ related to problem (1)-(2) is given by
Theorem 2.8 implies that the equation $T_{n}^{*}[\bar{M}] u(t)=0$ is disconjugate on I. So, the same holds for $\widehat{T}_{n}[(-1)^{n} \bar{M}] u(t)=0$. Reasoning as in Step 1, we are able to prove that $\hat{g}_{(-1)^{n} M}(t,s)$ satisfies (35) while an eigenvalue of $\widehat{T}_{n}[(-1)^{n} \bar{M}]$ on $X_{n-k-1}$, denoted by $\hat{\lambda}_{2}''$, is not attained.
This fact is equivalent to the existence of an eigenvalue of $T^{*}_{n}[\bar{M}]$ on $X_{n-k-1}$, which will be $(-1)^{n} \hat {\lambda }_{2}''$. Now, using the fact that the real eigenvalues of an operator coincide with those of the adjoint operator, we conclude that $\lambda _{2}''=(-1)^{n} \hat{\lambda}_{2}''$ is the biggest negative eigenvalue of $T_{n}[\bar{M}]$ on $X_{n-(n-k-1)}=X_{k+1}$ and $\hat{g}_{(-1)^{n} M}(t,s)$ satisfies the property (35) while $M\in[\bar {M},\bar {M}-\lambda_{2}'']$. So for all $s\in(a,b)$, the Green’s function of problem (1)-(2), $g_{M}(t,s)$, satisfies the following statement:
Analogously, arguing as before, we know that if $k>1$, then $\hat {g}_{(-1)^{n} M}(t,s)$ satisfies the property (39) while an eigenvalue of $\widehat{T}_{n}[M]$ on $X_{n-k+1}$ is not attained, which is equivalent to the existence of an eigenvalue of $T_{n}[M]$ on $X_{k-1}$. Moreover, if $k=1$, then $\hat{g}_{(-1)^{n} M}(t,s)$ satisfies (39) for every $M\geq\bar{M}$. Therefore, if $M\in [\bar{M},\bar{M}-\lambda_{2}']$, we can affirm that the Green’s function of the operator $\widehat{T}_{n}[(-1)^{n} M]$, $\hat{g}_{(-1)^{n} M}(t,s)$, satisfies (39); as a consequence, the Green’s function of problem (1)-(2), $g_{M}(t,s)$, will verify the following:
As a consequence of the two previous steps, we have already proved that if $M\in[\bar{M},\bar{M}-\lambda_{2}]$ then the Green’s function satisfies the statements (35), (39), (40), and (41) and that if $M>\bar{M}-\lambda_{2}$ the Green’s function oscillates on $I \times I$.
 Step 3.:

The Green’s function cannot begin to change sign on $(a,b)\times(a,b)$.
In this step we prove that the oscillation of the Green’s function related to problem (1)-(2) must begin on the boundary of $I \times I$. Using Theorem 2.11 we see that, provided it has a nonnegative sign on $I\times I$, $g_{M}$ decreases in M.
As a consequence, once we prove that $g_{M}$ cannot have a double zero on $(a,b)\times(a,b)$, the change of sign must start on the boundary of $I\times I$.
Let us see that if $g_{M}(t,s)\geq0$ in $I\times I$ then $g_{M}(t,s)>0$ in $(a,b)\times(a,b)$.
Denote, for a fixed $s\in(a,b)$, $w_{s}(t)=g_{M}(t,s)$. By definition, denoting, as in Step 1, $\lambda=M-\bar{M}$, we see that
Since $g_{\bar{M}}\ge0$ on $I \times I$, the behavior for $M<\bar{M}$ has been characterized in Lemma 3.4 and Theorem 2.12.
So we must pay attention to the situation $M > \bar{M}$, i.e. $\lambda>0$. In such a case, since, as in Step 1.2, we see that $w_{s}(t)\geq0$ has a finite number of zeros in I, we know that
Using (13) and (14), we see that
with $v_{k}>0$ on I for $k=1,\ldots,n$. In particular, $T_{n} w_{s}(t)<0$ a.e. in I.
Notice that, for all $s \in(a,b)$, $w_{s} \in C^{n-2}(I)$ and $w_{s}^{(n-1)}(s^{+})-w_{s}^{(n-1)}(s^{-})=1$. Therefore, due to the definition of $T_{n}[\bar{M}]$ and expression (15), we see that $\frac{1}{v_{n}(t)}T_{n-1} w_{s}(t)$ is a continuous function on $[a,s) \cup(s,b]$.
Since $T_{n} w_{s}(t)=\frac{d}{dt} ( \frac{1}{v_{n}(t)}T_{n-1} w_{s}(t) ) <0$ for $t \neq s$, we can affirm that $\frac{1}{v_{n}(t)} T_{n-1} w_{s}(t)$ is a decreasing function on I with a positive jump at $t=s$. So, it can have, at most, two zeros in I (see Figure 1).
Even if we cannot guarantee that $T_{n-1} w_{s}(t)$ is decreasing, since $v_{n}>0$ on I, we conclude that it has the same sign as $\frac {1}{v_{n}(t)} T_{n-1} w_{s}(t)$, i.e., it can have at most two zeros on I.
On the other hand, using equation (15) again, we conclude that $\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)$ is a continuous function on I. Now, (14) tells us that $\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)$ can have at most four zeros on I (see Figure 2).
As before, we do not know intervals where $T_{n-2} w_{s}(t)$ is increasing or decreasing, but since $v_{n-1}(t)>0$ we conclude that it has the same sign as $\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)$, so it can have at most four zeros.
Following this argument, since $v_{k}>0$ on I for $k=1,\ldots,n$, we know that $T_{n-2-h}w_{s}(t)$ cannot have more than $4+h$ zeros on I (multiple zeros being counted according to their multiplicity). In particular, $w_{s}(t)= T_{0} w_{s}(t)$ can have at most $n+2$ zeros, n of them at the boundary.
This fact would allow $w_{s}$ to have a double zero on $(a,b)$. So, to show that such a double root cannot exist, we need to prove that maximal oscillation is not possible. To this end, we point out that if for some h the sign of $T_{n-2-h}w_{s}(a)$ equals the sign of $T_{n-2-(h+1)}w_{s}(a)$, we lose a possible oscillation.
Therefore, for maximal oscillation we must have
However, since $w_{s}(t)\geq0$ on I and $w_{s}(a)=w_{s}'(a)= \cdots =w_{s}^{(k-1)}(a)=0$, we deduce that $w_{s}^{(k)}(a)\geq0$.
We can assume that $w_{s}^{(k)}(a)>0$ because, on the contrary, if $w_{s}^{(k)}(a)=0$, we would have at most $n+2$ zeros, $n+1$ of them at the boundary. So only a simple zero would be allowed in the interior, which is not possible without oscillation.
Therefore $w_{s}^{(k)}(a)=w_{s}^{(n-(n-k))}(a)>0$. Since $n-k$ is even, using now (16), we also know that $T_{k} w_{s}(a)>0$, which inhibits maximal oscillation.
So we conclude that if $g_{M}(t,s)\geq0$ on $I\times I$ then $g_{M}(t,s)>0$ on $(a,b)\times(a,b)$, as we wanted to prove.
As a consequence of the three previous steps, we have described the set of the real parameters M for which the Green’s function is nonnegative on $I \times I$ when $n-k$ is even.
If $n-k$ is odd, similar arguments complete the proof. In the sequel, we enumerate the main ideas to be developed.
 Step 1.:

 Step 1.1.:

No modifications are needed.
 Step 1.2.:

In equality (33) we have $\lambda<0$ and $v(t)<0$ a.e. in I, so it remains true and we can proceed analogously.
 Step 1.3.:

In this case, we see that $v^{(k-1)}(a)<0$. What we obtain in this step is that $g_{M}(t,s)$ verifies the property (35) while $M\in[\bar{M}-\lambda_{2}',\bar{M}]$ and oscillates for all $M<\bar{M}-\lambda_{2}'$.
If $k=1$, the conclusion is that $g_{M}(t,s)$ verifies the property (35) for every $M\leq\bar{M}$; in particular, for $n=2$.
 Step 1.4.:

The arguments are not modified, but the final conclusion is that $g_{M}(t,s)$ satisfies the property (39) for $M\in[\bar{M}-\lambda_{2}'',\bar{M}]$ and oscillates for all $M<\bar{M}-\lambda_{2}''$.
In this case, if $k=n-1$, we can conclude, as in Step 1.3, that u is of constant sign for every $M\leq\bar{M}$, and then the Green’s function satisfies the property (39); in particular, for $n=2$.
 Step 2.:

Using the same arguments, we conclude that the interval where $g_{M}(t,s)$ verifies (35), (39), (40), and (41) is $[\bar{M}-\lambda_{2},\bar{M}]$.
 Step 3.:

In this case we see that $w^{(k)}_{s}(a)=w^{(n-(n-k))}_{s}(a)<0$, with $n-k$ odd, which again contradicts maximal oscillation.
Thus, our result is proved.
As a direct consequence of the arguments used in Step 1.3, without assuming the existence of $\bar{M}\in\mathbb{R}$ for which equation $T_{n}[\bar{M}] u(t)=0$ is disconjugate on I, we arrive at the following result.
Corollary 3.6
Let $T_{n}[M]$ be defined as in (1). Then the two following properties hold:
If $n-k$ is even, then there does not exist $M\in\mathbb{R}$ such that the operator $T_{n}[M]$ is inverse negative in $X_{k}$.
If $n-k$ is odd, then there does not exist $M\in\mathbb{R}$ such that the operator $T_{n}[M]$ is inverse positive in $X_{k}$.
Proof
It is enough to take into account that v, defined in (29), is the unique solution of problem ($\mathrm{P}_{v}$). Since $v^{(k-1)}(a)=(-1)^{n-k}$, we conclude that, if $n-k$ is even, the Green’s function has positive values in any neighborhood of $(a,a)$, and negative values when $n-k$ is odd.
So, the result follows from Theorem 2.10. □
Particular cases
In order to obtain the eigenvalues of particular problems we calculate a fundamental system of solutions $y_{1}[M](t),\ldots,y_{n}[M](t)$ of equation (1) where every $y_{k}[M](t)$ satisfies the initial conditions
Then we denote the $n-1$ Wronskians as
As a consequence of the characterization given in Chapter 3, Lemma 12 of [15], we deduce that the eigenvalues of problem (1) in $X_{k}$ are given as the $\lambda\in\mathbb{R}$ for which $W_{n-k}^{n}[\lambda](b)=0$. So, in the sequel, we use this method to find the eigenvalues of the different problems considered.
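As an illustration (ours, not part of the paper), this Wronskian criterion can be reproduced numerically for the model equation $u^{(n)}(t)=\lambda u(t)$ on $[0,1]$: integrate the fundamental system with a standard RK4 scheme, build the $(n-k)\times(n-k)$ determinant from the solutions vanishing to order k at $t=0$, and locate its roots by bisection. All function names below are ours.

```python
# Sketch (ours): locate eigenvalues of u^(n) = lambda*u under (k, n-k)
# boundary conditions on [0,1] as roots of the Wronskian-type determinant.
from math import pi

def fundamental_at_one(n, lam, i, steps=400):
    """Integrate the companion system of u^(n) = lam*u by classical RK4,
    starting from the i-th canonical initial vector e_i, and return the
    state (u, u', ..., u^(n-1)) at t = 1."""
    def f(y):
        return [y[j + 1] for j in range(n - 1)] + [lam * y[0]]
    y = [0.0] * n
    y[i] = 1.0
    h = 1.0 / steps
    for _ in range(steps):
        k1 = f(y)
        k2 = f([y[j] + 0.5 * h * k1[j] for j in range(n)])
        k3 = f([y[j] + 0.5 * h * k2[j] for j in range(n)])
        k4 = f([y[j] + h * k3[j] for j in range(n)])
        y = [y[j] + h / 6.0 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j])
             for j in range(n)]
    return y

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    size, d = len(m), 1.0
    for c in range(size):
        p = max(range(c, size), key=lambda r: abs(m[r][c]))
        if abs(m[p][c]) < 1e-300:
            return 0.0
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, size):
            factor = m[r][c] / m[c][c]
            for cc in range(c, size):
                m[r][cc] -= factor * m[c][cc]
    return d

def wronskian(n, k, lam):
    """The (n-k)x(n-k) determinant built from the last n-k fundamental
    solutions (those vanishing to order k at t = 0), evaluated at t = 1."""
    cols = [fundamental_at_one(n, lam, i) for i in range(k, n)]
    return det([[cols[c][j] for c in range(n - k)] for j in range(n - k)])

def eigenvalue(n, k, lo, hi, tol=1e-9):
    """Bisection for a root of lam -> wronskian(n, k, lam) in [lo, hi]."""
    flo = wronskian(n, k, lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = wronskian(n, k, mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Order 2, X_1: the biggest negative eigenvalue of u'' is -pi^2.
print(eigenvalue(2, 1, -15.0, -5.0))  # approximately -9.8696 = -pi**2
```

The same routine reproduces, for instance, the order-3 value $(\lambda_{3}^{1})^{3}\approxeq75.86$ below as a sign change of the determinant.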
The operator $T_{n}[M] u(t)\equiv u^{(n)}(t)+M u(t)$
First of all, we are going to consider problems where $T_{n}[M] u(t)\equiv u^{(n)}(t)+M u(t)$, with $[a,b]=[0,1]$.
For this kind of problem, with $M=0$, the equation $u^{(n)}(t)=0$ is always disconjugate; see Chapter 3 of [15]. So the hypotheses of Theorem 3.1 are satisfied.
Remark 4.1
Note that the adjoint equation to problem $T_{n}[M] u=0$, $u \in X_{k}$, is given by
So, if $\lambda_{i}$ is an eigenvalue of $u^{(n)}$ in $X_{k}$, it is also an eigenvalue of $(-1)^{n} u^{(n)}$ in $X_{n-k}$. Thus, $(-1)^{n} \lambda _{i}$ is an eigenvalue of $u^{(n)}$ in $X_{n-k}$.
As a consequence, we only need to obtain the first $\lfloor\frac {n}{2} \rfloor$ Wronskians, where $\lfloor\cdot\rfloor$ denotes the floor function.
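As an illustration of this remark (our example, using the order-5 values computed below):

```latex
% For n = 5 (odd): if (\lambda_5^1)^5 is the least positive eigenvalue of
% u^{(5)} in X_1, then
%   (-1)^5 (\lambda_5^1)^5 = -(\lambda_5^1)^5
% is an eigenvalue of u^{(5)} in X_{5-1} = X_4 (indeed its biggest
% negative one), so the Wronskian W_4^5 need not be computed.
```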

Order 2
The eigenvalues of the operator $u''(t)$ in $X_{1}$ must satisfy $W_{1}^{2}[\lambda](1)=0$, which can be replaced by the following equation:
so its closest-to-zero negative eigenvalue is $\lambda_{2}^{1}=-\pi^{2}$.
So we can affirm that the Green’s function related to the operator $u''(t)+M u(t)$ is negative if, and only if, $M\in ( -\infty, \pi^{2} ) $.
This result has already been obtained in different references (see [3] and the references therein), but here it is not necessary to know the expression of the Green’s function.
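For completeness, the computation behind this order-2 result can be sketched as follows (our reconstruction, using the normalization of the fundamental system described above):

```latex
y_{2}[\lambda](t)=\frac{\sin ( \sqrt{-\lambda}\, t )}{\sqrt{-\lambda}},
\qquad \lambda<0,
\qquad
W_{1}^{2}[\lambda](1)=y_{2}[\lambda](1)=0
\;\Longleftrightarrow\;
\sqrt{-\lambda}=m\pi,\quad m\in\mathbb{N},
```

so the negative eigenvalues are $\lambda=-m^{2}\pi^{2}$ and the one closest to zero is $-\pi^{2}$.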

Order 3
$\lambda_{3}^{1}\approxeq4.23321$ is the least positive solution of $W_{1}^{3}[\lambda^{3}](1)=0$, which is equivalent to the equation
Then, the least positive eigenvalue of the operator $u^{(3)}(t)$ in $X_{1}$ is $( \lambda_{3}^{1} ) ^{3}$ and the biggest negative eigenvalue of the operator $u^{(3)}(t)$ in $X_{2}$ is $-( \lambda _{3}^{1} ) ^{3}$.
So, we can affirm that the Green’s function of the operator $u^{(3)}(t)+M u(t)$:

in $X_{1}$ is positive if, and only if, $M\in ( -( \lambda _{3}^{1} ) ^{3}, ( \lambda_{3}^{1} ) ^{3} ] $,

in $X_{2}$ is positive if, and only if, $M\in [ -( \lambda _{3}^{1} ) ^{3}, ( \lambda_{3}^{1} ) ^{3} ) $.
This result has been obtained by means of the explicit form of the Green’s function in [25].

Order 4
$\lambda_{4}^{1}\approxeq5.553$ is the least positive solution of $W_{1}^{4}[\lambda^{4}](1)=0$; simplifying that expression, we have
$\lambda_{4}^{2}\approxeq4.73004$ is the least positive solution of $W_{2}^{4}[\lambda^{4}](1)=0$, which can be expressed as
The biggest negative eigenvalue of the operator $u^{(4)}(t)$ in $X_{1}$ and $X_{3}$ is given by $-( \lambda_{4}^{1} ) ^{4}$.
The least positive eigenvalue of the operator $u^{(4)}(t)$ in $X_{2}$ is $( \lambda_{4}^{2} ) ^{4}$.
Therefore, we can affirm without calculating it explicitly, that the Green’s function related to the operator $u^{(4)}(t)+M u(t)$:

in $X_{1}$ and $X_{3}$ is negative if, and only if, $M\in [ -( \lambda_{4}^{2} ) ^{4}, ( \lambda_{4}^{1} ) ^{4} )$,

in $X_{2}$ is positive if, and only if, $M\in ( -( \lambda _{4}^{2} ) ^{4}, ( \lambda_{4}^{1} ) ^{4} ]$.
These results have been obtained using the explicit form of the Green’s function in [26] and [27].
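It is worth noting (our observation, not stated in the references) that $\lambda_{4}^{2}\approxeq4.73004$ coincides, to the digits shown, with the smallest positive root of $\cos\lambda\cosh\lambda=1$, the classical characteristic equation of the clamped-clamped beam, i.e., of the $(2,2)$ problem for $u^{(4)}=\lambda^{4}u$. A quick numerical check:

```python
# Check (ours): 4.73004 is, to the digits shown, a root of
# cos(x)*cosh(x) = 1, the clamped-clamped beam characteristic equation.
from math import cos, cosh

x = 4.73004
print(cos(x) * cosh(x))  # close to 1
```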

Order 5
We can obtain $\lambda_{5}^{1}\approxeq6.94867$ and $\lambda _{5}^{2}\approxeq 5.64117$ as the least positive solutions of $W_{1}^{5}[\lambda^{5}](1)=0$ and $W_{2}^{5}[\lambda^{5}](1)=0$, respectively. However, the resulting equations are too long to display here and are of little interest.
The least positive eigenvalue of the operator $u^{(5)}(t)$ in $X_{1}$ is $( \lambda_{5}^{1} ) ^{5}$.
The biggest negative eigenvalue of the operator $u^{(5)}(t)$ in $X_{2}$ is $-( \lambda_{5}^{2} ) ^{5}$.
The least positive eigenvalue of the operator $u^{(5)}(t)$ in $X_{3}$ is $( \lambda_{5}^{2} ) ^{5}$.
The biggest negative eigenvalue of the operator $u^{(5)}(t)$ in $X_{4}$ is $-( \lambda_{5}^{1} ) ^{5}$.
Therefore, we conclude, without calculating it explicitly, that the Green’s function related to the operator $u^{(5)}(t)+M u(t)$:

in $X_{1}$ is positive if, and only if, $M\in ( -( \lambda _{5}^{1} ) ^{5}, ( \lambda_{5}^{2} ) ^{5} ]$,

in $X_{2}$ is negative if, and only if, $M\in [ -( \lambda _{5}^{2} ) ^{5}, ( \lambda_{5}^{2} ) ^{5} )$,

in $X_{3}$ is positive if, and only if, $M\in ( -( \lambda _{5}^{2} ) ^{5}, ( \lambda_{5}^{2} ) ^{5} ]$,

in $X_{4}$ is negative if, and only if, $M\in[ -( \lambda _{5}^{2} ) ^{5}, ( \lambda_{5}^{1} ) ^{5} )$.

Order 6
$\lambda_{6}^{1}\approxeq8.3788$ is the least positive solution of $W_{1}^{6}[\lambda^{6}](1)=0$, which is equivalent to
$\lambda_{6}^{2}\approxeq6.70763$ is the least positive solution of $W_{2}^{6}[\lambda^{6}](1)=0$, which we can express as
$\lambda_{6}^{3}\approxeq6.28319$ is the least positive solution of $W_{3}^{6}[\lambda^{6}](1)=0$, which can be represented as the first positive root of the following equation:
The biggest negative eigenvalue of the operator $u^{(6)}(t)$ in $X_{1}$ and $X_{5}$ is given by $-(\lambda_{6}^{1} )^{6}$.
The least positive eigenvalue of the operator $u^{(6)}(t)$ in $X_{2}$ and $X_{4}$ is $(\lambda_{6}^{2} )^{6}$.
The biggest negative eigenvalue of the operator $u^{(6)}(t)$ in $X_{3}$ is $-(\lambda_{6}^{3} )^{6}$.
Hence, we can affirm without calculating it explicitly, that the Green’s function related to the operator $u^{(6)}(t)+M u(t)$:

in $X_{1}$ or in $X_{5}$ is negative if, and only if, $M\in [ -( \lambda_{6}^{2} ) ^{6}, ( \lambda_{6}^{1} ) ^{6} )$,

in $X_{2}$ or in $X_{4}$ is positive if, and only if, $M\in ( -( \lambda_{6}^{2} ) ^{6}, ( \lambda_{6}^{3} ) ^{6} ]$,

in $X_{3}$ is negative if, and only if, $M\in [ -( \lambda _{6}^{2} ) ^{6}, ( \lambda_{6}^{3} ) ^{6} )$.

Order 7
We are not able to obtain analytically the eigenvalues of the operator $u^{(7)}(t)$, but we can obtain them numerically.
The least positive eigenvalue of this operator in $X_{1}$ is $( \lambda_{7}^{1} ) ^{7}$, where $\lambda_{7}^{1}\approxeq9.82677$.
The biggest negative eigenvalue in $X_{2}$ is $-( \lambda _{7}^{2} ) ^{7}$, where $\lambda_{7}^{2}\approxeq7.85833$.
The least positive eigenvalue in $X_{3}$ is $( \lambda_{7}^{3} ) ^{7}$, where $\lambda_{7}^{3}\approxeq7.1347$.
The biggest negative eigenvalue in $X_{4}$ is $-( \lambda _{7}^{3} ) ^{7}$.
The least positive eigenvalue in $X_{5}$ is $( \lambda_{7}^{2} ) ^{7}$.
The biggest negative eigenvalue in $X_{6}$ is $-( \lambda_{7}^{1} ) ^{7}$.
So, we conclude, without calculating it explicitly, that the Green’s function related to the operator $u^{(7)}(t)+M u(t)$:

in $X_{1}$ is positive if, and only if, $M\in ( -( \lambda _{7}^{1} ) ^{7}, ( \lambda_{7}^{2} ) ^{7} ]$,

in $X_{2}$ is negative if, and only if, $M\in [ -( \lambda _{7}^{3} ) ^{7}, ( \lambda_{7}^{2} ) ^{7} )$,

in $X_{3}$ is positive if, and only if, $M\in ( -( \lambda _{7}^{3} ) ^{7}, ( \lambda_{7}^{3} ) ^{7} ]$,

in $X_{4}$ is negative if, and only if, $M\in [ -( \lambda _{7}^{3} ) ^{7}, ( \lambda_{7}^{3} ) ^{7} )$,

in $X_{5}$ is positive if, and only if, $M\in ( -( \lambda _{7}^{2} ) ^{7}, ( \lambda_{7}^{3} ) ^{7} ]$,

in $X_{6}$ is negative if, and only if, $M\in [ -( \lambda _{7}^{2} ) ^{7}, ( \lambda_{7}^{1} ) ^{7} )$.

Order 8
$\lambda_{8}^{1}\approxeq11.2846$, $\lambda_{8}^{2}\approxeq9.06306$, $\lambda _{8}^{3}\approxeq8.09971$, and $\lambda_{8}^{4}\approxeq7.81871$ can be obtained analytically as the least positive solutions of $W_{1}^{8}[\lambda ^{8}](1)=0$, $W_{2}^{8}[\lambda^{8}](1)=0$, $W_{3}^{8}[\lambda^{8}](1)=0$, and $W_{4}^{8}[\lambda^{8}](1)=0$, respectively, but their expressions are too long to display here and do not bring any important information.
The biggest negative eigenvalue of the operator $u^{(8)}(t)$ in $X_{1}$ and $X_{7}$ is given by $-(\lambda_{8}^{1})^{8}$.
The least positive eigenvalue of the operator $u^{(8)}(t)$ in $X_{2}$ and $X_{6}$ is given by $(\lambda_{8}^{2})^{8}$.
The biggest negative eigenvalue of the operator $u^{(8)}(t)$ in $X_{3}$ and $X_{5}$ is given by $-(\lambda_{8}^{3})^{8}$.
The least positive eigenvalue of the operator $u^{(8)}(t)$ in $X_{4}$ is $(\lambda_{8}^{4})^{8}$.
So, we can affirm without calculating it explicitly, that the Green’s function related to the operator $u^{(8)}(t)+M u(t)$:

in $X_{1}$ or in $X_{7}$ is negative if, and only if, $M\in [ -( \lambda_{8}^{2} ) ^{8}, ( \lambda_{8}^{1} ) ^{8} )$,

in $X_{2}$ or in $X_{6}$ is positive if, and only if, $M\in ( -( \lambda_{8}^{2} ) ^{8}, ( \lambda_{8}^{3} ) ^{8} ]$,

in $X_{3}$ or in $X_{5}$ is negative if, and only if, $M\in [ -( \lambda_{8}^{4} ) ^{8}, ( \lambda_{8}^{3} ) ^{8} )$,

in $X_{4}$ is positive if, and only if, $M\in ( -( \lambda _{8}^{4} ) ^{8}, ( \lambda_{8}^{3} ) ^{8} ]$.
As we have said before, third-order problems were explicitly solved in [25]. Fourth-order problems were treated in [26] in $X_{2}$ and in [27] in $X_{1}$ and $X_{3}$, respectively. But in all of these cases it was necessary to obtain the expression of the Green’s function and analyze it.
Moreover, in all the problems treated in [25–27] the open optimal interval where the Green’s function is of constant sign coincides with the optimal interval where equation (1) is disconjugate.
However, in Theorem 2.1 of [28] the following characterization of the interval of disconjugacy is proved.
Theorem 4.2
Let $\bar{M}\in\mathbb{R}$ and $n\geq2$ be such that $T_{n}[\bar{M}] u(t)=0$ is a disconjugate equation on I. Then, $T_{n}[M] u(t)=0$ is a disconjugate equation on I if, and only if, $M\in(\bar{M}-\lambda _{1},\bar{M}-\lambda_{2})$, where:

$\lambda_{1}=+\infty$ if $n=2$ and, for $n>2$, $\lambda_{1}>0$ is the minimum of the least positive eigenvalues of $T_{n}[\bar{M}]$ in $X_{k}$, with $n-k$ even.

$\lambda_{2}<0$ is the maximum of the biggest negative eigenvalues of $T_{n}[\bar{M}]$ in $X_{k}$, with $n-k$ odd.
As a consequence, we see that the interval of constant sign of the Green’s function and that of disconjugacy for the linear operator are not the same in general. We have already proved (see Lemma 3.4) that while equation (1) is disconjugate its related Green’s function must be of constant sign. So, if both intervals do not coincide, the optimal interval where equation (1) is disconjugate must be contained in the open optimal interval where the Green’s function is of constant sign.
If, using the characterization given in Theorem 4.2, we calculate the optimal interval of M of disconjugacy for the equation
It is given by $( -( \lambda_{5}^{2} ) ^{5}, ( \lambda _{5}^{2} ) ^{5})$.
But, as we have shown before, the Green’s function related to the problem on the space $X_{1}$ remains positive on the interval $( -( \lambda_{5}^{1} ) ^{5}, ( \lambda_{5}^{2} ) ^{5}]$. So, its biggest open interval of constant sign is strictly bigger than the optimal interval of disconjugacy.
Remark 4.3
In this kind of problems, if λ is an eigenvalue on $[0,1]$, then $\frac{\lambda}{(b-a)^{n}}$ is an eigenvalue on $[a,b]$.
So we can obtain our conclusions about the sign of the Green’s function on any arbitrary interval $[a,b]$.
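As a worked instance of this rescaling (our example, not from the paper):

```latex
% Rescaling the order-2 result from [0,1] to [a,b]:
% the eigenvalue -\pi^2 becomes -\pi^2/(b-a)^2, so the Green's function
% of u'' + M u in X_1 on [a,b] is negative if, and only if,
%   M \in ( -\infty, \pi^{2}/(b-a)^{2} ).
% For [a,b] = [0, 2\pi] the threshold is \pi^{2}/(2\pi)^{2} = 1/4.
```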
Operators with constant coefficients
This characterization of the interval where the Green’s function is of constant sign is also useful for problems with more nonzero coefficients.
For example, we can consider the fourth-order operator
We can show, using the characterization given in Theorem 2.3, that $T_{n}[0] u(t)=0$ is a disconjugate equation on $[0,1]$ and, so, Theorem 3.1 can be applied.
First, we calculate numerically the eigenvalues closest to zero in each $X_{k}$, $k=1,2,3$.

The biggest negative eigenvalue in $X_{1}$ is $-(7.02782)^{4}$.

The least positive eigenvalue in $X_{2}$ is $(5.27208)^{4}$.

The biggest negative eigenvalue in $X_{3}$ is $-(5.97041)^{4}$.
Realize that in this case we need to obtain the three corresponding Wronskians because it is not possible to connect the eigenvalues in $X_{1}$ with those in $X_{3}$ by means of the corresponding adjoint equation.
So, we conclude that the Green’s function related to the operator $T_{n}[M] u(t)$ defined in (43):

in $X_{1}$ is negative if, and only if, $M\in[-(5.27208)^{4},(7.02782)^{4})$,

in $X_{2}$ is positive if, and only if, $M\in(-(5.27208)^{4},(5.97041)^{4}]$,

in $X_{3}$ is negative if, and only if, $M\in[-(5.27208)^{4},(5.97041)^{4})$.
Notice that in this case the interval of disconjugacy is $(-(5.27208)^{4},(5.97041)^{4})$. So we have obtained an example of a fourth-order equation whose interval of disconjugacy does not coincide with the biggest open interval where the Green’s function is of constant sign in $X_{1}$.
In the sequel, we show an example where the operator $T_{n}[M]$ does not satisfy the disconjugacy hypothesis for $\bar{M}=0$.
If we choose the operator
We see that the equation $T_{n}[0] u(t)=0$ is not disconjugate on $[0,1]$, but if we analyze the equation $T_{n}[-600] u(t)=0$ we can affirm, by means of Theorem 2.3, that it is disconjugate on $[0,1]$.
Hence, Theorem 3.1 can be applied to the operator $T_{n}[-600] u(t)$.
If we calculate the eigenvalues closest to zero we have:

The biggest negative eigenvalue of $T_{n}[-600] u(t)$ in $X_{1}$ is $-9\text{,}565.99$.

The least positive eigenvalue in $X_{2}$ is 11.5685.

The biggest negative eigenvalue in $X_{3}$ is −28.9753.
Hence, using Theorem 2.10, we can affirm that the operator $T_{n}[M] u(t)$, defined in (44):

in $X_{1}$ is inverse negative if, and only if, $M\in [-600-11.5685,-600+9\text{,}565.99)=[-611.5685,8\text{,}965.99)$,

in $X_{2}$ is inverse positive if, and only if, $M\in (-600-11.5685,-600+28.9753]=(-611.5685,-571.0247]$,

in $X_{3}$ is inverse negative if, and only if, $M\in [-600-11.5685,-600+28.9753)=[-611.5685,-571.0247)$.
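The endpoints listed above are obtained simply by shifting the computed eigenvalues by $\bar{M}=-600$, that is, as $\bar{M}-\lambda$. A sketch of the arithmetic (variable names ours):

```python
# Assemble the constant-sign interval endpoints as M_bar - lambda
# (variable names ours; eigenvalues as computed above).
M_bar = -600.0
lam_pos = 11.5685         # least positive eigenvalue in X_2
lam_neg_big = -9565.99    # biggest negative eigenvalue in X_1
lam_neg_small = -28.9753  # biggest negative eigenvalue used for X_2, X_3

print(M_bar - lam_pos)        # about -611.5685
print(M_bar - lam_neg_big)    # about 8965.99
print(M_bar - lam_neg_small)  # about -571.0247
```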
Operators with nonconstant coefficients
We have already seen that, by applying Theorem 3.1, it is much easier to calculate the optimal intervals of M where the Green’s function related to the operator $T_{n}[M] u(t)$ has constant sign than to obtain the expression of the Green’s function explicitly. But for an operator with nonconstant coefficients this characterization is even more useful, because in most situations we are not able to obtain the explicit expression of the Green’s function.
Consider now the thirdorder operator
for which, by means of Theorem 2.3, we can verify that the equation $T_{n}[0] u(t)=0$ is disconjugate on $[0,1]$.
If we calculate numerically the eigenvalues closest to zero of the operator defined in (45) we obtain:

$(4.19369)^{3}$ is the least positive eigenvalue of the operator $T_{n}[0] u(t)$ in $X_{1}$.

$-(4.21255)^{3}$ is the biggest negative eigenvalue of the operator $T_{n}[0] u(t)$ in $X_{2}$.
So, we can affirm

the Green’s function related to the operator $T_{n}[M] u(t)$ in $X_{1}$ is positive if, and only if, $M\in(-(4.19369)^{3},(4.21255)^{3}]$,

the Green’s function related to the operator $T_{n}[M] u(t)$ in $X_{2}$ is negative if, and only if, $M\in[-(4.19369)^{3},(4.21255)^{3})$.
We can also apply the method to a fourth-order operator whose eigenvalues were likewise obtained numerically.
We can verify, by means of Theorem 2.3 again, that $T_{n}[0] u(t)=0$ is disconjugate on $[0,1]$.
If we calculate its eigenvalues we obtain:

The biggest negative eigenvalue in $X_{1}$ is $-(5.5325)^{4}$.

The least positive eigenvalue in $X_{2}$ is $(4.7235)^{4}$.

The biggest negative eigenvalue in $X_{3}$ is $-(5.5815)^{4}$.
So, applying Theorem 3.1, we conclude that:

the Green’s function related to the operator $T_{n}[M] u(t)$ in $X_{1}$ is negative if, and only if, $M\in[-(4.7235)^{4},(5.5325)^{4})$,

the Green’s function related to the operator $T_{n}[M] u(t)$ in $X_{2}$ is positive if, and only if, $M\in(-(4.7235)^{4},(5.5325)^{4}]$,

the Green’s function related to the operator $T_{n}[M] u(t)$ in $X_{3}$ is negative if, and only if, $M\in[-(4.7235)^{4},(5.5815)^{4})$.
The disconjugacy hypothesis cannot be removed in Theorem 3.1
In this last section we show that the disconjugacy hypothesis of Theorem 3.1, for some $M=\bar{M}$, cannot be avoided in general.
To this end, we consider the operator
coupled with the two-point boundary value conditions
Equation (47) is not disconjugate for $M=0$; indeed,
is a solution of $T_{4}[0] u(t)=0$ with five zeros on $[0,1]$.
First, we verify that the Green’s function related to problem (47)-(48) satisfies condition ($\mathrm{N}_{g}$) for $\bar{M}=0$. So, by means of Theorem 2.13, we know that $N_{T}=[\mu ,-\lambda_{1})$ for some $\mu\le0$.
In a second part, we will prove that $\mu\neq-\lambda _{2}$, with $\lambda _{2}$ the first eigenvalue related to the operator $T_{4}[0]$ on the space $X_{2}$.
As a consequence, we deduce that the validity of Theorem 3.1 is not ensured when the disconjugacy assumption fails.
We point out that, since the existence of at least one M̄ for which the operator $T_{4}[\bar{M}]$ is disconjugate on $[0,1]$ implies the validity of Theorem 3.1, the operator $T_{4}[M]$ cannot be disconjugate on $[0,1]$ for any real parameter M, not only for $\bar{M}=0$.
First, we obtain the expression of the Green’s function related to the operator $T_{4}[0] u(t)$ in $X_{3}$, $g_{0}(t,s)$. By means of the Mathematica package developed in [24], we see that it obeys the expression
Let us see now that $g_{0}(t,s)\leq0$ on $[0,1]\times[0,1]$ and that it satisfies condition ($\mathrm{N}_{g}$), i.e., the following inequality is satisfied:
To study the behavior on a neighborhood of $t=0$ and $t=1$, we define the following functions:
In the sequel we will prove that both functions are strictly positive on $(0,1)$.
It is not difficult to verify that $k_{1}(1)=k_{1}'(1)=k_{1}''(1)=0$ and that
If we prove that $k_{1}^{(3)}(s)$ is strictly negative on $[0,1]$, then $k_{1}''(s)$ would be positive and $k_{1}'(s)$ negative on $[0,1)$, and we will deduce that $k_{1}(s)>0$ for $s\in(0,1)$.
Due to the fact that
we only need to check that
But the previous inequality follows immediately from the fact that
Consider now the function $k_{2}$. We see that $k_{2}(0)=0$ and
So, we study the sign of its first derivative
with
It is clear that such a function satisfies
which is positive for
Moreover, for $s\in[1-\frac{2 \pi}{5 \sqrt{3}},1-\frac{ \pi}{5 \sqrt{3}}]\approxeq[0.27448,0.63724]$ we see that
and the right-hand side of the previous equality is positive for
Then, we see that $k_{2}'(s)>0$ for $s\in[0,1-\frac{ \pi}{5 \sqrt{3}}]$, and, as a consequence, the same holds for $k_{2}(s)$.
On the other hand, we see that $k_{2}(1)=k_{2}'(1)=0$ and $k_{2}''(1)=1$, moreover,
where
Now, we must verify that ${k_{2}}_{1}(s)>0$.
If $s>0.9$ we can bound it from below by the following function:
It is clear that it is positive for $s\in(s_{1},1]$, where
which ensures that $k_{2}(s)>0$ on $(0.9,1)$.
On the other hand, for every $s\in[0,1]$, function $300 e^{10 s+5} k_{2}(s)$ is bounded from below by
which is positive for $s\in(s_{2},s_{3})$, where
and
So, we conclude that $k_{2}(s)>0$ for every $s\in(0,1)$.
Now, in order to deduce condition ($\mathrm{N}_{g}$), we only have to verify that $g_{0}(t,s)<0$ for every $(t,s)\in(0,1)\times(0,1)$.
If $t< s$ we can express
where
So, we must prove that both functions are positive on $(0,1)$.
$\ell_{1}(s)$ is a positive multiple of $k_{1}(s)$, so, as we have proved before, it is positive for $s\in(0,1)$.
To study the sign of $\ell_{2}$, since it satisfies $\ell_{2}(0)=\ell _{2}'(0)=\ell_{2}''(0)=0$, from the following expressions, valid for all $t\in[0,1]$:
we deduce that $\ell_{2}(t)>0$ for every $t\in(0,1)$.
Let us see now what happens for $0< s\leq t<1$.
We can express $g_{0}(t,s)$ as follows:
where
and
From the previously proved positiveness of $\ell_{1}$ and $\ell_{2}$, we know that $p_{1}(t,s)>0$.
On the other hand, since $p_{2}(0)=p_{2}'(0)=p_{2}''(0)=0$, if we verify that $p_{2}^{(3)}(r)>0$ for every $r\in[0,1]$, then we conclude that the same holds for $p_{2}$ on $(0,1]$. In this case
This function is trivially positive whenever $0 \le r\leq\frac{\pi }{10 \sqrt{3}}\approxeq0.18138$.
Moreover, for every $r\in[0,1]$, we see that
which is positive if, and only if, $r>\frac{\log(2)}{15}\approxeq0.0462$.
As a consequence, we deduce that $p_{2}(r)>0$ for every $r\in(0,1]$.
Then, if we prove that $p_{2}(t-s)< p_{1}(t,s)$ for $0< s\leq t<1$, we can conclude that $g_{0}(t,s)<0$.
Notice that, if we have two strictly convex functions on a suitable interval, we may affirm that they have at most two common points. In the sequel, to prove our result, we use this property.
Since by definition $g_{0}(1,s)=0$, we know that $p_{1}(1,s)=p_{2}(1-s)$ for every fixed $s\in(0,1)$.
From the fact, proved before, that $k_{2}>0$ on $(0,1)$, we know that $g_{0}(t,s)<0$ on a neighborhood of $t=1$ for every $s\in(0,1)$. Then $p_{1}(t,s)>p_{2}(t-s)$ on a neighborhood of $t=1$ for every $s\in(0,1)$.
Let us see now that, for every $s\in(0,1)$, $p_{1}(t,s)$ and $p_{2}(t-s)$ are convex functions of t.
By direct calculation, we see that
so we only need to verify that
The following inequality is trivially fulfilled:
We see that
since $q_{1}(0)=0$, we conclude that $q_{1}>0$ and, as a consequence, ${p_{1}}_{1}(t)>0$ on $(0,1]$ and also $\frac{\partial^{2}}{\partial t^{2}}p_{1}(t,s)>0$.
We have already proved that $p_{2}^{(3)}(r)>0$ for $r\in[0,1]$, and $p_{2}''(0)=0$; so, for every fixed $s\in(0,1)$, $p_{2}''(t-s)>0$ for every $t\in(s,1]$.
As a consequence, for any fixed $s\in(0,1)$, both $p_{1}(t,s)$ and $p_{2}(t-s)$ are convex functions of t.
From the fact that $p_{1}(t,s)>p_{2}(t-s)$ on a neighborhood of $t=1$, $p_{1}(1,s)=p_{2}(1-s)$, and also $p_{1}(s,s)>0=p_{2}(0)$, we can affirm that $p_{1}(t,s)>p_{2}(t-s)$ for $t\in[s,1)$, and then $g_{0}(t,s)<0$ for $0< s\leq t<1$, so condition ($\mathrm{N}_{g}$) is fulfilled.
Now, as a consequence of Theorem 2.13, we know that $g_{M}(t,s)\leq 0$ for $M\in[0,-\lambda_{1})$, where $\lambda_{1}<0$ is the biggest negative eigenvalue of $T_{4}[0] u(t)$ in $X_{3}$.
To verify that Theorem 3.1 does not hold in this case, we will prove that for $M<0$ the sign change does not come at $-\lambda_{2}$, the opposite of the least positive eigenvalue of $T_{4}[0] u(t)$ in $X_{2}$.
As in the previous section, we can obtain numerically the first eigenvalues of $T_{4}[0]$, given by the following approximate values:

The biggest negative eigenvalue in $X_{1}$ is $\lambda_{3}\approxeq -(12.529)^{4}$.

The least positive eigenvalue in $X_{2}$ is $\lambda_{2}\approxeq (10.895)^{4}$.

The biggest negative eigenvalue in $X_{3}$ is $\lambda_{1}\approxeq -(9.458)^{4}$.
Remark 5.1
Realize that, since $T_{4}[0] u(t)=0$ is not disconjugate on $[0,1]$, we have no a priori information regarding the signs of the eigenvalues $\lambda_{3}$ and $\lambda_{2}$. However, since $g_{0}$ satisfies ($\mathrm{N}_{g}$), we can ensure, without calculating it, that $\lambda_{1}<0$.
Finally, let us see that there exists $M^{*}>-\lambda_{2}$ for which $g_{M^{*}}$ has no constant sign on $I \times I$.
We are going to study the following function:
As we have seen in the proof of Theorem 3.1, if this function is not of constant sign on I, then the Green’s function must necessarily change sign in a neighborhood of $s=0$.
For $M^{*}=-\frac{59\text{,}584}{9} \approxeq-(9.02032)^{4}$, $v(t)$ obeys
This function changes sign on I (see Figure 3).
As a consequence, the Green’s function has no constant sign for a value of M bigger than $-\lambda_{2}$.
Even more, we can verify numerically the interval of values of M for which $g_{M}(t,s)$ is nonpositive on $I \times I$. We observe that the change of sign first occurs in the interior of $I\times I$: it appears at $(t,s)\approxeq(0.7186,0.0307)\in(0,1)\times(0,1)$ for $M \approxeq(7.87022)^{4}$. So we deduce that the interval is given by $(\lambda_{1},(7.87022)^{4}]$.
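A numerical sign check of this kind can be carried out by discretizing the operator: the entries of the inverse of the finite-difference matrix approximate $g_{M}(t_{i},t_{j})$ up to the factor $1/h$. Again, the coefficients of $T_{4}[0]$ are not reproduced in this section, so the sketch below (helper names `clamped_beam_matrix` and `green_matrix` are ours) uses the illustrative clamped problem $u^{(4)}+Mu$, whose Green’s function keeps constant sign until M crosses the first eigenvalue $\approx -(4.7300)^{4}\approx -500.56$.

```python
import numpy as np

def clamped_beam_matrix(N, M):
    """Pentadiagonal finite-difference matrix for u'''' + M u on (0,1)
    with clamped (2,2) conditions u(0)=u'(0)=u(1)=u'(1)=0, built from
    the stencil (1,-4,6,-4,1)/h^4 and ghost-point reflections for u'=0."""
    h = 1.0 / N
    n = N - 1                                  # interior nodes t_i = i*h
    stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) / h**4
    A = np.zeros((n, n))
    for i in range(n):
        for k, c in zip(range(i - 2, i + 3), stencil):
            if 0 <= k < n:                     # boundary values u_0=u_N=0 drop out
                A[i, k] += c
    A[0, 0] += 1.0 / h**4                      # ghost reflection u(-h) = u(h)
    A[-1, -1] += 1.0 / h**4                    # ghost reflection u(1+h) = u(1-h)
    return A + M * np.eye(n)

def green_matrix(N, M):
    # (A^{-1})_{ij} * N = (A^{-1})_{ij} / h approximates g_M(t_i, t_j)
    return np.linalg.inv(clamped_beam_matrix(N, M)) * N

G0 = green_matrix(80, 0.0)      # M inside the constant-sign interval
G1 = green_matrix(80, -600.0)   # M just past the first eigenvalue
# Expected: no sign change in G0, while G1 already takes negative values.
```

Scanning a grid of M values and recording where the minimum (or maximum) entry first changes sign yields the endpoints of the constant-sign interval, as done above for the operator of this section.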
As a consequence, this example shows that Theorem 3.1 is not true in general if the disconjugacy hypothesis is suppressed.
References
 1.
De Coster, C, Habets, P: Two-Point Boundary Value Problems: Lower and Upper Solutions. Mathematics in Science and Engineering, vol. 205. Elsevier, Amsterdam (2006)
 2.
Ladde, GS, Lakshmikantham, V, Vatsala, AS: Monotone Iterative Techniques for Nonlinear Differential Equations. Pitman, Boston (1985)
 3.
Cabada, A: Green’s Functions in the Theory of Ordinary Differential Equations. Briefs in Mathematics. Springer, Berlin (2014)
 4.
Cabada, A: The method of lower and upper solutions for second, third, fourth, and higher order boundary value problems. J. Math. Anal. Appl. 185, 302–320 (1994)
 5.
Krasnosel’skiĭ, MA: Positive Solutions of Operator Equations. Noordhoff, Groningen (1964)
 6.
Cabada, A, Cid, JA: Existence and multiplicity of solutions for a periodic Hill’s equation with parametric dependence and singularities. Abstr. Appl. Anal. 2011, Article ID 545264 (2011)
 7.
Graef, JR, Kong, L, Wang, H: A periodic boundary value problem with vanishing Green’s function. Appl. Math. Lett. 21, 176–180 (2008)
 8.
Graef, JR, Kong, L, Wang, H: Existence, multiplicity, and dependence on a parameter for a periodic boundary value problem. J. Differ. Equ. 245, 1185–1197 (2008)
 9.
Torres, PJ: Existence of one-signed periodic solutions of some second-order differential equations via a Krasnoselskii fixed point theorem. J. Differ. Equ. 190(2), 643–662 (2003)
 10.
Cabada, A, Cid, JA: Existence of a nonzero fixed point for nondecreasing operators via Krasnoselskii’s fixed point theorem. Nonlinear Anal. 71, 2114–2118 (2009)
 11.
Cabada, A, Cid, JA, Infante, G: New criteria for the existence of nontrivial fixed points in cones. Fixed Point Theory Appl. 2013, 125 (2013)
 12.
Cid, JA, Franco, D, Minhós, F: Positive fixed points and fourth-order equations. Bull. Lond. Math. Soc. 41, 72–78 (2009)
 13.
Franco, D, Infante, G, Perán, J: A new criterion for the existence of multiple solutions in cones. Proc. R. Soc. Edinb., Sect. A 142, 1043–1050 (2012)
 14.
Persson, H: A fixed point theorem for monotone functions. Appl. Math. Lett. 19, 1207–1209 (2006)
 15.
Coppel, WA: Disconjugacy. Lecture Notes in Mathematics, vol. 220. Springer, Berlin (1971)
 16.
Zettl, A: A constructive characterization of disconjugacy. Bull. Am. Math. Soc. 81, 145–147 (1975)
 17.
Zettl, A: A characterization of the factors of ordinary linear differential operators. Bull. Am. Math. Soc. 80, 498–499 (1974)
 18.
Erbe, L: Hille-Wintner type comparison theorem for self-adjoint fourth order linear differential equations. Proc. Am. Math. Soc. 80(3), 417–422 (1980)
 19.
Kwong, MK, Zettl, A: Asymptotically constant functions and second order linear oscillation. J. Math. Anal. Appl. 93(2), 475–494 (1983)
 20.
Simons, W: Some disconjugacy criteria for self-adjoint linear differential equations. J. Math. Anal. Appl. 34, 445–463 (1971)
 21.
Elias, U: Eventual disconjugacy of $y^{(n)}+\mu p(x) y=0$ for every μ. Arch. Math. 40(2), 193–200 (2004)
 22.
Cabada, A, Cid, JA, Sanchez, L: Positivity and lower and upper solutions for fourth order boundary value problems. Nonlinear Anal. 67, 1599–1612 (2007)
 23.
Li, H, Feng, Y, Bu, C: Nonconjugate boundary value problem of a third order differential equation. Electron. J. Qual. Theory Differ. Equ. 2015, 21 (2015)
 24.
Cabada, A, Cid, JA, Máquez-Villamarín, B: Computation of Green’s functions for boundary value problems with Mathematica. Appl. Math. Comput. 219, 1919–1936 (2012)
 25.
Ma, R, Lu, Y: Disconjugacy and monotone iteration method for third-order equations. Commun. Pure Appl. Anal. 13(3), 1223–1236 (2014)
 26.
Cabada, A, Enguiça, RR: Positive solutions of fourth order problems with clamped beam boundary conditions. Nonlinear Anal. 74, 3112–3122 (2011)
 27.
Cabada, A, Fernández-Gómez, C: Constant sign solutions of two-point fourth order problems. Appl. Math. Comput. 263, 122–133 (2015)
 28.
Cabada, A, Saavedra, L: Disconjugacy characterization by means of spectral $(k,n-k)$ problems. Appl. Math. Lett. 52, 21–29 (2016)
Acknowledgements
Alberto Cabada was partially supported by Ministerio de Educación, Cultura y Deporte, Spain, and FEDER, project MTM2013-43014-P. Lorena Saavedra was partially supported by Ministerio de Educación, Cultura y Deporte, Spain, and FEDER, project MTM2013-43014-P, and Plan I2C scholarship, Consellería de Educación, Cultura e O.U., Xunta de Galicia, and FPU scholarship, Ministerio de Educación, Cultura y Deporte, Spain. The authors would also like to express their special thanks to the reviewer of the paper for his/her remarks, which considerably improved the content of this paper.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors have contributed equally and significantly in writing this article. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Received
Accepted
Published
DOI
MSC
 34B05
 34B08
 34B09
 34B27
 34C10
Keywords
 nth order boundary value problem
 Green’s functions
 disconjugation
 maximum principles
 spectral theory