
The eigenvalue characterization for the constant sign Green’s functions of \((k,n-k)\) problems

Abstract

This paper is devoted to the study of the sign of the Green’s function related to a general linear nth-order operator, depending on a real parameter, \(T_{n}[M]\), coupled with the \((k,n-k)\) boundary value conditions.

If the operator \(T_{n}[\bar{M}]\) is disconjugate for a given \(\bar{M}\), we describe the interval of values of the real parameter M for which the Green’s function has constant sign.

One of the extremes of the interval is given by the first eigenvalue of the operator \(T_{n}[\bar{M}]\) satisfying \((k,n-k)\) conditions.

The other extreme is related to the minimum (maximum) of the first eigenvalues of \((k-1,n-k+1)\) and \((k+1,n-k-1)\) problems.

Moreover, if \(n-k\) is even (odd) the Green’s function cannot be nonpositive (nonnegative).

To illustrate the applicability of the obtained results, we calculate the parameter intervals of constant sign Green’s functions for particular operators. Our method avoids the necessity of calculating the expression of the Green’s function.

We finalize the paper by presenting a particular equation for which it is shown that the disconjugacy hypothesis on the operator \(T_{n}[\bar{M}]\) for a given \(\bar{M}\) cannot be removed.

1 Introduction

It is very well known that the validity of the method of lower and upper solutions, coupled with the monotone iterative techniques [1, 2], is equivalent to the constant sign of the Green’s function related to the linear part of the studied problem [3, 4]. Moreover, by means of the celebrated Krasnosel’skiĭ contraction/expansion fixed point theorem [5], nonexistence, existence, and multiplicity results are derived from the construction of suitable cones on Banach spaces. Such a construction relies on adequate properties of the Green’s function, one of them being its constant sign [6–9]. Recently, the combination of the two previous methods has been proved to be a useful tool to ensure the existence of solutions [10–14].

Having in mind the power of this constant sign property, we will describe the interval of parameters for which the Green’s function related to the general linear nth-order equation

$$ T_{n}[M] u(t) \equiv u^{(n)}(t)+a_{1}(t) u^{(n-1)}(t)+ \cdots+a_{n-1}(t) u'(t)+ \bigl(a_{n}(t)+M \bigr) u(t)=0, $$
(1)

\(t \in I \equiv[a,b]\), coupled with the so-called \((k,n-k)\) two-point boundary value conditions:

$$ u(a)=u'(a)=\cdots=u^{(k-1)}(a)=u(b)=u'(b)= \cdots=u^{(n-k-1)}(b)= 0, $$
(2)

\(1 \le k\le n-1\), has constant sign on its square of definition \(I \times I\).

The main hypothesis consists of assuming that there is a real parameter \(\bar{M}\) for which the operator \(T_{n}[\bar{M}]\) is disconjugate on I.

An exhaustive study of the general theory and the fundamental properties of disconjugacy is compiled in the classical book of Coppel [15]. Different sufficient criteria to ensure the disconjugacy character of the linear operator \(T_{n}[0]\) have been developed in the literature; we refer the reader to [16, 17]. Sufficient conditions for particular cases have been obtained in [18–20] and, more recently, in [21]. We mention that the operator \(u^{(n)}(t)+a_{1}(t) u^{(n-1)}(t)\) is always disconjugate on I (see [15] for details); in particular, the results presented here are valid for the operator \(u^{(n)}(t)+M u(t)\).

As has been shown in [15], the disconjugacy character implies the constant sign of the Green’s function \(g_{M}\) related to problem (1)-(2). However, as we will see in this paper, the converse is not true in general: there are real parameters M for which the Green’s function has constant sign but equation (1) is not disconjugate. In other words, disconjugacy is only a sufficient condition for the constant sign of the Green’s function related to problem (1)-(2).

In fact, from the disconjugacy of the operator \(T_{n}[\bar{M}]\) on I, it is shown in [15] that the Green’s function \(g_{\bar{M}}\) satisfies a suitable condition, stronger than its constant sign. Such a condition implies the one introduced in Section 1.8 of [3]. So, following the results given in that reference, we conclude that the set of parameters M for which \(g_{M}\) has constant sign is an interval \(H_{T}\). Moreover, if \(n-k\) is even, then the maximum of \(H_{T}\) is the opposite of the biggest negative eigenvalue of problem (1)-(2); when \(n-k\) is odd, the minimum of \(H_{T}\) is the opposite of the least positive eigenvalue of such a problem.

Thus, the difficulty lies in the characterization of the other extreme of the interval \(H_{T}\). As shown in Section 1.8 of [3], this extreme is not an eigenvalue of the considered problem, so obtaining its exact value is not immediate. In practical situations it is necessary to obtain the expression of the Green’s function, which is, in general, a difficult matter. We point out that this problem is not restricted to the \((k,n-k)\) boundary conditions; the difficulty in obtaining the non-eigenvalue extreme remains for any kind of linear conditions [22, 23]. In [24], provided the operator \(T_{n}[M]\) has constant coefficients, a computer algorithm has been developed that calculates the exact expression of the Green’s function coupled with two-point boundary value conditions. However, such an expression is often too complicated to manage, and describing the interval \(H_{T}\) from it is very difficult in practical situations. In fact, there is no direct method of construction for non-constant coefficients.

We mention that the disconjugacy theory has been used in [25] to obtain the values for which the Green’s functions related to the third-order operators \(u'''+M u^{(i)}\), \(i=0,1,2\), coupled with conditions \((1,2)\) and \((2,1)\), have constant sign. A similar procedure has been performed in [26] for the fourth-order operator \(u^{(4)}+M u\), coupled with conditions \((2,2)\) and, more recently, in [27] with conditions \((1,3)\) and \((3,1)\). In all these situations the interval of disconjugacy is obtained and then, by means of the expression of the Green’s function, it is proved that such an interval is optimal. As mentioned above, this coincidence holds only in particular cases such as the ones treated in these papers; in general, the intervals of disconjugacy and of constant sign of the Green’s function do not coincide for the nth-order operator \(T_{n}[M]\).

For this reason, in this work we give a general characterization of the regular extreme of the constant sign interval \(H_{T}\) by means of spectral theory. We will show that it is an eigenvalue of the same operator \(T_{n}[M]\) but related to different two-point boundary value conditions. In fact, if \(n-k\) is even, it is the minimum of the two least positive eigenvalues related to conditions \((k-1,n-k+1)\) and \((k+1,n-k-1)\); it is the maximum of the two biggest negative eigenvalues of such problems when \(n-k\) is odd. Thus we obtain a characterization valid for the general operator \(T_{n}[M]\), and we avoid the necessity of calculating the Green’s function and of studying the dependence of its sign on the real parameter M.

We note that if the operator \(T_{n}[M]\) has constant coefficients, to obtain the corresponding eigenvalues we only need to calculate the determinant of the coefficient matrix of a linear homogeneous algebraic system. Numerical methods are also valid in the non-constant case.
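For the simplest constant-coefficient model this numerical approach can be sketched as follows (our illustration, not taken from the paper): for \(T_{2}[0]u=u''\) on \([0,1]\) with conditions \(X_{1}\), the eigenvalues of \(u''=\lambda u\), \(u(0)=u(1)=0\), are \(\lambda_{j}=-(j\pi)^{2}\), and a standard central finite-difference scheme recovers the one of least absolute value.

```python
import numpy as np

# Eigenvalues of u'' = lambda * u with u(0) = u(1) = 0 are -(j*pi)^2.
# Approximate the second derivative by central differences on a uniform grid
# and compare the eigenvalue of least absolute value with -pi^2.
N = 500                        # number of interior grid points (assumed resolution)
h = 1.0 / (N + 1)
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

eigs = np.linalg.eigvalsh(D2)            # all eigenvalues are real and negative
closest = eigs[np.argmin(np.abs(eigs))]  # eigenvalue of least absolute value

print(closest)  # approximately -pi^2
```

The same discretization idea applies, in principle, to non-constant coefficients, which is what the last sentence above alludes to.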

It is important to mention that, as a consequence of the obtained results, denoting by \(g_{M}\) the Green’s function related to problem (1)-(2), we conclude that \((-1)^{n-k} g_{M}(t,s)\) cannot be nonpositive on \(I \times I\) for any \(M \in\mathbb{R}\).

The paper is organized as follows: in the preliminary Section 2 we introduce the fundamental concepts needed in the development of the paper. Section 3 is devoted to the proof of the main result, in which the regular extreme is obtained via spectral theory. In Section 4 some particular cases are considered to show the applicability of the obtained results. In the last section we introduce an example showing that the disconjugacy hypothesis of the main result cannot be removed.

2 Preliminaries

In this section, for convenience of the reader, we introduce the fundamental tools in the theory of disconjugacy and Green’s functions that will be used in the development of further sections.

Definition 2.1

Let \(a_{k}\in C^{n-k}(I)\) for \(k=1,\ldots,n\). The nth-order linear differential equation (1) is said to be disconjugate on an interval I if every nontrivial solution has fewer than n zeros on I, multiple zeros being counted according to their multiplicity.
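For instance (a standard illustration, not taken from the text), disconjugacy can be checked directly for the harmonic oscillator:

```latex
% Example: u'' + u = 0 on I = [a,b].
Every nontrivial solution of $u'' + u = 0$ has the form
$u(t) = A\sin(t-\varphi)$, $A \neq 0$, whose consecutive zeros are at
distance $\pi$. Hence the equation is disconjugate on $I=[a,b]$ whenever
$b-a < \pi$ (no nontrivial solution attains $n = 2$ zeros on $I$), while
for $b-a \geq \pi$ the solution $\sin(t-a)$ vanishes at $t=a$ and at
$t=a+\pi \in I$, so disconjugacy is lost.
```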

Definition 2.2

The functions \(u_{1},\ldots, u_{n} \in C^{n}(I)\) are said to form a Markov system on the interval I if the n Wronskians

$$ W(u_{1},\ldots,u_{k})= \left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} u_{1}&\cdots& u_{k}\\ \vdots&\cdots&\vdots\\ u_{1}^{(k-1)}&\cdots&u_{k}^{(k-1)} \end{array}\displaystyle \right \vert ,\quad k=1,\ldots,n , $$
(3)

are positive throughout I.
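As a quick symbolic check of Definition 2.2 (our example, with an assumed fundamental system), the solutions \(\{1, t, t^{2}/2\}\) of \(u'''=0\) form a Markov system on any interval, since all three Wronskians equal 1:

```python
import sympy as sp

t = sp.symbols('t')
# Fundamental system of u''' = 0; Definition 2.2 requires the Wronskians
# W(u1), W(u1,u2), W(u1,u2,u3) to be positive throughout I.
u = [sp.Integer(1), t, t**2 / 2]

def wronskian(funcs):
    n = len(funcs)
    # Row i holds the i-th derivatives of the functions.
    M = sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], t, i))
    return sp.simplify(M.det())

ws = [wronskian(u[:k]) for k in (1, 2, 3)]
print(ws)  # [1, 1, 1]: positive on any interval, so this is a Markov system
```

By Theorem 2.3 below, this is consistent with \(u'''=0\) being disconjugate on every compact interval.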

The following result about this concept can be found in Chapter 3 of [15].

Theorem 2.3

The linear differential equation (1) has a Markov fundamental system of solutions on the compact interval I if, and only if, it is disconjugate on I.

In order to introduce the concept of the Green’s function related to the nth-order scalar problem (1)-(2), we consider the following equivalent first-order vectorial problem:

$$ x'(t)=A(t) x(t) , \quad t\in I ,\qquad B x(a)+C x(b)=0, $$
(4)

with \(x(t) \in\mathbb{R}^{n}\), \(A(t), B, C\in\mathcal{M}_{n\times n}\), defined by

$$\begin{aligned}& x(t)=\left ( \textstyle\begin{array}{@{}c@{}} u(t)\\ u'(t)\\ \vdots\\ u^{(n-1)}(t) \end{array}\displaystyle \right ),\qquad A(t)=\left ( \textstyle\begin{array}{@{}c@{\ }|@{\ }c@{}} 0& I_{n-1} \\ \hline -(a_{n}(t)+M) & -a_{n-1}(t)\cdots-a_{1}(t) \end{array}\displaystyle \right ), \\& B=\left ( \textstyle\begin{array}{@{}c@{\ }|@{\ }c@{}} I_{k}&0 \\ \hline 0&0 \end{array}\displaystyle \right ),\qquad C= \left ( \textstyle\begin{array}{@{}c@{\ }|@{\ }c@{}} 0&0 \\ \hline I_{n-k}&0 \end{array}\displaystyle \right ). \end{aligned}$$
(5)

Here \(I_{j}\), \(j=1, \ldots,n-1\), is the \(j \times j\) identity matrix.
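To make the block structure of (5) concrete, here is a minimal numerical sketch (sizes \(n=3\), \(k=2\) chosen by us for illustration) showing that \(Bx(a)+Cx(b)=0\) encodes exactly the conditions \(u(a)=u'(a)=u(b)=0\):

```python
import numpy as np

# First-order reformulation (4)-(5) for n = 3, k = 2 (assumed sample sizes):
# boundary conditions u(a) = u'(a) = u(b) = 0.
n, k = 3, 2
B = np.zeros((n, n)); B[:k, :k] = np.eye(k)          # I_k block in the top-left
C = np.zeros((n, n)); C[k:, :n - k] = np.eye(n - k)  # I_{n-k} block in the bottom-left

x_a = np.array([0.0, 0.0, 5.0])   # (u(a), u'(a), u''(a)) of an admissible u
x_b = np.array([0.0, -1.0, 7.0])  # (u(b), u'(b), u''(b)): only u(b) is constrained

residual = B @ x_a + C @ x_b      # the vector (u(a), u'(a), u(b))
print(residual)  # [0. 0. 0.]
```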

Definition 2.4

We say that G is a Green’s function for problem (4) if it satisfies the following properties:

  1. (G1)

    \(G\equiv(G_{i,j})_{i,j\in\{1,\ldots,n\} }\colon (I\times I)\backslash \lbrace(t,t) , t\in I \rbrace \rightarrow\mathcal{M}_{n\times n}\).

  2. (G2)

    G is a \(C^{1}\) function on the triangles \(\lbrace(t,s)\in\mathbb{R}^{2} , a\leq s< t\leq b \rbrace\), and \(\lbrace(t,s)\in\mathbb{R}^{2} , a\leq t < s\leq b \rbrace\).

  3. (G3)

    For all \(i\neq j\) the scalar functions \(G_{i,j}\) have a continuous extension to \(I\times I\).

  4. (G4)

    For all \(s\in(a,b)\), the following equality holds:

    $$\frac{\partial}{\partial t} G(t,s)=A(t) G(t,s) \quad \text{for all } t\in I\backslash \lbrace s \rbrace. $$
  5. (G5)

    For all \(s\in(a,b)\) and \(i\in \lbrace 1,\ldots , n \rbrace\), the following equalities are fulfilled:

    $$\lim_{t\rightarrow s^{+}}G_{i,i}(t,s)=\lim_{t\rightarrow s^{-}}G_{i,i}(s,t)=1+ \lim_{t\rightarrow s^{+}}G_{i,i}(s,t)=1+\lim_{t\rightarrow s^{-}}G_{i,i}(t,s) . $$
  6. (G6)

    For all \(s\in(a,b)\), the function \(t\rightarrow G(t,s)\) satisfies the boundary conditions

    $$B G(a,s)+C G(b,s)=0 . $$

Remark 2.5

In the previous definition, item (G5) can be modified to obtain the characterization of the lateral limits for \(s=a\) and \(s=b\) as follows:

$$\lim_{t\rightarrow a^{+}}G_{i,i}(t,a)=1+\lim_{t\rightarrow a^{+}}G_{i,i}(a,t) \quad \text{and}\quad \lim_{t\rightarrow b^{-}}G_{i,i}(b,t)=1+\lim _{t\rightarrow b^{-}}G_{i,i}(t,b) . $$

It is very well known that the Green’s function related to this problem obeys the following expression ([3], Section 1.4):

$$ G(t,s)=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} g_{1}(t,s)&g_{2}(t,s)&\cdots&g_{n-1}(t,s)&g_{M}(t,s) \\ \frac{\partial}{\partial t} g_{1}(t,s)& \frac{\partial}{\partial t} g_{2}(t,s)&\cdots&\frac{\partial}{\partial t} g_{n-1}(t,s)& \frac {\partial}{\partial t} g_{M}(t,s) \\ \vdots&\vdots&\cdots&\vdots&\vdots \\ \frac{\partial^{n-1} }{\partial t^{n-1}} g_{1}(t,s)&\frac{\partial^{n-1} }{\partial t^{n-1}} g_{2}(t,s)&\cdots&\frac{\partial^{n-1} }{\partial t^{n-1}} g_{n-1}(t,s)&\frac{\partial^{n-1}}{\partial t^{n-1}} g_{M}(t,s) \end{array}\displaystyle \right ) , $$
(6)

where \(g_{M}(t,s)\) is the scalar Green’s function related to problem (1)-(2).

Using Definition 2.4 we can deduce the properties fulfilled by \(g_{M}(t,s)\). In particular, \(g_{M}\in C^{n-2}(I \times I)\) and it is a \({\mathcal{C}} ^{n}\) function on the triangles \(a\le s < t \le b\) and \(a\le t < s \le b\). Moreover, it satisfies, as a function of t, the two-point boundary value conditions (2) and solves equation (1) whenever \(t \neq s\).

We also mention a result, which appears in Chapter 3, Section 6 of [15], that connects disconjugacy and the sign of the Green’s function related to problem (1)-(2).

Lemma 2.6

If the linear differential equation (1) is disconjugate and \(g_{M}(t,s)\) is the Green’s function related to problem (1)-(2), then

$$\begin{aligned}& g_{M}(t,s) p(t) \geq 0 , \quad (t,s)\in I \times I , \\& \frac{g_{M}(t,s)}{p(t)} > 0 ,\quad (t,s)\in[a,b]\times(a,b) , \end{aligned}$$

where \(p(t)=(t-a)^{k} (t-b)^{n-k}\).
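Lemma 2.6 can be verified numerically in the classical case \(n=2\), \(k=1\) on \([0,1]\), where \(T_{2}[0]u=u''\) with \(u(0)=u(1)=0\) has the explicit Green’s function \(g(t,s)=t(s-1)\) for \(t\le s\) and \(g(t,s)=s(t-1)\) for \(t\ge s\) (a standard example; the explicit formula is ours, not part of the text):

```python
import numpy as np

# n = 2, k = 1 on [0, 1]: T_2[0]u = u'' with u(0) = u(1) = 0.
# Explicit Green's function and p(t) = (t-a)^k (t-b)^{n-k} = t(t-1).
def g(t, s):
    return t * (s - 1.0) if t <= s else s * (t - 1.0)

def p(t):
    return t * (t - 1.0)

ts = np.linspace(0.01, 0.99, 50)
# Lemma 2.6: g * p >= 0 on I x I  and  g / p > 0 on [a,b] x (a,b).
assert all(g(t, s) * p(t) >= 0 for t in ts for s in ts)
assert all(g(t, s) / p(t) > 0 for t in ts for s in ts)
print("sign conditions of Lemma 2.6 verified for u'' with (1,1) conditions")
```

Note that here \(n-k=1\) is odd and indeed \(g\le 0\), in agreement with \((-1)^{n-k}g_{M}\ge 0\).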

Remark 2.7

We mention that in the previous lemma, by means of the expression

$$\frac{g_{M}(t,s)}{p(t)}>0 ,\quad (t,s)\in[a,b]\times(a,b) $$

we are denoting

$$\frac{g_{M}(t,s)}{p(t)}>0 \quad \mbox{for all } (t,s)\in(a,b)\times(a,b) $$

and

$$\lim_{t\rightarrow a^{+}}\frac{g_{M}(t,s)}{p(t)}>0 \quad \text{and}\quad \lim _{t\rightarrow b^{-}}\frac{g_{M}(t,s)}{p(t)}>0 \quad \mbox{for all } s \in (a,b). $$

Moreover, due to the regularity of the function \(g_{M}\), we see that there is a positive constant K such that the following properties hold for all \(s \in(a,b)\):

$$0< \ell_{1}(s) = \lim_{t\rightarrow a^{+}}\frac{g_{M}(t,s)}{p(t)} = \frac {\frac{\partial^{k} }{\partial t^{k}} g_{M}(t,s)_{|_{t=a}}}{k! (a-b)^{n-k}} \le K $$

and

$$0< \ell_{2}(s) = \lim_{t\rightarrow b^{-}}\frac{g_{M}(t,s)}{p(t)} = \frac {\frac{\partial^{n-k} }{\partial t^{n-k}} g_{M}(t,s)_{|_{t=b}}}{(b-a)^{k} (n-k)!}\le K. $$

We note that such properties imply the following inequalities:

$$\begin{aligned}& (-1)^{n-k} g_{M}(t,s) > 0, \quad (t,s)\in(a,b)\times(a,b), \\& (-1)^{n-k}\frac{\partial^{k} }{\partial t^{k}} g_{M}(t,s)_{|_{t=a}} > 0, \quad s \in(a,b), \\& \frac{\partial^{n-k} }{\partial t^{n-k}} g_{M}(t,s)_{| _{t=b}} > 0, \quad s \in(a,b). \end{aligned}$$
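The limits \(\ell_{1}\), \(\ell_{2}\) can be checked symbolically in the same classical case \(n=2\), \(k=1\) on \([0,1]\) with \(g(t,s)=t(s-1)\) for \(t\le s\), \(g(t,s)=s(t-1)\) for \(t\ge s\) (our illustration, with the explicit Green’s function assumed):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
# n = 2, k = 1 on [0, 1]: branches of the Green's function and p(t) = t(t-1).
g_left = t * (s - 1)     # branch t <= s, used near t = a = 0
g_right = s * (t - 1)    # branch t >= s, used near t = b = 1
p = t * (t - 1)

# l1 should equal d^k g/dt^k|_{t=a} / (k! (a-b)^{n-k}); here (s-1)/(-1) = 1 - s.
l1 = sp.limit(g_left / p, t, 0, '+')
# l2 should equal d^{n-k} g/dt^{n-k}|_{t=b} / ((b-a)^k (n-k)!); here s.
l2 = sp.limit(g_right / p, t, 1, '-')

print(l1, l2)  # 1 - s  and  s: both positive for s in (0, 1)
```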

The adjoint of the operator \(T_{n}[M]\) is given by the following expression, see for details Section 1.4 of [3] or Chapter 3, Section 5 of [15]:

$$ T_{n}^{*}[M]v (t)\equiv(-1)^{n} v^{(n)}(t)+\sum_{j=1}^{n-1}(-1)^{j} (a_{n-j} v )^{(j)}(t)+ \bigl(a_{n}(t)+M \bigr) v(t) , $$
(7)

and its domain of definition is

$$\begin{aligned} D \bigl(T_{n}^{*}[M] \bigr) =& \Biggl\lbrace v\in C^{n}(I) \biggm|\sum_{j=1}^{n} \sum_{i=0}^{j-1} (-1)^{j-1-i} (a_{n-j} v)^{(j-1-i)}(b) u^{(i)}(b) \\ &=\sum_{j=1}^{n}\sum _{i=0}^{j-1} (-1)^{j-1-i} (a_{n-j} v)^{(j-1-i)}(a) u^{(i)}(a)\ (\mbox{with }a_{0}=1) , \forall u\in D \bigl(T_{n}[M] \bigr) \Biggr\rbrace . \end{aligned}$$
(8)

In our case, because of the boundary conditions (2), we can express the domain of the operator \(T_{n}[M]\), \(D(T_{n}[M])\), as

$$ X_{k}= \bigl\lbrace u\in C^{n}(I) \mid u(a) =\cdots =u^{(k-1)}(a)=u(b)=\cdots =u^{(n-k-1)}(b)=0 \bigr\rbrace , $$

so we can replace equation (8) with

$$\begin{aligned} D \bigl(T_{n}^{*}[M] \bigr) =& \Biggl\lbrace v\in C^{n}(I) \biggm|\sum_{j=n-k+1}^{n}\sum _{i=n-k}^{j-1} (-1)^{j-1-i} (a_{n-j} v)^{(j-1-i)}(b) u^{(i)}(b) \\ & = \sum_{j=k+1}^{n}\sum _{i=k}^{j-1} (-1)^{j-1-i} (a_{n-j} v)^{(j-1-i)}(a) u^{(i)}(a)\ (\text{with }a_{0}=1) , \forall u\in X_{k} \Biggr\rbrace . \end{aligned}$$

In order to simplify the previous expression, we choose a function \(u\in C^{n}(I)\) satisfying

$$\begin{aligned}& u^{(\sigma)}(a) = 0 ,\quad \sigma=0,\ldots,n-1, \\& u^{(\mu)}(b) = 0 ,\quad \mu=0,\ldots,n-2 , \\& u^{(n-1)}(b) = 1 . \end{aligned}$$

Realizing that \(a_{0}=1\), we conclude that every function \(v\in D(T_{n}^{*}[M])\) must satisfy \(v(b)=0\).

Moreover, if we now choose a function in \(C^{n}(I)\) that satisfies

$$\begin{aligned}& u^{(\sigma)}(a) = 0 , \quad \sigma=0,\ldots,n-1, \\& u^{(\mu)}(b) = 0 ,\quad \mu=0,\ldots,n-1 , \mu\neq n-2, \\& u^{(n-2)}(b) = 1, \end{aligned}$$

we conclude that any function \(v\in D(T_{n}^{*}[M])\) has to satisfy

$$-v'(b)+a_{1}(b) v(b)=0. $$

Since \(a_{1}\in C^{n-1}(I)\) and \(v(b)=0\), we conclude that \(v'(b)=0\).

Repeating this process we conclude that the domain of the adjoint operator is given by

$$ D \bigl(T_{n}^{*}[M] \bigr)=X_{n-k} . $$
(9)

The next result appears in Chapter 3, Theorem 9 of [15].

Theorem 2.8

Equation (1) is disconjugate on an interval I if, and only if, the adjoint equation \(T_{n}^{*}[M] y(t)=0\) is disconjugate on I.

We denote by \(g_{M}^{*}(t,s)\) the Green’s function of the adjoint operator \(T_{n}^{*}[M]\).

In Section 1.4 of [3] the following relationship is proved:

$$ g^{*}_{M}(t,s)=g_{M}(s,t) . $$
(10)

Defining now the following operator:

$$ \widehat{T}_{n} \bigl[(-1)^{n} M \bigr]:=(-1)^{n} T_{n}^{*}[M] , $$
(11)

we deduce, from the previous expression, that

$$ \hat{g}_{(-1)^{n} M}(t,s)=(-1)^{n} g_{M}^{*}(t,s)=(-1)^{n} g_{M}(s,t) . $$
(12)

Obviously, Theorem 2.8 remains true for the operator \(\widehat {T}_{n}[(-1)^{n} M]\).

Definition 2.9

The operator \(T_{n}[M]\) is said to be inverse positive (inverse negative) on \(X_{k}\) if every function \(u \in X_{k}\) such that \(T_{n}[M] u \ge0\) on I satisfies \(u\geq0\) (\(u\leq0\)) on I.

The next results are proved in Section 1.6 and Section 1.8 of [3].

Theorem 2.10

The operator \(T_{n}[M]\) is inverse positive (inverse negative) on \(X_{k}\) if, and only if, the Green’s function related to problem (1)-(2) is nonnegative (nonpositive) on its square of definition.

Theorem 2.11

Let \(M_{1}, M_{2}\in\mathbb{R}\), and suppose that operators \(T_{n}[M_{j}]\), \(j=1,2\), are invertible in \(X_{k}\). Let \(g_{j}\), \(j=1,2\), be Green’s functions related to operators \(T_{n}[M_{j}]\), and suppose that both functions have the same constant sign on \(I \times I\). Then, if \(M_{1}< M_{2}\), \(g_{2}\leq g_{1}\) on \(I \times I\).

In the sequel, we introduce two conditions on \(g_{M}(t,s)\), which will be used in the paper.

(\(\mathrm{P}_{g}\)):

Suppose that there is a continuous function \(\phi(t)>0\) for all \(t\in(a,b)\) and \(k_{1}, k_{2}\in\mathcal{L}^{1}(I)\), such that \(0< k_{1}(s)< k_{2}(s)\) for a.e. \(s\in I\), satisfying

$$\phi(t) k_{1}(s)\leq g_{M}(t,s)\leq\phi(t) k_{2}(s) \quad \text{for a.e. } (t,s)\in I \times I . $$
(\(\mathrm{N}_{g}\)):

Suppose that there is a continuous function \(\phi(t)>0\) for all \(t\in(a,b)\) and \(k_{1}, k_{2}\in\mathcal{L}^{1}(I)\), such that \(k_{1}(s)< k_{2}(s)<0\) for a.e. \(s\in I\), satisfying

$$\phi(t) k_{1}(s)\leq g_{M}(t,s)\leq\phi(t) k_{2}(s) \quad \text{for a.e. }(t,s)\in I \times I . $$

Finally, we introduce the following sets, which are going to particularize \(H_{T}\):

$$\begin{aligned}& P_{T} = \bigl\lbrace M\in\mathbb{R} \mid g_{M}(t,s)\geq0, \forall (t,s)\in I\times I \bigr\rbrace , \\& N_{T} = \bigl\lbrace M\in\mathbb{R} \mid g_{M}(t,s)\leq0, \forall (t,s)\in I\times I \bigr\rbrace . \end{aligned}$$

Note that, using Theorem 2.11, we can affirm that the two previous sets are real intervals.
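The sets \(P_{T}\) and \(N_{T}\) can be visualized on the model \(T_{2}[M]u=u''+Mu\) on \([0,1]\) with \(u(0)=u(1)=0\): a sketch (explicit formula and test values assumed by us) showing that \(g_{M}\) stays nonpositive for M below the first eigenvalue \(\pi^{2}\) and loses that sign just above it.

```python
import numpy as np

# For T_2[M]u = u'' + M u on [0, 1], u(0) = u(1) = 0 and M = m^2 > 0,
# the Green's function is (for t <= s, and symmetrically for t >= s)
#     g_M(t, s) = -sin(m t) sin(m (1 - s)) / (m sin m).
def g(M, t, s):
    m = np.sqrt(M)
    t, s = min(t, s), max(t, s)
    return -np.sin(m * t) * np.sin(m * (1 - s)) / (m * np.sin(m))

grid = np.linspace(0.05, 0.95, 30)
below = (np.pi - 0.1) ** 2   # M slightly below the first eigenvalue pi^2
above = (np.pi + 0.1) ** 2   # M slightly above it

assert all(g(below, t, s) <= 0 for t in grid for s in grid)   # M in N_T
assert any(g(above, t, s) > 0 for t in grid for s in grid)    # M outside N_T
print("g_M loses its constant sign when M crosses pi^2")
```

Here the explicit expression of \(g_{M}\) is used only to check the sign; the point of the paper is precisely to avoid computing it.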

The next results describe one of the extremes of the two previous intervals (see Theorems 1.8.31 and 1.8.23 of [3]).

Theorem 2.12

Let \(\bar{M}\in\mathbb{R}\) be fixed. If the operator \(T_{n}[\bar{M}]\) is invertible in \(X_{k}\) and its related Green’s function satisfies condition (\(\mathrm{P}_{g}\)), then the following statements hold:

  • There exists \(\lambda_{1}>0\), the least eigenvalue in absolute value of the operator \(T_{n}[\bar{M}]\) in \(X_{k}\). Moreover, there exists a nontrivial constant sign eigenfunction corresponding to the eigenvalue \(\lambda_{1}\).

  • The Green’s function related to the operator \(T_{n}[M]\) is nonnegative on \(I\times I\) for all \(M\in(\bar{M}-\lambda_{1},\bar{M}]\).

  • The Green’s function related to the operator \(T_{n}[M]\) cannot be nonnegative on \(I\times I\) for all \(M<\bar{M}-\lambda_{1}\).

  • If there is \(M\in\mathbb{R}\) for which the Green’s function related to the operator \(T_{n}[M]\) is nonpositive on \(I\times I\), then \(M<\bar{M}-\lambda_{1}\).

Theorem 2.13

Let \(\bar{M}\in\mathbb{R}\) be fixed. If the operator \(T_{n}[\bar{M}]\) is invertible in \(X_{k}\) and its related Green’s function satisfies condition (\(\mathrm{N}_{g}\)), then the following statements hold:

  • There exists \(\lambda_{2}<0\), the least eigenvalue in absolute value of the operator \(T_{n}[\bar{M}]\) in \(X_{k}\). Moreover, there exists a nontrivial constant sign eigenfunction corresponding to the eigenvalue \(\lambda_{2}\).

  • The Green’s function related to the operator \(T_{n}[M]\) is nonpositive on \(I\times I\) for all \(M\in[\bar{M},\bar{M}-\lambda_{2})\).

  • The Green’s function related to the operator \(T_{n}[M]\) cannot be nonpositive on \(I\times I\) for all \(M>\bar{M}-\lambda_{2}\).

  • If there is \(M\in\mathbb{R}\) for which the Green’s function related to the operator \(T_{n}[M]\) is nonnegative on \(I\times I\), then \(M>\bar{M}-\lambda_{2}\).

3 Main result

This section is devoted to the proof of the eigenvalue characterization of the sets \(P_{T}\) and \(N_{T}\). Such a result is stated in the following theorem.

Theorem 3.1

Let \(\bar{M}\in\mathbb{R}\) be such that the equation \(T_{n}[\bar{M}] u(t)=0\) is disconjugate on I. Then the following properties are fulfilled:

If \(n-k\) is even and \(2\leq k \le n-1\), then the operator \(T_{n}[M]\) is inverse positive on \(X_{k}\) if, and only if, \(M\in(\bar{M}-\lambda _{1},\bar {M}-\lambda_{2}]\), where:

  • \(\lambda_{1}>0\) is the least positive eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{k}\).

  • \(\lambda_{2}<0\) is the maximum of:

    • \(\lambda_{2}'<0\), the biggest negative eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{k-1}\).

    • \(\lambda_{2}''<0\), the biggest negative eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{k+1}\).

If \(k=1\) and n is odd, then the operator \(T_{n}[M]\) is inverse positive on \(X_{1}\) if, and only if, \(M\in(\bar{M}-\lambda_{1},\bar {M}-\lambda_{2}]\), where:

  • \(\lambda_{1}>0\) is the least positive eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{1}\).

  • \(\lambda_{2}<0\) is the biggest negative eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{2}\).

If \(n-k\) is odd and \(2\leq k\leq n-2\), then the operator \(T_{n}[M]\) is inverse negative on \(X_{k}\) if, and only if, \(M\in[\bar{M}-\lambda _{2},\bar {M}-\lambda_{1})\), where:

  • \(\lambda_{1}<0\) is the biggest negative eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{k}\).

  • \(\lambda_{2}>0\) is the minimum of:

    • \(\lambda_{2}'>0\), the least positive eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{k-1}\).

    • \(\lambda_{2}''>0\), the least positive eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{k+1}\).

If \(k=1\) and \(n>2\) is even, then the operator \(T_{n}[M]\) is inverse negative on \(X_{1}\) if, and only if, \(M\in[\bar{M}-\lambda_{2},\bar {M}-\lambda_{1})\), where:

  • \(\lambda_{1}<0\) is the biggest negative eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{1}\).

  • \(\lambda_{2}>0\) is the least positive eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{2}\).

If \(k=n-1\) and \(n>2\), then the operator \(T_{n}[M]\) is inverse negative on \(X_{n-1}\) if, and only if, \(M\in[\bar{M}-\lambda_{2},\bar {M}-\lambda _{1})\), where:

  • \(\lambda_{1}<0\) is the biggest negative eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{n-1}\).

  • \(\lambda_{2}>0\) is the least positive eigenvalue of the operator \(T_{n}[\bar{M}]\) in \(X_{n-2}\).

If \(n=2\), then the operator \(T_{2}[M]\) is inverse negative on \(X_{1}\) if, and only if, \(M\in(-\infty,\bar{M}-\lambda_{1})\), where:

  • \(\lambda_{1}<0\) is the biggest negative eigenvalue of the operator \(T_{2}[\bar{M}]\) in \(X_{1}\).
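For instance, the last item (\(n=2\)) can be checked directly on the constant-coefficient model (a worked sketch; the computation is ours):

```latex
Take $T_2[M]u = u'' + Mu$ on $I = [0,1]$ and $\bar M = 0$, so that
$T_2[0]u = u''$ is disconjugate on $I$. The eigenvalues of $T_2[0]$ in
$X_1$ are determined by $u'' = \lambda u$, $u(0) = u(1) = 0$, that is,
$\lambda_j = -j^2\pi^2$, $j \geq 1$, so the biggest negative one is
$\lambda_1 = -\pi^2$. The theorem then asserts that $u'' + Mu$ is
inverse negative on $X_1$ if, and only if,
$M \in (-\infty, \bar M - \lambda_1) = (-\infty, \pi^2)$,
in accordance with the classical sign behaviour of the Green's function
of the Dirichlet problem.
```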

In order to prove this result, we divide the proof into several subsections.

3.1 Decomposition of the operator \(T_{n}[\bar{M}]\)

We are interested in expressing the operator \(T_{n}[\bar{M}]\) as a composition of suitable operators of order \(h \le n\). Such an expression allows us to control the values of these operators at the endpoints a and b of the interval.

We recall the following result proved in Chapter 3 of [15].

Theorem 3.2

The linear differential equation (1) has a Markov system of solutions if, and only if, the operator \(T_{n}[M]\) has a representation

$$ T_{n}[M] y\equiv v_{1} v_{2} \cdots v_{n} \frac{d}{dt} \biggl( \frac{1}{v_{n}} \frac {d}{dt} \biggl( \cdots\frac{d}{dt} \biggl( \frac{1}{v_{2}} \frac {d}{dt} \biggl( \frac{1}{v_{1}} y \biggr) \biggr) \biggr) \biggr) , $$
(13)

where \(v_{k}>0\) on I and \(v_{k}\in C^{n-k+1}(I)\) for \(k=1,\ldots,n\).

It is obvious that for any real parameter M, denoting \(\lambda =M-\bar {M}\), we can rewrite the operator \(T_{n}[M]\) as follows:

$$T_{n}[M] u(t)\equiv T_{n}[\bar{M}] u(t)+\lambda u(t). $$

If we assume that the equation \(T_{n}[\bar{M}] u(t)=0\) is disconjugate on I, because of Theorems 2.3 and 3.2, we can express \(T_{n}[\bar{M}]\) as

$$T_{n}[\bar{M}] u(t)\equiv v_{1}(t) \cdots v_{n}(t) T_{n} u(t) , $$

where \(T_{k}\) are constructed as

$$ T_{0} u(t)=u(t) ,\qquad T_{k} u(t)= \frac{d}{dt} \biggl( \frac{1}{v_{k}(t)} T_{k-1} u(t) \biggr),\quad k =1, \ldots,n, t\in I, $$
(14)

with \(v_{k}>0\) on I, \(v_{k}\in C^{n-k+1}(I)\), for \(k=1,\ldots,n\).

Let us see now that \(T_{h} u(t)\) is given as a linear combination of \(u(t), u'(t),\ldots, u^{(h)}(t)\) with the form

$$ T_{h} u(t)=\frac{1}{v_{1}(t)\cdots v_{h}(t)} u^{(h)}(t)+p_{h_{1}}(t) u^{(h-1)}(t)+\cdots+ p_{h_{h}}(t) u(t) , $$
(15)

where \(p_{h_{i}}\in C^{n-h}(I)\).

Indeed, we are going to prove this equality by induction.

For \(h=1\),

$$ T_{1} u(t)=\frac{d}{dt} \biggl( \frac{1}{v_{1}(t)} u(t) \biggr) = \frac {1}{v_{1}(t)} u'(t)-\frac{v_{1}'(t)}{v_{1}^{2}(t)} u(t) . $$

Assume, by induction hypothesis, that equation (15) is satisfied for some \(h\in \lbrace1,\ldots,n-1 \rbrace\), therefore

$$\begin{aligned} T_{h+1} u(t) =&\frac{d}{dt} \biggl( \frac{1}{v_{h+1}(t)} \biggl( \frac {1}{v_{1}(t)\cdots v_{h}(t)} u^{(h)}(t)+p_{h_{1}}(t) u^{(h-1)}(t)+ \cdots+ p_{h_{h}}(t) u(t) \biggr) \biggr) \\ =&\frac{d}{dt} \biggl( \frac{1}{v_{1}(t)\cdots v_{h+1}(t)} u^{(h)}(t) \biggr) \\ &{}+ \frac{d}{dt} \biggl( \frac{1}{v_{h+1}(t)} \bigl(p_{h_{1}}(t) u^{(h-1)}(t)+\cdots+ p_{h_{h}}(t) u(t) \bigr) \biggr) , \end{aligned}$$

which clearly has the form of equation (15).

Finally, taking into account boundary conditions (2) and the regularity of the functions \(p_{h_{i}}\), we conclude that

$$T_{0}u(a)=0 ,\ldots,T_{k-1}u(a)=0 ,\qquad T_{0}u(b)=0 ,\ldots, T_{n-k-1}u(b)=0. $$

Moreover,

$$\begin{aligned}& T_{k} u(a) = \frac{1}{v_{1}(a)\cdots v_{k}(a)} u^{(k)}(a) , \end{aligned}$$
(16)
$$\begin{aligned}& T_{n-k} u(b) = \frac{1}{v_{1}(b)\cdots v_{n-k}(b)} u^{(n-k)}(b) . \end{aligned}$$
(17)

So, from the positiveness of \(v_{h}\) on I, \(h \in\{1, \ldots,n\}\), we see that \(T_{k} u(a)\) and \(u^{(k)}(a)\) have the same sign. The same property holds for \(T_{n-k} u(b)\) and \(u^{(n-k)}(b)\).
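A toy symbolic check of (16) (with \(n=2\), \(k=1\) and an arbitrary positive weight \(v_{1}\) chosen by us): for \(u\in X_{1}\), so that \(u(a)=0\), the value \(T_{1}u(a)\) reduces to \(u'(a)/v_{1}(a)\).

```python
import sympy as sp

t = sp.symbols('t')
# Assumed data for the check: a positive C^2 weight and a sample u with u(0) = 0.
v1 = 1 + t**2
u = t * (t - 1)**2

T1u = sp.diff(u / v1, t)                     # T_1 u = (u / v1)' as in (14)
lhs = T1u.subs(t, 0)                         # T_1 u(a) at a = 0
rhs = sp.diff(u, t).subs(t, 0) / v1.subs(t, 0)   # u'(a) / v1(a), as in (16)
print(lhs, rhs)
```

The term coming from \(v_{1}'\) vanishes at \(t=a\) precisely because \(u(a)=0\), which is the mechanism behind (16)-(17).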

3.2 Expression of the matrix Green’s function

This subsection is devoted to expressing, as functions of \(g_{M}(t,s)\), the functions \(g_{1}(t,s), \ldots, g_{n-1}(t,s)\) defined in (6) as the first-row components of the Green’s function of the vectorial system (4).

By studying the adjoint operator as in Section 1.3 of [3], we know that the related Green’s function of the adjoint operator \(G^{*}\) satisfies \(G^{*}(t,s)=G^{T} (s,t)\). Moreover, the following equality holds:

$$ \frac{\partial}{\partial t} \bigl( -G^{*}(t,s) \bigr) =-A^{T} (t) \bigl( -G^{*}(t,s) \bigr) ,\quad t \in I\backslash \lbrace s \rbrace. $$

So, we can transform the previous equality into

$$ \biggl( -\frac{\partial}{\partial t} G(s,t) \biggr) ^{T}=-\frac {\partial }{\partial t} G^{T}(s,t)=-A^{T}(t) \bigl( -G^{T}(s,t) \bigr)=A^{T}(t) G^{T}(s,t)= \bigl( G(s,t) A(t) \bigr)^{T} . $$

Hence

$$\frac{\partial}{\partial t} G(s,t)= -G(s,t) A(t) , $$

or, which is the same,

$$ \frac{\partial}{\partial s} G(t,s)= -G(t,s) A(s) . $$
(18)

Using this equality, we are going to prove by induction the following:

$$ g_{n-j}(t,s)=(-1)^{j} \frac{\partial^{j} }{\partial s^{j}} g_{M}(t,s)+\sum_{k=0}^{j-1} \alpha_{k}^{j} (s) \frac{\partial^{k} }{\partial s^{k}} g_{M}(t,s), \quad j=1, \ldots,n-1. $$
(19)

Here \(\alpha_{k}^{j}(s)\) are functions of \(a_{1}(s),\ldots,a_{j}(s)\) and of their derivatives up to order \(j-1\), and they follow the recurrence formula

$$\begin{aligned}& \alpha_{0}^{0}(s) = 0 , \end{aligned}$$
(20)
$$\begin{aligned}& \alpha_{k}^{j+1}(s) = 0 , \quad k\geq j+1\geq1 , \end{aligned}$$
(21)
$$\begin{aligned}& \alpha_{0}^{j+1}(s) = a_{j+1}(s)- \bigl( \alpha_{0}^{j} \bigr) '(s),\quad j \geq 0, \end{aligned}$$
(22)
$$\begin{aligned}& \alpha_{k}^{j+1}(s) = - \bigl( \alpha_{k-1}^{j}(s)+ \bigl( \alpha _{k}^{j} \bigr) '(s) \bigr),\quad 1\leq k\leq j. \end{aligned}$$
(23)

Using equality (18), we deduce that the terms of the Green’s matrix in position \((1,i)\), \(i=1,\ldots,n\), satisfy the following equality:

$$ g_{i-1}(t,s)=-\frac{\partial}{\partial s} g_{i}(t,s)+a_{n-i+1}(s) g_{M}(t,s) ,\quad i=2,\ldots,n , $$
(24)

where \(g_{M}(t,s)\equiv g_{n}(t,s)\).

If we take \(i=n\) in equation (24) we deduce

$$ g_{n-1}(t,s)=-\frac{\partial}{\partial s} g_{M}(t,s)+a_{1}(s) g_{M}(t,s), $$

which gives us equation (19) for \(j=1\).

Assume now that equalities (19)-(23) are fulfilled for a given \(j\in\{1,\ldots,n-2\}\). Let us see that they hold for \(j+1\). We have

$$\begin{aligned} g_{n-j-1}(t,s) =& -\frac{\partial}{\partial s} \Biggl( (-1)^{j} \frac {\partial^{j} }{\partial s^{j}} g_{M}(t,s)+\sum_{k=0}^{j-1} \alpha _{k}^{j}(s)\frac {\partial^{k} }{\partial s^{k}} g_{M}(t,s) \Biggr) +a_{j+1}(s) g_{M}(t,s) \\ =&a_{j+1}(s) g_{M}(t,s)+(-1)^{j+1} \frac{\partial^{j+1} }{\partial s^{j+1}} g_{M}(t,s) \\ &{} -\sum_{k=0}^{j-1} \bigl( \alpha_{k}^{j} \bigr) '(s)\frac{\partial^{k} }{\partial s^{k}} g_{M}(t,s) -\sum_{k=0}^{j-1} \alpha_{k}^{j}(s)\frac {\partial ^{k+1} }{\partial s^{k+1}} g_{M}(t,s) \\ =&(-1)^{j+1} \frac{\partial^{j+1} }{\partial s^{j+1}} g_{M}(t,s)+a_{j+1}(s) g_{M}(t,s) \\ &{} -\sum_{k=0}^{j-1} \bigl( \alpha_{k}^{j} \bigr) '(s)\frac{\partial ^{k}}{\partial s^{k}} g_{M}(t,s) -\sum_{k=1}^{j} \alpha_{k-1}^{j}(s)\frac {\partial^{k} }{\partial s^{k}} g_{M}(t,s) \\ =& (-1)^{j+1} \frac{\partial^{j+1} }{\partial s^{j+1}} g_{M}(t,s)+\sum _{k=0}^{j} \alpha_{k}^{j+1}(s) \frac{\partial^{k} }{\partial s^{k}} g_{M}(t,s) . \end{aligned}$$

Now, we can express the Green’s matrix related to problem (4), \(G(t,s)\), as

$$ \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} (-1)^{n-1} \frac{\partial^{n-1} }{\partial s^{n-1}} g_{M}(t,s)+ \sum_{k=0}^{n-2}\alpha_{k}^{n-1}(s) \frac{\partial^{k}}{\partial s^{k}} g_{M}(t,s)&\cdots&g_{M}(t,s) \\ (-1)^{n-1} \frac{\partial^{n} }{\partial t\, \partial s^{n-1}} g_{M}(t,s)+ \sum_{k=0}^{n-2}\alpha_{k}^{n-1}(s) \frac{\partial ^{k+1}}{\partial t\, \partial s^{k}} g_{M}(t,s)&\cdots&\frac{\partial }{\partial t} g_{M}(t,s) \\ \vdots&&\vdots \\ (-1)^{n-1}\frac{\partial^{2n-2} }{\partial t^{n-1}\, \partial s^{n-1}} g_{M}(t,s)+ \sum_{k=0}^{n-2}\alpha_{k}^{n-1}(s)\frac{\partial ^{n-1+k}}{\partial t^{n-1}\, \partial s^{k}} g_{M}(t,s)&\cdots&\frac {\partial ^{n-1} }{\partial t^{n-1}} g_{M}(t,s) \end{array}\displaystyle \right ). $$
(25)

If the coefficients \(a_{1}(s),\ldots,a_{n-1}(s), a_{n}(s)\) are constants, \(a_{1},\ldots,a_{n-1},a_{n}\), we can solve the recurrence (20)-(23) explicitly and deduce that

$$\alpha_{k}^{j}(s)=(-1)^{k} a_{j-k}. $$

So, we see that

$$g_{n-j}(t,s)=\sum_{k=0}^{j}(-1)^{k} a_{j-k} \frac{\partial^{k}}{\partial s^{k}} g_{M}(t,s) , \quad \text{with } a_{0}=1 , $$

and we can rewrite \(G(t,s)\) as

$$\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \sum_{k=0}^{n-1}(-1)^{k} a_{n-1-k} \frac{\partial^{k}}{\partial s^{k}} g_{M}(t,s)&\cdots& \sum_{k=0}^{1}(-1)^{k} a_{1-k} \frac{\partial ^{k}}{\partial s^{k}} g_{M}(t,s)&g_{M}(t,s) \\ \sum_{k=0}^{n-1}(-1)^{k} a_{n-1-k} \frac{\partial^{k+1}}{\partial t\, \partial s^{k}} g_{M}(t,s)&\cdots& \sum_{k=0}^{1}(-1)^{k} a_{1-k} \frac {\partial^{k+1}}{\partial t\, \partial s^{k}} g_{M}(t,s)&\frac{\partial }{\partial t} g_{M}(t,s) \\ \vdots&&\vdots& \\ \sum_{k=0}^{n-1}(-1)^{k} a_{n-1-k} \frac{\partial^{n-1+k}}{\partial t^{n-1}\, \partial s^{k}} g_{M}(t,s)&\cdots& \sum_{k=0}^{1}(-1)^{k} a_{1-k} \frac{\partial^{n-1+k}}{\partial t^{n-1}\, \partial s^{k}} g_{M}(t,s)&\frac {\partial^{n-1} }{\partial t^{n-1}} g_{M}(t,s) \end{array}\displaystyle \right ). $$

In particular, if \(T_{n}[M] u(t)\equiv u^{(n)}(t)+M u(t)\) we conclude that

$$g_{n-j}(t,s)=(-1)^{j} \frac{\partial^{j} }{\partial s^{j}} g_{M}(t,s) , $$

so the Green’s matrix, \(G(t,s)\), is given by the expression

$$\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} (-1)^{n-1} \frac{\partial^{n-1}}{\partial s^{n-1}} g_{M}(t,s)&\cdots &-\frac{\partial}{\partial s} g_{M}(t,s)&g_{M}(t,s) \\ (-1)^{n-1} \frac{\partial^{n}}{\partial t\, \partial s^{n-1}} g_{M}(t,s)&\cdots& -\frac{\partial^{2}}{\partial t\, \partial s} g_{M}(t,s)&\frac {\partial}{\partial t} g_{M}(t,s) \\ \vdots&&\vdots& \\ (-1)^{n-1}\frac{\partial^{2 n-2}}{\partial t^{n-1}\, \partial s^{n-1}} g_{M}(t,s)&\cdots&-\frac{\partial^{n}}{\partial t^{n-1}\, \partial s} g_{M}(t,s)&\frac{\partial^{n-1} }{\partial t^{n-1}} g_{M}(t,s) \end{array}\displaystyle \right ) . $$

Remark 3.3

We note that in the general case it is possible to obtain some of the components of system (20)-(23). We have

$$\begin{aligned}& \alpha_{0}^{j}(s) = \sum_{i=0}^{j-1}(-1)^{i} a_{j-i}^{(i)}(s) , \\& \alpha_{1}^{j}(s) = \sum_{i=1}^{j-1}(-1)^{i} i a_{j-i}^{(i-1)}(s) , \\& \alpha_{j}^{j+1}(s) = (-1)^{j} a_{1}(s) . \end{aligned}$$

3.3 Proof of the main results

Now we will proceed with the proof of the main result, Theorem 3.1. To this end, we will divide the proof into several steps.

First, we are going to show a lemma.

Lemma 3.4

Let \(\bar{M}\in\mathbb{R}\) be such that \(T_{n}[\bar{M}] u(t)=0\) is disconjugate on I. Then the following properties are fulfilled:

  • If \(n-k\) is even, then \(T_{n}[\bar{M}]\) is an inverse positive operator on \(X_{k}\) and its related Green’s function, \(g_{\bar{M}}(t,s)\), satisfies (\(\mathrm{P}_{g}\)).

  • If \(n-k\) is odd, then \(T_{n}[\bar{M}]\) is an inverse negative operator on \(X_{k}\) and its related Green’s function satisfies (\(\mathrm{N}_{g}\)).

Proof

By Lemma 2.6 and Remark 2.7 we see that for all \(s\in (a,b)\) the function \(\frac{g_{\bar{M}}(t,s)}{p(t)}\) can be extended to a strictly positive and continuous function in I, thus

$$ 0< k_{1}(s)=\min_{t\in I} \frac{g_{\bar{M}}(t,s)}{p(t)}< \max_{t\in I} \frac{g_{\bar{M}}(t,s)}{p(t)}=k_{2}(s) ,\quad s\in(a,b) . $$
(26)

Since \(g_{\bar{M}}\) is a continuous function in \(I \times I\), we see that \(k_{1}\) and \(k_{2}\) are continuous functions too.

If \(n-k\) is even, we take \(\phi(t)=p(t)\) and condition (\(\mathrm{P}_{g}\)) is trivially fulfilled.

If \(n-k\) is odd, we take \(\phi(t)=-p(t)\) and multiplying equation (26) by −1, condition (\(\mathrm{N}_{g}\)) holds immediately. □

Notice that, as a direct corollary of the previous lemma, the assertion for \(\lambda_{1}\) in Theorem 3.1 follows from Theorems 2.12 and 2.13.

Now, we are going to prove the assertion in Theorem 3.1 concerning \(\lambda_{2}\).

The proof will be done in several steps. First we will show that the Green’s function changes sign for all \(M>\bar{M}-\lambda_{2}\) when \(n-k\) is even, and for all \(M<\bar{M}-\lambda_{2}\) when \(n-k\) is odd.

After that we will prove that this estimate is optimal in both situations.

In order to make the paper more readable, throughout the proofs of this subsection it will be assumed that \(n-k\) is even. The arguments for \(n-k\) odd will be pointed out at the end of the subsection.

Step 1.:

Behavior of the Green’s function on a neighborhood of \(s=a\) and \(s=b\).

First, we construct two functions that will characterize the values of \(M\in\mathbb{R}\) for which the Green’s function oscillates, or not, on a neighborhood of \(s=a\) and \(s=b\).

In order to do that, we denote the Green’s function related to problem (1)-(2) as follows:

$$g_{M}(t,s)=\left \{ \textstyle\begin{array}{l@{\quad}l} g_{M}^{1}(t,s),&a\leq t< s\leq b, \\ g_{M}^{2}(t,s),&a\leq s\leq t\leq b. \end{array}\displaystyle \right . $$

Since \(g_{M}(t,s)\) is a Green’s function,

$$T_{n}[M] g_{M}(t,s)=0 ,\quad t\in[a,b] , t\neq s , $$

where \(g_{M}(t,s)\) is acting as a function of t.

Therefore, differentiating the previous expression, we deduce that

$$ T_{n}[M] \biggl( \frac{\partial^{h} g_{M}(t,s)}{\partial s^{h}} \biggr) = \frac{\partial^{h}}{\partial s^{h}} \bigl( T_{n}[M] g_{M}(t,s) \bigr) =0, \quad h=0,\ldots,n-1, t\neq s . $$
(27)

In particular, we can define the functions

$$\begin{aligned}& u(t) = \frac{\partial^{k}}{\partial s^{k}}g_{M}^{1}(t,s)_{|_{s=b}} \equiv {g_{M}^{1}}_{s^{k}}(t,b) ,\quad t\in I , \end{aligned}$$
(28)
$$\begin{aligned}& v(t) = \frac{\partial^{n-k}}{\partial s^{n-k}}g_{M}^{2}(t,s)_{| _{s=a}} \equiv{g_{M}^{2}}_{s^{n-k}}(t,a) ,\quad t\in I . \end{aligned}$$
(29)

Because of the relation between \(g_{M}(t,s)\) and \(g^{*}_{M}(t,s)\), shown in (10), and taking into account the boundary conditions of the adjoint operator, it is not difficult to deduce that

$$\begin{aligned}& {g_{M}^{2}}_{s^{h}}(t,a) = {g^{* 1}_{M}}_{t^{h}}(a,s)=0, \quad 0\leq h\leq n-k-1, \\& {g_{M}^{1}}_{s^{\ell}}(t,b) = {g^{* 2}_{M}}_{t^{\ell}}(b,s)=0 , \quad 0\leq\ell\leq k-1 . \end{aligned}$$

So, we are interested in knowing the values of M for which the functions \(u(t)\) and \(v(t)\) oscillate on I. Such a property guarantees that the Green’s function oscillates on a neighborhood of \(s=a\) or \(s=b\) for those values. Moreover, it provides an upper bound for the set of parameters where the Green’s function does not oscillate.

Step 1.1.:

Boundary conditions of \(v(t)\).

Because of equality (27) we know that \(T_{n}[M] v(t)=0\) on \((a,b]\). In this step we are going to see which boundary conditions the function v satisfies.

We see that \(G(t,s)\), as it appears in (25), is the Green’s matrix related to the vectorial problem (4). Using the expressions of the matrices B and C given by (5), if we consider the first row of the resulting matrix, we obtain for \(s\in(a,b)\) the following expression:

$$\left \{ \textstyle\begin{array}{l} g_{M}^{1}(a,s)=0, \\ -{g_{M}^{1}}_{s}(a,s)+\alpha_{0}^{1}(s) g_{M}^{1}(a,s)=0, \\ \ldots, \\ (-1)^{n-k}{g_{M}^{1}}_{s^{n-k}}(a,s)+ \sum_{i=0}^{n-k-1} \alpha _{i}^{n-k}(s){g_{M}^{1}}_{s^{i}}(a,s)=0 . \end{array}\displaystyle \right . $$

Thus, when \(k>1\), none of the previous elements belongs to the diagonal of the matrix Green’s function. Since it has discontinuities only at its diagonal entries, see Definition 2.4, by letting s tend to a, we deduce that the previous equalities hold for \(g_{M}^{2}(a,a)\), i.e.,

$$\left \{ \textstyle\begin{array}{l} g_{M}^{2}(a,a)=0, \\ -{g_{M}^{2}}_{s}(a,a)+\alpha_{0}^{1}(a) g_{M}^{2}(a,a)=0, \\ \ldots, \\ (-1)^{n-k}{g_{M}^{2}}_{s^{n-k}}(a,a)+ \sum_{i=0}^{n-k-1} \alpha _{i}^{n-k}(a){g_{M}^{2}}_{s^{i}}(a,a)=0 , \end{array}\displaystyle \right . $$

so, we conclude that

$$g_{M}^{2}(a,a)={g_{M}^{2}}_{s}(a,a)= \cdots={g_{M}^{2}}_{s^{n-k}}(a,a)=0 , $$

hence \(v(a)=0\).

Analogously, since we do not reach any diagonal element, we deduce that \(v'(a)=\cdots=v^{(k-2)}(a)=0\).

Let us see what happens for \(v^{(k-1)}(a)\) with \(k>1\). We arrive at the following system written as a function of \(g_{M}^{1}(t,s)\):

$$\left \{ \textstyle\begin{array}{l} {g_{M}^{1}}_{t^{k-1}}(a,s)=0, \\ - {g_{M}^{1}}_{t^{k-1} s}(a,s)+\alpha_{0}^{1}(s) {g_{M}^{1}}_{t^{k-1}}(a,s)=0, \\ \ldots, \\ (-1)^{n-k} {g_{M}^{1}}_{t^{k-1} s^{n-k}}(a,s)+ \sum_{i=0}^{n-k-1} \alpha _{i}^{n-k}(s){g_{M}^{1}}_{t^{k-1} s^{i}}(a,s)=0 . \end{array}\displaystyle \right . $$

This system remains true for \(s=a\), and because of the continuity of the Green’s matrix at \(t=s\) on the non-diagonal elements and the jump produced on its diagonal, we arrive at the following system for \(g_{M}^{2}(a,a)\):

$$\left \{ \textstyle\begin{array}{l} {g_{M}^{2}}_{t^{k-1}}(a,a)=0, \\ -{g_{M}^{2}}_{t^{k-1} s}(a,a)+\alpha_{0}^{1}(a) {g_{M}^{2}}_{t^{k-1}}(a,a)=0, \\ \ldots, \\ (-1)^{n-k} {g_{M}^{2}}_{t^{k-1} s^{n-k}}(a,a)+ \sum_{i=0}^{n-k-1} \alpha _{i}^{n-k}(a){g_{M}^{2}}_{t^{k-1} s^{i}}(a,a)=1, \end{array}\displaystyle \right . $$

hence

$${g_{M}^{2}}_{t^{k-1}}(a,a)=\cdots={g_{M}^{2}}_{t^{k-1} s^{n-k-1}}(a,a)=0 $$

and

$$v^{(k-1)}(a)= {g_{M}^{2}}_{t^{k-1} s^{n-k}}(a,a)=(-1)^{n-k} . $$

Obviously, taking \(k=1\), the same argument will tell us that \(v(a)=(-1)^{n-1}\).

To see the boundary conditions at \(t=b\), we have the following system for \(s\in(a,b)\), written as a function of \(g_{M}^{2}(t,s)\):

$$\left \{ \textstyle\begin{array}{l} g_{M}^{2}(b,s)=0, \\ -{g_{M}^{2}}_{s}(b,s)+\alpha_{0}^{1}(s) g_{M}^{2}(b,s)=0, \\ \ldots, \\ (-1)^{n-k} {g_{M}^{2}}_{s^{n-k}}(b,s)+ \sum_{i=0}^{n-k-1} \alpha _{i}^{n-k}(s) {g_{M}^{2}}_{s^{i}}(b,s)=0, \end{array}\displaystyle \right . $$

hence

$$g_{M}^{2}(b,s)=\cdots= {g_{M}^{2}}_{s^{n-k}}(b,s)=0 . $$

By continuity, this is satisfied at \(s=a\), so

$$v(b)= {g_{M}^{2}}_{s^{n-k}}(b,a)=0 . $$

Using (25) and (5), since there is no jump in this case, it is immediate to verify that \(v'(b)=\cdots=v^{(n-k-1)}(b)=0\).

As a consequence v is the unique solution of the following problem, which we denote as (\(\mathrm{P}_{v}\)):

$$\begin{aligned}& T_{n}[M] v(t) = 0 ,\quad t\in I , \\& v(a) = \cdots= v^{(k-2)}(a)=0, \quad \text{if } k>1, \end{aligned}$$
(30)
$$\begin{aligned}& v(b) = \cdots= v^{(n-k-1)}(b)=0 , \end{aligned}$$
(31)
$$\begin{aligned}& v^{(k-1)}(a) = (-1)^{n-k} . \end{aligned}$$
(32)

Remark 3.5

We note that, to attain the previous expression, we have not used any disconjugacy hypothesis on the operator \(T_{n}[M]\). Moreover, the proof is valid for \(n-k\) even or odd. In other words, the function v solves problem (\(\mathrm{P}_{v}\)) for any linear operator defined in (1) and any \(k\in\{1, \ldots,n-1\}\).

We know, because \(g_{\bar{M}}(t,s)\) is of constant sign on \(I \times I\) (see Lemma 3.4), that if \(M=\bar{M}\) the function v must be of constant sign in I.

Step 1.2.:

If v is of constant sign in I then it cannot have any zero in \((a,b)\).

We are now going to see that while \(v(t)\) is of constant sign in I it cannot have any zero in \((a,b)\). Hence any sign change must occur at \(t=a\) or \(t=b\).

In order to do that, we are going to consider the decomposition of the operator \(T_{n}[M]\) made in Section 3.1.

Since \(n-k\) is even, using Lemma 3.4, we know that the operator \(T_{n}[\bar{M}+\lambda] \) is, for \(\lambda=0\), inverse positive on \(X_{k}\). So, the characterization of \(\lambda<0\) follows from Theorem 2.12.

For \(\lambda>0\), \(v\in C^{n}(I)\) is a solution of a linear differential equation, hence it is only allowed to have a finite number of zeros on I. Therefore, if \(v(t)\geq0\), we have \(v(t)>0\) for all \(t\in I\backslash \lbrace t_{0},\ldots,t_{\ell}\rbrace\). In particular \(v(t)>0\) for a.e. \(t\in I\). Thus

$$ T_{n}[\bar{M}] v(t)=-\lambda v(t)< 0 \quad \text{for a.e. } t\in I . $$
(33)

As we have shown in Section 3.1, we know that

$$T_{n}[\bar{M}] v(t)=v_{1}(t)\cdots v_{n}(t) \frac{d}{dt} \biggl( \frac {1}{v_{n}(t)} T_{n-1} v(t) \biggr). $$

Since for every \(k=1,\ldots,n\), \(v_{k}\in C^{n-k+1}(I)\) and \(v_{k}(t)>0\) on I, we conclude that \(\frac{1}{v_{n}(t)} T_{n-1} v(t)\) must be decreasing on I.

Therefore, since \(v_{n}(t)>0\) on I we see that \(T_{n-1}v(t)\) can vanish at most once in I.

Arguing by recurrence, we see that \(T_{0} v(t)=v(t)\) can have at most n zeros on I (multiple zeros being counted according to their multiplicity) while \(v(t)\geq0\).

On the other hand, because of the boundary conditions (30)-(32), we know that v vanishes \(n-1\) times at a and b, hence it cannot have a double zero on \((a,b)\). This implies that the sign change cannot come from \((a,b)\).

Step 1.3:

Change sign of v at \(t=a\) and \(t=b\).

We are now going to see that the sign change cannot come from a neighborhood of \(t=a\).

Since \(n-k\) is even, as we have proved before, \(v^{(k-1)}(a)=1>0\) for all \(M\in\mathbb{R}\), which implies, since \(v(a)=\cdots =v^{(k-2)}(a)=0\), that \(v(t)={g^{2}_{M}}_{s^{n-k}}(t,a)\) is always positive on a neighborhood of \(t=a\). So, the following property is verified:

$$ \exists\varepsilon>0 \text{ such that } \forall t\in (a, \varepsilon ), \exists\eta(t)>0 , g_{M}(t,s)=g_{M}^{2}(t,s)>0 \text{ for } s\in \bigl(a,a+\eta(t) \bigr). $$
(34)

Using Step 1.2, we see that v will keep a constant sign on I as long as \(v^{(n-k)}(b)=0\) is not satisfied, i.e., as long as an eigenvalue of \(T_{n}[\bar{M}]\) on \(X_{k-1}\) is not attained.

Or equivalently, if \(M\in[\bar{M},\bar{M}-\lambda_{2}']\) then \(g_{M}(t,s)\) satisfies the following property:

$$\begin{aligned}& \forall t\in(a,b), \exists\eta(t)>0 \text{ such that } g_{M}(t,s)=g_{M}^{2}(t,s) \text{ is of constant sign} \\& \quad \text{for } s\in \bigl(a,a+\eta(t) \bigr). \end{aligned}$$
(35)

Moreover, by Theorem 2.11, we deduce that \(g_{M}(t,s)\) oscillates in \(I \times I\) for all \(M>\bar{M}-\lambda_{2}'\).

If \(k=1\), in particular we see that \(v(a)=1>0\). Since we have seen in Step 1.2 that, while v is of constant sign in I, it cannot have any zero in \((a,b)\), a sign change could only occur if \(v^{(n-1)}(b)=0\), which would imply that v has a zero of multiplicity n at \(t=b\); this is not possible for a nontrivial solution of a linear differential equation. Then \(g_{M}(t,s)\) satisfies (35) for every \(M\geq \bar{M}\).

Step 1.4.:

Study of the function u.

In order to analyze the behavior of the Green’s function on a left neighborhood of \(s=b\), we work now with the function u defined in (28).

Using the same arguments as for v, we conclude that u is the unique solution of the following problem, which we denote as (\(\mathrm{P}_{u}\)):

$$\begin{aligned}& T_{n}[M] u(t) = 0 ,\quad t\in I , \\& u(a) = \cdots= u^{(k-1)}(a)=0, \end{aligned}$$
(36)
$$\begin{aligned}& u(b) = \cdots= u^{(n-k-2)}(b)=0 , \quad \text{if } k< n-1, \end{aligned}$$
(37)
$$\begin{aligned}& u^{(n-k-1)}(b) = (-1)^{k-1} . \end{aligned}$$
(38)

As in Remark 3.5, we see that this property depends neither on the disconjugacy of the operator \(T_{n}[M]\) nor on the parity of \(n-k\).

Using arguments analogous to the ones used for v, we can prove that the sign change cannot occur in the open interval \((a,b)\).

Moreover, from the condition \(u^{(n-k-1)}(b)=(-1)^{k-1}\), a sign change of u cannot appear at \(t=b\).

So u is of constant sign in I until \(u^{(k)}(a)=0\) is verified, i.e., as long as an eigenvalue of \(T_{n}[\bar{M}]\) on \(X_{k+1}\) is not attained. Or, equivalently, while \(M\in[\bar{M},\bar{M}-\lambda_{2}'']\).

Thus we see that if M is on that interval, the Green’s function satisfies the following property:

$$\begin{aligned}& \forall t\in(a,b) ,\exists\eta(t)>0 \text{ such that } g_{M}(t,s)=g_{M}^{1}(t,s) \text{ is of constant sign} \\& \quad \text{for } s\in \bigl(b-\eta(t),b \bigr). \end{aligned}$$
(39)

But once \(M>\bar{M}-\lambda_{2}''\) the Green’s function oscillates in \(I \times I\).

As a consequence of Step 1, we deduce that the interval \((\bar{M}-\lambda _{1},\bar{M}-\lambda_{2}]\) cannot be enlarged. Moreover, we have also proved that the Green’s function satisfies properties (35) and (39) for all M in such an interval.

Step 2.:

Behavior of the Green’s function on a neighborhood of \(t=a\) and \(t=b\).

Now, let us see what happens on a neighborhood of \(t=a\) and \(t=b\). In order to do that, we are going to use the operator \(\widehat {T}_{n}[(-1)^{n} \bar{M}]\) defined in (11) and the relation between \(g_{M}(t,s)\) and \(\hat{g}_{(-1)^{n} M}(t,s)\) given in (12).

Arguing as in Step 1, we will obtain the values of the real parameter M for which \(\hat{g}_{(-1)^{n} M}(t,s)\) is of constant sign on a neighborhood of \(s=a\) and \(s=b\) for every fixed \(t\in(a,b)\). Once this is done, we will be able to apply this property to the behavior of \(g_{M}(t,s)\) on a neighborhood of \(t=a\) or \(t=b\).

The analogous problem for the operator \(\widehat{T}_{n}[(-1)^{n} M]\) related to problem (1)-(2) is given by

$$\left \{ \textstyle\begin{array}{l} \widehat{T}_{n}[(-1)^{n} M] v(t)=0,\quad t \in I , \\ v(a)=\cdots=v^{(n-k-1)}(a)=0 , \\ v(b)=\cdots=v^{(k-1)}(b)=0 . \end{array}\displaystyle \right . $$

Theorem 2.8 implies that the equation \(T_{n}^{*}[\bar{M}] u(t)=0\) is disconjugate on I. So, the same holds with \(\widehat{T}_{n}[(-1)^{n} \bar{M}] u(t)=0\). Reasoning as in Step 1, we are able to prove that \(\hat{g}_{(-1)^{n} M}(t,s)\) satisfies (35), while an eigenvalue of \(\widehat{T}_{n}[(-1)^{n} \bar{M}]\) on \(X_{n-k-1}\), let it be denoted as \(\hat{\lambda}_{2}''\), is not attained.

This fact is equivalent to the existence of an eigenvalue of \(T^{*}_{n}[\bar{M}]\) on \(X_{n-k-1}\), which will be \((-1)^{n} \hat {\lambda }_{2}''\). Now, using the fact that the real eigenvalues of an operator coincide with those of the adjoint operator, we conclude that \(\lambda _{2}''=(-1)^{n} \hat{\lambda}_{2}''\) is the biggest negative eigenvalue of \(T_{n}[\bar{M}]\) on \(X_{n-(n-k-1)}=X_{k+1}\) and \(\hat{g}_{(-1)^{n} M}(t,s)\) satisfies the property (35) while \(M\in[\bar {M},\bar {M}-\lambda_{2}'']\). So for all \(s\in(a,b)\), the Green’s function of problem (1)-(2), \(g_{M}(t,s)\), satisfies the following statement:

$$\begin{aligned}& \forall s\in(a,b), \exists\eta(s)>0 \text{ such that } g_{M}(t,s)=g_{M}^{1}(t,s) \text{ is of constant sign} \\& \quad \text{for } t\in \bigl(a,a+\eta(s) \bigr). \end{aligned}$$
(40)

Analogously, arguing as before, we know that if \(k>1\), then \(\hat {g}_{(-1)^{n} M}(t,s)\) satisfies property (39) while an eigenvalue of \(\widehat{T}_{n}[M]\) on \(X_{n-k+1}\) is not attained, which is equivalent to the existence of an eigenvalue of \(T_{n}[M]\) on \(X_{k-1}\). Moreover, if \(k=1\), then \(\hat{g}_{(-1)^{n} M}(t,s)\) satisfies (39) for every \(M\geq\bar{M}\). Therefore, if \(M\in [\bar{M},\bar{M}-\lambda_{2}']\), we can affirm that the Green’s function of the operator \(\widehat{T}_{n}[(-1)^{n} M]\), \(\hat{g}_{(-1)^{n} M}(t,s)\), satisfies (39); as a consequence, the Green’s function of problem (1)-(2), \(g_{M}(t,s)\), will verify the following:

$$\begin{aligned}& \forall s\in(a,b), \exists\eta(s)>0 \text{ such that } g_{M}(t,s)=g_{M}^{1}(t,s) \text{ is of constant sign} \\& \quad \text{for } t\in \bigl(b-\eta(s),b \bigr). \end{aligned}$$
(41)

As a consequence of the two previous steps, we have already proved that if \(M\in[\bar{M},\bar{M}-\lambda_{2}]\) then the Green’s function satisfies the statements (35), (39), (40) and (41), and that if \(M>\bar{M}-\lambda_{2}\) the Green’s function oscillates on \(I \times I\).

Step 3.:

The Green’s function cannot begin to change sign on \((a,b)\times(a,b)\).

In this step we will prove that the oscillation of the Green’s function related to problem (1)-(2) must begin on the boundary of \(I \times I\). Using Theorem 2.11 we see that, provided it has a nonnegative sign on \(I\times I\), \(g_{M}\) decreases in M.

As a consequence, once we prove that \(g_{M}\) cannot have a double zero on \((a,b)\times(a,b)\), the change of sign must start on the boundary of \(I\times I\).

Let us see that if \(g_{M}(t,s)\geq0\) in \(I\times I\) then \(g_{M}(t,s)>0\) in \((a,b)\times(a,b)\).

Denote, for a fixed \(s\in(a,b)\), \(w_{s}(t)=g_{M}(t,s)\). By definition, denoting, as in Step 1, \(\lambda=M -\bar{M}\), we see that

$$T_{n}[\bar{M}] w_{s}(t)+\lambda w_{s}(t)=0 , \quad t\in I , t\neq s . $$

Since \(g_{\bar{M}}\ge0\) on \(I \times I\), the behavior for \(M<\bar{M}\) has been characterized in Lemma 3.4 and Theorem 2.12.

So we must pay attention to the situation \(M > \bar{M}\), i.e. \(\lambda>0\). In such a case, since, as in Step 1.2, we see that \(w_{s}(t)\geq0\) has a finite number of zeros in I, we know that

$$T_{n}[\bar{M}] w_{s}(t)=-\lambda w_{s}(t)< 0 \quad \text{for a.e. } t\in I . $$

Using (13) and (14), we see that

$$T_{n}[\bar{M}] w_{s}(t)=v_{1}(t)\cdots v_{n}(t) T_{n} w_{s}(t), $$

with \(v_{k}>0\) on I for \(k=1,\ldots,n\). In particular, \(T_{n} w_{s}(t)<0\) a.e. in I.

Notice that, for all \(s \in(a,b)\), \(w_{s} \in C^{n-2}(I)\) and \(w_{s}^{(n-1)}(s^{+})-w_{s}^{(n-1)}(s^{-})=1\). Therefore, due to the definition of \(T_{n}[\bar{M}]\) and expression (15), we see that \(\frac{1}{v_{n}(t)}T_{n-1} w_{s}(t)\) is a continuous function on \([a,s) \cup(s,b]\).

Since \(T_{n} w_{s}(t)=\frac{d}{dt} ( \frac{1}{v_{n}(t)}T_{n-1} w_{s}(t) ) <0\) for \(t \neq s\), we can affirm that \(\frac{1}{v_{n}(t)} T_{n-1} w_{s}(t)\) is a decreasing function on I with a positive jump at \(t=s\). So, it can have, at most, two zeros in I (see Figure 1).

Figure 1. \(\frac{1}{v_{n}(t)} T_{n-1} w_{s}(t)\), maximal oscillation with \(I=[0,1]\).

Even if we cannot guarantee that \(T_{n-1} w_{s}(t)\) is decreasing, since \(v_{n}>0\) on I, we conclude that it has the same sign as \(\frac {1}{v_{n}(t)} T_{n-1} w_{s}(t)\), i.e., it can have at most two zeros on I.

On the other hand, using equation (15) again, we conclude that \(\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)\) is a continuous function on I. Now, (14) tells us that \(\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)\) can reach at most four zeros on I (see Figure 2).

Figure 2. \(\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)\), maximal oscillation with \(I=[0,1]\).

As before, we do not know intervals where \(T_{n-2} w_{s}(t)\) is increasing or decreasing, but since \(v_{n-1}(t)>0\) we conclude that it has the same sign as \(\frac{1}{v_{n-1}(t)} T_{n-2} w_{s}(t)\), so it can reach at most four zeros.

Following this argument, since \(v_{k}>0\) on I for \(k=1,\ldots,n\), we know that \(T_{n-2-h}w_{s}(t)\) cannot have more than \(4+h\) zeros on I (multiple zeros being counted according to their multiplicity). In particular, \(w_{s}(t)= T_{0} w_{s}(t)\) can have at most \(n+2\) zeros, n of them on the boundary.

This fact would allow \(w_{s}\) to have a double zero on \((a,b)\). So, to show that such a double root cannot exist, we need to prove that maximal oscillation is not possible. To this end, we point out that if for some h the sign of \(T_{n-2-h}w_{s}(a)\) is equal to the sign of \(T_{n-2-(h+1)}w_{s}(a)\), then we lose a possible oscillation.

Therefore, for maximal oscillation we must have

$$\left \{ \textstyle\begin{array}{l@{\quad}l} T_{n-h}w_{s}(a)>0, & \text{if } h\text{ odd}, \\ T_{n-h}w_{s}(a)< 0, & \text{if } h\text{ even}. \end{array}\displaystyle \right . $$

However, since \(w_{s}(t)\geq0\) on I and \(w_{s}(a)=w_{s}'(a)= \cdots =w_{s}^{(k-1)}(a)=0\), we deduce that \(w_{s}^{(k)}(a)\geq0\).

We can assume that \(w_{s}^{(k)}(a)>0\) because, on the contrary, if \(w_{s}^{(k)}(a)=0\) we would have at most \(n+2\) zeros, \(n+1\) of them on the boundary. So only a simple zero would be allowed in the interior, which is not possible without oscillation.

Therefore \(w_{s}^{(k)}(a)=w_{s}^{(n-(n-k))}(a)>0\). Since \(n-k\) is even, using now (16), we also know that \(T_{k} w_{s}(a)>0\), which inhibits maximal oscillation.

So we conclude that if \(g_{M}(t,s)\geq0\) on \(I\times I\) then \(g_{M}(t,s)>0\) on \((a,b)\times(a,b)\), as we wanted to prove.

As a consequence of the three previous steps, we have described the set of the real parameters M for which the Green’s function is nonnegative on \(I \times I\) when \(n-k\) is even.

If \(n-k\) is odd, similar arguments complete the proof. In the sequel, we enumerate the main ideas to be developed.

Step 1.:
Step 1.1.:

It has no modifications.

Step 1.2.:

In equality (33) we have \(\lambda<0\) and \(v(t)<0\) a.e. in I, so it remains true and we can proceed analogously.

Step 1.3.:

In this case, we see that \(v^{(k-1)}(a)<0\). The conclusion of this step is that \(g_{M}(t,s)\) verifies property (35) while \(M\in[\bar{M}-\lambda_{2}',\bar{M}]\) and oscillates for all \(M<\bar{M}-\lambda_{2}'\).

If \(k=1\), the conclusion is that \(g_{M}(t,s)\) verifies property (35) for every \(M\leq\bar{M}\); in particular, for \(n=2\).

Step 1.4.:

The arguments are not modified, but the final conclusion is that \(g_{M}(t,s)\) satisfies property (39) for \(M\in[\bar{M}-\lambda_{2}'',\bar{M}]\) and oscillates for all \(M<\bar{M}-\lambda_{2}''\).

In this case, if \(k=n-1\), we can conclude, similarly to Step 1.3, that u is of constant sign for every \(M\leq\bar{M}\). Then the Green’s function satisfies property (39); in particular, for \(n=2\).

Step 2.:

Using the same arguments we conclude that the interval where \(g_{M}(t,s)\) verifies (35), (39), (40) and (41) is \([\bar{M}-\lambda_{2},\bar{M}]\).

Step 3.:

In this case we see that \(w^{(k)}_{s}(a)=w^{(n-(n-k))}_{s}(a)<0\), which, since \(n-k\) is odd, again contradicts maximal oscillation.

Thus, our result is proved.

As a direct consequence of the arguments used in Step 1.3, without assuming the existence of \(\bar{M}\in\mathbb{R}\) for which equation \(T_{n}[\bar{M}] u(t)=0\) is disconjugate on I, we arrive at the following result.

Corollary 3.6

Let \(T_{n}[M]\) be defined as in (1). Then the two following properties hold:

If \(n-k\) is even, then there does not exist \(M\in\mathbb{R}\) such that the operator \(T_{n}[M]\) is inverse negative in \(X_{k}\).

If \(n-k\) is odd, then there does not exist \(M\in\mathbb{R}\) such that the operator \(T_{n}[M]\) is inverse positive in \(X_{k}\).

Proof

It is enough to take into account that v, defined in (29), is the unique solution of problem (\(\mathrm{P}_{v}\)). Since \(v^{(k-1)}(a)=(-1)^{n-k}\) we conclude that, if \(n-k\) is even, the Green’s function has positive values in any neighborhood of \((a,a)\) and negative values when \(n-k\) is odd.

So, the result follows from Theorem 2.10. □

4 Particular cases

In order to obtain the eigenvalues of particular problems we calculate a fundamental system of solutions \(y_{1}[M](t),\ldots,y_{n}[M](t)\) of equation (1) where every \(y_{k}[M](t)\) satisfies the initial conditions

$$y_{k}^{(n-k)}[M](a)=1 ,\qquad y_{k}^{(n-j)}[M](a)=0 ,\quad j=1,\ldots,n , j\neq k . $$

Then we denote the \(n-1\) Wronskians as

$$W^{n}_{k}[M](t)= \left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} y_{1}[M](t)& \cdots &y_{k}[M](t) \\ y_{1}'[M](t)& \cdots &y_{k}' [M](t) \\ &\vdots& \\ y_{1}^{(k-1)}[M](t)& \cdots &y_{k}^{(k-1)}[M](t) \end{array}\displaystyle \right \vert ,\quad k=1,\ldots,n-1 . $$

As a consequence of the characterization done in Chapter 3, Lemma 12 of [15], we deduce that the eigenvalues of problem (1) in \(X_{k}\) are given as the \(\lambda\in\mathbb{R}\) for which \(W_{n-k}^{n}[-\lambda](b)=0\). So, in the sequel, we will use this method to find the eigenvalues of the different considered problems.
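The eigenvalue search just described is easy to carry out numerically. The following Python sketch (ours, not taken from [15]) integrates the fundamental system for \(T_{n}[M] u(t)=u^{(n)}(t)+M u(t)\) on \([0,1]\) with a classical fourth-order Runge-Kutta scheme and locates a sign change of \(\lambda\mapsto W_{n-k}^{n}[-\lambda](1)\) by bisection; the helper names `det`, `wronskian`, and `eigenvalue` are our own.

```python
import math

def det(mat):
    # Laplace expansion along the first row; fine for the small sizes used here
    m = len(mat)
    if m == 1:
        return mat[0][0]
    total = 0.0
    for j in range(m):
        minor = [row[:j] + row[j + 1:] for row in mat[1:]]
        total += (-1.0) ** j * mat[0][j] * det(minor)
    return total

def wronskian(n, k, lam, steps=2000):
    """W_{n-k}^n[-lam](1) for T_n[M]u = u^(n) + M u on [0, 1]:
    integrate u^(n) = lam * u (RK4) for the fundamental solutions
    y_j with y_j^{(n-j)}(0) = 1 and all other initial derivatives 0."""
    m = n - k
    h = 1.0 / steps

    def rhs(y):
        # state (y, y', ..., y^{(n-1)}) for the equation u^(n) = lam * u
        return y[1:] + [lam * y[0]]

    cols = []
    for j in range(1, m + 1):
        y = [0.0] * n
        y[n - j] = 1.0  # y_j^{(n-j)}(0) = 1
        for _ in range(steps):
            k1 = rhs(y)
            k2 = rhs([y[i] + 0.5 * h * k1[i] for i in range(n)])
            k3 = rhs([y[i] + 0.5 * h * k2[i] for i in range(n)])
            k4 = rhs([y[i] + h * k3[i] for i in range(n)])
            y = [y[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
                 for i in range(n)]
        cols.append(y[:m])  # y_j(1), y_j'(1), ..., y_j^{(m-1)}(1)
    return det([[cols[j][i] for j in range(m)] for i in range(m)])

def eigenvalue(n, k, lo, hi, tol=1e-6):
    """Bisect lam -> W_{n-k}^n[-lam](1); assumes a sign change on [lo, hi]."""
    flo = wronskian(n, k, lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = wronskian(n, k, mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```

For instance, `eigenvalue(2, 1, -15.0, -5.0)` returns approximately \(-\pi^{2}\approx-9.8696\), the eigenvalue \(\lambda_{2}^{1}\) found below for \(u''\) in \(X_{1}\), and `eigenvalue(4, 2, 400.0, 600.0)` returns approximately \(4.73004^{4}\approx 500.56\), the least positive eigenvalue of \(u^{(4)}\) in \(X_{2}\).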

4.1 The operator \(T_{n}[M] u(t)\equiv u^{(n)}(t)+M u(t)\)

First of all, we are going to consider problems where \(T_{n}[M] u(t)\equiv u^{(n)}(t)+M u(t)\), with \([a,b]=[0,1]\).

For this kind of problem, the equation \(u^{(n)}(t)=0\) (i.e., \(M=0\)) is always disconjugate; see Chapter 3 of [15]. So the hypotheses of Theorem 3.1 are satisfied.

Remark 4.1

Note that the adjoint equation to problem \(T_{n}[M] u=0\), \(u \in X_{k}\), is given by

$$T_{n}^{*}[M] u(t)=(-1)^{n} u^{(n)}(t)+M u(t)=0,\quad u \in X_{n-k}. $$

So, if \(\lambda_{i}\) is an eigenvalue of \(u^{(n)}\) in \(X_{k}\), it is also an eigenvalue of \((-1)^{n} u^{(n)}\) in \(X_{n-k}\). Thus, \((-1)^{n} \lambda _{i}\) is an eigenvalue of \(u^{(n)}\) in \(X_{n-k}\).

As a consequence, we only need to obtain the first \(\lfloor\frac {n}{2} \rfloor\) Wronskians, where \(\lfloor\cdot\rfloor\) denotes the floor function.

  • Order 2

The eigenvalues of the operator \(u''(t)\) in \(X_{1}\) must satisfy \(W_{1}^{2}[\lambda](1)=0\), which can be replaced by the following equation:

$$ \sin(\sqrt{-\lambda})=0 , $$
(42)

so its closest-to-zero negative eigenvalue is \(\lambda_{2}^{1}=-\pi^{2}\).

And so, we can affirm that the Green’s function related to the operator \(u''(t)+M u(t)\) is negative if, and only if, \(M\in ( -\infty, \pi^{2} ) \).

This result has been already obtained in different references (see [3] and references therein), but here it is not necessary to have the expression of the Green’s function.

  • Order 3

\(\lambda_{3}^{1}\approxeq4.23321\) is the least positive solution of \(W_{1}^{3}[\lambda^{3}](1)=0\), which is equivalent to the equation

$$ \cos \biggl(\frac{1}{2} \sqrt{3} \lambda \biggr)-\sqrt{3} \sin \biggl( \frac{1}{2} \sqrt{3} \lambda \biggr)=e^{\frac{-3 \lambda}{2}} . $$
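As a quick numerical sanity check (a sketch of ours, assuming only the displayed equation), the root near 4.23 can be located with plain bisection:

```python
import math

def f3(lam):
    # difference of the two sides of the displayed order-3 equation
    return (math.cos(0.5 * math.sqrt(3.0) * lam)
            - math.sqrt(3.0) * math.sin(0.5 * math.sqrt(3.0) * lam)
            - math.exp(-1.5 * lam))

def bisect(f, lo, hi, tol=1e-10):
    # plain bisection; assumes f changes sign on [lo, hi]
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

lam31 = bisect(f3, 4.0, 5.0)  # ~ 4.23321
```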

Then, the least positive eigenvalue of the operator \(u^{(3)}(t)\) in \(X_{1}\) is \(( \lambda_{3}^{1} ) ^{3}\) and the biggest negative eigenvalue of the operator \(u^{(3)}(t)\) in \(X_{2}\) is \(- ( \lambda _{3}^{1} ) ^{3}\).

So, we can affirm that the Green’s function of the operator \(u^{(3)}(t)+M u(t)\):

  • in \(X_{1}\) is positive if, and only if, \(M\in ( - ( \lambda _{3}^{1} ) ^{3}, ( \lambda_{3}^{1} ) ^{3} ] \),

  • in \(X_{2}\) is positive if, and only if, \(M\in [ - ( \lambda _{3}^{1} ) ^{3}, ( \lambda_{3}^{1} ) ^{3} ) \).

This result has been obtained by means of the explicit form of the Green’s function in [25].

  • Order 4

\(\lambda_{4}^{1}\approxeq5.553\) is the least positive solution of \(W_{1}^{4}[\lambda^{4}](1)=0\), which, after simplification, reduces to

$$ \tan \biggl( \frac{\lambda}{\sqrt{2}} \biggr) =\tanh \biggl( \frac {\lambda }{\sqrt{2}} \biggr) . $$

\(\lambda_{4}^{2}\approxeq4.73004\) is the least positive solution of \(W_{2}^{4}[-\lambda^{4}](1)=0\), which can be expressed as

$$ \cos(\lambda) \cosh(\lambda)=1 . $$
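Both transcendental equations can be checked with a few lines of Python (our sketch; the brackets are chosen by hand to isolate the first root and, for the tangent, to stay between its poles at \(\lambda=\pi/\sqrt{2}\) and \(\lambda=3\pi/\sqrt{2}\)):

```python
import math

def bisect(f, lo, hi, tol=1e-10):
    # plain bisection; assumes f changes sign on [lo, hi]
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# tan(lam/sqrt(2)) = tanh(lam/sqrt(2)); [5, 6] lies strictly between the
# poles of tan(lam/sqrt(2)) at lam = pi/sqrt(2) and lam = 3*pi/sqrt(2)
lam41 = bisect(lambda t: math.tan(t / math.sqrt(2.0))
               - math.tanh(t / math.sqrt(2.0)), 5.0, 6.0)  # ~ 5.553

# cos(lam) * cosh(lam) = 1
lam42 = bisect(lambda t: math.cos(t) * math.cosh(t) - 1.0, 4.0, 5.0)  # ~ 4.73004
```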

The biggest negative eigenvalue of the operator \(u^{(4)}(t)\) in \(X_{1}\) and \(X_{3}\) is given by \(- ( \lambda_{4}^{1} ) ^{4}\).

The least positive eigenvalue of the operator \(u^{(4)}(t)\) in \(X_{2}\) is \(( \lambda_{4}^{2} ) ^{4}\).

Therefore, we can affirm, without calculating it explicitly, that the Green’s function related to the operator \(u^{(4)}(t)+M u(t)\):

  • in \(X_{1}\) and \(X_{3}\) is negative if, and only if, \(M\in [ - ( \lambda_{4}^{2} ) ^{4}, ( \lambda_{4}^{1} ) ^{4} )\),

  • in \(X_{2}\) is positive if, and only if, \(M\in ( - ( \lambda _{4}^{2} ) ^{4}, ( \lambda_{4}^{1} ) ^{4} ]\).

These results have been obtained using the explicit form of the Green’s function in [26] and [27].

  • Order 5

We can obtain \(\lambda_{5}^{1}\approxeq6.94867\) and \(\lambda _{5}^{2}\approxeq 5.64117\) as the least positive solutions of \(W_{1}^{5}[\lambda^{5}](1)=0\) and \(W_{2}^{5}[-\lambda^{5}](1)=0\), respectively, but the resulting equations are too involved to display here and are of little independent interest.

The least positive eigenvalue of the operator \(u^{(5)}(t)\) in \(X_{1}\) is \(( \lambda_{5}^{1} ) ^{5}\).

The biggest negative eigenvalue of the operator \(u^{(5)}(t)\) in \(X_{2}\) is \(- ( \lambda_{5}^{2} ) ^{5}\).

The least positive eigenvalue of the operator \(u^{(5)}(t)\) in \(X_{3}\) is \(( \lambda_{5}^{2} ) ^{5}\).

The biggest negative eigenvalue of the operator \(u^{(5)}(t)\) in \(X_{4}\) is \(- ( \lambda_{5}^{1} ) ^{5}\).

Therefore, we conclude, without calculating it explicitly, that the Green’s function related to the operator \(u^{(5)}(t)+M u(t)\):

  • in \(X_{1}\) is positive if, and only if, \(M\in ( - ( \lambda _{5}^{1} ) ^{5}, ( \lambda_{5}^{2} ) ^{5} ]\),

  • in \(X_{2}\) is negative if, and only if, \(M\in [ - ( \lambda _{5}^{2} ) ^{5}, ( \lambda_{5}^{2} ) ^{5} )\),

  • in \(X_{3}\) is positive if, and only if, \(M\in ( - ( \lambda _{5}^{2} ) ^{5}, ( \lambda_{5}^{2} ) ^{5} ]\),

  • in \(X_{4}\) is negative if, and only if, \(M\in[ - ( \lambda _{5}^{2} ) ^{5}, ( \lambda_{5}^{1} ) ^{5} )\).

  • Order 6

\(\lambda_{6}^{1}\approxeq8.3788\) is the least positive solution of \(W_{1}^{6}[\lambda^{6}](1)=0\), which is equivalent to

$$\sin(\lambda)-\sqrt{3} \cos \biggl(\frac{\lambda}{2} \biggr) \sinh \biggl( \frac{\sqrt{3} \lambda}{2} \biggr)+\sin \biggl(\frac{\lambda }{2} \biggr) \cosh \biggl( \frac{\sqrt{3} \lambda}{2} \biggr)=0 . $$

\(\lambda_{6}^{2}\approxeq6.70763\) is the least positive solution of \(W_{2}^{6}[-\lambda^{6}](1)=0\), which we can express as

$$-3 e^{\lambda/2} \bigl(e^{2 \lambda}+1 \bigr)+\sqrt{3} \bigl(e^{\lambda}-1 \bigr)^{3} \sin \biggl(\frac{\sqrt{3} \lambda }{2} \biggr)+ \bigl(e^{\lambda}+1 \bigr)^{3} \cos \biggl( \frac{\sqrt{3} \lambda}{2} \biggr)-2 e^{3 \lambda/2} \cos (\sqrt{3} \lambda )=0 . $$

\(\lambda_{6}^{3}\approxeq6.28319\) is the least positive solution of \(W_{3}^{6}[\lambda^{6}](1)=0\), which can be represented as the first positive root of the following equation:

$$\sin(\lambda) \bigl(-\cos(\lambda)+\cosh (\sqrt{3} \lambda )+4 \bigr)-8 \sin \biggl(\frac{\lambda}{2} \biggr) \cosh \biggl(\frac{\sqrt{3} \lambda}{2} \biggr)=0 . $$

The biggest negative eigenvalue of the operator \(u^{(6)}(t)\) in \(X_{1}\) and \(X_{5}\) is given by \(- (\lambda_{6}^{1} )^{6}\).

The least positive eigenvalue of the operator \(u^{(6)}(t)\) in \(X_{2}\) and \(X_{4}\) is \((\lambda_{6}^{2} )^{6}\).

The biggest negative eigenvalue of the operator \(u^{(6)}(t)\) in \(X_{3}\) is \(- (\lambda_{6}^{3} )^{6}\).
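The two displayed equations can also be checked numerically. Note that \(\lambda_{6}^{3}\) is exactly 2π (both \(\sin(\lambda)\) and \(\sin(\lambda/2)\) vanish there), consistent with the approximation 6.28319. A sketch with brackets of our own choosing:

```python
import math
from scipy.optimize import brentq

s3 = math.sqrt(3.0)

def f1(l):  # equation for lambda_6^1
    return (math.sin(l) - s3 * math.cos(l / 2) * math.sinh(s3 * l / 2)
            + math.sin(l / 2) * math.cosh(s3 * l / 2))

def f3(l):  # equation for lambda_6^3
    return (math.sin(l) * (-math.cos(l) + math.cosh(s3 * l) + 4)
            - 8 * math.sin(l / 2) * math.cosh(s3 * l / 2))

lam61 = brentq(f1, 8.3, 8.5)    # ~ 8.3788
lam63 = brentq(f3, 6.2, 6.36)   # ~ 6.28319 = 2*pi
```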

Hence, we can affirm, without calculating it explicitly, that the Green’s function related to the operator \(u^{(6)}(t)+M u(t)\):

  • in \(X_{1}\) or in \(X_{5}\) is negative if, and only if, \(M\in [ - ( \lambda_{6}^{2} ) ^{6}, ( \lambda_{6}^{1} ) ^{6} )\),

  • in \(X_{2}\) or in \(X_{4}\) is positive if, and only if, \(M\in ( - ( \lambda_{6}^{2} ) ^{6}, ( \lambda_{6}^{3} ) ^{6} ]\),

  • in \(X_{3}\) is negative if, and only if, \(M\in [ - ( \lambda _{6}^{2} ) ^{6}, ( \lambda_{6}^{3} ) ^{6} )\).

  • Order 7

We are not able to obtain the eigenvalues of the operator \(u^{(7)}(t)\) analytically, but we can compute them numerically.

The least positive eigenvalue of this operator in \(X_{1}\) is \(( \lambda_{7}^{1} ) ^{7}\), where \(\lambda_{7}^{1}\approxeq9.82677\).

The biggest negative eigenvalue in \(X_{2}\) is \(- ( \lambda _{7}^{2} ) ^{7}\), where \(\lambda_{7}^{2}\approxeq7.85833\).

The least positive eigenvalue in \(X_{3}\) is \(( \lambda_{7}^{3} ) ^{7}\), where \(\lambda_{7}^{3}\approxeq7.1347\).

The biggest negative eigenvalue in \(X_{4}\) is \(- ( \lambda _{7}^{3} ) ^{7}\).

The least positive eigenvalue in \(X_{5}\) is \(( \lambda_{7}^{2} ) ^{7}\).

The biggest negative eigenvalue in \(X_{6}\) is \(- ( \lambda_{7}^{1} ) ^{7}\).

So, we conclude, without calculating it explicitly, that the Green’s function related to the operator \(u^{(7)}(t)+M u(t)\):

  • in \(X_{1}\) is positive if, and only if, \(M\in ( - ( \lambda _{7}^{1} ) ^{7}, ( \lambda_{7}^{2} ) ^{7} ]\),

  • in \(X_{2}\) is negative if, and only if, \(M\in [ - ( \lambda _{7}^{3} ) ^{7}, ( \lambda_{7}^{2} ) ^{7} )\),

  • in \(X_{3}\) is positive if, and only if, \(M\in ( - ( \lambda _{7}^{3} ) ^{7}, ( \lambda_{7}^{3} ) ^{7} ]\),

  • in \(X_{4}\) is negative if, and only if, \(M\in [ - ( \lambda _{7}^{3} ) ^{7}, ( \lambda_{7}^{3} ) ^{7} )\),

  • in \(X_{5}\) is positive if, and only if, \(M\in ( - ( \lambda _{7}^{2} ) ^{7}, ( \lambda_{7}^{3} ) ^{7} ]\),

  • in \(X_{6}\) is negative if, and only if, \(M\in [ - ( \lambda _{7}^{2} ) ^{7}, ( \lambda_{7}^{1} ) ^{7} )\).

  • Order 8

\(\lambda_{8}^{1}\approxeq11.2846\), \(\lambda_{8}^{2}\approxeq9.06306\), \(\lambda _{8}^{3}\approxeq8.09971\), and \(\lambda_{8}^{4}\approxeq7.81871\) can be obtained analytically as the least positive solutions of \(W_{1}^{8}[\lambda ^{8}](1)=0\), \(W_{2}^{8}[-\lambda^{8}](1)=0\), \(W_{3}^{8}[\lambda^{8}](1)=0\), and \(W_{4}^{8}[-\lambda^{8}](1)=0\), respectively, but their expressions are too long to display here and do not provide any relevant information.

The biggest negative eigenvalue of the operator \(u^{(8)}(t)\) in \(X_{1}\) and \(X_{7}\) is given by \(-(\lambda_{8}^{1})^{8}\).

The least positive eigenvalue of the operator \(u^{(8)}(t)\) in \(X_{2}\) and \(X_{6}\) is given by \((\lambda_{8}^{2})^{8}\).

The biggest negative eigenvalue of the operator \(u^{(8)}(t)\) in \(X_{3}\) and \(X_{5}\) is given by \(-(\lambda_{8}^{3})^{8}\).

The least positive eigenvalue of the operator \(u^{(8)}(t)\) in \(X_{4}\) is \((\lambda_{8}^{4})^{8}\).

So, we can affirm, without calculating it explicitly, that the Green’s function related to the operator \(u^{(8)}(t)+M u(t)\):

  • in \(X_{1}\) or in \(X_{7}\) is negative if, and only if, \(M\in [ - ( \lambda_{8}^{2} ) ^{8}, ( \lambda_{8}^{1} ) ^{8} )\),

  • in \(X_{2}\) or in \(X_{6}\) is positive if, and only if, \(M\in ( - ( \lambda_{8}^{2} ) ^{8}, ( \lambda_{8}^{3} ) ^{8} ]\),

  • in \(X_{3}\) or in \(X_{5}\) is negative if, and only if, \(M\in [ - ( \lambda_{8}^{4} ) ^{8}, ( \lambda_{8}^{3} ) ^{8} )\),

  • in \(X_{4}\) is positive if, and only if, \(M\in ( - ( \lambda _{8}^{4} ) ^{8}, ( \lambda_{8}^{3} ) ^{8} ]\).

As we have said before, third-order problems were explicitly calculated in [25]. Fourth-order problems were calculated in [26] for \(X_{2}\) and in [27] for \(X_{1}\) and \(X_{3}\). But in all of these cases it was necessary to obtain the expression of the Green’s function and analyze it.

Moreover, in all the problems treated in [25–27] the open optimal interval where the Green’s function has constant sign coincides with the optimal interval where equation (1) is disconjugate.

However, in Theorem 2.1 of [28] the following characterization of the interval of disconjugacy is proved.

Theorem 4.2

Let \(\bar{M}\in\mathbb{R}\) and \(n\geq2\) be such that \(T_{n}[\bar{M}] u(t)=0\) is a disconjugate equation on I. Then, \(T_{n}[M] u(t)=0\) is a disconjugate equation on I if, and only if, \(M\in(\bar{M}-\lambda _{1},\bar{M}-\lambda_{2})\), where:

  • \(\lambda_{1}=+\infty\) if \(n=2\) and, for \(n>2\), \(\lambda_{1}>0\) is the minimum of the least positive eigenvalues of \(T_{n}[\bar{M}]\) in \(X_{k}\), with \(n-k\) even.

  • \(\lambda_{2}<0\) is the maximum of the biggest negative eigenvalues of \(T_{n}[\bar{M}]\) in \(X_{k}\), with \(n-k\) odd.

As a consequence, we see that the interval of constant sign of the Green’s function and the interval of disconjugacy of the linear operator do not coincide in general. We have already proved (see Lemma 3.4) that, while equation (1) is disconjugate, its related Green’s function must be of constant sign. So, if the two intervals do not coincide, the optimal interval where equation (1) is disconjugate must be contained in the open optimal interval where the Green’s function is of constant sign.

Using the characterization given in Theorem 4.2, we can calculate the optimal interval of values of M for which the equation

$$ u^{(5)}(t)+M u(t)=0 ,\quad t\in[0,1], $$

is disconjugate. It is given by \((- ( \lambda_{5}^{2} ) ^{5}, ( \lambda _{5}^{2} ) ^{5})\).

But, as we have shown before, the Green’s function related to the problem in the space \(X_{1}\) remains positive on the interval \((- ( \lambda_{5}^{1} ) ^{5},- ( \lambda_{5}^{2} ) ^{5}]\), where disconjugacy fails. So, the biggest open interval of constant sign is strictly bigger than the optimal interval of disconjugacy.

Remark 4.3

In this kind of problem, if λ is an eigenvalue on \([0,1]\), then \(\frac{\lambda}{(b-a)^{n}}\) is an eigenvalue on \([a,b]\).

So, we can transfer our conclusions about the sign of the Green’s function to any arbitrary interval \([a,b]\).

4.2 Operators with constant coefficients

This characterization of the interval where the Green’s function has constant sign is also useful for problems with more nonzero coefficients.

For example, we can consider the fourth-order operator

$$ T_{n}[M] u(t)\equiv u^{(4)}(t)+10 u^{(3)}(t)+10 u''(t)+10 u'(t)+M u(t),\quad t\in[0,1]. $$
(43)

We can show, using the characterization given in Theorem 2.3, that \(T_{n}[0] u(t)=0\) is a disconjugate equation on \([0,1]\) and, so, Theorem 3.1 can be applied.

First, we calculate numerically the eigenvalues closest to zero in each \(X_{k}\), \(k=1,2,3\).

  • The biggest negative eigenvalue in \(X_{1}\) is \(-(7.02782)^{4}\).

  • The least positive eigenvalue in \(X_{2}\) is \((5.27208)^{4}\).

  • The biggest negative eigenvalue in \(X_{3}\) is \(-(5.97041)^{4}\).

Realize that in this case we need to obtain the three corresponding Wronskians because it is not possible to connect the eigenvalues in \(X_{1}\) with those in \(X_{3}\) by means of the corresponding adjoint equation.
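Eigenvalues like these can be approximated by a standard shooting scheme (a sketch of one possible method, not the authors' Wronskian computation): integrate a basis of solutions satisfying the conditions at \(t=0\) and locate the parameter values where the determinant of the conditions at \(t=1\) vanishes. The helper below is validated on the constant-coefficient case \(u^{(4)}=\lambda u\) in \(X_{2}\), whose least positive eigenvalue satisfies \(\lambda^{1/4}\approxeq4.73004\); the coefficients of (43) can be supplied through `coeffs`, adjusting the bracket accordingly.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def det_X2(lam, coeffs=(0.0, 0.0, 0.0)):
    """Shooting determinant for u'''' + c3*u''' + c2*u'' + c1*u' = lam*u
    with the (2,2) conditions u(0) = u'(0) = 0, u(1) = u'(1) = 0."""
    c1, c2, c3 = coeffs

    def rhs(t, y):  # y = (u, u', u'', u''')
        return [y[1], y[2], y[3],
                lam * y[0] - c3 * y[3] - c2 * y[2] - c1 * y[1]]

    cols = []
    for y0 in ([0, 0, 1, 0], [0, 0, 0, 1]):  # basis with u(0) = u'(0) = 0
        sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:2, -1])            # (u(1), u'(1))
    return float(np.linalg.det(np.array(cols)))

# Validation: least positive eigenvalue of u'''' = lam*u in X_2
# satisfies lam**(1/4) ~ 4.73004 (clamped-clamped beam).
lam = brentq(det_X2, 400.0, 600.0)
```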

So, we conclude that the Green’s function related to the operator \(T_{n}[M] u(t)\) defined in (43):

  • in \(X_{1}\) is negative if, and only if, \(M\in[-(5.27208)^{4},(7.02782)^{4})\),

  • in \(X_{2}\) is positive if, and only if, \(M\in(-(5.27208)^{4},(5.97041)^{4}]\),

  • in \(X_{3}\) is negative if, and only if, \(M\in[-(5.27208)^{4},(5.97041)^{4})\).

Notice that in this case the interval of disconjugacy is \((-(5.27208)^{4},(5.97041)^{4})\). So, we have obtained an example of a fourth-order equation for which the interval of disconjugacy does not coincide with the biggest open interval where the Green’s function has constant sign in \(X_{1}\).

In the sequel, we show an example where the operator \(T_{n}[M]\) does not satisfy the disconjugacy hypothesis for \(\bar{M}=0\).

Now we choose the operator

$$ T_{n}[M] u(t)\equiv u^{(4)}(t)+10 u^{(3)}(t)+550 u'(t)+M u(t) , \quad t\in [0,1] . $$
(44)

The equation \(T_{n}[0] u(t)=0\) is not disconjugate on \([0,1]\); however, analyzing the equation \(T_{n}[-600] u(t)=0\), we can affirm, by means of Theorem 2.3, that it is disconjugate on \([0,1]\).

Hence, Theorem 3.1 can be applied to the operator \(T_{n}[-600] u(t)\).

If we calculate the eigenvalues closest to zero we have:

  • The biggest negative eigenvalue of \(T_{n}[-600] u(t)\) in \(X_{1}\) is \(-9\text{,}565.99\).

  • The least positive eigenvalue in \(X_{2}\) is 11.5685.

  • The biggest negative eigenvalue in \(X_{3}\) is −28.9753.

Hence, using Theorem 2.10, we can affirm that operator \(T_{n}[M] u(t)\), defined in (44):

  • in \(X_{1}\) is inverse negative if, and only if, \(M\in [-600-11.5685,-600+9\text{,}565.99)=[-611.5685,8\text{,}965.99)\),

  • in \(X_{2}\) is inverse positive if, and only if, \(M\in (-600-11.5685,-600+28.9753]=(-611.5685,-571.0247]\),

  • in \(X_{3}\) is inverse negative if, and only if, \(M\in [-600-11.5685,-600+28.9753)=[-611.5685,-571.0247)\).
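Each endpoint above is of the form \(\bar{M}-\mu\), with \(\bar{M}=-600\) and μ the corresponding eigenvalue. A minimal arithmetic check (the variable names are ours):

```python
M_bar = -600.0
mu_pos = 11.5685    # least positive eigenvalue of T_n[-600] u(t)
mu_neg = -28.9753   # biggest negative eigenvalue closest to zero
mu_X1 = -9565.99    # biggest negative eigenvalue in X_1

left = M_bar - mu_pos        # shared left endpoint: -611.5685
right_X1 = M_bar - mu_X1     # 8965.99
right_X23 = M_bar - mu_neg   # -571.0247
```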

4.3 Operators with non-constant coefficients

We have already seen that, by applying Theorem 3.1, it is much easier to calculate the optimal intervals of M for which the Green’s function related to the operator \(T_{n}[M] u(t)\) has constant sign than to obtain the expression of the Green’s function explicitly. For an operator with non-constant coefficients this characterization is even more useful, because in the majority of situations we are not able to obtain the explicit expression of the Green’s function.

Consider now the third-order operator

$$ T_{n}[M] u(t)\equiv u^{(3)}(t)+t u'(t)+M u(t) ,\quad t\in[0,1] $$
(45)

for which, by means of Theorem 2.3, we can verify that the equation \(T_{n}[0] u(t)=0\) is disconjugate on \([0,1]\).

If we calculate numerically the eigenvalues closest to zero of the operator defined in (45) we obtain:

  • \((4.19369)^{3}\) is the least positive eigenvalue of the operator \(T_{n}[0] u(t)\) in \(X_{1}\).

  • \(-(4.21255)^{3}\) is the biggest negative eigenvalue of the operator \(T_{n}[0] u(t)\) in \(X_{2}\).

So, we can affirm

  • the Green’s function related to the operator \(T_{n}[M] u(t)\) in \(X_{1}\) is positive if, and only if, \(M\in(-(4.19369)^{3},(4.21255)^{3}]\),

  • the Green’s function related to the operator \(T_{n}[M] u(t)\) in \(X_{2}\) is negative if, and only if, \(M\in[-(4.19369)^{3},(4.21255)^{3})\).

We can also apply the method to a fourth-order operator whose eigenvalues were likewise obtained numerically.

$$ T_{n}[M]\equiv u^{(4)}(t)+e^{2 t} u'(t)+M u(t) , \quad t\in[0,1]. $$
(46)

We can verify, by means of Theorem 2.3 again, that \(T_{n}[0] u(t)=0\) is disconjugate on \([0,1]\).

If we calculate its eigenvalues we obtain:

  • The biggest negative eigenvalue in \(X_{1}\) is \(-(5.5325)^{4}\).

  • The least positive eigenvalue in \(X_{2}\) is \((4.7235)^{4}\).

  • The biggest negative eigenvalue in \(X_{3}\) is \(-(5.5815)^{4}\).

So, applying Theorem 3.1, we conclude that:

  • the Green’s function related to the operator \(T_{n}[M] u(t)\) in \(X_{1}\) is negative if, and only if, \(M\in[-(4.7235)^{4},(5.5325)^{4})\),

  • the Green’s function related to the operator \(T_{n}[M] u(t)\) in \(X_{2}\) is positive if, and only if, \(M\in(-(4.7235)^{4},(5.5325)^{4}]\),

  • the Green’s function related to the operator \(T_{n}[M] u(t)\) in \(X_{3}\) is negative if, and only if, \(M\in[-(4.7235)^{4},(5.5815)^{4})\).

5 Disconjugacy hypothesis cannot be removed in Theorem 3.1

In this last section we show that the hypothesis in Theorem 3.1 that \(T_{n}[\bar{M}]\) be disconjugate for some \(M=\bar{M}\) cannot be removed in general.

To this end, we consider the operator

$$ T_{4}[M] u(t)=u^{(4)}(t)-1\text{,}000 u'(t)+M u(t) ,\quad t\in[0,1] , $$
(47)

coupled with two-point boundary value conditions

$$ u(0)=u'(0)=u''(0)=u(1)=0. $$
(48)

Equation (47) is not disconjugate for \(M=0\); indeed,

$$u(t)=\frac{-e^{10 (t-1)}-2 e^{5-5 t} \cos (5 \sqrt{3} (t-1) )+3}{3\text{,}000} $$

is a solution of \(T_{4}[0] u(t)=0\) with five zeros on \([0,1]\).
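This can be verified symbolically. The sketch below (SymPy) checks that u solves \(T_{4}[0]u(t)=0\), that \(t=1\) is a zero of multiplicity exactly three, and locates the two remaining zeros in \((0,1)\), giving five zeros counting multiplicity; the root brackets are our own choice.

```python
import sympy as sp

t = sp.symbols('t')
u = (-sp.exp(10*(t - 1)) - 2*sp.exp(5 - 5*t)*sp.cos(5*sp.sqrt(3)*(t - 1))
     + 3) / 3000

# u solves u'''' - 1000*u' = 0 (characteristic roots 0, 10, -5 +/- 5*sqrt(3)*i).
assert sp.simplify(sp.diff(u, t, 4) - 1000*sp.diff(u, t)) == 0

# t = 1 is a zero of multiplicity exactly three.
assert all(sp.simplify(sp.diff(u, t, k).subs(t, 1)) == 0 for k in range(3))
assert sp.simplify(sp.diff(u, t, 3).subs(t, 1)) == -1

# Two further simple zeros in (0, 1): five zeros counting multiplicity.
r1 = sp.nsolve(u, t, (0.05, 0.2), solver='bisect')
r2 = sp.nsolve(u, t, (0.3, 0.5), solver='bisect')
```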

First, we verify that the Green’s function related to problem (47)-(48) satisfies condition (\(\mathrm{N}_{g}\)) for \(\bar{M}=0\). So, by means of Theorem 2.13, we know that \(N_{T}=[-\mu ,-\lambda_{1})\) for some \(\mu\ge0\).

Second, we prove that \(\mu\neq\lambda_{2}\), with \(\lambda _{2}\) the first eigenvalue of the operator \(T_{4}[0]\) on the space \(X_{2}\).

As a consequence, we deduce that the validity of Theorem 3.1 is not ensured when the disconjugacy assumption fails.

We point out that, since the existence of at least one \(\bar{M}\) for which the operator \(T_{4}[\bar{M}]\) is disconjugate on \([0,1]\) would imply the validity of Theorem 3.1, the operator \(T_{4}[M]\) cannot be disconjugate on \([0,1]\) for any real parameter M, not only for \(\bar{M}=0\).

First, we obtain the expression of the Green’s function related to the operator \(T_{4}[0] u(t)\) in \(X_{3}\), \(g_{0}(t,s)\). By means of the Mathematica package developed in [24], we see that it obeys the expression

$$\left \{ \textstyle\begin{array}{l} \frac{e^{10 (t-s)}-\frac{e^{-5 (2 s+t)} (-3 e^{10 s+5}+2 e^{15 s} \cos (5 \sqrt{3} (s-1) )+e^{15} ) (-3 e^{5 t}+e^{15 t}+2 \cos (5 \sqrt{3} t ) )}{-3 e^{5}+e^{15}+2 \cos (5 \sqrt{3} )}+2 e^{5 s-5 t} \cos (5 \sqrt{3} (t-s) )-3}{3\text{,}000}, \\ \quad 0\leq s\leq t\leq 1, \\ -\frac{e^{-5 (2 s+t)} (-3 e^{10 s+5}+2 e^{15 s} \cos (5 \sqrt {3} (s-1) )+e^{15} ) (-3 e^{5 t}+e^{15 t}+2 \cos (5 \sqrt{3} t ) )}{3\text{,}000 (-3 e^{5}+e^{15}+2 \cos (5 \sqrt{3} ) )}, \\ \quad 0< t< s\leq1. \end{array}\displaystyle \right . $$

Let us see now that \(g_{0}(t,s)\leq0\) on \([0,1]\times[0,1]\) and that it satisfies condition (\(\mathrm{N}_{g}\)), i.e., the following inequality is satisfied:

$$\frac{g_{0}(t,s)}{t^{3} (t-1)}>0 \quad \mbox{for all } (t,s)\in[0,1]\times (0,1) . $$

To study the behavior on a neighborhood of \(t=0\) and \(t=1\), we define the following functions:

$$\begin{aligned}& k_{1}(s)=\lim_{t\rightarrow0^{+}}\frac{g_{0}(t,s)}{t^{3} (t-1)} = \frac{e^{-10 s} (-3 e^{10 s+5}+2 e^{15 s} \cos (5 \sqrt{3} (s-1) )+e^{15} )}{6 (-3 e^{5}+e^{15}+2 \cos (5 \sqrt {3} ) )}, \\& k_{2}(s)=\lim_{t\rightarrow1^{-}}\frac{g_{0}(t,s)}{t^{3} (t-1)} \\& \hphantom{k_{2}(s)}= \frac {1}{300} e^{-10 s-5} \biggl(e^{15 s} \bigl(\sqrt{3} \sin \bigl(5 \sqrt {3} (s-1) \bigr)-\cos \bigl(5 \sqrt{3} (s-1) \bigr) \bigr)+e^{15} \\& \hphantom{k_{2}(s)={}}{}+ \frac{ (-3 e^{10 s+5}+2 e^{15 s} \cos (5 \sqrt{3} (s-1) )+e^{15} ) (-e^{15}+\sqrt{3} \sin (5 \sqrt{3} )+\cos (5 \sqrt{3} ) )}{-3 e^{5}+e^{15}+2 \cos (5 \sqrt {3} )} \biggr) . \end{aligned}$$

In the sequel we will prove that both functions are strictly positive on \((0,1)\).

It is not difficult to verify that \(k_{1}(1)=k_{1}'(1)=k_{1}''(1)=0\) and that

$$k_{1}^{(3)}(1)=-\frac{500 e^{5}}{-3 e^{5}+e^{15}+2 \cos (5 \sqrt {3} )}< 0 . $$

If we prove that \(k_{1}^{(3)}(s)\) is strictly negative on \([0,1]\), then \(k_{1}''(s)\) is positive and \(k_{1}'(s)\) negative on \([0,1)\), and we deduce that \(k_{1}(s)>0\) for \(s\in(0,1)\).

Due to the fact that

$$k_{1}^{(3)}(s)=-\frac{500 e^{-10 s} (2 e^{15 s} \cos (5 \sqrt {3} (s-1) )+e^{15} )}{3 (-3 e^{5}+e^{15}+2 \cos (5 \sqrt{3} ) )} , $$

we only need to check that

$${k_{1}}_{1}(s):=2 e^{15 s} \cos \bigl(5 \sqrt{3} (s-1) \bigr)+e^{15}>0 ,\quad s\in[0,1] . $$

But the previous inequality holds immediately from the fact that

$$\min_{s \in[0,1]}{k_{1}}_{1}(s)=e^{15} \bigl(1-e^{-\frac{2 \pi}{ \sqrt {3}}} \bigr) >0 . $$
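This minimum is attained at the interior critical point \(s=1-\frac{2\pi}{15\sqrt{3}}\), where \(\cos (5\sqrt{3}(s-1) )=-\frac{1}{2}\). A numerical spot check (a sketch, over a grid of our own choosing):

```python
import numpy as np

s = np.linspace(0.0, 1.0, 200001)
k11 = 2*np.exp(15*s)*np.cos(5*np.sqrt(3)*(s - 1)) + np.exp(15)

# Claimed minimum value and the critical point where it is attained.
claimed = np.exp(15)*(1 - np.exp(-2*np.pi/np.sqrt(3)))
s_star = 1 - 2*np.pi/(15*np.sqrt(3))   # ~ 0.75816
```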

Consider now the function \(k_{2}\). We see that \(k_{2}(0)=0\) and

$$k_{2}'(0)=\frac{1+e^{5} (e^{15}-\sqrt{3} (e^{10}-1 ) \sin (5 \sqrt{3} )- (1+e^{10} ) \cos (5 \sqrt {3} ) )}{10 e^{5} (-3 e^{5}+e^{15}+2 \cos (5 \sqrt{3} ) )}>0 . $$

So, we study the sign of its first derivative

$$ k_{2}'(s)=\frac{e^{-10 s-5}}{30 (-3 e^{5}+e^{15}+2 \cos (5 \sqrt{3} ) )} {k_{2}}_{0}(s), $$

with

$$\begin{aligned} {k_{2}}_{0}(s) =&e^{15 s} \bigl(\sqrt{3} \bigl(e^{5} \bigl(2 e^{10}-3 \bigr) \sin \bigl(5 \sqrt{3} (s-1) \bigr)+\sin (5 \sqrt{3} s ) \bigr) -3 e^{5} \cos \bigl(5 \sqrt{3} (s-1) \bigr) \\ &{}+3 \cos (5 \sqrt{3} s ) \bigr) +e^{15} \bigl(3 e^{5}- \sqrt {3} \sin (5 \sqrt{3} )-3 \cos (5 \sqrt{3} ) \bigr). \end{aligned}$$

It is clear that such a function satisfies

$${k_{2}}_{0}(s)> \bigl(-3-3 e^{5}+\sqrt{3} \bigl(-1+3 e^{5}-2 e^{15} \bigr) \bigr) e^{15 s}+e^{15} \bigl(3 e^{5}-\sqrt{3} \sin (5 \sqrt{3} )-3 \cos (5 \sqrt{3} ) \bigr), $$

which is positive for

$$\begin{aligned} s < &\frac{1}{15} \bigl(\log \bigl(3 e^{20}-\sqrt{3} e^{15} \sin (5 \sqrt{3} )-3 e^{15} \cos (5 \sqrt{3} ) \bigr) \\ &{}- \log \bigl(3+\sqrt{3}+3 e^{5}-3 \sqrt{3} e^{5}+2 \sqrt{3} e^{15} \bigr) \bigr) \\ \approxeq&0.32389. \end{aligned}$$

Moreover, for \(s\in[1-\frac{2 \pi}{5 \sqrt{3}},1-\frac{ \pi}{5 \sqrt{3}}]\approxeq[0.27448,0.63724]\) we see that

$${k_{2}}_{0}(s)> \bigl(-4-3 e^{5} \bigr) e^{15 s}+e^{15} \bigl(3 e^{5}-\sqrt{3} \sin (5 \sqrt{3} )-3 \cos (5 \sqrt{3} ) \bigr) , $$

and the right-hand side of the previous inequality is positive for

$$s< \frac{1}{15} \bigl(\log \bigl(3 e^{20}-\sqrt{3} e^{15} \sin (5 \sqrt{3} )-3 e^{15} \cos (5 \sqrt{3} ) \bigr)- \log \bigl(4+3 e^{5} \bigr) \bigr)\approxeq0.99954. $$

Then, we see that \(k_{2}'(s)>0\) for \(s\in[0,1-\frac{ \pi}{5 \sqrt{3}}]\), and, as a consequence, the same holds for \(k_{2}(s)\).

On the other hand, we see that \(k_{2}(1)=k_{2}'(1)=0\) and \(k_{2}''(1)=1\), moreover,

$$k_{2}''(s)=\frac{e^{-10 s-5}}{3 (-3 e^{5}+e^{15}+2 \cos (5 \sqrt {3} ) )} {k_{2}}_{1}(s) , $$

where

$$\begin{aligned} {k_{2}}_{1}(s) =&e^{15 s} \bigl(\sqrt{3} \bigl(e^{15} \sin \bigl(5 \sqrt{3} (s-1) \bigr)-\sin (5 \sqrt{3} s ) \bigr)+3 e^{5} \bigl(e^{10}-2 \bigr) \cos \bigl(5 \sqrt{3} (s-1) \bigr) \\ &{}+3 \cos (5 \sqrt{3} s ) \bigr)+e^{15} \bigl(-3 e^{5}+ \sqrt{3} \sin (5 \sqrt{3} )+3 \cos (5 \sqrt {3} ) \bigr) . \end{aligned}$$

Now, we must verify that \({k_{2}}_{1}(s)>0\).

If \(s>0.9\) we can bound it from below by the following function:

$$e^{15 s} \biggl(-3-\sqrt{3} \bigl(1+e^{15} \bigr)+3 e^{5} \bigl(e^{10}-2 \bigr) \cos \biggl(\frac{\sqrt{3}}{2} \biggr) \biggr)+e^{15} \bigl(-3 e^{5}+\sqrt{3} \sin (5 \sqrt{3} )+3 \cos (5 \sqrt{3} ) \bigr). $$

It is clear that it is positive for \(s\in(s_{1},1]\), where

$$s_{1}=\frac{1}{15} \log \biggl(\frac{-3 e^{20}+\sqrt{3} e^{15} \sin (5 \sqrt{3} )+3 e^{15} \cos (5 \sqrt{3} )}{3+\sqrt {3}+\sqrt{3} e^{15}+6 e^{5} \cos (\frac{\sqrt{3}}{2} )-3 e^{15} \cos (\frac{\sqrt {3}}{2} )} \biggr) \approxeq0.510335, $$

which ensures that \(k_{2}(s)>0\) on \((0.9,1)\).

On the other hand, for every \(s\in[0,1]\), function \(300 e^{10 s+5} k_{2}(s)\) is bounded from below by

$${k_{2}}_{2}(s)=\frac{1}{100} \bigl(-476 e^{15 s}+303 e^{10 s+5}-e^{15} \bigr) , $$

which is positive for \(s\in(s_{2},s_{3})\), where

$$\begin{aligned} s_{2} =&1+\frac{1}{5} \log \biggl(\frac{101}{476}+ \frac{101}{476} \sqrt{3} \sin \biggl(\frac{1}{3} \tan^{-1} \biggl(\frac{476 \sqrt {973\text{,}657}}{917\text{,}013} \biggr) \biggr) \\ &{}-\frac{101}{476} \cos \biggl( \frac{1}{3} \tan^{-1} \biggl(\frac{476 \sqrt {973\text{,}657}}{917\text{,}013} \biggr) \biggr) \biggr) \\ \approxeq&0.438593 \end{aligned}$$

and

$$s_{3}=1+\frac{1}{5} \log \biggl(\frac{101}{476}+ \frac{101}{238} \cos \biggl(\frac{1}{3} \tan^{-1} \biggl( \frac{476 \sqrt{973\text{,}657}}{917\text{,}013} \biggr) \biggr) \biggr)\approxeq0.908 . $$

So, we conclude that \(k_{2}(s)>0\) for every \(s\in(0,1)\).

Now, in order to deduce condition (\(\mathrm{N}_{g}\)), we only have to verify that \(g_{0}(t,s)<0\) for every \((t,s)\in(0,1)\times(0,1)\).

If \(t< s\) we can express

$$g_{0}(t,s)=-\frac{e^{-5 (2 s+t)} \ell_{1}(s) \ell_{2}(t)}{3\text{,}000 (-3 e^{5}+e^{15}+2 \cos (5 \sqrt{3} ) )} , $$

where

$$\begin{aligned}& \ell_{1}(s) = \bigl(-3 e^{10 s+5}+2 e^{15 s} \cos \bigl(5 \sqrt {3} (s-1) \bigr)+e^{15} \bigr), \\& \ell_{2}(t) = \bigl(-3 e^{5 t}+e^{15 t}+2 \cos (5 \sqrt{3} t ) \bigr). \end{aligned}$$

So, we must prove that both functions are positive on \((0,1)\).

\(\ell_{1}(s)\) is a positive multiple of \(k_{1}(s)\), so, as we have proved before, it is positive for \(s\in(0,1)\).

To study the sign of \(\ell_{2}\), since it satisfies \(\ell_{2}(0)=\ell _{2}'(0)=\ell_{2}''(0)=0\), from the following inequality, valid for all \(t\in[0,1]\):

$$\ell_{2}^{(3)}(t)=375 \bigl(-e^{5 t}+9 e^{15 t}+2 \sqrt{3} \sin (5 \sqrt{3} t ) \bigr)\geq375 \bigl(-e^{5 t}+9 e^{15 t}-2 \sqrt{3} \bigr)>0, $$

we deduce that \(\ell_{2}(t)>0\) for every \(t\in(0,1)\).

Let us see now what happens for \(0< s\leq t<1\).

We can express \(g_{0}(t,s)\) as follows:

$$g_{0}(t,s)=\frac{1}{3\text{,}000} \bigl( p_{2}(t-s)-p_{1}(t,s) \bigr) ,\quad 0< s\leq t< 1 , $$

where

$$ p_{1}(t,s) = \frac{e^{-5(2s+t)} \ell_{1}(s) \ell_{2}(t)}{-3 e^{5}+e^{15}+2 \cos ( 5\sqrt{3} ) } $$

and

$$ p_{2}(r) = e^{10 r}+2 e^{-5 r} \cos ( 5 \sqrt{3} r ) -3 . $$

From the previously proved positiveness of \(\ell_{1}\) and \(\ell_{2}\), we know that \(p_{1}(t,s)>0\).

On the other hand, since \(p_{2}(0)=p_{2}'(0)=p_{2}''(0)=0\), if we verify that \(p_{2}^{(3)}(r)>0\) for every \(r\in[0,1]\), then we conclude that the same holds for \(p_{2}\) on \((0,1]\). In this case

$$p_{2}^{(3)}(r)=1\text{,}000 e^{10 r}+2\text{,}000 e^{-5 r} \cos (5 \sqrt{3} r ). $$

This function is trivially positive whenever \(0 \le r\leq\frac{\pi }{10 \sqrt{3}}\approxeq0.18138\).

Moreover, for every \(r\in[0,1]\), we see that

$$p_{2}^{(3)}(r)>1\text{,}000 e^{10 r}-2\text{,}000 e^{-5 r} , $$

which is positive if, and only if, \(r>\frac{\log(2)}{15}\approxeq0.0462\).

As a consequence, we deduce that \(p_{2}(r)>0\) for every \(r\in(0,1]\).
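Both steps of this argument are easy to confirm numerically (a sketch over a grid of our own choosing):

```python
import numpy as np

r = np.linspace(1e-3, 1.0, 10001)
s3 = np.sqrt(3.0)

# p2 and its third derivative, as in the text.
p2 = np.exp(10*r) + 2*np.exp(-5*r)*np.cos(5*s3*r) - 3
d3p2 = 1000*np.exp(10*r) + 2000*np.exp(-5*r)*np.cos(5*s3*r)
```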

Then if we prove that \(p_{2}(t-s)< p_{1}(t,s)\) for \(0< s\leq t<1\), we can conclude that \(g_{0}(t,s)<0\).

Notice that two strictly convex functions on a suitable interval have at most two common points. In the sequel, we use this property to prove our result.

Since by definition \(g_{0}(1,s)=0\), we know that \(p_{1}(1,s)=p_{2}(1-s)\), for every fixed \(s\in(0,1)\).

From the fact, proved before, that \(k_{2}>0\) on \((0,1)\), we know that \(g_{0}(t,s)<0\) on a neighborhood of \(t=1\) for every \(s\in(0,1)\). Then \(p_{1}(t,s)>p_{2}(t-s)\) on a neighborhood of \(t=1\) for every \(s\in(0,1)\).

Let us now see that, for every \(s\in(0,1)\), \(p_{1}(t,s)\) and \(p_{2}(t-s)\) are convex functions of t.

By direct calculation, we see that

$$\frac{\partial^{2}}{\partial t^{2}}p_{1}(t,s)=\frac{100 e^{-5 (2 s+t)} (e^{15 t}+\sqrt{3} \sin (5 \sqrt{3} t )-\cos (5 \sqrt{3} t ) ) \ell_{1}(s)}{-3 e^{5}+e^{15}+2 \cos ( 5\sqrt {3} ) }, $$

so we only need to verify that

$${p_{1}}_{1}(t)= \bigl(e^{15 t}+\sqrt{3} \sin (5 \sqrt{3} t )-\cos (5 \sqrt{3} t ) \bigr)>0 , \quad t\in(0,1) . $$

The following inequality is trivially fulfilled:

$${p_{1}}_{1}(t)>e^{15 t}+\sqrt{3} \sin (5 \sqrt{3} t )-1=q_{1}(t) ,\quad t\in[0,1] . $$

We see that

$$q_{1}'(t)=15 e^{15 t}+15 \cos(5 \sqrt{3} t)>15 \bigl(e^{15 t}-1 \bigr)>0 , $$

Since \(q_{1}(0)=0\), we conclude that \(q_{1}>0\) on \((0,1]\) and, as a consequence, \({p_{1}}_{1}(t)>0\) on \((0,1]\); hence \(\frac{\partial^{2}}{\partial t^{2}}p_{1}(t,s)>0\).

We have already proved that \(p_{2}^{(3)}(r)>0\), for \(r\in[0,1]\), and \(p_{2}''(0)=0\), so for every fixed \(s\in(0,1)\), \(p_{2}''(t-s)>0\) for every \(t\in(s,1]\).

As a consequence, for any fixed \(s\in(0,1)\), both \(p_{1}(t,s)\) and \(p_{2}(t-s)\) are convex functions of t.

From the fact that \(p_{1}(t,s)>p_{2}(t-s)\) on a neighborhood of \(t=1\), \(p_{1}(1,s)=p_{2}(1-s)\) and, also, \(p_{1}(s,s)>0=p_{2}(0)\), we can affirm that \(p_{1}(t,s)>p_{2}(t-s)\) for \(t\in[s,1)\), and then \(g_{0}(t,s)<0\) for \(0< s\leq t<1\), and condition (\(\mathrm{N}_{g}\)) is fulfilled.
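The conclusion can be sampled numerically; the sketch below evaluates \(p_{1}\) and \(p_{2}\) from their defining formulas and checks that \(p_{1}(t,s)>p_{2}(t-s)\) on \([s,1)\) and \(p_{1}(1,s)=p_{2}(1-s)\) for a few values of s (grid and tolerances are our own choices):

```python
import numpy as np

s3 = np.sqrt(3.0)
D = -3*np.exp(5) + np.exp(15) + 2*np.cos(5*s3)

def ell1(s):
    return -3*np.exp(10*s + 5) + 2*np.exp(15*s)*np.cos(5*s3*(s - 1)) + np.exp(15)

def ell2(t):
    return -3*np.exp(5*t) + np.exp(15*t) + 2*np.cos(5*s3*t)

def p1(t, s):
    return np.exp(-5*(2*s + t)) * ell1(s) * ell2(t) / D

def p2(r):
    return np.exp(10*r) + 2*np.exp(-5*r)*np.cos(5*s3*r) - 3

for s_ in (0.2, 0.5, 0.8):
    tt = np.linspace(s_, 0.999, 2000)
    assert (p1(tt, s_) > p2(tt - s_)).all()          # g0 < 0 for s <= t < 1
    assert abs(p1(1.0, s_) - p2(1.0 - s_)) < 1e-6    # g0(1, s) = 0
```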

Now, as a consequence of Theorem 2.13, we know that \(g_{M}(t,s)\leq 0\) for \(M\in[0,-\lambda_{1})\), where \(\lambda_{1}<0\) is the biggest negative eigenvalue of \(T_{4}[0] u(t)\) in \(X_{3}\).

To verify that Theorem 3.1 does not hold in this case, we will prove that for \(M<0\) the sign change does not occur at the least positive eigenvalue of \(T_{4}[0] u(t)\) in \(X_{2}\).

As in the previous section, we can obtain numerically the first eigenvalues of \(T_{4}[0]\), given by the following approximate values:

  • The biggest negative eigenvalue in \(X_{1}\) is \(\lambda_{3}\approxeq -(12.529)^{4}\).

  • The least positive eigenvalue in \(X_{2}\) is \(\lambda_{2}\approxeq (10.895)^{4}\).

  • The biggest negative eigenvalue in \(X_{3}\) is \(\lambda_{1}\approxeq -(9.458)^{4}\).

Remark 5.1

Realize that, since \(T_{4}[0] u(t)=0\) is not disconjugate on \([0,1]\), we have no a priori information on the sign of the eigenvalues \(\lambda_{3}\) and \(\lambda_{2}\). However, since \(g_{0}\) satisfies (\(\mathrm{N}_{g}\)), we can ensure, without calculating it, that \(\lambda_{1}<0\).

Finally, let us see that there exists \(M^{*}>-\lambda_{2}\) for which \(g_{M^{*}}\) does not have constant sign on \(I \times I\).

We are going to study the following function:

$$v(t)=\frac{\partial}{\partial s}g_{M^{*}}(t,s)_{|_{s=0}} . $$

As we have shown in the proof of Theorem 3.1, if this function does not have constant sign on I, then the Green’s function must change sign in a neighborhood of \(s=0\).

For \(M^{*}=-\frac{59\text{,}584}{9} \approxeq-(9.02032)^{4}\), \(v(t)\) obeys

$$\begin{aligned}& \biggl(\biggl(3 e^{-\frac{1}{3} (9+\sqrt{669} ) t} \biggl(277 e^{6 t} \bigl( (213 \sqrt{669}-27\text{,}875 ) e^{2 \sqrt{\frac{223}{3}} t}-27\text{,}875-213 \sqrt{669} \bigr) \\& \quad {}+446 e^{\sqrt{\frac{223}{3}} t} \biggl(537 \sqrt{831} \sin \biggl(\sqrt { \frac {277}{3}} t \biggr)+34\text{,}625 \cos \biggl(\sqrt{\frac{277}{3}} t \biggr) \biggr) \biggr)\biggr) \\& \quad \Big/8\text{,}441\text{,}871\text{,}944\biggr) \\& \quad {}- \biggl(\biggl(223 \biggl(537 \sqrt{831} \sin \biggl(\sqrt{ \frac{277}{3}} \biggr)+34\text{,}625 \cos \biggl(\sqrt{\frac {277}{3}} \biggr) \biggr) \\& \quad {}+277 e^{6} \biggl(213 \sqrt{669} \sinh \biggl(\sqrt{ \frac{223}{3}} \biggr)-27\text{,}875 \cosh \biggl(\sqrt{\frac {223}{3}} \biggr) \biggr)\biggr) \\& \quad \Big/\biggl(8\text{,}441\text{,}871\text{,}944 \biggl(-277 (2 \text{,}007+152 \sqrt {669} ) e^{6}+277 (152 \sqrt{669}-2\text{,}007 ) e^{6+2 \sqrt{\frac{223}{3}}} \\& \quad {}+446 e^{\sqrt {\frac {223}{3}}} \biggl(2\text{,}493 \cos \biggl(\sqrt{ \frac{277}{3}} \biggr)-98 \sqrt {831} \sin \biggl(\sqrt{\frac{277}{3}} \biggr) \biggr) \biggr)\biggr)\biggr) \\& \quad {}\times 6 e^{\sqrt{\frac{223}{3}}-\frac{1}{3} (9+\sqrt{669} ) t} \biggl(277 e^{6 t} \bigl( (152 \sqrt{669}-2\text{,}007 ) e^{2 \sqrt {\frac{223}{3}} t}-2\text{,}007-152 \sqrt{669} \bigr) \\& \quad {}+446 e^{\sqrt{\frac{223}{3}} t} \biggl(2\text{,}493 \cos \biggl(\sqrt{\frac{277}{3}} t \biggr)-98 \sqrt{831} \sin \biggl(\sqrt {\frac {277}{3}} t \biggr) \biggr) \biggr). \end{aligned}$$

As shown in Figure 3, this function changes sign on I.

Figure 3. Graph of v.

As a consequence the Green’s function has no constant sign for a value of M bigger than \(-\lambda_{2}\).

Moreover, we can determine numerically the interval of M for which \(g_{M}(t,s)\) is nonpositive on \(I \times I\). We observe that a change of sign occurs first in the interior of \(I\times I\): it appears at \((t,s)\approxeq(0.7186,0.0307)\in(0,1)\times(0,1)\) for \(M \approxeq-(7.87022)^{4}\). So we deduce that the interval is given by \([-(7.87022)^{4},-\lambda_{1})\).

As a consequence, this example shows that, if we suppress the disconjugacy hypothesis, Theorem 3.1 is not true in general.

References

  1. De Coster, C, Habets, P: Two-Point Boundary Value Problems: Lower and Upper Solutions. Mathematics in Science and Engineering, vol. 205. Elsevier, Amsterdam (2006)

  2. Ladde, GS, Lakshmikantham, V, Vatsala, AS: Monotone Iterative Techniques for Nonlinear Differential Equations. Pitman, Boston (1985)

  3. Cabada, A: Green’s Functions in the Theory of Ordinary Differential Equations. Briefs in Mathematics. Springer, Berlin (2014)

  4. Cabada, A: The method of lower and upper solutions for second, third, fourth, and higher order boundary value problems. J. Math. Anal. Appl. 185, 302-320 (1994)

  5. Krasnosel’skiĭ, MA: Positive Solutions of Operator Equations. Noordhoff, Groningen (1964)

  6. Cabada, A, Cid, JA: Existence and multiplicity of solutions for a periodic Hill’s equation with parametric dependence and singularities. Abstr. Appl. Anal. 2011, Article ID 545264 (2011)

  7. Graef, JR, Kong, L, Wang, H: A periodic boundary value problem with vanishing Green’s function. Appl. Math. Lett. 21, 176-180 (2008)

  8. Graef, JR, Kong, L, Wang, H: Existence, multiplicity, and dependence on a parameter for a periodic boundary value problem. J. Differ. Equ. 245, 1185-1197 (2008)

  9. Torres, PJ: Existence of one-signed periodic solutions of some second-order differential equations via a Krasnoselskii fixed point theorem. J. Differ. Equ. 190(2), 643-662 (2003)

  10. Cabada, A, Cid, JA: Existence of a non-zero fixed point for non-decreasing operators via Krasnoselskii’s fixed point theorem. Nonlinear Anal. 71, 2114-2118 (2009)

  11. Cabada, A, Cid, JA, Infante, G: New criteria for the existence of non-trivial fixed points in cones. Fixed Point Theory Appl. 2013, 125 (2013)

  12. Cid, JA, Franco, D, Minhós, F: Positive fixed points and fourth-order equations. Bull. Lond. Math. Soc. 41, 72-78 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  13. Franco, D, Infante, G, Perán, J: A new criterion for the existence of multiple solutions in cones. Proc. R. Soc. Edinb., Sect. A 142, 1043-1050 (2012)

    Article  MATH  Google Scholar 

  14. Persson, H: A fixed point theorem for monotone functions. Appl. Math. Lett. 19, 1207-1209 (2006)

    Article  MathSciNet  MATH  Google Scholar 

  15. Coppel, WA: Disconjugacy. Lecture Notes in Mathematics, vol. 220. Springer, Berlin (1971)

    MATH  Google Scholar 

  16. Zettl, A: A constructive characterization of disconjugacy. Bull. Am. Math. Soc. 81, 145-147 (1975)

    Article  MathSciNet  MATH  Google Scholar 

  17. Zettl, A: A characterization of the factors of ordinary linear differential operators. Bull. Am. Math. Soc. 80, 498-499 (1974)

    Article  MathSciNet  MATH  Google Scholar 

  18. Erbe, L: Hille-Wintner type comparison theorem for selfadjoint fourth order linear differential equations. Proc. Am. Math. Soc. 80(3), 417-422 (1980)

    Article  MathSciNet  MATH  Google Scholar 

  19. Kwong, MK, Zettl, A: Asymptotically constant functions and second order linear oscillation. J. Math. Anal. Appl. 93(2), 475-494 (1983)

    Article  MathSciNet  MATH  Google Scholar 

  20. Simons, W: Some disconjugacy criteria for selfadjoint linear differential equations. J. Math. Anal. Appl. 34, 445-463 (1971)

    Article  MathSciNet  MATH  Google Scholar 

  21. Elias, U: Eventual disconjugacy of \(y^{(n)}+\mu p(x) y=0\) for every μ. Arch. Math. 40(2), 193-200 (2004)

    MathSciNet  MATH  Google Scholar 

  22. Cabada, A, Cid, JA, Sanchez, L: Positivity and lower and upper solutions for fourth order boundary value problems. Nonlinear Anal. 67, 1599-1612 (2007)

    Article  MathSciNet  MATH  Google Scholar 

  23. Li, H, Feng, Y, Bu, C: Non-conjugate boundary value problem of a third order differential equation. Electron. J. Qual. Theory Differ. Equ. 2015, 21 (2015)

    Article  MathSciNet  Google Scholar 

  24. Cabada, A, Cid, JA, Máquez-Villamarín, B: Computation of Green’s functions for boundary value problems with Mathematica. Appl. Math. Comput. 219, 1919-1936 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  25. Ma, R, Lu, Y: Disconjugacy and monotone iteration method for third-order equations. Commun. Pure Appl. Anal. 13(3), 1223-1236 (2014)

    Article  MathSciNet  MATH  Google Scholar 

  26. Cabada, A, Enguiça, RR: Positive solutions of fourth order problems with clamped beam boundary conditions. Nonlinear Anal. 74, 3112-3122 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  27. Cabada, A, Fernández-Gómez, C: Constant sign solutions of two-point fourth order problems. Appl. Math. Comput. 263, 122-133 (2015)

    Article  MathSciNet  Google Scholar 

  28. Cabada, A, Saavedra, L: Disconjugacy characterization by means of spectral \((k,n-k)\) problems. Appl. Math. Lett. 52, 21-29 (2016)

    Article  MathSciNet  Google Scholar 

Acknowledgements

Alberto Cabada was partially supported by Ministerio de Educación, Cultura y Deporte, Spain, and FEDER, project MTM2013-43014-P. Lorena Saavedra was partially supported by Ministerio de Educación, Cultura y Deporte, Spain, and FEDER, project MTM2013-43014-P, and Plan I2C scholarship, Consellería de Educación, Cultura e O.U., Xunta de Galicia, and FPU scholarship, Ministerio de Educación, Cultura y Deporte, Spain. The authors would also like to express their special thanks to the reviewer of the paper for his/her remarks, which considerably improved the content of this paper.

Author information

Corresponding author

Correspondence to Alberto Cabada.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors have contributed equally and significantly in writing this article. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article

Cabada, A., Saavedra, L. The eigenvalue characterization for the constant sign Green’s functions of \((k,n-k)\) problems. Bound Value Probl 2016, 44 (2016). https://doi.org/10.1186/s13661-016-0547-1
