The eigenvalue characterization for the constant sign Green's functions of $(k,n-k)$ problems

This paper is devoted to the study of the sign of the Green's function related to a general linear $n^{\rm th}$-order operator, depending on a real parameter, $T_n[M]$, coupled with the $(k,n-k)$ boundary value conditions. If the operator $T_n[\bar M]$ is disconjugate for a given $\bar M$, we describe the interval of values of the real parameter $M$ for which the Green's function has constant sign. One of the extremes of the interval is given by the first eigenvalue of the operator $T_n[\bar M]$ satisfying the $(k,n-k)$ conditions. The other extreme is related to the minimum (maximum) of the first eigenvalues of the $(k-1,n-k+1)$ and $(k+1,n-k-1)$ problems. Moreover, if $n-k$ is even (odd) the Green's function cannot be non-positive (non-negative). To illustrate the applicability of the obtained results, we calculate the parameter intervals of constant sign Green's functions for particular operators. Our method avoids the necessity of calculating the expression of the Green's function. We conclude the paper by presenting a particular equation for which it is shown that the disconjugation hypothesis on the operator $T_n[\bar M]$ for a given $\bar M$ cannot be removed.


Introduction
It is very well known that the validity of the method of lower and upper solutions, coupled with the monotone iterative techniques [12,20], is equivalent to the constant sign of the Green's function related to the linear part of the studied problem [1,2]. Moreover, by means of the celebrated Krasnosel'skiȋ contraction/expansion fixed point theorem [18], nonexistence, existence and multiplicity results are derived from the construction of suitable cones in Banach spaces. Such a construction follows from adequate properties of the Green's function, one of them being its constant sign [3,16,17,25]. Recently, the combination of the two previous methods has proved to be a useful tool to ensure the existence of solutions [4,5,11,15,23].
The main hypothesis consists in assuming that there is a real parameter $\bar M$ for which the operator $T_n[\bar M]$ is disconjugate on $I$.
An exhaustive study of the general theory and the fundamental properties of disconjugacy is compiled in the classical book of Coppel [10]. Different sufficient criteria to ensure the disconjugacy of the linear operator $T_n[0]$ have been developed in the literature; we refer to the classical references [26,27]. Sufficient conditions for particular cases have been obtained in [14,19,24] and, more recently, in [13]. We mention that the operator $u^{(n)}(t) + a_1(t)\,u^{(n-1)}(t)$ is always disconjugate on $I$, see [10] for details; in particular, the results presented here are valid for the operator $u^{(n)}(t) + M\,u(t)$.
As shown in [10], the disconjugacy character implies the constant sign of the Green's function $g_M$ related to problem (1)-(2). However, as we will see along the paper, the converse is not true in general: there are real parameters $M$ for which the Green's function has constant sign but equation (1) is not disconjugate. In other words, disconjugacy is only a sufficient condition to ensure the constant sign of the Green's function related to problem (1)-(2).
In fact, from the disconjugacy of the operator $T_n[\bar M]$ on $I$, it is shown in [10] that the Green's function $g_{\bar M}$ satisfies a suitable condition, stronger than its constant sign. Such a condition implies the one introduced in [1, Section 1.8]. So, following the results given in that reference, we conclude that the set of parameters $M$ for which $g_M$ has constant sign is an interval $H_T$. Moreover, if $n-k$ is even then the maximum of $H_T$ is the opposite of the biggest negative eigenvalue of problem (1)-(2); when $n-k$ is odd, the minimum of $H_T$ is the opposite of the least positive eigenvalue of such a problem.
Thus, the difficulty remains in the characterization of the other extreme of the interval $H_T$. In this case, as shown in [1, Section 1.8], such an extreme is not an eigenvalue of the considered problem, so obtaining its exact value is not immediate. In practical situations it is necessary to obtain the expression of the Green's function, which is, in general, a difficult matter to deal with. We point out that this problem is not restricted to the $(k, n-k)$ boundary conditions; the difficulty in obtaining the non-eigenvalue extreme remains for any kind of linear conditions [7,21]. In [6], provided the operator $T_n[M]$ has constant coefficients, a computer algorithm has been developed that calculates the exact expression of the Green's function coupled with two-point boundary value conditions. However, such an expression is often too complicated to handle, and describing the interval $H_T$ is very difficult in practical situations. In fact, there is no direct construction method for non-constant coefficients.
We mention that disconjugacy theory has been used in [22] to obtain the values for which the third-order operators $u''' + M\,u^{(i)}$, $i = 0, 1, 2$, coupled with conditions $(1,2)$ and $(2,1)$, have a constant sign Green's function. A similar procedure was carried out in [8] for the fourth-order operator $u^{(4)} + M\,u$ coupled with conditions $(2,2)$ and, more recently, in [9] with conditions $(1,3)$ and $(3,1)$. In all these situations the interval of disconjugacy is obtained and then, by means of the expression of the Green's function, it is proved that such an interval is optimal. As we have mentioned above, this coincidence holds only in particular cases such as the ones treated in those papers; in general, the intervals of disconjugacy and of constant sign Green's functions do not coincide for the $n^{\rm th}$-order operator. For this reason, in this work we give a general characterization of the regular extreme of the constant sign interval $H_T$ by means of spectral theory. We will show that it is an eigenvalue of the same operator $T_n[\bar M]$ but related to different two-point boundary value conditions. In fact, if $n-k$ is even, it will be the minimum of the two least positive eigenvalues related to conditions $(k-1, n-k+1)$ and $(k+1, n-k-1)$; it will be the maximum of the two biggest negative eigenvalues of such problems when $n-k$ is odd. So we obtain a general characterization for the general operator $T_n[M]$, avoiding the need to calculate the Green's function and to study the dependence of its sign on the real parameter $M$.
We note that if the operator $T_n[M]$ has constant coefficients, to obtain the corresponding eigenvalues we only need to calculate the determinant of the matrix of coefficients of a linear homogeneous algebraic system. Numerical methods are also valid for the non-constant case.
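For instance, in the simplest constant-coefficient case this reduction to a determinant can be carried out symbolically. The following sketch is only an illustration under choices of our own (interval $[0,1]$, second-order eigenvalue problem $u'' = \lambda u$ with the $(1,1)$ conditions $u(0) = u(1) = 0$; all variable names are ours): the eigenvalues are the values of $\lambda$ that make the boundary-condition matrix singular.

```python
import sympy as sp

t, lam = sp.symbols('t lam')

# Fundamental system of u'' = lam*u (constant coefficients):
u1 = sp.exp(sp.sqrt(lam) * t)
u2 = sp.exp(-sp.sqrt(lam) * t)

# Imposing u(0) = u(1) = 0 on c1*u1 + c2*u2 gives a homogeneous linear
# algebraic system; eigenvalues are the lam making its matrix singular.
mat = sp.Matrix([[u1.subs(t, 0), u2.subs(t, 0)],
                 [u1.subs(t, 1), u2.subs(t, 1)]])
det = mat.det()   # = exp(-sqrt(lam)) - exp(sqrt(lam))

# The biggest negative eigenvalue of u'' in X_1 is -pi**2:
value = sp.simplify(det.subs(lam, -sp.pi**2))   # simplifies to 0
```

Here `det` vanishes exactly at $\lambda = -m^2\pi^2$, $m \in \mathbb{N}$, recovering the eigenvalue $-\pi^2$ used later in the examples.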
It is important to mention that, as a consequence of the obtained results, denoting by $g_M$ the Green's function related to problem (1)-(2), we conclude that $(-1)^{n-k}\,g_M(t,s)$ cannot be negative on $I \times I$ for any $M \in \mathbb{R}$.
The paper is organized as follows: in the preliminary Section 2 we introduce the fundamental concepts needed in the development of the paper. The next section is devoted to the proof of the main result, in which the regular extreme is obtained via spectral theory. The last section is devoted to some particular cases showing the applicability of the obtained results.

Preliminaries
In this section, for the convenience of the reader, we introduce the fundamental tools of the theory of disconjugacy and Green's functions that will be used in the development of the subsequent sections.

Definition 2.1. Let $a_k \in C^{n-k}(I)$ for $k = 1, \dots, n$. The $n^{\rm th}$-order linear differential equation (1) is said to be disconjugate on an interval $I$ if every nontrivial solution has fewer than $n$ zeros on $I$, multiple zeros being counted according to their multiplicity.
Definition 2.2. The functions u 1 , . . . , u n ∈ C n (I) are said to form a Markov system on the interval I if the n Wronskians are positive throughout I.
The following result about this concept is collected in [10, Chapter 3].
Theorem 2.3. The linear differential equation (1) has a Markov fundamental system of solutions on the compact interval I if, and only if, it is disconjugate on I.
In order to introduce the concept of the Green's function related to the $n^{\rm th}$-order scalar problem (1)-(2), we consider the following equivalent first-order vectorial problem, with $x(t) \in \mathbb{R}^n$ and $A(t), B, C \in \mathcal{M}_{n\times n}$ defined as indicated. Here $I_j$, $j = 1, \dots, n-1$, denotes the $j \times j$ identity matrix.
Definition 2.4. We say that $G$ is a Green's function for problem (4) if it satisfies the following properties: (G3) For all $i \neq j$, the scalar functions $G_{i,j}$ have a continuous extension to $I \times I$.
It is very well known that the Green's function related to this problem has the following expression [1, Section 1.4], where $g_M(t,s)$ is the scalar Green's function related to problem (1)-(2).
Using Definition 2.4 we can deduce the properties fulfilled by g M (t, s). In particular, g M ∈ C n−2 (I) and it satisfies, as a function of t, the two-point boundary value conditions (2).
We also mention a result, which appears in [10, Chapter 3, Section 6], that connects disconjugacy and the sign of the Green's function related to problem (1)-(2).

Lemma 2.5. If the linear differential equation (1) is disconjugate and $g_M(t,s)$ is the Green's function related to problem (1)-(2), then the corresponding sign conditions hold.

The adjoint of the operator $T_n[M]$ is given by the following expression (see [1, Section 1.4] or [10, Chapter 3, Section 5] for details), together with its domain of definition. In our case, because of the boundary conditions (2), we can express the domain of the operator $T_n[M]$, $D(T_n[M])$, as $X_k = \{u \in C^n(I) \mid u(a) = \cdots = u^{(k-1)}(a) = u(b) = \cdots = u^{(n-k-1)}(b) = 0\}$, so we can replace expression (8) accordingly. In order to simplify the previous expression, we choose a function $u \in C^n(I)$ satisfying suitable conditions. Realizing that $a_0 = 1$, we obtain a first property of every function in the adjoint domain. Moreover, if we now choose a function in $C^n(I)$ satisfying further conditions, we conclude that any function $v \in D(T_n^*[M])$ has to satisfy them too. Since $a_1 \in C^{n-1}(I)$ and $v(b) = 0$, we conclude that $v'(b) = 0$. Repeating this process, we obtain the domain of the adjoint operator.

The next result appears in [10, Chapter 3, Theorem 9].

Theorem 2.6. Equation (1) is disconjugate on an interval $I$ if, and only if, the adjoint equation $T_n^*[M]\,y(t) = 0$ is disconjugate on $I$.
We denote by $g^*_M(t,s)$ the Green's function of the adjoint operator $T_n^*[M]$. In [1, Section 1.4] the following relationship is proved. Defining now the operator below, we deduce, from the previous expression, the corresponding relation. Obviously, Theorem 2.6 remains true for the operator $T_n[(-1)^n M]$.
Definition 2.7. The operator $T_n[M]$ is said to be inverse positive (inverse negative) on $X_k$ if every function $u \in X_k$ such that $T_n[M]\,u \geq 0$ on $I$ satisfies $u \geq 0$ ($u \leq 0$) on $I$.
In the sequel, we introduce two conditions on $g_M(t,s)$ that will be used throughout the paper.
Finally, we introduce the following sets, which particularize $H_T$. The next results describe the structure of the two previous parameter sets.
Theorem 2.10. Let $\bar M \in \mathbb{R}$ be fixed. Suppose that the operator $T_n[\bar M]$ is invertible on $X_k$, that its related Green's function is nonnegative on $I \times I$ and satisfies condition $(P_g)$, and that the set $P_T$ is bounded from above. Then the operator $T_n[\bar M - \mu]$ is invertible on $X_k$ and the related nonnegative Green's function $g_{\bar M - \mu}$ vanishes at some points of the square $I \times I$.

Theorem 2.11. Let $\bar M \in \mathbb{R}$ be fixed. Suppose that the operator $T_n[\bar M]$ is invertible on $X_k$, that its related Green's function is nonpositive on $I \times I$ and satisfies condition $(N_g)$, and that the set $N_T$ is bounded from below. Then the operator $T_n[\bar M - \mu]$ is invertible on $X_k$ and the related nonpositive Green's function $g_{\bar M - \mu}$ vanishes at some points of the square $I \times I$.

Main Result
This section is devoted to the proof of the eigenvalue characterization of the sets $P_T$ and $N_T$. This result is stated in the following theorem. Then the two following properties are fulfilled: In order to prove this result, we divide the proof into several subsections.

Decomposition of the operator $T_n[M]$
We are interested in writing the operator $T_n[M]$ as a composition of suitable operators of order $h \leq n$. Such an expression allows us to control the values of these operators at the endpoints $a$ and $b$ of the interval.
We recall the following result, proved in [10, Chapter 3], where $v_k > 0$ on $I$ and $v_k \in C^{n-k+1}(I)$ for $k = 1, \dots, n$.
It is obvious that, for any real parameter $M$, denoting $\lambda = M - \bar M$, we can rewrite the operator $T_n[M]$ as follows. If we assume that equation $T_n[\bar M]\,u(t) = 0$ is disconjugate on $I$, then, because of Theorems 2.3 and 3.2, we can express $T_n[\bar M]$ as a composition in which the $T_k$ are built with $v_k > 0$ on $I$, $v_k \in C^{n-k+1}(I)$, for $k = 1, \dots, n$.
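For reference, the classical factorization behind this decomposition (see Coppel [10]) is of Pólya type: a product of first-order operators with positive weights. In a notation of our own (the weights $\rho_k$, built from the Wronskians of a Markov fundamental system, play the role of combinations of the $v_k$ above), one admissible form is:

```latex
% Polya-type factorization of a disconjugate operator (cf. Coppel [10]);
% rho_0, ..., rho_n are positive on I and are built from the Wronskians
% of a Markov fundamental system of T_n[\bar M] u = 0.
T_n[\bar M]\,u(t) \;=\; \rho_n(t)\,\frac{d}{dt}\!\left(\rho_{n-1}(t)\,
  \frac{d}{dt}\!\left(\cdots\,\frac{d}{dt}\!\left(\rho_0(t)\,u(t)\right)\cdots\right)\right),
\qquad \rho_k > 0 \ \text{on } I .
```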
Let us now see that $T_h u(t)$ is given as a linear combination of the derivatives $u(t), \dots, u^{(h)}(t)$, with coefficients $p_{hi} \in C^{n-h}(I)$.
Indeed, we are going to prove this equality by induction.
Assume, by the induction hypothesis, that equation (15) holds for some $h \in \{1, \dots, n-1\}$; therefore we obtain an expression which clearly has the form of equation (15). Finally, taking into account the boundary conditions (2) and the regularity of the functions $p_{hi}$, we conclude the corresponding values at the endpoints. So, from the positivity of $v_h$ on $I$, $h \in \{1, \dots, n\}$, we have that $T_k u(a)$ and $u^{(k)}(a)$ have the same sign. The same property holds for $T_{n-k} u(b)$ and $u^{(n-k)}(b)$.
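In other words, a sketch in our notation (the precise coefficients $p_{hi}$ are produced by the induction above; the positivity of the leading coefficient follows from that of the $v_h$):

```latex
T_h u(t) \;=\; \sum_{i=0}^{h} p_{hi}(t)\,u^{(i)}(t),
\qquad p_{hh}(t) > 0 \ \text{on } I ,
```

so that, for $u \in X_k$, every term with $i < k$ vanishes at $t = a$ and $T_k u(a) = p_{kk}(a)\,u^{(k)}(a)$ has the same sign as $u^{(k)}(a)$; the analogous cancellation at $t = b$ gives the statement for $T_{n-k} u(b)$.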

Expression of the matrix Green's function
This subsection is devoted to expressing, as functions of $g_M(t,s)$, the functions $g_1(t,s), \dots, g_{n-1}(t,s)$ defined in (6) as the first-row components of the Green's function of the vectorial system (4). By studying the adjoint operator as in [1, Section 1.3], we know that the Green's function $G^*$ of the adjoint operator satisfies $G^*(t,s) = G^T(s,t)$. Moreover, the following equality holds. So we can transform the previous equality into, or, what is the same, the following ones, which we prove by induction. Here the $\alpha_i^j(s)$ are functions of $a_1(s), \dots, a_j(s)$ and of their derivatives up to order $j-1$, and they follow the recurrence formula below. Using equality (18), we deduce that the terms of the Green's matrix in position $(1,i)$, $i = 1, \dots, n$, satisfy the following equality, where $g_M(t,s) \equiv g_n(t,s)$.
Now we can express the Green's matrix related to problem (4), $G(t,s)$, as follows. If the coefficients $a_1(s), \dots, a_{n-1}(s), a_n(s)$ are constants $a_1, \dots, a_{n-1}, a_n$, we can solve the recurrence (20)-(22) explicitly. So we can rewrite $G(t,s)$ accordingly. In particular, if $T_n[M]\,u(t) \equiv u^{(n)}(t) + M\,u(t)$, we conclude that $g_{n-j}(t,s) = (-1)^j \frac{\partial^j}{\partial s^j}\, g_M(t,s)$, so the Green's matrix $G(t,s)$ is given by the corresponding expression. Remark 3.3. We note that in the general case it is possible to obtain some of the components of system (20)-(22).

Proof of the main results
Now we proceed with the proof of the main Theorem 3.1. To this end, we divide the proof into several steps. First, we show a preliminary lemma.
Lemma 3.4. Let $\bar M \in \mathbb{R}$ be such that $T_n[\bar M]\,u(t) = 0$ is disconjugate on $I$. Then the following properties are fulfilled: • If $n-k$ is even, then $T_n[\bar M]$ is an inverse positive operator on $X_k$ and its related Green's function $g_{\bar M}(t,s)$ satisfies $(P_g)$.
• If $n-k$ is odd, then $T_n[\bar M]$ is an inverse negative operator on $X_k$ and its related Green's function satisfies $(N_g)$.
Proof. By Lemma 2.5 we have that, for all $s \in (a,b)$, the corresponding inequalities hold. So, for each $s \in (a,b)$, the associated function is strictly positive and continuous on $I$. Since $g_{\bar M}$ is a continuous function, we have that $k_1$ and $k_2$ are continuous functions too.
If n − k is even, we take φ(t) = p(t) and condition (P g ) is trivially fulfilled. If n − k is odd, we take φ(t) = −p(t) and multiplying equation (25) by −1, condition (N g ) holds immediately.
First, notice that, as a corollary of the previous Lemma, the assertion for $\lambda_1$ in Theorem 3.1 follows directly from Theorems 2.10 and 2.11. Now we are going to prove the assertion in Theorem 3.1 concerning $\lambda_2$. The proof will be done in several steps. First we will show that, if $n-k$ is even, the Green's function changes sign for all $M > \bar M - \lambda_2$, and for all $M < \bar M - \lambda_2$ when $n-k$ is odd.
After that, we will prove that such an estimate is optimal in both situations. To make the paper more readable, throughout the proofs of this subsection it will be assumed that $n-k$ is even; the arguments for $n-k$ odd will be pointed out at the end of the subsection.
Step 1. Behavior of Green's function on a neighborhood of s = a and s = b.
First, we construct two functions that will characterize the values of M ∈ R for which Green's function oscillates, or not, on a neighborhood of s = a and s = b.
In order to do that, we denote the two components of the Green's function related to problem (1)-(2) (for $t \le s$ and $s \le t$, written $g^1_M$ and $g^2_M$, respectively) as follows. Since $g_M(t,s)$ is a Green's function, the corresponding identity is satisfied, where $g_M(t,s)$ acts as a function of $t$. Therefore, differentiating the previous expression, we deduce the analogous identities for the derivatives. In particular, we can define the functions
$$v(t) = \left.\frac{\partial^{n-k}}{\partial s^{n-k}}\, g^2_M(t,s)\right|_{s=a} \equiv \frac{\partial^{n-k} g^2_M}{\partial s^{n-k}}(t,a), \qquad t \in I.$$
Because of the relation between $g_M(t,s)$ and $g^*_M(t,s)$ shown in (10), and taking into account the boundary conditions of the adjoint operator, it is not difficult to deduce the corresponding equalities. So, we are interested in knowing the values of $M$ for which the functions $u(t)$ and $v(t)$ oscillate on $I$. Such a property guarantees that the Green's function oscillates in a neighborhood of $s = a$ or $s = b$ for those values. Moreover, it provides an upper bound for the set of parameters for which the Green's function does not oscillate.
Step 1.1. Boundary conditions of v(t).
Because of equality (26), we know that $T_n[M]\,v(t) = 0$ on $I$. In this step we are going to see which boundary conditions the function $v$ satisfies.
We have that $G(t,s)$, as it appears in (24), is the Green's matrix related to the vectorial problem (4). Using the expressions of the matrices $B$ and $C$ given in (5), if we consider the first row of the resultant matrix, we obtain for $s \in (a,b)$ the following expression. Thus, whenever $k > 1$, none of the previous elements belongs to the diagonal of the matrix Green's function. Since it has discontinuities only at its diagonal entries (see Definition 2.4), by taking the limit as $s \to a$, we deduce that the previous equalities hold for $g^2_M(a,a)$, that is: Let us see what happens with $v^{(k-1)}(a)$ for $k > 1$. We arrive at the following system, written as a function of $g^1_M(t,s)$: This system remains true for $s = a$ and, because of the continuity of the Green's matrix at $t = s$ for the non-diagonal elements and the jump produced on its diagonal, we arrive at the following system for $g^2_M(a,a)$: To obtain the boundary conditions at $t = b$, we have the following system for $s \in (a,b)$, written as a function of $g^2_M(t,s)$. By continuity, this is satisfied at $s = a$, so $v(b) = \frac{\partial^{n-k} g^2_M}{\partial s^{n-k}}(b,a) = 0$. Using (24) and (5), since there is no jump in this case, it is immediate to verify that $v'(b) = \cdots = v^{(n-k-1)}(b) = 0$.
As a consequence, $v$ is the unique solution of the following problem, which we denote by $(P_v)$: Remark 3.5. We note that, to attain the previous expression, we have not used any disconjugacy hypothesis on the operator $T_n[M]$. Moreover, the proof is valid for $n-k$ even or odd. In other words, the function $v$ solves problem $(P_v)$ for any linear operator defined in (1) and any $k \in \{1, \dots, n-2\}$.
Since $g_{\bar M}(t,s)$ has constant sign on $I \times I$ (see Lemma 3.4), we know that if $M = \bar M$ then the function $v$ must have constant sign on $I$.
Step 1.2. If $v$ has constant sign on $I$, then it cannot have any zero in $(a,b)$.
We are now going to see that, while $v(t)$ has constant sign on $I$, it cannot have any zero in $(a,b)$. Hence any sign change must occur at $t = a$ or $t = b$.
In order to do that, we are going to consider the decomposition of operator T n [M ] made in Subsection 3.1.
Since n − k is even, using Lemma 3.4, we know that operator T n [M + λ] is, for λ = 0, inverse positive on X k . So, the characterization of λ < 0 follows from Theorem 2.10.
For $\lambda > 0$, $v \in C^n(I)$ is a solution of a linear differential equation, hence it can only have a finite number of zeros on $I$. Therefore, if $v(t) \geq 0$, we have that $v(t) > 0$ for all $t \in I$ outside a finite set. In particular, $v(t) > 0$ for a.e. $t \in I$. Thus, as we have shown in Subsection 3.1, since for every $k = 1, \dots, n$ we have $v_k \in C^{n-k+1}(I)$ and $v_k(t) > 0$ on $I$, we conclude that $T_{n-1} v(t)$ must be decreasing on $I$.
Therefore, since v n (t) > 0 on I we have that T n−1 v(t) can vanish at most once in I.
Arguing by recurrence, we have that T 0 v(t) = v(t) can have at most n zeros on I (multiple zeros being counted according to their multiplicity) while v(t) ≥ 0.
On the other hand, because of the boundary conditions (2), we know that $v$ vanishes $n-1$ times at $a$ and $b$, hence it cannot have a double zero in $(a,b)$. This implies that the sign change cannot come from $(a,b)$.
Step 1.3. Sign change of $v$ at $t = a$ and $t = b$.
We are now going to see that the sign change cannot come from a neighborhood of t = a.
Since $n-k$ is even, as we have proved before, $v^{(k-1)}(a) > 0$ for all $M \in \mathbb{R}$, which implies, since $v(a) = \cdots = v^{(k-2)}(a) = 0$, that $v(t) = \frac{\partial^{n-k} g^2_M}{\partial s^{n-k}}(t,a)$ is always positive in a neighborhood of $t = a$. This allows us to affirm that the Green's function $g_M(t,s)$ is positive in a neighborhood of $(a,a)$.
Step 1.4. Study of the function $u$.
In order to analyze the behavior of the Green's function on a left neighborhood of s = b, we work now with the function u defined in (27).
Using the same arguments as for $v$, we conclude that $u$ is the unique solution of the following problem, which we denote by $(P_u)$: Thus, if $M$ is in that interval, the Green's function $g_M(t,s)$ has constant sign in a left neighborhood of $s = b$, but once $M > \bar M - \lambda_2$ the Green's function oscillates on $I \times I$.
As a consequence of Step 1, we deduce that the interval $(\bar M - \lambda_1, \bar M - \lambda_2]$ cannot be enlarged without the Green's function oscillating. We also know that if $M$ is in that interval, then the Green's function has constant sign in a neighborhood of $s = a$ and of $s = b$. Step 2. Behavior of the Green's function in a neighborhood of $t = a$ and $t = b$. Now we are going to see what happens in a neighborhood of $t = a$ and $t = b$. In order to do that, we use the operator $T_n[(-1)^n \bar M]$ defined in (11) and the relation between $g_M(t,s)$ and $g_{(-1)^n M}(t,s)$ given in (12).
Applying the same arguments developed in Step 1, we will obtain the values of the real parameter $M$ for which $g_{(-1)^n M}(t,s)$ has constant sign in a neighborhood of $s = a$ and $s = b$. Once this is done, we will be able to transfer this property to the behavior of $g_M(t,s)$ in a neighborhood of $t = a$ or $t = b$.
The analogous problem for the operator $T_n[(-1)^n M]$ related to problem (1)-(2) is given by the corresponding expression. Theorem 2.6 implies that equation $T_n^*[\bar M]\,u(t) = 0$ is disconjugate on $I$, so the same holds for $T_n[(-1)^n \bar M]\,u(t) = 0$. Hence, applying the same arguments as with $g_M(t,s)$, we have that $g_{(-1)^n M}(t,s)$ will have constant sign in a neighborhood of $s = a$ while a certain eigenvalue of $T_n[(-1)^n \bar M]$ on $X_{n-k-1}$, which we denote by $\tilde\lambda_2$, is not attained. This fact is equivalent to the existence of an eigenvalue of $T_n^*[\bar M]$ on $X_{n-k-1}$, namely $(-1)^n \tilde\lambda_2$. Now, using the fact that the real eigenvalues of an operator coincide with those of its adjoint, we conclude that $\lambda_2 = (-1)^n \tilde\lambda_2$ is the biggest negative eigenvalue of $T_n[\bar M]$ on $X_{n-(n-k-1)} = X_{k+1}$. Step 3. The Green's function does not begin to change sign on $(a,b) \times (a,b)$.

In this Step we will prove that the oscillation of the Green's function related to problem (1)-(2) must begin on the boundary of $I \times I$. Using Theorem 2.9, we have that, provided it is nonnegative on $I \times I$, $g_M$ decreases with respect to $M$. As a consequence, once we prove that $g_M$ cannot have a double zero on $(a,b) \times (a,b)$, the change of sign must start on the boundary of $I \times I$.
Let us see that if $g_M(t,s) \geq 0$ on $I \times I$ then $g_M(t,s) > 0$ on $(a,b) \times (a,b)$. Denote, for a fixed $s \in (a,b)$, $w_s(t) = g_M(t,s)$. By definition, denoting, as in Step 1, $\lambda = M - \bar M$, we have that $T_n[\bar M]\,w_s(t) + \lambda\,w_s(t) = 0$, $t \in I$, $t \neq s$.
Since $g_{\bar M} \geq 0$ on $I \times I$, the behavior for $M < \bar M$ has been characterized in Lemma 3.4 and Theorem 2.10.
So we must focus our attention on the situation $M > \bar M$, i.e. $\lambda > 0$. In such a case, since, as in Step 1.2, $w_s(t) \geq 0$ has a finite number of zeros in $I$, using (13) and (14) we obtain the corresponding inequalities, with $v_k > 0$ on $I$ for $k = 1, \dots, n$. In particular, it is satisfied that $T_n w_s(t) < 0$ a.e. in $I$.
Notice that, for all $s \in (a,b)$, it is satisfied that $w_s \in C^{n-2}(I)$ and $w_s^{(n-1)}$ has a jump at $t = s$. Since $T_n w_s(t) = \frac{d}{dt}\left(\frac{1}{v_n(t)}\,T_{n-1} w_s(t)\right) < 0$ a.e., the function $\frac{1}{v_n(t)}\,T_{n-1} w_s(t)$ is decreasing on $I$ with a positive jump at $t = s$. So it can have, at most, two zeros in $I$ (see Figure 1). Even though we cannot guarantee that $T_{n-1} w_s(t)$ is decreasing, since $v_n > 0$ on $I$ we conclude that it has the same sign as $\frac{1}{v_n(t)}\,T_{n-1} w_s(t)$, i.e., it can have at most two zeros on $I$. On the other hand, using equation (15) again, we conclude that $\frac{1}{v_{n-1}(t)}\,T_{n-2} w_s(t)$ is a continuous function on $I$. Now, (14) tells us that it can have at most 4 zeros on $I$ (see Figure 2). As before, we do not know on which intervals $T_{n-2} w_s(t)$ is increasing or decreasing, but since $v_{n-1}(t) > 0$ we conclude that it has the same sign as $\frac{1}{v_{n-1}(t)}\,T_{n-2} w_s(t)$, so it can have at most 4 zeros.
Following this argument, since $v_k > 0$ on $I$ for $k = 1, \dots, n$, we know that $T_{n-2-h} w_s(t)$ can have no more than $4 + h$ zeros on $I$ (multiple zeros being counted according to their multiplicity). In particular, $w_s(t) = T_0 w_s(t)$ can have at most $n + 2$ zeros, $n$ of them on the boundary. This fact would allow $w_s$ to have a double zero in $(a,b)$. So, to show that such a double root cannot exist, we need to prove that maximal oscillation is not possible. To this end, we point out that if for some $h$ the sign of $T_{n-2-h} w_s(a)$ equals the sign of $T_{n-2-h+1} w_s(a)$, then we lose a possible oscillation.
Therefore, for maximal oscillation the corresponding sign alternation must be satisfied. However, since $w_s(t) \geq 0$ on $I$ and $w_s(a) = w_s'(a) = \cdots = w_s^{(k-1)}(a) = 0$, we deduce that $w_s^{(k)}(a) > 0$. Since $n-k$ is even, using now (16), we also know that $T_k w_s(a) > 0$, which prevents maximal oscillation.
So we conclude that if $g_M(t,s) \geq 0$ on $I \times I$ then $g_M(t,s) > 0$ on $(a,b) \times (a,b)$, as we wanted to prove.
As a consequence of the three previous Steps, we have described the set of the real parameters M for which the Green's function is nonnegative on I × I when n − k is even.
If $n-k$ is odd, we can use similar arguments to complete the proof. In the sequel, we enumerate the main ideas to be developed. Step 1.1. It needs no modifications.
Step 1.2. In equality (29) we have λ < 0 and v(t) < 0 a.e. in I, so it remains true and we can proceed analogously.
Step 1.3. In this case, we have that $v^{(k-1)}(a) < 0$. The conclusion of this Step is that $g_M(t,s)$ remains negative in a neighborhood of $s = a$ while $M \in [\bar M - \lambda_2, \bar M]$, and oscillates for $M$ below that interval.
Step 1.4. The arguments are not modified, but the final achievement changes accordingly. Step 2. Using the same arguments, we conclude the interval where $g_M(t,s)$ is nonpositive on the boundary. Step 3. In this case the corresponding sign conditions hold. Thus, our result is proved.
As a direct consequence of the arguments used in Step 1.3, without assuming the existence of $\bar M \in \mathbb{R}$ for which equation $T_n[\bar M]\,u(t) = 0$ is disconjugate on $I$, we arrive at the following result, stated for equation (1). Then the two following properties hold: If $n-k$ is even, then there does not exist $M \in \mathbb{R}$ such that the operator $T_n[M]$ is inverse negative on $X_k$.
If $n-k$ is odd, then there does not exist $M \in \mathbb{R}$ such that the operator $T_n[M]$ is inverse positive on $X_k$.
Proof. It is enough to take into account that $v$, defined in (28), is the unique solution of problem $(P_v)$. Since $v^{(k-1)}(a) = (-1)^{n-k}$, we conclude that the Green's function takes positive values in any neighborhood of $(a,a)$ if $n-k$ is even, and negative values if $n-k$ is odd.
So, the result follows from Theorem 2.8.

Particular cases
In order to obtain the eigenvalues of particular problems, we calculate a fundamental system of solutions. Then we denote the $n-1$ Wronskians as follows, for $k = 1, \dots, n-1$.
As a consequence of the characterization given in [10, Chapter 3, Lemma 12], we deduce that the eigenvalues of problem (1) in $X_k$ are given by the $\lambda \in \mathbb{R}$ for which $W_{n-k}[-\lambda](b) = 0$. So, in the sequel, we will use this method to find the eigenvalues of the different problems considered.
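As a numerical illustration of this criterion (a sketch under normalizations of our own: $I = [0,1]$, the operator $u^{(4)}$ and the $(2,2)$ conditions; the function name `W2` is ours, standing in for $W_2[-\lambda](1)$), the first positive root of the determinant below recovers the classical value $\mu_1^4$, where $\cosh\mu_1\cos\mu_1 = 1$:

```python
import numpy as np
from scipy.optimize import brentq

def W2(lam):
    """Determinant playing the role of W_2[-lam](1) for u'''' = lam*u on [0, 1]
    with the (2,2) conditions, built from the two solutions vanishing twice at 0:
      e1: u''(0) = 1, u'''(0) = 0  ->  (cosh(mu t) - cos(mu t)) / (2 mu^2)
      e2: u''(0) = 0, u'''(0) = 1  ->  (sinh(mu t) - sin(mu t)) / (2 mu^3)
    lam > 0 is an eigenvalue exactly when e1, e2 and their first derivatives
    are linearly dependent at t = 1."""
    mu = lam ** 0.25
    e1, de1 = (np.cosh(mu) - np.cos(mu)) / (2 * mu**2), (np.sinh(mu) + np.sin(mu)) / (2 * mu)
    e2, de2 = (np.sinh(mu) - np.sin(mu)) / (2 * mu**3), (np.cosh(mu) - np.cos(mu)) / (2 * mu**2)
    return e1 * de2 - e2 * de1

# Least positive eigenvalue: root of W2 in (400, 600)
lam1 = brentq(W2, 400.0, 600.0)
mu1 = lam1 ** 0.25   # classical clamped-beam value, cosh(mu1)*cos(mu1) = 1
```

One can check that `W2(lam)` is proportional to $1 - \cosh\mu\cos\mu$ with $\mu = \lambda^{1/4}$, so the computed root agrees with the transcendental equation quoted in the classical literature.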

Operator $T_n[M]\,u(t) \equiv u^{(n)}(t) + M\,u(t)$
First of all, we consider problems with $T_n[M]\,u(t) \equiv u^{(n)}(t) + M\,u(t)$. For this kind of problem, taking $\bar M = 0$, the equation $u^{(n)}(t) = 0$ is always disconjugate, see [10, Chapter 3]. So the hypotheses of Theorem 3.1 are satisfied.
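The disconjugacy of $u^{(n)}(t) = 0$ can be checked directly against Definition 2.1:

```latex
u^{(n)}(t) = 0 \;\Longrightarrow\; u(t) = c_0 + c_1 t + \dots + c_{n-1}\,t^{\,n-1},
```

so every nontrivial solution is a polynomial of degree at most $n-1$ and therefore has at most $n-1$ zeros on $I$, counting multiplicities.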
Remark 4.1. Note that the adjoint equation to problem $T_n[M]\,u = 0$, $u \in X_k$, is given by the corresponding expression. So, if $\lambda_i$ is an eigenvalue of $u^{(n)}$ in $X_k$, it is also an eigenvalue of $(-1)^n u^{(n)}$ in $X_{n-k}$. Thus, $(-1)^n \lambda_i$ is an eigenvalue of $u^{(n)}$ in $X_{n-k}$.
As a consequence, we only need to obtain the first $\lfloor n/2 \rfloor$ Wronskians, where $\lfloor\cdot\rfloor$ denotes the floor function.
- Order 2. The eigenvalues of the operator $u''(t)$ in $X_1$ must satisfy $W^2_1[\lambda](1) = 0$, which can be replaced by the following equation, so its negative eigenvalue closest to zero is $-\lambda_1$. And so we can affirm that the Green's function related to the operator $u''(t) + M\,u(t)$ is negative if, and only if, $M \in (-\infty, \pi^2)$. This result has already been obtained in different references (see [1] and the references therein), but here it is not necessary to have the expression of the Green's function.

This result was obtained by means of the explicit form of the Green's function in [22]. The biggest negative eigenvalue of the operator $u^{(4)}(t)$ in $X_1$ and $X_3$ is given by $-\lambda_1$. These results were obtained using the explicit form of the Green's function in [8] and [9].
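The second-order conclusion can also be cross-checked numerically. The sketch below uses the classical closed form of the Green's function of $u'' + M\,u$ with the $(1,1)$ conditions on $[0,1]$; this explicit formula is not needed by our method and serves only as an independent verification of the sign characterization.

```python
import numpy as np

def green(M, t, s):
    """Classical Green's function of u'' + M*u, u(0) = u(1) = 0 on [0, 1],
    valid for 0 < M < 4*pi**2, M != pi**2; used here only to cross-check
    the sign characterization obtained via spectral theory."""
    m = np.sqrt(M)
    lo, hi = min(t, s), max(t, s)
    return np.sin(m * lo) * np.sin(m * (hi - 1.0)) / (m * np.sin(m))

grid = np.linspace(0.05, 0.95, 19)

# M = 9 < pi**2: the Green's function should be strictly negative inside
vals_ok = [green(9.0, t, s) for t in grid for s in grid]
# M = 12 > pi**2: it should change sign
vals_bad = [green(12.0, t, s) for t in grid for s in grid]

print(max(vals_ok) < 0, min(vals_bad) < 0 < max(vals_bad))   # prints: True True
```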
Therefore, we conclude, without calculating it explicitly, the constant sign interval of the Green's function related to the operator $u^{(5)}$. The biggest negative eigenvalue of the operator $u^{(8)}(t)$ in $X_1$ and $X_7$, the least positive eigenvalue in $X_2$ and $X_6$, the biggest negative eigenvalue in $X_3$ and $X_5$, and the least positive eigenvalue in $X_4$ are obtained in the same way. So we can affirm, without calculating it explicitly, the constant sign interval of the Green's function related to the operator $u^{(8)}$. As we have said before, the third-order problems were explicitly solved in [22], and the fourth-order problems in [8] for $X_2$ and in [9] for $X_1$ and $X_3$, respectively. But in all of these cases it was necessary to obtain the expression of the Green's function and analyze it.
In all of these cases, it is also verified that the open optimal interval where the Green's function has constant sign coincides with the optimal interval where equation (1) is disconjugate.
However, these two properties are not equivalent in general. We have already proved (see Lemma 3.4) that while equation (1) is disconjugate, its related Green's function must have constant sign. So, if both intervals do not coincide, the optimal interval where equation (1) is disconjugate must be contained in the open optimal interval where the Green's function has constant sign.
If we calculate the optimal interval of disconjugacy in $M$, using the characterization given in Theorem 2.3, for the equation

Operators with constant coefficients
This characterization of the interval where the Green's function has constant sign is also useful for problems with more non-null coefficients.
For example, we can consider the following fourth-order operator. We can show, using the characterization given in Theorem 2.3, that $T_n[0]\,u(t) = 0$ is a disconjugate equation and, thus, Theorem 3.1 applies.
First, we calculate numerically the eigenvalues closest to zero in each $X_k$, $k = 1, 2, 3$.
Note that in this case we need to obtain the three corresponding Wronskians, because it is not possible to connect the eigenvalues in $X_1$ with those in $X_3$.
So we conclude that the Green's function related to the operator $T_n[M]\,u(t)$ defined in (31) satisfies the following. Notice that in this case the interval of disconjugacy is $(-(5.27208)^4, (5.97041)^4)$. So we have a fourth-order example in which the interval of disconjugacy does not coincide with the biggest open interval where the Green's function has constant sign in $X_1$.
We can also show an example where the operator $T_n[M]$ does not satisfy the disconjugacy hypothesis for $\bar M = 0$.
We can also apply the result to a fourth-order operator whose eigenvalues were obtained numerically.
If we calculate its eigenvalues, we obtain: • The biggest negative eigenvalue in $X_1$ is $-(5.5325)^4$.
So, applying Theorem 3.1, we conclude that