New results on the sign of the Green function of a two-point n-th order linear boundary value problem

This paper provides conditions for determining the sign of all the partial derivatives of the Green functions of n-th order boundary value problems subject to a wide set of homogeneous two-point boundary conditions, removing the restrictions of previous results on the distance between the two extremes that define the problem. To do so, it analyzes the sign of the derivatives of the solutions of related two-point n-th order boundary value problems subject to n − 1 boundary conditions by introducing a new property denoted by 'hyperdisfocality'.

In this paper, we will study the sign of the derivatives of the Green function of problem (2), where [a, b] ⊂ J, α is the ordered set of integers {α_1, α_2, . . ., α_k}, β is the ordered set of integers {β_1, β_2, . . ., β_{n−k}}, 1 ≤ k ≤ n − 1, α_1, β_1 ≥ 0, and α_k, β_{n−k} < n. We will impose the condition that the number of boundary conditions at a and b set on derivatives of order lower than t is greater than or equal to t for t = 1, . . ., n. Elias denoted these conditions by poised in [1], a term which we will use in the rest of the manuscript, although they are called admissible in other sources, including reference [2].
It is well known (see, for instance, [3, Chap. 3]) that problems of the type (3) with f ∈ C[a, b] have a solution given by y(x) = ∫_a^b G(x, t)f(t) dt. Therefore, the knowledge of the sign of G(x, t) and its derivatives can provide information about the sign of the solution y(x) and of these same derivatives, at least when f does not change sign on (a, b). Likewise, there is a large amount of literature ([4–9]) on the use of the sign of G(x, t) to define cones that, by means of the work of Krein and Rutman [10], allow finding information about the eigenvalues and eigenfunctions of the general problem Ly = λ Σ_{l=0}^{μ} c_l(x) y^{(l)}(x), x ∈ (a, b); y^{(α_i)}(a) = 0, α_i ∈ α; y^{(β_i)}(b) = 0, β_i ∈ β; (4) with μ ≤ n − 1 and c_l ∈ C(J) for 0 ≤ l ≤ μ, and even to calculate them. Incidentally, the calculation of the smallest eigenvalue of (4) with μ = 0 is also relevant to prove the existence of solutions of nonlinear boundary value problems of the type Ly + p(x)g(y) = 0, in particular by comparing that eigenvalue with the quotient g(y)/y for different values of y, especially when y → 0⁺ and when y → +∞. This approach was started by Erbe [11] for symmetric kernels and extended by Webb and Lan [12–14] and many others, [15] being a recent example.
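As an illustration of this representation formula, the following is a minimal numerical sketch for the classical second-order problem −y″ = f, y(0) = y(1) = 0, whose Green function is the textbook kernel G(x, t) = x(1 − t) for x ≤ t and t(1 − x) otherwise (a standard example chosen for illustration, not the general n-th order problem (2) of this paper):

```python
import numpy as np

# Green function of the textbook problem -y'' = f, y(0) = y(1) = 0:
# G(x, t) = x*(1 - t) for x <= t, and t*(1 - x) for x >= t.
def G(x, t):
    return np.where(x <= t, x * (1.0 - t), t * (1.0 - x))

def solve_via_green(f, n=2001):
    """Approximate y(x) = int_0^1 G(x, t) f(t) dt with the trapezoid rule."""
    t = np.linspace(0.0, 1.0, n)
    w = np.full(n, t[1] - t[0])      # trapezoid quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    y = (G(t[:, None], t) * f(t)) @ w
    return t, y

# f(t) = pi^2 sin(pi t) has the exact solution y(x) = sin(pi x).
xs, ys = solve_via_green(lambda t: np.pi**2 * np.sin(np.pi * t))
err = np.max(np.abs(ys - np.sin(np.pi * xs)))
print(err)
```

Since this kernel is nonnegative, any f ≥ 0 produces y ≥ 0, which is exactly the kind of conclusion that knowledge of the sign of G(x, t) enables.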
Most of the literature that has addressed problem (4) has done so via an explicit calculation of the Green function G(x, t) to determine its positive character. Whereas this calculation is necessary in some cases, in many others it suffices to obtain information about the sign of G(x, t) and some of its partial derivatives.
The first steps in that direction were made by Levin [16] and Pokornyi [17], who determined the sign of the Green function of (2) in the conjugate case. Their works were broadened by Karlin [18], who introduced the concept of total positivity of the kernel defined by G(x, t). Peterson [19,20], Elias [21], and Peterson and Ridenhour [22] extended it to several particular cases. Later Eloe and Ridenhour [23] provided some sign results for the lowest derivatives in the general poised case, which included the left focal and right focal cases. Other recent works worth mentioning are those of Webb and Infante [24,25], who provided an elegant framework to address non-local boundary value conditions, and Cabada and Saavedra [26], who characterized a set of parameters where a Green function dependent on that parameter had a constant sign.
The focus of most of these papers has been the assessment of the sign of the Green function, with only a few exceptions addressing the sign of the partial derivatives of G(x, t). That is the case of Eloe and Ridenhour's work [23], which identified the signs of the partial derivatives ∂^i G(x, t)/∂x^i for i = 0, . . ., max(α_1, β_1). In [2], the present authors extended Eloe and Ridenhour's results by increasing the order of the partial derivatives for which a sign could be determined, and in [8], they showed that, in the case of linear boundary value problems defined in terms of quasi-derivatives, it was actually possible to determine the sign of all the partial quasi-derivatives.
However, the results of [2] suffered from the following limitations:

1. The need for Ly = 0 to be disfocal on [a, b]. According to Nehari [27], this means that y(x) ≡ 0 is the only solution of Ly = 0 satisfying y^{(i)}(x_i) = 0, i = 0, 1, 2, . . ., n − 1, with x_i ∈ [a, b]. This restriction was already present in Eloe and Ridenhour's work and, in general, implies that [a, b] must be a short interval, shorter than the intervals where G(x, t) can be defined (namely, those in which (2) has only the trivial solution).

2. The fact that only the sign of some of the lowest order partial derivatives was provided, even in cases (like the strongly poised one) where one could expect constant signs in all partial derivatives.

In this paper, we will provide some conditions on the signs of the coefficients a_i(x) and on the existence of solutions of boundary value problems linked to (2), which will remove the two aforementioned restrictions, providing the sign of all the partial derivatives ∂^j G(x, t)/∂x^j.

With regard to nomenclature, we will use the expression G_ab(x, t) to stress the dependence of the Green function on the extremes a and b, where problem (2) is defined. Note that some theorems will require a and b to change as part of their proofs. In those cases, a and b in the aforementioned expression may be replaced by other variables, according to the needs of the proof, but they will always keep the meaning of the extremes where the conditions α and β are specified. We will denote indistinctly by α\α_i or α\{α_i} the set resulting from removing the index α_i from the set α. Likewise, we will use indistinctly the expressions β\β_i or β\{β_i} to refer to the set obtained by removing the index β_i from the set β.
The organization of the paper is as follows: Sect. 2 will analyze the sign of the derivatives of boundary value problems with n -1 boundary conditions, which will be used in Sect. 3 to provide signs for the partial derivatives of the Green function of (2). Finally, Sect. 4 will formulate some conclusions.

Preliminary results
In this section, we will study the signs of the derivatives of the nontrivial solutions of the boundary value problem (5), where α′ can be either α or α\{α_i}, and β′ can be either β\{β_i} or β, respectively. That is, (α′, β′) are basically the (α, β) of (2) without either one boundary condition α_i at a or one boundary condition β_i at b. Accordingly, solutions y of (5) are subject to only n − 1 boundary conditions. We will assume throughout the section that (α, β) are poised. We will need the following definitions:

• z_j[a, b] is the number of isolated zeroes or zero components of y^{(j)}(x) on [a, b], for j = 0, . . ., n.
• z_j(a, b) is the number of isolated zeroes or zero components of y^{(j)}(x) entirely lying on (a, b), for j = 0, . . ., n.
• Z_j{α′, β′} is the number of derivative orders with homogeneous boundary conditions defined in {α′, β′} which are lower than or equal to j.
• E_j[a, b] is the excess of isolated zeroes or zero components of y^{(j)}(x) on [a, b], for j = 0, . . ., n, which are not due to the boundary conditions and the Rolle theorem and which, for reasons that will become clear later, we will define as in (6).
• m(α′, j) is the number of derivatives of order equal to or higher than j which the boundary conditions α′ do not specify to vanish at a.
• n(β′, j) is the number of derivatives of order higher than j which the boundary conditions β′ do specify to vanish at b.
• I(α′, β′) is the set of derivative orders j such that z_j(a, b) = 0 and (therefore) y^{(j)} has a constant sign on (a, b).
Proof The proof mimics that of [8, Lemma 1], although that lemma applied to Green functions rather than to solutions of (5). Thus, from the definition (6) of E_j[a, b], it is clear that Since y does not vanish identically, it cannot have a single zero component covering [a, b].
From here and (7), one obtains inequalities which, together with (7), prove the statement.
Lemma 1 shows that keeping the value of E_n[a, b] low allows controlling the number of zeroes z_j(a, b) of y^{(j)}(x) on (a, b). In the next results, we will find conditions that use that mechanism to fix the derivative orders j for which z_j(a, b) = 0, that is, the derivative orders that belong to I(α′, β′). To achieve that goal, besides the poisedness of (α, β), we need additional tools. In the case of [9], it was the very nature of the quasi-derivatives, which ensured that E_n[a, b] = E_0[a, b] = 0. In the case under study, as we will see, we will need other mechanisms that guarantee that y^{(n)} does not change sign on [a, b].
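As a reminder of the mechanism behind this counting (a standard generalized Rolle argument, stated here in the notation of this section rather than as a result of the paper): between two consecutive isolated zeroes or zero components of y^{(j)} on [a, b] there lies at least one zero of y^{(j+1)}, so that

```latex
z_{j+1}[a,b] \;\ge\; z_{j}[a,b] - 1, \qquad j = 0, \dots, n-1.
```

Iterating this bound, together with the zeroes imposed at a and b by the boundary conditions in (α′, β′), is what allows E_n[a, b] to control all the z_j(a, b).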
Applying Rolle's theorem to (10) in the same way as in Lemma 1, one gets (11), so that all derivatives of y of order higher than the K(α′, β′)-th have at least one zero on [a, b]. A comparison of (7) and (11) for these derivatives also yields E_j[a, b] ≥ 1. This completes the proof.
If [a, b] is a short enough interval, the only derivative of y of order lower than n which does not vanish in [a, b] is the K(α′, β′)-th one. However, this does not prevent the derivatives of order higher than the K(α′, β′)-th from having more than one zero, or even from changing sign several times, on [a, b]. The following theorem introduces the concept of hyperdisfocality, which targets exactly that problem.
Proof We will follow an argument similar to the one used by Coppel, meaning that there is a d > a such that all derivatives y^{(j)}, 0 ≤ j ≤ n − 1, except y^{(K(α′,β′))}, have a zero in (a, d), according to Lemma 2 (Coppel's proof focuses on disconjugacy, but the reasoning for disfocality is exactly the same). Likewise, one can find a d′ > a with an analogous property. Then, let us assume that there exists a sequence {b_l} with b_l ∈ (a, min(d, d′)) and b_l → a⁺ such that the derivative y_l^{(n)} of the solution y_l of (5) with b = b_l vanishes in [a, b_l]. If {u_m} is a fundamental system of solutions of Lu = 0 such that u_m^{(s−1)}(a) = δ_{ms}, 1 ≤ m, s ≤ n, then each y_l can be expressed as y_l = d_{1,l} u_1 + · · · + d_{n,l} u_n. Let us normalize the d_{m,l} so that Σ_{m=1}^{n} |d_{m,l}|² = 1, l ≥ 1. Given that for each l the n-tuple (d_{1,l}, . . ., d_{n,l}) belongs to B(0, 1) ⊂ R^n, and B(0, 1) is a compact set, if one makes b_l tend to a, then there will be a subsequence {b_{l_j}} such that d_{m,l_j} → d*_m ∈ B(0, 1). In turn, this implies that {y_{l_j}} will converge uniformly to the function y* = d*_1 u_1 + · · · + d*_n u_n, which is a solution of Ly = 0 on J with zeroes in all its derivatives y*^{(j)}(a) (j = 0, . . ., n) except possibly at y*^{(K(α′,β′))}(a). However, given that a_{K(α′,β′)} ≠ 0 on each [a, b_l], from (1) one has that y*^{(K(α′,β′))}(a) = 0 too. But that is impossible, since the u_m were linearly independent. Therefore, the sequence {b_l} cannot exist, and there must be a minimum c > a with the property stated in the theorem.

Remark 1 It is straightforward to obtain an equivalent theorem for the case a_{K(α′,β′)}(b) ≠ 0, mutatis mutandis.
We will denote the property described in Theorem 1 by K(α′, β′)-hyperdisfocality.

Corollary 1 Under the assumptions of Theorem 1, for j = 0, . . ., n − 1, all zeroes are isolated, and the set I(α′, β′) is composed of those derivative orders j for which (13) and (14) hold. Equations (13) and (14) follow from the definition of I(α′, β′) and the value of E_j[a, b] for values of j lower and higher than K(α′, β′), respectively.
In the following results, we will make wide use of the continuity of the solution of (5) with respect to the extremes a and b, which we will prove in the next lemma.

Lemma 3 Let us assume that problem (5) does not have a nontrivial solution satisfying y^{(K(α′,β′))}(x*) = 0 for the extremes a and b where it is defined, with x* ∈ [a, b]. Then the solution y(x) of (5) for which y^{(K(α′,β′))}(x*) = 1, together with its derivatives up to the n-th order, is continuous with regard to these extremes.

Proof If u_m(x), 1 ≤ m ≤ n, are defined as in Theorem 1, then the solution y of (5) for which y^{(K(α′,β′))}(x*) = 1 is given by a linear combination of the u_m, whose coefficients d_m satisfy the n − 1 boundary conditions of (5) and one equation for the normalization. All coefficients of this system are continuous with respect to a and b, so that by Cramer's rule the solutions d_m will be continuous with respect to a and b as long as the determinant of the coefficient matrix does not vanish, which is exactly the necessary and sufficient condition for (5) not to have a nontrivial solution such that y^{(K(α′,β′))}(x*) = 0.
We can start proving results on the sign of the solutions of (5).
Let us also suppose that the boundary value problems (16) and (17) do not have solutions other than the trivial one for any extremes a′, b′ ∈ [a, b]. If y is a solution of (5) such that y^{(K(α′,β′))}(x*) = 1, then each y^{(j)}(x) does not change sign in (a, b) for j ∈ I(α′, β′) ∪ {n}, and such a sign is given by (19) and (20).

Proof Let us consider problem (5) taking a′ and b′ (with [a′, b′] ⊆ [a, b]) as extremes instead of a and b. Let us set a′ = x* (if x* = b, we could select a′ and x* as extremes and repeat exactly the same argument that follows with a′ instead of b′). From Corollary 1, it follows that there is a c > x* such that, if b′ < c, the derivatives of the nontrivial solutions of that problem whose orders satisfy (13) and (14) have a constant sign on (x*, b′), and all zeroes are isolated. Thus, let us set b′ < c. From Lemma 2 and Theorem 1, one has that y^{(K(α′,β′))} and y^{(n)} neither vanish nor change sign on [x*, b′]. Given that we are imposing the condition y^{(K(α′,β′))}(x*) = 1, the sign of y^{(K(α′,β′))}(x) must be positive on [x*, b′]. Next, if y^{(j)}(x*) ≠ 0 for some j ≤ n − 1, then obviously there exists δ > 0 such that y^{(j)} keeps that sign on (x*, x* + δ). On the contrary, if y^{(j)}(x*) = 0 for some j ≤ n − 1, then, unless j = K(α′, β′), y^{(j)}(x) must have at least an isolated zero in (x*, b′] due to Lemma 2 and Corollary 1. Let x_j be the minimum of these zeroes. Given that y^{(j+1)} cannot have a zero in (x*, x_j) (the only possible zeroes of y^{(j+1)} in (x*, b′) are those forced by Rolle's theorem as per Lemma 2 and Corollary 1), it follows that y^{(j)}(x) and y^{(j+1)}(x) must have opposite signs on (x*, x_j). Combining both cases, we obtain that the sign of y^{(j)}(x) for x ∈ (x*, x* + δ) is given by (22) and (23). Following the reasoning of Theorem 1 and taking (1) into account, let us also suppose that b′ is so close to x* that the sign of y^{(n)}(x) is the same as that of −a_{K(α′,β′)}(x) y^{(K(α′,β′))}(x) on (x*, b′), that is, the same as that of −a_{K(α′,β′)}(x*).
This and (21) give (22) and (23) for x ∈ (x*, x* + δ). For those derivative orders j ∈ I(α′, β′), the signs given by (22) and (23) will apply to the whole interval (x*, b′) by the definition of I(α′, β′), which means that (19) and (20) hold for the extremes x*, b′.

Next, let us start gradually increasing b′ to b while a′ is kept fixed at a′ = x*. Note that, according to Lemma 3 and (18), y^{(j)}(x) is continuous with respect to b′ for 0 ≤ j ≤ n. Let us suppose that during that process there appears a new zero in any of the derivatives y^{(j)}, 0 ≤ j ≤ n, x ∈ [x*, b′]. Let b* ≤ b be the smallest value of b′ for which such a new zero appears in any of these derivatives, and let l be the order of such a derivative. We can have the following cases:

1. l < K(α′, β′). By Rolle's theorem and the facts that z_j[x*, b′] ≥ 1 for j < K(α′, β′) and b′ ≤ b* (that is, all these derivatives of order lower than the K(α′, β′)-th have at least a zero in [x*, b*] besides the new one), there should also be a change of sign of y^{(K(α′,β′))} in (x*, b*). By continuity, this zero of y^{(K(α′,β′))} should have appeared for a b′ < b*, contradicting the definition of b*, so this case is not possible.

2. K(α′, β′) < l < n. Again, by Rolle's theorem and the fact that z_j[x*, b′] ≥ 1 for K(α′, β′) < j < n and b′ ≤ b*, this new zero would force a change of sign of y^{(n)} in (x*, b*). By the continuity of y^{(n)} with respect to b′, that change of sign should have been a zero of y^{(n)} for a b′ < b*, contradicting the definition of b*.

3. l = K(α′, β′). Here, we have three subcases:

• Either the zero appears at x* or at b*, which is impossible since (16) and (17) have only the trivial solution for any x*, b′ ∈ [a, b] by hypothesis.
• Or the zero implies the change of sign of y^{(K(α′,β′))} in (x*, b*). This is also impossible, as that change of sign, by continuity, should have happened for a b′ < b*, contradicting the definition of b*.
• Or such a zero is also a local extreme of y^{(K(α′,β′))} in (x*, b*); let us call it d. But this implies that y^{(K(α′,β′)+1)}(d) must also vanish, leading us to the reasoning of case 2.

4. l = n. Since the term a_{K(α′,β′)}(x) y^{(K(α′,β′))}(x) cannot vanish in [x*, b*], as per the previous case, according to (1), (15), (19), and (20), this option is not possible either.

In summary, no new zero will appear on [x*, b′] in any of the derivatives of y of order lower than or equal to the n-th as b′ grows up to b, so that E_j[a, b] will be as in Corollary 1. Consequently, the derivatives whose orders belong to I(α′, β′) and are lower than or equal to K(α′, β′) will keep the signs given by (22), whereas those whose orders belong to I(α′, β′) but are higher than K(α′, β′) will have the signs determined by (23), that is, (19) and (20), respectively, for x ∈ (x*, b).
Repeating the same reasoning with the lower extreme a′ decreasing from x* to a, one finally obtains (19) and (20) for x ∈ (a, b).
Remark 2 Apart from the K(α′, β′)-hyperdisfocality of the problem, the keys to ensuring the constant sign of the derivatives of y whose orders belong to I(α′, β′) as b′ increases (or a′ decreases) are the constant signs of y^{(K(α′,β′))} and y^{(n)} during the process. The first one is achieved by the second one plus forcing that no extra zero appears at y^{(K(α′,β′))}(a′) or y^{(K(α′,β′))}(b′) during the decrease of a′ and the growth of b′ (that is, (16) and (17)). The way selected in this paper to achieve the latter is to make all terms different from y^{(n)} in (1) have the same sign on (a′, b′), although any other mechanism ensuring it would work too. One must note that forcing the boundary value problem (5) to have only the trivial solution when either the condition y^{(n−1)}(a′) = 0 or the condition y^{(n−1)}(b′) = 0 is added does not suffice, since a zero of y^{(n−1)}(x) that is simultaneously a local extreme could appear in (a′, b′) as b′ grows (or a′ decreases) and become a component of different sign for greater values of b′ (lower values of a′).
Next, we will show that the condition a_{K(α′,β′)}(x) ≠ 0, x ∈ [a, b], can, in fact, be dropped.

Theorem 3 Let us assume that for l ≤ K(α′, β′) the coefficients a_l satisfy (24), and that either (25) or (26) holds. Let us also assume that the hypotheses (16)–(17) hold. If y is a solution of (5) such that y^{(K(α′,β′))}(x*) = 1, with either x* = a or x* = b, then each y^{(j)}(x) does not change sign on (a, b) for j ∈ I(α′, β′) ∪ {n}, and such a sign is given by (27) and (28) if (25) holds, and by (27) and (29) if (26) holds. Inequalities (28) and (29) are strict.

Proof The function a_{K(α′,β′),ε} defined in (30) is obviously continuous on [a, b], so that we can replace a_{K(α′,β′)}(x) by a_{K(α′,β′),ε}(x) in the definition of the operator L and obtain the operator L_ε y = y^{(n)}(x) + a_{n−1}(x) y^{(n−1)}(x) + · · · + a_{K(α′,β′),ε}(x) y^{(K(α′,β′))}(x) + · · · + a_0(x) y(x). Let y_ε be the solution of the boundary value problem (32), similar to (5) but with L_ε instead of L. We will prove first that y_ε is continuous with respect to ε at ε = 0. To that end, let us recall that, if u_m, 1 ≤ m ≤ n, are defined as in Theorem 1, and u_{m,ε}, 1 ≤ m ≤ n, are the solutions of L_ε u = 0 satisfying (33), then the solutions of (5) (plus y^{(K(α′,β′))}(x*) = 1) and (32) are given by y = c_1 u_1 + · · · + c_n u_n, x ∈ [a, b], and y_ε = c_{1,ε} u_{1,ε} + · · · + c_{n,ε} u_{n,ε}, x ∈ [a, b], respectively. The functions u_{m,ε}, 1 ≤ m ≤ n, and their derivatives of order up to the n-th are continuous with respect to ε at ε = 0. This can be proven using the Gronwall inequality.
To do so, let w_{m,ε} be defined accordingly, and let us construct the vectors W_{m,ε}, U_{m,ε}, and U_m ∈ R^{n×1} from the u_{m,ε}, the u_m, and their derivatives, which leads to (41). Applying matrix norms to (41) and taking (30) into account gives (42). From the Gronwall inequality [28], one finally gets (43). The inequality (43) clearly shows that u_{m,ε} and its derivatives up to the (n − 1)-th order (indeed up to the n-th order if we consider (31) and the fact that the u_{m,ε} satisfy L_ε u_{m,ε} = 0) tend uniformly to u_m and its derivatives, respectively, in [a, b], when ε tends to zero. Next, (32), (33), and (35) imply that c_{m,ε} = 0 for m − 1 ∈ α′, and that the remaining c_{m,ε} satisfy the system (44). This is a system of equations whose coefficient matrix determinant is composed of products of the derivatives of the u_{m,ε} at the extremes, and the corresponding determinant for ε = 0 does not vanish by virtue of (16)–(17), so one can find an ε small enough such that the determinant of the system (44) does not vanish either. Therefore, one can apply Cramer's rule to (44) and deduce that the solutions c_{m,ε} tend to the respective c_m when ε tends to zero.
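For reference, the classical integral form of the Gronwall inequality invoked here can be stated as follows (in generic notation, not that of (41)–(43)): if v ≥ 0 is continuous on [a, b] and, for constants C, K ≥ 0,

```latex
v(x) \;\le\; C + K \int_a^x v(s)\,ds, \qquad x \in [a,b],
```

then v(x) ≤ C e^{K(x−a)} on [a, b]. Applied to the norm of the difference between the perturbed and unperturbed solution vectors, a bound of this type is what produces the uniform estimate (43).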
From here, one concludes that y_ε^{(j)}(x) tends to y^{(j)}(x) uniformly on [a, b] when ε tends to zero, for 0 ≤ j ≤ n. From Theorem 2 and the hypotheses of the present theorem, it is clear that if ε ≠ 0, then y_ε satisfies (19)–(20). The continuity of all the y_ε^{(j)} at ε = 0 grants (28) and (45) if the signs of the coefficients a_j(x), j ≤ K(α′, β′), are given by (25). However, as in the previous theorem, (45) must be strict (that is, (27)). To prove this, note that if (45) were not strict, because of Lemma 1 and (7), y^{(K(α′,β′))} would have a zero on [a, b]. If that zero were at a or b, then this would violate (16) or (17), respectively. Otherwise, the zero of y^{(K(α′,β′))} would be in (a, b), and it would be a local extreme, so that y^{(K(α′,β′)+1)} should also vanish there. By Rolle's theorem, this additional zero of y^{(K(α′,β′)+1)} in (a, b) would force a change of sign of y^{(n)} in (a, b), violating (28). Likewise, the continuity of all the y_ε^{(j)} at ε = 0 ensures (27) and (29) if the signs of the coefficients a_j(x), j ≤ K(α′, β′), are given by (26).
Next, if there is a derivative order l ≤ K(α′, β′) such that a_l(x) does not vanish in (a, b), then, from (1) and the fact that l ∈ I(α′, β′) (otherwise, a_l ≡ 0 as per (24)), one has that y^{(n)} cannot vanish in (a, b), keeping the same sign in that interval. This prevents zero components and additional zeroes in the derivatives of lower order in [a, b], so that (28) and (29) are strict. On the contrary, if a_l(x) ≡ 0 for all l ≤ K(α′, β′), then y^{(j)}(x) ≡ 0, K(α′, β′) < j ≤ n, on [a, b]. The reason is that (5) can be converted into a boundary value problem of order n − K(α′, β′) with n − K(α′, β′) boundary conditions just by considering the derivatives y^{(j)}, K(α′, β′) < j ≤ n. That problem has only the trivial solution, which gives y^{(K(α′,β′))} ≡ 1. If there were another solution, then the difference between it and the one given by y^{(K(α′,β′))} ≡ 1 would violate (16)–(17).
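The order-reduction step in the last paragraph can be sketched as follows (an illustration, with v introduced here as an auxiliary variable): if a_l(x) ≡ 0 for all l ≤ K(α′, β′), setting v := y^{(K(α′,β′))} turns (1) into

```latex
v^{(n-K)}(x) + a_{n-1}(x)\,v^{(n-1-K)}(x) + \cdots + a_{K+1}(x)\,v'(x) = 0,
\qquad K := K(\alpha',\beta'),
```

an equation of order n − K in which v itself does not appear, so the constant function v ≡ 1 solves it; together with the n − K boundary conditions inherited from the derivatives y^{(j)}, K < j ≤ n, and the normalization v(x*) = 1, this is the reduced problem referred to above.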

The sign of the partial derivatives of the Green function
In this section, we will apply the results of the preceding section to determine the signs of the partial derivatives of G(x, t) with regard to x. As in the previous section, we will assume that (α, β) are poised. We will also assume that the boundary value problem (2) does not have a nontrivial solution, as this is a necessary condition for the existence of G(x, t).
To start with, the next lemma (see [2, Lemma 2]) assesses the dependence of G(x, t) and its partial derivatives with regard to the extremes a and b. Note that, by definition, ∂G/∂a and ∂G/∂b have all derivatives up to the n-th order continuous for x ∈ [a, b].
∂G(x, t)/∂b is the solution of the corresponding problem for x ∈ (a, b); likewise, ∂G(x, t)/∂a is the solution of the analogous problem. The lack of nontrivial solutions of (2) allows decomposing ∂G/∂b as (48), where, for fixed t ∈ [a, b], h_{β_i}(x, t) is the solution of (49). Note that if β_i + 1 ∈ β, then h_{β_i}(x, t) ≡ 0. That implies that we only need to take into account those β_i such that β_i + 1 ∉ β. Similarly, ∂G/∂a can be decomposed as (50), where, for fixed t ∈ [a, b], g_{α_i}(x, t) is the solution of (51). As before, if α_i + 1 ∈ α, then g_{α_i}(x, t) ≡ 0, so that we only need to take into account those α_i such that α_i + 1 ∉ α.
The advantage of the aforementioned decomposition is that each of problems (49) and (51) has the same structure as problem (5). If we manage to find conditions ensuring that, for a fixed derivative order j, all h_{β_i}^{(j)} have the same sign, such a sign will coincide with that of ∂^j(∂G/∂b)/∂x^j (a similar reasoning can be made with g_{α_i}^{(j)}(x, t) and ∂^j(∂G/∂a)/∂x^j). The next lemmas will explore that.
Proof It can be proved similarly to the previous one, using (26), (27), and (29) in Theorems 3 and 4 and noting the corresponding property of (−1)^{m(α\α_i, K(α\α_i, β))} g_{α_i}.

Next, we will prove a short lemma that will be used in later calculations.
Otherwise, let us assume β_{n−k} = n − 1. From (1), Lemma 7, (61), and (63), one has that ∂^n G(b, t)/∂x^n > 0, the remaining condition (52) for β_{n−k} is met, and the inequality (54) for h_{β_{n−k}}(x, t), which is strict, will apply to all derivative orders j ∈ H(α, β). On the contrary, if the index l ∈ β, then ∂^n G(b, t)/∂x^n may be zero depending on the values of a_j(b), but even so (in which case h_{β_{n−k}}(x, t) ≡ 0) there must be another β_m such that a_{β_m}(x) does not vanish in [a, b], and inequality (55) for h_{β_m}(x, t) will be strict and will apply to all derivative orders j ∈ H(α, β). Thus, in both cases, we have strict inequalities for some h_{β_i}^{(j)}(x, t) and all j ∈ H(α, β). A similar result can be obtained for g_{α_i}^{(j)}(x, t) if α_k = n − 1. Applying Lemma 5 to the decomposition (48), and taking into account that at least for one h_{β_i} the inequalities are strict for all j ∈ H(α, β), one gets (66). Likewise, applying Lemma 6 to the decomposition (50) and noting as before that no j ∈ H(α, β) can meet α_i < j ≤ K(α\α_i, β), 1 ≤ i ≤ k, one obtains (67).
By Lemma 8 and following an argument analogous to that used in [2, Theorem 6], one can finally determine the sought signs.
Proof We will first assess the case where Ly = 0 is disfocal on [a, b], dividing the proof into two subcases: x > t and x < t. Thus, let us suppose in the first place that x > t, in which case we can write G in terms of G_ax(x, t), the Green function defined with extremes a and x. From the boundary conditions of (2), one has that ∂^j G_ax(x, t)/∂x^j = 0 for j ∈ β. Analogously, from [2, Theorem 3] and the disfocality of Ly = 0 on [a, x] ⊂ [a, b], one has that (−1)^{n(β,j)} ∂^j G_ax(x, t)/∂x^j > 0, j ∉ β, j < n, which, from Lemma 7, for j ∈ H(α, β), is equivalent to (−1)^{m(α,j)} ∂^j G_ax(x, t)/∂x^j > 0, j ∉ β, j < n.

Let us also assume that there is an index l < min(α*, β*) such that a_l(x) does not vanish in [a, b], and that (2) does not have solutions other than the trivial one for any extremes a′, b′ ∈ [a, b]. Then the stated signs hold.

Proof The proof follows immediately from Theorem 5 on noting that, in the strongly poised case, all derivative orders j, 0 ≤ j ≤ n − 1, belong to H(α, β), β_i = K(α, β\β_i), 1 ≤ i ≤ n − k, and α_i = K(α\α_i, β), 1 ≤ i ≤ k.
Remark 5 Theorem 6 improves [2, Theorem 7], as it holds for any combination of strongly poised conditions, unlike the latter.

Conclusions
This paper has presented conditions that permit identifying the partial derivatives of the Green function of (2) that have a constant sign on (a, b), and it has also provided their signs, extending the results of [2, Theorems 6 and 7] and removing some of the limitations of that paper. This information is relevant for knowing the properties of solutions of problems of the type (3) when f does not change sign, and it also allows extending many results of cone theory to problems like (4). The paper has also introduced the concept of hyperdisfocality, which can become a very useful tool for assessing the zeroes of the derivatives of solutions of boundary value problems in general, and it has provided signs for the derivatives of the solutions of boundary value problems with n − 1 boundary conditions as well. All these findings are new, as far as the authors are aware. The main limitations of the results presented here are related to the sign requirements on the coefficients a_i(x) displayed in Theorems 5 and 6, which are needed to guarantee a constant sign of the n-th partial derivative. However, as indicated in Remark 2, other mechanisms guaranteeing such a constant sign would also work. This is a possible area for future research.