# New results on the sign of the Green function of a two-point n-th order linear boundary value problem

## Abstract

This paper provides conditions for determining the sign of all the partial derivatives of the Green functions of n-th order boundary value problems subject to a wide set of homogeneous two-point boundary conditions, removing the restrictions that previous results imposed on the distance between the two extremes that define the problem. To do so, it analyzes the sign of the derivatives of the solutions of related two-point n-th order boundary value problems subject to $$n-1$$ boundary conditions by introducing a new property denoted by ‘hyperdisfocality’.

## 1 Introduction

Let J be a compact interval in $\mathbb{R}$, and let us consider the real differential operator $$L: C^{n} (J) \rightarrow C(J)$$ defined by

\begin{aligned} L y = y^{(n)}(x) + a_{n-1} (x) y^{(n-1)}(x) + \cdots + a_{0}(x) y(x),\quad x \in J, \end{aligned}
(1)

where $$a_{j}(x) \in C(J)$$, $$0 \leq j \leq n-1$$.

In this paper, we will study the sign of the derivatives of the Green function of the problem

\begin{aligned} Ly =0,\quad x \in [a,b];\qquad y^{(i)} (a)=0,\quad i \in \alpha;\qquad y^{(j)} (b)=0,\quad j \in \beta; \end{aligned}
(2)

where $$[a,b] \subset J$$, α is the ordered set of integers $$\{ \alpha _{1}, \alpha _{2}, \ldots, \alpha _{k}\}$$, β is the ordered set of integers $$\{ \beta _{1}, \beta _{2}, \ldots, \beta _{n-k} \}$$, $$1 \leq k \leq n-1$$, both $$\alpha _{1}, \beta _{1} \geq 0$$ and $$\alpha _{k}, \beta _{n-k} < n$$.

We will impose the condition that the number of boundary conditions at a and b set on derivatives of order lower than t is greater than or equal to t for $$t=1, \ldots, n$$. Elias called such conditions poised in [1], a term which we will use in the rest of the manuscript, although they are called admissible in other sources, including reference [2].

If for every integer m such that $$1\leq m \leq p+1$$, exactly m terms of the sequence $$\alpha _{1}, \ldots, \alpha _{k}$$, $$\beta _{1}, \ldots, \beta _{n-k}$$ are less than m, we will say that the boundary conditions are p-alternate. In the case $$p=n-1$$, we will call the boundary conditions strongly poised. The poised conditions cover well-known cases like conjugate boundary conditions ($$\alpha _{1}=0, \alpha _{2}=1, \ldots, \alpha _{k}=k-1$$ and $$\beta _{1}=0, \beta _{2}=1, \ldots, \beta _{n-k}=n-k-1$$), focal boundary conditions (right focal with $$\alpha _{1}=0, \alpha _{2}=1, \ldots, \alpha _{k}=k-1$$ and $$\beta _{1}=k, \beta _{2}=k+1, \ldots, \beta _{n-k}=n-1$$ or left focal with $$\alpha _{1}=n-k, \alpha _{2}=n-k+1, \ldots, \alpha _{k}=n-1$$ and $$\beta _{1}=0, \beta _{2}=1, \ldots, \beta _{n-k}=n-k-1$$), and many others. The focal boundary conditions are also strongly poised (or $$(n-1)$$-alternate).
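As an illustration (not part of the original argument), the poisedness and p-alternateness conditions are purely combinatorial and can be checked mechanically. The following Python sketch is ours, with hypothetical function names; it assumes α and β are given as tuples of derivative orders:

```python
def is_poised(alpha, beta, n):
    # Poised: for each t = 1, ..., n, at least t of the boundary-condition
    # orders in alpha and beta are lower than t.
    orders = list(alpha) + list(beta)
    return len(orders) == n and all(
        sum(1 for o in orders if o < t) >= t for t in range(1, n + 1))

def is_p_alternate(alpha, beta, p):
    # p-alternate: for every m with 1 <= m <= p + 1, exactly m of the
    # orders are lower than m.
    orders = list(alpha) + list(beta)
    return all(sum(1 for o in orders if o < m) == m for m in range(1, p + 2))

# Right focal conditions with n = 4, k = 2 are strongly poised (3-alternate):
print(is_p_alternate((0, 1), (2, 3), 3))  # → True
```

Conjugate conditions, by contrast, are poised but not p-alternate for any p, since two of the orders equal 0.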

It is well known (see, for instance, [3, Chap. 3]) that problems of the type

\begin{aligned} \begin{aligned} &L y =f,\quad x \in (a,b), \\ &y^{(\alpha _{i})}(a)=0,\quad \alpha _{i} \in \alpha;\qquad y^{(\beta _{i})}(b)=0,\quad \beta _{i} \in \beta; \end{aligned} \end{aligned}
(3)

with $$f \in C[a,b]$$, have a solution given by $$y(x) = \int _{a}^{b} G(x,t) f(t) \,dt$$. Therefore, the knowledge of the sign of $$G(x,t)$$ and its derivatives can provide information about the sign of the solution $$y(x)$$ and these same derivatives, at least when f does not change sign on $$(a,b)$$. Likewise, there is a large amount of literature ([4–8] and [9]) on the use of the sign of $$G(x,t)$$ to define cones that, by means of the works of Krein and Rutman [10], allow finding information about the eigenvalues and eigenfunctions of the general problem

\begin{aligned} \begin{aligned}& L y = \lambda \sum _{l=0}^{\mu} c_{l} (x) y^{(l)}(x),\quad x \in (a,b), \\ &y^{(\alpha _{i})}(a)=0,\quad \alpha _{i} \in \alpha;\qquad y^{(\beta _{i})}(b)=0,\quad \beta _{i} \in \beta; \end{aligned} \end{aligned}
(4)

with $$\mu \leq n-1$$, $$c_{l}(x) \in C(J)$$ for $$0 \leq l \leq \mu$$, and even calculate them.
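To make the representation $$y(x) = \int _{a}^{b} G(x,t) f(t) \,dt$$ concrete, consider the simplest poised instance (our own illustration, not from the paper): $$n=2$$, $$Ly=y''$$, with conjugate conditions $$y(0)=y(1)=0$$. Here the Green function is known in closed form, $$G(x,t)=t(x-1)$$ for $$t \leq x$$ and $$G(x,t)=x(t-1)$$ otherwise; it is nonpositive on the whole square, so $$f \geq 0$$ forces $$y \leq 0$$. A quick numerical check:

```python
def G(x, t):
    # Green function of y'' = f, y(0) = y(1) = 0 (nonpositive on [0,1]^2)
    return t * (x - 1) if t <= x else x * (t - 1)

def solve(f, x, m=20000):
    # composite midpoint rule for y(x) = ∫_0^1 G(x, t) f(t) dt
    h = 1.0 / m
    return h * sum(G(x, (i + 0.5) * h) * f((i + 0.5) * h) for i in range(m))

# With f ≡ 1, the exact solution of y'' = 1, y(0) = y(1) = 0 is x(x - 1)/2.
print(solve(lambda t: 1.0, 0.3))  # ≈ 0.3 · (0.3 - 1) / 2 = -0.105
```

The sign information alone (here $$G \leq 0$$) already yields the sign of y for one-signed f, without evaluating the integral.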

Incidentally, the calculation of the smallest eigenvalue of (4) with $$\mu =0$$ is also relevant to prove the existence of solutions of nonlinear boundary value problems of the type $$Ly + p(x) g(y)=0$$, in particular, by comparing that eigenvalue with the quotient $$\frac{g(y)}{y}$$ for different values of y, especially when $$y \rightarrow 0^{+}$$ and when $$y \rightarrow + \infty$$. This approach was started by Erbe [11] for symmetric kernels and extended by Webb and Lan [12–14] and many others, [15] being a recent example.

Most of the literature that has addressed problem (4) has done it via an explicit calculation of the Green function $$G(x,t)$$ to determine its positive character. Whereas this calculation is necessary in some cases, in many others, it suffices to obtain information about the sign of $$G(x,t)$$ and some of its partial derivatives.

The first steps in that direction were made by Levin [16] and Pokornyi [17], who determined the sign of the Green function of (2) in the conjugate case. Their works were broadened by Karlin [18], who introduced the concept of total positivity of the kernel defined by $$G(x,t)$$. Peterson [19, 20], Elias [21], and Peterson and Ridenhour [22] extended it to several particular cases. Later Eloe and Ridenhour [23] provided some sign results for the lowest derivatives in the general poised case, which included the left focal and right focal cases. Other recent works worth mentioning are those of Webb and Infante [24, 25], who provided an elegant framework to address non-local boundary value conditions, and Cabada and Saavedra [26], who characterized a set of parameters where a Green function dependent on that parameter had a constant sign.

The focus of most of these papers has been the assessment of the sign of the Green function, with only a few exceptions addressing the sign of the partial derivatives of $$G(x,t)$$. That is the case of Eloe and Ridenhour’s work [23], which identified the signs of the partial derivatives $$\frac{ \partial ^{i} G(x,t)}{\partial x^{i}}$$ for $$i=0, \ldots, \max (\alpha _{1}, \beta _{1})$$. In [2], the present authors extended Eloe and Ridenhour’s results by increasing the order of the partial derivatives for which a sign could be determined, and in [8], they showed that in the case of linear boundary value problems defined in terms of quasi-derivatives, it was actually possible to determine the sign of all the partial quasi-derivatives.

However, the results of [2] suffered from the following limitations:

1. The need for $$Ly=0$$ to be disfocal on $$[a,b]$$. According to Nehari [27], this means that $$y(x) \equiv 0$$ is the only solution of $$Ly=0$$ satisfying $$y^{(i)}(x_{i})=0$$, $$i=0, 1, 2, \ldots, n-1$$, with $$x_{i} \in [a,b]$$. This restriction was already present in Eloe and Ridenhour’s work and, in general, implies that $$[a,b]$$ must be a short interval, shorter than the intervals where $$G(x,t)$$ can be defined (namely those in which (2) has only the trivial solution).

2. The fact that only the sign of some of the lowest order partial derivatives was provided, even in cases (like the strongly poised one) where one could expect constant signs in all partial derivatives.

In this paper, we will provide some conditions on the signs of the coefficients $$a_{i}(x)$$ and the existence of solutions of boundary value problems linked to (2), which will remove the two aforementioned restrictions, providing the sign of all the partial derivatives $$\frac{ \partial ^{i} G(x,t)}{\partial x^{i}}$$ that have a constant sign on $$[a,b]$$.

With regard to nomenclature, we will use the expression $$G_{ab} (x,t)$$ to stress the dependence of the Green function on the extremes a and b, where problem (2) is defined. Note that some theorems will require a and b to change as part of their proofs. In these cases, a and b in the aforementioned expression could be replaced by other variables, according to the needs of the proof, but they will always keep the meaning of the extremes where the conditions α and β are specified. We will denote indistinctly by $$\alpha \backslash \alpha _{i}$$ or $$\alpha \backslash \{ \alpha _{i} \}$$ the set resulting from removing the index $$\alpha _{i}$$ from the set α. Likewise, we will use indistinctly the expressions $$\beta \backslash \beta _{i}$$ or $$\beta \backslash \{ \beta _{i} \}$$ to refer to the set obtained by removing the index $$\beta _{i}$$ from the set β.

The organization of the paper is as follows: Sect. 2 will analyze the sign of the derivatives of boundary value problems with $$n-1$$ boundary conditions, which will be used in Sect. 3 to provide signs for the partial derivatives of the Green function of (2). Finally, Sect. 4 will formulate some conclusions.

## 2 Preliminary results

In this section, we will study the signs of the derivatives of the nontrivial solutions of the boundary value problem

\begin{aligned} Ly =0,\quad x \in [a,b];\qquad y^{(j)} (a)=0, \quad j \in \alpha ';\qquad y^{(j)} (b)=0,\quad j \in \beta '; \end{aligned}
(5)

where $$\alpha '$$ can be either α or $$\alpha \backslash \{ \alpha _{i} \}$$, and $$\beta '$$ can be either $$\beta \backslash \{ \beta _{i} \}$$ or β, respectively. That is, $$(\alpha ',\beta ')$$ are basically $$(\alpha,\beta )$$ of (2) without either one boundary condition $$\alpha _{i}$$ at a or one boundary condition $$\beta _{i}$$ at b. Accordingly, solutions y of (5) are subject to only $$n-1$$ boundary conditions. We will assume throughout the section that $$(\alpha,\beta )$$ are poised.

We will need the following definitions:

### Definition 1

If y is a nontrivial solution of (5), then

• A zero component is a closed subinterval of $$[a,b]$$ where a derivative of y is identically zero. If a derivative has several zero components, there must be subintervals of $$[a,b]$$ of positive measure separating them. Otherwise, they will be considered the same zero component. In what follows, we will use the term zero to refer to an isolated zero or a zero component indistinctly.

• $$z_{j}[a,b]$$ is the number of isolated zeroes or zero components of $$y^{(j)}(x)$$ on $$[a,b]$$, for $$j=0, \ldots, n$$.

• $$z_{j}(a,b)$$ is the number of isolated zeroes or zero components of $$y^{(j)}(x)$$ entirely lying on $$(a,b)$$, for $$j=0, \ldots, n$$.

• $$Z_{j} \{ \alpha ', \beta ' \}$$ is the number of derivative orders with homogeneous boundary conditions defined in $$\{ \alpha ', \beta ' \}$$, which are lower or equal to j.

• $$E_{j}[a,b]$$ is the excess of isolated zeroes or zero components of $$y^{(j)}(x)$$ on $$[a,b]$$, for $$j=0, \ldots, n$$, which are not due to the boundary conditions and the Rolle theorem and which, for reasons that will become clear later, we will define as

\begin{aligned} E_{j}[a,b] = z_{j}[a,b] - Z_{j} \bigl\{ \alpha ', \beta ' \bigr\} +j,\quad j=0, \ldots, n. \end{aligned}
(6)
• $$m(\alpha ',j)$$ is the number of derivatives of order equal to or higher than j that the boundary conditions $$\alpha '$$ do not specify to vanish at a.

• $$n(\beta ',j)$$ is the number of derivatives of order higher than j that the boundary conditions $$\beta '$$ do specify to vanish at b.

• $$I(\alpha ',\beta ')$$ is the set of derivative orders j such that $$z_{j}(a,b)=0$$ and (therefore) $$y^{(j)}$$ has a constant sign on $$(a,b)$$.

• $$K(\alpha ',\beta ')$$ is the lowest derivative order j such that $$Z_{j} \{ \alpha ',\beta ' \}=j$$. Note that $$K(\alpha ', \beta ')$$ always exists when there are only $$n-1$$ boundary conditions, since $$Z_{n-1} \{ \alpha ',\beta ' \}=n-1$$. Note also that $$K(\alpha ',\beta ') \notin \alpha ' \cup \beta '$$. Otherwise, $$K(\alpha ',\beta ')-1$$ would contradict the minimal character of $$K(\alpha ',\beta ')$$.

• $$H(\alpha,\beta )$$ (note the use of α and β instead of $$\alpha '$$ and $$\beta '$$) is the set of derivative orders j such that $$Z_{j-1} \{ \alpha,\beta \}=j$$.

• If $$\alpha _{k}=n-1$$, then $$\alpha ^{*}$$ is the lowest derivative order such that $$\{ \alpha ^{*}, \alpha ^{*}+1, \ldots, n-1 \} \subset \alpha$$; if $$\alpha _{k}< n-1$$, then $$\alpha ^{*}=K(\alpha \backslash \alpha _{k},\beta )+1$$. Analogously, if $$\beta _{n-k}=n-1$$, then $$\beta ^{*}$$ is the lowest derivative order such that $$\{ \beta ^{*}, \beta ^{*}+1, \ldots, n-1 \} \subset \beta$$, and if $$\beta _{n-k} < n-1$$, then $$\beta ^{*}=K(\alpha,\beta \backslash \beta _{n-k})+1$$.
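The counting quantities of Definition 1 that depend only on the index sets can be computed mechanically. The following sketch (ours, with hypothetical function names) takes $$\alpha '$$ and $$\beta '$$ as tuples of derivative orders and assumes the relevant orders range over $$0, \ldots, n-1$$:

```python
def Z(j, ap, bp):
    # Z_j{α',β'}: boundary-condition orders in α' ∪ β' lower than or equal to j
    return sum(1 for o in list(ap) + list(bp) if o <= j)

def K(ap, bp, n):
    # K(α',β'): lowest derivative order j with Z_j{α',β'} = j
    # (it always exists when |α'| + |β'| = n - 1, since Z_{n-1} = n - 1)
    return next(j for j in range(n) if Z(j, ap, bp) == j)

def m_count(ap, j, n):
    # m(α',j): derivatives of order ≥ j (orders j, ..., n-1) that α' does
    # not specify to vanish at a
    return sum(1 for o in range(j, n) if o not in ap)

def n_count(bp, j, n):
    # n(β',j): derivatives of order > j that β' does specify to vanish at b
    return sum(1 for o in bp if o > j)

# Example: n = 3, α' = {0}, β' = {0} gives K(α',β') = 2.
print(K((0,), (0,), 3))  # → 2
```

The example matches the definition: $$Z_{0}=Z_{1}=Z_{2}=2$$, so $$j=2$$ is the lowest order with $$Z_{j} \{ \alpha ',\beta ' \}=j$$.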

### Lemma 1

$$E_{j}[a,b]$$ satisfies $$E_{j}[a,b] \geq E_{j-1}[a,b] \geq 0$$, for $$j=1, \ldots, n$$.

### Proof

The proof mimics that of [8, Lemma 1], although that result applied to Green functions rather than to solutions of (5). Thus, from the definition (6) of $$E_{j}[a,b]$$, it is clear that

\begin{aligned} z_{j}[a,b] = E_{j}[a,b] + Z_{j} \bigl\{ \alpha ', \beta ' \bigr\} -j,\quad j=0, \ldots, n. \end{aligned}
(7)

Since y does not vanish identically, it cannot have a single zero component covering $$[a,b]$$. This implies that $$z_{0}[a,b] = E_{0}[a,b] + Z_{0} \{ \alpha ', \beta ' \} \geq Z_{0} \{ \alpha ', \beta ' \}$$, so $$E_{0}[a,b] \geq 0$$. Then, by Rolle’s theorem

\begin{aligned} z_{j}(a,b) \geq z_{j-1}[a,b] -1, \end{aligned}

and

\begin{aligned} z_{j}[a,b] \geq Z_{j} \bigl\{ \alpha ', \beta ' \bigr\} - Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} + z_{j-1}[a,b] -1. \end{aligned}

From here and (7), one has

\begin{aligned} z_{j}(a,b) \geq E_{j-1} [a,b] + Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} -j, \end{aligned}
(8)

and

\begin{aligned} z_{j}[a,b] &\geq Z_{j} \bigl\{ \alpha ', \beta ' \bigr\} - Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} + E_{j-1} [a,b] + Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} -j +1 -1 \\ &= E_{j-1} [a,b] + Z_{j} \bigl\{ \alpha ', \beta ' \bigr\} -j, \end{aligned}
(9)

which, together with (7), proves the statement. □

Lemma 1 shows that keeping the value of $$E_{n} [a,b]$$ low allows controlling the number of zeroes $$z_{j}(a,b)$$ of $$y^{(j)}(x)$$ on $$(a,b)$$. In the next results, we will identify conditions that use this mechanism to fix the derivative orders j for which $$z_{j}(a,b)=0$$, that is, the derivative orders that belong to $$I(\alpha ',\beta ')$$. To achieve that goal, besides the poisedness of $$(\alpha,\beta )$$, we need additional tools. In the case of [9], it was the very nature of the quasi-derivatives that ensured $$E_{n} [a,b]=E_{0} [a,b]=0$$. In the case under study, as we will see, we will need other mechanisms that guarantee that $$y^{(n)}$$ does not change sign on $$[a,b]$$.

### Lemma 2

If $$Ly=0$$ is disfocal on $$[a,b]$$, then $$l=K(\alpha ',\beta ')$$ is the only derivative order for which $$y^{(l)}$$ satisfies $$z_{l}[a,b]=0$$, for $$0 \leq l \leq n-1$$. In addition, $$E_{j}[a,b] =0$$ for $$j=0, \ldots, K(\alpha ',\beta ')$$ and $$E_{j}[a,b] \geq 1$$ for $$j=K(\alpha ',\beta ')+1, \ldots, n-1$$.

### Proof

Let us assume that $$\alpha '=\alpha \backslash \{ \alpha _{i} \}$$ and $$\beta '=\beta$$ (the case $$\alpha '=\alpha$$ and $$\beta '=\beta \backslash \{ \beta _{i} \}$$ can be proven in the same manner). The poisedness of $$(\alpha,\beta )$$ implies that $$Z_{j} \{ \alpha ',\beta ' \} \geq j+1$$ for $$j=0, \ldots, K(\alpha ',\beta ')-1$$ and $$Z_{j} \{ \alpha ',\beta ' \} \geq j$$ for $$j=K(\alpha ',\beta '), \ldots, n-1$$.

From (7), it follows that $$z_{j} [a,b] \geq E_{j}[a,b]+1 \geq 1$$, $$j=0, \ldots, K(\alpha ',\beta ')-1$$, that is, all derivatives of y up to the $$(K(\alpha ',\beta ')-1)$$-th have at least one zero on $$[a,b]$$. Likewise, from the definition of $$K(\alpha ',\beta ')$$ one has $$z_{K(\alpha ',\beta ')} [a,b] = E_{K(\alpha ',\beta ')}[a,b]$$. If $$E_{K(\alpha ',\beta ')}[a,b] \geq 1$$, Lemma 1 would give again $$z_{j}[a,b] \geq E_{j}[a,b] \geq E_{K(\alpha ',\beta ')}[a,b] \geq 1$$ for $$j=K(\alpha ',\beta '), \ldots, n-1$$, contradicting the disfocality of $$Ly=0$$ on $$[a,b]$$. Therefore, $$z_{K(\alpha ',\beta ')} [a,b] =E_{K(\alpha ',\beta ')}[a,b] =0$$, that is, $$y^{(K(\alpha ',\beta '))}$$ has no zero on $$[a,b]$$.

If $$K(\alpha ',\beta ')=n-1$$, the proof is completed. Otherwise, from the definition of $$K(\alpha ',\beta ')$$

\begin{aligned} &z_{K(\alpha ',\beta ') +1 }[a,b] \\ &\quad\geq Z_{K(\alpha ',\beta ') +1 } \bigl\{ \alpha ', \beta ' \bigr\} - Z_{K( \alpha ',\beta ') } \bigl\{ \alpha ', \beta ' \bigr\} = Z_{K(\alpha ',\beta ') +1 } \bigl\{ \alpha ', \beta ' \bigr\} - K \bigl(\alpha ',\beta ' \bigr). \end{aligned}
(10)

Applying Rolle’s theorem to (10) in the same way as in Lemma 1, one gets

\begin{aligned} z_{j} [a,b] \geq Z_{j} \bigl\{ \alpha ', \beta ' \bigr\} -j +1 \geq 1,\quad j= K \bigl( \alpha ',\beta ' \bigr) +1, \ldots, n-1, \end{aligned}
(11)

so that all derivatives of y higher than the $$K(\alpha ',\beta ')$$-th have at least one zero on $$[a,b]$$. A comparison of (7) and (11) for these derivatives also yields $$E_{j}[a,b] \geq 1$$. This completes the proof. □

Lemma 2 ensures that, if $$[a,b]$$ is a short enough interval, the only derivative of y of order lower than n that does not vanish on $$[a,b]$$ is the $$K(\alpha ',\beta ')$$-th one. However, this does not prevent the derivatives of order higher than $$K(\alpha ',\beta ')$$ from having more than one zero and even changing sign several times on $$[a,b]$$. The following theorem introduces the concept of hyperdisfocality, which targets exactly that problem.

### Theorem 1

Let us suppose that $$a_{K(\alpha ',\beta ')}(a) \neq 0$$ and that y is a nontrivial solution of (5). Then, there is $$c \in J$$ such that if b in (5) is smaller than c, $$y^{(n)}$$ does not vanish in $$[a,b]$$.

### Proof

We will follow an argument similar to the one used in [1, Chap. 0].

From [3, Chap. 3] one can find $$d>a$$ with $$d \in J$$ such that $$Ly=0$$ is disfocal on $$[a,d]$$, so that, according to Lemma 2, all derivatives $$y^{(j)}$$, $$0 \leq j \leq n-1$$, except $$y^{(K(\alpha ',\beta '))}$$, have a zero in $$(a,d)$$ (Coppel’s proof focuses on disconjugacy, but the reasoning for disfocality is exactly the same).

Likewise, since $$a_{K(\alpha ',\beta ')}(x)$$ is continuous on J by hypothesis, one can find a $$d'>a$$ with $$d' \in J$$ such that $$|a_{K(\alpha ',\beta ')}(x)- a_{K(\alpha ',\beta ')}(a) | < | a_{K( \alpha ',\beta ')}(a) |/2$$ for $$x \in [a, d']$$, so that $$a_{K(\alpha ',\beta ')}(x) \neq 0$$ in $$[a,d']$$.

Then, let us assume that there exists a sequence $$\{b_{l}\}$$ with $$b_{l} \in (a,\min (d,d'))$$ and $$b_{l} \rightarrow a^{+}$$ such that the derivative $$y_{l}^{(n)}$$ of the solution $$y_{l}$$ of (5) with $$b=b_{l}$$ vanishes in $$[a,b_{l}]$$.

If $$\{u_{m} \}$$ is a fundamental system of solutions of $$Lu=0$$ such that $$u_{m}^{(s-1)}(a)=\delta _{ms}$$, $$1 \leq m, s \leq n$$, then each $$y_{l}$$ can be expressed as

\begin{aligned} y_{l} (x) = d_{1,l} u_{1} (x) + \cdots + d_{n,l} u_{n} (x),\quad x \in [a, b_{l}]. \end{aligned}
(12)

Let us normalize $$d_{m,l}$$ such that $$\sum_{m=1}^{n} |d_{m,l} |^{2} =1$$, $$l \geq 1$$. Given that for each l the n-tuple $$(d_{1,l}, \ldots, d_{n,l})$$ belongs to $$B(0,1) \subset \mathbb{R}^{n}$$, and $$B(0,1)$$ is a compact set, if one makes $$b_{l}$$ tend to a, then there will be a subsequence $$b_{l_{j}}$$ such that $$d_{m,l_{j}} \rightarrow d_{m}^{*} \in B(0,1)$$. In turn, this implies that $$\{y_{l_{j}} \}$$ will converge uniformly to the function $$y_{*}$$ defined by

\begin{aligned} y_{*} (x) = d_{1}^{*} u_{1} (x) + \cdots + d_{n}^{*} u_{n} (x), \end{aligned}

which is a solution of $$Ly=0$$ on J satisfying $$y_{*}^{(j)} (a)=0$$ for $$j=0, \ldots, n$$, except possibly for $$y_{*}^{(K(\alpha ',\beta '))} (a)$$. However, given that $$a_{K(\alpha ',\beta ')} \neq 0$$ on each $$[a,b_{l}]$$, from (1) one has that $$y_{*}^{(K(\alpha ',\beta '))} (a)=0$$ too. But that is impossible, since the $$u_{m}$$ are linearly independent and the coefficients $$d_{m}^{*}$$ cannot all vanish. Therefore, the sequence $$\{b_{l}\}$$ cannot exist, and there must be a minimum $$c> a$$ such that $$y^{(n)}$$ does not vanish in $$[a,b]$$ for any $$b < c$$. □

### Remark 1

It is straightforward to obtain an equivalent theorem for the case $$a_{K(\alpha ',\beta ')}(b) \neq 0$$, mutatis mutandis.

We will denote the property described in Theorem 1 by $$K(\alpha ',\beta ')$$-hyperdisfocality.

### Corollary 1

Under the assumptions of Theorem 1, if $$b \in (a,c)$$, then $$E_{j}[a,b] = 1$$ for $$j=K(\alpha ',\beta ')+1, \ldots, n-1$$, all zeroes are isolated, and the set $$I(\alpha ',\beta ')$$ is composed of those derivative orders j for which

\begin{aligned} Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} = j, \end{aligned}
(13)

if $$0 \leq j \leq K(\alpha ',\beta ')$$, and

\begin{aligned} Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} = j-1, \end{aligned}
(14)

if $$K(\alpha ',\beta ') < j \leq n-1$$.

### Proof

If $$E_{j}[a,b] > 1$$ for an index j such that $$K(\alpha ',\beta ') < j \leq n-1$$, then, by Lemma 1, one would have $$E_{n-1}[a,b]>1$$ and, by (7), $$z_{n-1} [a,b]>1$$. Rolle’s theorem would give $$z_{n}[a,b] \geq 1$$, contradicting Theorem 1. This and Lemma 2 imply $$E_{j}[a,b] = 1$$ for $$K(\alpha ',\beta ') < j \leq n-1$$. All these zeroes are isolated since, otherwise, $$y^{(n)}$$ would vanish in a subinterval of $$[a,b]$$, violating Theorem 1. Next, since

\begin{aligned} z_{j} (a,b) = E_{j}[a,b] + Z_{j-1} \bigl\{ \alpha ', \beta ' \bigr\} - j, \end{aligned}

equations (13) and (14) follow from the definition of $$I(\alpha ',\beta ')$$ and the value of $$E_{j}[a,b]$$ for values of j lower and higher than $$K(\alpha ',\beta ')$$, respectively. □
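Conditions (13) and (14) make $$I(\alpha ',\beta ')$$ computable directly from the index sets. The following sketch is ours and presupposes the hypotheses of Corollary 1; it returns the constant-sign derivative orders below n:

```python
def I_set(ap, bp, n):
    # I(α',β') per Corollary 1: orders j ≤ K with Z_{j-1} = j (condition (13))
    # and orders K < j ≤ n-1 with Z_{j-1} = j - 1 (condition (14)).
    orders = list(ap) + list(bp)
    Z = lambda j: sum(1 for o in orders if o <= j)   # Z_j{α',β'}
    K = next(j for j in range(n) if Z(j) == j)       # K(α',β')
    Zm = lambda j: Z(j - 1) if j >= 1 else 0         # Z_{j-1}, with Z_{-1} = 0
    return {j for j in range(n)
            if (j <= K and Zm(j) == j) or (j > K and Zm(j) == j - 1)}

# Example: n = 3, α' = {0}, β' = {0} gives K(α',β') = 2 and I = {0, 2}.
print(I_set((0,), (0,), 3))  # → {0, 2}
```

In the example, the first derivative is excluded because Rolle’s theorem forces it to vanish once inside $$(a,b)$$; the $$K(\alpha ',\beta ')$$-th order always belongs to the set, since $$Z_{K-1} \{ \alpha ',\beta ' \}=K$$ by minimality.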

In the following results, we will make extensive use of the continuity of the solution of (5) with respect to the extremes a and b, which we will prove in the next lemma.

### Lemma 3

Let us assume that problem (5) does not have a nontrivial solution that satisfies $$y^{(K(\alpha ',\beta '))}(x^{*})=0$$ for the extremes a and b where it is defined, with $$x^{*} \in [a,b]$$. Then the solution $$y(x)$$ of (5) for which $$y^{(K(\alpha ',\beta '))}(x^{*})=1$$, as well as its derivatives up to the n-th order, is continuous with respect to these extremes.

### Proof

If $$u_{m}(x)$$, $$1 \leq m \leq n$$ are defined as in Theorem 1, then the solution y of (5) for which $$y^{(K(\alpha ',\beta '))}(x^{*})=1$$ is given by

\begin{aligned} y(x) = \sum_{m=1}^{n} d_{m} u_{m} (x), \end{aligned}

where $$d_{m}$$ are the solutions of the system of equations comprised of k equations (if $$\alpha '=\alpha$$) or $$k-1$$ equations (if $$\alpha '=\alpha \backslash \{ \alpha _{i} \}$$) of the type

\begin{aligned} d_{1} u^{(\alpha _{j})}_{1} (a) + \cdots + d_{n} u^{(\alpha _{j})}_{n} (a) = 0, \end{aligned}

$$n-k-1$$ equations (if $$\beta ' = \beta \backslash \{ \beta _{i} \}$$) or $$n-k$$ equations (if $$\beta ' = \beta$$) of the type

\begin{aligned} d_{1} u^{(\beta _{j})}_{1} (b) + \cdots + d_{n} u^{(\beta _{j})}_{n} (b) = 0, \end{aligned}

and one equation

\begin{aligned} d_{1} u^{(K(\alpha ',\beta '))}_{1} \bigl(x^{*} \bigr) + \cdots + d_{n} u^{(K( \alpha ',\beta '))}_{n} \bigl(x^{*} \bigr) = 1. \end{aligned}

All coefficients of the system are continuous with respect to a and b, so that by Cramer’s rule the solutions $$d_{m}$$ will be continuous with respect to a and b as long as the determinant of the coefficient matrix does not vanish, which is exactly the necessary and sufficient condition for (5) not to have a nontrivial solution such that $$y^{(K(\alpha ',\beta '))}(x^{*})=0$$. □
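As a concrete instance of this construction (ours, not from the paper), take $$Ly=y'''$$ on $$[0,1]$$ with $$\alpha '=\{0\}$$ and $$\beta '=\{0\}$$, so that $$K(\alpha ',\beta ')=2$$ and we choose $$x^{*}=a=0$$. The fundamental system with $$u_{m}^{(s-1)}(0)=\delta _{ms}$$ is $$u_{1}=1$$, $$u_{2}=x$$, $$u_{3}=x^{2}/2$$, and the system for the $$d_{m}$$ reduces to three linear equations:

```python
import numpy as np

a, b = 0.0, 1.0
u  = [lambda x: 1.0, lambda x: x, lambda x: x ** 2 / 2]  # fundamental system
u2 = [lambda x: 0.0, lambda x: 0.0, lambda x: 1.0]       # second derivatives

A = np.array([[f(a) for f in u],     # y(a) = 0      (α' = {0})
              [f(b) for f in u],     # y(b) = 0      (β' = {0})
              [f(a) for f in u2]])   # y''(x*) = 1,  x* = a
d = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
y = lambda x: sum(c * f(x) for c, f in zip(d, u))
# Here d = (0, -1/2, 1), i.e. y(x) = x²/2 - x/2.
print(d)
```

The coefficient matrix A is nonsingular exactly when no nontrivial solution of (5) satisfies $$y^{(K(\alpha ',\beta '))}(x^{*})=0$$, which is the nondegeneracy hypothesis of Lemma 3; Cramer’s rule then gives the continuity of the $$d_{m}$$ with respect to a and b.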

We can start proving results on the sign of the solutions of (5).

### Theorem 2

Let $$x^{*} \in [a,b]$$, and let us assume that for $$x \in [a,b]$$

\begin{aligned} \begin{aligned} &a_{j}(x) \equiv 0, \quad j \notin I \bigl(\alpha ',\beta ' \bigr); \\ &(-1)^{m(\alpha ', j) - m(\alpha ', K(\alpha ',\beta '))} a_{j}(x) a_{K( \alpha ', \beta ')}(x) \geq 0, \quad j \in I \bigl(\alpha ',\beta ' \bigr), 0 \leq j < K \bigl( \alpha ', \beta ' \bigr); \\ &a_{K(\alpha ', \beta ')}(x) \neq 0, \\ &(-1)^{m(\alpha ',j)} a_{j}(x) \leq 0, \quad j \in I \bigl(\alpha ',\beta ' \bigr), K \bigl(\alpha ', \beta ' \bigr) < j < n. \end{aligned} \end{aligned}
(15)

Let us also suppose that the following boundary value problems

\begin{aligned} \begin{aligned}& Lv =0,\quad x \in \bigl(a',b' \bigr);\qquad v^{(j)} \bigl(a' \bigr)=0,\quad j \in \alpha ' \cup \bigl\{ K \bigl(\alpha ',\beta ' \bigr) \bigr\} ;\\ &v^{(j)} \bigl(b' \bigr)=0,\quad j \in \beta ';\end{aligned} \end{aligned}
(16)
\begin{aligned} \begin{aligned}&Lw =0,\quad x \in \bigl(a',b' \bigr);\qquad w^{(j)} \bigl(a' \bigr)=0,\quad j \in \alpha ';\\ & w^{(j)} \bigl(b' \bigr)=0,\quad j \in \beta ' \cup \bigl\{ K \bigl(\alpha ',\beta ' \bigr) \bigr\} ; \end{aligned} \end{aligned}
(17)

and

\begin{aligned} \begin{aligned} &L \chi =0,\quad x \in \bigl(a',b' \bigr);\qquad \chi ^{(j)} \bigl(a' \bigr)=0,\quad j \in \alpha ';\qquad \chi ^{(j)} \bigl(b' \bigr)=0,\quad j \in \beta ';\\& \chi ^{(K( \alpha ',\beta '))} \bigl(x^{*} \bigr)=0; \end{aligned} \end{aligned}
(18)

do not have solutions other than the trivial one for any $$[a',b'] \subseteq [a,b]$$.

If y is a solution of (5) such that $$y^{(K(\alpha ',\beta '))}(x^{*})=1$$, then each $$y^{(j)}(x)$$ does not change sign in $$(a,b)$$ for $$j \in I(\alpha ',\beta ') \cup \{ n \}$$, and such a sign is given by

\begin{aligned} (-1)^{m(\alpha ', j) - m(\alpha ', K(\alpha ',\beta '))} y^{(j)}(x) > 0,\quad j = 0, \ldots, K \bigl(\alpha ',\beta ' \bigr), j \in I \bigl(\alpha ',\beta ' \bigr), \end{aligned}
(19)

and

\begin{aligned} (-1)^{m(\alpha ', j)} a_{K(\alpha ', \beta ')}(x) y^{(j)}(x) < 0, \quad j = K \bigl(\alpha ',\beta ' \bigr)+1, \ldots, n, j \in I \bigl(\alpha ',\beta ' \bigr) \cup \{ n \}. \end{aligned}
(20)

### Proof

Let us consider problem (5) taking $$a'$$ and $$b'$$ (with $$[a',b'] \subseteq [a,b]$$) as extremes instead of a and b. Let us set $$a'=x^{*}$$ (if $$x^{*}=b$$, we could take $$a'$$ and $$x^{*}$$ as extremes and repeat exactly the same argument that follows with $$a'$$ instead of $$b'$$). From Corollary 1, it follows that there is a $$c>x^{*}$$ such that, if $$b' < c$$, the derivatives of the nontrivial solutions of that problem whose orders satisfy (13) and (14) have a constant sign on $$(x^{*},b')$$, and all zeroes are isolated. Thus, let us set $$b' < c$$. From Lemma 2 and Theorem 1, one has that $$y^{(K(\alpha ',\beta '))}$$ and $$y^{(n)}$$ neither vanish nor change sign on $$[x^{*},b']$$. Given that we impose the condition $$y^{(K(\alpha ',\beta '))}(x^{*})=1$$, the sign of $$y^{(K(\alpha ',\beta '))}(x)$$ must be positive on $$[x^{*},b']$$.

Next, if $$y^{(j)}(x^{*})=0$$ for some $$j \leq n-1$$, then obviously there exists $$\delta >0$$ such that $$y^{(j)}(x) y^{(j+1)}(x) > 0$$ for $$x \in (x^{*}, x^{*}+ \delta )$$. On the contrary, if $$y^{(j)}(x^{*}) \neq 0$$ for some $$j \leq n-1$$, then, unless $$j=K(\alpha ',\beta ')$$, $$y^{(j)}(x)$$ must have at least one isolated zero in $$(x^{*},b']$$ due to Lemma 2 and Corollary 1. Let $$x_{j}$$ be the minimum of these zeroes. Given that $$y^{(j+1)}$$ cannot have a zero in $$(x^{*},x_{j})$$ (the only possible zeroes of $$y^{(j+1)}$$ in $$(x^{*},b')$$ are those forced by Rolle’s theorem as per Lemma 2 and Corollary 1), it follows

\begin{aligned} - y^{(j)} \bigl(x^{*} \bigr) = y^{(j)}(x_{j}) - y^{(j)} \bigl(x^{*} \bigr) = \int _{x^{*}}^{x_{j}} y^{(j+1)}(x) \,dx, \end{aligned}

so that $$y^{(j)}(x)$$ and $$y^{(j+1)}(x)$$ must have opposite signs on $$(x^{*},x_{j})$$. Combining both cases, we obtain that the sign of $$y^{(j)}(x)$$ for $$x \in (x^{*},x^{*}+\delta )$$ is given by

\begin{aligned} (-1)^{m(\alpha ', j)} y^{(j)}(x) y^{(n)}(x) > 0, \quad j = K \bigl(\alpha ', \beta ' \bigr)+1, \ldots, n-1, \end{aligned}
(21)

and

\begin{aligned} (-1)^{m(\alpha ', j) - m(\alpha ', K(\alpha ',\beta '))} y^{(j)}(x) > 0,\quad j = 0, \ldots, K \bigl(\alpha ',\beta ' \bigr). \end{aligned}
(22)

Following the reasoning of Theorem 1 and taking (1) into account, let us also suppose that $$b'$$ is so close to $$x^{*}$$ that the sign of $$y^{(n)}(x)$$ is the same as that of $$- a_{K(\alpha ', \beta ')}(x) y^{(K(\alpha ', \beta '))}(x)$$ on $$(x^{*},b')$$, that is, the same as $$- a_{K(\alpha ', \beta ')}(x^{*})$$. This and (21) give

\begin{aligned} (-1)^{m(\alpha ', j) } a_{K(\alpha ', \beta ')} \bigl(x^{*} \bigr) y^{(j)}(x) < 0,\quad j = K \bigl(\alpha ',\beta ' \bigr)+1, \ldots, n, \end{aligned}
(23)

for $$x \in (x^{*}, x^{*}+\delta )$$. For those derivative orders $$j \in I(\alpha ',\beta ')$$, the signs given by (22) and (23) will apply to the whole interval $$(x^{*},b')$$ by the definition of $$I(\alpha ',\beta ')$$, which means that (19) and (20) hold for the extremes $$x^{*}, b'$$. Next, let us start gradually increasing $$b'$$ to b while $$a'$$ is kept fixed at $$a'=x^{*}$$. Note that, according to Lemma 3 and (18), $$y^{(j)}(x)$$ is continuous with respect to $$b'$$ for $$0 \leq j \leq n$$. Let us suppose that, during that process, a new zero appears in one of the derivatives $$y^{(j)}$$, $$0 \leq j \leq n$$, $$x \in [x^{*},b']$$. Let $$b^{*} \leq b$$ be the smallest value of $$b'$$ for which such a new zero appears in any of these derivatives, and let l be the order of that derivative. We can have the following cases:

1. $$l < K(\alpha ',\beta ')$$. By Rolle’s theorem and the facts that $$z_{j}[x^{*},b'] \geq 1$$ for $$j < K(\alpha ',\beta ')$$ and $$b' \leq b^{*}$$ (that is, all these derivatives lower than the $$K(\alpha ',\beta ')$$-th have at least a zero in $$[x^{*},b^{*}]$$ besides the new one), there should also be a change of sign of $$y^{(K(\alpha ',\beta '))}$$ in $$(x^{*},b^{*})$$. By continuity, this zero of $$y^{(K(\alpha ',\beta '))}$$ should have appeared for a $$b' < b^{*}$$, contradicting the definition of $$b^{*}$$, so this case is not possible.

2. $$K(\alpha ',\beta ') < l < n$$. Again by Rolle’s theorem and the fact that $$z_{j}[x^{*},b'] \geq 1$$ for $$K(\alpha ',\beta ') < j < n$$ and $$b' \leq b^{*}$$, this new zero would force a change of sign of $$y^{(n)}$$ in $$(x^{*},b^{*})$$. By continuity of $$y^{(n)}$$ with respect to $$b'$$, that change of sign would have required a zero of $$y^{(n)}$$ for a $$b' < b^{*}$$, contradicting the definition of $$b^{*}$$.

3. $$l = K(\alpha ',\beta ')$$. Here, we have three subcases:

• Either the zero appears at $$x^{*}$$ or at $$b^{*}$$, which is impossible since (16) and (17) have only the trivial solution for any $$x^{*},b' \in [a,b]$$ by hypothesis.

• Or the zero implies the change of sign of $$y^{(K(\alpha ',\beta '))}$$ in $$(x^{*},b^{*})$$. This is also impossible, as that change of sign, by continuity, should have happened for a $$b' < b^{*}$$, contradicting the definition of $$b^{*}$$.

• Or such a zero is also a local extreme of $$y^{(K(\alpha ',\beta '))}$$ in $$(x^{*},b^{*})$$, let us call it d. But this implies that $$y^{(K(\alpha ',\beta ')+1)}(d)$$ must also vanish, leading us to the reasoning of case 2.

4. $$l=n$$. Since the term $$a_{K(\alpha ', \beta ')}(x) y^{(K(\alpha ', \beta '))}(x)$$ cannot vanish in $$[x^{*},b^{*}]$$, as per the previous case, according to (1), (15), (19), and (20), this option is not possible either.

In summary, no new zero appears on $$[x^{*},b']$$ in any of the derivatives of y of order lower than or equal to n as $$b'$$ grows up to b, so that $$E_{j}[a,b]$$ will be as in Corollary 1. Consequently, the derivatives whose orders belong to $$I(\alpha ',\beta ')$$ and are lower than or equal to $$K(\alpha ',\beta ')$$ will keep the signs given by (22), whereas those whose orders belong to $$I(\alpha ',\beta ')$$ but are higher than $$K(\alpha ',\beta ')$$ will have the signs determined by (23), that is, (19) and (20), respectively, for $$x \in (x^{*},b)$$.

Repeating the same reasoning with the lower extreme $$a'$$ decreasing from $$x^{*}$$ to a, one finally obtains (19) and (20) for $$x \in (a,b)$$. □

### Remark 2

Apart from the $$K(\alpha ',\beta ')$$-hyperdisfocality of the problem, the keys to ensuring the constant sign of the derivatives of y whose orders belong to $$I(\alpha ',\beta ')$$ as $$b'$$ increases (or $$a'$$ decreases) are the constant signs of $$y^{(K(\alpha ',\beta '))}$$ and $$y^{(n)}$$ during the process. The former is achieved by the latter plus the requirement that no extra zero appear at $$y^{(K(\alpha ',\beta '))}(a')$$ or $$y^{(K(\alpha ',\beta '))}(b')$$ during the decrease of $$a'$$ and the growth of $$b'$$ (that is, (16) and (17)). The way selected in this paper to ensure the constant sign of $$y^{(n)}$$ is to force all terms different from $$y^{(n)}$$ in (1) to have the same sign on $$(a',b')$$, although any other mechanism ensuring it would work too. One must note that forcing the boundary value problem (5) to have only the trivial solution when either the condition $$y^{(n-1)}(a')=0$$ or the condition $$y^{(n-1)}(b')=0$$ is added does not suffice, since a zero of $$y^{(n-1)}(x)$$ that is simultaneously a local extreme could appear in $$(a',b')$$ as $$b'$$ grows (or $$a'$$ decreases) and become a component of different sign for greater values of $$b'$$ (lower values of $$a'$$).

Next, we will show that the condition $$a_{K(\alpha ', \beta ')}(x) \neq 0$$, $$x \in [a,b]$$ can, in fact, be dropped.

### Theorem 3

Let us assume that for $$x \in [a,b]$$

\begin{aligned} \begin{aligned}& a_{j}(x) \equiv 0, \quad j \notin I \bigl(\alpha ',\beta ' \bigr); \\ &(-1)^{m(\alpha ',j)} a_{j}(x) \leq 0, \quad j \in I \bigl(\alpha ',\beta ' \bigr), K \bigl(\alpha ', \beta ' \bigr) < j \leq n-1, \end{aligned} \end{aligned}
(24)

and either

\begin{aligned} (-1)^{m(\alpha ', j)} a_{j}(x) \leq 0,\quad j \in I \bigl( \alpha ',\beta ' \bigr), 0 \leq j \leq K \bigl(\alpha ', \beta ' \bigr), \end{aligned}
(25)

or

\begin{aligned} (-1)^{m(\alpha ', j)} a_{j}(x) \geq 0,\quad j \in I \bigl( \alpha ',\beta ' \bigr), 0 \leq j \leq K \bigl(\alpha ', \beta ' \bigr). \end{aligned}
(26)

Let us also assume that the hypotheses (16)–(17) hold. If y is a solution of (5) such that $$y^{(K(\alpha ',\beta '))}(x^{*})=1$$, with either $$x^{*}=a$$ or $$x^{*}=b$$, then each $$y^{(j)}(x)$$ does not change sign on $$(a,b)$$ for $$j \in I(\alpha ',\beta ') \cup \{ n \}$$, and such a sign is given by

\begin{aligned} (-1)^{m(\alpha ', j) - m(\alpha ', K(\alpha ',\beta '))} y^{(j)}(x) > 0, \quad j = 0, \ldots, K \bigl(\alpha ',\beta ' \bigr), j \in I \bigl(\alpha ',\beta ' \bigr), \end{aligned}
(27)

and

\begin{aligned} (-1)^{m(\alpha ', j)- m(\alpha ', K(\alpha ',\beta '))} y^{(j)}(x) \geq 0,\quad j = K \bigl(\alpha ',\beta ' \bigr)+1, \ldots, n, j \in I \bigl( \alpha ',\beta ' \bigr) \cup \{ n \}, \end{aligned}
(28)

if (25) holds, and

\begin{aligned} (-1)^{m(\alpha ', j)- m(\alpha ', K(\alpha ',\beta '))} y^{(j)}(x) \leq 0,\quad j = K \bigl(\alpha ',\beta ' \bigr)+1, \ldots, n, j \in I \bigl( \alpha ',\beta ' \bigr) \cup \{ n \}, \end{aligned}
(29)

if (26) holds. Inequalities (28) and (29) are strict if there is a derivative order $$l \leq K(\alpha ',\beta ')$$ such that $$a_{l}(x)$$ does not vanish in $$(a,b)$$. On the contrary, if $$a_{l}(x) \equiv 0$$ for all $$l \leq K(\alpha ',\beta ')$$, then $$y^{(j)}(x) \equiv 0$$ on $$[a,b]$$ for $$K(\alpha ',\beta ') < j \leq n$$ (note that these two options do not exhaust all possible alternatives). In any case, the number of zeroes of $$y^{(j)}$$ in $$[a,b]$$ is given by $$Z_{j} \{\alpha ',\beta ' \} -j$$ for $$j \leq K(\alpha ',\beta ')$$ and by $$Z_{j} \{\alpha ',\beta ' \} -j +1$$ for $$K(\alpha ',\beta ') < j < n$$.

### Proof

If $$a_{K(\alpha ', \beta ')}(x) \neq 0$$ in $$[a,b]$$, then the result follows from the previous theorem, so let us assume that $$a_{K(\alpha ', \beta ')}(x)$$ vanishes in $$[a,b]$$. Let us pick $$\epsilon >0$$, and let us build the function $$a_{K(\alpha ', \beta '), \epsilon}: [a,b] \rightarrow \mathbb{R}$$ as

\begin{aligned} a_{K(\alpha ', \beta '), \epsilon}(x) = \textstyle\begin{cases} (-1)^{m(\alpha ', K(\alpha ',\beta '))+1} \max \{ \epsilon, \vert a_{K( \alpha ', \beta ')}(x) \vert \} & \text{if (25) holds}, \\ (-1)^{m(\alpha ', K(\alpha ',\beta '))} \max \{ \epsilon, \vert a_{K( \alpha ', \beta ')}(x) \vert \} & \text{if (26) holds.} \end{cases}\displaystyle \end{aligned}
(30)
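For intuition, the construction (30) can be sketched numerically. The snippet below is a minimal illustration, not part of the proof: the sign factor is fixed to +1 and the coefficient a is a hypothetical function vanishing at an interior point.

```python
def regularize(a, eps, sign=1.0):
    """Return x -> sign * max(eps, |a(x)|), mirroring (30): the result is
    continuous, keeps a constant sign, is bounded away from zero by eps,
    and coincides with sign * |a(x)| wherever |a(x)| >= eps."""
    return lambda x: sign * max(eps, abs(a(x)))

# Hypothetical coefficient vanishing at x = 0.5 (chosen only for the demo)
a = lambda x: x - 0.5
a_eps = regularize(a, eps=1e-2)

assert all(abs(a_eps(k / 100.0)) >= 1e-2 for k in range(101))  # never vanishes
assert a_eps(0.9) == abs(a(0.9))  # untouched away from the zero of a
```

The point exploited in the proof is precisely the first assertion: the regularized coefficient is bounded away from zero, while differing from the original by at most ϵ.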

$$a_{K(\alpha ', \beta '), \epsilon}$$ is obviously continuous on $$[a,b]$$, so that we can replace $$a_{K(\alpha ', \beta ')}(x)$$ by $$a_{K(\alpha ', \beta '), \epsilon}(x)$$ in the definition of the operator L and obtain the operator

\begin{aligned} & L_{\epsilon }y = y^{(n)}(x) + a_{n-1} (x) y^{(n-1)}(x) + \cdots + a_{K( \alpha ', \beta '), \epsilon}(x) y^{(K(\alpha ', \beta '))} (x) + \cdots + a_{0}(x) y(x), \\ &\quad x \in J. \end{aligned}
(31)

Let $$y_{\epsilon}$$ be the solution of the boundary value problem analogous to (5)

\begin{aligned} &L_{\epsilon }y_{\epsilon }=0,\quad x \in [a,b]; \\ & y^{(j)}_{\epsilon }(a)=0,\quad j \in \alpha ';\qquad y^{(j)}_{\epsilon }(b)=0,\quad j \in \beta ';\qquad y^{(K(\alpha ', \beta '))}_{\epsilon } \bigl(x^{*} \bigr)=1. \end{aligned}
(32)

We will prove first that $$y_{\epsilon}$$ is continuous with respect to ϵ at $$\epsilon =0$$. For that let us recall that, if $$u_{m}$$, $$1 \leq m \leq n$$, are defined as in Theorem 1, and $$u_{m,\epsilon}$$, $$1 \leq m \leq n$$, are the solutions of $$L_{\epsilon }u=0$$ satisfying

\begin{aligned} u^{(s-1)}_{m, \epsilon}(a)=\delta _{ms}, \quad 1 \leq m,s \leq n, \end{aligned}
(33)

then the solutions of (5) (plus $$y^{(K(\alpha ',\beta '))}(x^{*})=1$$) and (32) are given by

\begin{aligned} y= c_{1} u_{1} + \cdots + c_{n} u_{n},\quad x \in [a,b], \end{aligned}
(34)

and

\begin{aligned} y_{\epsilon}= c_{1, \epsilon} u_{1, \epsilon} + \cdots + c_{n, \epsilon} u_{n, \epsilon},\quad x \in [a,b], \end{aligned}
(35)

respectively. The functions $$u_{m, \epsilon}$$, $$1 \leq m \leq n$$, and their derivatives of order up to the n-th are continuous with respect to ϵ at $$\epsilon =0$$. This can be proven using Gronwall's inequality. To do so, let $$w_{m,\epsilon}$$ be defined by

\begin{aligned} w_{m,\epsilon} (x)= u_{m,\epsilon}(x) - u_{m}(x),\quad x \in [a,b], 1 \leq m \leq n. \end{aligned}
(36)

Let us construct the vectors $$W_{m,\epsilon}$$, $$U_{m,\epsilon}$$, and $$U_{m} \in \mathbb{R}^{n \times 1}$$ as

\begin{aligned} W_{m,\epsilon} = \begin{pmatrix} w_{m,\epsilon} (x) \\ w_{m,\epsilon}' (x) \\ \vdots \\ w_{m,\epsilon}^{(n-1)} (x) \end{pmatrix},\qquad U_{m,\epsilon} = \begin{pmatrix} u_{m,\epsilon} (x) \\ u_{m,\epsilon}' (x) \\ \vdots \\ u_{m,\epsilon}^{(n-1)} (x) \end{pmatrix},\qquad U_{m} = \begin{pmatrix} u_{m} (x) \\ u'_{m} (x) \\ \vdots \\ u^{(n-1)}_{m} (x) \end{pmatrix}, \end{aligned}
(37)

and the matrices M, $$M_{\epsilon} \in \mathbb{R}^{n \times n}$$ as

\begin{aligned} M_{\epsilon }= \begin{pmatrix} 0 & 1 & 0 & \ldots & \ldots & \ldots & 0 \\ 0 & 0 & 1 & \ldots & \ldots & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ -a_{0}(x) & -a_{1}(x) & -a_{2}(x) & \ldots & -a_{K(\alpha ', \beta '), \epsilon}(x) & \ldots & -a_{n-1}(x) \end{pmatrix} \end{aligned}
(38)

and

\begin{aligned} M = \begin{pmatrix} 0 & 1 & 0 & \ldots & \ldots & \ldots & 0 \\ 0 & 0 & 1 & \ldots & \ldots & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ -a_{0}(x) & -a_{1}(x) & -a_{2}(x) & \ldots & -a_{K(\alpha ', \beta ')}(x) & \ldots & -a_{n-1}(x) \end{pmatrix}. \end{aligned}
(39)
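The matrices (38)–(39) are companion matrices of the operators $$L_{\epsilon}$$ and L. As an aside, the reformulation $$U' = MU$$ can be exercised on a toy constant-coefficient example; the equation $$y''+y=0$$ below is purely illustrative and does not appear in the paper.

```python
import math

def companion(coeffs):
    """Companion matrix of y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = 0,
    in the layout of (38)-(39), for constant coefficients [a_0, ..., a_{n-1}]."""
    n = len(coeffs)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        M[i][i + 1] = 1.0            # shift rows: (y, y', ...)' picks the next entry
    M[n - 1] = [-c for c in coeffs]  # last row encodes the ODE itself
    return M

def euler(M, U0, h, steps):
    """Forward-Euler integration of U' = M U from the initial vector U0."""
    U = list(U0)
    for _ in range(steps):
        U = [u + h * sum(M[i][j] * U[j] for j in range(len(U)))
             for i, u in enumerate(U)]
    return U

# y'' + y = 0 with y(0) = 1, y'(0) = 0 has the solution y = cos x
M = companion([1.0, 0.0])
U = euler(M, [1.0, 0.0], h=1e-4, steps=10000)  # integrate up to x = 1
assert abs(U[0] - math.cos(1.0)) < 1e-3
assert abs(U[1] + math.sin(1.0)) < 1e-3
```

The vector U carries $$(y, y', \ldots, y^{(n-1)})$$, exactly the unknowns stacked in (37).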

Obviously one has $$U_{m, \epsilon}' = M_{\epsilon }U_{m, \epsilon}$$, $$U'_{m}= M U_{m}$$ and $$W_{m,\epsilon} (a)= U_{m,\epsilon} (a) - U_{m}(a) = (0, \ldots, 0)^{T}$$. In addition,

\begin{aligned} W_{m,\epsilon}' (x)= M_{\epsilon }W_{m,\epsilon} (x) + (M_{\epsilon }-M) U_{m}, \end{aligned}
(40)

that is

\begin{aligned} W_{m,\epsilon} (x)= \int _{a}^{x} M_{\epsilon}(t) W_{m,\epsilon} (t) \,dt + \int _{a}^{x} (M_{\epsilon }-M) (t) U_{m} (t) \,dt. \end{aligned}
(41)

Applying matrix norms to (41) and taking (30) into account gives

\begin{aligned} \bigl\Vert W_{m,\epsilon} (x) \bigr\Vert &\leq \int _{a}^{x} \bigl\Vert M_{\epsilon}(t) \bigr\Vert \bigl\Vert W_{m, \epsilon} (t) \bigr\Vert \,dt + \int _{a}^{x} \bigl\vert a_{K(\alpha ', \beta '), \epsilon}(t) -a_{K(\alpha ', \beta ')}(t) \bigr\vert \bigl\Vert U_{m} (t) \bigr\Vert \,dt \\ &\leq \int _{a}^{x} \bigl\Vert M_{\epsilon}(t) \bigr\Vert \bigl\Vert W_{m,\epsilon} (t) \bigr\Vert \,dt+ \epsilon \int _{a}^{x} \bigl\Vert U_{m} (t) \bigr\Vert \,dt. \end{aligned}
(42)

From Gronwall's inequality [28], one finally gets

\begin{aligned} &\bigl\Vert W_{m,\epsilon} (x) \bigr\Vert \leq \epsilon \int _{a}^{x} \bigl\Vert U_{m} (t) \bigr\Vert \,dt + \epsilon \int _{a}^{x} \bigl\Vert M_{\epsilon}(t) \bigr\Vert \int _{a}^{t} \bigl\Vert U_{m} (s) \bigr\Vert \,ds \exp \biggl( \int _{t}^{x} \bigl\Vert M_{\epsilon }(r) \bigr\Vert \,dr \biggr) \,dt, \\ &\quad x \in [a,b]. \end{aligned}
(43)

The inequality (43) clearly shows that $$u_{m,\epsilon}$$ and its derivatives up to the $$(n-1)$$-th order (indeed up to the n-th order if we consider (31) and the fact that $$u_{m,\epsilon}$$ satisfies $$L_{\epsilon }u_{m,\epsilon}=0$$) tend uniformly to $$u_{m}$$ and its respective derivatives in $$[a,b]$$ when ϵ tends to zero.
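The continuity in ϵ that (43) delivers can be visualized on a scalar model problem. The example below is an illustration under simplifying assumptions (first order, positive sign, $$a(x)=x$$ on $$[0,1]$$), for which the solution of $$u' = a_{\epsilon}(x)u$$, $$u(0)=1$$ is the exponential of $$\int_{0}^{x} a_{\epsilon}$$; the deviation from the unperturbed solution shrinks with ϵ, in line with the ϵ-linear bound (43).

```python
import math

def a(x):
    return x  # model coefficient vanishing at x = 0 (illustrative assumption)

def sol_at_one(eps, cells=10000):
    """u_eps(1) = exp(integral of max(eps, |a|) over [0, 1]) via midpoint sums."""
    h = 1.0 / cells
    integral = sum(max(eps, abs(a((k + 0.5) * h))) * h for k in range(cells))
    return math.exp(integral)

errs = [abs(sol_at_one(eps) - sol_at_one(0.0)) for eps in (0.3, 0.1, 0.01)]
# the perturbation error decays monotonically as eps -> 0
assert errs[0] > errs[1] > errs[2] > 0.0
```

Here the regularized coefficient only differs from the original where $$|a(x)| < \epsilon$$, which is what makes the bound linear in ϵ.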

Next, (32), (33), and (35) imply that $$c_{m, \epsilon}=0$$, for $$m-1 \in \alpha '$$, and that the remaining $$c_{m, \epsilon}$$, $$m-1 \notin \alpha '$$, must satisfy

\begin{aligned} \sum_{m=1, m-1 \notin \alpha '}^{n} c_{m, \epsilon} u_{m, \epsilon}^{( \beta _{j})} (b) =0, \quad \beta _{j} \in \beta '; \qquad\sum_{m=1, m-1 \notin \alpha '}^{n} c_{m, \epsilon} u_{m, \epsilon}^{(K( \alpha ', \beta '))} \bigl(x^{*} \bigr) =1. \end{aligned}
(44)

This is a system of equations whose coefficient matrix determinant is composed of products of $$u_{m, \epsilon}^{(\beta _{j})} (b)$$ and $$u_{m, \epsilon}^{(K(\alpha ', \beta '))} (x^{*})$$ terms, which, in turn, tend to $$u_{m}^{(\beta _{j})} (b)$$ and $$u_{m}^{(K(\alpha ', \beta '))} (x^{*})$$ when ϵ tends to zero, as per the previous discussion. The corresponding determinant with $$u_{m}^{(\beta _{j})} (b)$$ and $$u_{m}^{(K(\alpha ', \beta '))} (x^{*})$$ terms instead does not vanish, since otherwise it would violate (16)–(17), so one can find an ϵ small enough such that the determinant of the system (44) does not vanish either. Therefore, one can apply Cramer’s rule to (44) and deduce that the solutions $$c_{m, \epsilon}$$ tend to the respective $$c_{m}$$ when ϵ tends to zero.
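The stability step used here, namely that a linear system whose matrix converges entrywise while the limit determinant stays nonzero has a solution that converges too, can be sketched with a hypothetical 2 × 2 system standing in for (44):

```python
def solve2(A, b):
    """Cramer's rule for a 2x2 system A c = b (det must be nonzero)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert det != 0.0
    return ((b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

def A_eps(eps):
    """Hypothetical matrix converging entrywise as eps -> 0 to an
    invertible limit, like the coefficient matrix of (44)."""
    return [[1.0 + eps, 2.0], [3.0, 4.0 + eps]]

b = [0.0, 1.0]
c_limit = solve2(A_eps(0.0), b)   # the analogue of the c_m
c_small = solve2(A_eps(1e-8), b)  # the analogue of the c_{m, eps}
assert all(abs(cs - cl) < 1e-6 for cs, cl in zip(c_small, c_limit))
```

Since the determinant is a continuous function of the entries, a nonzero limit determinant keeps the perturbed determinants away from zero for small ϵ, which is exactly what licenses Cramer's rule above.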

From the fact that

\begin{aligned} y^{(j)}_{\epsilon}(x) - y^{(j)} (x) = \sum _{m=1}^{n} (c_{m, \epsilon} -c_{m}) u_{m, \epsilon}^{(j)} + \sum_{m=1}^{n} c_{m} \bigl( u_{m, \epsilon}^{(j)} - u_{m}^{(j)} \bigr), \end{aligned}

one concludes that $$y^{(j)}_{\epsilon}(x)$$ tends to $$y^{(j)}(x)$$ uniformly on $$[a,b]$$, when ϵ tends to zero for $$0 \leq j \leq n$$.

From Theorem 2 and the hypotheses of the present Theorem, it is clear that if $$\epsilon \neq 0$$, then $$y_{\epsilon}$$ satisfies (19)–(20). The continuity of all $$y^{(j)}$$ at $$\epsilon = 0$$ grants (28) and

\begin{aligned} (-1)^{m(\alpha ', j) - m(\alpha ', K(\alpha ',\beta '))} y^{(j)}(x) \geq 0,\quad j = 0, \ldots, K \bigl(\alpha ',\beta ' \bigr), j \in I \bigl(\alpha ', \beta ' \bigr), x \in (a,b), \end{aligned}
(45)

if the signs of the coefficients $$a_{j} (x)$$, $$j \leq K(\alpha ',\beta ')$$, are given by (25). However, as in the previous Theorem, (45) must be strict (that is, (27)). To prove this, note that if (45) were not strict, then, because of Lemma 1 and (7), $$y^{(K(\alpha ',\beta '))}$$ would have a zero on $$[a,b]$$. If that zero were at a or b, then it would violate (16) or (17), respectively. Otherwise, the zero of $$y^{(K(\alpha ',\beta '))}$$ would be in $$(a,b)$$, and it would be a local extreme, so that $$y^{(K(\alpha ',\beta ')+1)}$$ would also vanish there. By Rolle’s theorem, this additional zero of $$y^{(K(\alpha ',\beta ')+1)}$$ in $$(a,b)$$ would force a change of sign of $$y^{(n)}$$ in $$(a,b)$$, violating (28).

Likewise, the continuity of all $$y^{(j)}$$ at $$\epsilon = 0$$ ensures (27) and (29) if the signs of the coefficients $$a_{j} (x)$$, $$j \leq K(\alpha ',\beta ')$$, are given by (26).

Next, if there is a derivative order $$l \leq K(\alpha ',\beta ')$$ such that $$a_{l}(x)$$ does not vanish in $$(a,b)$$, from (1) and the fact that $$l \in I(\alpha ',\beta ')$$ (otherwise, $$a_{l} \equiv 0$$ as per (24)), one has that $$y^{(n)}$$ cannot vanish in $$(a,b)$$ and keeps the same sign in that interval. This prevents zero components and additional zeroes in derivatives of lower order in $$[a,b]$$, so that (28) and (29) are strict. On the contrary, if $$a_{l}(x) \equiv 0$$ for all $$l \leq K(\alpha ',\beta ')$$, then $$y^{(j)}(x) \equiv 0$$, $$K(\alpha ',\beta ') < j \leq n$$, on $$[a,b]$$. The reason is that (5) can be converted into a boundary value problem of order $$n-K(\alpha ',\beta ')$$ with $$n-K(\alpha ',\beta ')$$ boundary conditions just by considering the derivatives $$y^{(j)}$$, $$K(\alpha ',\beta ') < j \leq n$$. That problem has only the trivial solution, which gives $$y^{(K(\alpha ',\beta '))} \equiv 1$$. If there were another solution, then the difference between it and the one given by $$y^{(K(\alpha ',\beta '))} \equiv 1$$ would violate (16)–(17).

Last, given that $$y^{(K(\alpha ',\beta '))}$$ does not vanish in $$[a,b]$$, from Lemma 1 one has $$E_{j}[a,b]=0$$ for $$j \leq K(\alpha ',\beta ')$$ and therefore $$z_{j}[a,b]=Z_{j} \{\alpha ', \beta '\}-j$$ for such derivatives. In addition, one can repeat the same reasoning of Lemma 2 to obtain $$E_{j} [a,b] \geq 1$$ for all $$j > K(\alpha ',\beta ')$$. If $$E_{j} [a,b] >1$$ for some $$j > K(\alpha ',\beta ')$$, from Lemma 1 and (7) one would get $$z_{n-1}[a,b] >1$$, and by Rolle’s theorem there would be a change of sign of $$y^{(n)}$$ in $$(a,b)$$, contradicting (28)–(29). Therefore, $$E_{j} [a,b] =1$$ for all $$j > K(\alpha ',\beta ')$$, and from (7), it follows that $$z_{j}[a,b] = Z_{j} \{ \alpha ', \beta ' \} -j +1$$. This completes the proof. □

### Remark 3

In the previous Theorem, the existence of a single index $$l \leq K(\alpha ',\beta ')$$ such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$ can be replaced by the existence of several indices $$l \leq K(\alpha ',\beta ')$$, $$l \in I(\alpha ',\beta ')$$, such that each $$a_{l}(x)$$ does not vanish in a subinterval of $$[a,b]$$, and the union of all these subintervals covers $$[a,b]$$.

Interestingly, we can also drop one of the conditions (16)–(17) in Theorem 3 without altering its outcome.

### Theorem 4

The conclusions of Theorem 3 are valid for $$(\alpha ',\beta ')=(\alpha \backslash \alpha _{i},\beta )$$, $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(a)=1$$, and hypotheses (24) and (26) if one replaces hypotheses (16)–(17) by only (16). Likewise, they are also valid for $$(\alpha ',\beta ')=(\alpha,\beta \backslash \beta _{i})$$, $$y^{(K(\alpha,\beta \backslash \beta _{i}))}(b)=1$$, and hypotheses (24) and (25) if one replaces hypotheses (16)–(17) by only (17).

### Proof

Let us suppose that $$(\alpha ',\beta ')=(\alpha \backslash \alpha _{i},\beta )$$ and $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(a)=1$$. Let us also suppose that (17) were violated for some $$a',b' \in (a,b)$$ for which (16) holds, while hypotheses (16), (17), (24), and (26) hold for all pairs $$a'',b'$$ such that $$a'< a'' < b'$$. In that case, (27) and (29) must be met for the solution of (5) associated with the extremes $$a'',b'$$. Given that $$K(\alpha \backslash \alpha _{i},\beta ) \notin \alpha$$, one has $$m(\alpha, K(\alpha \backslash \alpha _{i},\beta ))= m(\alpha, K( \alpha \backslash \alpha _{i}, \beta ) +1)+1$$, so that $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(x)$$ and $$y^{(K(\alpha \backslash \alpha _{i},\beta )+1)}(x)$$ must have the same sign in $$(a'',b')$$. From Lemma 3, we know that $$y^{(K(\alpha \backslash \alpha _{i},\beta )+1)}(x)$$ is continuous with respect to $$a''$$ if (16) holds, regardless of (17), that is, even in the case $$a''=a'$$. But, if (17) is violated, then $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(b')=0$$ and $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(x) >0$$ for $$x < b'$$, so $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(x)$$ must be positive and decreasing at least on a subinterval of $$(a'',b')$$, contradicting the fact that $$y^{(K(\alpha \backslash \alpha _{i},\beta ))}(x)$$ and $$y^{(K(\alpha \backslash \alpha _{i},\beta )+1)}(x)$$ must have the same sign in $$(a'',b')$$.

The proof for the case $$(\alpha ',\beta ')=(\alpha,\beta \backslash \beta _{i})$$ is similar and will not be repeated. □

## 3 The sign of the partial derivatives of the Green function

In this section, we will apply the results of the preceding section to determine the signs of the partial derivatives of $$G(x,t)$$ with respect to x. As in the previous section, we will assume that $$\{ \alpha, \beta \}$$ are poised. We will also assume that the boundary value problem (2) does not have a nontrivial solution, as this is a necessary condition for the existence of $$G(x,t)$$.

To start with, the next Lemma (see [2, Lemma 2]) assesses the dependence of $$G(x,t)$$ and its partial derivatives with respect to the extremes a and b. Note that by definition, $$\frac{ \partial G}{ \partial a}$$ and $$\frac{ \partial G}{ \partial b}$$ have all derivatives up to the n-th order continuous for $$x \in [a,b]$$.

### Lemma 4

For fixed $$t \in [a,b]$$, $$\frac{ \partial G(x,t) }{ \partial b}$$ is the solution of the problem

\begin{aligned} &L \frac{ \partial G }{ \partial b}=0,\quad x \in (a,b); \\ &\frac{\partial ^{j} }{\partial x^{j}} \frac{ \partial G(a,t) }{ \partial b }=0,\quad j \in \alpha; \\ &\frac{ \partial ^{j}}{\partial x^{j}} \frac{ \partial G(b,t) }{ \partial b}= - \frac{ \partial ^{j+1} G(b,t) }{ \partial x^{j+1}},\quad j \in \beta. \end{aligned}
(46)

Likewise, $$\frac{ \partial G(x,t) }{ \partial a}$$ is the solution of the problem

\begin{aligned} \begin{aligned} &L \frac{ \partial G }{ \partial a}=0,\quad x \in (a,b); \\ & \frac{\partial ^{j} }{\partial x^{j}} \frac{ \partial G(a,t) }{ \partial a }= - \frac{ \partial ^{j+1} G(a,t) }{ \partial x^{j+1}},\quad j \in \alpha; \\ &\frac{ \partial ^{j}}{\partial x^{j}} \frac{ \partial G(b,t) }{ \partial a}= 0,\quad j \in \beta. \end{aligned} \end{aligned}
(47)

The lack of nontrivial solutions of (2) allows decomposing $$\frac{ \partial G }{ \partial b}$$ as

\begin{aligned} \frac{ \partial G(x,t) }{ \partial b} = \sum_{i=1}^{n-k} h_{\beta _{i}} (x,t), \end{aligned}
(48)

where, for fixed $$t \in [a,b]$$, $$h_{\beta _{i}} (x,t)$$ is the solution of

\begin{aligned} \begin{aligned} &L h_{\beta _{i}}=0,\quad x \in (a,b); \qquad h_{\beta _{i}}^{(j)} (a,t) =0,\quad j \in \alpha; \\ & h_{\beta _{i}}^{(j)} (b,t) = 0,\quad j \in \beta \backslash \beta _{i};\qquad h_{\beta _{i}}^{(\beta _{i})} (b,t)= - \frac{ \partial ^{\beta _{i}+1} G(b,t) }{ \partial x^{\beta _{i}+1}}. \end{aligned} \end{aligned}
(49)

Note that if $$\beta _{i}+1 \in \beta$$ then $$h_{\beta _{i}} (x,t) \equiv 0$$. That implies that we only need to take into account those $$\beta _{i}$$ such that $$\beta _{i}+1 \notin \beta$$. Similarly,

\begin{aligned} \frac{ \partial G(x,t) }{ \partial a} = \sum_{i=1}^{k} g_{\alpha _{i}} (x,t), \end{aligned}
(50)

where, for fixed $$t \in [a,b]$$, $$g_{\alpha _{i}} (x,t)$$ is the solution of

\begin{aligned} \begin{aligned} &L g_{\alpha _{i}}=0,\quad x \in (a,b);\qquad g_{\alpha _{i}}^{(j)} (b,t) = 0, \quad j \in \beta; \\ &g_{\alpha _{i}}^{(j)} (a,t) =0,\quad j \in \alpha \backslash \alpha _{i};\qquad g_{\alpha _{i}}^{(\alpha _{i})} (a,t) = - \frac{ \partial ^{\alpha _{i}+1} G(a,t) }{ \partial x^{\alpha _{i}+1}}. \end{aligned} \end{aligned}
(51)

As before, if $$\alpha _{i}+1 \in \alpha$$, then $$g_{\alpha _{i}} (x,t) \equiv 0$$, so that we only need to take into account those $$\alpha _{i}$$ such that $$\alpha _{i}+1 \notin \alpha$$.

The advantage of the aforementioned decomposition is that each of the problems (49) and (51) has the same structure as problem (5). If we manage to find conditions under which, for a fixed derivative order j, all $$h_{\beta _{i}}^{(j)}$$ have the same sign, then that sign will coincide with the sign of $$\frac{ \partial ^{j} }{ \partial x^{j}} \frac{ \partial G}{\partial b}$$ (a similar reasoning can be made with $$g_{\alpha _{i}}^{(j)} (x,t)$$ and $$\frac{ \partial ^{j} }{ \partial x^{j}} \frac{ \partial G}{\partial a}$$). The next lemmas will explore this.

### Lemma 5

Given $$\beta _{i} \in \beta$$ with $$\beta _{i}+1 \notin \beta$$, let us assume that

\begin{aligned} (-1)^{n(\beta,\beta _{i}+1)} \frac{ \partial ^{\beta _{i}+1} G(b,t) }{ \partial x^{\beta _{i}+1}} > 0. \end{aligned}
(52)

Let us also assume that for $$x \in [a,b]$$

\begin{aligned} \begin{aligned}& a_{j}(x) \equiv 0, \quad j \notin I(\alpha, \beta \backslash \beta _{i}); \\ &(-1)^{m(\alpha,j)} a_{j}(x) \leq 0, \quad j \in I(\alpha, \beta \backslash \beta _{i}), \end{aligned} \end{aligned}
(53)

and that (17) holds for $$(\alpha ',\beta ')=(\alpha, \beta \backslash \beta _{i})$$. Then, for $$j \in I(\alpha, \beta \backslash \beta _{i})$$, one has

\begin{aligned} (-1)^{m(\alpha,j)} h_{\beta _{i}}^{(j)} (x,t) > 0,\quad 0 \leq j \leq K(\alpha, \beta \backslash \beta _{i}), a < x < b, \end{aligned}
(54)

and

\begin{aligned} (-1)^{m(\alpha,j)} h_{\beta _{i}}^{(j)} (x,t) \geq 0,\quad K(\alpha, \beta \backslash \beta _{i}) < j \leq n-1, a < x < b. \end{aligned}
(55)

If there exists an index $$l \leq K(\alpha, \beta \backslash \beta _{i})$$ such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$, then the inequality (55) is strict.

### Proof

In this case, $$\{\alpha ', \beta ' \}= \{\alpha, \beta \backslash \beta _{i} \}$$. Given that $$\alpha '=\alpha$$, it is clear that $$m(\alpha ',j)=m(\alpha,j)$$, so that the conditions (17) and (24)–(25) of Theorems 3 and 4 hold for the set $$\{ \alpha,\beta \backslash \beta _{i} \}$$, with the exception of the normalization $$h_{\beta _{i}}^{(K(\alpha, \beta \backslash \beta _{i}))} (b,t) =1$$. The linearity of the problem, however, implies that the signs of the derivatives $$h_{\beta _{i}}^{(j)} (x,t)$$ will be given by the product of the sign of $$h_{\beta _{i}}^{(K(\alpha, \beta \backslash \beta _{i}))} (b,t)$$ and those of (27)–(28).

Indeed, from (49) and (52), one has that $$(-1)^{n(\beta,\beta _{i}+1)} h_{\beta _{i}}^{(\beta _{i})} (b,t) < 0$$. Following an argument similar to that of Theorem 2 on the changes of sign of $$y^{(i)}(x)$$ and $$y^{(i+1)} (x)$$ for $$x \in (b-\delta,b)$$ only happening when $$y^{(i)}(b)=0$$, it is straightforward to show that $$(-1)^{n(\beta,K (\alpha, \beta \backslash \beta _{i}))} h_{\beta _{i}}^{(K (\alpha, \beta \backslash \beta _{i}))} (b,t) < 0$$. Next, the definition of $$K (\alpha, \beta \backslash \beta _{i})$$ implies that $$Z_{K (\alpha, \beta \backslash \beta _{i})} \{\alpha, \beta \backslash \beta _{i} \} = K (\alpha, \beta \backslash \beta _{i})$$. This means that the sum of the vanishing boundary conditions at a and b for derivatives of order higher than $$K (\alpha, \beta \backslash \beta _{i})$$ is $$n-1- K (\alpha, \beta \backslash \beta _{i})$$. Since the number of derivative orders between $$K (\alpha, \beta \backslash \beta _{i})$$ (not included) and $$n-1$$ (included) is also $$n-1- K (\alpha, \beta \backslash \beta _{i})$$, this gives $$n(\beta,K (\alpha, \beta \backslash \beta _{i}))=m(\alpha, K ( \alpha, \beta \backslash \beta _{i})+1)$$. As $$K (\alpha, \beta \backslash \beta _{i}) \notin \alpha$$, $$m(\alpha, K (\alpha, \beta \backslash \beta _{i})) = m(\alpha, K ( \alpha, \beta \backslash \beta _{i})+1)+1$$, and one concludes that $$(-1)^{m(\alpha, K (\alpha, \beta \backslash \beta _{i}))} h_{ \beta _{i}}^{(K (\alpha, \beta \backslash \beta _{i}))} (b,t) > 0$$. Combining this result with (27)–(28), one obtains (54) and (55). As in Theorem 3, if there exists an index $$l \leq K(\alpha, \beta \backslash \beta _{i})$$ such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$, then inequality (55) is strict. □

In a similar manner one can prove the following Lemma for $$g_{\alpha _{i}}(x,t)$$.

### Lemma 6

Given $$\alpha _{i} \in \alpha$$ with $$\alpha _{i}+1 \notin \alpha$$, let us assume that

\begin{aligned} (-1)^{m(\alpha,\alpha _{i}+1)} \frac{ \partial ^{\alpha _{i}+1} G(a,t) }{ \partial x^{\alpha _{i}+1}} > 0. \end{aligned}
(56)

Let us also assume that for $$x \in [a,b]$$

\begin{aligned} \begin{aligned}& a_{j}(x) \equiv 0, \quad j \notin I(\alpha \backslash \alpha _{i}, \beta ), \textit{ or } \alpha _{i} < j \leq K(\alpha \backslash \alpha _{i}, \beta ), \\ &(-1)^{m(\alpha,j)} a_{j}(x) \leq 0, \quad j \in I(\alpha \backslash \alpha _{i}, \beta ), \end{aligned} \end{aligned}
(57)

and that (16) holds for $$(\alpha ',\beta ')=(\alpha \backslash \alpha _{i}, \beta )$$. Then, for $$j \in I(\alpha \backslash \alpha _{i}, \beta )$$, one has

\begin{aligned} & (-1)^{m(\alpha,j)} g_{\alpha _{i}}^{(j)} (x,t) < 0,\quad 0 \leq j \leq \alpha _{i}, a < x < b, \end{aligned}
(58)
\begin{aligned} & (-1)^{m(\alpha,j)} g_{\alpha _{i}}^{(j)} (x,t) > 0,\quad \alpha _{i} < j \leq K(\alpha \backslash \alpha _{i}, \beta ), a < x < b, \end{aligned}
(59)

and

\begin{aligned} (-1)^{m(\alpha,j)} g_{\alpha _{i}}^{(j)} (x,t) \leq 0,\quad K( \alpha \backslash \alpha _{i}, \beta ) < j \leq n-1, a < x < b. \end{aligned}
(60)

If there exists an index $$l \leq K(\alpha \backslash \alpha _{i}, \beta )$$ such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$, then inequality (60) is strict.

### Proof

It can be proved similarly to the previous one using (26), (27), and (29) in Theorems 3 and 4 and noting that $$(-1)^{m(\alpha \backslash \alpha _{i},K(\alpha \backslash \alpha _{i}, \beta ))} g_{\alpha _{i}}^{(K(\alpha \backslash \alpha _{i}, \beta ))} (a,t) > 0$$. □

Next, we will prove a short lemma that will be used in later calculations.

### Lemma 7

If $$j \in H (\alpha,\beta )$$ and $$j \notin \beta$$, then $$n(\beta,j)=m(\alpha,j)$$.

### Proof

From the definition of $$H(\alpha,\beta )$$, it follows that if $$j \in H(\alpha,\beta )$$, then $$Z_{j-1} \{\alpha,\beta \}=j$$, so that the number of boundary conditions set at a and b in derivatives of order higher than $$j-1$$ is $$n-j$$. In consequence, $$n(\alpha,j-1) + n(\beta,j-1)=n-j$$. On the other hand, $$n(\alpha,j-1) + m(\alpha,j)=n-1-(j-1)=n-j$$ also, so that $$m(\alpha,j) = n(\beta,j-1)$$. Since $$j \notin \beta$$, then $$n(\beta,j-1)=n(\beta,j)$$, and one concludes that $$m(\alpha,j) = n(\beta,j)$$. □
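The counting identity of Lemma 7 can be checked mechanically. In the snippet below, the counting functions follow the conventions used in the proof above ($$Z_{j} \{\alpha,\beta \}$$ counts conditions of order at most j, $$m(\alpha,j)$$ counts the orders in $$\{j, \ldots, n-1\}$$ absent from α, $$n(\beta,j)$$ counts the elements of β exceeding j, and $$H(\alpha,\beta ) = \{ j : Z_{j-1} \{\alpha,\beta \} = j \}$$); the right focal sets for $$n=4$$ are merely an illustrative choice.

```python
def Z(alpha, beta, j):
    """Number of boundary conditions set on derivatives of order <= j."""
    return sum(1 for i in alpha if i <= j) + sum(1 for i in beta if i <= j)

def m_count(alpha, j, n):
    """Orders in {j, ..., n-1} that do not belong to alpha."""
    return sum(1 for i in range(j, n) if i not in alpha)

def n_count(beta, j):
    """Elements of beta strictly greater than j."""
    return sum(1 for i in beta if i > j)

# Right focal boundary conditions for n = 4: alpha = {0, 1}, beta = {2, 3}
n, alpha, beta = 4, {0, 1}, {2, 3}
H = [j for j in range(1, n) if Z(alpha, beta, j - 1) == j]
assert H == [1, 2, 3]
for j in H:
    if j not in beta:  # the hypothesis of Lemma 7
        assert n_count(beta, j) == m_count(alpha, j, n)
```

Running the same loop over other poised pairs $$\{\alpha, \beta \}$$ reproduces the identity $$n(\beta,j)=m(\alpha,j)$$ whenever $$j \in H(\alpha,\beta )$$ and $$j \notin \beta$$.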

Lemmas 5 and 6 suggest an alignment of signs between the functions $$h_{\beta _{i}}^{(j)} (x,t)$$ and $$g_{\alpha _{i}}^{(j)} (x,t)$$. This alignment is confirmed by the next Lemma.

### Lemma 8

Let us assume that

\begin{aligned} (-1)^{n(\beta,j)} \frac{ \partial ^{j} G(b,t) }{ \partial x^{j}} > 0,\quad 0 \leq j \leq n-1, j \notin \beta, \end{aligned}
(61)

and

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j} G(a,t) }{ \partial x^{j}} > 0, \quad1 \leq j \leq n-1, j \notin \alpha. \end{aligned}
(62)

Let us also assume that for $$x \in [a,b]$$

\begin{aligned} \begin{aligned}& (-1)^{m(\alpha,j)} a_{j}(x) \leq 0, \quad j \in H(\alpha,\beta ), \\ &a_{j} (x) \equiv 0, \quad j \notin H(\alpha,\beta ), \end{aligned} \end{aligned}
(63)

that there is an index $$l \in H(\alpha, \beta )$$, with $$l < \min (\alpha ^{*}, \beta ^{*})$$, such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$, and that (2) and the following boundary value problems

\begin{aligned} \begin{aligned} &Lw =0,\quad x \in \bigl(a',b' \bigr);\qquad w^{(j)} \bigl(a' \bigr)=0,\quad j \in \alpha;\\ & w^{(j)} \bigl(b' \bigr)=0,\quad j \in \{ \beta \backslash \beta _{i} \} \cup \bigl\{ K(\alpha,\beta \backslash \beta _{i}) \bigr\} , \end{aligned} \end{aligned}
(64)

for $$1 \leq i \leq n-k$$ and $$\beta _{i}+1 \notin \beta$$, and

\begin{aligned} \begin{aligned}& Lv =0,\quad x \in \bigl(a',b' \bigr);\qquad v^{(j)} \bigl(a' \bigr)=0,\quad j \in \{ \alpha \backslash \alpha _{i} \} \cup \bigl\{ K(\alpha \backslash \alpha _{i}, \beta ) \bigr\} ; \\ & v^{(j)} \bigl(b' \bigr)=0, \quad j \in \beta, \end{aligned} \end{aligned}
(65)

for $$1 \leq i \leq k$$ and $$\alpha _{i}+1 \notin \alpha$$, do not have solutions other than the trivial one for any $$a',b' \in [a,b]$$. Then

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j}}{ \partial x^{j}} \frac{ \partial G(x,t) }{ \partial b } > 0,\quad j \in H (\alpha, \beta ), a < x < b, \end{aligned}
(66)

and

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j}}{ \partial x^{j}} \frac{ \partial G(x,t) }{ \partial a } < 0,\quad j \in H (\alpha, \beta ), a < x < b. \end{aligned}
(67)

### Proof

We will prove first that if $$j \in H (\alpha,\beta )$$, then $$j \in I(\alpha, \beta \backslash \beta _{i})$$, $$1 \leq i \leq n-k$$. Indeed, the definition of $$H(\alpha,\beta )$$ tells us that any index j belonging to that set must satisfy

\begin{aligned} Z_{j-1} \{ \alpha,\beta \}=j. \end{aligned}

If $$j \leq \beta _{i}$$, then $$Z_{j-1} \{ \alpha, \beta \backslash \beta _{i} \}= j$$, and therefore j satisfies (13), so that $$j \in I(\alpha, \beta \backslash \beta _{i})$$. Likewise, if $$j > K(\alpha, \beta \backslash \beta _{i})$$, then $$Z_{j-1} \{ \alpha, \beta \backslash \beta _{i} \}= j-1$$, and j satisfies (14). Therefore, $$j \in I(\alpha, \beta \backslash \beta _{i})$$ also. There cannot be $$j \in H(\alpha, \beta )$$ such that $$\beta _{i} < j \leq K(\alpha, \beta \backslash \beta _{i})$$, as otherwise, $$Z_{j-1} \{ \alpha, \beta \backslash \beta _{i} \}= j-1$$, which contradicts the definition of $$K( \alpha, \beta \backslash \beta _{i} )$$. In the same way, one can prove that if $$j \in H (\alpha,\beta )$$, then $$j \in I(\alpha \backslash \alpha _{i}, \beta )$$. As a result, the hypothesis (63) implies conditions (53) and (57) for each set $$\{\alpha, \beta \backslash \beta _{i} \}$$, $$1 \leq i \leq n-k$$, and $$\{ \alpha \backslash \alpha _{i}, \beta \}$$, $$1 \leq i \leq k$$, respectively. From here, (61)–(62), and (64)–(65), it is obvious that most of the conditions of Lemmas 5 and 6 are met for each set $$\{\alpha, \beta \backslash \beta _{i} \}$$, $$1 \leq i \leq n-k$$, and $$\{ \alpha \backslash \alpha _{i}, \beta \}$$, $$1 \leq i \leq k$$, respectively.

If $$\max (\alpha _{k}, \beta _{n-k}) < n-1$$, then (61) and (62) grant that (52) and (56) (that is, the remaining conditions of Lemmas 5 and 6) are also met. The definition of $$\beta ^{*}$$ in this case and the fact that there exists an index $$l \in H(\alpha, \beta )$$, with $$l < \min (\alpha ^{*}, \beta ^{*})$$, such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$, imply that inequalities (55) and (60) are strict for $$h_{\beta _{n-k}}^{(j)}(x,t)$$ and $$g_{\alpha _{k}}^{(j)}(x,t)$$, respectively.

Otherwise, let us assume $$\beta _{n-k}=n-1$$. From (1), Lemma 7, (61), and (63), one has that $$\frac{ \partial ^{n} G (b,t)}{\partial x^{n}} \geq 0$$. If the index $$l \notin \beta$$, then $$\frac{ \partial ^{n} G (b,t)}{\partial x^{n}} > 0$$, the remaining condition (52) for $$\beta _{n-k}$$ is met, and the inequality (54) for $$h_{\beta _{n-k}}(x,t)$$, which is strict, will apply to all derivative orders $$j \in H(\alpha,\beta )$$. On the contrary, if the index $$l \in \beta$$, then $$\frac{ \partial ^{n} G (b,t)}{\partial x^{n}}$$ may be zero depending on the values of $$a_{j}(b)$$, but even so (in which case $$h_{\beta _{n-k}}(x,t) \equiv 0$$), there must be another $$\beta _{m}$$ such that $$a_{\beta _{m}}(x)$$ does not vanish in $$[a,b]$$, and inequality (55) for $$h_{\beta _{m}}(x,t)$$ will be strict and will apply to all derivative orders $$j \in H(\alpha,\beta )$$. Thus, in both cases we have strict inequalities for some $$h_{\beta _{i}}^{(j)} (x,t)$$ and all $$j \in H(\alpha,\beta )$$.

A similar result can be obtained for $$g_{\alpha _{i}}^{(j)} (x,t)$$ if $$\alpha _{k}=n-1$$.

Applying Lemma 5 to the decomposition (48), taking into account that at least for one $$h_{\beta _{i}}$$ the inequalities are strict for all $$j \in H(\alpha,\beta )$$, one gets (66). Likewise, applying Lemma 6 to the decomposition (50) and noting as before that no $$j \in H (\alpha,\beta )$$ can meet $$\alpha _{i} < j \leq K(\alpha \backslash \alpha _{i}, \beta )$$, $$1 \leq i \leq k$$, one obtains (67). □

By Lemma 8 and following an argument analogous to that used in [2, Theorem 6], one can finally determine the sought signs.

### Theorem 5

Let us assume that the hypotheses (63)–(65) hold and that there is an index $$l \in H(\alpha, \beta )$$, with $$l < \min (\alpha ^{*}, \beta ^{*})$$, such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$. Then, one has

\begin{aligned} &(-1)^{m(\alpha,j)} \frac{ \partial ^{j} G(a,t) }{ \partial x^{j}} > 0,\quad 0 \leq j \leq n-1, j \notin \alpha, \end{aligned}
(68)
\begin{aligned} &(-1)^{n(\beta,j)} \frac{ \partial ^{j} G(b,t) }{ \partial x^{j}} > 0,\quad 0 \leq j \leq n-1, j \notin \beta, \end{aligned}
(69)

and

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j} G(x,t) }{ \partial x^{j}} > 0,\quad j \in H(\alpha, \beta ), a < x < b. \end{aligned}
(70)

### Proof

We will first address the case where $$Ly=0$$ is disfocal on $$[a,b]$$, dividing the proof into two subcases: $$x >t$$ and $$x < t$$.

Let us suppose first that $$x > t$$. We can write

\begin{aligned} \frac{ \partial ^{j} G_{ab} (x,t) }{\partial x^{j} } = \frac{ \partial ^{j} G_{ax} (x,t) }{\partial x^{j} } + \int _{x}^{b} \frac{ \partial ^{j} }{\partial x^{j} } \frac{ \partial G_{as} (x,t) }{\partial s } \,ds,\quad a \leq t < x \leq b. \end{aligned}
(71)

From the boundary conditions of (2), one has that $$\frac{ \partial ^{j} G_{ax} (x,t) }{\partial x^{j} }=0$$ for $$j \in \beta$$. Analogously, from [2, Theorem 3] and the disfocality of $$Ly=0$$ on $$[a,x] \subset [a,b]$$, one has that

\begin{aligned} (-1)^{n(\beta,j)} \frac{ \partial ^{j} G_{ax} (x,t) }{\partial x^{j} } >0, \quad j \notin \beta, j < n, \end{aligned}
(72)

which from Lemma 7, for $$j \in H(\alpha,\beta )$$, is equivalent to

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j} G_{ax} (x,t) }{\partial x^{j} } >0,\quad j \notin \beta, j < n. \end{aligned}
(73)

Inequality (72) ensures condition (61) of Lemma 8. All in all, from (66), (71), and (73) one obtains (70).
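The decomposition (71) is essentially the fundamental theorem of calculus applied to $$G_{as}(x,t)$$ viewed as a function of the right endpoint s. As an illustrative sanity check (not part of the proof), the identity for $$j=0$$ can be verified numerically in the simplest conjugate case $$n=2$$, $$Ly=y''$$, whose Green function on $$[a,s]$$ (with $$y(a)=y(s)=0$$) has the classical closed form used below; the sample point, the finite-difference step, and the quadrature grid are arbitrary illustrative choices:

```python
def G(a, b, x, t):
    """Green function of y'' = f, y(a) = 0, y(b) = 0 (classical closed form)."""
    if x <= t:
        return (x - a) * (t - b) / (b - a)
    return (t - a) * (x - b) / (b - a)

def dG_ds(a, s, x, t, h=1e-6):
    """Central finite difference of G_{as}(x, t) with respect to the endpoint s."""
    return (G(a, s + h, x, t) - G(a, s - h, x, t)) / (2 * h)

def simpson(f, lo, hi, m=2000):
    """Composite Simpson rule with m (even) subintervals."""
    step = (hi - lo) / m
    acc = f(lo) + f(hi)
    for i in range(1, m):
        acc += (4 if i % 2 else 2) * f(lo + i * step)
    return acc * step / 3

# Sample point with a <= t < x <= b, as required in (71)
a, b, x, t = 0.0, 1.0, 0.7, 0.3
lhs = G(a, b, x, t)                                       # G_{ab}(x, t)
rhs = G(a, x, x, t) + simpson(lambda s: dG_ds(a, s, x, t), x, b)
assert abs(lhs - rhs) < 1e-6                              # both sides of (71) agree
```

Differentiating the closed form in x yields the analogous check for $$j=1$$; the same bookkeeping extends to the decomposition (74) with the roles of the endpoints exchanged.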

Let us suppose now that $$x < t$$. As before, one can write

\begin{aligned} \frac{ \partial ^{j} G_{ab} (x,t) }{\partial x^{j} } = \frac{ \partial ^{j} G_{xb} (x,t) }{\partial x^{j} } - \int _{a}^{x} \frac{ \partial ^{j} }{\partial x^{j} } \frac{ \partial G_{sb} (x,t) }{\partial s } \,ds,\quad a \leq x < t \leq b. \end{aligned}
(74)

Again from the boundary conditions α, [2, Theorem 3], and the disfocality of $$Ly=0$$ on $$[x,b] \subset [a,b]$$, one has that

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j} G_{xb} (x,t) }{\partial x^{j} } >0,\qquad j \notin \alpha;\qquad \frac{ \partial ^{j} G_{xb} (x,t) }{\partial x^{j} } =0, \quad j \in \alpha. \end{aligned}
(75)

From (67), (74), and (75), one obtains (70).

In conclusion, one has that (68)–(70) are valid when $$Ly=0$$ is disfocal on $$[a,b]$$. Let c be the supremum of the values of b that preserve that disfocality, let us fix a and $$t \in (a,c)$$, and let us increase b beyond c. Following an argument similar to the one above and taking into account that

\begin{aligned} \frac{ \partial ^{j} G_{ab} (x,t) }{\partial x^{j} } = \frac{ \partial ^{j} G_{ax} (x,t) }{\partial x^{j} } + \int _{x}^{b} \frac{ \partial ^{j} }{\partial x^{j} } \frac{ \partial G_{as} (x,t) }{\partial s } \,ds,\quad a \leq t < x \leq b, \end{aligned}

and (66), one can show that if the sign of $$\frac{ \partial ^{j} G_{ab'}(b',t) }{ \partial x^{j}}$$ complies with (69) for any $$b' \in [c,b]$$, then (70) will hold for $$x \in [a,b]$$. But given that $$\frac{ \partial ^{j} G_{ab'} (x,t) }{ \partial x^{j}}$$ is continuous with respect to $$b'$$ (in fact, it is differentiable as per Lemma 4), for $$\frac{ \partial ^{j} G_{ab'}(b',t) }{ \partial x^{j}}$$ to change sign for some $$j \notin \beta$$ as $$b'$$ grows to b, it must vanish at some $$b'$$. Let $$b^{*}$$ be the lowest value of $$b'$$ for which that happens for a $$j \notin \beta$$. Following the same reasoning as in Lemma 1, the poisedness of $$\{\alpha, \beta \}$$ and Rolle's theorem imply that $$\frac{ \partial ^{n-1} G(x,t) }{ \partial x^{n-1}}$$ must have either a zero at a or $$b^{*}$$ (if defined by the boundary conditions $$\{ \alpha, \beta \}$$) and a change of sign in $$(a,b^{*})$$, or two changes of sign in $$(a,b^{*})$$. Since $$\frac{ \partial ^{n-1} G(x,t) }{ \partial x^{n-1}}$$ has only one discontinuity point, at $$x=t$$, with a positive jump, $$\frac{ \partial ^{n} G(x,t) }{ \partial x^{n}}$$ must be negative in an interval of nonzero measure within $$(a,b^{*})$$. But this is not possible due to (63), the continuity of $$\frac{ \partial ^{n} G_{ab'}(x,t) }{ \partial x^{n}}$$ with respect to $$b'$$ at $$b'=b^{*}$$, and the fact that all $$\frac{ \partial ^{j} G_{ab'} (x,t) }{ \partial x^{j}}$$, $$j \in H(\alpha,\beta )$$, have their signs for $$x \in [a,b']$$ given by (70) for $$b' < b^{*}$$. Therefore, such a new zero or change of sign of $$\frac{ \partial ^{j} G_{ab'}(b',t) }{ \partial x^{j}}$$ cannot appear at any $$b'=b^{*} \leq b$$, and the signs of $$\frac{ \partial ^{j} G(x,t) }{ \partial x^{j}}$$ for $$x \in [a,b]$$, $$0 \leq j \leq n-1$$, must be given by (68)–(70).

The reasoning can be repeated by decreasing a, which makes the placement of $$t \in [a,b]$$ irrelevant, given that we could fix t first and build the previous arguments in the same way. This completes the proof. □
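As a minimal illustration of the sign conclusions (68)–(70), consider the simplest conjugate case $$n=2$$, $$\alpha =\{0\}$$, $$\beta =\{0\}$$, with $$Ly=y''$$ on $$[0,1]$$: the classical Green function is strictly negative in the interior, and the non-prescribed first derivatives at the endpoints carry strict, opposite signs, as the alternating factors in (68)–(69) suggest (matching the exponents $$m(\alpha,j)$$ and $$n(\beta,j)$$ exactly requires their definitions from [2]). A numerical sketch, with an arbitrary illustrative grid:

```python
def G(x, t, a=0.0, b=1.0):
    """Green function of y'' = f, y(a) = 0, y(b) = 0 (classical closed form)."""
    if x <= t:
        return (x - a) * (t - b) / (b - a)
    return (t - a) * (x - b) / (b - a)

def fwd_dGdx(x, t, h=1e-6):
    """Forward finite difference in x (usable at the left endpoint)."""
    return (G(x + h, t) - G(x, t)) / h

def bwd_dGdx(x, t, h=1e-6):
    """Backward finite difference in x (usable at the right endpoint)."""
    return (G(x, t) - G(x - h, t)) / h

grid = [i / 20 for i in range(1, 20)]        # interior points of (0, 1)

# (70) for j = 0: G keeps one strict sign (negative here) for a < x < b.
assert all(G(x, t) < 0 for x in grid for t in grid)

# (68)-(69) for j = 1: the non-prescribed endpoint derivatives have strict,
# opposite signs: G_x(a, t) < 0 < G_x(b, t) for every t in (a, b).
assert all(fwd_dGdx(0.0, t) < 0 for t in grid)
assert all(bwd_dGdx(1.0, t) > 0 for t in grid)
```

The alternation between the two endpoint inequalities mirrors the change from $$m(\alpha,j)$$ in (68) to $$n(\beta,j)$$ in (69).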

### Remark 4

Theorem 5 improves [2, Theorem 6], as it significantly increases the number of partial derivatives $$\frac{ \partial ^{j} G(x,t) }{ \partial x^{j}}$$ for which a sign can be provided.

Finally, for the strongly poised case, one has the following result:

### Theorem 6

Let us assume that for $$x \in [a,b]$$

\begin{aligned} (-1)^{m(\alpha,j)} a_{j}(x) \leq 0,\quad 0 \leq j \leq n-1, \end{aligned}
(76)

and there is an index $$l < \min (\alpha ^{*}, \beta ^{*})$$ such that $$a_{l}(x)$$ does not vanish in $$[a,b]$$. Let us also assume that (2) has no solutions other than the trivial one for any extremes $$a',b' \in [a,b]$$. Then

\begin{aligned} &(-1)^{m(\alpha,j)} \frac{ \partial ^{j} G(a,t) }{ \partial x^{j}} > 0,\quad 0 \leq j \leq n-1, j \notin \alpha, \end{aligned}
(77)
\begin{aligned} &(-1)^{m(\alpha,j)} \frac{ \partial ^{j} G(b,t) }{ \partial x^{j}} > 0,\quad 0 \leq j \leq n-1, j \notin \beta, \end{aligned}
(78)

and

\begin{aligned} (-1)^{m(\alpha,j)} \frac{ \partial ^{j} G(x,t) }{ \partial x^{j}} > 0,\quad 0 \leq j \leq n-1, a < x < b. \end{aligned}
(79)

### Proof

The proof follows immediately from Theorem 5 on noting that in the strongly poised case all derivative orders j, $$0 \leq j \leq n-1$$, belong to $$H(\alpha,\beta )$$, $$\beta _{i} = K(\alpha, \beta \backslash \beta _{i})$$, $$1 \leq i \leq n-k$$, and $$\alpha _{i} = K (\alpha \backslash \alpha _{i}, \beta )$$, $$1 \leq i \leq k$$. □
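A toy illustration of the strongly poised case: for $$n=2$$, the right focal conditions $$y(0)=y'(1)=0$$ are strongly poised, and the classical Green function of $$y''=f$$ under them is $$G(x,t)=-\min (x,t)$$. Both $$G$$ and $$\partial G / \partial x$$ keep one sign throughout $$(0,1)$$, in line with (79); note, however, that this operator has all $$a_{j} \equiv 0$$, so the nonvanishing-coefficient hypothesis of Theorem 6 fails and, accordingly, $$\partial G / \partial x$$ is only weakly of one sign (it vanishes for $$x>t$$). A sketch with an arbitrary grid:

```python
def G(x, t):
    """Green function of y'' = f, y(0) = 0, y'(1) = 0 (right focal, classical)."""
    return -min(x, t)

def dGdx(x, t, h=1e-6):
    """Forward finite difference in x: G_x = -1 for x < t and 0 for x > t."""
    return (G(x + h, t) - G(x, t)) / h

grid = [i / 20 for i in range(1, 20)]        # interior points of (0, 1)

# Both derivative orders j = 0, 1 keep one sign on (0, 1) for every fixed t,
# as in (79); the sign for j = 1 is only weak because all a_j vanish here.
assert all(G(x, t) < 0 for x in grid for t in grid)
assert all(dGdx(x, t) <= 1e-9 for x in grid for t in grid)
```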

### Remark 5

Theorem 6 improves [2, Theorem 7], as it holds for any combination of strongly poised boundary conditions.

## 4 Conclusions

This paper has presented conditions that permit identifying the partial derivatives of the Green function of (2) that have a constant sign on $$(a,b)$$, and it has also provided those signs, extending the results of [2, Theorems 6 and 7] and removing some of the limitations of that paper. This information is relevant for understanding the properties of solutions of problems of the type (3) when f does not change sign, and it also allows many results of cone theory to be extended to problems like (4). The paper has also introduced the concept of hyperdisfocality, which can become a very useful tool to assess the zeroes of the derivatives of solutions of boundary value problems in general, and it has provided signs for the derivatives of the solutions of boundary value problems with $$n-1$$ boundary conditions. All these findings are new, as far as the authors are aware.

The main limitations of the results presented here are related to the sign requirements on the coefficients $$a_{i}(x)$$ stated in Theorems 5 and 6, which are needed to guarantee a constant sign of the n-th partial derivative. However, as indicated in Remark 2, other mechanisms that guaranteed such a constant sign would also work. This is a possible area for future research.


## References

1. Elias, U.: Oscillation Theory of Two-Term Differential Equations. Kluwer, Dordrecht (1997)

2. Almenar, P., Jódar, L.: The sign of the Green function of an nth order linear boundary value problem. Mathematics 8(5), 673 (2020). https://doi.org/10.3390/math8050673

3. Coppel, W.A.: Disconjugacy. Springer, Berlin (1971)

4. Eloe, P.W., Hankerson, D., Henderson, J.: Positive solutions and conjugate points for multipoint boundary value problems. J. Differ. Equ. 95, 20–32 (1992)

5. Eloe, P.W., Henderson, J.: Focal point characterizations and comparisons for right focal differential operators. J. Math. Anal. Appl. 181, 22–34 (1994)

6. Webb, J.R.L.: Estimates of eigenvalues of linear operators associated with nonlinear boundary value problems. Dyn. Syst. Appl. 23, 415–430 (2014)

7. Almenar, P., Jódar, L.: Estimation of the smallest eigenvalue of an nth order linear boundary value problem. Math. Methods Appl. Sci. 44, 4491–4514 (2021). https://doi.org/10.1002/mma.7047

8. Almenar, P., Jódar, L.: The principal eigenvalue of some nth order linear boundary value problems. Bound. Value Probl. 2021, Article ID 84 (2021). https://doi.org/10.1186/s13661-021-01561-2

9. Almenar, P., Jódar, L.: Accurate estimations of any eigenpairs of n-th order linear boundary value problems. Mathematics 9(21), 2663 (2021). https://doi.org/10.3390/math9212663

10. Krein, M.G., Rutman, M.A.: Linear Operators Leaving Invariant a Cone in a Banach Space. Am. Math. Soc., New York (1950)

11. Erbe, L.H.: Eigenvalue criteria for existence of positive solutions to nonlinear boundary value problems. Math. Comput. Model. 32(5–6), 529–539 (2000)

12. Webb, J.R.L., Lan, K.Q.: Eigenvalue criteria for existence of multiple positive solutions of nonlinear boundary value problems of local and nonlocal type. Topol. Methods Nonlinear Anal. 27, 91–116 (2006)

13. Lan, K.Q.: Eigenvalues of semi-positone Hammerstein integral equations and applications to boundary value problems. Nonlinear Anal. 71(12), 5979–5993 (2009)

14. Webb, J.R.L.: A class of positive linear operators and applications to nonlinear boundary value problems. Topol. Methods Nonlinear Anal. 39, 221–242 (2012)

15. Ciancaruso, F.: Existence of solutions of semilinear systems with gradient dependence via eigenvalue criteria. J. Math. Anal. Appl. 482(1), 1–22 (2020). https://doi.org/10.1016/j.jmaa.2019.123547

16. Levin, A.J.: Some problems bearing on the oscillation of solutions of linear differential equations. Sov. Math. Dokl. 4, 121–124 (1963)

17. Pokornyi, J.V.: Some estimates of the Green’s function of a multi-point boundary value problem. Mat. Zametki 4, 533–540 (1968)

18. Karlin, S.: Total positivity, interpolation by splines and Green’s functions for ordinary differential equations. J. Approx. Theory 4(1), 91–112 (1971)

19. Peterson, A.: On the sign of Green’s functions. J. Differ. Equ. 21, 167–178 (1976)

20. Peterson, A.: Green’s functions for focal type boundary value problems. Rocky Mt. J. Math. 9(4), 721–732 (1979)

21. Elias, U.: Green’s functions for a non-disconjugate differential operator. J. Differ. Equ. 37, 318–350 (1980)

22. Peterson, A., Ridenhour, J.: Comparison theorems for Green’s functions for focal boundary value problems. In: Agarwal, R.P. (ed.) Recent Trends in Differential Equations. World Scientific Series in Applicable Analysis, vol. 1, pp. 493–506. World Scientific, Singapore (1992)

23. Eloe, P.W., Ridenhour, J.: Sign properties of Green’s functions for a family of two-point boundary value problems. Proc. Am. Math. Soc. 120(2), 443–452 (1994)

24. Webb, J.R.L., Infante, G.: Positive solutions of nonlocal boundary value problems: a unified approach. J. Lond. Math. Soc. 74(2), 673–693 (2006). https://doi.org/10.1112/S0024610706023179

25. Webb, J.R.L., Infante, G.: Nonlocal boundary value problems of arbitrary order. J. Lond. Math. Soc. 79(1), 238–258 (2009). https://doi.org/10.1112/jlms/jdn066

26. Cabada, A., Saavedra, L.: The eigenvalue characterization for the constant sign Green’s functions of $$(k,n-k)$$ problems. Bound. Value Probl. 2016, Article ID 44 (2016). https://doi.org/10.1186/s13661-016-0547-1

27. Nehari, Z.: Disconjugate linear differential operators. Trans. Am. Math. Soc. 129(3), 500–516 (1967)

28. Hartman, P.: Ordinary Differential Equations. Birkhäuser, Boston (1982)

## Acknowledgements

This paper was finished before the corresponding author joined Amazon Web Services.

## Author information


### Contributions

PA proposed the problem. Both authors discussed and agreed the mechanisms and tools to find a solution. PA wrote the first version of the manuscript and LJ reviewed and corrected several points. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Pedro Almenar.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Almenar, P., Jódar, L. New results on the sign of the Green function of a two-point n-th order linear boundary value problem. Bound Value Probl 2022, 50 (2022). https://doi.org/10.1186/s13661-022-01631-z