

  • Research
  • Open Access

Computation of Green’s functions through algebraic decomposition of operators

Boundary Value Problems 2016, 2016:167

  • Received: 6 May 2016
  • Accepted: 31 August 2016


In this article, we use linear algebra to improve the computational time for obtaining Green’s functions of linear differential equations with reflection (DER). This is achieved by decomposing both the ‘reduced’ equation (the ODE associated with a given DER) and the corresponding two-point boundary conditions.


Keywords

  • Green’s functions
  • ODE
  • reflection
  • decomposition


MSC

  • 34Bxx
  • 47Lxx
  • 34Kxx

1 Introduction

Differential operators with reflection have recently been of great interest, partly due to their applications to supersymmetric quantum mechanics [1–3] or topological methods applied to nonlinear analysis [4].

In recent years, work in this field has been devoted to obtaining eigenvalues and explicit solutions of different problems [5–8], to their qualitative properties [4, 9], or to obtaining the associated Green’s function [10–16]. In [16], the authors described a method to derive the Green’s function of a differential equation with constant coefficients, reflection, and two-point boundary conditions. This algorithm was implemented in Mathematica (see [17]) in order to put it to practical use. Unfortunately, it was soon observed that, although theoretically correct, it suffered severe limitations when computing the Green’s functions of problems of high order. In this respect, we have to point out that an nth-order linear DER is reduced to a (2n)th-order ordinary differential equation; see Theorem 2.5 and compare equations (2.4) and (2.5). This particularity poses a great challenge since the computational time increases greatly with n.

To overcome this, the best option is to go back from a \((2n)\)th-order problem to two problems of order n. This procedure is much faster than solving the order-2n problem directly. Furthermore, in some cases, the decomposition provides two equivalent problems, or a problem and its adjoint. In those cases, the improvement is even more pronounced.

In the next section, we contextualize the problem with a brief introduction to differential equations with reflection and state some basic results concerning the Green’s function associated with them. In Section 3, we develop some theoretical results, which provide a way of decomposing the DER we are dealing with. Finally, in Section 4, we establish a suitable decomposition for the boundary conditions, state criteria for self-adjointness of the decomposed problem, and provide examples to illustrate the theory.

2 Differential equations with reflection

In order to establish a useful framework to work with these equations, we consider the differential operator D, the pullback operator of the reflection \(\varphi(t)=-t\), denoted by \(\varphi^{*}(u)(t)=u(-t)\), and the identity operator Id.

Let \(T\in{\mathbb{R}}^{+}\) and \(I:=[-T,T]\). We now consider the algebra \({\mathbb{R}}[D,\varphi^{*} ]\) consisting of the linear operators of the form
$$ L=\sum_{k=0}^{n} \bigl(a_{k}\varphi ^{*}+b_{k} \bigr)D^{k}, $$
where \(n\in{\mathbb{N}}\), \(a_{k},b_{k}\in{\mathbb{R}}\), \(k=0,\ldots,n\), which act as
$$ Lu(t)=\sum_{k=0}^{n}a_{k}u^{(k)}(-t)+ \sum_{k=0}^{n}b_{k}u^{(k)}(t), \quad t\in I, $$
on any function \(u\in W^{n,1}(I)\). The operation in the algebra is the usual composition of operators; we will omit the composition sign. We observe that \(D^{k}\varphi ^{*}=(-1)^{k}\varphi^{*}D^{k}\) for \(k=0,1,\ldots\) , which makes it a noncommutative algebra. For convenience, we will write \(\sum_{k}\) for the sums \(\sum_{k=0}^{n}\) with \(k\in\{0,1,\ldots\}\), taking into account that the coefficients \(a_{k}\), \(b_{k}\) vanish for sufficiently large indices.
Notice that \({\mathbb{R}}[D,\varphi^{*}]\) is not a unique factorization domain. For instance,
$$ D^{2}-1=(D+1) (D-1)=-\bigl(\varphi^{*}D+\varphi^{*}\bigr)^{2}. $$
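These composition rules are easy to mechanize. The following Python sketch (ours; the paper’s implementation [17] is in Mathematica) stores an element of \({\mathbb{R}}[D,\varphi^{*}]\) as a pair of coefficient lists and composes two operators using \(D^{k}\varphi^{*}=(-1)^{k}\varphi^{*}D^{k}\) and \(\varphi^{*}\varphi^{*}=\mathrm{Id}\); it reproduces the factorization above.

```python
# Sketch (ours): elements of R[D, phi*] as pairs of coefficient lists
# (a, b), where a[k] multiplies phi*D^k and b[k] multiplies D^k.

def compose(op1, op2):
    """Compose (a1[k] phi* + b1[k]) D^k with (a2[l] phi* + b2[l]) D^l."""
    a1, b1 = op1
    a2, b2 = op2
    deg = len(a1) + len(a2) - 2
    a = [0.0] * (deg + 1)
    b = [0.0] * (deg + 1)
    for k in range(len(a1)):
        sgn = (-1) ** k          # from D^k phi* = (-1)^k phi* D^k
        for l in range(len(a2)):
            # (a1[k] phi* + b1[k]) D^k (a2[l] phi* + b2[l]) D^l
            #  = [(a1[k] b2[l] + (-1)^k b1[k] a2[l]) phi*
            #     + ((-1)^k a1[k] a2[l] + b1[k] b2[l])] D^{k+l}
            a[k + l] += a1[k] * b2[l] + sgn * b1[k] * a2[l]
            b[k + l] += sgn * a1[k] * a2[l] + b1[k] * b2[l]
    return a, b

# (phi* D + phi*)^2 = 1 - D^2, so -(phi* D + phi*)^2 = D^2 - 1:
op = ([1.0, 1.0], [0.0, 0.0])    # phi* + phi* D
sq = compose(op, op)
print(sq)  # ([0.0, 0.0, 0.0], [1.0, 0.0, -1.0]) -> 1 - D^2
```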

Let \({\mathbb{R}}[D]\) be the ring of polynomials with real coefficients in the variable D. The following property is crucial for obtaining a Green’s function.

Theorem 2.1

([16], Theorem 2.1)

Let L be as defined in (2.1) and define
$$ R:=\sum_{k}a_{k} \varphi^{*}D^{k}+\sum_{l}(-1)^{l+1}b_{l}D^{l} \in{\mathbb{R}}\bigl[D,\varphi^{*} \bigr]. $$
Then \(RL=LR\in{\mathbb{R}}[D]\).

Remark 2.2

If \(S:=RL=\sum_{k=0}^{2n} c_{k}D^{k}\), then
$$ c_{k}= \textstyle\begin{cases} 0, & k \text{ odd}, \\ 2\sum_{l=0}^{\frac{k}{2}-1} (-1 )^{l} (a_{l}a_{k-l}-b_{l}b_{k-l} )+ (-1 )^{\frac{k}{2}} (a_{\frac{k}{2}}^{2}-b_{\frac{k}{2}}^{2} ), & k \text{ even}. \end{cases} $$

This implies that the reduced operator RL has nonzero coefficients only for the even powers of the derivative, so the equation is self-adjoint. If the boundary conditions are appropriate (we will clarify this statement in Theorem 4.4), then the Green’s function is symmetric [18]. Observe that \(c_{0}=a_{0}^{2}-b_{0}^{2}\). Also, if \(L=\sum_{i=0}^{n} (a_{i}\varphi^{*}+b_{i} )D^{i}\) with \(a_{n}\ne0\) or \(b_{n}\ne0\), then we have that \(c_{2n}=(-1)^{n}(a_{n}^{2}-b_{n}^{2})\). Hence, if \(a_{n}=\pm b_{n}\), then \(c_{2n}=0\). This shows that, by composing two elements of \({\mathbb{R}}[D,\varphi^{*} ]\), we can get another element with derivatives of lower order. This complicates the computation of Green’s functions since, in this case, the original problem could have one, many, or no solutions [16]. The following example is quite illustrative.
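Theorem 2.1 and Remark 2.2 can be checked mechanically under the same coefficient-list representation. The sketch below (ours, not the paper’s Mathematica code) builds R from L and verifies that RL has no reflection part, using the operator \(L=D^{3}+\varphi^{*}+1\) from Example 4.3 below, for which \(RL=D^{6}\).

```python
# Sketch (ours): build R from L (Theorem 2.1) and check RL lies in R[D].
# Operators are pairs (a, b): a[k] multiplies phi*D^k, b[k] multiplies D^k.

def compose(op1, op2):
    a1, b1 = op1
    a2, b2 = op2
    n = len(a1) + len(a2) - 1
    a, b = [0.0] * n, [0.0] * n
    for k in range(len(a1)):
        sgn = (-1) ** k          # D^k phi* = (-1)^k phi* D^k
        for l in range(len(a2)):
            a[k + l] += a1[k] * b2[l] + sgn * b1[k] * a2[l]
            b[k + l] += sgn * a1[k] * a2[l] + b1[k] * b2[l]
    return a, b

def reflector(op):
    """R = sum_k a_k phi* D^k + sum_l (-1)^(l+1) b_l D^l (Theorem 2.1)."""
    a, b = op
    return a[:], [(-1) ** (l + 1) * bl for l, bl in enumerate(b)]

# L = D^3 + phi* + 1 (Example 4.3): a = (1,0,0,0), b = (1,0,0,1).
L = ([1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 1.0])
R = reflector(L)                 # D^3 + phi* - 1
aS, bS = compose(R, L)
print(aS)   # all zeros: RL has no reflection part
print(bS)   # only the coefficient of D^6 is 1: RL = D^6
```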

Example 2.3

Consider the equation
$$ x^{3)}(t)+x^{3)}(-t)=\sin t,\quad t\in I. $$
This equation cannot have a solution since the left-hand side is an even function whereas the right-hand side is an odd function.
As we said before, \(S=RL\) is a usual differential operator with constant coefficients. Consider now the following problem:
$$ \begin{aligned} &Su(t):=\sum_{k=0}^{n}a_{k}u^{k)}(t)=h(t), \quad t\in I, \\ &B_{k}u:=\sum_{j=0}^{n-1} \bigl[\alpha_{kj}u^{j)}(-T)+\beta _{kj}u^{j)}(T) \bigr]=0,\quad k=1,\ldots,n. \end{aligned} $$
The existence of Green’s functions for problems such as (2.3) is a classical result (see, e.g., [19]). Here we present it adapted to our framework.

Theorem 2.4

Assume that the following homogeneous problem has a unique solution:
$$ Su(t)=0,\quad t\in I,\qquad B_{k}u=0,\quad k=1,\ldots, n. $$
Then there exists a unique function, called Green’s function, such that:
  1. (G1)

    G is defined on the square \(I^{2}\).

  2. (G2)

    The partial derivatives \(\frac{\partial^{k}G}{\partial t^{k}}\) exist and are continuous on \(I^{2}\) for \(k=0,\ldots,n-2\).

  3. (G3)

    \(\frac{\partial^{n-1}G}{\partial t^{n-1}}\) and \(\frac {\partial^{n}G}{\partial t^{n}}\) exist and are continuous on \(I^{2}\backslash\{(t,t) : t\in I\}\).

  4. (G4)
    The lateral limits \(\frac{\partial^{n-1}G}{\partial t^{n-1}}(t,t^{+})\) and \(\frac{\partial^{n-1}G}{\partial t^{n-1}}(t,t^{-})\) exist for every \(t\in(a,b)\), and
    $$ \frac{\partial^{n-1}G}{\partial t^{n-1}}\bigl(t,t^{-}\bigr)-\frac{\partial^{n-1}G}{\partial t^{n-1}}\bigl(t,t^{+}\bigr)= \frac {1}{a_{n}}. $$
  5. (G5)

    For each \(s\in(a,b)\), the function \(G(\cdot,s)\) is a solution of the differential equation \(Su=0\) on \(I\backslash\{s\}\).

  6. (G6)

    For each \(s\in(a,b)\), the function \(G(\cdot,s)\) satisfies the boundary conditions \(B_{k}u=0\), \(k=1,\ldots,n\).

Furthermore, the function \(u(t):=\int_{a}^{b}G(t,s)h(s)\,\mathrm{d}s\) is the unique solution of problem (2.3).
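To make properties (G1)-(G6) concrete, the following sketch (ours) checks the jump condition (G4) and the solution formula for the classical model problem \(u''=h\) on \([-1,1]\) with \(u(-1)=u(1)=0\), whose Green’s function is \(G(t,s)=(t-1)(s+1)/2\) for \(s\le t\) and \((s-1)(t+1)/2\) otherwise.

```python
# Sketch (ours): the Green's function of u'' = h on [-1,1] with
# u(-1) = u(1) = 0, illustrating properties (G4)-(G6).

def G(t, s):
    if s <= t:
        return (t - 1.0) * (s + 1.0) / 2.0
    return (s - 1.0) * (t + 1.0) / 2.0

def dGdt(t, s):
    # dG/dt on each branch (G is linear in t off the diagonal)
    return (s + 1.0) / 2.0 if s < t else (s - 1.0) / 2.0

t = 0.3
# (G4): dG/dt(t, t-) - dG/dt(t, t+) = 1/a_n = 1 (here n = 2, a_2 = 1)
jump = dGdt(t, t - 1e-12) - dGdt(t, t + 1e-12)
print(round(jump, 9))   # 1.0

# u(t) = int_{-1}^{1} G(t,s) h(s) ds with h = 1 must equal (t^2-1)/2,
# the unique solution of u'' = 1, u(-1) = u(1) = 0 (midpoint rule).
N = 20000
h = 2.0 / N
u = sum(G(t, -1.0 + (i + 0.5) * h) for i in range(N)) * h
print(abs(u - (t * t - 1.0) / 2.0) < 1e-6)   # True
```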

Now we can state the result that relates the Green’s function of a problem with reflection to the Green’s function of its associated reduced problem.

In order to do that, given an operator \({\mathscr {L}}\) defined on some set of functions of one variable, we will define the operator \({\mathscr {L}}_{\vdash}\) as \({\mathscr {L}}_{\vdash}G(t,s):={\mathscr {L}}(G(\cdot,s))|_{t}\) for every s and any suitable function G of two variables.

Theorem 2.5


Let \(I=[-T,T]\). Consider the problem
$$ Lu(t)=h(t),\quad t\in I,\qquad B_{k}u=0,\quad k=1, \ldots,n, $$
where L is defined as in (2.1), \(h\in L^{1}(I)\), and
$$ B_{k}u:=\sum_{j=0}^{n-1} \bigl[ \alpha _{kj}u^{j)}(-T)+\beta_{kj}u^{j)}(T) \bigr],\quad k=1,\ldots,n. $$
Then there exists \(R\in{\mathbb{R}}[D,\varphi^{*} ]\) (as in (2.2)) such that \(S:=RL\in{\mathbb{R}}[D]\), and the unique solution of problem (2.4) is given by \(\int_{-T}^{T}R_{\vdash}G(t,s)h(s)\,\mathrm{d}s\), where G is the Green’s function associated with the problem
$$\begin{aligned}& Su =0, \end{aligned}$$
$$\begin{aligned}& B_{k}u =0,\quad k=1,\ldots,n, \end{aligned}$$
$$\begin{aligned}& B_{k}Ru =0,\quad k=1,\ldots,n, \end{aligned}$$
assuming that it has a unique solution.

As stated in Section 1, Theorem 2.5 was implemented in Mathematica in [17]. We now proceed to describe some steps that could be added to the algorithm in order to improve it.

3 Decomposing the reduced equation

The computation of Green’s functions is expensive in terms of computation time [17], especially for high-order equations, so it is necessary to find ways to mitigate this problem. Our approach consists in decomposing the problem in order to deal with equations of lower order.

First, observe that from Remark 2.2 we know that the reduced equation has no derivatives of odd order. For convenience, if p is a real (complex) polynomial, then we denote by \(p_{-}\) the polynomial with the same leading coefficient and opposite roots.

Lemma 3.1

Let \(n\in{\mathbb{N}}\), and let \(p(x)=\sum_{k=0}^{n}\alpha _{2k}x^{2k}\) be a real polynomial of order 2n. Then there is a complex polynomial q of order n such that \(p=\alpha_{2n}qq_{-}\). Furthermore, if \(\tilde{p}(x)=\sum_{k=0}^{n}\alpha_{2k}x^{k}\) has no negative roots, then q is a real polynomial.


First, observe that p is a polynomial in \(x^{2}\), and therefore, if λ is a root of p, so is −λ. Hence, using the fundamental theorem of algebra, the first part of the result can be derived by separating the monomials that compose p into two different polynomials with opposite roots.

Let us show explicitly that, in the case where \(\tilde{p}\) has no negative roots, q is a real polynomial.

Take the change of variables \(y=x^{2}\). Then, \(p(x)=\tilde{p}(y)\), and, by the fundamental theorem of algebra,
$$\begin{aligned} \begin{aligned} \tilde{p}(y)={}& \sum_{k=0}^{n} \alpha_{2k}y^{k} \\ ={} & \alpha_{2n}y^{\sigma}\bigl(y-\lambda_{1}^{2} \bigr)\cdots\bigl(y-\lambda_{m}^{2}\bigr) \bigl(y+\lambda _{m+1}^{2}\bigr) \\ &{} \cdots\bigl(y+\lambda_{\overline{m}}^{2}\bigr) \bigl(y^{2}+\mu_{1}y+\nu _{1}^{2}\bigr) \cdots\bigl(y^{2}+\mu_{l}y+\nu_{l}^{2} \bigr) \end{aligned} \end{aligned}$$
for some integers σ, m, \(\overline{m}\), l and real numbers \(\lambda _{1},\ldots,\lambda_{\overline{m}}\), \(\nu_{1},\ldots,\nu_{l}\), \(\mu_{1},\ldots,\mu _{l}\) such that \(\lambda_{k}>0\) and \(\nu_{k}>|\mu_{k}|/2\) for every k in the appropriate set of indices. The terms of the form \(y^{2}+\mu_{k}y+\nu _{k}^{2}\) correspond to the pairs of complex roots of the polynomial, which means that the discriminant \(\Delta=\mu_{k}^{2}-4\nu_{k}^{2}\) is negative, that is, \(\nu _{k}>|\mu_{k}|/2\).
Undoing the change of variables, we obtain
$$\begin{aligned} p(x) = & \alpha_{2n}x^{2\sigma}\bigl(x^{2}- \lambda_{1}^{2}\bigr)\cdots \bigl(x^{2}- \lambda_{m}^{2}\bigr) \bigl(x^{2}+ \lambda_{m+1}^{2}\bigr) \\ &{} \cdots\bigl(x^{2}+\lambda _{\overline{m}}^{2}\bigr) \bigl(x^{4}+\mu_{1}x^{2}+\nu_{1}^{2} \bigr)\cdots\bigl(x^{4}+\mu_{l}x^{2}+ \nu_{l}^{2}\bigr). \end{aligned}$$
Now we have
$$\begin{aligned}& \bigl(x^{2}-\lambda_{k}^{2}\bigr)=(x+ \lambda_{k}) (x-\lambda _{k}), \qquad \bigl(x^{2}+ \lambda_{k}^{2}\bigr)=(x+\lambda_{k}i) (x- \lambda_{k}i),\quad \text{and} \\& \bigl(x^{4}+\mu_{k}x^{2}+\nu_{k}^{2} \bigr)=\bigl(x^{2}-x \sqrt{2\nu_{k}-\mu_{k}}+ \nu_{k}\bigr) \bigl(x^{2}+x \sqrt{2\nu_{k}- \mu_{k}}+\nu_{k}\bigr) \end{aligned}$$
for any k in the appropriate set of indices. Define
$$\begin{aligned} q(x) = & x^{\sigma}(x-\lambda_{1})\cdots(x-\lambda _{m}) (x-\lambda_{m+1}i)\cdots(x-\lambda_{\overline{m}}i) \bigl(x^{2}-x \sqrt {2\nu_{1}-\mu_{1}}+ \nu_{1}\bigr) \\ &{} \cdots\bigl(x^{2}-x \sqrt{2\nu_{l}-\mu_{l}}+ \nu_{l}\bigr) \end{aligned}$$
$$\begin{aligned} q_{-}(x) = & x^{\sigma}(x+\lambda_{1})\cdots(x+\lambda _{m}) (x+\lambda_{m+1}i)\cdots(x+\lambda_{\overline{m}}i) \bigl(x^{2}+x \sqrt {2\nu_{1}-\mu_{1}}+ \nu_{1}\bigr) \\ &{} \cdots\bigl(x^{2}+x \sqrt{2\nu_{l}-\mu_{l}}+ \nu_{l}\bigr). \end{aligned}$$
We have that \(p=\alpha_{2n}qq_{-}\).
Observe that if λ is a root of p, then \(\lambda^{2}\) is a root of \(\tilde{p}\). Hence, if \(\tilde{p}\) has no negative roots, then p has no roots of the form \(\lambda=ai\) with \(a\ne0\). Thus,
$$\begin{aligned}& p(x)= \alpha_{2n}x^{2\sigma}\bigl(x^{2}- \lambda_{1}^{2}\bigr)\cdots \bigl(x^{2}- \lambda_{m}^{2}\bigr) \bigl(x^{4}+ \mu_{1}x^{2}+\nu_{1}^{2}\bigr)\cdots \bigl(x^{4}+\mu_{l}x^{2}+\nu_{l}^{2} \bigr), \\& q(x)= x^{\sigma}(x-\lambda_{1})\cdots(x-\lambda_{m}) \bigl(x^{2}-x \sqrt{2\nu _{1}-\mu_{1}}+ \nu_{1}\bigr) \cdots\bigl(x^{2}-x \sqrt{2\nu_{l}- \mu_{l}}+\nu_{l}\bigr), \\& q_{-}(x)= x^{\sigma}(x+\lambda_{1})\cdots(x+\lambda_{m}) \bigl(x^{2}+x \sqrt {2\nu_{1}-\mu_{1}}+ \nu_{1}\bigr) \cdots\bigl(x^{2}+x \sqrt{2\nu_{l}- \mu_{l}}+\nu_{l}\bigr), \end{aligned}$$
that is, q is a real polynomial. □
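For a concrete instance of this construction (our own example, covering the complex-pair case), take \(p(x)=x^{4}+x^{2}+1\): then \(\tilde{p}(y)=y^{2}+y+1\) has a complex root pair with \(\mu=1\), \(\nu=1\), and the recipe above gives the real factors \(q(x)=x^{2}-x+1\) and \(q_{-}(x)=x^{2}+x+1\).

```python
# Sketch (ours) of Lemma 3.1 on p(x) = x^4 + x^2 + 1: the complex-pair
# factor y^2 + mu*y + nu^2 of ptilde yields real quadratic factors
# q(x) = x^2 - x*sqrt(2*nu - mu) + nu and its opposite-root mate q_-.
import math

def polymul(p, q):
    # multiply two polynomials given by ascending coefficient lists
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

mu, nu = 1.0, 1.0                  # ptilde(y) = y^2 + mu*y + nu^2
root = math.sqrt(2.0 * nu - mu)    # = 1
q  = [nu, -root, 1.0]              # x^2 - x + 1  (ascending coefficients)
qm = [nu,  root, 1.0]              # x^2 + x + 1 = q_-
print(polymul(q, qm))              # [1.0, 0.0, 1.0, 0.0, 1.0] -> x^4+x^2+1
```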

Remark 3.2

Descartes’ rule of signs establishes that the number of positive roots (with multiple roots counted separately) of a real polynomial in one variable either equals the number of sign changes between consecutive nonzero coefficients or is less than it by an even number, provided that the terms of the polynomial are ordered by descending exponent. This implies that for a polynomial \(p(x)\) to have no negative roots, it suffices that all coefficients of \(p(-x)\) are positive, that is, that the coefficients of the even powers of \(p(x)\) are positive and those of the odd powers are negative.

There exist algorithmic ways of determining the exact number of positive (or real) roots of a polynomial. For more information on this issue, see, for instance, [20–22].

The following lemma establishes a relation between the coefficients of q and \(q_{-}\).

Lemma 3.3

Let \(n\in{\mathbb{N}}\), and let \(q(x)=\sum_{k=0}^{n}\alpha_{k}x^{k}\) be a complex polynomial. Then
$$q_{-}(x)=\sum_{k=0}^{n}(-1)^{k+n} \alpha_{k}x^{k}. $$


We proceed by induction. For \(n=1\), \(q(x)=\alpha(x-\lambda_{1})\). Clearly, q has the root \(\lambda_{1}\), and \(q_{-}(x)=\alpha(x+\lambda _{1})=(-1)^{1+1}\alpha x+(-1)^{1}\alpha\lambda_{1}\) has the root \(-\lambda_{1}\).

Assume that the result is true for some \(n\ge1\). Then, for \(n+1\), q is of the form \(q(x)=(x-\lambda_{n+1})r(x)\), where \(r(x)=\sum_{k=0}^{n}\alpha_{k}x^{k}\) is a polynomial of order n, that is,
$$ q(x)=(x-\lambda_{n+1})\sum_{k=0}^{n} \alpha _{k}x^{k}=x^{n+1}+\sum _{k=1}^{n} [\alpha_{k-1}-\lambda _{n+1}\alpha_{k} ]x^{k}-\lambda_{n+1} \alpha_{0}. $$
Now, \(q_{-}(x)=(x+\lambda_{n+1})r_{-}(x)\). Since the formula is valid for n,
$$\begin{aligned} q_{-}(x) & =(x+\lambda_{n+1})r_{-}(x)=(x+\lambda_{n+1})\sum _{k=0}^{n}(-1)^{k+n} \alpha_{k}x^{k} \\ & =x^{n+1}+\sum_{k=1}^{n}(-1)^{k+n+1} [\alpha_{k-1}-\lambda_{n+1}\alpha _{k} ]x^{k}-(-1)^{n+1}\lambda_{n+1}\alpha_{0}. \end{aligned}$$
So the formula is valid for \(n+1\) as well. □

Remark 3.4

The result can be directly proven by considering the last statement in Remark 3.2. If we take a polynomial \(p(x)=a(x-\lambda_{1})\cdots(x-\lambda_{n})\), then the polynomial \(p(-x)\) has exactly opposite roots. In fact, \(p(-x)=a(-x-\lambda _{1})\cdots(-x-\lambda_{n})=(-1)^{n}a(x+\lambda_{1})\cdots(x+\lambda_{n})\). It is easy to check that the coefficients of \(p(-x)\) are precisely as described in Lemma 3.3 save for the factor \((-1)^{n}\).
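Lemma 3.3 is immediate to check numerically. The sketch below (ours) computes \(q_{-}\) from the coefficients of \(q(x)=(x-1)(x-2)(x-3)\) and confirms that the opposite points \(-1,-2,-3\) are roots of \(q_{-}\).

```python
# Sketch (ours) of Lemma 3.3: the coefficients of q_- are
# (-1)^(k+n) * alpha_k, and q_- has exactly the opposite roots of q.

def q_minus(alpha):
    n = len(alpha) - 1
    return [(-1) ** (k + n) * ak for k, ak in enumerate(alpha)]

def peval(coeffs, x):
    # evaluate a polynomial given by ascending coefficients
    return sum(c * x ** k for k, c in enumerate(coeffs))

q = [-6.0, 11.0, -6.0, 1.0]    # q(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
qm = q_minus(q)
print(qm)                       # [6.0, 11.0, 6.0, 1.0] -> (x+1)(x+2)(x+3)
print([peval(qm, r) for r in (-1.0, -2.0, -3.0)])   # [0.0, 0.0, 0.0]
```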

This last lemma allows the computation of the polynomials q and \(q_{-}\) related to the polynomial RL on the variable D using the formula given in Remark 2.2. We will assume that RL is of order 2n, that is, \(a_{n}^{2}-b_{n}^{2}\neq0\). Otherwise, the problem of computing q and \(q_{-}\) would be the same, but these polynomials would be of lower order. Also, assume that RL, considered as a polynomial on \(D^{2}\), has no negative roots in order for q to be a real polynomial. If \(L=\sum_{k=0}^{n}(a_{k}\varphi^{*}+b_{k})D^{k}\) and \(q(D)=D^{n}+\sum_{k=0}^{n-1}\alpha_{k}D^{k}\), then
$$RL=\sum_{k=0}^{2n}c_{k}D^{k}=(-1)^{n} \bigl(a_{n}^{2}-b_{n}^{2}\bigr)q(D)q_{-}(D). $$
This relation establishes the following system of quadratic equations:
$$\begin{aligned} c_{2k}&= 2\sum_{l=0}^{k-1} (-1 )^{l} (a_{l}a_{2k-l}-b_{l}b_{2k-l} )+ (-1 )^{k} \bigl(a_{k}^{2}-b_{k}^{2} \bigr) \\ &= \bigl(a_{n}^{2}-b_{n}^{2}\bigr) \Biggl[2\sum_{l=0}^{k-1} (-1 )^{l} ( \alpha_{l}\alpha_{2k-l} )+ (-1 )^{k} \alpha_{k}^{2} \Biggr], \quad k=0,\ldots,n, \end{aligned}$$
where \(a_{k},b_{k},\alpha_{k}=0\) if \(k\notin\{0,\ldots,n\}\) and \(\alpha _{n}=1\). Since the equation for \(k=n\) holds trivially, these are n equations in the n unknowns \(\alpha_{0},\ldots,\alpha_{n-1}\). We present here the case \(n=2\) to illustrate the solution of these equations.

Example 3.5

For \(n=2\), we have that
$$\begin{aligned}& RL = \bigl(a_{2}^{2}-b_{2}^{2} \bigr)D^{4}+ \bigl(-a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2} \bigr)D^{2}+a_{0}^{2}-b_{0}^{2}, \\& \bigl(a_{2}^{2}-b_{2}^{2} \bigr)q(D)q_{-}(D) = \bigl(a_{2}^{2}-b_{2}^{2} \bigr)D^{4}+ \bigl(2 \alpha_{0}-\alpha _{1}^{2} \bigr) \bigl(a_{2}^{2}-b_{2}^{2} \bigr)D^{2}+\alpha_{0}^{2} \bigl(a_{2}^{2}-b_{2}^{2} \bigr), \end{aligned}$$
and the system of equations is
$$ \begin{aligned} &a_{0}^{2}-b_{0}^{2} = \bigl(a_{2}^{2}-b_{2}^{2} \bigr) \alpha_{0}^{2}, \\ &{-}a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2} = \bigl(a_{2}^{2}-b_{2}^{2} \bigr) \bigl(2 \alpha _{0}-\alpha_{1}^{2} \bigr). \end{aligned} $$
Before computing the solutions, let us state explicitly the restrictions imposed by requiring that RL, considered as an order-2 polynomial in \(D^{2}\), that is, \(RL(x)=a x^{2}+b x +c\), has no negative roots. There are two options:
  1. (I)
    There are two complex roots, that is, \(\Delta= b^{2}-4ac<0\). This is equivalent to \(ac>0\land|b|<2\sqrt{ac}\) or, expressed in terms of the coefficients of RL,
    $$ \bigl(b_{0}^{2}-a_{0}^{2}\bigr) \bigl(b_{2}^{2}-a_{2}^{2}\bigr)>0 \quad \text{and}\quad \bigl\vert -a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2} \bigr\vert < 2\sqrt{\bigl(b_{0}^{2}-a_{0}^{2} \bigr) \bigl(b_{2}^{2}-a_{2}^{2}\bigr)}. $$
  2. (II)
    There are two nonnegative roots, that is, \(\Delta =b^{2}-4ac\ge0\), and
    $$ \bigl(-b-\sqrt{b^{2}-4ac}\bigr)/(2a)\ge0. $$
    This is equivalent to \((a,c\ge0\land-b\ge2\sqrt{ac})\lor(a,c\le 0\land b\ge2\sqrt{ac})\) or, expressed in terms of the coefficients of RL,
    $$ \Bigl[\bigl(b_{0}^{2}-a_{0}^{2}\bigr), \bigl(b_{2}^{2}-a_{2}^{2}\bigr)\ge0\land -\bigl(-a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2}\bigr)\ge2\sqrt{\bigl(b_{0}^{2}-a_{0}^{2} \bigr) \bigl(b_{2}^{2}-a_{2}^{2}\bigr)} \Bigr] $$
    or
    $$ \Bigl[\bigl(b_{0}^{2}-a_{0}^{2}\bigr), \bigl(b_{2}^{2}-a_{2}^{2}\bigr)\le0\land -\bigl(-a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2}\bigr)\ge2\sqrt{\bigl(b_{0}^{2}-a_{0}^{2} \bigr) \bigl(b_{2}^{2}-a_{2}^{2}\bigr)} \Bigr]. $$

Now, with these conditions, the solutions of the system of equations (3.1) are as follows.

Case (I). We have two solutions:
$$\begin{aligned}& \alpha_{0}=\sqrt{\frac{b_{0}^{2}-a_{0}^{2}}{b_{2}^{2}-a_{2}^{2}}}, \\& \alpha_{1}=\pm\sqrt{\frac{2\operatorname{sign} (a_{2}^{2}-b_{2}^{2})\sqrt{(b_{0}^{2}-a_{0}^{2}) (b_{2}^{2}-a_{2}^{2})}-(-a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2})}{a_{2}^{2}-b_{2}^{2}}}. \end{aligned}$$
Case (II). We have four solutions depending on whether we choose \(\xi=1\) or \(\xi=-1\):
$$\begin{aligned}& \alpha_{0}=\xi\sqrt{\frac{b_{0}^{2}-a_{0}^{2}}{b_{2}^{2}-a_{2}^{2}}}, \\& \alpha_{1}=\pm\sqrt{\frac{2\xi\operatorname{sign} (a_{2}^{2}-b_{2}^{2})\sqrt{(b_{0}^{2}-a_{0}^{2}) (b_{2}^{2}-a_{2}^{2})}-(-a_{1}^{2}+2 a_{0} a_{2}+b_{1}^{2}-2 b_{0} b_{2})}{a_{2}^{2}-b_{2}^{2}}}. \end{aligned}$$

These solutions provide well-defined real numbers by conditions (I) and (II).
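As a worked check of Example 3.5 (with coefficients of our own choosing that satisfy case (II)), take \(L=\varphi^{*}D^{2}+3\varphi^{*}D+2\varphi^{*}\), so \(a=(2,3,1)\), \(b=(0,0,0)\) and \(RL=D^{4}-5D^{2}+4\); the formulas with \(\xi=1\) give \(\alpha_{0}=2\), \(\alpha_{1}=3\), that is, \(q(D)=D^{2}+3D+2\).

```python
# Sketch (ours): solving the n = 2 system for a concrete operator
# L = phi*D^2 + 3 phi*D + 2 phi*, i.e. a = (2,3,1), b = (0,0,0).
import math

a0, a1, a2 = 2.0, 3.0, 1.0
b0, b1, b2 = 0.0, 0.0, 0.0

c4 = a2**2 - b2**2
c2 = -a1**2 + 2*a0*a2 + b1**2 - 2*b0*b2
c0 = a0**2 - b0**2                        # RL = c4 D^4 + c2 D^2 + c0

# case (II) formulas with xi = 1:
xi = 1.0
sign = 1.0 if a2**2 - b2**2 > 0 else -1.0
alpha0 = xi * math.sqrt((b0**2 - a0**2) / (b2**2 - a2**2))
alpha1 = math.sqrt(
    (2*xi*sign*math.sqrt((b0**2 - a0**2) * (b2**2 - a2**2)) - c2) / c4)

# q(D) = D^2 + alpha1 D + alpha0 and q_-(D) = D^2 - alpha1 D + alpha0, so
# c4 * q * q_- has coefficients (c4*alpha0^2, 0, c4*(2*alpha0-alpha1^2), 0, c4):
prod = [c4*alpha0**2, 0.0, c4*(2*alpha0 - alpha1**2), 0.0, c4]
print((alpha0, alpha1))                 # (2.0, 3.0)
print(prod == [c0, 0.0, c2, 0.0, c4])   # True: the decomposition matches RL
```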

4 Decomposing the boundary conditions

Now we consider the cases where the problem can be decomposed into two equations. We will try to identify the circumstances under which problem (2.5)-(2.6)-(2.7) can be expressed as an equivalent factored problem of the form
$$\begin{aligned}& L_{1}u =y, \qquad V_{j}u=0,\quad j=1, \ldots,n, \end{aligned}$$
$$\begin{aligned}& L_{2}y =Rh, \qquad \widetilde{V}_{j}y=0, \quad j=1,\ldots,n, \end{aligned}$$
where \(S=L_{2}L_{1}\). If that were the case, then conditions (2.6)-(2.7) would be equivalent to
$$ V_{j}u=0, \qquad \widetilde{V}_{j} L_{1}u=0, \quad j=1,\ldots,n. $$
In this case, the Green’s function of problem (2.5)-(2.6)-(2.7) can be expressed as
$$ G(t,s)= \int_{-T}^{T}G_{1}(t,r)G_{2}(r,s) \,\mathrm{d}r, $$
where \(G_{1}\) is the Green’s function associated with problem (4.1), and \(G_{2}\) is the one associated with problem (4.2), assuming that both Green’s functions exist.
In order to guarantee that (2.6)-(2.7) and (4.3) are equivalent, let us establish the following definitions. Let
$$\begin{aligned}& \Gamma_{1}: =(\alpha_{kj})_{k =1,\ldots,n}^{j =0,\ldots ,n-1}, \qquad X_{n}:=\bigl(u(T),u'(T),\ldots,u^{(n-1)}(T) \bigr), \\& \Theta_{1}: =(\beta_{kj})_{k =1,\ldots,n}^{j =0,\ldots,n-1}, \qquad \overline{X}_{n}:=\bigl(u(-T),u'(-T), \ldots,u^{(n-1)}(-T)\bigr). \end{aligned}$$
Then the boundary conditions (2.6) can be expressed as \(\Gamma_{1}\overline{X}_{n}+\Theta_{1}X_{n}=0\). In the same way, (2.7) can be written as \((\Gamma_{2}\ \Gamma_{3})\overline{X}_{2n}+(\Theta_{2}\ \Theta_{3})X_{2n}=0\) for some matrices \(\Gamma _{2},\Gamma_{3},\Theta_{2},\Theta_{3}\in{ \mathscr {M}}_{n}({\mathbb{R}})\). So, globally, the conditions on equation (2.5) can be expressed as
$$ \begin{pmatrix} \Gamma_{1} & 0 \\ \Gamma_{2} & \Gamma_{3} \end{pmatrix} \overline{X}_{2n}+ \begin{pmatrix}\Theta_{1} & 0 \\ \Theta_{2} & \Theta_{3} \end{pmatrix} X_{2n}=0. $$
Now, assume that \(L_{1}\) and \(\widetilde{V}_{j}\) are of the form
$$\begin{aligned}& L_{1} =\sum_{l=0}^{n}c_{l}D^{l}, \\& \widetilde{V}_{j}u =\sum_{k=0}^{n-1} \bigl[\gamma _{jk}u^{k)}(-T)+\delta_{jk}u^{k)}(T) \bigr]=\sum_{k=0}^{n-1} \bigl[ \gamma_{jk}(-T)^{*}+\delta_{jk}T^{*} \bigr]D^{k}u,\quad j=1,\ldots,n \end{aligned}$$
for some \(c_{l},\gamma_{jk},\delta_{jk}\in{\mathbb{R}}\), \(l=0,\ldots,n\), \(j=1,\ldots,n\), \(k=0,\ldots,n-1\), where \(a^{*}\) denotes the pullback by the constant a. Define now \(\Phi:=(\gamma_{jk})_{j,k}, \Psi:=(\delta_{jk})_{j,k}\in {\mathscr {M}}_{n}({\mathbb{R}})\), and
$$\begin{aligned} \Xi =&(d_{jk})_{j=0,\ldots, n-1}^{k=0,\ldots,2n-1}:= \begin{pmatrix}c_{0} & c_{1} & c_{2} & \cdots& c_{n-1} & c_{n} & 0 & 0 & \cdots& 0 \\ 0& c_{0} & c_{1} & \cdots& c_{n-2} & c_{n-1} & c_{n} & 0 & \cdots& 0\\ 0 & 0& c_{0} & \cdots& c_{n-3} & c_{n-2} & c_{n-1} & c_{n} & \cdots& 0 \\ \vdots& \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \vdots& \ddots& \vdots\\ 0 & 0 & 0 & \cdots& c_{0} & c_{1} & c_{2} & c_{3} & \cdots& c_{n} \end{pmatrix} \\ =& \bigl(\textstyle\begin{array}{@{}c@{\quad}c@{}} \Xi_{1}& \Xi_{2}\end{array}\displaystyle \bigr)\in{ \mathscr {M}}_{n\times2n}({\mathbb{R}}), \end{aligned}$$
where \(\Xi_{1}, \Xi_{2}\in{ \mathscr {M}}_{n}({\mathbb{R}})\), \(\Xi_{2}\) is invertible (because \(c_{n}\ne0\)), and \(\Xi_{1}\) is invertible if and only if \(c_{0}\ne0\).
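The band matrix Ξ is straightforward to build from the coefficients of \(L_{1}\): row j carries \(c_{0},\ldots,c_{n}\) shifted j places to the right. The helper below is our own sketch, shown on the hypothetical coefficients \(c=(1,2,3)\) with \(n=2\).

```python
# Sketch (ours): build the n x 2n band matrix Xi = (Xi_1 | Xi_2) from the
# ascending coefficients c_0, ..., c_n of L1.

def build_xi(c):
    n = len(c) - 1
    rows = []
    for j in range(n):
        row = [0.0] * (2 * n)
        for k, ck in enumerate(c):
            row[j + k] = ck      # c_0..c_n shifted j places to the right
        rows.append(row)
    return rows

for row in build_xi([1.0, 2.0, 3.0]):   # hypothetical L1 = 3D^2 + 2D + 1
    print(row)
# [1.0, 2.0, 3.0, 0.0]
# [0.0, 1.0, 2.0, 3.0]
```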
Now we are ready to start the calculations. We have that
$$\begin{aligned} (\widetilde{V}_{j}L_{1}u)_{j} = & \Biggl(\sum _{k=0}^{n-1} \bigl[\gamma_{jk}(-T)^{*}+ \delta_{jk}T^{*} \bigr]D^{k}\sum_{l=0}^{n}c_{l}D^{l}u \Biggr)_{j} \\ =& \Biggl(\sum_{k=0}^{n-1} \sum_{l=0}^{n} \bigl[\gamma_{jk}c_{l}(-T)^{*}+ \delta_{jk}c_{l}T^{*} \bigr]D^{k+l}u \Biggr)_{j} \\ = & \Biggl(\sum_{k=0}^{n-1}\sum _{m=k}^{k+n} \bigl[\gamma_{jk}c_{m-k}(-T)^{*}+ \delta _{jk}c_{m-k}T^{*} \bigr]D^{m}u \Biggr)_{j} \\ = & \Biggl(\sum_{k=0}^{n-1}\sum _{m=0}^{2n-1} \bigl[\gamma _{jk}d_{km}u^{(m)}(-T)+ \delta_{jk}d_{km}u^{(m)}(T) \bigr] \Biggr)_{j} \\ = & \Biggl(\sum_{k=0}^{n-1} \gamma_{jk}d_{km} \Biggr)_{j,m}\overline{X}_{2n}+ \Biggl(\sum_{k=0}^{n-1}\delta_{jk}d_{km} \Biggr)_{j,m}X_{2n}=\Phi\Xi\overline{X}_{2n}+\Psi\Xi X_{2n}. \end{aligned}$$
Hence, we can write (4.3) in the form
$$ \begin{pmatrix} \widetilde{\Phi}& 0 \\ \Phi\Xi_{1} & \Phi\Xi_{2} \end{pmatrix} \overline{X}_{2n}+ \begin{pmatrix}\widetilde{\Psi}& 0 \\ \Psi\Xi_{1} & \Psi\Xi_{2} \end{pmatrix} X_{2n}=0. $$
Clearly, it is convenient to take \(\widetilde{\Phi}=\Gamma_{1}\) and \(\widetilde{\Psi}=\Theta_{1}\), that is, \(V_{j}=B_{j}\), \(j=1,\ldots,n\).

Lemma 4.1

If \(\Gamma_{1}\) and \(\Gamma_{3}\) are invertible and \(\Theta_{2}=\Gamma _{2}\Gamma_{1}^{-1}\Theta_{1}+\Theta_{3}\Xi_{2}^{-1}\Xi_{1}-\Gamma_{3}\Xi _{2}^{-1}\Xi_{1}\Gamma_{1}^{-1}\Theta_{1}\), then, taking
$$\widetilde{\Phi}=\Gamma_{1}, \qquad \widetilde{\Psi}= \Theta_{1},\qquad \Phi =\mathrm{Id},\quad \textit{and} \quad \Psi= \Xi_{2}\Gamma_{3}^{-1}\Theta _{3}\Xi _{2}^{-1}, $$
condition (4.4) is equivalent to condition (4.5), and, therefore, problems (2.5)-(2.6)-(2.7) and (4.1)-(4.2) are equivalent.


Consider the matrix
$$A= \begin{pmatrix}\mathrm{Id}& 0 \\ (\Xi_{1}-\Xi_{2}\Gamma _{3}^{-1}\Gamma_{2})\Gamma _{1}^{-1} & \Xi_{2}\Gamma_{3}^{-1} \end{pmatrix} . $$
The matrix A is invertible, and
$$ \begin{pmatrix} \widetilde{\Phi}& 0 \\ \Phi\Xi_{1} & \Phi\Xi_{2} \end{pmatrix} =A \begin{pmatrix} \Gamma_{1} & 0 \\ \Gamma_{2} & \Gamma_{3} \end{pmatrix} ,\qquad \begin{pmatrix}\widetilde{\Psi}& 0 \\ \Psi\Xi_{1} & \Psi\Xi_{2} \end{pmatrix} =A \begin{pmatrix}\Theta_{1} & 0 \\ \Theta_{2} & \Theta_{3} \end{pmatrix} . $$

Hence, conditions (4.4) and (4.5) are equivalent. □

Analogously, we have a result when \(\Theta_{1}\) and \(\Theta_{3}\) are invertible.

Lemma 4.2

If \(\Theta_{1}\) and \(\Theta_{3}\) are invertible and \(\Gamma_{2}=\Theta _{2}\Theta_{1}^{-1}\Gamma_{1}+\Gamma_{3}\Xi_{2}^{-1}\Xi_{1}-\Theta_{3}\Xi _{2}^{-1}\Xi_{1}\Theta_{1}^{-1}\Gamma_{1}\), then, taking
$$\widetilde{\Psi}=\Theta_{1},\qquad \widetilde{\Phi}= \Gamma_{1},\qquad \Psi =\mathrm{Id},\quad \textit{and}\quad \Phi= \Xi_{2}\Theta_{3}^{-1}\Gamma _{3}\Xi _{2}^{-1}, $$
condition (4.4) is equivalent to condition (4.5), and, therefore, problems (2.5)-(2.6)-(2.7) and (4.1)-(4.2) are equivalent.

The following example illustrates this discussion explicitly.

Example 4.3

Consider the following problem:
$$ \begin{aligned} &u'''(t)+u(-t)+u(t)=h(t), \quad t\in I, \\ &u(-1)-u''(1)=0,\qquad u'(-1)=u'(1), \qquad u''(-1)-u(1)=0, \end{aligned} $$
where \(h(t)=\sin t\). Then, the operator we are studying is \(L=D^{3}+\varphi^{*}+1\). If we take \(R:=D^{3}+\varphi^{*}-1\), then we have that \(RL=D^{6}\), which admits a simple decomposition in \({\mathbb{R}}[D]\) as \(RL=(D^{3})(D^{3})=L_{2}L_{1}\).
The boundary conditions are
$$\bigl[(-1)^{*}-1^{*}D^{2}\bigr]u=0, \qquad \bigl[(-1)^{*}D-1^{*}D\bigr]u=0, \qquad \bigl[(-1)^{*}D^{2}-1^{*}\bigr]u=0. $$
Taking this into account, we add the conditions
$$\begin{aligned}& 0 =\bigl[(-1)^{*}-1^{*}D^{2}\bigr]Ru=u'''(-1)-u^{(5)}(1), \\& 0 =\bigl[(-1)^{*}D-1^{*}D\bigr]Ru=u^{(4)}(-1)-u^{(4)}(1), \\& 0 =\bigl[(-1)^{*}D^{2}-1^{*}\bigr]Ru=u^{(5)}(-1)-u'''(1). \end{aligned}$$
Then our new reduced problem, writing the boundary conditions in matrix form, is
$$\begin{aligned}& u^{(6)}(t)=f(t), \\& \begin{pmatrix} 1 & 0 & 0 & 0 &0&0\\ 0 & 1 & 0 & 0&0&0 \\ 0 & 0 & 1 & 0&0&0 \\ 0 & 0 & 0 & 1&0&0 \\ 0 & 0 & 0 & 0&1&0 \\ 0 & 0 & 0 & 0&0&1 \end{pmatrix} \begin{pmatrix} u(-1) \\ u'(-1) \\ u''(-1) \\ u'''(-1)\\u^{(4)}(-1)\\u^{(5)}(-1) \end{pmatrix} \\& \quad {}+ \begin{pmatrix} 0&0 & -1 & 0 & 0&0 \\ 0& -1 & 0 & 0 & 0 &0\\ -1 & 0 & 0 & 0&0&0 \\ 0 & 0 & 0 & 0&0&-1 \\ 0 & 0 & 0 & 0&-1&0 \\ 0 & 0 & 0 & -1&0&0 \end{pmatrix} \begin{pmatrix} u(1) \\ u'(1) \\ u''(1) \\ u'''(1)\\u^{(4)}(1)\\u^{(5)}(1) \end{pmatrix} =0 , \end{aligned}$$
where \(f(t)=R h(t)=h'''(t)+h(-t)-h(t)=-3\sin t\).
Now, we can check that we are working under the conditions of Lemma 4.1. We have that \(\Gamma_{1}=\Gamma_{3}=\mathrm{Id}\), \(\Gamma_{2}=\Theta_{2}=0\), and
$$\Theta_{1}=\Theta_{3}= \begin{pmatrix} 0&0&-1\\ 0&-1&0 \\ -1&0&0 \end{pmatrix} . $$
On the other hand,
$$\Xi= (\Xi_{1}\ \Xi_{2} )= \begin{pmatrix}0 & 0 & 0 & 1 & 0 & 0 \\ 0& 0 & 0 & 0 & 1 & 0\\0&0& 0 & 0 & 0 & 1 \end{pmatrix}, $$
that is, \(\Xi_{1}=0\) and \(\Xi_{2}=\mathrm{Id}\), since \(L_{1}=D^{3}\) gives \(c_{0}=c_{1}=c_{2}=0\) and \(c_{3}=1\).
Thus, it is straightforward to check that
$$\Gamma_{2}\Gamma_{1}^{-1}\Theta_{1}+ \Theta_{3}\Xi_{2}^{-1}\Xi _{1}- \Gamma_{3}\Xi_{2}^{-1}\Xi_{1} \Gamma_{1}^{-1}\Theta_{1}=\Theta_{2}=0, $$
and therefore the hypotheses of Lemma 4.1 are satisfied. The conditions \(\widetilde{V}_{j}\) are given by the matrices \(\Phi=\mathrm{Id}\) and \(\Psi=\Xi_{2}\Gamma_{3}^{-1}\Theta_{3}\Xi_{2}^{-1}=\Theta_{3}\). Hence, we know that this problem is equivalent to the factored system
$$\begin{aligned}& \begin{aligned} &u'''(t) =v(t), \qquad u(-1)-u''(1)=0, \\ &u'(-1)=u'(1), \qquad u''(-1)-u(1)=0, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned} &v'''(t) =-3\sin t, \qquad v(-1)-v''(1)=0, \\ &v'(-1)=v'(1), \qquad v''(-1)-v(1)=0. \end{aligned} \end{aligned}$$
Thus, it is clear that
$$ u(t)= \int_{-1}^{1}G_{1}(t,s)v(s)\,\mathrm{d}s, \qquad v(t)= \int _{-1}^{1}G_{2}(t,s)f(s)\,\mathrm{d}s, $$
where \(G_{1}=G_{2}\) are, respectively, the Green’s functions of (4.8) and (4.9). The Green’s functions of problems involving linear ordinary differential equations with constant coefficients and two-point boundary conditions can be computed with the Mathematica notebooks [23] or [17]. Explicitly,
$$ G_{1}(t,s)= \textstyle\begin{cases} -\frac{1}{4} (s-t) (s (t-1)+t-3), & -1\leq s\leq t\leq 1, \\ -\frac{1}{4} (s-t) ((s-1) t+s-3), & -1< t< s\leq1. \end{cases} $$
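One can verify numerically (our own check, using central finite differences in t) that this \(G_{1}\) satisfies the two-point conditions of problem (4.8) and the jump condition (G4) with \(a_{n}=1\).

```python
# Sketch (ours): check G1 for problem (4.8), u''' = v with
# u(-1) = u''(1), u'(-1) = u'(1), u''(-1) = u(1).

def G1(t, s):
    if s <= t:
        return -0.25 * (s - t) * (s * (t - 1.0) + t - 3.0)
    return -0.25 * (s - t) * ((s - 1.0) * t + s - 3.0)

def d1(f, t, s, h=1e-5):
    return (f(t + h, s) - f(t - h, s)) / (2.0 * h)

def d2(f, t, s, h=1e-4):
    return (f(t + h, s) - 2.0 * f(t, s) + f(t - h, s)) / (h * h)

s = 0.4
bc1 = G1(-1.0, s) - d2(G1, 1.0, s)       # u(-1) - u''(1)
bc2 = d1(G1, -1.0, s) - d1(G1, 1.0, s)   # u'(-1) - u'(1)
bc3 = d2(G1, -1.0, s) - G1(1.0, s)       # u''(-1) - u(1)
print(all(abs(b) < 1e-6 for b in (bc1, bc2, bc3)))   # True

# (G4): the second t-derivative jumps by 1/a_n = 1 across s = t
t = 0.4
jump = d2(G1, t, t - 1e-3) - d2(G1, t, t + 1e-3)
print(abs(jump - 1.0) < 1e-2)   # True
```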
Hence, the Green’s function G for problem (4.7) is given by
$$\begin{aligned} G(t,s) =& \int_{-1}^{1}G_{1}(t,r)G_{2}(r,s) \,\mathrm{d}r \\ =&\frac {1}{480} \textstyle\begin{cases} 2 s^{5} (t+1)-5 s^{4} (t (t+2)+3) \\ \quad {}+20 s^{3} t (t+3)-5 s^{2} (t^{2} (t+2)^{2}-5 ) \\ \quad {}+2 s t (t^{2} (t (t+5)+30)-166 )-2 t^{5} \\ \quad {}-15 t^{4}+25 t^{2}-102, & -1< t< s\leq1, \\ -2 s^{5}-15 s^{4}-5 (s^{2} (s+2)^{2}-5 ) t^{2} \\ \quad {}+2 (s^{2} (s (s+5)+30)-166 ) s t \\ \quad {}+25 s^{2}+2 (s+1) t^{5}-5 (s (s+2)+3) t^{4} \\ \quad {}+20 (s+3) s t^{3}-102, & -1\leq s\leq t\leq1. \end{cases}\displaystyle \end{aligned}$$
Therefore, using Theorem 2.5, the Green’s function for problem (4.6) is
$$\begin{aligned} \overline{G}(t,s) =&R_{\vdash}G(t,s) =\frac{\partial^{3} G}{\partial t^{3}}(t,s)+G(-t,s)+G(t,s) \\ =&\frac{1}{120} \textstyle\begin{cases} -(s-1) t^{5}+10 (s-3) s t^{3}+30 (s-1) t^{2}-30 (s-3) s \\ \quad {}- (s^{5}-5 s^{4}+30 s^{3}+30 s^{2}-226 s+90 ) t, & -1\le|t|\le s\le1, \\ s^{5} (-(t-1))+10 s^{3} (t-3) t-30 s^{2} (t-1)+30 (t-3) t \\ \quad {}+s (-t^{5}+5 t^{4}-30 t^{3}+30 t^{2}+106 t+90 ), & -1\le|s|< t\le1, \\ s^{5} (-(t+1))-10 s^{3} t (t+3)-30 s^{2} (t+1)-30 t (t+3) \\ \quad {}-s (t^{5}+5 t^{4}+30 t^{3}-30 t^{2}-226 t-90 ), & -1\le|s|< -t\le1, \\ -(s+1) t^{5}-10 s (s+3) t^{3}+30 (s+1) t^{2}+30 s (s+3) \\ \quad {}- (s^{5}+5 s^{4}+30 s^{3}+30 s^{2}-106 s+90 ) t, &-1\le|t|\le-s\le1. \end{cases}\displaystyle \end{aligned}$$
Hence, the solution of problem (4.6) is given by
$$\begin{aligned} u(t) =& \int_{-1}^{1}\overline{G}(t,s)\sin(s)\,\mathrm{d}s \\ =& -\frac{1}{60} \bigl(-30 - 91 t - 30 t^{2} + 10 t^{3} + t^{5} \bigr) \sin (1) \\ &{} +\frac{2}{3} \bigl(t^{3}-7 t-3 \bigr) \cos(1)+2 \sin (t)+ \cos(t). \end{aligned}$$
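As a final sanity check (ours, by finite differences rather than the symbolic computation of [17]), this closed-form solution can be verified against the original problem (4.6): the residual of \(u'''(t)+u(-t)+u(t)-\sin t\) and the three two-point conditions all vanish up to discretization error.

```python
# Sketch (ours): finite-difference verification of the solution of (4.6).
import math

def u(t):
    return (-(-30 - 91*t - 30*t**2 + 10*t**3 + t**5) * math.sin(1.0) / 60.0
            + 2.0 * (t**3 - 7.0*t - 3.0) * math.cos(1.0) / 3.0
            + 2.0 * math.sin(t) + math.cos(t))

def d1(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

def d3(f, t, h=1e-3):
    return (f(t + 2*h) - 2*f(t + h) + 2*f(t - h) - f(t - 2*h)) / (2.0 * h**3)

# differential equation with reflection: u'''(t) + u(-t) + u(t) = sin t
residual = max(abs(d3(u, t) + u(-t) + u(t) - math.sin(t))
               for t in (-0.7, -0.2, 0.1, 0.5, 0.9))
print(residual < 1e-4)   # True

# boundary conditions u(-1) = u''(1), u'(-1) = u'(1), u''(-1) = u(1)
print(abs(u(-1.0) - d2(u, 1.0)) < 1e-4,
      abs(d1(u, -1.0) - d1(u, 1.0)) < 1e-4,
      abs(d2(u, -1.0) - u(1.0)) < 1e-4)   # all True
```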

Computationally, this procedure offers a significant advantage: it is always easier to obtain the Green’s function for two nth-order problems than for one (2n)th-order problem. Furthermore, if the hypotheses of Lemma 3.1 are satisfied and we are able to obtain a factorization of the aforementioned kind using q and \(q_{-}\) in place of \(L_{1}\) and \(L_{2}\), then we have an extra advantage: the differential equation given by \(q_{-}\) is the adjoint equation of that given by q multiplied by the factor \((-1)^{n}\). This fact, together with the following result (which can be found, although not stated as in this work, in [18]), shows that in this case it may be possible to solve problem (2.4) by computing the Green’s function of only one nth-order problem.

Theorem 4.4

Consider an interval \(J=[a,b]\subset{\mathbb{R}}\), functions \(\sigma ,a_{i}\in\operatorname{L^{1}}(J)\), \(i=1,\ldots,n\), real numbers \(\alpha _{ij}\), \(\beta_{ij}\), \(h_{i}\), \(i=1,\ldots, n\), \(j=0,\ldots,n-1\), a vector subspace \(D(L_{n})\subset W^{n,1}(J)\), the operator
$$L_{n}u(t)=a_{0}u^{(n)}(t)+a_{1}(t)u^{(n-1)}(t)+ \cdots +a_{n-1}(t)u'(t)+a_{n}(t)u(t), \quad t\in J, u\in D(L_{n}), $$
with \(a_{0}=1\), and the problem
$$ L_{n}u(t)=\sigma(t), \quad t\in J,\qquad U_{i}(u)=h_{i},\quad i=1,\ldots,n, $$
$$U_{i}(u):=\sum_{j=0}^{n-1} \bigl( \alpha_{ij}u^{(j)}(a)+\beta _{ij}u^{(j)}(b) \bigr),\quad i=1,\ldots,n. $$
Then, the associated adjoint problem is determined by the operator
$$ L_{n}^{\dagger}v(t)=\sum _{j=0}^{n}(-1)^{j} \bigl(a_{n-j}v \bigr)^{(j)}(t), \quad t\in J, v\in D\bigl(L_{n}^{\dagger}\bigr), $$
$$D\bigl(L_{n}^{\dagger}\bigr)= \Biggl\{ v\in W^{n,2}(J) : \Biggl[\sum_{j=1}^{n} \sum_{i=0}^{j-1}(-1)^{j-i-1}(a_{n-j}v)^{(j-i-1)}u^{(i)} \Biggr]_{t=a}^{t=b}=0 \text{ for all } u\in D(L_{n}) \Biggr\} . $$
Furthermore, if \(G(t,s)\) is the Green’s function of problem (4.10), then that associated with problem (4.11) is \(G(s,t)\).
Hence, if we can decompose problem (2.5)-(2.6)-(2.7) into two mutually adjoint problems of the form (4.1)-(4.2), then its Green’s function is
$$ G(t,s)= \int_{-T}^{T}G_{1}(t,r)G_{2}(r,s) \,\mathrm{d}r= \int _{-T}^{T}G_{1}(t,r)G_{1}(s,r) \,\mathrm{d}r, $$
where \(G_{1}\) is the Green’s function of (4.1), and \(G_{2}(t,s)=G_{1}(s,t)\) is that of (4.2). We note, though, that unless the operator \(q_{-}\) is \((-1)^{n}\) times the adjoint of q, the boundary conditions may not be the adjoint ones.

Example 4.5

Consider the problem
$$ u'(-t)+u(t)+\sqrt{2} u(-t)=f(t):=e^{t}, \quad t\in [-1,1], \qquad u(-1)=u(1). $$
Taking \(R=\varphi^{*}D+\sqrt{2}\varphi^{*}-\mathrm{Id}\) and composing problem (4.12) with this operator, we obtain the reduced problem
$$ -u''(t)+u(t)=Rf(t), \quad t\in[-1,1], \qquad u(-1)=u(1),\qquad u'(-1)=u'(1). $$
Indeed, a direct computation shows that \(RL=\operatorname{Id}-D^{2}\). Problem (4.13) is equivalent to the factored system
$$\begin{aligned}& u'(t)+u(t) =v(t), \qquad u(-1) =u(1), \end{aligned}$$
$$\begin{aligned}& -v'(t)+v(t) =Rf(t), \qquad v(-1) =v(1) \end{aligned}$$
for \(t\in[-1,1]\). Observe that problem (4.15) is the adjoint problem of (4.14). Since the Green’s function of problem (4.14) is given by
$$G_{1}(t,s):= \textstyle\begin{cases} \frac{e^{s-t+2}}{e^{2}-1}, & -1\leq s\leq t\leq1, \\ \frac{e^{s-t}}{e^{2}-1}, & -1< t< s\leq1, \end{cases} $$
and, therefore, \(G_{1}(s,t)\) is the Green’s function of problem (4.15), the Green’s function of problem (4.13) is
$$G(t,s)= \int_{-1}^{1}G_{1}(t,r)G_{1}(s,r) \,\mathrm{d}r= \textstyle\begin{cases} \frac{e^{s-t+2}+e^{t-s}}{2 e^{2}-2}, & -1\leq s\leq t\leq1, \\ \frac{e^{s-t}+e^{-s+t+2}}{2 e^{2}-2}, & -1< t< s\leq1. \end{cases} $$
Finally, the Green’s function of problem (4.12) is
$$\begin{aligned} \overline{G}(t,s) & =R_{\vdash}G(t,s) =\frac{\partial G}{\partial t}(-t,s)+\sqrt{2} G(-t,s)-G(t,s) \\ & = \textstyle\begin{cases} \frac{e^{-s-t} [ (\sqrt{2}-1 )e^{2 (s+t+1)}-e^{2 s+2}-e^{2 t}+\sqrt{2}+1 ]}{2 (e^{2}-1 )}, & |t|\le-s, \\ \frac{e^{-s-t} [ (\sqrt{2}-1 )e^{2 (s+t)}-e^{2 s+2}-e^{2 t}+ (1+\sqrt{2} ) e^{2} ]}{2 (e^{2}-1 )}, & |s|< t, \\ \frac{e^{-s-t} [ (\sqrt{2}-1 )e^{2 (s+t+1)}-e^{2 s}-e^{2 t+2}+\sqrt{2}+1 ]}{2 (e^{2}-1 )}, & |s|< -t, \\ \frac{e^{-s-t} [ (\sqrt{2}-1 )e^{2 (s+t)}-e^{2 s}-e^{2 t+2}+ (1+\sqrt{2} ) e^{2} ]}{2 (e^{2}-1 )}, & |t|\le s. \end{cases}\displaystyle \end{aligned}$$
Hence, the solution of problem (4.12) is
$$\begin{aligned}& u(t) \\& \quad = \frac{e^{-t} (-2 (1+\sqrt{2} ) t+e^{2} (2 (1+\sqrt{2} ) t+3 \sqrt{2} )+e^{2 t} (-2 t+e^{2} (2 t+\sqrt{2}-4 )-\sqrt{2} )+\sqrt{2}+4 )}{4 (e^{2}-1 )}. \end{aligned}$$
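As a sanity check, the closed form above can be verified directly against problem (4.12). The following short sketch (ours, using only the standard library) transcribes \(u\) and evaluates the residual \(u'(-t)+u(t)+\sqrt{2}\,u(-t)\), approximating the derivative by a central finite difference; it should reproduce \(f(t)=e^{t}\), and the boundary values at \(\pm1\) should agree.

```python
import math

SQ2 = math.sqrt(2.0)
E2 = math.e ** 2  # e^2

def u(t):
    """Closed-form solution of u'(-t) + u(t) + sqrt(2)*u(-t) = e^t,
    u(-1) = u(1), transcribed from the display above."""
    inner = (-2 * (1 + SQ2) * t
             + E2 * (2 * (1 + SQ2) * t + 3 * SQ2)
             + math.exp(2 * t) * (-2 * t + E2 * (2 * t + SQ2 - 4) - SQ2)
             + SQ2 + 4)
    return math.exp(-t) * inner / (4 * (E2 - 1))

def residual(t, h=1e-6):
    """Evaluate u'(-t) + u(t) + sqrt(2)*u(-t), approximating the
    derivative at -t by a central difference of step h."""
    du_at_minus_t = (u(-t + h) - u(-t - h)) / (2 * h)
    return du_at_minus_t + u(t) + SQ2 * u(-t)
```

The residual agrees with \(e^{t}\) up to the finite-difference error (about \(10^{-10}\) here), and \(u(-1)=u(1)\) holds to machine precision.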



The author is grateful to the anonymous referee for helping to improve the manuscript, especially the proof of Lemma 3.1. This work was partially supported by Consellería de Cultura, Educación e Ordenación Universitaria, Xunta de Galicia, Spain, project EM2014/032, and by an FPU scholarship, Ministerio de Educación, Cultura y Deporte, Spain.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Departamento de Análise Matemática, Facultade de Matemáticas, Universidade de Santiago de Compostela, Santiago de Compostela, Spain


  1. Post, S, Vinet, L, Zhedanov, A: Supersymmetric quantum mechanics with reflections. J. Phys. A, Math. Theor. 44(43), 435301 (2011)
  2. Roychoudhury, R, Roy, B, Dube, PP: Non-Hermitian oscillator and R-deformed Heisenberg algebra. J. Math. Phys. 54(1), 012104 (2013)
  3. Gamboa, J, Plyushchay, M, Zanelli, J: Three aspects of bosonized supersymmetry and linear differential field equation with reflection. Nucl. Phys. B 543(1), 447-465 (1999)
  4. Cabada, A, Infante, G, Tojo, FAF: Nontrivial solutions of Hammerstein integral equations with reflections. Bound. Value Probl. 2013, 86 (2013)
  5. Piao, D, Sun, J: Besicovitch almost periodic solutions for a class of second order differential equations involving reflection of the argument. Electron. J. Qual. Theory Differ. Equ. 2014, 41 (2014)
  6. Piao, D, Xin, N: Bounded and almost periodic solutions for second order differential equation involving reflection of the argument (2013). arXiv:1302.0616
  7. Kritskov, L, Sarsenbi, A: Spectral properties of a nonlocal problem for a second-order differential equation with an involution. Differ. Equ. 51(8), 984-990 (2015)
  8. Kritskov, LV, Sarsenbi, AM: Basicity in \(L_{p}\) of root functions for differential equations with involution. Electron. J. Differ. Equ. 2015, 278 (2015)
  9. Ashyralyev, A, Sarsenbi, AM: Well-posedness of an elliptic equation with involution. Electron. J. Differ. Equ. 2015, 284 (2015)
  10. Sarsenbi, A: The Green’s function of the second order differential operator with an involution and its application. AIP Conf. Proc. 1676, 020010 (2015)
  11. Sarsenbi, AA: Green’s function of the second-order differential operator with involution from boundary conditions of Neumann. AIP Conf. Proc. 1676, 020074 (2015)
  12. Cabada, A, Tojo, FAF: Comparison results for first order linear operators with reflection and periodic boundary value conditions. Nonlinear Anal. 78, 32-46 (2013)
  13. Cabada, A, Tojo, FAF: Solutions of the first order linear equation with reflection and general linear conditions. Glob. J. Math. Sci. 2(1), 1-8 (2013)
  14. Cabada, A, Tojo, FAF: Existence results for a linear equation with reflection, non-constant coefficient and periodic boundary conditions. J. Math. Anal. Appl. 412(1), 529-546 (2014)
  15. Cabada, A, Tojo, FAF: Solutions and Green’s function of the first order linear equation with reflection and initial conditions. Bound. Value Probl. 2014, 99 (2014)
  16. Cabada, A, Tojo, FAF: Green’s functions for reducible functional differential equations. Bull. Malays. Math. Sci. Soc., 1-22 (2016)
  17. Tojo, FAF, Cabada, A, Cid, JA, Máquez-Villamarín, B: Green’s functions with reflection (2014)
  18. Cabada, A: Green’s Functions in the Theory of Ordinary Differential Equations. Springer, Berlin (2014)
  19. Cabada, A, Cid, JÁ: On the sign of the Green’s function associated to Hill’s equation with an indefinite potential. Appl. Math. Comput. 205(1), 303-308 (2008)
  20. Yang, L, Xia, B: Explicit criterion to determine the number of positive roots of a polynomial. MM Res. Prepr. 15, 134-145 (1997)
  21. Yang, L, Hou, XR, Zeng, ZB: A complete discrimination system for polynomials. Sci. China Ser. E 39(6), 628-646 (1996)
  22. Liang, S, Zhang, J: A complete discrimination system for polynomials with complex coefficients and its automatic generation. Sci. China Ser. E 42(2), 113-128 (1999)
  23. Cabada, A, Cid, JA, Máquez-Villamarín, B: Green’s functions computation (2014)