2.1 The n-th order problem
Consider the following n-th order DEI with involution:
$Lu:=\sum _{k=0}^{n}[a_{k}u^{(k)}(-t)+b_{k}u^{(k)}(t)]=h(t),\quad t\in \mathbb{R};\qquad u(t_{0})=c,$
(2.1)
where $h\in L_{\mathrm{loc}}^{1}(\mathbb{R})$, $t_{0},c,a_{k},b_{k}\in \mathbb{R}$ for $k=0,\dots ,n-1$; $a_{n}=0$; $b_{n}=1$. A solution of this problem will be a function $u\in W_{\mathrm{loc}}^{n,1}(\mathbb{R})$, that is, u is n times differentiable in the sense of distributions and each of the derivatives satisfies $u^{(k)}|_{K}\in L^{1}(K)$ for every compact set $K\subset \mathbb{R}$.
Theorem 2.1 Assume that there exist functions $\tilde{u}$ and $\tilde{v}$ satisfying
$\sum _{i=0}^{n-j}\binom{i+j}{j}[(-1)^{n+i-1}a_{i+j}\tilde{u}^{(i)}(-t)+b_{i+j}\tilde{u}^{(i)}(t)]=0,\quad t\in \mathbb{R};\ j=0,\dots ,n-1,$
(2.2)
$\sum _{i=0}^{n-j}\binom{i+j}{j}[(-1)^{n+i}a_{i+j}\tilde{v}^{(i)}(-t)+b_{i+j}\tilde{v}^{(i)}(t)]=0,\quad t\in \mathbb{R};\ j=0,\dots ,n-1,$
(2.3)
$(\tilde{u}_{e}\tilde{v}_{e}-\tilde{u}_{o}\tilde{v}_{o})(t)\ne 0,\quad t\in \mathbb{R},$
(2.4)
and also one of the following:
$\begin{array}{rl}(\mathrm{h}1)& L\tilde{u}=0\quad \text{and}\quad \tilde{u}(t_{0})\ne 0,\\ (\mathrm{h}2)& L\tilde{v}=0\quad \text{and}\quad \tilde{v}(t_{0})\ne 0,\\ (\mathrm{h}3)& a_{0}+b_{0}\ne 0\quad \text{and}\quad \frac{a_{0}+b_{0}}{(n-1)!}\int_{0}^{t_{0}}(t_{0}-s)^{n-1}\frac{\tilde{v}(t_{0})\tilde{u}_{e}(s)-\tilde{u}(t_{0})\tilde{v}_{o}(s)}{(\tilde{u}_{e}\tilde{v}_{e}-\tilde{u}_{o}\tilde{v}_{o})(s)}\,\mathrm{d}s\ne 1.\end{array}$
Then problem (2.1) has a solution.
Proof Define
$\phi :=\frac{h_{o}\tilde{v}_{e}-h_{e}\tilde{v}_{o}}{\tilde{u}_{e}\tilde{v}_{e}-\tilde{u}_{o}\tilde{v}_{o}}\quad \text{and}\quad \psi :=\frac{h_{e}\tilde{u}_{e}-h_{o}\tilde{u}_{o}}{\tilde{u}_{e}\tilde{v}_{e}-\tilde{u}_{o}\tilde{v}_{o}}.$
Observe that φ is odd, ψ is even and $h=\phi \tilde{u}+\psi \tilde{v}$. So, in order to ensure the existence of a solution of problem (2.1), it is enough to find y and z such that $Ly=\phi \tilde{u}$ and $Lz=\psi \tilde{v}$, for, in that case, defining $u=y+z$, we can conclude that $Lu=h$. We will deal with the initial condition later on.
Take $y=\tilde{\phi}\tilde{u}$, where
$\tilde{\phi}(t):=\int_{0}^{t}\int_{0}^{s_{n}}\cdots \int_{0}^{s_{2}}\phi (s_{1})\,\mathrm{d}s_{1}\stackrel{n}{\cdots}\,\mathrm{d}s_{n}=\frac{1}{(n-1)!}\int_{0}^{t}(t-s)^{n-1}\phi (s)\,\mathrm{d}s.$
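The equality between the n nested integrals and the single-integral form (Cauchy's formula for repeated integration) can be checked numerically. The sketch below is a minimal plain-Python illustration; the choices φ = sin (an odd function, as φ is in the proof) and n = 3 are purely for the test.

```python
import math

def nested_integral(phi, t, n, m=60):
    """n-fold iterated integral of phi starting at 0, evaluated at t,
    computed recursively with the composite midpoint rule."""
    if n == 0:
        return phi(t)
    step = t / m
    return sum(nested_integral(phi, (k + 0.5) * step, n - 1, m)
               for k in range(m)) * step

def cauchy_formula(phi, t, n, m=4000):
    """Single-integral form: (1/(n-1)!) * integral_0^t (t-s)**(n-1) phi(s) ds."""
    step = t / m
    return sum((t - (k + 0.5) * step) ** (n - 1) * phi((k + 0.5) * step)
               for k in range(m)) * step / math.factorial(n - 1)

phi = math.sin
t, n = 1.5, 3
lhs = nested_integral(phi, t, n)   # three nested midpoint integrations
rhs = cauchy_formula(phi, t, n)    # exact value here is (t**2 + 2*cos(t) - 2)/2
print(lhs, rhs)
```

Both quadratures agree with the closed-form antiderivative up to discretization error.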
Observe that $\tilde{\phi}$ is even if n is odd and odd if n is even. In particular, we have
$\tilde{\phi}^{(j)}(-t)=(-1)^{j+n-1}\tilde{\phi}^{(j)}(t),\quad j=0,\dots ,n.$
Thus,
$\begin{array}{rl}Ly(t)& =\sum _{k=0}^{n}[a_{k}(\tilde{\phi}\tilde{u})^{(k)}(-t)+b_{k}(\tilde{\phi}\tilde{u})^{(k)}(t)]\\ & =\sum _{k=0}^{n}\sum _{j=0}^{k}\binom{k}{j}[(-1)^{k}a_{k}\tilde{\phi}^{(j)}(-t)\tilde{u}^{(k-j)}(-t)+b_{k}\tilde{\phi}^{(j)}(t)\tilde{u}^{(k-j)}(t)]\\ & =\sum _{k=0}^{n}\sum _{j=0}^{k}\binom{k}{j}\tilde{\phi}^{(j)}(t)[(-1)^{k+j+n-1}a_{k}\tilde{u}^{(k-j)}(-t)+b_{k}\tilde{u}^{(k-j)}(t)]\\ & =\sum _{j=0}^{n}\tilde{\phi}^{(j)}(t)\sum _{k=j}^{n}\binom{k}{j}[(-1)^{k+j+n-1}a_{k}\tilde{u}^{(k-j)}(-t)+b_{k}\tilde{u}^{(k-j)}(t)]\\ & =\sum _{j=0}^{n}\tilde{\phi}^{(j)}(t)\sum _{i=0}^{n-j}\binom{i+j}{j}[(-1)^{i+n-1}a_{i+j}\tilde{u}^{(i)}(-t)+b_{i+j}\tilde{u}^{(i)}(t)]=\tilde{\phi}^{(n)}(t)\tilde{u}(t)=\phi (t)\tilde{u}(t).\end{array}$
Hence, $Ly=\phi \tilde{u}$.
In the same way, by taking $z=\tilde{\psi}\tilde{v}$ with $\tilde{\psi}(t):=\frac{1}{(n-1)!}\int_{0}^{t}(t-s)^{n-1}\psi (s)\,\mathrm{d}s$, we have $Lz=\psi \tilde{v}$.
Hence, defining $\overline{u}:=y+z=\tilde{\phi}\tilde{u}+\tilde{\psi}\tilde{v}$ we find that $\overline{u}$ satisfies $L\overline{u}=h$ and $\overline{u}(0)=0$.
If we assume (h1), $w=\overline{u}+\frac{c-\overline{u}(t_{0})}{\tilde{u}(t_{0})}\tilde{u}$ is clearly a solution of problem (2.1).
When (h2) is fulfilled, a solution of problem (2.1) is given by $w=\overline{u}+\frac{c-\overline{u}(t_{0})}{\tilde{v}(t_{0})}\tilde{v}$.
If (h3) holds, using the aforementioned construction we can find $w_{1}$ such that $Lw_{1}=1$ and $w_{1}(0)=0$. Now, $w_{2}:=w_{1}-1/(a_{0}+b_{0})$ satisfies $Lw_{2}=0$. Observe that the second part of condition (h3) is precisely $w_{2}(t_{0})\ne 0$, and hence, defining $w=\overline{u}+\frac{c-\overline{u}(t_{0})}{w_{2}(t_{0})}w_{2}$, we see that w is a solution of problem (2.1). □
Remark 2.1 Having in mind condition (h1) in Theorem 2.1, it is immediate to verify that $L\tilde{u}=0$ provided that
$a_{i}=0\quad \text{for all }i\in \{0,\dots ,n-1\}\text{ such that }n+i\text{ is even}.$
In an analogous way for (h2), one can show that $L\tilde{v}=0$ when
$a_{i}=0\quad \text{for all }i\in \{0,\dots ,n-1\}\text{ such that }n+i\text{ is odd}.$
2.2 The first order problem
After proving the general result for the n-th order case, we now concentrate on the first order problem
$u'(t)+au(-t)+bu(t)=h(t),\quad \text{for a.e. }t\in \mathbb{R};\qquad u(t_{0})=c,$
(2.5)
with $h\in {L}_{\mathrm{loc}}^{1}(\mathbb{R})$ and ${t}_{0},a,b,c\in \mathbb{R}$. A solution of this problem will be $u\in {W}_{\mathrm{loc}}^{1,1}(\mathbb{R})$.
In order to do so, we first study the homogeneous equation
$u'(t)+au(-t)+bu(t)=0,\quad t\in \mathbb{R}.$
(2.6)
By differentiating (2.6) and substituting $u'(t)$ and $u'(-t)$ from (2.6) itself, we arrive at the equation
$u''(t)+(a^{2}-b^{2})u(t)=0,\quad t\in \mathbb{R}.$
(2.7)
Let $\omega :=\sqrt{|a^{2}-b^{2}|}$. Equation (2.7) presents three different cases:
(C1) $a^{2}>b^{2}$. In such a case, $u(t)=\alpha \cos \omega t+\beta \sin \omega t$ is a solution of (2.7) for every $\alpha ,\beta \in \mathbb{R}$. If we impose (2.6) on this expression we arrive at the general solution
$u(t)=\alpha \Big(\cos \omega t-\frac{a+b}{\omega}\sin \omega t\Big)$
of (2.6) with $\alpha \in \mathbb{R}$.
(C2) $a^{2}<b^{2}$. Now, $u(t)=\alpha \cosh \omega t+\beta \sinh \omega t$ is a solution of (2.7) for every $\alpha ,\beta \in \mathbb{R}$. Imposing (2.6) we arrive at the general solution
$u(t)=\alpha \Big(\cosh \omega t-\frac{a+b}{\omega}\sinh \omega t\Big)$
of (2.6) with $\alpha \in \mathbb{R}$.
(C3) $a^{2}=b^{2}$. In this case, $u(t)=\alpha t+\beta$ is a solution of (2.7) for every $\alpha ,\beta \in \mathbb{R}$. So, (2.6) holds provided that one of the two following cases is fulfilled:
(C3.1) $b=a$, in which case
$u(t)=\alpha (1-2at)$
is the general solution of (2.6) with $\alpha \in \mathbb{R}$, and
(C3.2) $b=-a$, in which case
$u(t)=\alpha$
is the general solution of (2.6) with $\alpha \in \mathbb{R}$.
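The general solutions in the three cases can be verified by substituting them back into (2.6). The sketch below does this numerically with illustrative sample coefficients; the closed forms used for (C3.1) and (C3.2), $u(t)=\alpha(1-2at)$ and $u(t)=\alpha$, are the ones consistent with Remark 2.2 and Lemma 2.4.

```python
import math

def residual(a, b, u, du, ts=(-1.5, -0.4, 0.2, 1.1)):
    """Max residual of u'(t) + a*u(-t) + b*u(t) over sample points."""
    return max(abs(du(t) + a * u(-t) + b * u(t)) for t in ts)

# (C1): a**2 > b**2
a, b = 2.0, 1.0
w = math.sqrt(a * a - b * b)
r1 = residual(a, b,
              lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t),
              lambda t: -w * math.sin(w * t) - (a + b) * math.cos(w * t))

# (C2): a**2 < b**2
a, b = 1.0, 2.0
w = math.sqrt(b * b - a * a)
r2 = residual(a, b,
              lambda t: math.cosh(w * t) - (a + b) / w * math.sinh(w * t),
              lambda t: w * math.sinh(w * t) - (a + b) * math.cosh(w * t))

# (C3.1): b = a, u(t) = 1 - 2*a*t ; (C3.2): b = -a, u constant
a = 0.7
r3 = residual(a, a, lambda t: 1 - 2 * a * t, lambda t: -2 * a)
r4 = residual(a, -a, lambda t: 1.0, lambda t: 0.0)
print(r1, r2, r3, r4)
```

All four residuals vanish up to floating-point rounding.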
Now, in accordance with Theorem 2.1, we take $\tilde{u}$ and $\tilde{v}$ to be the functions satisfying
$\tilde{u}'(t)+a\tilde{u}(-t)+b\tilde{u}(t)=0,\qquad \tilde{u}(0)=1,$
(2.8)
$\tilde{v}'(t)-a\tilde{v}(-t)+b\tilde{v}(t)=0,\qquad \tilde{v}(0)=1.$
(2.9)
Observe that $\tilde{u}$ and $\tilde{v}$ can be obtained from the explicit expressions of the cases (C1)-(C3) by taking $\alpha =1$.
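As a quick numerical sanity check in case (C1), the explicit expressions with $\alpha = 1$ indeed satisfy (2.8) and (2.9); $\tilde{v}$ is the same formula with a replaced by −a. The coefficients below are illustrative.

```python
import math

a, b = 2.0, 1.0                     # sample coefficients with a**2 > b**2
w = math.sqrt(a * a - b * b)
ut = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)   # u~, alpha = 1
dut = lambda t: -w * math.sin(w * t) - (a + b) * math.cos(w * t)
vt = lambda t: math.cos(w * t) - (b - a) / w * math.sin(w * t)   # v~: a -> -a
dvt = lambda t: -w * math.sin(w * t) - (b - a) * math.cos(w * t)

ts = (-1.3, -0.2, 0.6, 1.7)
r_u = max(abs(dut(t) + a * ut(-t) + b * ut(t)) for t in ts)   # residual of (2.8)
r_v = max(abs(dvt(t) - a * vt(-t) + b * vt(t)) for t in ts)   # residual of (2.9)
print(ut(0.0), vt(0.0), r_u, r_v)
```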
Remark 2.2 Note that if $\tilde{u}$ is in the case (C3.1), then $\tilde{v}$ is in the case (C3.2) and vice versa.
We have now the following properties of functions $\tilde{u}$ and $\tilde{v}$.
Lemma 2.2 For every $t,s\in \mathbb{R}$,
the following properties hold.
 (I)
$\tilde{u}_{e}\equiv \tilde{v}_{e}$ and $\tilde{u}_{o}\equiv k\tilde{v}_{o}$ or $\tilde{v}_{o}\equiv k\tilde{u}_{o}$ for some real constant k,
 (II)
${\tilde{u}}_{e}(s){\tilde{v}}_{e}(t)={\tilde{u}}_{e}(t){\tilde{v}}_{e}(s)$, ${\tilde{u}}_{o}(s){\tilde{v}}_{o}(t)={\tilde{u}}_{o}(t){\tilde{v}}_{o}(s)$,
 (III)
$\tilde{u}_{e}\tilde{v}_{e}-\tilde{u}_{o}\tilde{v}_{o}\equiv 1$,
 (IV)
$\tilde{u}(s)\tilde{v}(-s)+\tilde{u}(-s)\tilde{v}(s)=2[\tilde{u}_{e}(s)\tilde{v}_{e}(s)-\tilde{u}_{o}(s)\tilde{v}_{o}(s)]=2$.
Proof (I) and (III) can be checked by inspection of the different cases. (II) is a direct consequence of (I). (IV) is obtained from the definition of even and odd parts and (III). □
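Properties (III) and (IV) can also be confirmed numerically; the minimal sketch below uses arbitrary sample coefficients in case (C1).

```python
import math

a, b = 2.0, 0.5                     # arbitrary coefficients with a**2 > b**2
w = math.sqrt(a * a - b * b)
ut = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)   # u~ of (2.8)
vt = lambda t: math.cos(w * t) - (b - a) / w * math.sin(w * t)   # v~ of (2.9)

even = lambda f, t: (f(t) + f(-t)) / 2
odd = lambda f, t: (f(t) - f(-t)) / 2

t, s = 0.8, -1.3
iii = even(ut, t) * even(vt, t) - odd(ut, t) * odd(vt, t)   # property (III): == 1
iv = ut(s) * vt(-s) + ut(-s) * vt(s)                        # property (IV): == 2
print(iii, iv)
```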
Now, Theorem 2.1 has the following corollary.
Corollary 2.3 Problem (2.5) has a unique solution if and only if $\tilde{u}({t}_{0})\ne 0$.
Proof Considering Lemma 2.2(III), $\tilde{u}$ and $\tilde{v}$, defined as in (2.8) and (2.9), respectively, satisfy the hypotheses of Theorem 2.1 together with (h1); therefore a solution exists.
Now, assume $w_{1}$ and $w_{2}$ are two solutions of (2.5). Then $w_{2}-w_{1}$ is a solution of (2.6). Hence, $w_{2}-w_{1}$ is of one of the forms covered in the cases (C1)-(C3) and, in any case, a multiple of $\tilde{u}$, that is, $w_{2}-w_{1}=\lambda \tilde{u}$ for some $\lambda \in \mathbb{R}$. Also, it is clear that $(w_{2}-w_{1})(t_{0})=0$, but we have $\tilde{u}(t_{0})\ne 0$ as a hypothesis, therefore $\lambda =0$ and $w_{1}=w_{2}$. That is, problem (2.5) has a unique solution.
Assume now that w is a solution of (2.5) and $\tilde{u}({t}_{0})=0$. Then $w+\lambda \tilde{u}$ is also a solution of (2.5) for every $\lambda \in \mathbb{R}$, which proves the result. □
This last result raises an obvious question: under which circumstances is $\tilde{u}(t_{0})\ne 0$? In order to answer this question, it is enough to study the cases (C1)-(C3). We summarize this study in the following lemma, which can be checked easily.
Lemma 2.4 $\tilde{u}(t_{0})=0$ only in the following cases:
(1) if $a^{2}>b^{2}$ and $t_{0}=\frac{1}{\omega}(\arctan \frac{\omega}{a+b}+k\pi )$ for some $k\in \mathbb{Z}$,
(2) if $a^{2}<b^{2}$, $ab>0$ and $t_{0}=\frac{1}{\omega}\operatorname{arctanh}\frac{\omega}{a+b}$,
(3) if $a=b$ and $t_{0}=\frac{1}{2a}$.
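The three zero locations of Lemma 2.4 can be checked directly against the explicit expressions of $\tilde{u}$ (cases (C1)-(C3) with $\alpha = 1$). Coefficients below are illustrative; arctanh is `math.atanh` in Python.

```python
import math

# (C1): a**2 > b**2, zero at t0 = (arctan(w/(a+b)) + k*pi)/w, here with k = 1
a, b = 2.0, 1.0
w = math.sqrt(a * a - b * b)
t0 = (math.atan(w / (a + b)) + math.pi) / w
u1 = math.cos(w * t0) - (a + b) / w * math.sin(w * t0)

# (C2): a**2 < b**2 with a*b > 0, zero at t0 = arctanh(w/(a+b))/w
a2, b2 = 1.0, 2.0
w2 = math.sqrt(b2 * b2 - a2 * a2)
t2 = math.atanh(w2 / (a2 + b2)) / w2          # w2/(a2+b2) < 1, so this exists
u2 = math.cosh(w2 * t2) - (a2 + b2) / w2 * math.sinh(w2 * t2)

# (C3.1): a = b, u~(t) = 1 - 2*a*t vanishes at t0 = 1/(2a)
a3 = 0.7
u3 = 1 - 2 * a3 * (1 / (2 * a3))

print(u1, u2, u3)
```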
Definition 2.1 Let
${t}_{1},{t}_{2}\in \mathbb{R}$. We define the
oriented characteristic function of the pair
$({t}_{1},{t}_{2})$ as
$\chi_{t_{1}}^{t_{2}}(t):=\begin{cases}1,& t_{1}\le t\le t_{2},\\ -1,& t_{2}\le t<t_{1},\\ 0,& \text{otherwise}.\end{cases}$
Remark 2.3 The previous definition implies that, for any given integrable function
$f:\mathbb{R}\to \mathbb{R}$,
$\int_{t_{1}}^{t_{2}}f(s)\,\mathrm{d}s=\int_{-\infty}^{\infty}\chi_{t_{1}}^{t_{2}}(s)f(s)\,\mathrm{d}s.$
Also, $\chi_{t_{1}}^{t_{2}}=-\chi_{t_{2}}^{t_{1}}$.
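The oriented characteristic function and the identities of Remark 2.3 translate directly into code. The grid, bounds, and test function in this sketch are arbitrary choices for the check; the sign identity is tested away from the endpoints, where the half-open convention makes it hold only almost everywhere.

```python
import math

def chi(t1, t2, t):
    """Oriented characteristic function of the pair (t1, t2)."""
    if t1 <= t <= t2:
        return 1
    if t2 <= t < t1:
        return -1
    return 0

# Remark 2.3, checked on a midpoint grid with f = cos and t1 > t2:
t1, t2 = 1.0, -2.0
lo, hi, m = -5.0, 5.0, 100000      # the integrand vanishes outside [t2, t1]
step = (hi - lo) / m
grid = [lo + (k + 0.5) * step for k in range(m)]
line_integral = sum(chi(t1, t2, t) * math.cos(t) for t in grid) * step
direct = math.sin(t2) - math.sin(t1)       # integral of cos from t1 to t2
print(line_integral, direct)

# chi_{t1}^{t2} = -chi_{t2}^{t1} at every grid point (endpoints are avoided)
flips = all(chi(t1, t2, t) == -chi(t2, t1, t) for t in grid)
```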
The following corollary gives the expression of the Green’s function for problem (2.5).
Corollary 2.5 Suppose $\tilde{u}(t_{0})\ne 0$. Then the unique solution of problem (2.5) is given by
$u(t):=\int_{-\infty}^{\infty}G(t,s)h(s)\,\mathrm{d}s+\frac{c-\overline{u}(t_{0})}{\tilde{u}(t_{0})}\tilde{u}(t),\quad t\in \mathbb{R},$
where
$\begin{array}{rl}G(t,s):=& \frac{1}{2}\big([\tilde{u}(-s)\tilde{v}(t)+\tilde{v}(-s)\tilde{u}(t)]\chi_{0}^{t}(s)\\ & +[\tilde{u}(-s)\tilde{v}(t)-\tilde{v}(-s)\tilde{u}(t)]\chi_{-t}^{0}(s)\big),\quad t,s\in \mathbb{R}.\end{array}$
(2.10)
Proof First observe that
$G(t,\cdot )$ is bounded and of compact support for every fixed
$t\in \mathbb{R}$, so the integral
${\int}_{\mathrm{\infty}}^{\mathrm{\infty}}G(t,s)h(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s$ is well defined. It is not difficult to verify, for any
$t\in \mathbb{R}$, the following equalities:
$\begin{array}{rl}u'(t)-\frac{c-\overline{u}(t_{0})}{\tilde{u}(t_{0})}\tilde{u}'(t)=& \frac{1}{2}\bigg(\frac{\mathrm{d}}{\mathrm{d}t}\int_{0}^{t}[\tilde{u}(-s)\tilde{v}(t)+\tilde{v}(-s)\tilde{u}(t)]h(s)\,\mathrm{d}s\\ & +\frac{\mathrm{d}}{\mathrm{d}t}\int_{-t}^{0}[\tilde{u}(-s)\tilde{v}(t)-\tilde{v}(-s)\tilde{u}(t)]h(s)\,\mathrm{d}s\bigg)\\ =& \frac{1}{2}\bigg(\frac{\mathrm{d}}{\mathrm{d}t}\int_{0}^{t}[\tilde{u}(-s)\tilde{v}(t)+\tilde{v}(-s)\tilde{u}(t)]h(s)\,\mathrm{d}s\\ & +\frac{\mathrm{d}}{\mathrm{d}t}\int_{0}^{t}[\tilde{u}(s)\tilde{v}(t)-\tilde{v}(s)\tilde{u}(t)]h(-s)\,\mathrm{d}s\bigg)\\ =& h(t)+\frac{1}{2}\bigg(\int_{0}^{t}[\tilde{u}(-s)\tilde{v}'(t)+\tilde{v}(-s)\tilde{u}'(t)]h(s)\,\mathrm{d}s\\ & +\int_{0}^{t}[\tilde{u}(s)\tilde{v}'(t)-\tilde{v}(s)\tilde{u}'(t)]h(-s)\,\mathrm{d}s\bigg),\end{array}$
where the boundary terms add up to $h(t)$ by Lemma 2.2(IV).
(2.11)
On the other hand,
$\begin{array}{rl}& a\bigg[u(-t)-\frac{c-\overline{u}(t_{0})}{\tilde{u}(t_{0})}\tilde{u}(-t)\bigg]+b\bigg[u(t)-\frac{c-\overline{u}(t_{0})}{\tilde{u}(t_{0})}\tilde{u}(t)\bigg]\\ & \quad =\frac{1}{2}a\int_{0}^{-t}\big([\tilde{u}(-s)\tilde{v}(-t)+\tilde{v}(-s)\tilde{u}(-t)]h(s)+[\tilde{u}(s)\tilde{v}(-t)-\tilde{v}(s)\tilde{u}(-t)]h(-s)\big)\,\mathrm{d}s\\ & \qquad +\frac{1}{2}b\int_{0}^{t}\big([\tilde{u}(-s)\tilde{v}(t)+\tilde{v}(-s)\tilde{u}(t)]h(s)+[\tilde{u}(s)\tilde{v}(t)-\tilde{v}(s)\tilde{u}(t)]h(-s)\big)\,\mathrm{d}s\\ & \quad =-\frac{1}{2}a\int_{0}^{t}\big([\tilde{u}(s)\tilde{v}(-t)+\tilde{v}(s)\tilde{u}(-t)]h(-s)+[\tilde{u}(-s)\tilde{v}(-t)-\tilde{v}(-s)\tilde{u}(-t)]h(s)\big)\,\mathrm{d}s\\ & \qquad +\frac{1}{2}b\int_{0}^{t}\big([\tilde{u}(-s)\tilde{v}(t)+\tilde{v}(-s)\tilde{u}(t)]h(s)+[\tilde{u}(s)\tilde{v}(t)-\tilde{v}(s)\tilde{u}(t)]h(-s)\big)\,\mathrm{d}s\\ & \quad =\frac{1}{2}\int_{0}^{t}\big(\tilde{u}(-s)[-a\tilde{v}(-t)+b\tilde{v}(t)]+\tilde{v}(-s)[a\tilde{u}(-t)+b\tilde{u}(t)]\big)h(s)\,\mathrm{d}s\\ & \qquad +\frac{1}{2}\int_{0}^{t}\big(\tilde{u}(s)[-a\tilde{v}(-t)+b\tilde{v}(t)]-\tilde{v}(s)[a\tilde{u}(-t)+b\tilde{u}(t)]\big)h(-s)\,\mathrm{d}s\\ & \quad =-\frac{1}{2}\bigg(\int_{0}^{t}\big(\tilde{u}(-s)\tilde{v}'(t)+\tilde{v}(-s)\tilde{u}'(t)\big)h(s)\,\mathrm{d}s\\ & \qquad +\int_{0}^{t}\big(\tilde{u}(s)\tilde{v}'(t)-\tilde{v}(s)\tilde{u}'(t)\big)h(-s)\,\mathrm{d}s\bigg),\end{array}$
where (2.8) and (2.9) have been used in the last step.
(2.12)
Thus, adding (2.11) and (2.12) and taking (2.8) into account, it is clear that $u'(t)+au(-t)+bu(t)=h(t)$.
We now check the initial condition:
$u(t_{0})=c-\overline{u}(t_{0})+\frac{1}{2}\int_{0}^{t_{0}}\big([\tilde{u}(-s)\tilde{v}(t_{0})+\tilde{v}(-s)\tilde{u}(t_{0})]h(s)+[\tilde{u}(s)\tilde{v}(t_{0})-\tilde{v}(s)\tilde{u}(t_{0})]h(-s)\big)\,\mathrm{d}s.$
Using the construction of the solution provided in Theorem 2.1, it is an easy exercise to check that
$\overline{u}(t)=\frac{1}{2}\int_{0}^{t}\big([\tilde{u}(-s)\tilde{v}(t)+\tilde{v}(-s)\tilde{u}(t)]h(s)+[\tilde{u}(s)\tilde{v}(t)-\tilde{v}(s)\tilde{u}(t)]h(-s)\big)\,\mathrm{d}s\quad \forall t\in \mathbb{R},$
which proves the result. □
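Corollary 2.5 can be tested numerically. The sketch below is a minimal illustration with hypothetical data (a, b, c, t₀ and the right-hand side h are chosen arbitrarily within case (C1)): it builds the particular solution ū from the even/odd decomposition of Theorem 2.1 specialized to n = 1, adds the homogeneous correction, and checks both the equation and the initial condition.

```python
import math

a, b, c, t0 = 2.0, 1.0, 0.5, 0.4         # case (C1); data chosen arbitrarily
w = math.sqrt(a * a - b * b)
ut = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)   # solves (2.8)
vt = lambda t: math.cos(w * t) - (b - a) / w * math.sin(w * t)   # solves (2.9)
h = lambda t: math.exp(0.3 * t)           # an arbitrary right-hand side

def ubar(t, m=2000):
    """Particular solution with ubar(0) = 0, via the midpoint rule."""
    if t == 0:
        return 0.0
    step = t / m
    acc = 0.0
    for k in range(m):
        s = (k + 0.5) * step
        acc += ((ut(-s) * vt(t) + vt(-s) * ut(t)) * h(s)
                + (ut(s) * vt(t) - vt(s) * ut(t)) * h(-s))
    return 0.5 * acc * step

lam = (c - ubar(t0)) / ut(t0)             # requires ut(t0) != 0
u = lambda t: ubar(t) + lam * ut(t)

eps = 1e-3                                # central differences for u'(t)
res = max(abs((u(t + eps) - u(t - eps)) / (2 * eps) + a * u(-t) + b * u(t) - h(t))
          for t in (-1.2, -0.5, 0.3, 1.0))
print(abs(u(t0) - c), res)
```

The residual of (2.5) stays at discretization-error level and the initial condition holds to rounding.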
Denote now by ${G}_{a,b}$ the Green’s function for problem (2.5) with coefficients a and b. The following lemma is analogous to [[18], Lemma 4.1].
Lemma 2.6 $G_{a,b}(t,s)=-G_{-a,-b}(-t,-s)$, for all $t,s\in I$.
Proof Let $u(t):=\int_{-\infty}^{\infty}G_{a,b}(t,s)h(s)\,\mathrm{d}s$ be a solution to $u'(t)+au(-t)+bu(t)=h(t)$. Let $v(t):=u(-t)$. Then $v'(t)-av(-t)-bv(t)=-h(-t)$, and therefore $v(t)=-\int_{-\infty}^{\infty}G_{-a,-b}(t,s)h(-s)\,\mathrm{d}s$. On the other hand, by definition of v,
$v(t)=\int_{-\infty}^{\infty}G_{a,b}(-t,s)h(s)\,\mathrm{d}s=\int_{-\infty}^{\infty}G_{a,b}(-t,-s)h(-s)\,\mathrm{d}s,$
therefore we can conclude that $G_{a,b}(-t,-s)=-G_{-a,-b}(t,s)$, that is, $G_{a,b}(t,s)=-G_{-a,-b}(-t,-s)$ for all $t,s\in I$. □
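Lemma 2.6 can likewise be checked pointwise from the explicit expression (2.10); the sample points and coefficients below are arbitrary, and the (C1) formulas for $\tilde{u}$, $\tilde{v}$ are reused for both coefficient pairs.

```python
import math

def make_uv(a, b):
    """u~ and v~ for coefficients (a, b), case (C1) formulas (a**2 > b**2)."""
    w = math.sqrt(abs(a * a - b * b))
    u = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)
    v = lambda t: math.cos(w * t) - (b - a) / w * math.sin(w * t)
    return u, v

def chi(t1, t2, t):
    if t1 <= t <= t2:
        return 1.0
    if t2 <= t < t1:
        return -1.0
    return 0.0

def G(a, b, t, s):
    """Green's function of (2.5) built as in (2.10)."""
    u, v = make_uv(a, b)
    return 0.5 * ((u(-s) * v(t) + v(-s) * u(t)) * chi(0, t, s)
                  + (u(-s) * v(t) - v(-s) * u(t)) * chi(-t, 0, s))

a, b = 2.0, 1.0
pts = [(0.5, 0.2), (0.5, -0.3), (-0.8, 0.4), (1.2, -1.0), (0.9, 0.9)]
err = max(abs(G(a, b, t, s) + G(-a, -b, -t, -s)) for t, s in pts)
print(err)
```

The antisymmetry $G_{a,b}(t,s)=-G_{-a,-b}(-t,-s)$ holds to rounding at every sampled point.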
As a consequence of the previous result, we arrive at the following immediate conclusion.
Corollary 2.7 $G_{a,b}$ is positive if and only if $G_{-a,-b}$ is negative on $I^{2}$.