Solutions and Green's function of the first order linear equation with reflection and initial conditions

This work is devoted to the study of the existence and sign of Green's functions for first order linear problems with constant coefficients and initial (one-point) conditions. We first prove a result on the existence of solutions of $n$-th order linear equations with involutions via some auxiliary functions, and later prove a uniqueness result in the first order case. We then study different situations for which a Green's function can be obtained explicitly, and derive several results that provide information about the sign of the Green's function. Once the sign is known, optimal maximum and anti-maximum principles follow.


Introduction
The study of functional differential equations with involutions (DEI) can be traced back to the solution of the equation $x'(t) = x(1/t)$ by Silberstein (see [20]) in 1940. Briefly speaking, an involution is just a function $f$ that satisfies $f(f(x)) = x$ for every $x$ in its domain of definition. For most applications in analysis the involution is defined on an interval of $\mathbb{R}$ and, in the majority of cases, it is continuous, which implies that it is decreasing and has a unique fixed point. Ever since that foundational paper of Silberstein, the study of problems with DEI has been mainly focused on those cases with initial conditions, with extensive research on the case of the reflection $f(x) = -x$.
Wiener and Watkins study in [24] the solution of the equation $x'(t) - a\,x(-t) = 0$ with initial conditions. The equation $x'(t) + a\,x(t) + b\,x(-t) = g(t)$ has been treated by Piao in [17,18]. In [14,19,21,24,25] some results are introduced to transform this kind of problem with involutions and initial conditions into second order ordinary differential equations with initial conditions, or into two dimensional first order systems, guaranteeing that a solution of the latter is a solution of the former. Furthermore, asymptotic properties and boundedness of the solutions of first order initial problems are studied in [22] and [4] respectively. Second order boundary value problems have been considered in [11,12,16,25] for Dirichlet and Sturm-Liouville boundary conditions, and higher order equations have been studied in [15]. Other techniques applied to problems with reflection of the argument can be found in [5,13,23].
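To illustrate the transformation technique just mentioned, the following minimal numerical sketch checks that $x(t) = \cos at + \sin at$ solves $x'(t) = a\,x(-t)$ and also solves the second order ODE $x''(t) + a^2 x(t) = 0$ obtained by differentiating the equation and substituting it back into itself; the coefficient value and sample points are arbitrary illustrative choices.

```python
import math

a = 1.7  # arbitrary illustrative coefficient

def x(t):
    # candidate solution of x'(t) = a*x(-t) with x(0) = 1
    return math.cos(a * t) + math.sin(a * t)

def dx(t):
    # exact first derivative of x
    return -a * math.sin(a * t) + a * math.cos(a * t)

def ddx(t):
    # exact second derivative of x
    return -a * a * math.cos(a * t) - a * a * math.sin(a * t)

for t in [-2.0, -0.4, 0.0, 1.1, 3.0]:
    assert abs(dx(t) - a * x(-t)) < 1e-12      # the equation with reflection
    assert abs(ddx(t) + a * a * x(t)) < 1e-12  # the reduced second order ODE
print("reduction verified")
```

The reduction works because differentiating $x'(t) = a\,x(-t)$ gives $x''(t) = -a\,x'(-t) = -a^2 x(t)$.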
More recently, the papers of Cabada et al. [6,7] have further studied the case of the second order equation with two-point boundary conditions, adding a new element to the previous studies: the existence of a Green's function. Once the study of the sign of the aforementioned function is done, maximum and anti-maximum principles follow. Other works in which Green's functions are obtained for functional differential equations (but with a fairly different setting, like delay or normal equations) are, for instance, [1][2][3][8][9][10].
In this paper we try to answer the following questions: How can one find a solution of an initial value problem for a differential equation with reflection? What is more, in which cases can a Green's function be constructed, and how can it be found? Section 2 has two parts. In the first we construct the solutions of the $n$-th order DEI with reflection, constant coefficients and initial conditions. In the second we find the Green's function for the first order case. In Section 3 we apply these findings in order to describe exhaustively the range of values for which suitable comparison results are fulfilled, and we illustrate them with some examples.

Solutions of the initial problem
In order to prove an existence result for the $n$-th order DEI with reflection, we consider the even and odd parts of a function $f$, that is,
$$f_e(t) := \frac{f(t) + f(-t)}{2}, \qquad f_o(t) := \frac{f(t) - f(-t)}{2}.$$
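The even-odd decomposition $f = f_e + f_o$ is straightforward to compute; a minimal sketch, using $e^t$ as an arbitrary test function (its even and odd parts are, famously, $\cosh$ and $\sinh$):

```python
import math

def even_part(f):
    # f_e(t) = (f(t) + f(-t)) / 2
    return lambda t: (f(t) + f(-t)) / 2

def odd_part(f):
    # f_o(t) = (f(t) - f(-t)) / 2
    return lambda t: (f(t) - f(-t)) / 2

f = math.exp                        # exp = cosh + sinh
fe, fo = even_part(f), odd_part(f)
for t in [-1.0, 0.5, 2.0]:
    assert abs(fe(t) - math.cosh(t)) < 1e-12
    assert abs(fo(t) - math.sinh(t)) < 1e-12
    assert abs(fe(t) + fo(t) - f(t)) < 1e-12   # f = f_e + f_o
print("decomposition verified")
```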

The n-th order problem
Consider the following $n$-th order DEI with involution:
$$Lu(t) := \sum_{k=0}^{n}\bigl(a_k\,u^{(k)}(-t) + b_k\,u^{(k)}(t)\bigr) = h(t) \ \text{ for a.e. } t \in \mathbb{R}; \qquad u(t_0) = c, \tag{2.1}$$
where $h \in L^1_{loc}(\mathbb{R})$; $t_0, c, a_k, b_k \in \mathbb{R}$ for $k = 0, \dots, n-1$; $a_n = 0$; $b_n = 1$. A solution of this problem will be a function $u \in W^{n,1}_{loc}(\mathbb{R})$, that is, $u$ is $n$ times differentiable in the sense of distributions and each of the derivatives satisfies $u^{(k)}|_K \in L^1(K)$ for every compact set $K \subset \mathbb{R}$.
Theorem 2.1. Assume that there exist functions $\tilde{u}$ and $\tilde{v}$ which satisfy the required identities together with one of the hypotheses (h1) or (h2). Then problem (2.1) has a solution.
Proof. Define suitable functions $\varphi$ and $\psi$. Observe that $\varphi$ is odd, $\psi$ is even and $h = \varphi\tilde{u} + \psi\tilde{v}$. So, in order to ensure the existence of a solution of problem (2.1), it is enough to find $y$ and $z$ such that $Ly = \varphi\tilde{u}$ and $Lz = \psi\tilde{v}$, for, in that case, defining $u = y + z$, we can conclude that $Lu = h$. We will deal with the initial condition later on.
Take $y = \tilde{\varphi}\,\tilde{u}$ for an appropriate function $\tilde{\varphi}$. Observe that $\tilde{\varphi}$ is even if $n$ is odd, and vice-versa. In particular, a direct computation shows that $Ly = \varphi\tilde{u}$.
Remark 2.1. Having in mind condition (h1) in Theorem 2.1, it is immediate to verify that $L\tilde{u} = 0$ provided that $a_i = 0$ for all $i \in \{0, \dots, n-1\}$ such that $n+i$ is even.
In an analogous way, for (h2) one can show that $L\tilde{v} = 0$ when $a_i = 0$ for all $i \in \{0, \dots, n-1\}$ such that $n+i$ is odd.

The first order problem
After proving the general result for the $n$-th order case, we concentrate our work on the first order problem
$$u'(t) + a\,u(-t) + b\,u(t) = h(t) \quad \text{for a.e. } t \in \mathbb{R}; \qquad u(t_0) = c, \tag{2.5}$$
with $h \in L^1_{loc}(\mathbb{R})$ and $t_0, a, b, c \in \mathbb{R}$. A solution of this problem will be a function $u \in W^{1,1}_{loc}(\mathbb{R})$. In order to find it, we first study the homogeneous equation
$$u'(t) + a\,u(-t) + b\,u(t) = 0, \quad t \in \mathbb{R}. \tag{2.6}$$
Differentiating (2.6) and substituting the expressions for $u'(-t)$ and $u(-t)$ that (2.6) itself provides, we arrive at the equation
$$u''(t) + (a^2 - b^2)\,u(t) = 0. \tag{2.7}$$
Let $\omega := \sqrt{|a^2 - b^2|}$. Equation (2.7) presents three different cases:

(C1) $a^2 > b^2$. In such a case, $u(t) = \alpha \cos\omega t + \beta \sin\omega t$ is a solution of (2.7) for every $\alpha, \beta \in \mathbb{R}$. If we impose equation (2.6) on this expression, we arrive at the general solution
$$u(t) = \alpha\Bigl(\cos\omega t - \frac{a+b}{\omega}\,\sin\omega t\Bigr), \quad \alpha \in \mathbb{R}.$$

(C2) $a^2 < b^2$. Taking now $u(t) = \alpha\cosh\omega t + \beta\sinh\omega t$, a solution of (2.7), and imposing equation (2.6), we arrive at the general solution
$$u(t) = \alpha\Bigl(\cosh\omega t - \frac{a+b}{\omega}\,\sinh\omega t\Bigr), \quad \alpha \in \mathbb{R}.$$

(C3) $a^2 = b^2$. In this case, $u(t) = \alpha t + \beta$ is a solution of (2.7) for every $\alpha, \beta \in \mathbb{R}$. So, equation (2.6) holds provided that one of the two following cases is fulfilled:

(C3.1) $a = b$, in which case
$$u(t) = \alpha\Bigl(t - \frac{1}{2a}\Bigr)$$
is the general solution of equation (2.6) with $\alpha \in \mathbb{R}$, and

(C3.2) $a = -b$, in which case the constant functions $u(t) = \alpha$, $\alpha \in \mathbb{R}$, are the solutions of equation (2.6).

Now, according to Theorem 2.1, we denote by $\tilde{u}$ and $\tilde{v}$ functions satisfying the hypotheses of that theorem. Observe that $\tilde{u}$ and $\tilde{v}$ can be obtained from the explicit expressions of the cases (C1)-(C3) by taking $\alpha = 1$.

Remark 2.2. Note that if $\tilde{u}$ is in the case (C3.1), then $\tilde{v}$ is in the case (C3.2), and vice-versa.
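The general solutions of the homogeneous equation can be sanity-checked numerically. A minimal sketch: the closed forms with $\alpha = 1$ used below (cosine/sine for (C1), hyperbolic for (C2), affine for (C3.1)) are reconstructions from the derivation in this section, and the sample coefficients and test points are arbitrary choices.

```python
import math

def check(a, b):
    """Max residual of u'(t) + a*u(-t) + b*u(t) for the claimed solution."""
    w = math.sqrt(abs(a * a - b * b))
    if a * a > b * b:        # case (C1)
        u  = lambda t: math.cos(w * t) - (a + b) / w * math.sin(w * t)
        du = lambda t: -w * math.sin(w * t) - (a + b) * math.cos(w * t)
    elif a * a < b * b:      # case (C2)
        u  = lambda t: math.cosh(w * t) - (a + b) / w * math.sinh(w * t)
        du = lambda t: w * math.sinh(w * t) - (a + b) * math.cosh(w * t)
    else:                    # case (C3.1), a == b != 0
        u  = lambda t: t - 1 / (2 * a)
        du = lambda t: 1.0
    return max(abs(du(t) + a * u(-t) + b * u(t))
               for t in [-2.0, -0.5, 0.3, 1.7])

for a, b in [(2, 1), (1, 3), (1.5, 1.5)]:   # one sample per case
    assert check(a, b) < 1e-9
print("all cases verified")
```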
We now have the following properties of the functions $\tilde{u}$ and $\tilde{v}$.
For every t, s ∈ R, the following properties hold.
(I) $\tilde{u}_e \equiv \tilde{v}_e$ and $\tilde{u}_o \equiv k\,\tilde{v}_o$ a.e. for some real constant $k$.

Proof. (I) and (III) can be checked by inspection of the different cases. (II) is a direct consequence of (I). (IV) is obtained from the definition of even and odd parts and (III).
Assume now that $w$ is a solution of (2.5) and $\tilde{u}(t_0) = 0$. Then $w + \lambda\tilde{u}$ is also a solution of (2.5) for every $\lambda \in \mathbb{R}$, which proves the result.
This last Theorem raises an obvious question: in which circumstances is $\tilde{u}(t_0) = 0$? In order to answer this question, it is enough to study the cases (C1)-(C3). We summarize this study in the following Lemma, which can be checked easily.
We define the oriented characteristic function of the pair $(t_1, t_2)$ as
$$\chi_{t_1}^{t_2}(s) := \begin{cases} 1, & t_1 \le s \le t_2, \\ -1, & t_2 < s < t_1, \\ 0, & \text{otherwise}. \end{cases}$$
Remark 2.3. The previous definition implies that, for any given integrable function $f : \mathbb{R} \to \mathbb{R}$,
$$\int_{\mathbb{R}} f(s)\,\chi_{t_1}^{t_2}(s)\,\mathrm{d}s = \int_{t_1}^{t_2} f(s)\,\mathrm{d}s.$$
The following corollary gives us the expression of the Green's function for problem (2.5).
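The point of the oriented characteristic function is that the integral identity of Remark 2.3 holds for both orientations of $(t_1, t_2)$, i.e. also when $t_2 < t_1$. A minimal numerical sketch (midpoint-rule quadrature; $f(s) = s^2$ is an arbitrary test function):

```python
def chi(t1, t2):
    # oriented characteristic function of the pair (t1, t2):
    # +1 on [t1, t2] if t1 <= t2, -1 on [t2, t1] if t2 < t1, 0 elsewhere
    def indicator(s):
        if t1 <= s <= t2:
            return 1.0
        if t2 <= s < t1:
            return -1.0
        return 0.0
    return indicator

def integrate(g, lo, hi, n=60000):
    # composite midpoint rule on [lo, hi]
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

for t1, t2 in [(0.0, 2.0), (2.0, 0.0)]:      # both orientations
    ch = chi(t1, t2)
    lhs = integrate(lambda s: s * s * ch(s), -3.0, 3.0)
    rhs = (t2 ** 3 - t1 ** 3) / 3            # oriented integral of s^2 from t1 to t2
    assert abs(lhs - rhs) < 1e-2
print("oriented integral identity verified")
```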
Corollary 2.5. Suppose $\tilde{u}(t_0) \neq 0$. Then the unique solution of problem (2.5) is given by (2.10).

Proof. First observe that $G(t, \cdot)$ is bounded and of compact support for every fixed $t \in \mathbb{R}$, so the integral $\int_{-\infty}^{\infty} G(t, s)\,h(s)\,\mathrm{d}s$ is well defined. It is not difficult to verify, for any $t \in \mathbb{R}$, the equalities (2.11). On the other hand, we have (2.12). Thus, adding (2.11) and (2.12), it is clear that $u'(t) + a\,u(-t) + b\,u(t) = h(t)$.
We now check the initial condition.
Using the construction of the solution provided in Theorem 2.1, it is an easy exercise to check that $u(t_0) = c$, which proves the result.
Denote now by $G_{a,b}$ the Green's function of problem (2.5) with coefficients $a$ and $b$. The following Lemma is analogous to [6, Lemma 4.1].
As a consequence of the previous result, we arrive at the following immediate conclusion.

Sign of the Green's Function
In this section we use the expressions obtained above to derive the explicit expression of the Green's function, depending on the values of the constants $a$ and $b$. Moreover, we study the sign of this function and deduce suitable comparison results.
We separate the study in three cases, taking into consideration the expression of the general solution of equation (2.6).

The case (C1)
Now, assume the case (C1), i.e. $a^2 > b^2$. Using equation (2.10), we get the expression of $G$ for this situation, which we can rewrite piecewise as (3.1). Studying the expression of $G$ we can obtain maximum and anti-maximum principles. In order to do this, we will be interested in those maximal strips (in the sense of inclusion) of the kind $[\alpha, \beta] \times \mathbb{R}$ on which $G$ does not change sign, depending on the parameters. So we are in a position to study the sign of the Green's function on the different triangles of definition. The result is the following.

Proof. For $0 < b < a$, the argument of the sine in (3.1c) is positive, so (3.1c) is positive for $t < \pi/\omega$. On the other hand, it is easy to check that (3.1a) is positive as long as $t < \eta(a, b)$.
The rest of the proof continues similarly.
As a corollary of the previous result we obtain the following one:
• if $a < 0$, the Green's function of problem (2.5) is non-positive on $[-\eta(a, -b), 0] \times \mathbb{R}$,
• the Green's function of problem (2.5) changes sign in any other strip which is not a subset of the aforementioned ones.
Proof. The proof follows from the previous result together with the fact stated in the previous Lemma.

Remark 3.1. Note that the rectangles defined in the previous Lemma are optimal in the sense that $G$ changes sign in any bigger rectangle. The same observation applies to the similar results we will prove for the other cases. This fact implies that we cannot have maximum or anti-maximum principles on bigger intervals for the solution, something that is widely known and which the following results, together with Example 3.4, illustrate.
Since $G(t, 0)$ changes sign at $t = \eta(a, b)$, it is immediate to verify that, defining the function $h_\varepsilon(s) = 1$ for all $s \in (-\varepsilon, \varepsilon)$ and $h_\varepsilon(s) = 0$ otherwise, we obtain a solution of problem (2.5) that crosses the value $c$ to the right of $\eta(a, b)$. So the estimates are optimal for this case.
However, one can study problems with a particular non-homogeneous part $h$ for which the solution lies on one side of $c$ on a bigger interval. This is shown in the following example. Also, $\bar{u}(t) < 0$ for $t = \gamma + \varepsilon$ with $\varepsilon \in \mathbb{R}^+$ sufficiently small. Furthermore, the solution is periodic with period $2\pi/3$.
If we use Lemma 3.2, we have that, a priori, $\bar{u}$ is non-positive on $[-4/15, 0]$, which we know is true by the study we have done of $\bar{u}$, but this estimate is, as expected, far from the interval $[\gamma - 1, 0]$ on which $\bar{u}$ is non-positive. This does not contradict the optimality of the a priori estimate: as we have shown before, other examples can be found for which the interval where the solution has constant sign is arbitrarily close to the one given by the a priori estimate.

The case (C2)
We study here the case (C2). In this case, it is clear that $G$ has an explicit expression, which we can rewrite piecewise as (3.2). Studying the expression of $G$ we can obtain maximum and anti-maximum principles. With this information, we can state the following Lemma. Then,
• if $a > 0$, the Green's function of problem (2.5) is positive on
• if $a < 0$, the Green's function of problem (2.5) is negative on $\{(t, s),\ -t < s < 0\}$ and $\{(t, s),\ 0 < s < -t\}$,
• if $b > 0$, the Green's function of problem (2.5) is negative on $\{(t, s),\ t < s < 0\}$,
• if $b > 0$, the Green's function of problem (2.5) is positive on $\{(t, s),\ 0 < s < t\}$ if and only if $t \in (0, \sigma(a, b))$,
• if $b < 0$, the Green's function of problem (2.5) is positive on $\{(t, s),\ 0 < s < t\}$,
• if $b < 0$, the Green's function of problem (2.5) is negative on $\{(t, s),\ t < s < 0\}$ if and only if $t \in (\sigma(a, b), 0)$.
Proof. For $0 < a < b$, the argument of the sinh in (3.2d) is negative, so (3.2d) is positive. The argument of the sinh in (3.2c) is positive, so (3.2c) is positive. It is easy to check that (3.2a) is positive as long as $t < \sigma(a, b)$.
On the other hand, (3.2b) is always negative.
The rest of the proof continues similarly.
As a corollary of the previous result we obtain the following one. Then,
• the Green's function of problem (2.5) changes sign in any other strip which is not a subset of the aforementioned ones.
Example 3.2. Consider problem (3.3). Clearly, we are in the case (C2).
With these equalities, it is straightforward to construct the unique solution $w$ of problem (3.3). For instance, in the case $\lambda = c = 1$, $\bar{u}(t) = \sinh t$, and $w$ can be computed explicitly. Observe that for $\lambda = 1$, $c = \sinh 1$, we have $w(t) = \sinh t$. Lemma 3.4 guarantees the non-negativity of $w$ on $[0, 1.52069\dots]$, but it is clear that the solution $w$ is positive on the whole positive real line.

The case (C3)
We study here the case (C3) for $a = b$. In this case, it is clear that $G$ has an explicit expression, which we can rewrite piecewise. Studying the expression of $G$ we can obtain maximum and anti-maximum principles. With this information we can prove the following Lemma, as we did with the analogous ones for the cases (C1) and (C2); in particular, the Green's function is
• positive on $\{(t, s),\ 0 < s < t\}$.
As a corollary of the previous result we obtain the following one.

Lemma 3.6. Assume $a = b$. Then,
• if $0 < a$, the Green's function of problem (2.5) is non-negative on $[0, 1/a] \times \mathbb{R}$,
• if $a < 0$, the Green's function of problem (2.5) is non-positive on $[1/a, 0] \times \mathbb{R}$,
• the Green's function of problem (2.5) changes sign in any other strip which is not a subset of the aforementioned ones.
For this particular case we have another way of computing the solution to the problem.
Proof. That the equation is satisfied follows from a direct computation. The initial condition also holds since, clearly, $u(t_0) = c$.
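For the case $a = b$ one can also sketch the computation through the following elementary reduction (a sketch under stated assumptions, not the formula of the text): writing $v(t) := u(t) + u(-t)$, equation (2.5) gives $u'(t) = h(t) - a\,v(t)$ and, since $v$ is even, $v'(t) = h(t) - h(-t)$, so $v$ is obtained by direct quadrature. The concrete data below ($a = b = 1$, $t_0 = 0$, $c = 1$, $h(t) = t$) are an illustrative choice for which everything is explicit: $v(t) = 2 + t^2$ and $u(t) = 1 - 2t + t^2/2 - t^3/3$.

```python
def h(t):
    return t                       # chosen non-homogeneous part

def u(t):
    # explicit solution for a = b = 1, u(0) = 1, h(t) = t,
    # obtained from u'(t) = h(t) - v(t) with v(t) = 2*u(0) + t**2
    return 1 - 2 * t + t ** 2 / 2 - t ** 3 / 3

def du(t):
    # exact derivative of u
    return -2 + t - t ** 2

for t in [-1.5, -0.3, 0.0, 0.7, 2.0]:
    # residual of u'(t) + a*u(-t) + b*u(t) - h(t) with a = b = 1
    residual = du(t) + u(-t) + u(t) - h(t)
    assert abs(residual) < 1e-12
assert u(0.0) == 1.0               # initial condition u(t0) = c
print("a = b reduction verified")
```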
For $p \in (-1, 0)$ we have a singularity at $0$. We can apply the theory in order to get the solution, where $\bar{u}(t) = \frac{1}{p+1}\,t\,|t|^p$ and $\tilde{u}(t) = 1 - 2\lambda t$. The function $\bar{u}$ is positive on $(0, +\infty)$ and negative on $(-\infty, 0)$ independently of $\lambda$, so the solution has better properties than those guaranteed by Lemma 3.8.
As a corollary of the previous result we obtain the following one:
• the Green's function of problem (2.5) changes sign in any other strip which is not a subset of the aforementioned ones.