Periodic solutions of semi-explicit differential-algebraic equations with time-dependent constraints

In this paper we investigate the properties of the set of T-periodic solutions of semi-explicit parametrized differential-algebraic equations (DAEs) with non-autonomous constraints of a particular type. We provide simple, degree-theoretic conditions for the existence of branches of T-periodic solutions of the considered equations. Our approach is based on topological arguments about differential equations on implicitly defined manifolds, combined with elementary facts of matrix analysis.


Introduction
Several mathematical models arising from physical and engineering problems can be described in terms of differential-algebraic equations (DAEs). Because of this, in recent years there has been considerable interest in these equations from both pure and applied mathematicians. Besides the more genuinely modelistic or numerical approaches, there are many books and papers that treat DAEs from an analytical perspective. Among these, in order to avoid an impossibly long and necessarily incomplete list, we only mention [9,10,11] and the references therein.
A relevant case is represented by first order semi-explicit DAEs in Hessenberg form (see, e.g., [9]), that is,
(1.1) ẋ = f(t, x, y), G(t, x, y) = 0,
where f∶ R × R^m × R^s → R^m is a continuous map, and G∶ R × R^m × R^s → R^s is sufficiently smooth. If we assume that the partial derivative ∂_3 G of G with respect to the third variable y is invertible, then (1.1) is said to be of index 1.
In this paper we are concerned with a parametrized special case of (1.1). In fact, we assume that the constraint G has the form G(t, x, y) = g(A(t)x, B(t)y), where g∶ R^m × R^s → R^s is C^∞, and the square-matrix-valued maps A∶ R → O(R^m) and B∶ R → GL(R^s) are continuous. Here O(R^m) denotes the group of orthogonal m × m matrices and GL(R^s) that of the invertible s × s ones. Namely, for λ ≥ 0 we consider parametrized DAEs of the following form:
(1.2) ẋ = λf(t, x, y), λ ≥ 0, g(A(t)x, B(t)y) = 0,
with f as in (1.1). We assume that ∂_2 g(x, y) is invertible for all (x, y) ∈ R^m × R^s and (for technical reasons) that A is of class C^1.
For the second order analogue (1.3) of (1.2), considered in Subsection 3.2, we instead assume the matrix-valued maps A and B to be of class C^2 and C^1, respectively. Equations of the latter type, in particular, may be used to represent some nontrivial physical systems as, for instance, constrained systems (see, e.g., [10]).
We will assume throughout the paper that the matrix-valued function A satisfies the following property: the map t ↦ A(t)Ȧ(t)^⊤ is constant. This assumption might seem unnatural, but it is not so. To understand why, consider the case when m = 3. In that case, if {e_1, e_2, e_3} is a fixed reference frame in R^3 and T(t) = {A(t)e_1, A(t)e_2, A(t)e_3} is a moving frame, our assumption is equivalent to imposing that the angular velocity of T is constant in time. This is, in fact, an immediate consequence of the definition of angular velocity. An entirely similar statement holds for m = 2. Furthermore, in this paper we will always assume that, for some given T > 0, the map f is T-periodic in the first variable and that A and B are T-periodic. Following the approach of [3,4,5,12], we study qualitative properties of the set of T-periodic solutions of (1.2) and (1.3). Roughly speaking, we show the existence of an unbounded connected set of "nontrivial" T-periodic solutions of (1.2) or (1.3) emanating from the set of its constant solutions. Precise statements will be given in Subsection 3.1 for first order equations and in Subsection 3.2 for second order ones. We also show, through some examples and remarks, how our constructions can be extended to include several equations of different forms.
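To make the standing assumption on A concrete, here is a small numerical sanity check (illustrative only, not part of the paper; the angular speed ω = 0.7 and the function names are our choices): for the planar rotation A(t) = R(ωt), the product A(t)Ȧ(t)^⊤ is the constant skew-symmetric matrix [[0, ω], [-ω, 0]].

```python
import math

# Illustrative check (not from the paper): for the planar rotation
# A(t) = [[cos wt, -sin wt], [sin wt, cos wt]], the product A(t)*Adot(t)^T
# is the constant skew-symmetric matrix [[0, w], [-w, 0]].
# The angular speed w = 0.7 is an arbitrary choice.

def A(t, w=0.7):
    c, s = math.cos(w * t), math.sin(w * t)
    return [[c, -s], [s, c]]

def Adot(t, w=0.7):  # derivative of A with respect to t
    c, s = math.cos(w * t), math.sin(w * t)
    return [[-w * s, -w * c], [w * c, -w * s]]

def mul_T(P, Q):  # P * Q^T for 2x2 matrices
    return [[sum(P[i][k] * Q[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

M0 = mul_T(A(0.0), Adot(0.0))  # approximately [[0, 0.7], [-0.7, 0]]
for t in (0.3, 1.1, 2.9):
    Mt = mul_T(A(t), Adot(t))
    assert all(abs(Mt[i][j] - M0[i][j]) < 1e-12
               for i in range(2) for j in range(2))
```

The same computation with a time-dependent angular speed would make A(t)Ȧ(t)^⊤ non-constant, which is exactly what the assumption rules out.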
Our continuation results are in the spirit of analogous ones by Furi and Pera for parametrized first- and second-order equations on differentiable manifolds (for more details see the survey [6]) and could be considered, in some sense, as consequences of recent results obtained by the last two authors in [3,4,5,12]. However, we wish to point out the following facts. First of all, while the continuation results on differentiable manifolds by Furi and Pera require the knowledge of the degree (often called characteristic or rotation) of suitable tangent vector fields, here (as in [3,4,5,12]) we give conditions only in terms of the well-known Brouwer degree, which is also easier to compute explicitly. On the other hand, in the present paper we tackle the case of time-dependent constraints (even if of a peculiar form). In other words, our results can be regarded as concerning ODEs on particular T-periodically moving manifolds defined implicitly. As far as we know, the techniques of Furi and Pera have never been applied to moving manifolds, and this novelty is our main original contribution to the subject. This paper is organized as follows. In Section 2 we collect the preliminaries needed to approach the DAEs in (1.2) and (1.3). In Section 3 we give our main results and obtain topological information on the set of T-periodic pairs of the considered equations; examples of applications of our methods are provided. Finally, in Section 4, we give the proofs of the technical results of matrix analysis used throughout the paper.
Notation. Throughout the paper, C_T(R^k) will denote the Banach space of all T-periodic continuous maps ζ∶ R → R^k with the usual supremum norm, and C^1_T(R^k) will be the Banach space of all T-periodic C^1 maps ζ∶ R → R^k with the C^1 norm.

Preliminary results
2.1. First order DAEs. Let us consider semi-explicit DAEs, depending on a parameter λ ≥ 0, of the following forms:
(2.1) ẋ = f(x, y) + λh(t, x, y), g(x, y) = 0,
(2.2) ẋ = λh(t, x, y), g(x, y) = 0,
where we assume that f∶ R^m × R^s → R^m and h∶ R × R^m × R^s → R^m are continuous maps, h is T-periodic in the first variable, and g∶ R^m × R^s → R^s is C^∞ and such that ∂_2 g(x, y) is invertible for all (x, y). Notice that, consequently, M ∶= g^{-1}(0) is a closed submanifold of R^m × R^s. Furthermore, observe that, even if (2.2) can be considered a particular case of (2.1) (i.e. with f(x, y) = 0 identically), for our purposes the two equations need to be treated separately. By a solution of (2.1) we mean a pair of C^1 functions x and y, defined on an interval I, with the property that the following equalities hold for all t ∈ I: ẋ(t) = f(x(t), y(t)) + λh(t, x(t), y(t)) and g(x(t), y(t)) = 0. The notion of solution of (2.2) is analogous. Notice that one might wish to require only the continuity of y: in fact, if x is C^1, the assumptions on g together with the implicit function theorem imply that y is C^1 as well.
In this section we recall two results from [3] and [12] (see also [4]) about the sets of T-pairs of (2.1) and of (2.2), namely of those pairs (λ; (x, y)) ∈ [0, ∞) × C_T(R^m × R^s) with (x, y) a T-periodic solution of (2.1) and of (2.2), respectively. Recall that a T-pair (λ; (x, y)) of (2.1) or of (2.2) is said to be trivial if λ = 0 and (x, y) is constant.
For the sake of simplicity we make some conventions. We will regard every space as its image in the following diagram of natural inclusions. In particular, we will identify R^m × R^s with its image in C_T(R^m × R^s) under the embedding which associates to any (p, q) ∈ R^m × R^s the map in C_T(R^m × R^s) constantly equal to (p, q). Moreover, we will regard R^m × R^s as the slice {0} × R^m × R^s ⊆ [0, ∞) × C_T(R^m × R^s). We point out that the images of the above inclusions are closed.
For simplicity, given Ω ⊆ [0, ∞) × C_T(R^m × R^s), we will denote by Ω_# the set consisting of all the constant functions (p, q) with (0; (p, q)) ∈ Ω. We will regard Ω_# as a subset of R^m × R^s.
The following is a consequence of Theorem 5.1 in [12].
Theorem 2.1. Let f, h and g be as above, and define F∶ R^m × R^s → R^m × R^s by F(p, q) = (f(p, q), g(p, q)). Let Ω ⊆ [0, ∞) × C_T(R^m × R^s) be open and assume that deg(F, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (2.1) that meets F^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.
The following is a consequence of Theorem 2.2 in [3].
Theorem 2.2. Let h and g be as above, and define ω∶ R^m × R^s → R^m × R^s by ω(p, q) = ((1/T)∫_0^T h(t, p, q) dt, g(p, q)). Let Ω ⊆ [0, ∞) × C_T(R^m × R^s) be open and assume that deg(ω, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (2.2) that meets ω^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.

2.2. Second order DAEs. Consider the following second order parametrized DAEs, (2.4) and (2.5). By a solution of (2.4) we mean a pair of C^2 functions x and y, defined on an interval I, satisfying (2.4) together with g(x(t), y(t)) = 0 for all t ∈ I. Notice that, as in the first order case, it is equivalent to require only the continuity of y. Remark 2.3. We wish to point out that, despite their similarity, it might not be possible to reduce second order equations, such as (2.4) or (2.5), to first order ones such as (2.1) or (2.2). Such is the case especially when there is an explicit dependence on ẏ. Thus the second order equations need a specific study.
Consider for instance equation (2.5). The introduction of a new variable u = ẋ, as is customary in phase-space techniques, would not reduce it to an equation of the form (2.1) or (2.2) with the required properties. In fact, we would get an equation of the type above, where ḡ((x, u); y) = g(x, y), which is not of the form (2.1) because of the ẏ in the second equation. The introduction of another auxiliary variable v = ẏ, as might seem natural, would only complicate matters. Indeed, the resulting equation would be the one above, where ĝ((x, u); (y, v)) = g(x, y). What is wrong with this equation is that the rigid dimensional separation between the "differential" and the "algebraic" parts required for (2.1) is now broken.
The structure of the set of solution pairs of (2.4) and of (2.5) has been studied in [5]. As in Section 2.1, we recall that by a T-pair of (2.4) and of (2.5) we mean a pair (λ; (x, y)) with (x, y) a T-periodic solution of (2.4) and of (2.5), respectively. Again, a T-pair (λ; (x, y)) of (2.4) or of (2.5) is said to be trivial if λ = 0 and (x, y) is constant.
As in Section 2.1, for simplicity we will regard every space as its image in the corresponding diagram of natural inclusions. Given Ω ⊆ [0, ∞) × C^1_T(R^m × R^s), we will denote by Ω_# the set consisting of all the constant functions (p, q) with (0; (p, q)) ∈ Ω, and will regard Ω_# as a subset of R^m × R^s.
The next results are straightforward consequences of Corollary 5.2 and Corollary 5.3 in [5], respectively.
Theorem 2.4. Let Ω ⊆ [0, ∞) × C^1_T(R^m × R^s) be open and assume that deg(F, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (2.4) that meets F^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.
Theorem 2.5. Let Ω ⊆ [0, ∞) × C^1_T(R^m × R^s) be open and assume that deg(ω, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (2.5) that meets ω^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.

Coordinate transformation and main results
3.1. First order DAEs. We first investigate parametrized DAEs of the following form:
(3.1) ẋ = λf(t, x, y), λ ≥ 0, g(A(t)x, B(t)y) = 0,
where f and g are as in (1.2), and A∶ R → O(R^m), B∶ R → GL(R^s) are T-periodic continuous (square-)matrix-valued maps. We will assume that A is of class C^1. Let us apply, for all t, the change of coordinates in R^m × R^s given by
(3.2) ξ = A(t)x, η = B(t)y.
Let us rewrite the first of these two equations as x(t) = A^⊤(t)ξ(t). Differentiating with respect to t we get ẋ(t) = Ȧ^⊤(t)ξ(t) + A^⊤(t)ξ̇(t). Observe, in fact, that the operations of differentiation and transposition commute; that is, (d/dt)A^⊤(t) = (Ȧ(t))^⊤. Multiplying by A(t) on the left and using the identity Ȧ(t)A^⊤(t) = -A(t)Ȧ(t)^⊤ (obtained by differentiating A(t)A^⊤(t) = I), equation (3.1) can be rewritten in the new coordinates (ξ, η) as follows:
(3.4) ξ̇ = -A(t)Ȧ(t)^⊤ξ + λA(t)f(t, A^⊤(t)ξ, B^{-1}(t)η), g(ξ, η) = 0.
If we assume that the matrix M ∶= A(t)Ȧ(t)^⊤ is constant, then we can obtain continuation results for T-pairs of (3.1) as consequences of the results in the previous section.
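The kinematic identity underlying this change of coordinates can be checked numerically. The sketch below (illustrative, not from the paper; the curve x(t) and the angular speed ω are arbitrary choices of ours) verifies by central differences that ξ(t) = A(t)x(t) satisfies ξ̇ = -Mξ + Aẋ when A(t) is the planar rotation by ωt, for which M = AȦ^⊤ is the constant matrix [[0, ω], [-ω, 0]].

```python
import math

def rot(th):  # planar rotation by angle th
    c, s = math.cos(th), math.sin(th)
    return [[c, -s], [s, c]]

def mv(P, v):  # 2x2 matrix-vector product
    return [P[0][0] * v[0] + P[0][1] * v[1],
            P[1][0] * v[0] + P[1][1] * v[1]]

w = 0.5                                                # frame angular speed (ours)
x = lambda t: [math.sin(t), math.cos(2 * t)]           # an arbitrary smooth curve
xdot = lambda t: [math.cos(t), -2 * math.sin(2 * t)]   # its derivative
xi = lambda t: mv(rot(w * t), x(t))                    # xi(t) = A(t) x(t)

t, h = 1.3, 1e-6
# left-hand side: central-difference approximation of xi'(t)
lhs = [(xi(t + h)[i] - xi(t - h)[i]) / (2 * h) for i in range(2)]
# right-hand side: -M xi + A xdot, with -M = [[0, -w], [w, 0]]
rhs = [mv([[0.0, -w], [w, 0.0]], xi(t))[i] + mv(rot(w * t), xdot(t))[i]
       for i in range(2)]
err = max(abs(lhs[i] - rhs[i]) for i in range(2))
assert err < 1e-6
```

The same check with a non-orthogonal A(t) would fail, since the step Ȧx = ȦA^⊤ξ uses A^{-1} = A^⊤.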
In the following we will adopt the same notation as in Section 2.1.
Theorem 3.1. Let f, g, A and B be as above, and assume that the matrix M ∶= A(t)Ȧ(t)^⊤ is constant. Define F∶ R^m × R^s → R^m × R^s by
(3.5) F(p, q) = (-Mp, g(p, q)).
Let Ω ⊆ [0, ∞) × C_T(R^m × R^s) be open and assume that deg(F, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (3.1) that meets F^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.
Proof. Consider the transformation (3.2). As discussed above, in the new coordinates (ξ, η) equation (3.1) becomes (3.4), which we write as (3.6), where F is defined as in (3.5). Consider also the homeomorphism H given by H(λ, (x, y)) = (λ, (ξ, η)), with ξ and η given by (3.2). Clearly H establishes a homeomorphism between the space of T-pairs of (3.6) and the space of T-pairs of (3.4) which preserves triviality, in the sense that H takes trivial T-pairs of (3.6) to trivial ones of (3.4) and, vice versa, H^{-1} makes trivial T-pairs of (3.4) correspond to trivial ones of (3.6). Let W = H(Ω). Applying Theorem 2.1 we get the existence of a connected set, say Υ, of nontrivial T-pairs for (3.6) that meets F^{-1}(0) ∩ W and cannot be both bounded and contained in W. One sees immediately that Γ = H^{-1}(Υ) has the required properties.
In the following consequence of Theorem 3.1 we further assume that M is nonsingular and use the properties of the Brouwer degree to get a continuation result under the sole assumption that [g(0, ⋅)]^{-1}(0) ∩ Ω_# is a nonempty and compact subset of R^m × R^s.
Corollary 3.2. Let f, g, A and B be as in Theorem 3.1, with M nonsingular, and let Ω ⊆ [0, ∞) × C_T(R^m × R^s) be open and such that [g(0, ⋅)]^{-1}(0) ∩ Ω_# is nonempty and compact. Then there exists a connected set Γ of nontrivial T-pairs for (3.1) that meets F^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.
Proof. Let F be as in the assertion of Theorem 3.1. Since the first component of F is the nonsingular linear map p ↦ -Mp, the reduction property of the Brouwer degree implies deg(F, Ω_#) = sign(det(-M)) ⋅ deg(g(0, ⋅), V), where V = {q ∈ R^s ∶ (0, q) ∈ Ω_#}. Observe now that, since ∂_2 g(ξ, η) is never singular and R^m × R^s is connected, det ∂_2 g has constant sign; hence deg(g(0, ⋅), V) equals, up to a sign, the number of points of [g(0, ⋅)]^{-1}(0) ∩ V, which is finite and nonzero. The assertion now follows from Theorem 3.1.
In the next result we assume M = 0 and apply Theorem 2.2. Theorem 3.3. Let f, g, A and B be as above. Assume that A(t)Ȧ(t)^⊤ is identically zero and define ω∶ R^m × R^s → R^m × R^s by ω(p, q) = ((1/T)∫_0^T A(t)f(t, A^⊤(t)p, B^{-1}(t)q) dt, g(p, q)). Let Ω ⊆ [0, ∞) × C_T(R^m × R^s) be open and assume that deg(ω, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (3.1) that meets ω^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.
Proof. The assertion follows from Theorem 2.2, with the same proof as that of Theorem 3.1.
Remark 3.5. Notice that a similar coordinate transformation applies also to a slightly different situation. Consider the following DAE, where A, B, f and g are as in (3.1) and H is a matrix that commutes with A. Suppose, as above, that M ∶= A(t)Ȧ(t)^⊤ is constant (not necessarily invertible) and apply the transformation as indicated above; the resulting equation is again of a form to which the results of Section 2 apply. Example 3.6. Consider the following DAE, with the maps involved defined as indicated, respectively. Clearly, as in Remark 3.5, Equation (3.10) becomes (3.11).

Equation (3.11) is of the form considered in Corollary 3.2.
In our next example we consider periodic perturbations of a class of semi-linear DAEs (semi-linear DAEs find practical applications in robotics and in electrical circuit modeling; see, e.g., [7,9]). We will restrict ourselves to the case when the equation has a particular 'separated variables' form (3.12), where F∶ R → R^{n×n}, C∶ R → R^{n×n} and S∶ R^n → R^n are continuous maps. Further, we assume that F and C are T-periodic, for some given T > 0.
Example 3.7. The following orthogonal matrices P and Q realize a singular value decomposition of E. Then, setting x = Q(x, y)^⊤ with x, y ∈ R^2 and multiplying (3.12) by P^⊤ on the left, we can rewrite Equation (3.12) accordingly. The resulting DAE, where we have put x = (x_1, x_2) and y = (y_1, y_2), is of the form (3.1) considered in Theorem 3.3. Observe that the map ω considered there is given by ω(x_1, x_2; y_1, y_2) = (y_1, 3y_1 + 2y_2, x_1 + y_1, x_2 + y_2).
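Since the map ω in this example is linear, its Brouwer degree on a neighborhood of the origin reduces to the sign of its determinant. The following sketch (ours, pure Python, no external libraries) encodes ω as a matrix acting on (x_1, x_2, y_1, y_2) and computes the determinant by cofactor expansion; it comes out positive, so the degree is nonzero and Theorem 3.3 applies.

```python
def det(m):
    # determinant by cofactor expansion along the first row (fine for small matrices)
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

# The linear map omega(x1, x2; y1, y2) = (y1, 3y1 + 2y2, x1 + y1, x2 + y2)
# written as a matrix acting on the vector (x1, x2, y1, y2):
W = [[0, 0, 1, 0],
     [0, 0, 3, 2],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
d = det(W)
assert d == 2  # positive determinant, so deg(omega, .) = sign(det W) = 1 != 0
```

For a nonlinear ω the degree is no longer just a determinant sign, but this computation covers the linear case arising here.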
The example considered above is a particular case of a more general procedure that we now roughly sketch. Take E, F and C as in Equation (3.12), and let rank E = r. Assume that n = 2r and that E, F and C satisfy the conditions (3.14). Let P, Q be orthogonal matrices realizing a singular value decomposition of E. Multiplying (3.12) by P^⊤ on the left and putting x = Q(x, y)^⊤ with x, y ∈ R^r, we get, as in Example 3.7, equation (3.15). Since P and Q realize a singular value decomposition of E, and since E, F and C satisfy equations (3.14), an inspection of the proof of [1, Lemma 5.5] (see also [2]) shows that the corresponding block identities hold for all t. Then we can rewrite Equation (3.15) accordingly and, if F_3(t) is invertible for all t, reduce it to an equation of the form (3.1).
3.2. Second order DAEs. Let us now focus on parametrized second order DAEs and proceed as in the first order case. Consider
(3.17) ẍ = λf(t, x, y, ẋ, ẏ), λ ≥ 0, g(A(t)x, B(t)y) = 0,
where ∂_2 g(ξ, η) is invertible for all (ξ, η), and the T-periodic matrix-valued maps A∶ R → O(R^m) and B∶ R → GL(R^s) are of class C^2 and C^1, respectively. As in the first order case we consider the following change of coordinates for all t: ξ = A(t)x, η = B(t)y. We can rewrite the first of these equations as x(t) = A^⊤(t)ξ(t) and, taking the derivative, we get ẋ = Ȧ^⊤ξ + A^⊤ξ̇ and, differentiating once more, ẍ = Ä^⊤ξ + 2Ȧ^⊤ξ̇ + A^⊤ξ̈. Let us multiply by A on the left the second of these equations. Reordering (and omitting the explicit dependence on t) we get ξ̈ = Aẍ - AÄ^⊤ξ - 2AȦ^⊤ξ̇. Moreover, since y(t) = B^{-1}(t)η(t), we have ẏ = ((d/dt)B(t)^{-1})η + B^{-1}η̇. Thus we can rewrite our DAE, in the new coordinates, as an equation whose right-hand side F is clearly continuous and T-periodic. Now, by Proposition 4.3 (see Appendix), we have that if M ∶= A(t)Ȧ(t)^⊤ is constant (and nonsingular), then A(t)Ä(t)^⊤ is constant (and nonsingular) as well, as it is equal to M^2. Thus, as for first order equations, provided that A(t)Ȧ(t)^⊤ is constant, this DAE can be treated with the methods of the previous section.
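The identity A(t)Ä(t)^⊤ = M^2 invoked above can be tested numerically. The sketch below (illustrative, with A(t) the planar rotation by ωt, a choice of ours) checks that AÄ^⊤ equals M^2 = -ω^2·I at several times.

```python
import math

w = 1.3  # angular speed (our choice)

def mats(t):
    # returns A(t) and its second derivative for A(t) = rotation by w*t
    c, s = math.cos(w * t), math.sin(w * t)
    A = [[c, -s], [s, c]]
    Add = [[-w * w * c, w * w * s], [-w * w * s, -w * w * c]]  # second derivative
    return A, Add

def mul_T(P, Q):  # P * Q^T for 2x2 matrices
    return [[sum(P[i][k] * Q[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[0.0, w], [-w, 0.0]]  # A(t)*Adot(t)^T for this A: constant and skew
M2 = [[sum(M[i][k] * M[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]   # M^2 = -w^2 * identity
for t in (0.0, 0.4, 2.2):
    A, Add = mats(t)
    P = mul_T(A, Add)      # A(t) * Addot(t)^T
    assert all(abs(P[i][j] - M2[i][j]) < 1e-12
               for i in range(2) for j in range(2))
```

Here M is nonsingular (det M = ω^2 > 0), so M^2 is nonsingular as well, matching the parenthetical remark above.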
It is also worth noticing that (d/dt)B(t)^{-1}, which appears in the expression of F, can be conveniently expressed as -B(t)^{-1}Ḃ(t)B(t)^{-1}. This trivial fact is readily established by differentiating the relation B(t)B(t)^{-1} = I.
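This formula for the derivative of B(t)^{-1} is easy to confirm numerically. Below is an illustrative check (the sample matrix B(t) is our choice; any smooth invertible matrix works) comparing a central difference of t ↦ B(t)^{-1} with -B^{-1}ḂB^{-1}.

```python
import math

def B(t):  # a sample smooth, everywhere invertible 2x2 matrix (our choice)
    return [[2.0 + math.sin(t), 1.0], [0.0, 1.0 + 0.5 * math.cos(t)]]

def Bdot(t):  # its derivative
    return [[math.cos(t), 0.0], [0.0, -0.5 * math.sin(t)]]

def inv2(m):  # inverse of a 2x2 matrix
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def mul(P, Q):  # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t, h = 0.8, 1e-6
# central difference of t -> B(t)^{-1}
lhs = [[(inv2(B(t + h))[i][j] - inv2(B(t - h))[i][j]) / (2 * h)
        for j in range(2)] for i in range(2)]
# closed form: -B^{-1} Bdot B^{-1}
Binv = inv2(B(t))
closed = mul(mul(Binv, Bdot(t)), Binv)
err = max(abs(lhs[i][j] + closed[i][j]) for i in range(2) for j in range(2))
assert err < 1e-5
```

Note that det B(t) = (2 + sin t)(1 + cos(t)/2) > 0 for all t, so the inverse always exists here.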
Proceeding as in the previous subsection, and using Theorems 2.4 and 2.5 in place of Theorems 2.1 and 2.2, we get the following results, remarkably similar to Theorems 3.1 and 3.3 and to Corollary 3.2. Theorem 3.8. Let f, g, A and B be as above. Theorem 3.10. Let f, g, A and B be as above. Assume that A(t)Ȧ(t)^⊤ is identically zero and let ω be as in Theorem 3.3. Let Ω ⊆ [0, ∞) × C^1_T(R^m × R^s) be open and assume that deg(ω, Ω_#) is well defined and nonzero. Then there exists a connected set Γ of nontrivial T-pairs for (3.17) that meets ω^{-1}(0) ∩ Ω and cannot be both bounded and contained in Ω.
In the next example we consider the same time-dependent constraint as in Example 3.4, but in the case of second-order DAEs.
Example 3.11. Let f be as in Example 3.4 and consider the DAE ẍ = λf(t, x, y), λ ≥ 0, subject to the same time-dependent constraint as in Example 3.4. Applying the coordinate transformation described above we rewrite our DAE in the new coordinates. Then, since deg(F, R^3) = 1 ≠ 0, Theorem 3.8 yields an unbounded connected set of 2π-periodic pairs emanating from (0; (0, 0; 0)) ∈ [0, ∞) × R^2 × R (regarded as a 2π-pair). Remark 3.12. As in the first order case, our coordinate transformation applies also to a slightly different situation. Consider the following DAE, (3.19), where A, B, f and g are as in (3.17) and H_i, i = 1, 2, are matrices that commute with A. Suppose, as above, that M ∶= A(t)Ȧ(t)^⊤ is constant (not necessarily invertible) and apply the transformation as indicated above. Equation (3.19) becomes (3.20), with F as in (3.18), so that the results of Subsection 2.2 are applicable to (3.20).
Remark 3.13. Let us consider the second order DAE
(3.21) (d^2/dt^2)[C(t)x] = λf(t, x, y, ẋ, ẏ), λ ≥ 0, g(A(t)x, B(t)y) = 0,
where f, A and B are as in (3.17) and t ↦ C(t) ∈ O(R^m) is C^2 and T-periodic. We also assume that C has the same property as A, that is, C(t)Ċ(t)^⊤ is constant. Expanding the derivative on the left-hand side of the first equation in (3.21) and using the fact that C(t) ∈ O(R^m) for all t ∈ R, we rewrite (3.21) as follows:
(3.22) ẍ = -K_2 x - 2K_1 ẋ + λC^⊤ f(t, x, y, ẋ, ẏ), g(Ax, By) = 0,
where, to keep the notation concise, the explicit dependence on t of A, B and C is omitted. Proposition 4.3 shows that K_1 ∶= C(t)^⊤Ċ(t) is constant and, by Remark 4.4 (3), it follows that K_2 ∶= C(t)^⊤C̈(t) is constant as well, being equal to -K_1^⊤K_1 = K_1^2 (recall that K_1 is skew-symmetric, since C(t) is orthogonal). Hence, (3.22) is of the form (3.19). Notice that if we assume that A commutes with K_1, then it commutes with K_2 as well. In conclusion, if K_1 commutes with A(t) for all t ∈ R, Remark 3.12 applies with H_1 = -2K_1 and H_2 = -K_2.

Appendix: some lemmas of Matrix Analysis
This section gathers, for reference purposes, a few simple facts (possibly well known) concerning time-dependent matrices. Lemma 4.1. Let t ↦ A(t) be a C^2 square-matrix-valued function, and suppose that the map t ↦ A(t)Ȧ^⊤(t) is constant. Then A(t)Ä^⊤(t) = Ä(t)A^⊤(t) = -Ȧ(t)Ȧ^⊤(t) for all t. Proof. For the sake of simplicity, we drop the explicit indication of the dependence of A on t.
Let us put M = AȦ^⊤. Then ȦA^⊤ = (AȦ^⊤)^⊤ = M^⊤ is also constant. Taking the derivative with respect to t of both these relations, we get
(4.1) ȦȦ^⊤ + AÄ^⊤ = 0,
(4.2) ÄA^⊤ + ȦȦ^⊤ = 0.
From (4.1) and (4.2) it follows that AÄ^⊤ = ÄA^⊤ = -ȦȦ^⊤, whence the assertion.
Observe that, under the hypothesis of Lemma 4.1, since ȦȦ^⊤ is symmetric, Equation (4.1) implies the symmetry of AÄ^⊤.
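Lemma 4.1 and the symmetry remark do not require orthogonality. As an illustrative check (our construction), take A(t) = 2R(ωt), twice a planar rotation: this A is not orthogonal, yet AȦ^⊤ is still constant, and numerically AÄ^⊤ = ÄA^⊤ = -ȦȦ^⊤, a symmetric matrix.

```python
import math

w = 0.9  # angular speed (our choice)

def D(t, k):
    # k-th derivative of A(t) = 2*R(w*t): each derivative advances the
    # rotation angle by pi/2 and multiplies by w
    th = w * t + k * math.pi / 2
    f = 2.0 * w ** k
    return [[f * math.cos(th), -f * math.sin(th)],
            [f * math.sin(th), f * math.cos(th)]]

def mul_T(P, Q):  # P * Q^T for 2x2 matrices
    return [[sum(P[i][k] * Q[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 1.7
A, Ad, Add = D(t, 0), D(t, 1), D(t, 2)
P1 = mul_T(A, Add)   # A * Addot^T
P2 = mul_T(Add, A)   # Addot * A^T
P3 = [[-mul_T(Ad, Ad)[i][j] for j in range(2)] for i in range(2)]  # -Adot*Adot^T
assert all(abs(P1[i][j] - P2[i][j]) < 1e-9 for i in range(2) for j in range(2))
assert all(abs(P1[i][j] - P3[i][j]) < 1e-9 for i in range(2) for j in range(2))
assert abs(P1[0][1] - P1[1][0]) < 1e-9  # A * Addot^T is symmetric
```

For this particular A one finds AÄ^⊤ = -4ω^2·I, consistent with all three expressions.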
We conclude this technical section with a curious remark, illustrated by the following example. Nevertheless, one can prove the following fact: Proposition 4.5. Let t ↦ A(t) be a C^2 square-matrix-valued function and assume that A(t) is orthogonal for all t. Then A(t)^⊤Ȧ(t) is constant if and only if A(t)Ȧ(t)^⊤ is constant.
Proof. Let us first prove that if M ∶= A(t)Ȧ(t) ⊤ is constant then A(t) ⊤Ȧ (t) is constant as well. As above, for the sake of simplicity, we drop the explicit indication of the dependence of A on t.
Conversely, if A(t)^⊤Ȧ(t) is constant, a similar proof shows that A(t)Ȧ(t)^⊤ is constant too.
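An illustrative instance of Proposition 4.5 (our construction): with A(t) = R_z(ωt)C, where C is a fixed rotation about the x-axis, both AȦ^⊤ and A^⊤Ȧ are constant, as the proposition predicts; note that the two constant matrices are conjugate to each other and, in general, different.

```python
import math

w, a = 0.6, 0.8  # angular speed and fixed tilt angle (our choices)

def mul(P, Q):  # 3x3 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def T(P):  # transpose of a 3x3 matrix
    return [[P[j][i] for j in range(3)] for i in range(3)]

C = [[1.0, 0.0, 0.0],
     [0.0, math.cos(a), -math.sin(a)],
     [0.0, math.sin(a), math.cos(a)]]  # fixed rotation about the x-axis

def A(t):  # A(t) = Rz(w*t) * C, orthogonal for all t
    c, s = math.cos(w * t), math.sin(w * t)
    Rz = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    return mul(Rz, C)

def Adot(t):  # derivative of A
    c, s = math.cos(w * t), math.sin(w * t)
    Rzd = [[-w * s, -w * c, 0.0], [w * c, -w * s, 0.0], [0.0, 0.0, 0.0]]
    return mul(Rzd, C)

K1 = mul(A(0.0), T(Adot(0.0)))  # A * Adot^T at t = 0
K2 = mul(T(A(0.0)), Adot(0.0))  # A^T * Adot at t = 0
for t in (0.5, 1.9):
    M1 = mul(A(t), T(Adot(t)))
    M2 = mul(T(A(t)), Adot(t))
    assert all(abs(M1[i][j] - K1[i][j]) < 1e-9
               for i in range(3) for j in range(3))
    assert all(abs(M2[i][j] - K2[i][j]) < 1e-9
               for i in range(3) for j in range(3))
```

Here AȦ^⊤ = R_zṘ_z^⊤ while A^⊤Ȧ = C^⊤(R_z^⊤Ṙ_z)C: both constant skew matrices, related by conjugation with C.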