
# Periodic solutions of semi-explicit differential-algebraic equations with time-dependent constraints

## Abstract

In this paper we investigate the properties of the set of T-periodic solutions of semi-explicit parametrized differential-algebraic equations with non-autonomous constraints of a particular type. We provide simple, degree-theoretic conditions for the existence of branches of T-periodic solutions of the considered equations. Our approach is based on topological arguments as regards differential equations on implicitly defined manifolds, combined with elementary facts of matrix analysis.

MSC: 34A09, 34C25, 34C40.

## 1 Introduction

Several mathematical models arising from physical and engineering problems can be described in terms of differential-algebraic equations (DAEs). Because of this, in recent years there has been considerable interest in these equations from the point of view of both pure and applied mathematicians. Besides the more genuinely modelistic or numerical approaches, many books and papers treat DAEs from an analytical perspective. Of all those, in order to avoid an impossibly long and necessarily incomplete list, we only mention – and references therein.

A relevant case is represented by first-order semi-explicit DAEs in Hessenberg form (see, e.g., ) that is,

$\begin{cases} \dot{x} = f(t,x,y), \\ G(t,x,y) = 0, \end{cases}$
(1.1)

where $f:\mathbb{R}\times\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m$ is a continuous map, and $G:\mathbb{R}\times\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^s$ is sufficiently smooth. If we assume that the partial derivative $\partial_3 G$ of $G$ with respect to the third variable $y$ is invertible, then (1.1) is said to be of index 1.
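For readers who wish to experiment numerically, the index-1 condition can be checked pointwise by finite differences. The following sketch is illustrative and not taken from the paper; the scalar constraint $G(t,x,y)=y^3+y-x$ is a hypothetical example with $m=s=1$, for which $\partial_3 G = 3y^2+1$ is invertible everywhere.

```python
# Hypothetical scalar constraint (m = s = 1): G(t, x, y) = y^3 + y - x.
# Index-1 condition: the partial derivative of G in y must be invertible
# (nonzero, since s = 1) at every point.
def G(t, x, y):
    return y ** 3 + y - x

def dG_dy(t, x, y, h=1e-6):
    # central finite difference in the third variable
    return (G(t, x, y + h) - G(t, x, y - h)) / (2 * h)

# dG/dy = 3y^2 + 1 >= 1, so this constraint is index 1 everywhere.
for y in (-2.0, 0.0, 1.5):
    assert abs(dG_dy(0.0, 0.0, y) - (3 * y ** 2 + 1)) < 1e-4
    assert dG_dy(0.0, 0.0, y) != 0
```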

In this paper we are concerned with a parametrized special case of (1.1). In fact, we assume that the constraint $G$ has the form

$G(t,x,y) = g\bigl(A(t)x,\, B(t)y\bigr),$

where $g:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^s$ is $C^\infty$, and the square-matrix-valued maps $A:\mathbb{R}\to O(\mathbb{R}^m)$ and $B:\mathbb{R}\to GL(\mathbb{R}^s)$ are continuous. Here $O(\mathbb{R}^m)$ denotes the group of orthogonal $m\times m$ matrices and $GL(\mathbb{R}^s)$ the group of invertible $s\times s$ ones.

Namely, we consider parametrized DAEs of the following form:

$\begin{cases} \dot{x} = \lambda f(t,x,y), & \lambda \ge 0, \\ g\bigl(A(t)x,\,B(t)y\bigr) = 0, \end{cases}$
(1.2)

with f as in (1.1), and we assume that $∂ 2 g(x,y)$ is invertible for all $(x,y)∈ R m × R s$ and (for technical reasons) that A is of class $C 1$.

We also treat, in parallel, the following parametrized second-order DAEs:

$\begin{cases} \ddot{x} = \lambda f(t,x,y,\dot{x},\dot{y}), & \lambda \ge 0, \\ g\bigl(A(t)x,\,B(t)y\bigr) = 0. \end{cases}$
(1.3)

In this case we assume the matrix-valued maps $A$ and $B$ to be of class $C^2$ and $C^1$, respectively. The latter type of equations, in particular, may be used to represent some nontrivial physical systems, for instance constrained systems (see e.g. ).

We will assume throughout the paper that the matrix-valued function A satisfies the following property:

$A(t)\dot{A}(t)^{\top} \text{ is constant, i.e., independent of } t.$
(1.4)

This assumption might seem unnatural, but it is not. To understand why, consider the case $m=3$. If $\{e_1,e_2,e_3\}$ is a fixed reference frame in $\mathbb{R}^3$ and $T(t)=\{A(t)e_1, A(t)e_2, A(t)e_3\}$ is a moving frame, our assumption is equivalent to requiring that the angular velocity of $T$ be constant in time. This is, in fact, an immediate consequence of the definition of angular velocity. An entirely similar statement holds for $m=2$.
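As a concrete illustration (not taken from the paper), consider a planar frame rotating at a constant rate $\omega$; property (1.4) then holds, with $A(t)\dot{A}(t)^{\top}$ equal to the constant matrix $\begin{pmatrix}0&\omega\\-\omega&0\end{pmatrix}$. The sketch below checks this by finite differences.

```python
import math

# Illustrative check of property (1.4): a planar frame rotating at the
# constant rate omega (a hypothetical choice, not from the paper).
omega = 0.7

def A(t):
    c, s = math.cos(omega * t), math.sin(omega * t)
    return [[c, -s], [s, c]]

def Adot(t, h=1e-6):           # entrywise central difference
    Ap, Am = A(t + h), A(t - h)
    return [[(Ap[i][j] - Am[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def mul_t(X, Y):               # X * Y^T for 2x2 matrices
    return [[sum(X[i][k] * Y[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

M0 = mul_t(A(0.0), Adot(0.0))
for t in (0.3, 1.1, 2.9):
    Mt = mul_t(A(t), Adot(t))  # A(t) * dA/dt(t)^T should not depend on t
    assert all(abs(Mt[i][j] - M0[i][j]) < 1e-5 for i in range(2) for j in range(2))
assert abs(M0[0][1] - omega) < 1e-5 and abs(M0[1][0] + omega) < 1e-5
```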

Furthermore, in this paper we will always assume that, for some given $T>0$, the map f is T-periodic in the first variable and that A and B are periodic of the same period T. Following the approach of –, we study qualitative properties of the set of T-periodic solutions of (1.2) and (1.3). Roughly speaking, we show the existence of an unbounded connected component of ‘nontrivial’ T-periodic solutions of (1.2) or (1.3) emanating from the set of the ‘trivial’ ones. In this setting, a trivial solution is a solution which is constant with respect to the moving frame defined by the time-dependent change of variable $(x,y)↦(A(t)x,B(t)y)$. However, precise statements will be given in Section 3.1 for first-order equations and in Section 3.2 for second-order ones. We also show, through some examples and remarks, how our constructions can be extended to include several equations of different forms.

Our continuation results are in the spirit of analogous ones by Furi and Pera for parametrized first- and second-order equations on differentiable manifolds (for more details see the survey ) and could be considered, in some sense, as consequences of recent results obtained by the last two authors in –. However, we wish to point out the following facts. First of all, while the continuation results on differentiable manifolds by Furi and Pera require the knowledge of the degree (often called characteristic or rotation) of suitable tangent vector fields, here (as in –) we give conditions only in terms of the well-known Brouwer degree, which is also easier to compute explicitly. On the other hand, in the present paper we tackle the case of time-dependent constraints (even if of a peculiar form). In other words, our results can be regarded as concerning ODEs on particular T-periodically moving manifolds defined implicitly. As already pointed out, we slightly modify and adapt to the present context the concept of ‘trivial’ and ‘nontrivial’ T-periodic solution. As far as we know, the techniques of Furi and Pera have never been applied to moving manifolds, and this novelty is our main original contribution to the subject.

This paper is organized as follows. In Section 2 we collect the preliminaries needed to approach the DAEs in (1.2) and (1.3). In Section 3 we give our main results and we get topological information on the set of T-periodic pairs to the considered equations; examples of applications of our methods are provided. Finally, in the Appendix, we give the proofs of the technical results of matrix analysis used throughout the paper.

## 2 Notation and preliminary results

Throughout the paper, $C T ( R k )$ will denote the Banach space of all the T-periodic continuous maps $ζ:R→ R k$ with the usual supremum norm, and $C T 1 ( R k )$ will be the Banach space of all the T-periodic $C 1$ maps $ζ:R→ R k$ with the $C 1$ norm.

We will make use of the (extended) theory of the Brouwer degree for maps between open sets of $\mathbb{R}^k$. Namely, we say that a triple $(F,U,z)$, with $z\in\mathbb{R}^k$ and $F$ a proper map defined in some neighborhood of the open set $U\subseteq\mathbb{R}^k$, is admissible if $F^{-1}(z)\cap U$ is compact. For any admissible triple $(F,U,z)$, the Brouwer degree $\deg_B(F,U,z)$ of $F$ in $U$ with respect to $z$ is an integer that, roughly speaking, counts algebraically the elements of $F^{-1}(z)$ which lie in $U$. See e.g.  for a broader definition in the more general case of maps between oriented manifolds, or  for a quick introduction. Since in this paper the target point $z$ will always be the origin, for the sake of simplicity we will omit it and write $\deg(F,U)$ instead of $\deg_B(F,U,0)$.

### 2.1 First-order DAEs

Let us consider semi-explicit DAEs, depending on a parameter $λ≥0$, of the following forms:

$\begin{cases} \dot{x} = f(x,y) + \lambda h(t,x,y), \\ g(x,y) = 0, \end{cases}$
(2.1)

and

$\begin{cases} \dot{x} = \lambda h(t,x,y), \\ g(x,y) = 0, \end{cases}$
(2.2)

where we assume that $f: R m × R s → R m$ and $h:R× R m × R s → R m$ are continuous maps, $h$ is T-periodic in the first variable, and $g: R m × R s → R s$ is $C ∞$ and such that $∂ 2 g(x,y)$ is invertible for all $(x,y)$. Notice that, consequently, $M:= g − 1 (0)$ is a closed submanifold of $R m × R s$. Furthermore observe that, even if (2.2) can be considered as a particular case of (2.1) (i.e. with $f(x,y)=0$ identically), for our purposes the two equations need to be treated separately.

Given $\lambda\ge 0$, by a solution of (2.1) we mean a pair of $C^1$ functions $x$ and $y$ defined on an interval $I$ such that, for all $t\in I$, $\dot{x}(t)=f(x(t),y(t))+\lambda h(t,x(t),y(t))$ and $g(x(t),y(t))=0$. The notion of solution of (2.2) is analogous. Notice that one might be tempted to require only the continuity of $y$; in fact, if $x$ is $C^1$, the assumptions on $g$ together with the implicit function theorem imply that $y$ is $C^1$ as well.
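A minimal sketch (not from the paper) of how the constraint determines $y$: with the hypothetical scalar constraint $g(x,y)=y^3+y-x^2$ we have $\partial_2 g = 3y^2+1>0$, so for each $x$ the equation $g(x,y)=0$ has a unique solution $y(x)$, computable by Newton's method.

```python
# Newton's method for the hypothetical scalar constraint g(x, y) = y^3 + y - x^2.
# Since dg/dy = 3y^2 + 1 never vanishes, the iteration is well defined everywhere.
def solve_y(x, y0=0.0, tol=1e-12):
    y = y0
    for _ in range(100):
        f = y ** 3 + y - x ** 2
        if abs(f) < tol:
            break
        y -= f / (3 * y ** 2 + 1)   # Newton step
    return y

y1 = solve_y(1.0)                   # solves y^3 + y = 1
assert abs(y1 ** 3 + y1 - 1.0) < 1e-10
```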

In this section we recall two results from ,  and  (see also ,  for more general results) about the sets of T-pairs of (2.1) and of (2.2), namely, of those pairs $(λ;(x,y))∈[0,∞)× C T ( R m × R s )$ with $(x,y)$ a T-periodic solution of (2.1) and of (2.2), respectively. Recall that a T-pair $(λ;(x,y))$ of (2.1) or of (2.2) is said to be trivial if $λ=0$ and $(x,y)$ is constant. Observe that any T-pair of (2.2) of the form $(0;(x,y))$ is necessarily trivial, because all solutions of (2.2) corresponding to $λ=0$ are constant (because $∂ 2 g(x,y)$ is nonsingular). However, the same statement is not true for (2.1) as shown by the following trivial example with $m=2$ and $s=1$:

$\begin{cases} \dot{x}_1 = x_2, \\ \dot{x}_2 = -x_1, \\ y = 0. \end{cases}$

For the sake of simplicity we make some conventions. We will regard every space as its image in the following diagram of natural inclusions: (2.3)

In particular, we will identify $R m × R s$ with its image in $C T ( R m × R s )$ under the embedding which associates to any $(p,q)∈ R m × R s$ the map $( p ¯ , q ¯ )∈ C T ( R m × R s )$ constantly equal to $(p,q)$. Moreover, we will regard $R m × R s$ as the slice ${0}× R m × R s ⊂[0,∞)× R m × R s$ and, analogously, $C T ( R m × R s )$ as ${0}× C T ( R m × R s )$. We point out that the images of the above inclusions are closed.

For simplicity, given $Ω⊆[0,∞)× C T ( R m × R s )$, we will denote by $Ω # ⊆ R m × R s$ the set consisting of all pairs $(p,q)$ such that $(0;( p ¯ , q ¯ ))∈Ω$.

The following is a consequence of Theorem 5.1 in .

#### Theorem 2.1

Let $f$, $h$, $g$ be as above. Define $F:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by

$F(x,y) = \bigl(f(x,y),\, g(x,y)\bigr).$

Let $\Omega\subseteq[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(F,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (2.1) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $F^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

#### Sketch of the proof

Taking $U=\mathbb{R}^m\times\mathbb{R}^s$ in Theorem 5.1 of , we see that there exists a connected set $G$ of nontrivial $T$-pairs for (2.1) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $F^{-1}(0,0)\cap\Omega^{\#}$ (which, according to the identifications in (2.3), means $\{0\}\times(F^{-1}(0,0)\cap\Omega^{\#})$) and is not contained in any compact subset of $\Omega$. Let $\Gamma$ be the connected component of the set of nontrivial $T$-pairs for (2.1) that contains $G$. Observe that, since $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ is a complete metric space, the Ascoli–Arzelà theorem implies that any bounded set of $T$-pairs for (2.1) is actually relatively compact. Hence, if $\Gamma$ is bounded then it is compact (being closed); but $G$ cannot be contained in a compact subset of $\Omega$. This implies that $\Gamma$ cannot be both bounded and contained in $\Omega$. □

The same argument of the above proof shows that the following is a consequence of Theorem 2.2 in .

#### Theorem 2.2

Let $h$ and $g$ be as above. Define $\omega:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by

$\omega(x,y) = \left(\frac{1}{T}\int_0^T h(t,x,y)\,dt,\; g(x,y)\right).$

Let $\Omega\subseteq[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(\omega,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (2.2) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $\omega^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

### 2.2 Second-order DAEs

Consider the following second-order parametrized DAEs:

$\begin{cases} \ddot{x} = f(x,y,\dot{x},\dot{y}) + \lambda h(t,x,y,\dot{x},\dot{y}), \\ g(x,y) = 0, \end{cases}$
(2.4)

and

$\begin{cases} \ddot{x} = \lambda h(t,x,y,\dot{x},\dot{y}), \\ g(x,y) = 0, \end{cases}$
(2.5)

where we assume that $f: R m × R s × R m × R s → R m$ and $h:R× R m × R s × R m × R s → R m$ are continuous maps, $h$ is T-periodic in the first variable, and $g: R m × R s → R s$ is $C ∞$ and such that $∂ 2 g(x,y)$ is invertible for all $(x,y)$.

Given $\lambda\ge 0$, by a solution of (2.4) we mean a pair of $C^2$ functions $x$ and $y$ defined on an interval $I$ such that, for all $t\in I$, $\ddot{x}(t)=f(x(t),y(t),\dot{x}(t),\dot{y}(t))+\lambda h(t,x(t),y(t),\dot{x}(t),\dot{y}(t))$ and $g(x(t),y(t))=0$. Notice that, as in the first-order case, it is equivalent to require only the continuity of $y$.

The structure of the set of solution pairs of (2.4) and of (2.5) has been studied in . As in Section 2.1, we recall that by a $T$-pair of (2.4) or of (2.5) we mean a pair $(\lambda;(x,y))\in[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ with $(x,y)$ a $T$-periodic solution of (2.4) or of (2.5), respectively. Again, a $T$-pair $(\lambda;(x,y))$ of (2.4) or of (2.5) is said to be trivial if $\lambda=0$ and $(x,y)$ is constant. Unlike in the first-order case, it is not necessarily true that any $T$-pair of (2.5) of the form $(0;(x,y))$ is trivial. In fact, if $(x,y)$ were a closed geodesic of the manifold $M:=g^{-1}(0)$ traversed with appropriate initial velocity, then $(0;(x,y))$ would be a nontrivial $T$-pair of (2.5). Similarly, it is not necessarily true that any $T$-pair of (2.4) of the form $(0;(x,y))$ is trivial.

As in Section 2.1, for simplicity we will regard every space as its image in the following diagram of natural inclusions: (2.6)

with the obvious analogous identifications.

Again, given $Ω⊆[0,∞)× C T 1 ( R m × R s )$, we will denote by $Ω # ⊆ R m × R s$ the set consisting of all pairs $(p,q)$ such that $(0;( p ¯ , q ¯ ))∈Ω$.

The next results are straightforward consequences of Corollary 5.2 and Corollary 5.3 in , respectively (the arguments are similar to the proof of Theorem 2.1).

#### Theorem 2.3

Let $f$, $h$, $g$ be as above. Define $F:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by

$F(x,y) = \bigl(f_0(x,y),\, g(x,y)\bigr),$

where $f_0(x,y):=f(x,y,0,0)$. Let $\Omega\subseteq[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(F,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (2.4) whose closure in $[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ meets $F^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

#### Theorem 2.4

Let $h$ and $g$ be as above. Define $\omega:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by

$\omega(x,y) = \left(\frac{1}{T}\int_0^T h_0(t,x,y)\,dt,\; g(x,y)\right),$

where $h_0(t,x,y):=h(t,x,y,0,0)$. Let $\Omega\subseteq[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(\omega,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (2.5) whose closure in $[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ meets $\omega^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

#### Remark 2.5

Common manipulations employed for order reduction of differential equations, when applied to (2.4) or (2.5) might not work for deducing Theorems 2.3 or 2.4 directly from Theorems 2.1 or 2.2. In fact, those procedures usually lead to equations whose form is not suited for our first-order results. Thus, despite the similar structure of (2.4) and (2.5) to (2.1) and (2.2), respectively, the former equations seem to need a specific study.

## 3 Coordinate transformation and main results

### 3.1 First-order DAEs

We first investigate parametrized DAEs of the following form:

$\begin{cases} \dot{x} = \lambda f(t,x,y), & \lambda \ge 0, \\ g\bigl(A(t)x,\,B(t)y\bigr) = 0, \end{cases}$
(3.1)

where, as in the introduction, the map $f:\mathbb{R}\times\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m$ is continuous and $T$-periodic in the first variable, $g:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^s$ is $C^\infty$ and such that $\partial_2 g(\xi,\eta)$ is invertible for all $(\xi,\eta)$, and $A:\mathbb{R}\to O(\mathbb{R}^m)$ and $B:\mathbb{R}\to GL(\mathbb{R}^s)$ are $T$-periodic continuous (square-)matrix-valued maps. We will assume that $A$ is of class $C^1$. The constraint in (3.1) forces the motion to occur on a manifold $M:=\{(p,q)\in\mathbb{R}^m\times\mathbb{R}^s : g(p,q)=0\}$ that moves rigidly with the reference frame defined by the time-dependent change of variables $(x,y)\mapsto(A(t)x,B(t)y)$. As before, we consider the set of $T$-pairs of (3.1), namely of those pairs $(\lambda;(x,y))\in[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ with $(x,y)$ a $T$-periodic solution of (3.1). A $T$-pair $(\lambda;(x,y))$ of (3.1) is trivial if $\lambda=0$ and $t\mapsto(A(t)x(t),B(t)y(t))$ is constant. In other words, $(0;(x,y))$ is trivial if $(x,y)$ is constant with respect to the moving reference frame above, i.e., with respect to the moving manifold that constitutes the constraint. Thus, loosely speaking, we could say that in this case $(x,y)$ is constant with respect to the 'moving constraint'. Observe that when $(0;(x,y))$ is trivial, $A(t)x(t)$ must be constantly equal to $A(0)x(0)$ and, similarly, $B(t)y(t)\equiv B(0)y(0)$, so that for all $t$ we have $A(0)^{\top}A(t)x(t)=x(0)$ and $B(0)^{-1}B(t)y(t)=y(0)$. The former fact has an interesting consequence for $x$: since $\dot{x}(t)=0$, $x(t)\equiv x(0)=:x_0$ must be constant. Thus, when $(0;(x,y))$ is trivial, we have $A(0)^{\top}A(t)x_0\equiv x_0$; that is, $x_0$ is a fixed point of $A(0)^{\top}A(t)$ for all $t$.

Let us apply, for all t, a change of coordinates in $R m × R s$:

$\xi(t) = A(t)x(t), \qquad \eta(t) = B(t)y(t).$
(3.2)

Let us rewrite the first of these two equations as $x(t)=A(t)^{\top}\xi(t)$. Differentiating with respect to $t$ we get

$\dot{x}(t) = \dot{A}(t)^{\top}\xi(t) + A(t)^{\top}\dot{\xi}(t).$
(3.3)

Observe, in fact, that the operations of differentiation and transposition commute; that is,

$\bigl(\dot{A}(t)\bigr)^{\top} = \frac{d}{dt}\bigl(A(t)^{\top}\bigr).$

From (3.3) we get $\dot{\xi}(t) = -A(t)\dot{A}(t)^{\top}\xi(t) + A(t)\dot{x}(t)$. Thus, (3.1) can be rewritten in the new coordinates $(\xi,\eta)$ as follows:

$\begin{cases} \dot{\xi} = -A(t)\dot{A}(t)^{\top}\xi + \lambda F(t,\xi,\eta), & \lambda \ge 0, \\ g(\xi,\eta) = 0, \end{cases}$
(3.4)

where $F:R× R m × R s → R m$ is defined by

$F(t,\xi,\eta) = A(t)\, f\bigl(t,\, A(t)^{\top}\xi,\, B(t)^{-1}\eta\bigr).$
(3.5)

If we assume that the matrix $M:=A(t) A ˙ ( t ) ⊤$ is constant, then we can obtain continuation results for T-pairs of (3.1) as consequences of the results in the previous section.
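The identity $\dot{\xi} = -A(t)\dot{A}(t)^{\top}\xi + A(t)\dot{x}$ derived above can be spot-checked numerically. The sketch below is illustrative only: the rotation $A(t)$ and the smooth curve $x(t)$ are arbitrary choices, not taken from the paper.

```python
import math

# Finite-difference check of  xi' = -A(t) A'(t)^T xi + A(t) x',
# obtained by differentiating xi(t) = A(t) x(t) for orthogonal A(t).
def A(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def x(t):                      # an arbitrary smooth curve in R^2
    return [math.sin(2 * t), math.exp(-t)]

def matvec(M, v):
    return [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]

def d(f, t, h=1e-6):           # componentwise central difference
    fp, fm = f(t + h), f(t - h)
    return [(fp[i] - fm[i]) / (2 * h) for i in range(2)]

def xi(t):
    return matvec(A(t), x(t))

t = 0.8
AAdotT = [[0.0, 1.0], [-1.0, 0.0]]   # A(t) A'(t)^T for this choice of A
lhs = d(xi, t)
rhs = [-matvec(AAdotT, xi(t))[i] + matvec(A(t), d(x, t))[i] for i in range(2)]
assert all(abs(lhs[i] - rhs[i]) < 1e-5 for i in range(2))
```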

In the following we will adopt the same notation as in Section 2.1.

#### Theorem 3.1

Let $f$, $g$, $A$, and $B$ be as above. Assume that $M:=A(t)\dot{A}(t)^{\top}$ is constant and define $F:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by $F(x,y)=(-Mx,\,g(x,y))$. Let $\Omega\subseteq[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(F,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (3.1) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $F^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

#### Proof

Consider the transformation (3.2). As discussed above, in the new coordinates ξ, η, (3.1) becomes (3.4), which we write as

$\begin{cases} \dot{\xi} = -M\xi + \lambda F(t,\xi,\eta), \\ g(\xi,\eta) = 0, \end{cases}$
(3.6)

where $F$ is defined as in (3.5). Consider also the homeomorphism $H:[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)\to[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ given by $H(\lambda,(x,y))=(\lambda,(\xi,\eta))$, with $\xi$ and $\eta$ given by (3.2). Clearly, $H$ establishes a homeomorphism between the space $X$ of $T$-pairs of (3.1) and the space $\widetilde{X}$ of $T$-pairs of (3.6) which preserves triviality, in the sense that $H$ takes trivial $T$-pairs of (3.1) to trivial ones of (3.6) and, vice versa, $H^{-1}$ takes trivial $T$-pairs of (3.6) to trivial ones of (3.1).

Let $W=H(\Omega)$. Applying Theorem 2.1 we get the existence of a connected component, say $\Upsilon$, of nontrivial $T$-pairs for (3.6) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $F^{-1}(0,0)\cap W^{\#}$ and cannot be both bounded and contained in $W$. One sees immediately that $\Gamma=H^{-1}(\Upsilon)$ has the required properties. □

In the following consequence of Theorem 3.1 we further assume that M is nonsingular and use the properties of the Brouwer degree to get a continuation result with the sole assumption that $Ω # ∩({0}× [ g ( 0 , ⋅ ) ] − 1 (0))$ is a nonempty and compact subset of $R m × R s$.

#### Corollary 3.2

Let $f$, $g$, $A$, and $B$ be as above. Assume that $M:=A(t)\dot{A}(t)^{\top}$ is constant and nonsingular. Let $\Omega\subseteq[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ be open. Assume that the set $\Omega^{\#}\cap(\{0\}\times[g(0,\cdot)]^{-1}(0))$ is nonempty and compact. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (3.1) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $\Omega^{\#}\cap(\{0\}\times[g(0,\cdot)]^{-1}(0))$ and cannot be both bounded and contained in $\Omega$.

#### Proof

Let $F$ be as in the assertion of Theorem 3.1. Since the first component of $F$ is the nonsingular linear map $x\mapsto -Mx$, the reduction property of the Brouwer degree implies

$\deg(F,\Omega^{\#}) = (-1)^m\operatorname{sign}\det M\cdot\deg\bigl(g(0,\cdot),\,\Omega^{\#}\cap(\{0\}\times\mathbb{R}^s)\bigr).$

Observe now that, since $\partial_2 g(\xi,\eta)$ is never singular,

$\bigl|\deg\bigl(g(0,\cdot),\,\Omega^{\#}\cap(\{0\}\times\mathbb{R}^s)\bigr)\bigr| = \#\bigl(\Omega^{\#}\cap\bigl(\{0\}\times[g(0,\cdot)]^{-1}(0)\bigr)\bigr),$

which is finite and nonzero. □

In the next result we assume $M=0$ and apply Theorem 2.2.

#### Theorem 3.3

Let $f$, $g$, $A$, and $B$ be as above. Assume that $A(t)\dot{A}(t)^{\top}$ is identically zero and define $\omega:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by

$\omega(\xi,\eta) = \left(\frac{1}{T}\int_0^T A(t)\,f\bigl(t,A(t)^{\top}\xi,B(t)^{-1}\eta\bigr)\,dt,\; g(\xi,\eta)\right).$

Let $\Omega\subseteq[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(\omega,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (3.1) whose closure in $[0,\infty)\times C_T(\mathbb{R}^m\times\mathbb{R}^s)$ meets $\omega^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

#### Proof

It follows from Theorem 2.2 with the same proof as of Theorem 3.1. □

#### Example 3.4

Take $m=2$ and $s=1$. Let $f:R× R 2 ×R→ R 2$ be any continuous mapping 2π-periodic in the first variable. Consider

$\begin{cases} \dot{x} = \lambda f(t,x,y), & \lambda \ge 0, \\ y^3 + y - x_1^2 - x_2^2 - (x_1\sin t + x_2\cos t)^2 = 0, \end{cases}$
(3.7)

where $x=( x 1 , x 2 )$. It is readily verified that

$y^3 + y - x_1^2 - x_2^2 - (x_1\sin t + x_2\cos t)^2 = g\bigl(A(t)x,\, y\bigr),$

where

$A(t) := \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \quad\text{and}\quad g(p_1,p_2,q) = q^3 + q - p_1^2 - 2p_2^2.$

Thus, the constraint can be regarded as the surface obeying the equation $q 3 +q= p 1 2 +2 p 2 2$, in the space $( p 1 , p 2 ,q)$, revolving around the q axis (a full rotation takes time 2π). With the transformation (3.2) the above DAE becomes

$\begin{cases} \dot{\xi} = -M\xi + \lambda F(t,\xi,\eta), \\ g(\xi,\eta) = 0, \end{cases}$

where

$M = A(t)\dot{A}(t)^{\top} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \quad\text{and}\quad F(t,\xi,\eta) = A(t)\,f\bigl(t,\,A(t)^{\top}\xi,\,B(t)^{-1}\eta\bigr).$

Let $\Omega=[0,\infty)\times C_T(\mathbb{R}^2\times\mathbb{R})$ with $T=2\pi$. Since $M$ is nonsingular and $[g(0,\cdot)]^{-1}(0)=\{0\}$, Corollary 3.2 yields an unbounded connected component of nontrivial $2\pi$-pairs for (3.7) that meets $(0;(0,0;0))\in[0,\infty)\times\mathbb{R}^2\times\mathbb{R}$ (regarded as a $2\pi$-pair).
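The algebraic identity underlying Example 3.4 is easy to verify by machine. The following sketch (pure Python, not part of the paper) checks at a few sample points that the time-dependent constraint factors as $g(A(t)x,y)$ with the rotation $A(t)$ and the map $g$ given above.

```python
import math

# Spot check of Example 3.4: the constraint
#   y^3 + y - x1^2 - x2^2 - (x1 sin t + x2 cos t)^2
# equals g(A(t)x, y) with g(p1, p2, q) = q^3 + q - p1^2 - 2 p2^2
# and A(t) the rotation by angle t.
def constraint(t, x1, x2, y):
    return y ** 3 + y - x1 ** 2 - x2 ** 2 - (x1 * math.sin(t) + x2 * math.cos(t)) ** 2

def g(p1, p2, q):
    return q ** 3 + q - p1 ** 2 - 2 * p2 ** 2

for t, x1, x2, y in [(0.4, 1.0, -2.0, 0.5), (2.0, -0.3, 0.7, 1.2)]:
    p1 = math.cos(t) * x1 - math.sin(t) * x2   # first component of A(t)x
    p2 = math.sin(t) * x1 + math.cos(t) * x2   # second component of A(t)x
    assert abs(constraint(t, x1, x2, y) - g(p1, p2, y)) < 1e-12
```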

#### Remark 3.5

Notice that a similar coordinate transformation applies also to a slightly different situation. Consider the following DAE:

$\begin{cases} \dot{x} = Hx + \lambda f(t,x,y), & \lambda \ge 0, \\ g\bigl(A(t)x,\,B(t)y\bigr) = 0, \end{cases}$
(3.8)

where A, B, f, and g are as in (3.1) and H is a matrix that commutes with A. Suppose, as above, that $M:=A(t) A ˙ ( t ) ⊤$ is constant (not necessarily invertible) and apply the transformation as indicated above. Equation (3.8) becomes

$\begin{cases} \dot{\xi} = (H-M)\xi + \lambda F(t,\xi,\eta), & \lambda \ge 0, \\ g(\xi,\eta) = 0 \end{cases}$
(3.9)

with $F$ as in (3.5), so that the results of the previous section are applicable to (3.9).

#### Example 3.6

Consider the following DAE:

$\begin{cases} \dot{x}_1 = x_1 + \lambda f_1(t,x_1,x_2,y), \\ \dot{x}_2 = \lambda f_2(t,x_1,x_2,y), \\ y^5 + y = x_1\cos t + x_2\sin t, \end{cases}$
(3.10)

where $f_i:\mathbb{R}\times\mathbb{R}^2\times\mathbb{R}\to\mathbb{R}$, $i=1,2$, are continuous mappings $2\pi$-periodic in the first variable. If we put $x=(x_1,x_2)$, (3.10) is of the form (3.8) with

$H = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad A(t) = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}, \qquad B(t) \equiv 1,$

and $f:\mathbb{R}\times\mathbb{R}^2\times\mathbb{R}\to\mathbb{R}^2$ and $g:\mathbb{R}^2\times\mathbb{R}\to\mathbb{R}$ defined by

$f(t,x,y) = \bigl(f_1(t,x_1,x_2,y),\, f_2(t,x_1,x_2,y)\bigr) \quad\text{and}\quad g(x,y) = y + y^5 - x_1,$

respectively. Clearly, as in Remark 3.5, (3.10) becomes

$\begin{cases} \dot{\xi} = (H-M)\xi + \lambda F(t,\xi,\eta), \\ \eta + \eta^5 - \xi_1 = 0, \end{cases}$
(3.11)

where

$M = A(t)\dot{A}(t)^{\top} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad F(t,\xi,\eta) = A(t)\,f\bigl(t,\,A(t)^{\top}\xi,\,B(t)^{-1}\eta\bigr).$

Equation (3.11) is of the form considered in Corollary 3.2.

In our next example we consider periodic perturbations of a class of semi-linear DAEs; such equations find practical applications in robotics and in electrical circuit modeling (see e.g. ). We restrict ourselves to the case when the equation has a particular 'separated variables' form, that is,

$E\dot{x} = F(t)x + \lambda C(t)S(x),$
(3.12)

where $F:\mathbb{R}\to\mathbb{R}^{n\times n}$, $C:\mathbb{R}\to\mathbb{R}^{n\times n}$, and $S:\mathbb{R}^n\to\mathbb{R}^n$ are continuous maps, and $E\in\mathbb{R}^{n\times n}$ is a constant matrix (here $\mathbb{R}^{n\times n}$ denotes the set of real $n\times n$ matrices). Further, we assume that $F$ and $C$ are $T$-periodic, with $T>0$ given.

#### Example 3.7

Consider (3.12) with $n=4$ and

$E = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad F(t) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ \cos t & 1 & 0 & -\sin t \\ 0 & 0 & 0 & 0 \\ \sin t & 0 & 1 & \cos t \end{pmatrix},$

and

$C(t) = \begin{pmatrix} 2+\cos t & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 3+\sin t & 2 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad S(x) = x.$

The orthogonal matrices

$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad\text{and}\quad Q = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix}$

realize a singular value decomposition for E. In particular, we have

$P^{\top}EQ = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} =: \begin{pmatrix} \tilde{E}_1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad P^{\top}F(t)Q = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \cos t & -\sin t & 1 & 0 \\ \sin t & \cos t & 0 & 1 \end{pmatrix} =: \begin{pmatrix} 0 & 0 \\ \tilde{F}_3(t) & \tilde{F}_4(t) \end{pmatrix},$

$P^{\top}C(t)Q = \begin{pmatrix} 2+\cos t & 1 & 1 & 0 \\ 1 & 0 & 3+\sin t & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} =: \begin{pmatrix} \tilde{C}_1(t) & \tilde{C}_2(t) \\ 0 & 0 \end{pmatrix}.$

Then, setting $x=Q ( x y )$ with $x,y∈ R 2$ and multiplying (3.12) by $P ⊤$ on the left, we can rewrite (3.12) as

$P^{\top}EQ \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = P^{\top}F(t)Q \begin{pmatrix} x \\ y \end{pmatrix} + \lambda \bigl(P^{\top}C(t)Q\bigr)\, Q^{\top} S\Bigl(Q \begin{pmatrix} x \\ y \end{pmatrix}\Bigr),$

that is,

$\begin{pmatrix} \tilde{E}_1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ \tilde{F}_3(t) & \tilde{F}_4(t) \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \lambda \begin{pmatrix} \tilde{C}_1(t) & \tilde{C}_2(t) \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \tilde{S}_1(x,y) \\ \tilde{S}_2(x,y) \end{pmatrix},$

where we have set $Q ⊤ S(Qx)= ( S ˜ 1 ( x , y ) S ˜ 2 ( x , y ) )$. This equation can be rewritten as follows:

$\begin{cases} \dot{x} = \lambda \tilde{E}_1^{-1}\bigl(\tilde{C}_1(t)\tilde{S}_1(x,y) + \tilde{C}_2(t)\tilde{S}_2(x,y)\bigr), \\ y + \tilde{F}_3(t)x = 0, \end{cases}$

or, in our case, as

$\begin{cases} \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \lambda \begin{pmatrix} (2+\cos t)x_1 + x_2 + y_1 \\ x_1 + (3+\sin t)y_1 + 2y_2 \end{pmatrix}, \\ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0, \end{cases}$

where we have put $x=(x_1,x_2)$ and $y=(y_1,y_2)$. The above DAE is of the form (3.1) considered in Theorem 3.3. Observe that the map $\omega$ considered there is given by

$\omega(x_1,x_2;y_1,y_2) = (y_1,\; 3y_1 + 2y_2,\; x_1 + y_1,\; x_2 + y_2).$
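The block decomposition used in this example can be verified mechanically. A small pure-Python check (illustrative, not part of the paper) confirms that $P$ and $Q$ are orthogonal permutation matrices and that $P^{\top}EQ$ has the singular-value-decomposition block form $\operatorname{diag}(1,1,0,0)$.

```python
# Verify the permutation matrices P and Q of Example 3.7 put E into SVD form.
def mm(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

E = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
P = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
Q = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 1, 0, 0]]

# P^T E Q = diag(1, 1, 0, 0): the two unit singular values sit in the top block
expected = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
assert mm(mm(tr(P), E), Q) == expected

# P and Q are orthogonal (they are permutation matrices)
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert mm(P, tr(P)) == I and mm(Q, tr(Q)) == I
```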

The example considered above is a particular case of a more general procedure that we now roughly sketch. Let $E$, $F$, $C$, and $S$ be as in (3.12), and let $\operatorname{rank}E = r$. Assume that $n=2r$ and that

$\ker C(t)^{\top} = \ker E^{\top} \quad \text{for all } t\in\mathbb{R},$
(3.13a)
$\operatorname{im}F(t) = \ker E^{\top} \quad \text{for all } t\in\mathbb{R}.$
(3.13b)

Let P, Q be orthogonal matrices realizing a singular value decomposition for E. Multiply (3.12) by $P ⊤$ on the left, and put $x=Q ( x y )$ with $x,y∈ R r$. We get, as in Example 3.7,

$P^{\top}EQ \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = P^{\top}F(t)Q \begin{pmatrix} x \\ y \end{pmatrix} + \lambda \bigl(P^{\top}C(t)Q\bigr)\, Q^{\top} S\Bigl(Q \begin{pmatrix} x \\ y \end{pmatrix}\Bigr).$
(3.14)

Since $P$ and $Q$ realize a singular value decomposition of $E$, and since $E$, $F$, and $C$ satisfy (3.13a) and (3.13b), an inspection of the proof of Lemma 5.5 in  (see also ) shows that, for all $t$,

$P^{\top}EQ = \begin{pmatrix} \tilde{E}_1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad P^{\top}F(t)Q = \begin{pmatrix} 0 & 0 \\ \tilde{F}_3(t) & \tilde{F}_4(t) \end{pmatrix} \quad\text{and}\quad P^{\top}C(t)Q = \begin{pmatrix} \tilde{C}_1(t) & \tilde{C}_2(t) \\ 0 & 0 \end{pmatrix}.$

Set $x=Q ( x y )$ and $Q ⊤ S(Qx)= ( S ˜ 1 ( x , y ) S ˜ 2 ( x , y ) )$. Then we can rewrite (3.14) as

$\begin{pmatrix} \tilde{E}_1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ \tilde{F}_3(t) & \tilde{F}_4(t) \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \lambda \begin{pmatrix} \tilde{C}_1(t) & \tilde{C}_2(t) \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \tilde{S}_1(x,y) \\ \tilde{S}_2(x,y) \end{pmatrix}$

or, equivalently

$\begin{cases} \dot{x} = \lambda \tilde{E}_1^{-1}\bigl(\tilde{C}_1(t)\tilde{S}_1(x,y) + \tilde{C}_2(t)\tilde{S}_2(x,y)\bigr), \\ \tilde{F}_3(t)x + \tilde{F}_4(t)y = 0, \end{cases}$

and, if $F ˜ 3 (t)$ is invertible for all t,

$\begin{cases} \dot{x} = \lambda \tilde{E}_1^{-1}\bigl(\tilde{C}_1(t)\tilde{S}_1(x,y) + \tilde{C}_2(t)\tilde{S}_2(x,y)\bigr), \\ x + \bigl[\tilde{F}_3(t)\bigr]^{-1}\tilde{F}_4(t)y = 0, \end{cases}$
(3.15)

which is of type (3.1) with $m=s=r$ if also $F ˜ 4 (t)$ is invertible for all t.

### 3.2 Second-order DAEs

Let us now focus on parametrized second-order DAEs and proceed as in the first-order case. Consider

$\begin{cases} \ddot{x} = \lambda f(t,x,y,\dot{x},\dot{y}), & \lambda \ge 0, \\ g\bigl(A(t)x,\,B(t)y\bigr) = 0, \end{cases}$
(3.16)

where $f:R× R m × R s × R m × R s → R m$ is continuous and T-periodic in the first variable, $g: R m × R s → R s$ is $C ∞$ and such that $∂ 2 g(ξ,η)$ is invertible for all $(ξ,η)$, and the T-periodic matrix-valued maps $A:R→O( R m )$ and $B:R→GL( R s )$ are of class $C 2$ and $C 1$, respectively.

As in the first-order case, we consider the set of T-pairs of (3.16), namely, of those pairs $(λ;(x,y))∈[0,∞)× C T 1 ( R m × R s )$ with $(x,y)$ a T-periodic solution of (3.16). A T-pair $(λ;(x,y))$ of (3.16) is trivial if $λ=0$ and $(A(t)x(t),B(t)y(t))$ is constant. In other words, $(0;(x,y))$ is trivial if $(x,y)$ is constant with respect to the moving constraint. Again, if $(0;(x,y))$ is trivial, then $A(t)x(t)$ must be constantly equal to $A(0)x(0)$. Also, $x ¨ (t)=0$, so that the periodicity condition implies that $x(t)≡x(0)=: x 0$ must be constant as well. Thus, for all t, $x 0$ is invariant for $A ( 0 ) ⊤ A(t)$.

Let us consider the following change of coordinates for all t:

$\xi(t) = A(t)x(t), \qquad \eta(t) = B(t)y(t).$

We can rewrite the first of these equations as $x(t)= A ⊤ (t)ξ(t)$ and, taking the derivative, we get

$\dot{x} = \dot{A}(t)^{\top}\xi + A(t)^{\top}\dot{\xi}, \qquad \ddot{x} = \ddot{A}(t)^{\top}\xi + 2\dot{A}(t)^{\top}\dot{\xi} + A(t)^{\top}\ddot{\xi}.$

Let us multiply by A on the left the second of these equations. Reordering (and omitting the explicit dependence on t) we get

$\ddot{\xi} = -A\ddot{A}^{\top}\xi - 2A\dot{A}^{\top}\dot{\xi} + A\ddot{x}.$

Moreover, since $y(t)= B − 1 (t)η(t)$,

$\dot{y}(t) = \frac{d}{dt}\bigl[B(t)^{-1}\bigr]\eta(t) + B(t)^{-1}\dot{\eta}(t).$

Thus we can rewrite our DAE, in the new coordinates, as follows:

$\begin{cases} \ddot{\xi} = -A(t)\ddot{A}(t)^{\top}\xi - 2A(t)\dot{A}(t)^{\top}\dot{\xi} + \lambda F(t,\xi,\eta,\dot{\xi},\dot{\eta}), & \lambda \ge 0, \\ g(\xi,\eta) = 0, \end{cases}$
(3.17)

where $F:R× R m × R s × R m × R s → R m$, defined by

$F(t,\xi,\eta,u,v) = A(t)\, f\Bigl(t,\; A(t)^{\top}\xi,\; B(t)^{-1}\eta,\; \dot{A}(t)^{\top}\xi + A(t)^{\top}u,\; \frac{d}{dt}\bigl[B(t)^{-1}\bigr]\eta + B(t)^{-1}v\Bigr)$

is clearly continuous and T-periodic.

Now, by Proposition A.3 (see the Appendix), if $M:=A(t)\dot{A}(t)^{\top}$ is constant (and nonsingular), then $A(t)\ddot{A}(t)^{\top}$ is constant (and nonsingular) as well, being equal to $M^2$. Thus, as for first-order equations, provided that $A(t)\dot{A}(t)^{\top}$ is constant, this DAE can be treated with the methods of the previous section.

It is also worth noticing that $\frac{d}{dt}\bigl[B(t)^{-1}\bigr]$, which appears in the expression of $F$, can be conveniently rewritten as $-B(t)^{-1}\dot{B}(t)B(t)^{-1}$. This elementary fact is readily established by differentiating the identity $B(t)B(t)^{-1}=I$.
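Both facts lend themselves to a quick numerical sanity check. The sketch below (with the illustrative planar rotation $A(t)$ and the scalar, $s=1$, choice $B(t)=2+\sin t$, neither taken from the paper) verifies $A(t)\ddot{A}(t)^{\top}=M^2$ and $\frac{d}{dt}[B(t)^{-1}]=-B(t)^{-1}\dot{B}(t)B(t)^{-1}$ by finite differences.

```python
import math

# Illustrative checks: A(t) A''(t)^T = M^2 for the planar rotation A(t),
# and d/dt B(t)^{-1} = -B^{-1} B' B^{-1} for the scalar B(t) = 2 + sin t.
def A(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def mul_t(X, Y):               # X * Y^T for 2x2 matrices
    return [[sum(X[i][k] * Y[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dmat(f, t, h=1e-4):        # entrywise first derivative (central difference)
    Fp, Fm = f(t + h), f(t - h)
    return [[(Fp[i][j] - Fm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def ddmat(f, t, h=1e-4):       # entrywise second derivative
    Fp, F0, Fm = f(t + h), f(t), f(t - h)
    return [[(Fp[i][j] - 2 * F0[i][j] + Fm[i][j]) / h ** 2 for j in range(2)]
            for i in range(2)]

t = 1.3
M = mul_t(A(t), dmat(A, t))        # approx [[0, 1], [-1, 0]]
AddT = mul_t(A(t), ddmat(A, t))    # approx A(t) A''(t)^T
M2 = [[sum(M[i][k] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert all(abs(AddT[i][j] - M2[i][j]) < 1e-3 for i in range(2) for j in range(2))

# Scalar case: d/dt (1/B) versus -B^{-1} B' B^{-1}, with B' = cos t.
B = lambda u: 2.0 + math.sin(u)
h = 1e-6
lhs = (1.0 / B(t + h) - 1.0 / B(t - h)) / (2 * h)
rhs = -(1.0 / B(t)) * math.cos(t) * (1.0 / B(t))
assert abs(lhs - rhs) < 1e-8
```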

Proceeding as in the previous subsection, and using Theorems 2.3 and 2.4 in place of Theorems 2.1 and 2.2, we get the following results, remarkably similar to Theorems 3.1 and 3.3, and Corollary 3.2.

#### Theorem 3.8

Let $f$, $g$, $A$, and $B$ be as above. Assume that $M:=A(t)\dot{A}(t)^{\top}$ is constant and define $F:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by $F(\xi,\eta)=(-M^2\xi,\,g(\xi,\eta))$. Let $\Omega\subseteq[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(F,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (3.16) whose closure in $[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ meets $F^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

#### Corollary 3.9

Let $f$, $g$, $A$, and $B$ be as above. Assume that $M:=A(t)\dot{A}(t)^{\top}$ is constant and nonsingular. Let $\Omega\subseteq[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ be open. Assume that the set $\Omega^{\#}\cap(\{0\}\times[g(0,\cdot)]^{-1}(0))$ is nonempty and compact. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (3.16) whose closure in $[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ meets $\Omega^{\#}\cap(\{0\}\times[g(0,\cdot)]^{-1}(0))$ and cannot be both bounded and contained in $\Omega$.

#### Theorem 3.10

Let $f$, $g$, $A$, and $B$ be as above. Assume that $A(t)\dot{A}(t)^{\top}$ is identically zero and define $\omega:\mathbb{R}^m\times\mathbb{R}^s\to\mathbb{R}^m\times\mathbb{R}^s$ by

$\omega(\xi,\eta) = \left(\frac{1}{T}\int_0^T A(t)\,f\bigl(t,A(t)^{\top}\xi,B(t)^{-1}\eta,0,0\bigr)\,dt,\; g(\xi,\eta)\right).$

Let $\Omega\subseteq[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ be open and assume that $\deg(\omega,\Omega^{\#})$ is well defined and nonzero. Then there exists a connected component $\Gamma$ of nontrivial $T$-pairs for (3.16) whose closure in $[0,\infty)\times C_T^1(\mathbb{R}^m\times\mathbb{R}^s)$ meets $\omega^{-1}(0,0)\cap\Omega^{\#}$ and cannot be both bounded and contained in $\Omega$.

In the next example we consider the same time-dependent constraint as in Example 3.4, but in the case of second-order DAEs.

#### Example 3.11

Let $f:R× R 2 ×R→ R 2$ be any continuous mapping 2π-periodic in the first variable. Consider

$\begin{cases} \ddot{x} = \lambda f(t,x,y), & \lambda \ge 0, \\ y^3 + y - x_1^2 - x_2^2 - (x_1\sin t + x_2\cos t)^2 = 0, \end{cases}$

where $x=(x_1,x_2)$. Let A and g be as in Example 3.4. Applying the coordinate transformation described above, we rewrite our DAE as follows:

$\begin{cases} \ddot\xi = \xi - 2M\dot\xi + \lambda A(t)f\bigl(t,A(t)^\top\xi,\eta\bigr), & \lambda\ge 0,\\ \eta^3 + \eta - \xi_1^2 - 2\xi_2^2 = 0. \end{cases}$

Here

$M := A\dot A^\top = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$

so that $A\ddot A^\top = M^2 = -I$. Let $\Omega=[0,\infty)\times C_T^1(\mathbb{R}^2\times\mathbb{R})$. Since M is nonsingular and $[g(0,\cdot)]^{-1}(0)=\{0\}$, Corollary 3.9 yields an unbounded connected component of 2π-periodic pairs emanating from $(0;(0,0;0))\in[0,\infty)\times\mathbb{R}^2\times\mathbb{R}$ (regarded as a 2π-pair).
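The computations in this example lend themselves to a quick numerical sanity check. In the sketch below, the rotation matrix `A(t)` is an assumption: Example 3.4 is not reproduced here, so we simply pick the planar rotation consistent with the constraint and with $M$ above.

```python
import numpy as np

# Sanity check for Example 3.11 (a sketch; A(t) below is an assumed
# rotation consistent with M = [[0, 1], [-1, 0]] and the constraint).

def A(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def Adot(t):  # derivative of A, computed by hand
    return np.array([[-np.sin(t), -np.cos(t)],
                     [ np.cos(t), -np.sin(t)]])

M = np.array([[0.0, 1.0], [-1.0, 0.0]])

rng = np.random.default_rng(0)
for _ in range(100):
    t = rng.uniform(0, 2 * np.pi)
    x = rng.standard_normal(2)
    y = rng.standard_normal()
    # A(t) A'(t)^T should be constantly equal to M, and M^2 = -I.
    assert np.allclose(A(t) @ Adot(t).T, M)
    assert np.allclose(M @ M, -np.eye(2))
    # Original and transformed constraints agree under xi = A(t)x, eta = y.
    xi = A(t) @ x
    g_orig = y**3 + y - x[0]**2 - x[1]**2 - (x[0]*np.sin(t) + x[1]*np.cos(t))**2
    g_new = y**3 + y - xi[0]**2 - 2 * xi[1]**2
    assert np.isclose(g_orig, g_new)
```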

#### Remark 3.12

As in the first-order case, our coordinate transformation applies also to a slightly different situation. Consider the following DAE:

$\begin{cases} \ddot x = H_1\dot x + H_2x + \lambda f(t,x,y), & \lambda\ge 0,\\ g\bigl(A(t)x,B(t)y\bigr)=0, \end{cases}$
(3.18)

where A, B, f, and g are as in (3.16) and $H_i$, $i=1,2$, are matrices that commute with A. Suppose, as above, that $M := A(t)\dot A(t)^\top$ is constant (not necessarily invertible) and apply the transformation as indicated above. Equation (3.18) becomes

$\begin{cases} \ddot\xi = (H_1M + H_2 - M^2)\xi + (H_1 - 2M)\dot\xi + \lambda F(t,\xi,\eta), & \lambda\ge 0,\\ g(\xi,\eta)=0, \end{cases}$
(3.19)

with F as in (3.17), so that the results of Section 2.2 are applicable to (3.19).

#### Remark 3.13

Let us consider the following second-order DAE:

$\begin{cases} \dfrac{d^2}{dt^2}\bigl(C(t)x\bigr) = \lambda f(t,x,y,\dot x,\dot y), & \lambda\ge 0,\\ g\bigl(A(t)x,B(t)y\bigr)=0, \end{cases}$
(3.20)

where f, A, and B are as in (3.16) and $t\mapsto C(t)\in O(\mathbb{R}^m)$ is $C^2$ and T-periodic. We also assume that C has the same property as A, that is, that $C(t)\dot C(t)^\top$ is constant. Expanding the derivative on the left-hand side of the first equation in (3.20) and using the fact that $C(t)\in O(\mathbb{R}^m)$ for all $t\in\mathbb{R}$, we rewrite (3.20) as follows:

$\begin{cases} \ddot x = -2C^\top\dot C\,\dot x - C^\top\ddot C\,x + \lambda C^\top f(t,x,y,\dot x,\dot y),\\ g(Ax,By)=0, \end{cases}$
(3.21)

where, to keep the notation concise, the explicit dependence on t of A, B, and C is omitted. Proposition A.5 shows that $K_1 := C(t)^\top\dot C(t)$ is constant and, by Remark A.4(3), it follows that $K_2 := C(t)^\top\ddot C(t)$ is constant as well, being equal to $K_1^2$. Hence, (3.21) is of the form (3.18). Notice that if A commutes with $K_1$, then it commutes with $K_2$ as well. In conclusion, if $K_1$ commutes with $A(t)$ for all $t\in\mathbb{R}$, Remark 3.12 applies with $H_1=-2K_1$ and $H_2=-K_2$.
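The constancy of $K_1$ and $K_2$, and the relation $K_2=K_1^2$, are easy to illustrate numerically. The family `C(t)` below is a hypothetical choice, a 2π-periodic planar rotation, which satisfies the standing assumption that $C(t)\dot C(t)^\top$ is constant.

```python
import numpy as np

# Numerical check of K1 = C^T C' and K2 = C^T C'' = K1^2 for an
# illustrative orthogonal family (a 2*pi-periodic planar rotation).

def C(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def Cdot(t):
    return np.array([[-np.sin(t), -np.cos(t)],
                     [ np.cos(t), -np.sin(t)]])

def Cddot(t):
    return -C(t)  # second derivative of this rotation

K1 = C(0.0).T @ Cdot(0.0)
for t in np.linspace(0, 2 * np.pi, 50):
    assert np.allclose(C(t).T @ C(t), np.eye(2))    # orthogonality
    assert np.allclose(C(t).T @ Cdot(t), K1)        # K1 is constant
    assert np.allclose(C(t).T @ Cddot(t), K1 @ K1)  # K2 = K1^2, constant too
```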

## Appendix: some lemmas of Matrix Analysis

This section gathers, for reference purposes, a few simple and possibly well-known facts concerning time-dependent matrices.

### Lemma A.1

Let $t\mapsto A(t)$ be a $C^2$ square-matrix-valued function. Suppose that the map $t\mapsto A(t)\dot A(t)^\top$ is constant. Then

$\ddot A(t)\,A(t)^\top = A(t)\,\ddot A(t)^\top,$
(A.1)
$\ddot A(t)\,A(t)^\top = -\dot A(t)\,\dot A(t)^\top.$
(A.2)

### Proof

For the sake of simplicity, we drop the explicit indication of the dependence of A on t.

Let us put $M = A\dot A^\top$. Then $\dot A A^\top = (A\dot A^\top)^\top = M^\top$ is also constant. Taking the derivative with respect to t of both these relations, we get

$\dot A\dot A^\top + A\ddot A^\top = 0,\qquad \ddot A A^\top + \dot A\dot A^\top = 0.$
(A.3)

Hence,

$0 = \dot A\dot A^\top + A\ddot A^\top - \ddot A A^\top - \dot A\dot A^\top = A\ddot A^\top - \ddot A A^\top,$

which implies (A.1).

From (A.3) and (A.1) it follows that

$0 = \dot A\dot A^\top + A\ddot A^\top + \ddot A A^\top + \dot A\dot A^\top = 2\dot A\dot A^\top + 2\ddot A A^\top,$

whence the assertion. □

Observe that, under the hypothesis of Lemma A.1, since

$\bigl[\ddot A(t)A(t)^\top\bigr]^\top = A(t)\ddot A(t)^\top,$

(A.1) implies the symmetry of $A\ddot A^\top$.
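For a concrete illustration, the identities (A.1) and (A.2), together with the symmetry of $A\ddot A^\top$, can be checked numerically. The family below is an illustrative choice (a planar rotation, for which $A\dot A^\top$ is indeed constant).

```python
import numpy as np

# Numerical illustration of Lemma A.1 on a planar rotation (sketch).

def A(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def Adot(t):
    return np.array([[-np.sin(t), -np.cos(t)],
                     [ np.cos(t), -np.sin(t)]])

def Addot(t):
    return -A(t)  # second derivative of this rotation

for t in np.linspace(0.0, 7.0, 30):
    lhs = Addot(t) @ A(t).T
    assert np.allclose(lhs, A(t) @ Addot(t).T)     # (A.1)
    assert np.allclose(lhs, -Adot(t) @ Adot(t).T)  # (A.2)
    S = A(t) @ Addot(t).T
    assert np.allclose(S, S.T)                     # symmetry of A A''^T
```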

### Lemma A.2

Let $t\mapsto A(t)$ be a $C^1$ square-matrix-valued function. If $A(t)$ is orthogonal for all t, then $\dot A(t)\dot A(t)^\top = -\bigl(A(t)\dot A(t)^\top\bigr)^2$.

### Proof

Differentiating the relation $A^\top A = I$ we obtain $\dot A^\top A = -A^\top\dot A$. Multiplying this relation on the left by $\dot A$ and on the right by $A^\top$, we get

$\dot A\dot A^\top A A^\top = -\dot A A^\top\dot A A^\top.$

Since $AA^\top = I$ and $\dot A\dot A^\top = (\dot A\dot A^\top)^\top$, transposing yields

$\dot A\dot A^\top = -\bigl[(\dot A A^\top)^\top\bigr]^2 = -\bigl(A\dot A^\top\bigr)^2,$

as desired. □
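Lemma A.2 can also be checked numerically. The family below is an illustrative orthogonal family (a rotation with angular speed 2, so that $\dot A\dot A^\top$ is not simply the identity).

```python
import numpy as np

# Numerical check of Lemma A.2: A'(t)A'(t)^T = -(A(t)A'(t)^T)^2 (sketch).

def A(t):
    return np.array([[np.cos(2*t), -np.sin(2*t)],
                     [np.sin(2*t),  np.cos(2*t)]])

def Adot(t):
    return 2 * np.array([[-np.sin(2*t), -np.cos(2*t)],
                         [ np.cos(2*t), -np.sin(2*t)]])

for t in np.linspace(0.0, 7.0, 30):
    P = A(t) @ Adot(t).T
    # Lemma A.2 for this family: A'A'^T = 4I and -(A A'^T)^2 = 4I.
    assert np.allclose(Adot(t) @ Adot(t).T, -(P @ P))
```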

Equation (A.2) and Lemma A.2 together yield the following fact.

### Proposition A.3

Let $t\mapsto A(t)$ be a $C^2$ square-matrix-valued function. Assume that $A(t)$ is orthogonal for all t and that the map $t\mapsto A(t)\dot A(t)^\top =: M$ is constant. Then $\ddot A(t)A(t)^\top = \bigl(A(t)\dot A(t)^\top\bigr)^2$ is constantly equal to $M^2$. In particular, if $A(t)\dot A(t)^\top$ is constant and nonsingular, then so is $\ddot A(t)A(t)^\top$.

### Remark A.4

Replacing A with $A^\top$, it is easy to verify that results analogous to Lemma A.1, Lemma A.2 and Proposition A.3 hold if we assume the constancy of $A(t)^\top\dot A(t)$ instead of that of $A(t)\dot A(t)^\top$. Namely, if $t\mapsto A(t)$ is a $C^2$ square-matrix-valued function such that $A(t)$ is orthogonal for all t and the map $t\mapsto A(t)^\top\dot A(t)$ is constant, then

1. (1)

$\ddot A(t)^\top A(t) = -\dot A(t)^\top\dot A(t)$ and $\ddot A(t)^\top A(t) = A(t)^\top\ddot A(t)$;

2. (2)

$\dot A(t)^\top\dot A(t) = -\bigl(A(t)^\top\dot A(t)\bigr)^2$;

3. (3)

$\ddot A(t)^\top A(t) = \bigl(A(t)^\top\dot A(t)\bigr)^2 = A(t)^\top\ddot A(t)$.

These facts should not surprise us in view of Proposition A.5 below.

We conclude this technical section with a curious remark. As shown by the following example:

$A(t)=\begin{pmatrix} 0 & 0 & \sin t & -\cos t \\ 0 & 0 & \cos t & \sin t \\ \cos t & \sin t & 0 & 0 \\ -\sin t & \cos t & 0 & 0 \end{pmatrix},$

even for matrix functions as in Proposition A.3, one may have

$A(t)^\top\dot A(t) \neq A(t)\dot A(t)^\top.$
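Both constancy claims for this 4×4 example, and the failure of equality between the two constant products, are easy to confirm numerically, which also illustrates Proposition A.5 below:

```python
import numpy as np

# Check of the 4x4 example: A(t) is orthogonal, A A'^T and A^T A' are
# both constant (cf. Proposition A.5), yet the two constants differ.

def A(t):
    s, c = np.sin(t), np.cos(t)
    return np.array([[ 0,  0,  s, -c],
                     [ 0,  0,  c,  s],
                     [ c,  s,  0,  0],
                     [-s,  c,  0,  0]], dtype=float)

def Adot(t):
    s, c = np.sin(t), np.cos(t)
    return np.array([[ 0,  0,  c,  s],
                     [ 0,  0, -s,  c],
                     [-s,  c,  0,  0],
                     [-c, -s,  0,  0]], dtype=float)

M_left = A(0.0) @ Adot(0.0).T   # candidate constant A A'^T
M_right = A(0.0).T @ Adot(0.0)  # candidate constant A^T A'
for t in np.linspace(0.0, 7.0, 40):
    assert np.allclose(A(t) @ A(t).T, np.eye(4))   # orthogonality
    assert np.allclose(A(t) @ Adot(t).T, M_left)   # A A'^T is constant
    assert np.allclose(A(t).T @ Adot(t), M_right)  # A^T A' is constant too
assert not np.allclose(M_left, M_right)            # ...but they differ
```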

Nevertheless, one can prove the following fact.

### Proposition A.5

Let $t\mapsto A(t)$ be a $C^2$ square-matrix-valued function. Assume that $A(t)$ is orthogonal for all t. Then $A(t)^\top\dot A(t)$ is constant if and only if so is $A(t)\dot A(t)^\top$.

### Proof

Let us first prove that if $M := A(t)\dot A(t)^\top$ is constant, then $A(t)^\top\dot A(t)$ is constant as well. As above, for the sake of simplicity, we drop the explicit indication of the dependence of A on t.

Clearly, we have $\dot A A^\top = M^\top$ and, since Proposition A.3 yields $\ddot A A^\top = M^2$, we also have $A\ddot A^\top = (M^2)^\top$. Now, using these facts we get

$\Bigl[\frac{d}{dt}\bigl(A^\top\dot A\bigr)\Bigr]^\top = A^\top A\bigl(\dot A^\top\dot A + \ddot A^\top A\bigr)A^\top A = A^\top\bigl(A\dot A^\top\bigr)\bigl(\dot A A^\top\bigr)A + A^\top\bigl(A\ddot A^\top\bigr)A = A^\top\bigl(MM^\top + (M^2)^\top\bigr)A = A^\top\bigl(MM^\top + M^2\bigr)^\top A.$

Observe also that

$MM^\top + M^2 = A\dot A^\top\bigl(\dot A A^\top + A\dot A^\top\bigr) = A\dot A^\top\Bigl[\frac{d}{dt}\bigl(AA^\top\bigr)\Bigr] = 0,$

because $AA^\top\equiv I$. Thus, $\frac{d}{dt}\bigl(A^\top\dot A\bigr) = 0$, which implies that $A(t)^\top\dot A(t)$ is a constant matrix.

Conversely, if $A(t)^\top\dot A(t)$ is constant, a similar argument shows that $A(t)\dot A(t)^\top$ is constant too. □

## References

1. Kunkel P, Mehrmann V: Differential-Algebraic Equations: Analysis and Numerical Solution. 2006.
2. Rabier PJ, Rheinboldt WC: Theoretical and numerical analysis of differential-algebraic equations. In Handbook of Numerical Analysis. Edited by: Ciarlet PG, Lions JL. Elsevier, Amsterdam; 2002:183-540. Solution of Equations in $\mathbb{R}^n$ (Part 4), Techniques of Scientific Computing (Part 4), Numerical Methods for Fluids (Part 2)
3. Rheinboldt WC: Differential-algebraic systems as differential equations on manifolds. Math. Comput. 1984, 43:473-482. 10.1090/S0025-5718-1984-0758195-5
4. Bisconti L: Harmonic solutions to a class of differential-algebraic equations with separated variables. Electron. J. Differ. Equ. 2012. 10.1186/1687-1847-2012-2
5. Bisconti L, Spadini M: Harmonic perturbations with delay of periodic separated variables differential equations. Topol. Methods Nonlinear Anal. (to appear)
6. Calamai A: Branches of harmonic solutions for a class of periodic differential-algebraic equations. Commun. Appl. Anal. 2011, 15(2-4):273-282.
7. Calamai A, Spadini M: Branches of forced oscillations for a class of constrained ODEs: a topological approach. Nonlinear Differ. Equ. Appl. 2012, 19(4):383-399. 10.1007/s00030-011-0134-1
8. Calamai A, Spadini M: Periodic perturbations of constrained motion problems on a class of implicitly defined manifolds. Commun. Contemp. Math. 2014.
9. Spadini M: A note on topological methods for a class of differential-algebraic equations. Nonlinear Anal. 2010, 73(4):1065-1076. 10.1016/j.na.2010.04.038
10. Furi M, Pera MP, Spadini M: The fixed point index of the Poincaré operator on differentiable manifolds. In Handbook of Topological Fixed Point Theory. Edited by: Brown RF, Furi M, Górniewicz L, Jiang B. Springer, Dordrecht; 2005.
11. Milnor JW: Topology from the Differentiable Viewpoint. University Press of Virginia, Charlottesville; 1965.
12. Benevieri P, Furi M, Pera MP, Spadini M: An introduction to topological degree in Euclidean spaces. Technical Report N. 42, Università di Firenze, Dipartimento di Matematica Applicata; January 2003.
13. Gerdin M: Identification and estimation for models described by differential-algebraic equations. Department of Electrical Engineering, Linköpings universitet, SE-581 83; 2006.
14. Bisconti L, Spadini M: On a class of differential-algebraic equations with infinite delay. Electron. J. Qual. Theory Differ. Equ. 2011.
15. Bisconti L, Spadini M: Corrigendum to "On a class of differential-algebraic equations with infinite delay". Electron. J. Qual. Theory Differ. Equ. 2012. 10.1186/1687-1847-2012-97


## Acknowledgements

The authors thank both anonymous referees for their careful reading of the manuscript and many valuable suggestions.

The authors have been supported by the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).

## Author information


### Corresponding author

Correspondence to Alessandro Calamai.

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors (L. Bisconti, A. Calamai, and M. Spadini) contributed equally to each part of this work and read and approved the final version of the manuscript.


## Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
