
Fast-slow dynamical approximation of forced impact systems near periodic solutions

Abstract

We approximate impact systems in arbitrary finite dimensions by fast-slow dynamics, represented by a regular ODE on one side of the impact manifold and a singular ODE on the other. The Lyapunov-Schmidt method, leading to a Poincaré-Melnikov function, is applied to study bifurcations of periodic solutions. Several examples are presented as illustrations of the abstract theory.

MSC: 34C23, 34C25, 37G15, 70K50.

1 Introduction

Non-smooth differential equations, in which the vector field is only piecewise smooth, occur in various situations: in mechanical systems with dry friction or with impacts, in control theory, electronics, economics, medicine and biology (see [16] for more references). One way of studying non-smooth systems is a regularization process consisting of the approximation of the discontinuous vector field by a one-parametric family of smooth vector fields, called a regularization of the discontinuous one. The main problem then is to preserve certain dynamical properties of the original system in the regularized one. To our knowledge, the regularization method has mostly been applied to differential equations with non-smooth nonlinearities, such as dry friction (see [7] and the survey paper [8]). As shown in [7, 8], the regularization process is closely connected to geometric singular perturbation theory [9, 10]. On the other hand, it is argued in [11] that a harmonic oscillator with a jumping nonlinearity, with the force field nearly infinite on one side, is a better model for describing a bouncing ball than its limit version, an impact oscillator. This approach is also used in [12], where an impact oscillator is approximated by a one-parametric family of singularly perturbed differential equations; as discussed in [12], however, geometric singular perturbation theory does not apply.
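As a toy illustration of the regularization idea (our own sketch, not taken from the cited works): a discontinuous dry-friction term sign(v) can be embedded in the smooth one-parameter family tanh(v/ε), which converges to sign(v) pointwise away from v = 0 as ε → 0⁺. The function names below are ours.

```python
import math

def sign(v):
    # discontinuous dry-friction-type nonlinearity
    return (v > 0) - (v < 0)

def smooth_sign(v, eps):
    # one-parametric family of smooth functions regularizing sign(v);
    # converges pointwise to sign(v) for v != 0 as eps -> 0+
    return math.tanh(v / eps)
```

For fixed v ≠ 0 the approximation error decays rapidly as ε shrinks, while at the discontinuity v = 0 the family always takes the intermediate value 0; preserving dynamical properties across this limit is exactly the delicate point discussed above.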

In this paper, we continue in the spirit of [12] as follows. Let $\Omega \subset \mathbb{R}^n$ be an open subset and $G: \Omega \to \mathbb{R}$ a $C^2$-function such that $G'(x) \neq 0$ for any $x \in S := \{x \in \Omega \mid G(x) = 0\} \subset \Omega$. Then $S$ is a smooth hyper-surface of $\Omega$ that we call the impact manifold (or hyper-surface). We set $\Omega_\pm = \{x \in \Omega \mid \pm G(x) > 0\}$ and consider the following regular-singular perturbed system:

$$\begin{cases} \varepsilon \dot{x} = f_+(x) + \varepsilon g_+(t,x,\varepsilon) & \text{for } x \in \Omega_+, \\ \dot{x} = f_-(x) + \varepsilon g_-(t,x,\varepsilon) & \text{for } x \in \Omega_- \end{cases}$$
(1.1)

for ε>0 small. We assume that the system

$$\begin{cases} \dot{x} = f_+(x) & \text{for } x \in \Omega_+, \\ \dot{x} = f_-(x) & \text{for } x \in \Omega_- \end{cases}$$
(1.2)

has a continuous periodic solution $q(t)$ crossing the impact manifold $S$ transversally, given by

$$q(t) = \begin{cases} q_-(t) \in \Omega_- & \text{for } -T_-^0 < t < 0, \\ q_+(t) \in \Omega_+ & \text{for } 0 < t < T_+^0 \end{cases}$$

and $q_-(0) = q_+(0) \in S$, $q_-(-T_-^0) = q_+(T_+^0) \in S$. By transversal crossing, we mean that

$$G'\big(q(\pm T_\pm^0)\big)\, \dot{q}_\pm(\pm T_\pm^0) < 0 < G'\big(q(0)\big)\, \dot{q}_\pm(0).$$

We set $T_\varepsilon := T_-^0 + \varepsilon T_+^0$ and assume that $g_\pm(t,x,\varepsilon)$ are $T_\varepsilon$-periodic in $t$.
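To make the two time scales in (1.1) concrete, the following sketch (our own toy planar example, not from the paper) takes $G(p,v) = -p$, so that $\Omega_- = \{p > 0\}$ carries the slow field $f_-(p,v) = (v,-1)$ (free fall) and $\Omega_+ = \{p < 0\}$ the fast field $f_+(p,v) = (v,-p)$: the orbit traverses $\Omega_+$ in a time of order $\varepsilon$.

```python
import math

def simulate(eps, dt=1e-5, t_max=3.0):
    # G(p, v) = -p: slow side Omega_- = {p > 0}, fast side Omega_+ = {p < 0}
    # slow field f_-(p, v) = (v, -1)   (free fall)
    # fast field f_+(p, v) = (v, -p)   (half-turn in the (p, v) plane)
    p, v, t = 1.0, 0.0, 0.0
    fast_time = 0.0                    # total time spent in Omega_+
    while t < t_max:
        if p > 0.0:
            dp, dv = v, -1.0               # x' = f_-(x)
        else:
            dp, dv = v / eps, -p / eps     # eps * x' = f_+(x)
            fast_time += dt
        p += dp * dt
        v += dv * dt
        t += dt
        if p > 0.0 and v > 0.0 and fast_time > 0.0:
            break                          # back in Omega_- after the fast arc
    return v, fast_time

# the fast excursion lasts roughly eps * pi and nearly preserves |v|
v_exit, fast_time = simulate(eps=1e-2)
```

In this toy case the fast field is a rotation, so the crossing of $\Omega_+$ takes time close to $\varepsilon\pi$ and the exit velocity is close in size to the entry velocity, mimicking the elastic reset of the limit impact system.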

Transversal crossing implies that (1.2) has a family of continuous solutions $q(t,\alpha)$, $\alpha \in I_0 \subset \mathbb{R}^{n-1}$ (an open neighborhood of $0$), crossing the impact manifold $S$ transversally, given by

$$q(t,\alpha) = \begin{cases} q_-(t,\alpha) \in \Omega_- & \text{for } -T_-(\alpha) < t < 0, \\ q_+(t,\alpha) \in \Omega_+ & \text{for } 0 < t < T_+(\alpha), \end{cases}$$

where $q_-(0,\alpha) = q_+(0,\alpha) \in S$, $q_-(-T_-(\alpha),\alpha), q_+(T_+(\alpha),\alpha) \in S$, $q_\pm(t,0) = q_\pm(t)$ and $T_\pm(0) = T_\pm^0$. Moreover, $T_\pm(\alpha)$ is $C^2$ in $\alpha$, and the maps $\alpha \mapsto q(0,\alpha)$ and $\alpha \mapsto q(\pm T_\pm(\alpha),\alpha)$ give smooth ($C^2$) parameterizations of the manifold $S$ in small neighborhoods $U_0$ of $q(0)$ and $U_\pm$ of $q(T_+^0) = q(-T_-^0)$. Then the map $R: U_0 \cap S \to U_+ \cap S$, $q(0,\alpha) \mapsto q_+(T_+(\alpha),\alpha)$, is $C^2$-smooth. In this paper, we study the problem of existence of a $T_\varepsilon$-periodic solution of the singular problem (1.1) in a neighborhood of the set

$$\big\{q_-(t) \mid t \in [-T_-^0, 0]\big\} \cup \big\{q_+(t) \mid t \in [0, T_+^0]\big\}.$$

As a matter of fact, in the time interval $[0, \varepsilon T_+^0]$, resp. $[-T_-^0, 0]$, the periodic solution will stay close to $q_+(\varepsilon^{-1} t)$, resp. to $q_-(t)$, and hence it will pass from the point of $S$ near $q(0)$ to the point of $S$ near $q_+(T_+^0)$ in a very short time (of the size of $\varepsilon T_+^0$). So, we may say that the behavior of the periodic solutions of (1.1) on the interval $[-T_-^0, \varepsilon T_+^0]$ is quite well simulated by the solution of the perturbed impact system

$$\dot{x} = f_-(x), \qquad R\big(q_-(0,\alpha)\big) = q_+\big(T_+(\alpha),\alpha\big).$$
(1.3)
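In the $\varepsilon \to 0$ limit the fast arc collapses to the instantaneous reset given by $R$. A minimal sketch of such an impact system (our own bouncing-ball example with reset map $R(p,v) = (p,-v)$, not taken from the paper) confirms the expected period between impacts, $2\sqrt{2}$ for a drop from height $1$ under unit gravity:

```python
import math

def impact_period(dt=1e-4):
    # free fall  p' = v, v' = -1  above the impact manifold {p = 0},
    # with reset map R(p, v) = (p, -v) applied at each impact
    p, v, t = 1.0, 0.0, 0.0
    impacts = []
    while len(impacts) < 2:
        v -= dt                  # semi-implicit Euler step
        p += v * dt
        t += dt
        if p <= 0.0:
            p, v = 0.0, -v       # instantaneous reset on S
            impacts.append(t)
    return impacts[1] - impacts[0]

period = impact_period()         # close to 2*sqrt(2)
```

The elastic reset here plays the role of the map $R$ between the two points of $S$ where the periodic orbit leaves and re-enters $\Omega_-$.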

It is now clear that our study has been mostly motivated by the paper [12], where a similar problem for planar perturbed harmonic oscillators is studied. However, the arguments in [12] are mainly based on averaging methods, whereas in this paper we investigate a general higher-dimensional singular equation such as (1.1) by using the Lyapunov-Schmidt reduction. We focus on the existence of periodic solutions and do not check their local asymptotic properties such as, for example, stability or hyperbolicity. This could also be done by following our approach, but we do not go into detail in this paper.

Our results (see Theorems 3.1 and 5.1) state that if a certain Poincaré-Melnikov-like function has a simple zero, then the above problem has an affirmative answer. The proof of this fact is accomplished in several steps. In Section 2, we show, for any $\alpha$ in a neighborhood of $\alpha = 0$, the existence of a unique continuous solution $x(t) = x(t,\alpha,\varepsilon)$ of (1.1) near the set $\{q(t,\alpha) \mid t \in [-T_-^0, T_+^0]\}$, defined on $[-T_- + \tau, \varepsilon T_+ + \tau]$, $T_\pm \simeq T_\pm^0$, such that $x(\tau) = q(0,\alpha)$ for some $\tau$, and $x(-T_- + \tau, \alpha, \varepsilon)$, $x(\varepsilon T_+ + \tau, \alpha, \varepsilon)$ belong to $U_\pm \cap S$. Moreover, $\alpha \mapsto x(-T_- + \tau, \alpha, \varepsilon)$ and $\alpha \mapsto x(\varepsilon T_+ + \tau, \alpha, \varepsilon)$ are $C^2$-close to $q_\pm(\pm T_\pm(\alpha),\alpha)$, and hence they give $C^2$-parameterizations of $S$ in neighborhoods of $q_\pm(\pm T_\pm(\alpha),\alpha)$. Thus, $x(-T_- + \tau, \alpha, \varepsilon) \mapsto x(\varepsilon T_+ + \tau, \alpha, \varepsilon)$ gives a Poincaré-like map, and a $(T_-^0 + \varepsilon T_+^0)$-periodic solution is found by solving the equations

$$x(\varepsilon T_+ + \tau, \alpha, \varepsilon) = x(-T_- + \tau, \alpha, \varepsilon), \qquad T_- + \varepsilon T_+ = T_-^0 + \varepsilon T_+^0.$$

Thus, the bifurcation equation is obtained by combining the conditions $x(\varepsilon T_+ + \tau, \alpha, \varepsilon) = x(-T_- + \tau, \alpha, \varepsilon)$ and $T_- + \varepsilon T_+ = T_-^0 + \varepsilon T_+^0$ with the fact that the points $x(\varepsilon T_+ + \tau, \alpha, \varepsilon)$ and $x(-T_- + \tau, \alpha, \varepsilon)$ belong to $S$. Then, in Section 3, we use the Lyapunov-Schmidt method to prove that the above equations can be solved for $(T_-, T_+, \tau, \alpha) \simeq (T_-^0, T_+^0, \tau_0, 0)$ as functions of $\varepsilon > 0$ small, provided a certain Poincaré-Melnikov-like function has a simple zero. We first study the case, which we call non-degenerate, when

$$\frac{\partial}{\partial \alpha}\big[q_+(T_+(\alpha),\alpha) - q_-(-T_-(\alpha),\alpha)\big]_{\alpha=0}\, w \neq 0, \quad \forall w \in \mathbb{R}^{n-1} \setminus \{0\} \text{ such that } T_-'(0) w = 0.$$
(1.4)

Condition (1.4) has a simple geometrical meaning. The impact system (1.3) has a $T_-^0$-periodic solution if and only if the following condition holds:

$$q_+\big(T_+(\alpha),\alpha\big) = q_-\big(-T_-(\alpha),\alpha\big), \qquad T_-(\alpha) = T_-^0.$$
(1.5)

Now, suppose there is a sequence $0 \neq \alpha_n \to 0$ as $n \to \infty$ such that (1.5) holds. Possibly passing to a subsequence, we can suppose that $\lim_{n\to\infty} \frac{\alpha_n}{|\alpha_n|} = w$, $|w| = 1$. Then, taking the limit in the equalities

$$\frac{q_+(T_+(\alpha_n),\alpha_n) - q_-(-T_-(\alpha_n),\alpha_n)}{|\alpha_n|} = 0, \qquad \frac{T_-(\alpha_n) - T_-^0}{|\alpha_n|} = 0,$$

we see that condition (1.4) does not hold. Thus, (1.4) implies that, in a neighborhood of $\alpha = 0$, there are no $T_-^0$-periodic solutions of (1.3) other than $q_-(t)$.

In Section 4, we define the adjoint system to the linearization of the impact system

$$\begin{cases} \dot{x} = f_-(x), \qquad x(0) = q_-(0,\alpha), \\ x\big(-T_-(\alpha)\big) = R\big(x(0)\big), \qquad G\big(x(-T_-(\alpha))\big) = 0, \qquad -T_-(\alpha) \le t \le 0 \end{cases}$$
(1.6)

along the solution $x(t) = q_-(t,0)$, and relate the Poincaré-Melnikov function obtained in Section 3 to the solutions of such an adjoint system.

Section 5 is devoted to the extension of the result to the case (which we call degenerate) where $q_+(T_+(\alpha),\alpha) = q_-(-T_-(\alpha),\alpha)$ for any $\alpha \in I_0$. We will see that our results can easily be extended provided one of the following two conditions holds:

either $T_-'(0) \neq 0$ or $T_-(\alpha) = T_-^0$ for any $\alpha \in I_0$.

Section 6 is devoted to the construction of some planar examples, although our results are valid in arbitrary finite dimension. Finally, the Appendix contains some technical proofs.

2 The bifurcation equation

We set $u_+(t,\alpha) = q_+(\varepsilon^{-1} t, \alpha)$, $u_-(t,\alpha) = q_-(t,\alpha)$ and

$$u(t,\alpha) = \begin{cases} u_-(t,\alpha) & \text{for } -T_-(\alpha) \le t < 0, \\ u_+(t,\alpha) & \text{for } 0 \le t < \varepsilon T_+(\alpha). \end{cases}$$

Note that

$$\begin{cases} \varepsilon \dot{u}_+(t,\alpha) = f_+\big(u_+(t,\alpha)\big), \qquad \dot{u}_-(t,\alpha) = f_-\big(u_-(t,\alpha)\big), \\ u_+(0,\alpha) = u_-(0,\alpha), \qquad u_+\big(\varepsilon T_+(\alpha),\alpha\big),\; u_-\big(-T_-(\alpha),\alpha\big) \in S \end{cases}$$

and that $u(t,0)$ is a continuous periodic solution, of period $T_-^0 + \varepsilon T_+^0$, of the piecewise continuous singular system:

$$\begin{cases} \varepsilon \dot{x} = f_+(x) & \text{for } x \in \Omega_+, \\ \dot{x} = f_-(x) & \text{for } x \in \Omega_-. \end{cases}$$

Obviously, $u_-(t,\alpha)$ extends to a solution of the following impact system:

$$\begin{cases} \dot{x} = f_-(x) & \text{for } x \in \Omega_-, \\ x(t^+) = q_+\big(T_+(\alpha),\alpha\big) & \text{when } x(t) = q_-(0,\alpha) \end{cases}$$

that can be written as

$$\begin{cases} \dot{x} = f_-(x) & \text{for } x \in \Omega_-, \\ x(t^+) = R\big(x(t)\big) & \text{when } x(t) \in U_0 \cap S. \end{cases}$$

Our purpose is to find a $T_\varepsilon$-periodic solution $x(t,\varepsilon)$ of system (1.1) which is orbitally close to $u(t,\alpha)$ for some $\alpha = \alpha(\varepsilon) \to 0$ as $\varepsilon \to 0^+$, that is, such that

$$\sup_{-T_-^0 \le t \le \varepsilon T_+^0} \big|x\big(t + \tau(\varepsilon), \varepsilon\big) - u\big(t, \alpha(\varepsilon)\big)\big| \to 0 \quad \text{as } \varepsilon \to 0^+$$
(2.1)

for some $(\tau(\varepsilon), \alpha(\varepsilon)) \to (\tau_0, 0)$ as $\varepsilon \to 0$. Thus, we may say that, in some sense, the impact periodic solution $u_-(t,0)$ approximates the periodic solution $x(t,\varepsilon)$ of the singularly perturbed equation (1.1).

To this end, we first set $x(t+\tau) = x_+(t) + u_+(t,\alpha)$ in the equation $\varepsilon \dot{x} = f_+(x) + \varepsilon g_+(t,x,\varepsilon)$. Then $x_+(t)$ satisfies

$$\varepsilon \dot{x} - f_+'\big(u_+(t,\alpha)\big) x = h_+(t,\tau,x,\alpha,\varepsilon)$$
(2.2)

where

$$h_+(t,\tau,x,\alpha,\varepsilon) = f_+\big(x + u_+(t,\alpha)\big) - f_+\big(u_+(t,\alpha)\big) - f_+'\big(u_+(t,\alpha)\big) x + \varepsilon g_+\big(t+\tau, x + u_+(t,\alpha), \varepsilon\big).$$

Since $u_+(0,\alpha)$ describes $U_0 \cap S$, we consider (2.2) with the initial condition $x_0 = 0$. Let $X_+(t,\alpha)$ be the fundamental solution of $\dot{x} = f_+'(q_+(t,\alpha)) x$ such that $X_+(0,\alpha) = I$. Then $X_+(\varepsilon^{-1} t, \alpha)$ is the fundamental solution of $\varepsilon \dot{x} = f_+'(u_+(t,\alpha)) x$ with $X_+(0,\alpha) = I$. Let $T_+$ be near $T_+^0$. By the variation of constants formula, the solution of (2.2) with the initial condition $x_0 = 0$ satisfies

$$x_+(t) = \varepsilon^{-1} \int_0^t X_+\big(\varepsilon^{-1} t, \alpha\big) X_+^{-1}\big(\varepsilon^{-1} s, \alpha\big)\, h_+\big(s, \tau, x_+(s), \alpha, \varepsilon\big)\, ds.$$

Thus, we conclude that, for $\rho > 0$ and $T_+$ near $T_+^0$, the equation $\varepsilon \dot{x} = f_+(x) + \varepsilon g_+(t,x,\varepsilon)$ has a solution $x(t)$ such that $\sup_{0 \le t \le \varepsilon T_+} |x(t+\tau) - u_+(t,\alpha)| < \rho$ if and only if the map $x(t) \mapsto \hat{x}(t)$ given by

$$\hat{x}(t) = \varepsilon^{-1} \int_0^t X_+\big(\varepsilon^{-1} t, \alpha\big) X_+^{-1}\big(\varepsilon^{-1} s, \alpha\big)\, h_+\big(s, \tau, x(s), \alpha, \varepsilon\big)\, ds
(2.3)

has a fixed point whose sup-norm on $[0, \varepsilon T_+]$ is smaller than $\rho$. To show that (2.3) has a fixed point of norm less than $\rho$, we set $y(t) := x(\varepsilon T_+ t)$, $t \in [0,1]$, and note that $x(t)$ is a fixed point of (2.3) of norm less than $\rho$, with $0 \le t \le \varepsilon T_+$, if and only if $y(t)$ is a fixed point of norm less than $\rho$ of the map

$$\hat{y}(t) = T_+ \int_0^t X_+(T_+ t, \alpha)\, X_+^{-1}(T_+ \sigma, \alpha)\, h_+\big(\varepsilon T_+ \sigma, \tau, y(\sigma), \alpha, \varepsilon\big)\, d\sigma,$$
(2.4)

$0 \le t \le 1$. Note that

$$\begin{aligned} h_+\big(\varepsilon T_+ t, \tau, x, \alpha, \varepsilon\big) ={}& f_+\big(x + q_+(t T_+, \alpha)\big) - f_+\big(q_+(t T_+, \alpha)\big) - f_+'\big(q_+(t T_+, \alpha)\big) x \\ &+ \varepsilon g_+\big(\varepsilon t T_+ + \tau, x + q_+(t T_+, \alpha), \varepsilon\big), \end{aligned}$$

and hence in the fixed-point equation (2.4) we may also take $\varepsilon \le 0$. Then, since $(x, T_+, \alpha, \varepsilon) \mapsto h_+(\varepsilon T_+ \sigma, \tau, x, \alpha, \varepsilon)$, $0 \le \sigma \le 1$, is a $C^2$-map and

$$\big|h_+(t,\tau,x,\alpha,\varepsilon)\big| \le \Delta\big(|x|\big)\, |x| + N_g |\varepsilon|,$$

where

$$\begin{aligned} N_g &= \sup\Big\{\big|g_+(t, \tilde{x}, \varepsilon)\big| \;\Big|\; t \in \mathbb{R},\; |\tilde{x}| \le \rho + \sup_{t \in [0, T_+(\alpha)],\, \alpha \in I_0} \big|q_+(t,\alpha)\big|,\; |\varepsilon| \le \varepsilon_0\Big\}, \\ \Delta(\rho) &= \sup\Big\{\big|f_+'\big(x + q_+(t,\alpha)\big) - f_+'\big(q_+(t,\alpha)\big)\big| \;\Big|\; t \in [0, T_+(\alpha)],\; |x| \le \rho,\; \alpha \in I_0\Big\}, \end{aligned}$$

the map $y(t) \mapsto \hat{y}(t)$ is a $C^2$-contraction on the ball of radius $\rho$ in the Banach space of bounded continuous functions on $[0,1]$, provided $\rho$ is sufficiently small, $T_+$ is near $T_+^0$, $|\varepsilon|$ is small, $\alpha \in I_0$ and $\tau \in \mathbb{R}$. Let $y_+(t, \tau, \alpha, T_+, \varepsilon)$ be the $C^2$-solution of the fixed-point equation (2.4). We emphasize the fact that $\varepsilon$ may also be non-positive. Then $x_+(t, \tau, \alpha, \varepsilon) := y_+(\varepsilon^{-1} T_+^{-1} t, \tau, \alpha, T_+, \varepsilon)$ is a fixed point of (2.3) and

$$x_+(\varepsilon t, \tau, \alpha, \varepsilon) = y_+\big(T_+^{-1} t, \tau, \alpha, T_+, \varepsilon\big)$$
(2.5)

is C 2 in all parameters and t.
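The contraction argument behind (2.4) can be illustrated on a scalar model problem (our own choice, with hypothetical fields): Picard iteration of the integral map for $\dot{x} = -x + \varepsilon \cos t$, $x(0) = 0$, converges in the sup-norm on $[0,1]$ to the unique small solution $x(t) = \varepsilon(\cos t + \sin t - e^{-t})/2$.

```python
import math

def picard_solve(eps=0.1, n=2001, iters=40):
    # Picard iteration for x' = -x + eps*cos(t), x(0) = 0, on [0, 1]:
    # x_{k+1}(t) = int_0^t ( -x_k(s) + eps*cos(s) ) ds   (trapezoid rule)
    h = 1.0 / (n - 1)
    t = [i * h for i in range(n)]
    x = [0.0] * n                       # initial guess x_0 = 0
    for _ in range(iters):
        rhs = [-x[i] + eps * math.cos(t[i]) for i in range(n)]
        new = [0.0] * n
        for i in range(1, n):
            new[i] = new[i - 1] + 0.5 * h * (rhs[i - 1] + rhs[i])
        x = new
    return t, x

t, x = picard_solve()
# exact small solution: x(t) = eps*(cos t + sin t - exp(-t))/2
```

As in the text, the iteration map is a contraction on a small ball of continuous functions; the $k$-th iterate differs from the fixed point by a factor of order $1/k!$ on a unit interval.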

Writing $T_+^{-1} t$ in place of $t$ in (2.4) and using (2.5), we see that

$$x_+(\varepsilon t, \tau, \alpha, \varepsilon) = \int_0^t X_+(t,\alpha)\, X_+^{-1}(s,\alpha)\, h_+\big(\varepsilon s, \tau, x_+(\varepsilon s, \tau, \alpha, \varepsilon), \alpha, \varepsilon\big)\, ds,$$
(2.6)

$0 \le t \le T_+$. We have, by definition, $x_+(0,\tau,\alpha,\varepsilon) + u_+(0,\alpha) = u_+(0,\alpha) \in S$ and

$$x_+(\varepsilon T_+, \tau, \alpha, \varepsilon) + u_+(\varepsilon T_+, \alpha) \in S$$

if and only if (recall that $u_+(\varepsilon T_+, \alpha) = q_+(T_+, \alpha)$)

$$G\Big(q_+(T_+,\alpha) + \int_0^{T_+} X_+(T_+,\alpha)\, X_+^{-1}(s,\alpha)\, h_+\big(\varepsilon s, \tau, x_+(\varepsilon s,\tau,\alpha,\varepsilon), \alpha, \varepsilon\big)\, ds\Big) = 0.$$
(2.7)

We remark that equation (2.7) is meaningful also when $\varepsilon < 0$, but it is relevant for our problem only when $\varepsilon > 0$.

As a second step, we consider the solution of the differential equation on $\Omega_-$:

$$\dot{x} = f_-(x) + \varepsilon g_-(t,x,\varepsilon), \qquad x(\tau) = q(0,\alpha),$$

which is close to $u_-(t-\tau, \alpha)$ on $-T_- + \tau \le t \le \tau$, $T_- \simeq T_-^0$. Let $X_-(t,\alpha)$ be the fundamental solution of the linear system $\dot{x} = f_-'(u_-(t,\alpha)) x$ such that $X_-(0,\alpha) = I$. Setting $x(t+\tau) = x_-(t) + u_-(t,\alpha)$, we see that (for $t \in [-T_-, 0]$) $x_-(t)$ satisfies the equation

$$\begin{cases} \dot{x} - f_-'\big(u_-(t,\alpha)\big) x = h_-(t,\tau,x,\alpha,\varepsilon), \\ x(0) = 0, \end{cases}$$
(2.8)

where

$$h_-(t,\tau,x,\alpha,\varepsilon) = f_-\big(x + u_-(t,\alpha)\big) - f_-\big(u_-(t,\alpha)\big) - f_-'\big(u_-(t,\alpha)\big) x + \varepsilon g_-\big(t+\tau, x + u_-(t,\alpha), \varepsilon\big).$$

Again, by the variation of constants formula, we get the integral equation

$$x_-(t) = \int_0^t X_-(t,\alpha)\, X_-(s,\alpha)^{-1}\, h_-\big(s,\tau,x_-(s),\alpha,\varepsilon\big)\, ds,$$

which, as before, has a unique solution $x_-(t,\tau,\alpha,\varepsilon)$ of norm less than a given small $\rho$, with $-T_- \le t \le 0$. At $t = -T_-$ the solution of (2.8) takes the value

$$-\int_{-T_-}^0 X_-(-T_-,\alpha)\, X_-(s,\alpha)^{-1}\, h_-\big(s,\tau,x_-(s,\alpha,\varepsilon),\alpha,\varepsilon\big)\, ds.$$

Now, we want to solve the equation

$$x_-(-T_-,\tau,\alpha,\varepsilon) + u_-(-T_-,\alpha) = x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) + u_+(\varepsilon T_+,\alpha),$$

that is [using again $u_+(\varepsilon T_+,\alpha) = q_+(T_+,\alpha)$ and $u_-(-T_-,\alpha) = q_-(-T_-,\alpha)$]:

$$\begin{aligned} &q_+(T_+,\alpha) + \int_0^{T_+} X_+(T_+,\alpha)\, X_+^{-1}(s,\alpha)\, h_+\big(\varepsilon s,\tau,x_+(\varepsilon s,\tau,\alpha,\varepsilon),\alpha,\varepsilon\big)\, ds \\ &\quad = q_-(-T_-,\alpha) - \int_{-T_-}^0 X_-(-T_-,\alpha)\, X_-(s,\alpha)^{-1}\, h_-\big(s,\tau,x_-(s,\tau,\alpha,\varepsilon),\alpha,\varepsilon\big)\, ds. \end{aligned}$$
(2.9)

Of course, when (2.9) holds, then (2.7) is equivalent to

$$G\Big(q_-(-T_-,\alpha) - \int_{-T_-}^0 X_-(-T_-,\alpha)\, X_-(s,\alpha)^{-1}\, h_-\big(s,\tau,x_-(s,\tau,\alpha,\varepsilon),\alpha,\varepsilon\big)\, ds\Big) = 0.$$
(2.10)

So, our task reduces to solving the system formed by equations (2.9), (2.10) together with the period equation

$$T_- + \varepsilon T_+ = T_-^0 + \varepsilon T_+^0,$$

that is, the equation $F(T_+, T_-, \tau, \alpha, \varepsilon) = 0$, where

$$F(T_+, T_-, \tau, \alpha, \varepsilon) := \begin{pmatrix} x_-(-T_-,\tau,\alpha,\varepsilon) + q_-(-T_-,\alpha) - x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) - q_+(T_+,\alpha) \\[4pt] G\Big(q_-(-T_-,\alpha) - \displaystyle\int_{-T_-}^0 X_-(-T_-,\alpha)\, X_-^{-1}(s,\alpha)\, h_-\big(s,\tau,x_-(s,\tau,\alpha,\varepsilon),\alpha,\varepsilon\big)\, ds\Big) \\[4pt] T_- - T_-^0 + \varepsilon\big(T_+ - T_+^0\big) \end{pmatrix}.$$

By the smoothness properties of $x_-(t,\tau,\alpha,\varepsilon)$ and $x_+(\varepsilon t,\tau,\alpha,\varepsilon)$, the map $F(T_+, T_-, \tau, \alpha, \varepsilon)$ is $C^2$.

3 Solving $F(T_+, T_-, \tau, \alpha, \varepsilon) = 0$

In this section, we give a criterion for solving the equation $F(T_+, T_-, \tau, \alpha, \varepsilon) = 0$ for $(T_+, T_-, \tau, \alpha)$ in terms of $\varepsilon$, for small $\varepsilon > 0$. We use a Crandall-Rabinowitz type result (see also [[13], Theorem 4.1]) concerning the existence of solutions of a nonlinear equation having a manifold of fixed points at a certain value of a parameter.

Our result is as follows. Consider the linear system

$$\begin{cases} \psi^* \dot{q}_+(T_+^0,0) = 0, \\ \psi_2 = \big[\psi^* + \psi_1 G'\big(q(-T_-^0,0)\big)\big]\, \dot{q}_-(-T_-^0,0), \\ \psi^*\big[q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0)\big] + \psi_1 G'\big(q(-T_-^0,0)\big)\, \dot{q}_-(-T_-^0,0)\, T_-'(0) = 0. \end{cases}$$
(3.1)

We will prove that if (1.4) holds, then system (3.1) has a unique solution, up to a multiplicative constant, and the following result holds:

Theorem 3.1 Assume condition (1.4) holds, and let $(\psi, \psi_1, \psi_2) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}$ be the unique (up to a multiplicative constant) solution of (3.1). If the Poincaré-Melnikov function

$$\begin{aligned} M(\tau) :={}& \psi^* \int_0^{T_+^0} X_+(T_+^0,0)\, X_+(s,0)^{-1}\, g_+\big(\tau, u(0,0), 0\big)\, ds \\ &+ \psi^* \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, g_-\big(s+\tau, u(s,0), 0\big)\, ds \\ &+ \psi_1\, G'\big(q(-T_-^0,0)\big) \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-^{-1}(s,0)\, g_-\big(s+\tau, q_-(s,0), 0\big)\, ds \end{aligned}$$
(3.2)

has a simple zero at τ= τ 0 , then system (1.1) has a T ε -periodic solution x(t,ε) satisfying (2.1).

Proof To start with, we make a few remarks on the functions $x_\pm(t,\tau,\alpha,\varepsilon)$. First, we note that when $\varepsilon = 0$, equation (2.8) reads

$$\begin{cases} \dot{x} = f_-\big(x + u_-(t,\alpha)\big) - f_-\big(u_-(t,\alpha)\big), \\ x(0) = 0, \end{cases}$$

which has the (unique) solution x(t)=0. Thus,

$$x_-(t,\tau,\alpha,0) = 0.$$

Next, differentiating equation (2.8) with respect to $\varepsilon$, we see that $\frac{\partial x_-}{\partial \varepsilon}(t,\tau,\alpha,0)$ satisfies the equation

$$\begin{cases} \dot{x} - f_-'\big(u_-(t,\alpha)\big) x = g_-\big(t+\tau, u_-(t,\alpha), 0\big), \\ x(0) = 0. \end{cases}$$

Hence,

$$x_{-,\varepsilon}(t,\tau,\alpha,0) := \frac{\partial x_-}{\partial \varepsilon}(t,\tau,\alpha,0) = \int_0^t X_-(t,\alpha)\, X_-(s,\alpha)^{-1}\, g_-\big(s+\tau, u_-(s,\alpha), 0\big)\, ds.$$

Next, $x_+(0,\tau,\alpha,\varepsilon) = 0$ by definition; differentiating equation (2.6) with respect to $\varepsilon$ at $\varepsilon = 0$ and using the equalities

$$x_+(0,\tau,\alpha,\varepsilon) = 0, \qquad h_{+,t}(0,\tau,0,\alpha,0) = 0, \qquad h_{+,x}(0,\tau,0,\alpha,0) = 0,$$

we get

$$\frac{\partial}{\partial \varepsilon}\bigg|_{\varepsilon=0} x_+(\varepsilon t, \tau, \alpha, \varepsilon) = \int_0^t X_+(t,\alpha)\, X_+^{-1}(s,\alpha)\, g_+\big(\tau, u_+(0,\alpha), 0\big)\, ds.$$

So, equation (2.9) at $\varepsilon = 0$ and $T_\pm = T_\pm(\alpha)$ becomes

$$q_-\big(-T_-(\alpha),\alpha\big) = q_+\big(T_+(\alpha),\alpha\big),$$

which is satisfied for $\alpha = 0$. Now we look at equation (2.10). Since $h_-(t,\tau,0,\alpha,0) = 0$, we see that the equality is satisfied when $\varepsilon = 0$ and $T_- = T_-(\alpha)$. As a consequence, we get

$$F\big(T_+(\alpha), T_-(\alpha), \tau, \alpha, 0\big) = \begin{pmatrix} q_-\big(-T_-(\alpha),\alpha\big) - q_+\big(T_+(\alpha),\alpha\big) \\ 0 \\ T_-(\alpha) - T_-^0 \end{pmatrix}$$
(3.3)

and $F(T_+^0, T_-^0, \tau, 0, 0) = 0$. Next, we look at the derivatives of $F$ with respect to $T_+$, $T_-$, $\alpha$ and $\varepsilon$ at the point $(T_+^0, T_-^0, \tau, 0, 0)$. We have

$$\frac{\partial}{\partial T_-}\big[x_-(-T_-,\tau,\alpha,\varepsilon) + q_-(-T_-,\alpha) - x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) - q_+(T_+,\alpha)\big] = -\dot{x}_-(-T_-,\tau,\alpha,\varepsilon) - \dot{q}_-(-T_-,\alpha) \to -\dot{q}_-(-T_-,\alpha), \quad \text{as } \varepsilon \to 0,$$

and similarly, using

$$\begin{aligned} \varepsilon \dot{x}_+(\varepsilon T_+,\tau,\alpha,\varepsilon) ={}& f_+\big(x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) + q_+(T_+,\alpha)\big) - f_+\big(q_+(T_+,\alpha)\big) \\ &+ \varepsilon g_+\big(\varepsilon T_+ + \tau, x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) + q_+(T_+,\alpha), \varepsilon\big), \end{aligned}$$

we get

$$\frac{\partial}{\partial T_+}\big[x_-(-T_-,\tau,\alpha,\varepsilon) + q_-(-T_-,\alpha) - x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) - q_+(T_+,\alpha)\big] = -\varepsilon \dot{x}_+(\varepsilon T_+,\tau,\alpha,\varepsilon) - \dot{q}_+(T_+,\alpha) \to -\dot{q}_+(T_+,\alpha), \quad \text{as } \varepsilon \to 0.$$

Next

$$\frac{\partial}{\partial \alpha}\big[x_-(-T_-,\tau,\alpha,\varepsilon) + q_-(-T_-,\alpha) - x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) - q_+(T_+,\alpha)\big] \to q_{-\alpha}(-T_-,\alpha) - q_{+\alpha}(T_+,\alpha), \quad \text{as } \varepsilon \to 0,$$

and

$$\frac{\partial}{\partial \tau}\big[x_-(-T_-,\tau,\alpha,\varepsilon) + q_-(-T_-,\alpha) - x_+(\varepsilon T_+,\tau,\alpha,\varepsilon) - q_+(T_+,\alpha)\big] \to 0 \quad \text{as } \varepsilon \to 0.$$

So, the Jacobian matrix $L$ of $F$ at the point $(T_+^0, T_-^0, \tau, 0, 0)$ is

$$L := F'_{(T_+, T_-, \tau, \alpha)}\big(T_+^0, T_-^0, \tau, 0, 0\big) = \begin{pmatrix} -\dot{q}_+(T_+^0,0) & -\dot{q}_-(-T_-^0,0) & 0 & q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0) \\ 0 & -G'\big(q_-(-T_-^0,0)\big)\dot{q}_-(-T_-^0,0) & 0 & G'\big(q_-(-T_-^0,0)\big)\, q_{-\alpha}(-T_-^0,0) \\ 0 & 1 & 0 & 0 \end{pmatrix}$$

and $(\mu_+, \mu_-, \tau, w) \in \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}^{n-1}$ belongs to the kernel $\mathcal{N}L$ of $L$ if and only if

$$\begin{cases} \mu_- = 0, \\ \big[q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0)\big] w = \dot{q}_+(T_+^0,0)\, \mu_+, \\ G'\big(q_-(-T_-^0,0)\big)\, q_{-\alpha}(-T_-^0,0)\, w = 0. \end{cases}$$
(3.4)

From $G\big(q_-(-T_-(\alpha),\alpha)\big) = 0$, we get

$$G'\big(q_-(-T_-^0,0)\big)\big[q_{-\alpha}(-T_-^0,0) - \dot{q}_-(-T_-^0,0)\, T_-'(0)\big] = 0;$$
(3.5)

thus, on account of the transversality condition $G'\big(q(-T_-^0,0)\big)\, \dot{q}_-(-T_-^0,0) \neq 0$, (3.4) is equivalent to

$$\begin{cases} \big[q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0)\big] w = \dot{q}_+(T_+^0,0)\, \mu_+, \\ T_-'(0) w = 0, \qquad \mu_- = 0. \end{cases}$$
(3.6)

Next, from $G\big(q_+(T_+(\alpha),\alpha)\big) = 0$, we get

$$G'\big(q(T_+^0,0)\big)\big[\dot{q}_+(T_+^0,0)\, T_+'(0) + q_{+\alpha}(T_+^0,0)\big] = 0;$$
(3.7)

then, subtracting (3.5) from (3.7) and using $q(T_+^0,0) = q(-T_-^0,0)$, we obtain:

$$G'\big(q(T_+^0,0)\big)\big[\dot{q}_+(T_+^0,0)\, T_+'(0) + \dot{q}_-(-T_-^0,0)\, T_-'(0)\big] = G'\big(q(T_+^0,0)\big)\big[q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0)\big].$$

So, if $w \in \mathbb{R}^{n-1}$ satisfies (3.6), we see that

$$G'\big(q(T_+^0,0)\big)\, \dot{q}_+(T_+^0,0)\, T_+'(0)\, w = G'\big(q(T_+^0,0)\big)\, \dot{q}_+(T_+^0,0)\, \mu_+$$

and then, on account of transversality, $T_+'(0) w = \mu_+$. Summarizing, we have seen that if $(\mu_+, \mu_-, \tau, w) \in \mathcal{N}L$, then $\mu_+ = T_+'(0) w$, $\mu_- = 0$, and $w \in \mathbb{R}^{n-1}$ satisfies

$$\begin{cases} \big[q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0)\big] w = \dot{q}_+(T_+^0,0)\, T_+'(0)\, w, \\ T_-'(0) w = 0. \end{cases}$$
(3.8)

On the other hand, if $w \in \mathbb{R}^{n-1}$ satisfies (3.8), then $(T_+'(0)w, 0, \tau, w)$ belongs to $\mathcal{N}L$. So $\mathcal{N}L = \operatorname{span}\{(0,0,1,0)\}$ if and only if system (3.8) has only the trivial solution $w = 0$. But (3.8) is equivalent to

$$\begin{cases} \dfrac{\partial}{\partial \alpha}\big[q_-(-T_-(\alpha),\alpha) - q_+(T_+(\alpha),\alpha)\big]_{\alpha=0}\, w = 0, \\ T_-'(0) w = 0, \end{cases}$$

and hence (3.8) has only the trivial solution if and only if the non-degeneracy condition (1.4) holds. We emphasize the fact that, assuming condition (1.4), the equation $F(T_+,T_-,\tau,\alpha,0) = 0$ has the manifold of solutions $(T_+,T_-,\tau,\alpha) = (T_+^0, T_-^0, \tau, 0)$, and the linearization of $F$ at these points is Fredholm of index zero with the one-dimensional kernel $\operatorname{span}\{(0,0,1,0)\}$. Hence, there is a unique vector $\tilde{\psi} \in \mathbb{R}^{n+2}$, up to a multiplicative constant, such that $\tilde{\psi}^* L = 0$, i.e.,

$$\tilde{\psi}^* \begin{pmatrix} -\dot{q}_+(T_+^0,0) & -\dot{q}_-(-T_-^0,0) & 0 & q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0) \\ 0 & -G'\big(q_-(-T_-^0,0)\big)\dot{q}_-(-T_-^0,0) & 0 & G'\big(q_-(-T_-^0,0)\big)\, q_{-\alpha}(-T_-^0,0) \\ 0 & 1 & 0 & 0 \end{pmatrix} = 0.$$

Writing $\tilde{\psi} = (\psi^*, \psi_1, \psi_2)$, $\psi \in \mathbb{R}^n$, $\psi_1, \psi_2 \in \mathbb{R}$, we see that $\psi$, $\psi_1$, $\psi_2$ satisfy (3.1). This proves the claim made before the statement of Theorem 3.1.

We recall that our purpose is to solve the equation $F(T_+,T_-,\tau,\alpha,\varepsilon) = 0$ for $\varepsilon \neq 0$, and that $F(T_+,T_-,\tau,\alpha,0) = 0$ has the one-dimensional manifold of solutions $(T_+,T_-,\tau,\alpha) = (T_+^0,T_-^0,\tau,0)$, whose linearization along the points of this manifold is Fredholm with the one-dimensional kernel $\operatorname{span}\{(0,0,1,0)\}$. Hence, we are in a position to apply the following result, which has been essentially proved in [13].

Theorem 3.2 Let $X$, $Y$ be Banach spaces and $F: X \times \mathbb{R} \to Y$ a $C^2$-map such that $F(x,0) = 0$ has a $C^2$, $d$-dimensional manifold of solutions $M = \{x = \xi(\mu) \mid \mu \in \mathbb{R}^d\}$. Assume that for any $\mu$ in a neighborhood of $\mu = 0$ the linearization $L(\mu) = D_1 F(\xi(\mu),0)$ has the null space $T_{\xi(\mu)} M = \operatorname{span}\{\xi'(\mu)\}$. Assume further that $L(\mu)$ is Fredholm of index zero, and let $\Pi(\mu): Y \to \mathcal{R}L(\mu)$ be a projection of $Y$ onto the range of $L(\mu)$. Then, if the Poincaré-Melnikov function

$$\big[I - \Pi(\mu)\big] D_2 F\big(\xi(\mu), 0\big)$$

has a simple zero at $\mu = 0$, then there exist $\bar{\varepsilon} > 0$ and a unique map $(-\bar{\varepsilon}, \bar{\varepsilon}) \ni \varepsilon \mapsto x(\varepsilon) \in X$ such that $F(x(\varepsilon), \varepsilon) = 0$. Moreover, $D_1 F(x(\varepsilon), \varepsilon)$ is an isomorphism for $\varepsilon \neq 0$.

Actually, the statement of [[13], Theorem 4.1] is slightly different from the above; hence, we give a proof of Theorem 3.2 in the Appendix.
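A finite-dimensional caricature of Theorem 3.2 (entirely our own construction, not from [13]): for $F(x,y,\varepsilon) = (x,\, xy + \varepsilon \sin y)$ on $\mathbb{R}^2 \times \mathbb{R}$, the set $\{x = 0\}$ is a manifold of solutions of $F(\cdot,0) = 0$, the linearization there has one-dimensional kernel $\operatorname{span}\{(0,1)\}$, and the bifurcation function is proportional to $\sin y$, with a simple zero at $y = 0$. Newton's method then locates, for $\varepsilon \neq 0$, the unique nearby solution $(0,0)$, at which $D_1 F$ is invertible.

```python
import math

def F(x, y, eps):
    # F(., 0) = 0 on the manifold {x = 0}; the kernel of the linearization
    # there is span{(0, 1)}; the bifurcation function is proportional to sin(y)
    return (x, x * y + eps * math.sin(y))

def newton(eps, x, y, iters=15):
    # Newton's method with the exact Jacobian [[1, 0], [y, x + eps*cos(y)]]
    for _ in range(iters):
        f1, f2 = F(x, y, eps)
        dx = -f1
        dy = -(f2 + y * dx) / (x + eps * math.cos(y))
        x, y = x + dx, y + dy
    return x, y

x_sol, y_sol = newton(0.1, 0.05, 0.3)   # converges to the persisting solution (0, 0)
```

Note that for $\varepsilon \neq 0$ the Jacobian at the limit point has determinant $\varepsilon \cos 0 = \varepsilon \neq 0$, matching the isomorphism claim in the theorem.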

We apply Theorem 3.2 to the map $F(T_+,T_-,\tau,\alpha,\varepsilon)$ with $\mu = \tau$. Then $L(\tau) = L$ is independent of $\tau$, and hence so is $\Pi(\tau) = \Pi$. Next, $[I-\Pi]z = \frac{\tilde{\psi}^* z}{|\tilde{\psi}|^2}\, \tilde{\psi}$, where $\mathcal{R}L = \{\tilde{\psi}\}^\perp$ and $\tilde{\psi} = (\psi^*, \psi_1, \psi_2) \in \mathbb{R}^{n+2}$, $\psi \in \mathbb{R}^n$, $\psi_1, \psi_2 \in \mathbb{R}$, is any vector satisfying (3.1). To apply Theorem 3.2, we look at the derivative of $F(T_+^0, T_-^0, \tau, 0, \varepsilon)$ with respect to $\varepsilon$ at $\varepsilon = 0$. The derivative of the first component follows from the expressions for $x_{-,\varepsilon}(t,\tau,\alpha,0)$ and $\frac{\partial}{\partial\varepsilon}\big|_{\varepsilon=0} x_+(\varepsilon t,\tau,\alpha,\varepsilon)$ computed above, whereas differentiating (2.10) with respect to $\varepsilon$ at $\varepsilon = 0$ we get

$$-G'\big(q_-(-T_-,\alpha)\big) \int_{-T_-}^0 X_-(-T_-,\alpha)\, X_-^{-1}(s,\alpha)\, g_-\big(s+\tau, q_-(s,\alpha), 0\big)\, ds.$$

We then obtain

$$F_\varepsilon\big(T_+^0, T_-^0, \tau, 0, 0\big) = \begin{pmatrix} -\displaystyle\int_0^{T_+^0} X_+(T_+^0,0)\, X_+(s,0)^{-1}\, g_+\big(\tau, q_+(0,0), 0\big)\, ds - \displaystyle\int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, g_-\big(s+\tau, q_-(s,0), 0\big)\, ds \\[4pt] -G'\big(q_-(-T_-^0,0)\big) \displaystyle\int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-^{-1}(s,0)\, g_-\big(s+\tau, q_-(s,0), 0\big)\, ds \\[4pt] 0 \end{pmatrix}$$

and then the Poincaré-Melnikov function is:

$$\begin{aligned} M(\tau) :={}& \psi^* \int_0^{T_+^0} X_+(T_+^0,0)\, X_+(s,0)^{-1}\, g_+\big(\tau, u(0,0), 0\big)\, ds \\ &+ \psi^* \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, g_-\big(s+\tau, u(s,0), 0\big)\, ds \\ &+ \psi_1\, G'\big(q(-T_-^0,0)\big) \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-^{-1}(s,0)\, g_-\big(s+\tau, q_-(s,0), 0\big)\, ds. \end{aligned}$$
(3.9)

The conclusion of Theorem 3.1 now follows easily from (3.9) and Theorem 3.2.  □
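The simple-zero condition on $M$ is easy to check numerically in a model case. In the sketch below (our own illustration, in which the weight $\psi^* X(T) X^{-1}(s)$ is collapsed to the hypothetical scalar weight $\sin s$ and the perturbation to $g(s+\tau) = \cos(s+\tau)$), the Melnikov-type integral reduces to $-\pi \sin\tau$, which has simple zeros at $\tau = 0, \pi$; we locate the one near $\pi$ by bisection and verify that the slope there does not vanish.

```python
import math

def melnikov(tau, n=4000):
    # toy Melnikov integral  M(tau) = int_0^{2pi} sin(s) cos(s + tau) ds,
    # which equals -pi*sin(tau) analytically
    h = 2.0 * math.pi / n
    return sum(math.sin(i * h) * math.cos(i * h + tau) for i in range(n)) * h

def bisect_zero(a, b, iters=80):
    # locate a sign change of M between a and b
    fa = melnikov(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = melnikov(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

tau0 = bisect_zero(3.0, 3.3)                                    # zero near pi
slope = (melnikov(tau0 + 1e-6) - melnikov(tau0 - 1e-6)) / 2e-6  # nonzero: simple zero
```

The rectangle rule is spectrally accurate here because the integrand is smooth and $2\pi$-periodic; the nonzero slope is what the theorems above require of the bifurcation function.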

4 Poincaré-Melnikov function and adjoint system

In this section, we give a suitable definition of the adjoint system of the linearization of (1.6) along $q_-(t)$, in such a way that the Poincaré-Melnikov function (3.2) can be related to the solutions of this adjoint system.

Let $R: U_0 \cap S \to U_+ \cap S$ be the $C^1$-map defined in the Introduction, and recall the impact equation (1.6):

$$\begin{cases} \dot{x} = f_-(x), \qquad x(0) = q_-(0,\alpha) \in S \cap U_0, \\ x\big(-T_-(\alpha)\big) = R\big(x(0)\big), \qquad G\big(x(-T_-(\alpha))\big) = 0, \qquad -T_-(\alpha) \le t \le 0. \end{cases}$$
(4.1)

For $\alpha = 0$, (4.1) has the solution $x(t) = q_-(t,0)$, $-T_-^0 \le t \le 0$. We let $x(t,\alpha)$ denote the solution of the impact system (4.1) on $[-T_-(\alpha), 0]$. Then its derivative with respect to $\alpha$ at $\alpha = 0$ satisfies the linearized equation:

$$\begin{cases} \dot{u} = f_-'\big(q_-(t,0)\big) u, \qquad u(0) = q_{-\alpha}(0,0), \\ R'\big(q_-(0,0)\big) u(0) = u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T_1, \\ G'\big(q_-(-T_-^0,0)\big)\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T_1\big] = 0, \qquad T_-'(0) = T_1: \mathbb{R}^{n-1} \to \mathbb{R}. \end{cases}$$
(4.2)

Next, recalling (1.1), we consider a perturbed impact system of (4.1) (see also (2.8)) of the form

$$\begin{cases} \dot{x} = f_-(x) + \varepsilon g_-(t+\tau, x, \varepsilon), \qquad x(0) = q_-(0,\alpha) \in S \cap U_0, \\ x\big(-T_-(\alpha,\varepsilon)\big) = R\big(\tau; x(0), \varepsilon\big), \qquad G\big(x(-T_-(\alpha,\varepsilon))\big) = 0, \qquad -T_-(\alpha,\varepsilon) \le t \le 0, \end{cases}$$
(4.3)

where $R: \mathbb{R} \times (U_0 \cap S) \times (-\delta, \delta) \to U_+ \cap S$ is defined as follows: $R(\tau; \xi, \varepsilon) = x_+\big(\varepsilon T_+(\xi,\tau,\varepsilon), \tau, \varepsilon\big)$, where $x_+(t,\tau,\varepsilon)$ is the solution of

$$\varepsilon \dot{x} = f_+(x) + \varepsilon g_+(t+\tau, x, \varepsilon), \qquad x(0) = \xi.$$

Note that $R$ is a $C^2$-map on $\mathbb{R} \times (U_0 \cap S) \times \mathbb{R}$ taking values in $U_+ \cap S$, and $R(\tau; q(0,\alpha), 0) = q_+(T_+(\alpha),\alpha)$; moreover, when $g_+$ is autonomous, $R$ is independent of $\tau$, so we may take $\tau = 0$ in its definition. We recall that, for simplicity, we write $R(\xi)$ instead of $R(\tau; \xi, 0)$, $\xi \in S$.

To study the problem of existence of solutions of system (4.3), we are led to find conditions on $h(t)$, $d$ and $T_1$ so that the non-homogeneous linear equation

$$\begin{cases} \dot{u} - f_-'\big(q_-(t,0)\big) u = h(t), \qquad u(0) = q_{-\alpha}(0,0)\, \theta, \quad \theta \in \mathbb{R}^{n-1}, \\ u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T - R'\big(q_-(0,0)\big) u(0) = d \in \mathbb{R}^n, \\ G'\big(q_-(-T_-^0,0)\big)\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T\big] = 0, \qquad T = T_1 \end{cases}$$
(4.4)

has a solution $(u(t), \theta, T)$. Let us note, concerning equation (4.4) (and similarly (4.2)), that the condition $u(-T_-^0) - \dot{q}_-(-T_-^0,0) T - R'(q(0,0)) u(0) = d$ only involves the derivative of $R(\xi)$ on the tangent space $T_\xi S$, since $u(0) = q_{-\alpha}(0,0)\theta \in T_\xi S$, $\xi = q_-(0,0)$. So, it is independent of the extension of $R(\xi)$ we take in a neighborhood of $q_-(0,0)$. We also note that, for simplicity, we again denote by $T_1$ the value of the linear functional $T_1$ in (4.4).

Since $G\big(R(q_-(0,\alpha))\big) = 0$, we get

$$G'\big(R(q_-(0,0))\big)\, R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\, \theta = 0$$

for any $\theta \in \mathbb{R}^{n-1}$, and then

$$G'\big(R(q_-(0,0))\big)\, d = G'\big(R(q_-(0,0))\big)\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T - R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\, \theta\big] = 0.$$

So, if equation (4.4) has a solution, we must necessarily have

$$G'\big(R(q_-(0,0))\big)\, d = 0 \quad \big[\text{equivalently, } G'\big(q_+(T_+^0,0)\big)\, d = 0\big].$$

Next, we define two Hilbert spaces:

$$\begin{aligned} X &:= \big\{(u,\theta,T) \in W^{1,2}\big([-T_-^0,0], \mathbb{R}^n\big) \times \mathbb{R}^{n-1} \times \mathbb{R} \;\big|\; u(0) = q_{-\alpha}(0,0)\, \theta\big\}, \\ Y &:= \big\{(h,d,c,T) \in L^2\big([-T_-^0,0], \mathbb{R}^n\big) \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \;\big|\; G'\big(R(q_-(0,0))\big)\, d = 0\big\}. \end{aligned}$$

Note that $Y$ is a Hilbert space and $X$ is a closed subspace of the Hilbert space $W^{1,2}\big([-T_-^0,0], \mathbb{R}^n\big) \times \mathbb{R}^{n-1} \times \mathbb{R}$. Then (4.4) can be written as

$$A(u,\theta,T) = (h, d, 0, T_1)$$

with

$$A(u,\theta,T) := \begin{pmatrix} \dot{u} - f_-'\big(q_-(t,0)\big) u \\ u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T - R'\big(q_-(0,0)\big) u(0) \\ G'\big(q_-(-T_-^0,0)\big)\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T\big] \\ T \end{pmatrix}$$

and A:XY.

Lemma 4.1 The range $\mathcal{R}A$ is closed.

Proof Indeed, let $A(u_n, \theta_n, T_n) = (h_n, d_n, 0, T_{1n}) \to (\bar{h}, \bar{d}, 0, \bar{T}_1)$. Then

$$u_n(t) = q_{-\alpha}(t,0)\, \theta_n - \int_t^0 X_-(t)\, X_-^{-1}(s)\, h_n(s)\, ds$$

and

$$\begin{cases} R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\, \theta_n - q_{-\alpha}(-T_-^0,0)\, \theta_n = -d_n - \displaystyle\int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, h_n(s)\, ds - \dot{q}_-(-T_-^0,0)\, T_{1n}, \\ G'\big(q_-(-T_-^0,0)\big)\, d_n = 0. \end{cases}$$

Since

$$-d_n - \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, h_n(s)\, ds - \dot{q}_-(-T_-^0,0)\, T_{1n} \to -\bar{d} - \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, \bar{h}(s)\, ds - \dot{q}_-(-T_-^0,0)\, \bar{T}_1,$$

and $\mathcal{R}\big[R'(q(0,0))\, q_{-\alpha}(0,0) - q_{-\alpha}(-T_-^0,0)\big]$ is closed, then $G'\big(q_-(-T_-^0,0)\big)\, \bar{d} = 0$ and there exists $\bar{\theta} \in \mathbb{R}^{n-1}$ such that

$$R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\, \bar{\theta} - q_{-\alpha}(-T_-^0,0)\, \bar{\theta} = -\bar{d} - \int_{-T_-^0}^0 X_-(-T_-^0,0)\, X_-(s,0)^{-1}\, \bar{h}(s)\, ds - \dot{q}_-(-T_-^0,0)\, \bar{T}_1.$$

By taking

$$\bar{u}(t) := q_{-\alpha}(t,0)\, \bar{\theta} - \int_t^0 X_-(t)\, X_-^{-1}(s)\, \bar{h}(s)\, ds, \qquad \bar{T} = \bar{T}_1,$$

we derive $(\bar{h}, \bar{d}, 0, \bar{T}_1) = A(\bar{u}, \bar{\theta}, \bar{T}) \in \mathcal{R}A$. The proof is finished. □

Next, we prove the following result.

Proposition 4.1 Let $(h, d, 0, T_1) \in Y$. Then the inhomogeneous system (4.4) has a solution $(u(t), \theta, T) \in X$ if and only if the equation

$$\int_{-T_-^0}^0 v^*(t)\, h(t)\, dt + \psi^* d + \psi_2 T_1 = 0$$
(4.5)

holds for any solution v(t) of the adjoint system

$$\begin{cases} \dot{v}(t) + f_-'\big(q_-(t,0)\big)^* v(t) = 0, \\ \big[q_{-\alpha}(0,0)\big]^*\big[v(0) - R'\big(q_-(0,0)\big)^* \psi\big] = 0, \\ v(-T_-^0) = \psi + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*, \qquad \psi^* \dot{q}_+(T_+^0,0) = 0 \end{cases}$$
(4.6)

and $\psi_2 = \psi^* \dot{q}_-(-T_-^0,0) + \psi_1\, G'\big(q_-(-T_-^0,0)\big)\, \dot{q}_-(-T_-^0,0)$.

Proof Before starting with the proof, we observe that, because of $G'\big(q_+(T_+^0,0)\big)\, d = 0$, $\psi$ is not uniquely determined by equation (4.5): replacing it with $\psi + \lambda G'\big(q_-(-T_-^0,0)\big)^*$, $\lambda \in \mathbb{R}$, leaves the equation unchanged. So, in equation (4.5), we look for $\psi$ in a subspace of $\mathbb{R}^n$ which is transverse to $G'\big(q_-(-T_-^0,0)\big)^*$. It turns out that the best choice, from a computational point of view, is to take $\psi$ such that $\psi^* \dot{q}_+(T_+^0,0) = 0$ (see equation (3.1)).

First, we prove necessity. Assume that (4.4) can be solved for $(u,\theta,T) \in X$, and let $(v(t), \psi, \psi_1)$, $v \in W^{1,2}\big([-T_-^0,0], \mathbb{R}^n\big)$, be a solution of equation (4.6). Then

$$\begin{aligned} h(t) &= \dot{u}(t) - f_-'\big(q_-(t,0)\big)\, u(t), \\ d &= u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T - R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\, \theta, \\ 0 &= G'\big(q_-(-T_-^0,0)\big)\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T\big], \qquad T_1 = T. \end{aligned}$$

Plugging these equalities into the left-hand side of (4.5) and integrating by parts, (4.5) reads

$$\begin{aligned} &v^*(0)\, q_{-\alpha}(0,0)\, \theta - v^*(-T_-^0)\, u(-T_-^0) - \int_{-T_-^0}^0 \big[\dot{v}(t) + f_-'\big(q_-(t,0)\big)^* v(t)\big]^* u(t)\, dt \\ &\quad + \psi^*\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T - R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\, \theta\big] \\ &\quad + \psi_1\, G'\big(q_-(-T_-^0,0)\big)\big[u(-T_-^0) - \dot{q}_-(-T_-^0,0)\, T\big] + \psi_2 T = 0 \end{aligned}$$

or

$$\begin{aligned} &\Big\{\big[q_{-\alpha}(0,0)\big]^*\big[v(0) - R'\big(q_-(0,0)\big)^* \psi\big]\Big\}^* \theta + \big[\psi - v(-T_-^0) + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*\big]^* u(-T_-^0) \\ &\quad - \int_{-T_-^0}^0 \big[\dot{v}(t) + f_-'\big(q_-(t,0)\big)^* v(t)\big]^* u(t)\, dt + \big[\psi_2 - \psi^* \dot{q}_-(-T_-^0,0) - \psi_1\, G'\big(q_-(-T_-^0,0)\big)\, \dot{q}_-(-T_-^0,0)\big] T = 0, \end{aligned}$$
(4.7)

which holds because of the definition of $\psi_2$ and the fact that $(v(t), \psi, \psi_1)$ satisfies (4.6).

To prove sufficiency, we show that if $(h,d,0,T_1) \in Y$ does not belong to $\mathcal{R}A$, then there exists a solution of the adjoint system (4.6) for which (4.5) does not hold. So, assume that $(h,d,0,T_1) \notin \mathcal{R}A$. By Lemma 4.1 and the Hahn-Banach theorem, there is $(\bar{v}, \bar{\psi}, \bar{\psi}_1, \bar{\psi}_2) \in Y$ such that

$$\big\langle (\bar{v}, \bar{\psi}, \bar{\psi}_1, \bar{\psi}_2),\, A(u,\theta,T) \big\rangle = 0, \qquad \forall (u,\theta,T) \in X,$$
(4.8)

and

$$\big\langle (\bar{v}, \bar{\psi}, \bar{\psi}_1, \bar{\psi}_2),\, (h, d, 0, T_1) \big\rangle = 1,$$
(4.9)

where $\langle \cdot, \cdot \rangle$ is the usual scalar product on $Y$. We already noted that we can assume $\bar{\psi}^* \dot{q}_+(T_+^0,0) = 0$, with (4.8)-(4.9) remaining valid. Repeating our previous arguments, we see that $\bar{v}(t) \in W^{1,2}\big([-T_-^0,0], \mathbb{R}^n\big)$ and that (4.8) implies that $(\bar{v}, \bar{\psi}, \bar{\psi}_1, \bar{\psi}_2)$ solves the adjoint system (4.6). Summarizing, if $(h,d,0,T_1) \notin \mathcal{R}A$, there exists a solution of the adjoint system for which, by (4.9), equation (4.5) does not hold. This finishes the proof. □

Again, we note that equation (4.6) only depends on the derivative $R'(q_-(0,0))$ on $T_{q_-(0,0)} S$, since $q_{-\alpha}(0,0)^* R'\big(q_-(0,0)\big)^* \psi = \big[\dot{q}_+(T_+^0,0)\, T_+'(0) + q_{+\alpha}(T_+^0,0)\big]^* \psi = q_{+\alpha}(T_+^0,0)^* \psi$, where we use $\psi^* \dot{q}_+(T_+^0,0) = 0$; in other words, it is independent of the $C^1$-extension of $R(\xi)$ we take on the whole of $U_0$.

We now prove the following proposition.

Proposition 4.2 The adjoint system (4.6) has a solution if and only if $(\psi, \psi_1)$ satisfies the first and the third equations in (3.1) (and we take the second equation in (3.1) as the definition of $\psi_2$).

Proof Indeed, let $v(t)$ be a solution of (4.6); then

$$v(t) = Y(t)\, Y(-T_-^0)^{-1}\, v(-T_-^0),$$

$Y(t) = X_-^{-1}(t)^*$ being the fundamental matrix of the linear equation $\dot{v}(t) + f_-'\big(q_-(t,0)\big)^* v(t) = 0$. Then, taking $v(-T_-^0) = \psi + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*$, the two remaining conditions in (4.6) read:

$$\begin{cases} \big[q_{-\alpha}(0,0)\big]^*\Big[Y(-T_-^0)^{-1}\big[\psi + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*\big] - R'\big(q_-(0,0)\big)^* \psi\Big] = 0, \\ \psi^* \dot{q}_+(T_+^0,0) = 0, \end{cases}$$

that can be written as

$$\begin{cases} \big[q_{-\alpha}(-T_-^0,0)\big]^*\big[\psi + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*\big] - \big[R'\big(q_-(0,0)\big)\, q_{-\alpha}(0,0)\big]^* \psi = 0, \\ \psi^* \dot{q}_+(T_+^0,0) = 0, \end{cases}$$

or else, on account of $R\big(q_-(0,\alpha)\big) = q_+\big(T_+(\alpha),\alpha\big)$:

$$\begin{cases} \psi^*\big[q_{-\alpha}(-T_-^0,0) - q_{+\alpha}(T_+^0,0)\big] + \psi_1\, G'\big(q_-(-T_-^0,0)\big)\, q_{-\alpha}(-T_-^0,0) = 0, \\ \psi^* \dot{q}_+(T_+^0,0) = 0. \end{cases}$$

The proof is finished. □

We conclude this section by giving another expression for the Poincaré-Melnikov function (3.2) in terms of the solution of the adjoint system (4.6). To this end, let $v(t)$ be a solution of the adjoint system (4.6). Since a fundamental matrix of the linear equation

$$\dot{v} + f_-'\big(q_-(t,0)\big)^* v = 0$$

is $X_-^{-1}(t)^*$, we see that

$$v(t) = X_-^{-1}(t)^*\, X_-(-T_-^0)^*\, v(-T_-^0) = X_-^{-1}(t)^*\, X_-(-T_-^0)^*\big[\psi + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*\big],$$

so:

$$v^*(t) = \big[\psi + \psi_1\, G'\big(q_-(-T_-^0,0)\big)^*\big]^*\, X_-(-T_-^0)\, X_-^{-1}(t).$$

Then

$$M(\tau) = \psi^* \int_0^{T_+^0} X_+(T_+^0)\, X_+(t)^{-1}\, g_+\big(\tau, q_+(0,0), 0\big)\, dt + \int_{-T_-^0}^0 v^*(t)\, g_-\big(t+\tau, q_-(t,0), 0\big)\, dt.$$

As for the first term in the above equality, we can show that it is related to the impact map $R(\tau; \xi, \varepsilon)$. Indeed, from Section 2 we know that the solution of the singular equation

x ˙ = f + (x)+εg(t,x,ε)

can be written as

x(t+τ)= x + (t)+ q + ( ε 1 t , α )

with x + (εt) as in equation (2.6). Thus, ξ=x(τ)= q + (0,α)S and

R ( τ ; ξ , ε ) = x + ( ε T + ) + q + ( T + , α ) = 0 T + X + ( T + , α ) X + 1 ( s , α ) h + ( ε s , τ , x + ( ε s ) , α , ε ) d s + q + ( T + , α )

for some T + = T + (τ;α,ε). Then

$$R_\varepsilon\big(\tau; q_-(0,0),0\big) = \dot q_+(T_+^0,0)\,\frac{\partial T_+}{\partial\varepsilon} + \int_0^{T_+^0} X_+(T_+^0)\,X_+^{-1}(s)\,g_+\big(\tau, q_+(0,0),0\big)\,ds,$$

and then, using again $\psi^*\dot q_+(T_+^0,0) = 0$, we see that

$$\psi^*\int_0^{T_+^0} X_+(T_+^0)\,X_+^{-1}(s)\,g_+\big(\tau, q_+(0,0),0\big)\,ds = \psi^* R_\varepsilon\big(\tau; q_-(0,0),0\big),$$

i.e.

$$M(\tau) = \psi^* R_\varepsilon\big(\tau; q_-(0,0),0\big) + \int_{-T_-^0}^0 v^*(t)\,g_-\big(t+\tau, q_-(t,0),0\big)\,dt.$$
(4.10)
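Once the adjoint solution $v(t)$ is available, the second term of (4.10) is an ordinary quadrature in $\tau$, and a simple zero of $M$ can be located by a root bracket. The sketch below uses toy choices for $T_-^0$, $v$ and $g_-$, and omits the $\psi^*R_\varepsilon$ term (illustrative assumptions, not data from the paper):

```python
# Sketch with toy data: M(tau) reduces to a quadrature once v(t) is known,
# and a simple zero is located by bracketing a sign change.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

T_minus = np.pi                                     # plays the role of T_-^0
v = lambda t: np.array([np.sin(t), np.cos(t)])      # toy adjoint solution v(t)
g_minus = lambda t: np.array([0.0, np.cos(t)])      # toy periodic forcing g_-

def M(tau):
    # second term of (4.10): int_{-T_-^0}^0 v(t)^T g_-(t + tau) dt
    integrand = lambda t: v(t) @ g_minus(t + tau)
    return quad(integrand, -T_minus, 0.0)[0]

# for these toy choices M(tau) = (pi/2) cos(tau), so a simple zero sits at pi/2
tau0 = brentq(M, 0.1, 3.0)
```

Since $M'(\tau_0)\ne 0$ at such a bracketed sign change, the zero is simple, which is exactly the hypothesis needed in the bifurcation statements.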

When $g_+$ is autonomous, $R$ is independent of $\tau$, and the expression (4.10) of the Poincaré-Melnikov function should be compared with the one given in [14, Theorem 4.2], where a Poincaré-Melnikov function characterizing the transition to chaos is given for almost periodic perturbations of autonomous impact equations with a homoclinic orbit.

5 The case of a manifold of periodic solutions

In this section we assume that $q_-(-T_-(\alpha),\alpha) = q_+(T_+(\alpha),\alpha)$ for any $\alpha$ in (an open neighborhood of $\alpha = 0$ in) $\mathbb{R}^{n-1}$. Hence, from (3.3), we see that

$$F\big(T_+(\alpha), T_-(\alpha), \tau, \alpha, 0\big) = \begin{pmatrix} 0\\ 0\\ T_-(\alpha) - T_-^0\end{pmatrix}.$$

We distinguish two cases: $T_-'(0)\ne 0$, and $T_-(\alpha) = T_-^0$ for all $\alpha$ in (an open neighborhood of $\alpha = 0$ in) $\mathbb{R}^{n-1}$. First, we assume that

$$T_-'(0)\ne 0.$$

Then there exists a $C^2$, $(n-2)$-dimensional submanifold $S$ of (an open neighborhood of $\alpha = 0$ in) $\mathbb{R}^{n-1}$ such that $T_-(\alpha) = T_-^0$ for any $\alpha\in S$. So, for $\varepsilon = 0$, $F(T_+,T_-,\tau,\alpha,0) = 0$ has the $(n-1)$-dimensional manifold of solutions

$$(T_+, T_-, \tau, \alpha) = \xi(\alpha,\tau) := \big(T_+(\alpha), T_-^0, \tau, \alpha\big),\qquad (\alpha,\tau)\in S\times\mathbb{R}.$$

So, we are in a position to apply Theorem 3.2. First, we have to verify that the kernel $\mathcal{N}D_1F(\xi(\alpha,\tau),0)$ equals the tangent space $T_{\xi(\alpha,\tau)}X$, $X = \{\xi(\alpha,\tau)\mid(\alpha,\tau)\in S\times\mathbb{R}\}$, and then that the Poincaré-Melnikov function (vector)

$$\big[I - \Pi(\alpha,\tau)\big]D_2F\big(\xi(\alpha,\tau),0\big)$$

has a simple zero at $(\alpha,\tau) = (0,\tau_0)$. Note that

$$T_{\xi(\alpha,\tau)}X = \operatorname{span}\left\{\begin{pmatrix} T_+'(\alpha)v\\ 0\\ 0\\ v\end{pmatrix},\ \begin{pmatrix} 0\\ 0\\ 1\\ 0\end{pmatrix} : v\in T_\alpha S\right\}.$$

From (3.3), we get:

$$D_1F\big(\xi(\alpha,\tau),0\big) = \begin{pmatrix} -\dot q_+(T_+(\alpha),\alpha) & -\dot q_-(-T_-^0,\alpha) & 0 & q_{-\alpha}(-T_-^0,\alpha) - q_{+\alpha}(T_+(\alpha),\alpha)\\ 0 & -G'(q_-(-T_-^0,\alpha))\dot q_-(-T_-^0,\alpha) & 0 & G'(q_-(-T_-^0,\alpha))q_{-\alpha}(-T_-^0,\alpha)\\ 0 & 1 & 0 & 0\end{pmatrix}.$$

Note that $D_1F(\xi(\alpha,\tau),0)$ does not depend on $\tau$. Using $G(q_-(-T_-^0,\alpha)) = 0$ and $q_-(-T_-^0,\alpha) = q_+(T_+(\alpha),\alpha)$ for any $\alpha\in S$, we easily see that

$$D_1F\big(\xi(\alpha,\tau),0\big)\big|_{T_{\xi(\alpha,\tau)}X} = 0$$

for any $v\in T_\alpha S$. On the other hand, assume that

$$\begin{pmatrix} \mu_+\\ \mu_-\\ w\end{pmatrix}\in\mathcal{N}\begin{pmatrix} -\dot q_+(T_+(\alpha),\alpha) & -\dot q_-(-T_-^0,\alpha) & q_{-\alpha}(-T_-^0,\alpha) - q_{+\alpha}(T_+(\alpha),\alpha)\\ 0 & -G'(q_-(-T_-^0,\alpha))\dot q_-(-T_-^0,\alpha) & G'(q_-(-T_-^0,\alpha))q_{-\alpha}(-T_-^0,\alpha)\\ 0 & 1 & 0\end{pmatrix}$$

for some $\mu_+,\mu_-\in\mathbb{R}$ and $w\in\mathbb{R}^{n-1}$. Then $\mu_- = 0$, and $(\mu_+,w)$ satisfies

$$\begin{cases} -\dot q_+(T_+(\alpha),\alpha)\mu_+ + \big[q_{-\alpha}(-T_-^0,\alpha) - q_{+\alpha}(T_+(\alpha),\alpha)\big]w = 0,\\[2pt] G'(q_-(-T_-^0,\alpha))\,q_{-\alpha}(-T_-^0,\alpha)\,w = 0,\end{cases}$$

which, on account of $q_-(-T_-^0,\alpha) = q_+(T_+(\alpha),\alpha)$, is equivalent to

$$\begin{cases} \dot q_+(T_+(\alpha),\alpha)\big[T_+'(\alpha)w - \mu_+\big] = 0,\\[2pt] G'(q_-(-T_-^0,\alpha))\,q_{-\alpha}(-T_-^0,\alpha)\,w = 0.\end{cases}$$

Now, from $G(q_-(-T_-(\alpha),\alpha)) = 0$ we get, for any $w\in\mathbb{R}^{n-1}$:

$$G'\big(q_-(-T_-(\alpha),\alpha)\big)\,q_{-\alpha}\big(-T_-(\alpha),\alpha\big)w = G'\big(q_-(-T_-(\alpha),\alpha)\big)\,\dot q_-\big(-T_-(\alpha),\alpha\big)\,T_-'(\alpha)w,$$

and hence

$$G'\big(q_-(-T_-^0,\alpha)\big)\,q_{-\alpha}(-T_-^0,\alpha)w = 0\iff G'\big(q_-(-T_-^0,\alpha)\big)\,\dot q_-(-T_-^0,\alpha)\,T_-'(\alpha)w = 0,$$

which, in turn, is equivalent to $w\in T_\alpha S$ because of transversality (i.e., $G'(q_-(-T_-^0,\alpha))\dot q_-(-T_-^0,\alpha)\ne 0$) and the fact that $T_\alpha S = \mathcal{N}T_-'(\alpha)$.

Hence, we conclude that N D 1 F(ξ(α,τ),0)= T ξ ( α , τ ) X.

Now we consider the second condition. The Poincaré-Melnikov function (vector) $[I-\Pi(\alpha,\tau)]D_2F(\xi(\alpha,\tau),0)$, $\alpha\in S$, can be written as

$$\psi^*(\alpha,\tau)\,D_2F\big(\xi(\alpha,\tau),0\big),$$
(5.1)

where $\psi^*(\alpha,\tau)$ is a matrix whose rows are left eigenvectors of the zero eigenvalue of the matrix $D_1F(\xi(\alpha,\tau),0)$, that is,

$$\psi^*(\alpha,\tau)\,D_1F\big(\xi(\alpha,\tau),0\big) = 0.$$
(5.2)

Note that $\psi(\alpha,\tau) = \psi(\alpha)$ does not depend on $\tau$, since neither does $D_1F(\xi(\alpha,\tau),0)$. Then (5.1) reads:

$$\begin{aligned} M(\alpha,\tau) := {}& \psi^*(\alpha)\int_0^{T_+(\alpha)} X_+(T_+(\alpha),\alpha)\,X_+(s,\alpha)^{-1}\,g_+\big(\tau, q_-(0,\alpha),0\big)\,ds\\ &+ \psi^*(\alpha)\int_{-T_-^0}^0 X_-(-T_-^0,\alpha)\,X_-(s,\alpha)^{-1}\,g_-\big(s+\tau, q_-(s,\alpha),0\big)\,ds\\ &+ \psi_1(\alpha)\,G'\big(q_-(-T_-^0,\alpha)\big)\int_{-T_-^0}^0 X_-(-T_-^0,\alpha)\,X_-^{-1}(s,\alpha)\,g_-\big(s+\tau, q_-(s,\alpha),0\big)\,ds.\end{aligned}$$

Arguing as in Section 3, equation (5.2) is equivalent to

$$\begin{cases} \psi^*(\alpha)\,\dot q_+(T_+(\alpha),\alpha) = 0,\\[2pt] \psi_2(\alpha) = \big[\psi^*(\alpha) + \psi_1(\alpha)G'(q_-(-T_-^0,\alpha))\big]\dot q_-(-T_-^0,\alpha),\\[2pt] \psi^*(\alpha)\big[q_{-\alpha}(-T_-^0,\alpha) - q_{+\alpha}(T_+(\alpha),\alpha)\big] + \psi_1(\alpha)\,G'(q_-(-T_-^0,\alpha))\,q_{-\alpha}(-T_-^0,\alpha) = 0.\end{cases}$$
(5.3)
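The first and third equations of (5.3) are linear in the unknowns $(\psi,\psi_1)$, so the dimension of their solution space can be computed as a numerical null space. The sketch below assembles them for random stand-in matrices (purely hypothetical data illustrating the linear-algebra structure; for generic data the null space is one-dimensional, whereas for the actual system the dimension is the number $d$ appearing in Theorem 5.1):

```python
# Null-space computation for a linear system of the shape of (5.3):
# unknowns z = (psi in R^n, psi_1 in R); the rows encode psi^T qdot_+ = 0
# and psi^T D + psi_1 G = 0.  All matrices are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 4
qdot_plus = rng.standard_normal(n)       # stand-in for \dot q_+(T_+(alpha), alpha)
D = rng.standard_normal((n, n - 1))      # stand-in for q_{-alpha} - q_{+alpha}
G = rng.standard_normal(n - 1)           # stand-in for G'(q_-) q_{-alpha}

A = np.zeros((n, n + 1))
A[0, :n] = qdot_plus                     # first equation of (5.3)
A[1:, :n] = D.T                          # third equation, psi-part
A[1:, n] = G                             # third equation, psi_1-part

# right singular vectors for (numerically) zero singular values span the kernel
_, s, Vt = np.linalg.svd(A)
null_dim = (n + 1) - int(np.sum(s > 1e-10))
z = Vt[-1]                               # a basis vector of the null space
residual = np.linalg.norm(A @ z)
```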

Moreover, the adjoint variational system along $q_-(t,\alpha)$ is defined as

$$\begin{cases} \dot v(t) + f_-'(q_-(t,\alpha))^*v(t) = 0,\\[2pt] q_{-\alpha}(0,\alpha)^*\big[v(0) - R'(q_-(0,\alpha))^*\psi(\alpha)\big] = 0,\\[2pt] v(-T_-^0) = \psi(\alpha) + \psi_1(\alpha)\,G'(q_-(-T_-^0,\alpha))^*,\\[2pt] \psi^*(\alpha)\,\dot q_+(T_+(\alpha),\alpha) = 0,\end{cases}$$
(5.4)

where $(\psi^*(\alpha),\psi_1(\alpha),\psi_2(\alpha))$ satisfies equation (5.2). Then the Poincaré-Melnikov vector can be written as

$$M(\alpha,\tau) = \psi^*(\alpha)\int_0^{T_+(\alpha)} X_+(T_+(\alpha),\alpha)\,X_+(t,\alpha)^{-1}\,g_+\big(\tau, q_+(0,\alpha),0\big)\,dt + \int_{-T_-^0}^0 v^*(t,\alpha)\,g_-\big(t+\tau, q_-(t,\alpha),0\big)\,dt$$
(5.5)

or else

$$M(\alpha,\tau) = \psi^*(\alpha)\,R_\varepsilon\big(\tau; q_-(0,\alpha),0\big) + \int_{-T_-^0}^0 v^*(t,\alpha)\,g_-\big(t+\tau, q_-(t,\alpha),0\big)\,dt,$$
(5.6)

$v(t,\alpha)$ being the solution of (5.4) and $X_+(t,\alpha)$ the fundamental matrix of the linear equation

$$\dot x = f_+'\big(q_+(t,\alpha)\big)x.$$

Of course, the only difference between the cases $T_-'(0)\ne 0$ and $T_-(\alpha) = T_-^0$ for all $\alpha\in S$ is that in the first case the Poincaré-Melnikov function is defined for $(\alpha,\tau)\in S\times\mathbb{R}$, while in the second it is defined for $(\alpha,\tau)\in O\times\mathbb{R}$ for an open neighborhood $O$ of $0\in\mathbb{R}^{n-1}$. Summarizing, we have proved the following result.

Theorem 5.1 Assume that $q_-(-T_-(\alpha),\alpha) = q_+(T_+(\alpha),\alpha)$ for any $\alpha$ in a neighborhood of $\alpha = 0$, and that either $T_-'(0)\ne 0$ or $T_-(\alpha) = T_-^0$ for any $\alpha$ (in the same neighborhood). Then system (5.3) has a $d$-dimensional space of solutions, where $d = n$ or $d = n+1$ according to which of the two conditions $T_-'(0)\ne 0$ or $T_-(\alpha) = T_-^0$ holds. Moreover, if the Poincaré-Melnikov function (5.5) (or (5.6)) has a simple zero at $(0,\tau_0)$, then system (1.1) has a $T_\varepsilon$-periodic solution $x(t,\varepsilon)$ satisfying (2.1).

Finally, we note that when the Brouwer degree of a Poincaré-Melnikov function from either Theorem 3.1 or Theorem 5.1 can be shown to be non-zero, existence results can be obtained by following [15].

6 Examples

We consider a second-order equation

$$\begin{cases} \varepsilon^2\ddot x = f_+(x,\dot x) + \varepsilon g_+(t,x,\dot x,\varepsilon), & x > 0,\\ \ddot x = f_-(x) + \varepsilon g_-(t,x,\dot x,\varepsilon), & x < 0,\end{cases}$$

with the line $x = 0$ as discontinuity manifold (i.e., with $G(x,\dot x) = x$). We write $q_\pm(t,\alpha) = \begin{pmatrix} q_1^\pm(t,\alpha)\\ \dot q_1^\pm(t,\alpha)\end{pmatrix}$ with $q_-(0,\alpha) = \begin{pmatrix} 0\\ \alpha+\alpha_0\end{pmatrix}$ (i.e., $q_1^\pm(0,\alpha) = 0$ and $\dot q_1^-(0,\alpha) = \alpha+\alpha_0$). We also write $q_+(T_+(\alpha),\alpha) = \begin{pmatrix} 0\\ \varphi(\alpha)\end{pmatrix}$, so that

$$R:\begin{pmatrix} 0\\ \alpha+\alpha_0\end{pmatrix}\mapsto\begin{pmatrix} 0\\ \varphi(\alpha)\end{pmatrix},$$

i.e., we take

$$R(x_1,x_2) = \begin{pmatrix} 0\\ \varphi(x_2-\alpha_0)\end{pmatrix}$$

in the plane coordinates $(x_1,x_2)$. According to equation (5.4), the adjoint variational system reads, with $\psi(\alpha) = \begin{pmatrix}\psi'(\alpha)\\ \psi''(\alpha)\end{pmatrix}$:

$$\begin{cases} \dot v_1 = -f_-'(q_1^-(t,\alpha))v_2,\\[2pt] \dot v_2 = -v_1,\\[2pt] v_2(0) - \varphi'(\alpha)\psi''(\alpha) = 0,\\[2pt] v_1(-T_-^0) = \psi'(\alpha) + \psi_1(\alpha),\\[2pt] v_2(-T_-^0) = \psi''(\alpha),\\[2pt] \psi'(\alpha)\varphi(\alpha) + \psi''(\alpha)f_+(0,\varphi(\alpha)) = 0,\end{cases}$$

which can be written as (with $v_2 = w$ and $v_1 = -\dot w$):

$$\begin{cases} \ddot w = f_-'(q_1^-(t,\alpha))w,\\[2pt] w(0) - \varphi'(\alpha)w(-T_-^0) = 0,\\[2pt] \psi\end{cases}$$
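As a numerical complement to this example, the fast-slow mechanism can be simulated directly: with a stiff restoring force on the singular side, trajectories make only an $O(\varepsilon)$ excursion past the line $x = 0$, mimicking an impact. The data below ($f_+(x,\dot x) = -x$, $f_-(x) = -x$, $g_\pm = 0$) are our own illustrative choices, not taken from the paper:

```python
# Toy instance of the piecewise second-order system of Section 6 (assumed
# data): eps^2 x'' = -x for x > 0 (stiff side), x'' = -x for x < 0.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

def rhs(t, y):
    x, xd = y
    if x > 0:
        return [xd, -x / eps**2]   # eps^2 x'' = f_+(x) on x > 0
    return [xd, -x]                # x'' = f_-(x) on x < 0

# start on the regular side: below, x(t) = -cos(t) hits x = 0 at t = pi/2
# with velocity 1; above, x ~ eps*sin(t/eps), an O(eps) excursion
sol = solve_ivp(rhs, (0.0, 3.0), [-1.0, 0.0], max_step=1e-3, rtol=1e-8)
x_max = sol.y[0].max()             # ~ eps: the stiff side acts as a near-impact
```

As $\varepsilon\to 0$ the excursion height $\varepsilon\,\dot x$ and its duration $\pi\varepsilon$ both vanish, which is the sense in which the singular side approximates the impact map $R$.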