Consider the following problem:
$$\begin{aligned}& \dot{x}=JH'(t,x), \end{aligned}$$
(3.1)
$$\begin{aligned}& x_{1}(0)\cos\alpha+x_{2}(0)\sin\alpha=M_{0} \bigl(x(0),x(1)\bigr), \end{aligned}$$
(3.2)
$$\begin{aligned}& x_{1}(1)\cos\beta+x_{2}(1)\sin\beta=M_{1} \bigl(x(0),x(1)\bigr), \end{aligned}$$
(3.3)
where \(H\in C^{1}([0,1]\times \mathbf {R}^{2n},\mathbf {R})\) and \(H'(t,x)\) is the gradient of H with respect to x, \(x=(x_{1},x_{2})\), \(x_{1},x_{2}\in \mathbf {R}^{n}\), \(\alpha\in[0,\pi)\), \(\beta\in(0,\pi]\), J is the standard symplectic matrix, and \(M_{i}\in C(\mathbf {R}^{2n}\times \mathbf {R}^{2n},\mathbf {R}^{n})\) is bounded (\(i=0,1\)). A function \(x:[0,1]\to \mathbf {R}^{2n}\) is said to be a solution of (3.1)-(3.3) if \(x\in C^{1}([0,1],\mathbf {R}^{2n})\) and \(x=x(t)\) satisfies (3.1)-(3.3).
We also make the following assumptions:
- (H1):
There exists \({\bar{B}}:[0,1]\times \mathbf {R}^{2n}\to{\mathcal {L}}_{s}(\mathbf {R}^{2n})\) with \({\bar{B}}(\cdot, x(\cdot))\in L^{\infty }([0,1],{\mathcal {L}}_{s}(\mathbf {R}^{2n}))\) for all \(x\in C([0,1],\mathbf {R}^{2n})\), \({\bar{B}}_{1}, {\bar{B}}_{2}\in L^{\infty}([0,1],\mathcal {L}_{s}(\mathbf {R}^{2n}))\) such that
$$H'(t,x)={\bar{B}}(t,x)x+h(t,x),\qquad {\bar{B}}_{1}(t)\leq{\bar{B}}(t,x)\leq {\bar{B}}_{2}(t) $$
for all \((t,x)\in[0,1]\times \mathbf {R}^{2n}\), and \(h(t,x):[0,1]\times \mathbf {R}^{2n}\to \mathbf {R}^{2n}\) is bounded.
- (H2):
There exists \({\bar{B}}_{0}:[0,1]\times \mathbf {R}^{2n}\rightarrow \mathcal {L}_{s}(\mathbf {R}^{2n})\) with \({\bar{B}}_{0}(\cdot, x(\cdot))\in L^{\infty}([0,1],{\mathcal {L}}_{s}(\mathbf {R}^{2n}))\) for all \(x\in C([0,1],\mathbf {R}^{2n})\), \({\bar{B}}_{01},{\bar{B}}_{02}\in L^{\infty}([0,1],\mathcal {L}_{s}(\mathbf {R}^{2n}))\) such that
$$H'(t,x)=\bar{B}_{0}(t,x)x,\qquad {\bar{B}}_{01}(t) \leq{\bar{B}}_{0}(t,x)\leq {\bar{B}}_{02}(t) $$
for all \((t,x)\in[0,1]\times \mathbf {R}^{2n}\) with \(\vert x \vert \leq r\) for some constant \(r>0\).
Theorem 3.1
If H satisfies (H1) with \(i_{\alpha,\beta}^{f}(\bar{B}_{1})=i_{\alpha,\beta}^{f}(\bar{B}_{2})\) and \(\nu_{\alpha,\beta}^{f}(\bar{B}_{2})=0\), then (3.1)-(3.3) has one solution. Furthermore, if (H2) and (M1) hold, then (3.1)-(3.3) has one nontrivial solution provided \(i_{\alpha,\beta}^{f}(\bar{B}_{01})=i_{\alpha,\beta}^{f}(\bar{B}_{02})\), \(\nu_{\alpha,\beta}^{f}(\bar{B}_{02})=0\), and \(i_{\alpha,\beta}^{f}(\bar{B}_{01})-i_{\alpha,\beta}^{f}(\bar{B}_{1})\) is odd.
Proof
Let \(X=L^{2}([0,1],\mathbf {R}^{2n})\), \(Y=C([0,1],\mathbf {R}^{2n})\) and \(D(A_{1})=\{x\in H^{1}([0,1],\mathbf {R}^{2n})\vert x_{1}(0)\cos\alpha +x_{2}(0)\sin\alpha=0, x_{1}(1)\cos\beta+x_{2}(1)\sin\beta=0\}\). Define \(A_{1}:D(A_{1})\subset Y\rightarrow X\) by \((A_{1}x)(t)=-J\dot{x}(t)-\mu_{1} x(t)\), where \(\mu_{1}<0\) is chosen so that \(\mu_{1}\neq\beta-\alpha+k\pi\) for all \(k\in \mathbf {Z}\), \(\bar{B}_{1}(t)-\mu_{1}I_{2n}\geq I_{2n}\) and \(\bar{B}_{01}(t)-\mu_{1}I_{2n}\geq I_{2n}\). Then \(A_{1}\) is an unbounded self-adjoint and invertible operator in X with \(\sigma(A_{1})=\sigma_{d}(A_{1})=\{\beta-\alpha-\mu_{1}+k\pi \vert k\in \mathbf {Z}\}\). Define \(N_{1}:Y\rightarrow Y\) by \((N_{1}x)(t)=H'(t,x(t))-\mu_{1}x(t)\) and \((B(x)y)(t)=\bar{B}(t,x(t))y(t)-\mu_{1}y(t)\). Hence (H1) and (H2) imply (N1) and (N2), respectively. Set \((Ax)(t)=-J\dot{x}(t)\), \((\widetilde{B}_{i}x)(t)={\bar{B}}_{i}(t)x(t)-\mu_{1}x(t)\), \((\widetilde{B}_{0i}x)(t)=\bar{B}_{0i}(t)x(t)-\mu_{1}x(t)\), \((B_{i}x)(t)={\bar{B}}_{i}(t)x(t)\) and \((B_{0i}x)(t)={\bar{B}}_{0i}(t)x(t)\); then \(A_{1}=A-\mu_{1}\mathit{Id}\), \(\widetilde{B}_{i}=B_{i}-\mu_{1}\mathit{Id}\), \(\widetilde{B}_{0i}=B_{0i}-\mu_{1}\mathit{Id}\) (\(i=1,2\)). By the definition in the Appendix, \(\nu _{\alpha,\beta}^{f}(\bar{B}_{2})=\nu_{A}(B_{2})\), and
$$\begin{aligned} i_{A_{1}}({\tilde{B}}_{2})-i_{A_{1}}({\tilde{B}}_{1})&=\sum_{0\leq \lambda< 1}\nu_{A_{1}} \bigl((1-\lambda)\tilde{B}_{1}+\lambda\tilde {B}_{2}\bigr)= \sum_{0\leq\lambda< 1}\nu_{A}\bigl((1- \lambda)B_{1}+\lambda B_{2}\bigr)\\ & =i_{\alpha,\beta}^{f}( \bar{B}_{2})-i_{\alpha,\beta}^{f}(\bar{B}_{1}). \end{aligned} $$
Hence \(i_{\alpha,\beta}^{f}(\bar{B}_{2})=i_{\alpha,\beta}^{f}(\bar {B}_{1})\) implies \(i_{A}(B_{2})=i_{A}(B_{1})\), and the oddness of \(i_{\alpha,\beta}^{f}(\bar {B}_{01})-i_{\alpha,\beta}^{f}(\bar{B}_{1})\) means that \(i_{A}(B_{01})-i_{A}(B_{1})\) is odd. Therefore, to finish the proof we only need to show that (3.1)-(3.3) can be written in the form (1.5). Notice that (3.1) is equivalent to
$$x'(t)-J\mu_{1}x(t)=J\bigl(H'(t,x)- \mu_{1}x(t)\bigr)\equiv Jf_{1}(t). $$
Multiplying this equation by the integrating factor \(e^{-J\mu_{1}t}\) and integrating over \([0,t]\), we get
$$x(t)=e^{J\mu_{1}t}x(0)+ \int_{0}^{t}e^{J\mu_{1}(t-s)}Jf_{1}(s) \,ds. $$
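For clarity, the integration step can be made explicit: the integrating factor turns the left-hand side into an exact derivative,
$$\frac{d}{dt}\bigl(e^{-J\mu_{1}t}x(t)\bigr)=e^{-J\mu_{1}t}\bigl(\dot{x}(t)-J\mu_{1}x(t)\bigr)=e^{-J\mu_{1}t}Jf_{1}(t), $$
and integrating this identity over \([0,t]\) and multiplying the result by \(e^{J\mu_{1}t}\) gives the representation of \(x(t)\) above.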
Considering (3.2)-(3.3) yields
$$\begin{aligned} x(0)&=\frac{1}{\Delta_{1}}\left ( \begin{matrix} I_{n}\sin(\mu_{1}-\beta)& I_{n}\sin\alpha\\ I_{n}\cos(\mu_{1}-\beta) & -I_{n}\cos\alpha \end{matrix} \right ) \left ( \begin{matrix} M_{0} \\ M_{1} \end{matrix} \right ) \\ &\quad {}-\frac{1}{\Delta_{1}} \left ( \begin{matrix} I_{n}\sin\alpha\\ -I_{n}\cos\alpha \end{matrix} \right )\left ( \begin{matrix} I_{n}\cos\beta& I_{n}\sin\beta \end{matrix} \right ) \int_{0}^{1}e^{J\mu_{1}(1-s)}Jf_{1}(s) \,ds, \end{aligned}$$
where \(\Delta_{1}=\sin(\mu_{1}-\beta+\alpha)\). Then (3.1)-(3.3) is equivalent to
$$\begin{aligned}& x(t)= \int_{0}^{1}G_{1}(t,s)f_{1}(s) \,ds+M^{1}(x)=A_{1}^{-1}N_{1}(x)+M^{1}(x), \end{aligned}$$
(3.4)
where, for \(0\leq s\leq t\leq1\),
$$\begin{aligned}& G_{1}(t,s)=e^{J\mu_{1}(t-s)}J-\frac{1}{\Delta_{1}} \left ( \begin{matrix} I_{n}\sin\alpha\\ - I_{n}\cos\alpha \end{matrix} \right )\left ( \begin{matrix} I_{n}\cos\beta& I_{n}\sin\beta \end{matrix} \right )e^{J\mu_{1}(1-s)}J; \end{aligned}$$
for \(0\leq t\leq s\leq1\),
$$G_{1}(t,s)=-\frac{1}{\Delta_{1}} \left ( \begin{matrix} I_{n}\sin\alpha\\ -I_{n}\cos\alpha \end{matrix} \right )\left ( \begin{matrix} I_{n}\cos\beta& I_{n}\sin\beta \end{matrix} \right )e^{J\mu_{1}(1-s)}J; $$
and
$$\bigl(M^{1}x\bigr) (t)=\frac{1}{\Delta_{1}}\left ( \begin{matrix} I_{n}\sin(\mu_{1}-\beta-\mu_{1}t)& I_{n}\sin(\alpha+\mu_{1}t)\\ I_{n}\cos(\mu_{1}-\beta-\mu_{1}t) & -I_{n}\cos(\alpha+\mu_{1}t) \end{matrix} \right ) \left ( \begin{matrix} M_{0} \\ M_{1} \end{matrix} \right ). $$
It is easy to see that \(M^{1}(x)\) is a compact operator satisfying \(\Vert M^{1}(x) \Vert _{Y}\leq\rho\) for all \(x\in Y\) and some \(\rho>0\), and that (M1) implies (M). Hence Theorem 3.1 follows from Theorem 1.1. □
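The formula for \(x(0)\) above can also be verified numerically. The following is a minimal sanity check (not part of the proof) for \(n=1\) with a constant forcing \(f_{1}\equiv c\), so that the integral has the closed form \(\int_{0}^{1}e^{J\mu_{1}(1-s)}Jc\,ds=\frac{1}{\mu_{1}}(e^{J\mu_{1}}-I)c\); the values of \(\alpha\), \(\beta\), \(\mu_{1}\), \(M_{0}\), \(M_{1}\) and c are arbitrary test data, and NumPy is assumed available.

```python
import numpy as np

# Sanity check of the x(0) formula for n = 1 with constant forcing f1(s) = c.
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # standard symplectic matrix, n = 1

def R(theta):
    """Plane rotation e^{J*theta} (valid since J^2 = -I)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# e^{J*theta} = cos(theta) I + sin(theta) J, so R really is the exponential
assert np.allclose(R(0.5), np.cos(0.5) * np.eye(2) + np.sin(0.5) * J)

alpha, beta, mu1 = 0.3, 2.1, -1.7         # mu1 < 0, Delta1 != 0
M0, M1 = 0.4, -0.9                        # boundary data (constants here)
c = np.array([1.0, 2.0])                  # f1(s) = c

w = (R(mu1) - np.eye(2)) @ c / mu1        # int_0^1 e^{J mu1 (1-s)} J c ds

Delta1 = np.sin(mu1 - beta + alpha)
A = np.array([[np.sin(mu1 - beta),  np.sin(alpha)],
              [np.cos(mu1 - beta), -np.cos(alpha)]])
v = np.array([np.sin(alpha), -np.cos(alpha)])
row_beta = np.array([np.cos(beta), np.sin(beta)])

# x(0) as in the proof, then x(1) by variation of constants
x0 = (A @ np.array([M0, M1]) - v * (row_beta @ w)) / Delta1
x1 = R(mu1) @ x0 + w

# both boundary conditions (3.2) and (3.3) hold
assert abs(np.cos(alpha) * x0[0] + np.sin(alpha) * x0[1] - M0) < 1e-10
assert abs(np.cos(beta) * x1[0] + np.sin(beta) * x1[1] - M1) < 1e-10
```

With these test values \(\Delta_{1}=\sin(-3.5)\neq0\), so the linear system for \(x(0)\) is uniquely solvable and both boundary conditions are met to machine precision.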
As an application of Theorem 3.1 we investigate the following second-order Hamiltonian system:
$$\begin{aligned}& \ddot{x}+V'(t,x)=0, \end{aligned}$$
(3.5)
$$\begin{aligned}& x(0)\cos\alpha-x'(0)\sin\alpha=M_{0} \bigl(x(0),x(1),x'(0),x'(1)\bigr), \end{aligned}$$
(3.6)
$$\begin{aligned}& x(1)\cos\beta-x'(1)\sin\beta=M_{1}\bigl(x(0),x(1),x'(0),x'(1) \bigr), \end{aligned}$$
(3.7)
where \(V\in C^{1}([0,1]\times \mathbf {R}^{n},\mathbf {R})\), \(V'\) denotes the gradient of V with respect to x, \(\alpha\in[0,\pi)\), \(\beta\in(0,\pi ]\), \(M_{0},M_{1}:\mathbf {R}^{4n}\to \mathbf {R}^{n}\) are continuous and bounded. \(x:[0,1]\to \mathbf {R}^{n}\) is said to be a solution of (3.5)-(3.7) if \(x\in C^{2}([0,1],\mathbf {R}^{n})\) and \(x=x(t)\) satisfies (3.5)-(3.7).
Corollary 3.1
If V satisfies (V1) with \(i_{\alpha,\beta}^{s}(\bar{B}_{1})=i_{\alpha,\beta}^{s}(\bar{B}_{2})\) and \(\nu_{\alpha,\beta}^{s}(\bar{B}_{2})=0\), then (3.5)-(3.7) has one solution. Furthermore, if (V2) and (M1) hold, then (3.5)-(3.7) has one nontrivial solution provided \(i_{\alpha,\beta}^{s}(\bar{B}_{01})=i_{\alpha,\beta}^{s}(\bar{B}_{02})\), \(\nu_{\alpha,\beta}^{s}(\bar{B}_{02})=0\), and \(i_{\alpha,\beta}^{s}(\bar{B}_{01})-i_{\alpha,\beta}^{s}(\bar{B}_{1})\) is odd.
Proof
Define \(y=-\dot{x}\), \(z=(x,y)\), \(H(t,z)=\frac{1}{2} \vert y \vert ^{2}+V(t,x)\). Then (3.5)-(3.7) are equivalent to (3.1)-(3.3). If (V1) holds, then
$$H'(t,z)=\operatorname{diag}\bigl\{ \bar{B}(t,x),I_{n}\bigr\} z+ \bigl(h(t,x),0\bigr); $$
and if (V2) holds, then
$$H'(t,z)=\operatorname{diag}\bigl\{ \bar{B}_{0}(t,x),I_{n} \bigr\} z $$
for all \((t,z)\in[0,1]\times \mathbf {R}^{2n}\) with \(\vert z \vert \leq r\). By Proposition A.2, \(\nu_{\alpha,\beta}^{s}(\bar{B}_{0i})=\nu_{\alpha,\beta}^{f}(\operatorname{diag}\{\bar{B}_{0i},I_{n}\})\), \(\nu_{\alpha,\beta}^{s}(\bar{B}_{i})=\nu_{\alpha,\beta}^{f}(\operatorname{diag}\{\bar{B}_{i},I_{n}\})\), \(i_{\alpha,\beta}^{s}(\bar{B}_{0i})=i_{\alpha,\beta}^{f}(\operatorname{diag}\{\bar{B}_{0i},I_{n}\})\) and \(i_{\alpha,\beta}^{s}(\bar{B}_{i})=i_{\alpha,\beta}^{f}(\operatorname{diag}\{\bar{B}_{i},I_{n}\})\) (\(i=1,2\)). Hence, the results follow from Theorem 3.1. □
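For completeness, the equivalence used at the start of the proof can be checked directly, writing the standard symplectic matrix in \(n\times n\) block form: since \(H'(t,z)=(V'(t,x),y)\),
$$JH'(t,z)=\left ( \begin{matrix} 0&-I_{n}\\ I_{n}&0 \end{matrix} \right )\left ( \begin{matrix} V'(t,x)\\ y \end{matrix} \right )=\left ( \begin{matrix} -y\\ V'(t,x) \end{matrix} \right )=\left ( \begin{matrix} \dot{x}\\ \dot{y} \end{matrix} \right )=\dot{z}, $$
because \(\dot{x}=-y\) by the definition of y and \(\dot{y}=-\ddot{x}=V'(t,x)\) by (3.5). Likewise, substituting \(x_{1}=x\), \(x_{2}=y=-\dot{x}\) into (3.2)-(3.3) recovers (3.6)-(3.7).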
Remark
1. When \(\alpha=0\), \(\beta=\pi\), (3.6)-(3.7) reduce to (1.2)-(1.3), so that Corollary 3.1 contains Theorem 1.2 as a special case.
2. When \(M_{0}(\xi)=0\) and \(M_{1}(\xi)=0\) for \(\xi\in \mathbf {R}^{4n}\), the first part of Theorem 3.1 reduces to Theorem 3.4.3 in [17].
Next we discuss the problem
$$\begin{aligned}& \dot{x}=JH'(t,x), \\& x(1)-Px(0)=M_{2}\bigl(x(0),x(1)\bigr), \end{aligned}$$
(3.8)
where \(P\in S_{p}(\mathbf {R}^{2n})\), \(M_{2}:\mathbf {R}^{2n}\times \mathbf {R}^{2n}\to \mathbf {R}^{2n}\) is continuous and bounded. \(x:[0,1]\to \mathbf {R}^{2n}\) is said to be a solution of (3.1) and (3.8) if \(x\in C^{1}([0,1],\mathbf {R}^{2n})\) and \(x=x(t)\) satisfies (3.1) and (3.8). We will use the following assumption:
- (M2):
\(M_{2}(\xi)=o( \vert \xi \vert )\) as \(\vert \xi \vert \to0\).
Theorem 3.2
If H satisfies (H1) with \(i_{P}^{f}(\bar{B}_{1})=i_{P}^{f}(\bar{B}_{2})\) and \(\nu_{P}^{f}(\bar{B}_{2})=0\), then the problem (3.1) and (3.8) has one solution. Furthermore, if (H2) and (M2) hold, then the problem (3.1) and (3.8) has one nontrivial solution provided \(i_{P}^{f}(\bar{B}_{01})=i_{P}^{f}(\bar{B}_{02})\), \(\nu_{P}^{f}(\bar{B}_{02})=0\), and \(i_{P}^{f}(\bar{B}_{01})-i_{P}^{f}(\bar{B}_{1})\) is odd.
Proof
Let \(X=L^{2}([0,1],\mathbf {R}^{2n})\), \(Y=C([0,1],\mathbf {R}^{2n})\). Define \(D(A_{2})=\{x\in H^{1}([0,1],\mathbf {R}^{2n})\vert x(1)=Px(0)\}\) and \(A_{2}:D(A_{2})\subset Y\rightarrow X\) by \((A_{2}x)(t)=-J\dot {x}(t)-\mu_{2}x(t)\), where \(\mu_{2}<0\) is chosen such that the operator \(A_{2}\) is invertible, the matrix \(e^{J\mu_{2}}-P\) is invertible, \(\bar{B}_{1}(t)-\mu_{2}I_{2n}\geq I_{2n}\) and \(\bar{B}_{01}(t)-\mu_{2}I_{2n}\geq I_{2n}\). Then \(A_{2}\) is an unbounded self-adjoint and invertible operator in X with \(\sigma(A_{2})=\sigma_{d}(A_{2})\). Define \(N_{2}:Y\rightarrow Y\) by \((N_{2}x)(t)=H'(t,x(t))-\mu_{2} x(t)\equiv f_{2}(t)\).
Similar to the proof of Theorem 3.1, if \(x=x(t)\) is a solution of (3.1) and (3.8), then
$$x(t)=e^{J\mu_{2}t}x(0)+ \int_{0}^{t}e^{J\mu_{2}(t-s)}Jf_{2}(s) \,ds. $$
Considering the boundary value condition (3.8) yields
$$x(0)=\bigl(e^{J\mu_{2}}-P\bigr)^{-1}\biggl(M_{2}- \int_{0}^{1}e^{J\mu_{2}(1-s)}Jf_{2}(s) \,ds\biggr). $$
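In detail, substituting the representation \(x(1)=e^{J\mu_{2}}x(0)+\int_{0}^{1}e^{J\mu_{2}(1-s)}Jf_{2}(s)\,ds\) into (3.8) gives
$$\bigl(e^{J\mu_{2}}-P\bigr)x(0)=M_{2}\bigl(x(0),x(1)\bigr)- \int_{0}^{1}e^{J\mu_{2}(1-s)}Jf_{2}(s) \,ds, $$
and the invertibility of \(e^{J\mu_{2}}-P\), guaranteed by the choice of \(\mu_{2}\), yields the formula for \(x(0)\) above.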
Then the problem (3.1) and (3.8) is equivalent to
$$x(t)= \int_{0}^{1}G_{2}(t,s)f_{2}(s) \,ds+M^{2}(x)=A_{2}^{-1}N_{2}x+M^{2}(x), $$
where
$$G_{2}(t,s)=-e^{J\mu_{2}t}\bigl(e^{J\mu_{2}}-P \bigr)^{-1}e^{J\mu _{2}(1-s)}J+e^{J\mu_{2}(t-s)}J $$
for \(0\leq s\leq t\leq1\);
$$G_{2}(t,s)=-e^{J\mu_{2}t}\bigl(e^{J\mu_{2}}-P \bigr)^{-1}e^{J\mu_{2}(1-s)}J $$
for \(0\leq t\leq s\leq1\); and
$$\bigl(M^{2}x\bigr) (t)=e^{J\mu_{2}t}\bigl(e^{J\mu_{2}}-P \bigr)^{-1}M_{2}\bigl(x(0), x(1)\bigr). $$
\(M^{2}(x)\) is a compact operator and satisfies \(\Vert M^{2}(x) \Vert _{Y}\leq\rho\) for some \(\rho>0\). Hence (H1), (H2), (M2) imply (N1), (N2), (M), respectively, and Theorem 3.2 follows from Theorem 1.1. □
Remark
When \(M_{2}(\xi)=0\) for \(\xi\in \mathbf {R}^{4n}\), the first part of Theorem 3.2 reduces to Theorem 3.5.3 in [17].