First, let \(N\geq3\), \(\lambda_{1}=-1\), \(\lambda_{2}=-1\), \(0< p_{1}<\frac {4}{N}\), \(p_{2}=1+\frac{2+\alpha}{N}\), and \(0< M<\|Q\|_{L^{2}}^{2}\), where \(Q\) is the ground state solution of (1.5). We define the variational problem
$$ d_{M}:=\inf_{\{u\in H^{1};\|u\|_{L^{2}}^{2}=M\}}E(u), $$
(3.1)
where \(E(u)\) is the energy functional defined in (1.4). In the following theorem, we apply the profile decomposition of bounded sequences in \(H^{1}\) to solve the variational problem (3.1).
Theorem 3.1
Let \(N\geq3\), \(\lambda_{1}=-1\), \(\lambda_{2}=-1\), \(0< p_{1}<\frac{4}{N}\), \(p_{2}=1+\frac{2+\alpha}{N}\), and \(0< M<\|Q\|_{L^{2}}^{2}\), where \(Q\) is the ground state solution of (1.5). Then there exists \(u_{0}\in H^{1}\) such that \(d_{M}=E(u_{0})\).
Proof
First, we show that the variational problem (3.1) is well-defined and there exists \(C_{0}>0\) such that
$$ d_{M}\leq-C_{0}< 0. $$
(3.2)
Indeed, we deduce from (1.4), (2.2), and (2.7) that there exists a constant C such that
$$\begin{aligned} E(u)&:=\frac{1}{2} \int_{\mathbb{R}^{N}} \bigl\vert \nabla u(x) \bigr\vert ^{2} \,dx -\frac{1}{p_{1}+2} \int_{\mathbb{R}^{N}} \bigl\vert u(x) \bigr\vert ^{p_{1}+2}\,dx\\ &\quad{}- \frac{1}{2p_{2}} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \vert u \vert ^{p_{2}}\bigr) (x) \bigl\vert u(x) \bigr\vert ^{p_{2}}\,dx \\ &\geq \biggl(\frac{1}{2}-\frac{ \Vert u \Vert _{L^{2}}^{2p_{2}-2}}{2 \Vert Q \Vert _{L^{2}}^{2p_{2}-2}} \biggr) \Vert \nabla u \Vert _{L^{2}}^{2} -C \Vert u \Vert _{L^{2}}^{p_{1}+2-\frac{Np_{1}}{2}} \Vert \nabla u \Vert _{L^{2}}^{\frac{Np_{1}}{2}}. \end{aligned}$$
Since \(\frac{Np_{1}}{2}<2\), it follows from Young’s inequality that, for all \(0<\varepsilon<\frac{1}{2}\), there exists a constant \(C(\varepsilon ,M)\) such that
$$C \Vert u \Vert _{L^{2}}^{p_{1}+2-\frac{Np_{1}}{2}} \Vert \nabla u \Vert _{L^{2}}^{\frac {Np_{1}}{2}}\leq\varepsilon \Vert \nabla u \Vert _{L^{2}}^{2}+C(\varepsilon,M). $$
This implies that
$$ E(u)\geq \biggl(\frac{1}{2} \biggl(1-\frac{ \Vert u \Vert _{L^{2}}^{2p_{2}-2}}{ \Vert Q \Vert _{L^{2}}^{2p_{2}-2}} \biggr)-\varepsilon \biggr) \Vert \nabla u \Vert _{L^{2}}^{2} -C(\varepsilon,M). $$
(3.3)
Therefore we deduce from the hypothesis \(\|u\|_{L^{2}}^{2}=M<\|Q\|_{L^{2}}^{2}\) that \(E(u)\) has a lower bound and the variational problem (3.1) is well-defined.
Now, let \(u\in H^{1}\) be a fixed function, and let \(\mu>0\). Set \(u_{\mu}(x)=\mu^{\frac{N}{2}}u(\mu x)\). It follows easily that
$$\Vert u_{\mu} \Vert _{L^{2}}^{2}= \Vert u \Vert _{L^{2}}^{2}=M $$
and
$$\begin{aligned} E(u_{\mu})&=\frac{\mu^{2}}{2} \int_{\mathbb{R}^{N}} \bigl\vert \nabla u(x) \bigr\vert ^{2} \,dx -\frac{\mu^{\frac{Np_{1}}{2}}}{p_{1}+2} \int_{\mathbb{R}^{N}} \bigl\vert u(x) \bigr\vert ^{p_{1}+2}\,dx\\&\quad{}- \frac{\mu^{2}}{2p_{2}} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \vert u \vert ^{p_{2}}\bigr) (x) \bigl\vert u(x) \bigr\vert ^{p_{2}}\,dx \\ &= \mu^{2} \biggl(\frac{1}{2} \int_{\mathbb{R}^{N}} \bigl\vert \nabla u(x) \bigr\vert ^{2} \,dx-\frac {1}{2p_{2}} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \vert u \vert ^{p_{2}}\bigr) (x) \bigl\vert u(x) \bigr\vert ^{p_{2}}\,dx \biggr)\\&\quad{}-\frac{\mu^{\frac {Np_{1}}{2}}}{p_{1}+2} \int_{\mathbb{R}^{N}} \bigl\vert u(x) \bigr\vert ^{p_{1}+2}\,dx. \end{aligned}$$
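The factor \(\mu^{2}\) in front of the nonlocal term can be checked directly. Since \(\vert u_{\mu}\vert^{p_{2}}(y)=\mu^{\frac{Np_{2}}{2}}\vert u(\mu y)\vert^{p_{2}}\) and \((I_{\alpha}\ast f(\mu\cdot))(x)=\mu^{-\alpha}(I_{\alpha}\ast f)(\mu x)\), the change of variables \(x\mapsto\mu x\) gives
$$\int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \vert u_{\mu} \vert ^{p_{2}}\bigr) (x) \bigl\vert u_{\mu}(x) \bigr\vert ^{p_{2}}\,dx =\mu^{Np_{2}-\alpha-N} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \vert u \vert ^{p_{2}}\bigr) (x) \bigl\vert u(x) \bigr\vert ^{p_{2}}\,dx, $$
and \(Np_{2}-\alpha-N=(N+2+\alpha)-\alpha-N=2\) precisely because \(p_{2}=1+\frac{2+\alpha}{N}\) is the \(L^{2}\)-critical exponent.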
On the other hand, by the sharp Gagliardo–Nirenberg inequality (2.2) and the bound \(\|u\| _{L^{2}}^{2}=M<\|Q\|_{L^{2}}^{2}\), there exists \(C_{1}>0\) such that
$$\frac{1}{2} \int_{\mathbb{R}^{N}} \bigl\vert \nabla u(x) \bigr\vert ^{2} \,dx-\frac{1}{2p_{2}} \int _{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \vert u \vert ^{p_{2}}\bigr) (x) \bigl\vert u(x) \bigr\vert ^{p_{2}}\,dx\geq C_{1}>0. $$
Since \(\frac{Np_{1}}{2}<2\), choosing \(\mu>0\) sufficiently small, there exists \(C_{0}>0\) such that \(E(u_{\mu})\leq-C_{0}<0\). Hence (3.2) holds.
Second, let \(\{u_{n}\}_{n=1}^{\infty}\) be a minimizing sequence of the variational problem (3.1) such that
$$ E(u_{n})\rightarrow d_{M}\quad\mbox{and} \quad \Vert u_{n} \Vert _{L^{2}}^{2}=M. $$
(3.4)
This implies that, for \(n\) large enough, \(E(u_{n})< d_{M}+1\). Thus, since \(\|u_{n}\|_{L^{2}}^{2}=M\), for all \(0<\varepsilon<\frac{1}{2} (1-\frac{M^{p_{2}-1}}{\|Q\| _{L^{2}}^{2p_{2}-2}} )\), we have
$$\biggl(\frac{1}{2} \biggl(1-\frac{M^{p_{2}-1}}{ \Vert Q \Vert _{L^{2}}^{2p_{2}-2}} \biggr)-\varepsilon \biggr) \Vert \nabla u_{n} \Vert _{L^{2}}^{2}\leq d_{M}+1+C(\varepsilon,M). $$
This yields that \(\{u_{n}\}_{n=1}^{\infty}\) is bounded in \(H^{1}\).
Third, applying the profile decomposition of bounded sequences in \(H^{1}\), we show that the infimum in the variational problem (3.1) is attained. Applying Lemma 2.4 to the minimizing sequence \(\{u_{n}\} _{n=1}^{\infty}\), up to a subsequence, we can decompose
$$ u_{n}(x)=\sum_{j=1}^{l}U^{j} \bigl(x-x_{n}^{j}\bigr)+r_{n}^{l}, $$
(3.5)
with \(\limsup_{n\rightarrow\infty}\|r_{n}^{l}\|_{L^{q}}\rightarrow0\) as \(l\rightarrow\infty\) for every \(q \in(2,\frac{2N}{N-2})\).
Now, substituting (3.5) into the energy functional \(E(u_{n})\), it follows from (2.4)–(2.6) that
$$ E(u_{n})=\sum_{j=1}^{l}E \bigl(U^{j}\bigr)+E\bigl(r_{n}^{l}\bigr)+o(1) $$
(3.6)
as \(n\rightarrow\infty\) and \(l\rightarrow\infty\). For every \(U^{j}\) (\(1\leq j\leq l\)), consider the rescaling \(U^{j}_{\mu _{j}}=\mu_{j}U^{j}\) with \(\mu_{j}=\frac{\sqrt{M}}{\|U^{j}\|_{L^{2}}}\). It follows easily that
$$ \bigl\Vert U^{j}_{\mu_{j}} \bigr\Vert _{L^{2}}^{2}=M $$
(3.7)
and
$$\begin{aligned} E\bigl(U^{j}_{\mu_{j}}\bigr)&=\frac{\mu_{j}^{2}}{2} \int_{\mathbb{R}^{N}} \bigl\vert \nabla U^{j}(x) \bigr\vert ^{2}\,dx-\frac{\mu_{j}^{p_{1}+2}}{p_{1}+2} \int_{\mathbb {R}^{N}} \bigl\vert U^{j}(x) \bigr\vert ^{p_{1}+2}\,dx \\ &\quad{}-\frac{\mu_{j}^{2p_{2}}}{2p_{2}} \int _{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \bigl\vert U^{j} \bigr\vert ^{p_{2}}\bigr) (x) \bigl\vert U^{j}(x) \bigr\vert ^{p_{2}}\,dx \\ &=\mu _{j}^{2}E\bigl(U^{j}\bigr) - \frac{\mu_{j}^{2}(\mu_{j}^{p_{1}}-1)}{p_{1}+2} \int_{\mathbb {R}^{N}} \bigl\vert U^{j}(x) \bigr\vert ^{p_{1}+2}\,dx \\ &\quad{}-\frac{\mu_{j}^{2}(\mu _{j}^{2p_{2}-2}-1)}{2p_{2}} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \bigl\vert U^{j} \bigr\vert ^{p_{2}}\bigr) (x) \bigl\vert U^{j}(x) \bigr\vert ^{p_{2}}\,dx. \end{aligned}$$
This yields
$$ \begin{aligned}[b] E\bigl(U^{j}\bigr)&=\frac{E(U^{j}_{\mu_{j}})}{\mu_{j}^{2}} + \frac{(\mu_{j}^{p_{1}}-1)}{p_{1}+2} \int_{\mathbb {R}^{N}} \bigl\vert U^{j}(x) \bigr\vert ^{p_{1}+2}\,dx\\&\quad{}+\frac{\mu_{j}^{2p_{2}-2}-1}{2p_{2}} \int_{\mathbb {R}^{N}} \bigl(I_{\alpha}\ast \bigl\vert U^{j} \bigr\vert ^{p_{2}}\bigr) (x) \bigl\vert U^{j}(x) \bigr\vert ^{p_{2}}\,dx. \end{aligned} $$
(3.8)
Similarly, for the term \(E(r_{n}^{l})\), we obtain
$$\begin{aligned} E\bigl(r_{n}^{l}\bigr)&=\frac{ \Vert r_{n}^{l} \Vert _{L^{2}}^{2}}{M}E \biggl( \frac{\sqrt{M}}{ \Vert r_{n}^{l} \Vert _{L^{2}}} r_{n}^{l} \biggr)+\frac{((\frac{\sqrt{M}}{ \Vert r_{n}^{l} \Vert _{L^{2}}})^{p_{1}}-1)}{p_{1}+2} \int_{\mathbb{R}^{N}} \bigl\vert r_{n}^{l}(x) \bigr\vert ^{p_{1}+2}\,dx \\ &\quad{}+\frac{(\frac{\sqrt{M}}{ \Vert r_{n}^{l} \Vert _{L^{2}}})^{2p_{2}-2}-1}{2p_{2}} \int _{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \bigl\vert r_{n}^{l} \bigr\vert ^{p_{2}}\bigr) (x) \bigl\vert r_{n}^{l}(x) \bigr\vert ^{p_{2}}\,dx+o(1) \\ &\geq \frac{ \Vert r_{n}^{l} \Vert _{L^{2}}^{2}}{M}E \biggl(\frac{\sqrt{M}}{ \Vert r_{n}^{l} \Vert _{L^{2}}} r_{n}^{l} \biggr)+o(1). \end{aligned}$$
(3.9)
Since \(\|U^{j}_{\mu_{j}}\|_{L^{2}}^{2}=\|\frac{\sqrt{M}}{\|r_{n}^{l}\|_{L^{2}}}r_{n}^{l}\| _{L^{2}}^{2}=M\), we deduce from the definition of \(d_{M}\) that
$$ E\bigl(U^{j}_{\mu_{j}}\bigr)\geq d_{M} \quad\mbox{and} \quad E \biggl(\frac{\sqrt{M}}{ \Vert r_{n}^{l} \Vert _{L^{2}}}r_{n}^{l} \biggr)\geq d_{M}. $$
(3.10)
Thus we infer from (3.6), (3.8), and (3.9) that
$$\begin{aligned} E(u_{n})&\geq\sum_{j=1}^{l} \biggl(\frac{E(U^{j}_{\mu_{j}})}{\mu_{j}^{2}} +\frac{(\mu_{j}^{p_{1}}-1)}{p_{1}+2} \int_{\mathbb{R}^{N}} \bigl\vert U^{j}(x) \bigr\vert ^{p_{1}+2}\,dx \\&\quad{}+\frac {\mu_{j}^{2p_{2}-2}-1}{2p_{2}} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \bigl\vert U^{j} \bigr\vert ^{p_{2}}\bigr) (x) \bigl\vert U^{j}(x) \bigr\vert ^{p_{2}}\,dx \biggr) \\ &\quad{}+\frac{ \Vert r_{n}^{l} \Vert _{L^{2}}^{2}}{M}E \biggl(\frac{\sqrt{M}}{ \Vert r_{n}^{l} \Vert _{L^{2}}} r_{n}^{l} \biggr)+o(1) \\ &\geq\sum_{j=1}^{l}\frac{ \Vert U^{j} \Vert _{L^{2}}^{2}}{M}d_{M} +\inf_{j\geq1}\frac{(\mu_{j}^{p_{1}}-1)}{p_{1}+2} \Biggl(\sum _{j=1}^{l} \int_{\mathbb{R}^{N}} \bigl\vert U^{j}(x) \bigr\vert ^{p_{1}+2}\,dx \Biggr) \\ &\quad{}+\inf_{j\geq1}\frac{\mu_{j}^{2p_{2}-2}-1}{2p_{2}} \Biggl(\sum _{j=1}^{l} \int_{\mathbb{R}^{N}} \bigl(I_{\alpha}\ast \bigl\vert U^{j} \bigr\vert ^{p_{2}}\bigr) (x) \bigl\vert U^{j}(x) \bigr\vert ^{p_{2}}\,dx \Biggr)+\frac{ \Vert r_{n}^{l} \Vert _{L^{2}}^{2}}{M}d_{M}+o(1) \\ &\geq\sum_{j=1}^{l}\frac{ \Vert U^{j} \Vert _{L^{2}}^{2}}{M}d_{M} +\frac{ \Vert r_{n}^{l} \Vert _{L^{2}}^{2}}{M}d_{M} \\ &\quad{}+\inf_{j\geq1}\bigl(\mu_{j}^{a_{0}}-1 \bigr) \biggl(\frac{1}{2p_{2}} \int_{\mathbb {R}^{N}} \bigl(I_{\alpha}\ast \vert u_{n} \vert ^{p_{2}}\bigr) (x) \bigl\vert u_{n}(x) \bigr\vert ^{p_{2}}\,dx \\&\quad{}+\frac {1}{p_{1}+2} \int_{\mathbb{R}^{N}} \bigl\vert u_{n}(x) \bigr\vert ^{p_{1}+2}\,dx \biggr) +o(1), \end{aligned}$$
(3.11)
where \(a_{0}=\min\{2p_{2}-2,p_{1}\}\).
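Two elementary observations justify the last inequality in (3.11). First, \(\|U^{j}\|_{L^{2}}^{2}\leq M\) for every \(j\) (by the \(L^{2}\) decomposition property in (2.4)), so \(\mu_{j}=\frac{\sqrt{M}}{\|U^{j}\|_{L^{2}}}\geq1\). Second, for \(\mu\geq1\) and \(a_{0}=\min\{2p_{2}-2,p_{1}\}\),
$$0\leq\mu^{a_{0}}-1\leq\mu^{p_{1}}-1 \quad\mbox{and}\quad 0\leq\mu^{a_{0}}-1\leq\mu^{2p_{2}-2}-1, $$
so both coefficients can be bounded below by \(\inf_{j\geq1}(\mu_{j}^{a_{0}}-1)\geq0\), whereas by (2.5)–(2.6) the sums of the profile integrals equal the corresponding integrals of \(u_{n}\) up to \(o(1)\).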
Note that since the series \(\sum_{j=1}^{\infty}\|U^{j}\|_{L^{2}}^{2}\leq M\) is convergent, we have \(\|U^{j}\|_{L^{2}}\rightarrow0\) and hence \(\mu_{j}=\frac{\sqrt{M}}{\|U^{j}\|_{L^{2}}}\rightarrow\infty\) as \(j\rightarrow\infty\). Therefore the infimum is attained: there exists \(j_{0}\geq1\) such that
$$ \inf_{j\geq1}\mu_{j}=\mu_{j_{0}}= \frac{\sqrt{M}}{ \Vert U^{j_{0}} \Vert _{L^{2}}}. $$
(3.12)
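Moreover, assuming, as is standard for the profile decomposition of Lemma 2.4, that (2.4) contains the \(L^{2}\) almost-orthogonality identity, we have
$$\sum_{j=1}^{l} \bigl\Vert U^{j} \bigr\Vert _{L^{2}}^{2}+\lim_{n\rightarrow\infty} \bigl\Vert r_{n}^{l} \bigr\Vert _{L^{2}}^{2} =\lim_{n\rightarrow\infty} \Vert u_{n} \Vert _{L^{2}}^{2}=M, $$
so the two \(d_{M}\)-terms on the right-hand side of (3.11) combine to \(d_{M}\) in the limit.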
Letting \(n\rightarrow\infty\) and \(l\rightarrow\infty\) in (3.11), there exists \(C>0\) such that
$$d_{M}\geq d_{M}+C \biggl( \biggl(\frac{\sqrt{M}}{ \Vert U^{j_{0}} \Vert _{L^{2}}} \biggr)^{a_{0}}-1 \biggr), $$
which implies
$$\bigl\Vert U^{j_{0}} \bigr\Vert _{L^{2}}^{2}\geq M. $$
Hence \(\|U^{j_{0}}\|_{L^{2}}^{2}=M\), and \(U^{j_{0}}\) is the only nonzero profile in the decomposition (3.5). Moreover, we deduce from (2.4)–(2.6) that \(E(U^{j_{0}})=d_{M}\). This implies that the infimum in the variational problem (3.1) is attained at \(U^{j_{0}}\). This completes the proof. □
Now, define
$$ S_{M}:=\bigl\{ u\in H^{1}; u \mbox{ is a minimizer of the variational problem }(3.1)\bigr\} . $$
(3.13)
Then, for any \(u\in S_{M}\), we deduce from the Lagrange multiplier theorem that there exists \(\omega\in\mathbb{R}\) such that
$$ -\Delta u+\omega u- \vert u \vert ^{p_{1}}u- \bigl(I_{\alpha}\ast \vert u \vert ^{p_{2}}\bigr) \vert u \vert ^{p_{2}-2}u=0. $$
(3.14)
In addition, if \(u\in S_{M}\), then \(u\) is a solution of (3.14), and \(\psi(t,x)=e^{i\omega t}u(x)\) is a standing wave solution of (1.1); the curve \(t\mapsto e^{i\omega t}u\) traces the orbit of \(u\). On the other hand, for any \(t\geq0\), if \(u\) is a minimizer of (3.1), then \(e^{i\omega t}u\) is also a minimizer of (3.1), that is, \(e^{i\omega t}u\in S_{M}\). Applying Theorem 3.1 and the method of Cazenave and Lions [17], we show that if the initial datum is close to the set \(S_{M}\), then the corresponding solution of (1.1) remains close to \(S_{M}\) for all time.
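The invariance \(e^{i\omega t}u\in S_{M}\) is immediate: for fixed \(t\), the phase \(e^{i\omega t}\) is a constant of modulus one, so
$$\bigl\Vert e^{i\omega t}u \bigr\Vert _{L^{2}}^{2}= \Vert u \Vert _{L^{2}}^{2}=M \quad\mbox{and}\quad E\bigl(e^{i\omega t}u\bigr)=E(u)=d_{M}, $$
since every term of the energy (1.4) depends only on \(\vert e^{i\omega t}u\vert =\vert u\vert\) and \(\vert \nabla (e^{i\omega t}u)\vert =\vert \nabla u\vert\).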
Theorem 3.2
Let \(N\geq3\), \(\lambda_{1}=-1\), \(\lambda_{2}=-1\), \(0< p_{1}<\frac{4}{N}\), \(p_{2}=1+\frac{2+\alpha}{N}\), and \(0< M<\|Q\|_{L^{2}}^{2}\). If Assumption 1 holds, then for every \(\varepsilon>0\), there exists \(\delta>0\) such that, for any \(\psi_{0}\in H^{1}\) satisfying
$$\inf_{u\in S_{M}} \Vert \psi_{0}-u \Vert _{H^{1}}< \delta, $$
the corresponding solution \(\psi\) of (1.1) satisfies
$$\inf_{u\in S_{M}} \bigl\Vert \psi(t)-u \bigr\Vert _{H^{1}}< \varepsilon $$
for all \(t>0\).
Proof
First, by Theorem 2.5 the solution \(\psi\) of (1.1) exists globally. Assume by contradiction that there exist \(\varepsilon_{0}>0\) and a sequence \(\{\psi_{0,n}\}_{n=1}^{\infty}\) such that
$$ \inf_{u\in S_{M}} \Vert \psi_{0,n}-u \Vert _{H^{1}}< \frac{1}{n} $$
(3.15)
and there exists a sequence \(\{t_{n}\}_{n=1}^{\infty}\) such that the corresponding solution sequence \(\{\psi_{n}(t_{n})\}_{n=1}^{\infty}\) of (1.1) satisfies
$$ \inf_{u\in S_{M}} \bigl\Vert \psi_{n}(t_{n})-u \bigr\Vert _{H^{1}}\geq\varepsilon_{0}. $$
(3.16)
From (3.15), the conservation laws, and the fact that every \(u\in S_{M}\) satisfies \(\|u\|_{L^{2}}^{2}=M\) and \(E(u)=d_{M}\), it follows that, as \(n\rightarrow\infty\),
$$\int_{\mathbb{R}^{N}} \bigl\vert \psi_{n}(t_{n},x) \bigr\vert ^{2}\,dx= \int_{\mathbb{R}^{N}} \bigl\vert \psi _{0,n}(x) \bigr\vert ^{2}\,dx\rightarrow \int_{\mathbb{R}^{N}} \bigl\vert u(x) \bigr\vert ^{2}\,dx=M $$
and
$$E\bigl(\psi_{n}(t_{n})\bigr)=E(\psi_{0,n}) \rightarrow E(u)=d_{M}. $$
Hence \(\{\psi_{n}(t_{n})\}_{n=1}^{\infty}\) is a minimizing sequence of the variational problem (3.1). We deduce from Theorem 3.1 that there exists a minimizer \(v\in S_{M}\) such that
$$ \bigl\Vert \psi_{n}(t_{n})-v \bigr\Vert _{H^{1}}\rightarrow0, \quad n\rightarrow\infty, $$
(3.17)
which contradicts (3.16). This completes the proof. □
Proof of Theorem 1.1
Let \(\psi_{0}\in H^{1}\) and \(0< M<\|Q\|_{L^{2}}^{2}\), where \(Q\) is a ground state of (1.5). Then it follows from Theorem 3.1 that the variational problem (3.1) admits minimizers, and these minimizers correspond to standing waves of (1.1). Therefore we obtain the existence of standing waves of (1.1). In addition, we deduce from Theorem 3.2 and the definition of orbital stability (see [1]) that the standing waves of (1.1) are orbitally stable. □