
Existence of positive solutions for a critical nonlinear Schrödinger equation with vanishing or coercive potentials

Abstract

In this paper we investigate the existence of positive solutions for the following nonlinear Schrödinger equation:

$$-\Delta u + V(x)u = K(x)|u|^{p-2}u \quad \text{in } \mathbb{R}^N,$$

where $V(x)\sim a|x|^{-b}$ and $K(x)\sim\mu|x|^{-s}$ as $|x|\to\infty$, with $0<a,\mu<+\infty$, $b<2$, $b\neq 0$, $0<s/b<1$ and $p=2(N-2s/b)/(N-2)$.

MSC: 35J20, 35J60.

1 Introduction and statement of results

In this paper, we consider the following semilinear elliptic equation:

$$-\Delta u + V(x)u = K(x)|u|^{p-2}u \quad \text{in } \mathbb{R}^N,$$
(1.1)

where $N\geq 3$. The exponent

$$p = 2\bigl(N - 2s/b\bigr)/(N-2)$$
(1.2)

with the real numbers $b$ and $s$ satisfying

$$b<2,\qquad b\neq 0,\qquad 0<\frac{s}{b}<1.$$
(1.3)

By this definition, $2<p<2^*:=2N/(N-2)$.
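The constraint $2<p<2^*$ follows from $0<2s/b<2$, which gives $N-2<N-2s/b<N$. As a quick numerical illustration (not part of the original argument; the function name and sample triples are ours), the following snippet evaluates (1.2) for data satisfying (1.3):

```python
def critical_p(N, b, s):
    # p = 2(N - 2s/b)/(N - 2) from (1.2); requires N >= 3, b < 2, b != 0, 0 < s/b < 1
    return 2.0 * (N - 2.0 * s / b) / (N - 2.0)

# sample data satisfying (1.3): vanishing cases (0 < b < 2) and one coercive case (b < 0)
for N, b, s in [(3, 1.0, 0.5), (4, -2.0, -1.0), (5, 1.5, 0.3)]:
    p = critical_p(N, b, s)
    assert 2.0 < p < 2.0 * N / (N - 2.0)   # 2 < p < 2* holds under (1.3)
```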

With respect to the functions $V$ and $K$, we assume that

(A1) $V,K\in C(\mathbb{R}^N)$ and, for every $x\in\mathbb{R}^N$, $V(x)>0$ and $K(x)>0$.

(A2) There exist $0<a<\infty$ and $0<\mu<\infty$ such that

$$\lim_{|x|\to\infty}|x|^{b}V(x)=a \quad\text{and}\quad \lim_{|x|\to\infty}|x|^{s}K(x)=\mu.$$
(1.4)

A typical example of Eq. (1.1) with $V$ and $K$ satisfying (A1) and (A2) is the equation

$$-\Delta u + \frac{a}{(1+|x|)^{b}}\,u = \frac{\mu}{(1+|x|)^{s}}\,|u|^{p-2}u \quad \text{in } \mathbb{R}^N.$$
(1.5)

When $0<b<2$, the potentials vanish at infinity, and when $b<0$, the potentials are coercive.

Equation (1.1) arises in various applications, such as chemotaxis, population genetics, chemical reactor theory and the study of standing wave solutions of certain nonlinear Schrödinger equations. Therefore, such equations have received growing attention in recent years (see, e.g., [1–6] and [7–10]).

Under the above assumptions, Eq. (1.1) has a natural variational structure. For an open subset $\Omega$ of $\mathbb{R}^N$, let $C_0^\infty(\Omega)$ be the collection of smooth functions with compact support in $\Omega$. Let $E$ be the completion of $C_0^\infty(\mathbb{R}^N)$ with respect to the inner product

$$(u,v)_E=\int_{\mathbb{R}^N}\nabla u\cdot\nabla v\,dx+\int_{\mathbb{R}^N}V(x)uv\,dx.$$

From assumptions (A1) and (A2), we deduce that

$$\Bigl(\int_{\mathbb{R}^N}\frac{|u|^2}{(1+|x|)^{b}}\,dx\Bigr)^{1/2} \quad\text{and}\quad \Bigl(\int_{\mathbb{R}^N}V(x)|u|^2\,dx\Bigr)^{1/2}$$

are two equivalent norms in the space

$$L^2_V(\mathbb{R}^N)=\Bigl\{u \text{ measurable on } \mathbb{R}^N \Bigm| \int_{\mathbb{R}^N}V(x)|u|^2\,dx<+\infty\Bigr\}.$$

Therefore, there exists $B_1>0$ such that

$$\Bigl(\int_{\mathbb{R}^N}\frac{|u|^2}{(1+|x|)^{b}}\,dx\Bigr)^{1/2}\le B_1\Bigl(\int_{\mathbb{R}^N}V(x)|u|^2\,dx\Bigr)^{1/2}.$$

Moreover, assumptions (A1) and (A2) imply that there exists $B_2>0$ such that

$$K(x)\le \frac{B_2}{(1+|x|)^{s}},\quad x\in\mathbb{R}^N.$$

Then, by the Hölder and Sobolev inequalities (see, e.g., [11, Theorem 1.8]), we have, for every $u\in C_0^\infty(\mathbb{R}^N)$,

$$\begin{aligned}
\Bigl(\int_{\mathbb{R}^N}K(x)|u|^{p}\,dx\Bigr)^{\frac1p}
&\le C\Bigl(\int_{\mathbb{R}^N}\frac{|u|^{p}}{(1+|x|)^{s}}\,dx\Bigr)^{\frac1p}
= C\Bigl(\int_{\mathbb{R}^N}\frac{|u|^{\frac{2s}{b}}}{(1+|x|)^{s}}\,|u|^{p-\frac{2s}{b}}\,dx\Bigr)^{\frac1p}\\
&\le C\Bigl(\int_{\mathbb{R}^N}\frac{|u|^{2}}{(1+|x|)^{b}}\,dx\Bigr)^{\frac{s}{pb}}\Bigl(\int_{\mathbb{R}^N}|u|^{2^*}\,dx\Bigr)^{\frac1p(1-\frac sb)}\\
&\le C\Bigl(\int_{\mathbb{R}^N}\frac{|u|^{2}}{(1+|x|)^{b}}\,dx\Bigr)^{\frac{s}{pb}}\Bigl(\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx\Bigr)^{\frac{2^*}{2p}(1-\frac sb)}\\
&= C\Bigl(\int_{\mathbb{R}^N}\frac{|u|^{2}}{(1+|x|)^{b}}\,dx\Bigr)^{\frac12\cdot\frac{2s}{pb}}\Bigl(\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx\Bigr)^{\frac12(1-\frac{2s}{pb})}\\
&\le C\Bigl(\int_{\mathbb{R}^N}V(x)|u|^{2}\,dx\Bigr)^{\frac12\cdot\frac{2s}{pb}}\Bigl(\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx\Bigr)^{\frac12(1-\frac{2s}{pb})},
\end{aligned}$$

where $C>0$ is a constant independent of $u$. It follows that there exists a constant $C'>0$ such that

$$\Bigl(\int_{\mathbb{R}^N}K(x)|u|^{p}\,dx\Bigr)^{1/p}\le C'\Bigl(\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx\Bigr)^{1/2}+C'\Bigl(\int_{\mathbb{R}^N}V(x)|u|^{2}\,dx\Bigr)^{1/2}.$$

This implies that $E$ can be embedded continuously into the weighted $L^p$-space

$$L^p_K(\mathbb{R}^N)=\Bigl\{u \text{ measurable on } \mathbb{R}^N \Bigm| \int_{\mathbb{R}^N}K(x)|u|^{p}\,dx<+\infty\Bigr\}.$$

Then the functional

$$\Phi(u)=\frac12\|u\|_E^2-\frac1p\int_{\mathbb{R}^N}K(x)|u|^{p}\,dx,\quad u\in E,$$

is well defined on $E$. It is easy to check that $\Phi$ is a $C^2$ functional and that the critical points of $\Phi$ are solutions of (1.1) in $E$.

In a recent paper [12], Alves and Souto proved that the space $E$ can be embedded compactly into $L^p_K(\mathbb{R}^N)$ if $0<b<2$ and $2(N-2s/b)/(N-2)<p<2^*$, and that consequently $\Phi$ satisfies the Palais–Smale condition. Then, by using the mountain pass theorem, they obtained a nontrivial solution of Eq. (1.1). Unfortunately, when $p=2(N-2s/b)/(N-2)$, the embedding of $E$ into $L^p_K(\mathbb{R}^N)$ is not compact and $\Phi$ no longer satisfies the Palais–Smale condition. Therefore, the ‘standard’ variational methods fail in this case. From this point of view, $p=2(N-2s/b)/(N-2)$ should be seen as a kind of critical exponent for Eq. (1.1). If the potentials $V$ and $K$ are restricted to the class of radially symmetric functions, ‘compactness’ of this kind is regained and ‘standard’ variational approaches work (see [5] and [6]). However, this method does not seem to apply to the more general equation (1.1), where $K$ and $V$ are not radially symmetric.

It is not easy to deal with Eq. (1.1) directly, since no known approach overcomes the loss of compactness here. However, in this paper, through an interesting transformation, we find an equivalent equation for Eq. (1.1) (see Eq. (2.9) in Section 2). This equation has the advantages that its Palais–Smale sequences can be characterized precisely through the concentration-compactness principle (see Theorem 5.1) and that it possesses partial compactness (see Corollary 5.8). By means of these advantages, a positive solution of this equivalent equation, and then a corresponding positive solution of Eq. (1.1), are obtained.

Before stating our main result, we need to give some definitions.

Let

$$V^*(x)=|x|^{\frac{2b}{2-b}}\,V\bigl(|x|^{\frac{b}{2-b}}x\bigr)+C_b|x|^{-2},$$
(1.6)

where

$$C_b=\frac{b}{4}\Bigl(1-\frac{b}{4}\Bigr)(N-2)^2$$
(1.7)

and

$$K^*(x)=|x|^{\frac{2s}{2-b}}\,K\bigl(|x|^{\frac{b}{2-b}}x\bigr).$$
(1.8)
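As a hedged numerical sketch (the radial model and the sample constants $a$, $\mu$, $b$, $s$ below are ours), one can check for the model potentials of (1.5) that $V^*$ and $K^*$ defined by (1.6)–(1.8) tend to $a$ and $\mu$ at infinity, in line with assumption (A2):

```python
N, a, mu = 3, 2.0, 1.5          # sample data (not from the paper)
b, s = 1.0, 0.5                 # vanishing case: 0 < b < 2, 0 < s/b < 1
C_b = (b / 4) * (1 - b / 4) * (N - 2) ** 2   # definition (1.7)

V = lambda t: a / (1 + t) ** b               # radial model potential from (1.5)
K = lambda t: mu / (1 + t) ** s
# radial forms of (1.6) and (1.8): the point |x|^{b/(2-b)} x has modulus R^{2/(2-b)} when R = |x|
Vstar = lambda R: R ** (2 * b / (2 - b)) * V(R ** (2 / (2 - b))) + C_b / R ** 2
Kstar = lambda R: R ** (2 * s / (2 - b)) * K(R ** (2 / (2 - b)))

R = 1e8
assert abs(Vstar(R) - a) < 1e-3 and abs(Kstar(R) - mu) < 1e-3
```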

Let $H^1(\mathbb{R}^N)$ be the Sobolev space endowed with the norm and the inner product

$$\|u\|=\Bigl(\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\int_{\mathbb{R}^N}u^2\,dx\Bigr)^{1/2}\quad\text{and}\quad (u,v)=\int_{\mathbb{R}^N}(\nabla u\cdot\nabla v+uv)\,dx,$$

respectively, and let $L^p(\mathbb{R}^N)$ be the space of $p$-integrable functions on $\mathbb{R}^N$. Since $2<p<2^*$, $H^1(\mathbb{R}^N)$ can be embedded continuously into $L^p(\mathbb{R}^N)$. Therefore, the infimum

$$\inf_{v\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\int_{\mathbb{R}^N}|\nabla v|^2\,dx+a\int_{\mathbb{R}^N}v^2\,dx}{\bigl(\int_{\mathbb{R}^N}|v|^p\,dx\bigr)^{2/p}}>0.$$
(1.9)

We denote this infimum by $S_p$.

Our main result reads as follows.

Theorem 1.1 Under assumptions (A1) and (A2), if $b$, $s$ and $p$ satisfy (1.3) and (1.2) and

$$\inf_{u\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\bigl(\frac{b^2}{4}-b\bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx}{\bigl(\int_{\mathbb{R}^N}K^*(x)|u|^p\,dx\bigr)^{2/p}}<(1-b/2)^{\frac{p-2}{p}}\,\mu^{-\frac{2}{p}}\,S_p,$$
(1.10)

then Eq. (1.1) has a positive solution $u\in E$.

Remark 1.2 We should emphasize that condition (1.10) can be satisfied in many situations. For $r>0$, let $R_r=\{x\in\mathbb{R}^N : r/2<|x|<r\}$ and let $H_0^1(R_r)$ be the closure of $C_0^\infty(R_r)$ in $H^1(\mathbb{R}^N)$. Under assumptions (A1) and (A2), we have

$$\inf_{u\in H_0^1(R_r)\setminus\{0\}}\frac{\int_{R_r}|\nabla u|^2\,dx}{\bigl(\int_{R_r}K^*(x)|u|^p\,dx\bigr)^{2/p}}\to 0,\quad\text{as } r\to+\infty.$$

Then, for any $\epsilon>0$, there exist $r_\epsilon>0$ and $u_\epsilon\in H_0^1(R_{r_\epsilon})\setminus\{0\}$ such that

$$\frac{\int_{R_{r_\epsilon}}|\nabla u_\epsilon|^2\,dx}{\bigl(\int_{R_{r_\epsilon}}K^*(x)|u_\epsilon|^p\,dx\bigr)^{2/p}}<\epsilon.$$

It follows from this inequality and $\int_{R_{r_\epsilon}}\frac{|x\cdot\nabla u_\epsilon|^2}{|x|^2}\,dx\le\int_{R_{r_\epsilon}}|\nabla u_\epsilon|^2\,dx$ that if $\sup_{R_{r_\epsilon}}V^*$ is small enough that

$$\frac{\int_{R_{r_\epsilon}}V^*(x)|u_\epsilon|^2\,dx}{\bigl(\int_{R_{r_\epsilon}}K^*(x)|u_\epsilon|^p\,dx\bigr)^{2/p}}<\epsilon,$$

then

$$\frac{\int_{R_{r_\epsilon}}|\nabla u_\epsilon|^2\,dx+\bigl(\frac{b^2}{4}-b\bigr)\int_{R_{r_\epsilon}}\frac{|x\cdot\nabla u_\epsilon|^2}{|x|^2}\,dx+\int_{R_{r_\epsilon}}V^*(x)|u_\epsilon|^2\,dx}{\bigl(\int_{R_{r_\epsilon}}K^*(x)|u_\epsilon|^p\,dx\bigr)^{2/p}}<\Bigl(2+\Bigl|\frac{b^2}{4}-b\Bigr|\Bigr)\epsilon.$$

This implies that (1.10) is satisfied if $\epsilon$ is chosen such that $(2+|\frac{b^2}{4}-b|)\epsilon<(1-b/2)^{\frac{p-2}{p}}\mu^{-\frac{2}{p}}S_p$.

Notations Let $X$ be a Banach space and $\varphi\in C^1(X,\mathbb{R})$. We denote the Fréchet derivative of $\varphi$ at $u$ by $\varphi'(u)$. The Gateaux derivative of $\varphi$ is denoted by $\langle\varphi'(u),v\rangle$, $u,v\in X$. By $\to$ we denote strong convergence and by $\rightharpoonup$ weak convergence. For a function $u$, $u^+$ denotes the function $\max\{u(x),0\}$. The symbol $\delta_{ij}$ denotes the Kronecker symbol:

$$\delta_{ij}=\begin{cases}1,& i=j,\\ 0,& i\neq j.\end{cases}$$

We use $o(h)$ to mean a quantity such that $o(h)/\|h\|\to0$ as $\|h\|\to0$.

2 An equivalent equation for Eq. (1.1)

For $x\in\mathbb{R}^N$, let $y=|x|^{-b/2}x$. To $u$, a $C^2$ function in $\mathbb{R}^N$, we associate a function $v$, a $C^2$ function in $\mathbb{R}^N\setminus\{0\}$, by the transformation

$$u(x)=|x|^{-\frac{b}{4}(N-2)}\,v\bigl(|x|^{-b/2}x_1,\dots,|x|^{-b/2}x_N\bigr).$$
(2.1)
(2.1)

Lemma 2.1 Under the above assumptions,

$$\Delta_x u(x)=|y|^{-\frac{b(N+2)}{2(2-b)}}\Biggl(\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial v}{\partial y_i}\Bigr)-C_b|y|^{-2}v\Biggr),$$
(2.2)

where

$$A_{ij}(y)=\delta_{ij}+\Bigl(\frac{b^2}{4}-b\Bigr)\frac{y_iy_j}{|y|^2},\quad i,j=1,\dots,N.$$
(2.3)
(2.3)

Proof Let $r=|x|$. By direct computations,

$$\frac{\partial u}{\partial x_i}=r^{-\frac{b(N-2)}{4}-\frac b2}\frac{\partial v}{\partial y_i}-\frac b2\,r^{-\frac{b(N-2)}{4}-\frac b2-2}x_i\sum_{j=1}^N x_j\frac{\partial v}{\partial y_j}-\frac b4(N-2)\,r^{-\frac{b(N-2)}{4}-2}x_iv$$
(2.4)

and

$$\begin{aligned}
\frac{\partial^2u}{\partial x_i^2}={}&-\frac{bN}{2}\,r^{-\frac{b(N-2)}{4}-\frac b2-2}x_i\frac{\partial v}{\partial y_i}+r^{-\frac{b(N-2)}{4}-b}\frac{\partial^2v}{\partial y_i^2}-b\,r^{-\frac{b(N-2)}{4}-b-2}\sum_{j=1}^N x_jx_i\frac{\partial^2v}{\partial y_j\partial y_i}\\
&+\Bigl(\frac{b^2}{4}(N-1)+b\Bigr)r^{-\frac{b(N-2)}{4}-\frac b2-4}x_i^2\sum_{j=1}^N x_j\frac{\partial v}{\partial y_j}-\frac b2\,r^{-\frac{b(N-2)}{4}-\frac b2-2}\sum_{j=1}^N x_j\frac{\partial v}{\partial y_j}\\
&+\frac{b^2}{4}\,r^{-\frac{b(N-2)}{4}-b-4}x_i^2\sum_{j,k=1}^N x_jx_k\frac{\partial^2v}{\partial y_j\partial y_k}+\frac b4(N-2)\Bigl(\frac b4(N-2)+2\Bigr)r^{-\frac{b(N-2)}{4}-4}x_i^2v\\
&-\frac b4(N-2)\,r^{-\frac{b(N-2)}{4}-2}v.
\end{aligned}$$

Then

$$\Delta_xu=\sum_{i=1}^N\frac{\partial^2u}{\partial x_i^2}=r^{-\frac{b(N-2)}{4}-b}\Biggl\{\Delta_yv+\Bigl(\frac{b^2}{4}-b\Bigr)r^{-2}\sum_{i,j=1}^N x_ix_j\frac{\partial^2v}{\partial y_i\partial y_j}+\Bigl(\frac{b^2}{4}-b\Bigr)(N-1)\,r^{\frac b2-2}\sum_{i=1}^N x_i\frac{\partial v}{\partial y_i}-\frac b4\Bigl(1-\frac b4\Bigr)(N-2)^2\,r^{b-2}v\Biggr\}.$$
(2.5)

Since $y=|x|^{-b/2}x$, we have $r=|y|^{\frac{2}{2-b}}$ and $x_i=|y|^{\frac{b}{2-b}}y_i$, $1\le i\le N$. Then

$$r^{-2}\sum_{i,j=1}^N x_ix_j\frac{\partial^2v}{\partial y_i\partial y_j}+(N-1)\,r^{\frac b2-2}\sum_{i=1}^N x_i\frac{\partial v}{\partial y_i}=\frac{1}{|y|^2}\sum_{i,j=1}^N y_iy_j\frac{\partial^2v}{\partial y_i\partial y_j}+\frac{N-1}{|y|^2}\sum_{i=1}^N y_i\frac{\partial v}{\partial y_i}=\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(\frac{y_iy_j}{|y|^2}\frac{\partial v}{\partial y_i}\Bigr).$$
(2.6)

Substituting (2.6) and $r=|y|^{\frac{2}{2-b}}$ into (2.5) results in

$$\Delta_xu(x)=|y|^{-\frac{b(N+2)}{2(2-b)}}\Biggl(\Delta_yv+\Bigl(\frac{b^2}{4}-b\Bigr)\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(\frac{y_iy_j}{|y|^2}\frac{\partial v}{\partial y_i}\Bigr)-C_b|y|^{-2}v\Biggr)=|y|^{-\frac{b(N+2)}{2(2-b)}}\Biggl(\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial v}{\partial y_i}\Bigr)-C_b|y|^{-2}v\Biggr).$$

 □
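Lemma 2.1 can be spot-checked numerically. For a radial choice $v(y)=e^{-|y|^2}$, the transformed function $u$ is radial, so $\Delta_x u=u''+(N-1)u'/r$, and the divergence term in (2.2) reduces to $(1+\frac{b^2}{4}-b)(v''+(N-1)v'/R)$ with $R=|y|$. The sketch below (sample values $N=3$, $b=-1$; all names are ours) compares the two sides of (2.2) at one radius by finite differences:

```python
import numpy as np

N, b = 3, -1.0
alpha = -b * (N - 2) / 4.0               # exponent in the transformation (2.1)
c = b ** 2 / 4.0 - b                     # coefficient in A_ij, see (2.3)
C_b = (b / 4.0) * (1.0 - b / 4.0) * (N - 2) ** 2

def u(r):
    # u(x) = |x|^alpha v(|x|^{-b/2} x) with v(y) = exp(-|y|^2); note |y|^2 = r^{2-b}
    return r ** alpha * np.exp(-r ** (2.0 - b))

def laplacian_u(r, h=1e-4):
    # radial Laplacian u'' + (N-1) u'/r via central differences
    d1 = (u(r + h) - u(r - h)) / (2 * h)
    d2 = (u(r + h) - 2 * u(r) + u(r - h)) / h ** 2
    return d2 + (N - 1) * d1 / r

def rhs(r):
    # right-hand side of (2.2) for radial v(y) = exp(-|y|^2)
    R = r ** ((2.0 - b) / 2.0)           # R = |y| = r^{(2-b)/2}
    v = np.exp(-R ** 2)
    # sum_{i,j} d/dy_j(A_ij dv/dy_i) = (1+c)(v'' + (N-1)v'/R); here v' = -2Rv, v'' = (4R^2-2)v
    div_term = (1 + c) * (4 * R ** 2 - 2 * N) * v
    return R ** (-b * (N + 2) / (2 * (2 - b))) * (div_term - C_b * v / R ** 2)

assert abs(laplacian_u(1.3) - rhs(1.3)) < 1e-4
```

The agreement is up to finite-difference truncation error only, which supports the reconstructed exponents in (2.2).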

Let

$$H^1_{\mathrm{loc}}(\mathbb{R}^N)=\Bigl\{u \Bigm| \text{for every bounded domain } \Omega\subset\mathbb{R}^N,\ \int_\Omega|\nabla u|^2\,dx+\int_\Omega u^2\,dx<+\infty\Bigr\}.$$
(2.7)

From the classical Hardy inequality (see, e.g., [13, Lemma 2.1]), we deduce that for every bounded $C^1$ domain $\Omega\subset\mathbb{R}^N$, there exists $C_\Omega>0$ such that, for every $u\in H^1_{\mathrm{loc}}(\mathbb{R}^N)$,

$$\int_\Omega\frac{u^2}{|x|^2}\,dx\le C_\Omega\Bigl(\int_\Omega|\nabla u|^2\,dx+\int_\Omega u^2\,dx\Bigr).$$
(2.8)

Theorem 2.2 If $v\in H^1_{\mathrm{loc}}(\mathbb{R}^N)$ is a weak solution of the equation

$$-\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial v}{\partial y_i}\Bigr)+V^*v=K^*|v|^{p-2}v\quad\text{in } \mathbb{R}^N,$$
(2.9)

i.e., for every $\psi\in C_0^\infty(\mathbb{R}^N)$,

$$\int_{\mathbb{R}^N}\sum_{i,j=1}^N A_{ij}(y)\frac{\partial v}{\partial y_i}\frac{\partial\psi}{\partial y_j}\,dy+\int_{\mathbb{R}^N}V^*(y)v\psi\,dy=\int_{\mathbb{R}^N}K^*(y)|v|^{p-2}v\psi\,dy,$$
(2.10)

and $u$ is defined by (2.1), then $u\in H^1_{\mathrm{loc}}(\mathbb{R}^N)$ and it is a weak solution of (1.1), i.e., for every $\varphi\in C_0^\infty(\mathbb{R}^N)$,

$$\int_{\mathbb{R}^N}\nabla u\cdot\nabla\varphi\,dx+\int_{\mathbb{R}^N}V(x)u\varphi\,dx=\int_{\mathbb{R}^N}K(x)|u|^{p-2}u\varphi\,dx.$$
(2.11)

Proof Using the spherical coordinates

$$x_1=r\cos\sigma_1,\quad x_2=r\sin\sigma_1\cos\sigma_2,\quad x_j=r\sin\sigma_1\sin\sigma_2\cdots\sin\sigma_{j-1}\cos\sigma_j,\ 2\le j\le N-1,\quad x_N=r\sin\sigma_1\sin\sigma_2\cdots\sin\sigma_{N-2}\sin\sigma_{N-1},$$

where $0\le\sigma_j<\pi$, $j=1,2,\dots,N-2$, and $0\le\sigma_{N-1}<2\pi$, we have

$$dx=r^{N-1}f(\sigma)\,dr\,d\sigma_1\cdots d\sigma_{N-1},$$

where $f(\sigma)=\sin^{N-2}\sigma_1\sin^{N-3}\sigma_2\cdots\sin\sigma_{N-2}$. Recall that $y=|x|^{-b/2}x$. Let $R=|y|$. Then $r=R^{\frac{2}{2-b}}$ and

$$dx=r^{N-1}f(\sigma)\,dr\,d\sigma_1\cdots d\sigma_{N-1}=R^{\frac{2(N-1)}{2-b}}f(\sigma)\,d\bigl(R^{\frac{2}{2-b}}\bigr)\,d\sigma_1\cdots d\sigma_{N-1}=\frac{2}{2-b}\,R^{\frac{2N}{2-b}-1}f(\sigma)\,dR\,d\sigma_1\cdots d\sigma_{N-1}=\frac{2}{2-b}\,|y|^{\frac{bN}{2-b}}\,dy.$$
(2.12)

Here, we used $dy=R^{N-1}f(\sigma)\,dR\,d\sigma_1\cdots d\sigma_{N-1}$ in the last equality above. From (2.4), (2.12) and (2.8), we deduce that there exists $C>0$ such that, for every bounded domain $\Omega\subset\mathbb{R}^N$,

$$\begin{aligned}
\int_\Omega\Bigl|\frac{\partial u}{\partial x_i}\Bigr|^2dx
&\le C\int_\Omega r^{-\frac{b(N-2)}{2}-b}\Bigl(\frac{\partial v}{\partial y_i}\bigl(|x|^{-b/2}x\bigr)\Bigr)^2dx+C\int_\Omega r^{-\frac{b(N-2)}{2}-b-4}\Bigl(x_i\sum_{j=1}^N x_j\frac{\partial v}{\partial y_j}\bigl(|x|^{-b/2}x\bigr)\Bigr)^2dx\\
&\quad+C\int_\Omega r^{-\frac{b(N-2)}{2}-4}x_i^2\,v^2\bigl(|x|^{-b/2}x\bigr)\,dx\\
&=\frac{2C}{2-b}\int_{\Omega'}\Bigl(\frac{\partial v(y)}{\partial y_i}\Bigr)^2dy+\frac{2C}{2-b}\int_{\Omega'}\Bigl(\frac{y_i}{|y|}\sum_{j=1}^N\frac{y_j}{|y|}\frac{\partial v(y)}{\partial y_j}\Bigr)^2dy+\frac{2C}{2-b}\int_{\Omega'}\frac{y_i^2}{|y|^4}\,v^2(y)\,dy\\
&\le C'\Bigl(\int_{\Omega'}|\nabla v|^2\,dy+\int_{\Omega'}\frac{v^2}{|y|^2}\,dy\Bigr)<+\infty,
\end{aligned}$$

where $\Omega'$ denotes the image of $\Omega$ under the map $x\mapsto|x|^{-b/2}x$. Moreover,

$$\int_\Omega u^2\,dx=\int_\Omega|x|^{-\frac b2(N-2)}v^2\bigl(|x|^{-b/2}x\bigr)\,dx=\frac{2}{2-b}\int_{\Omega'}|y|^{\frac{2b}{2-b}}v^2(y)\,dy<+\infty.$$

Therefore, $u\in H^1_{\mathrm{loc}}(\mathbb{R}^N)$. Then, to prove that $u$ satisfies (2.11) for every $\varphi\in C_0^\infty(\mathbb{R}^N)$, it suffices to prove that (2.11) holds for every $\varphi\in C_0^\infty(\mathbb{R}^N\setminus\{0\})$. For $\varphi\in C_0^\infty(\mathbb{R}^N\setminus\{0\})$, let $\psi\in C_0^\infty(\mathbb{R}^N\setminus\{0\})$ be such that

$$\varphi(x)=|x|^{-\frac b4(N-2)}\,\psi\bigl(|x|^{-b/2}x\bigr).$$

By using the divergence theorem and Lemma 2.1, we get that

$$\begin{aligned}
\int_{\mathbb{R}^N}\nabla u\cdot\nabla\varphi\,dx
&=-\int_{\mathbb{R}^N}u\,\Delta\varphi\,dx
=-\int_{\mathbb{R}^N}u\,|y|^{-\frac{b(N+2)}{2(2-b)}}\Biggl(\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial\psi}{\partial y_i}\Bigr)-C_b|y|^{-2}\psi\Biggr)dx\\
&=-\int_{\mathbb{R}^N}|x|^{-\frac b4(N-2)}v\bigl(|x|^{-b/2}x\bigr)\,|y|^{-\frac{b(N+2)}{2(2-b)}}\Biggl(\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial\psi}{\partial y_i}\Bigr)-C_b|y|^{-2}\psi\Biggr)dx\\
&=-\int_{\mathbb{R}^N}|y|^{-\frac{b(N-2)}{2(2-b)}}v(y)\,|y|^{-\frac{b(N+2)}{2(2-b)}}\Biggl(\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial\psi}{\partial y_i}\Bigr)-C_b|y|^{-2}\psi\Biggr)\frac{2}{2-b}|y|^{\frac{bN}{2-b}}\,dy\\
&=-\frac{2}{2-b}\int_{\mathbb{R}^N}v\Biggl(\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(A_{ij}(y)\frac{\partial\psi}{\partial y_i}\Bigr)-C_b|y|^{-2}\psi\Biggr)dy\\
&=\frac{2}{2-b}\int_{\mathbb{R}^N}\sum_{i,j=1}^N A_{ij}(y)\frac{\partial v}{\partial y_i}\frac{\partial\psi}{\partial y_j}\,dy+\frac{2C_b}{2-b}\int_{\mathbb{R}^N}\frac{v\psi}{|y|^2}\,dy.
\end{aligned}$$

Moreover,

$$\begin{aligned}
\int_{\mathbb{R}^N}V(x)u\varphi\,dx
&=\frac{2}{2-b}\int_{\mathbb{R}^N}V\bigl(|y|^{\frac{b}{2-b}}y\bigr)\,u\bigl(|y|^{\frac{b}{2-b}}y\bigr)\,\varphi\bigl(|y|^{\frac{b}{2-b}}y\bigr)\,|y|^{\frac{bN}{2-b}}\,dy\\
&=\frac{2}{2-b}\int_{\mathbb{R}^N}|y|^{\frac{2b}{2-b}}V\bigl(|y|^{\frac{b}{2-b}}y\bigr)\Bigl(|y|^{\frac{b(N-2)}{2(2-b)}}u\bigl(|y|^{\frac{b}{2-b}}y\bigr)\Bigr)\Bigl(|y|^{\frac{b(N-2)}{2(2-b)}}\varphi\bigl(|y|^{\frac{b}{2-b}}y\bigr)\Bigr)dy\\
&=\frac{2}{2-b}\int_{\mathbb{R}^N}|y|^{\frac{2b}{2-b}}V\bigl(|y|^{\frac{b}{2-b}}y\bigr)\,v(y)\psi(y)\,dy
\end{aligned}$$

and

$$\int_{\mathbb{R}^N}K(x)|u|^{p-2}u\varphi\,dx=\frac{2}{2-b}\int_{\mathbb{R}^N}K\bigl(|y|^{\frac{b}{2-b}}y\bigr)\bigl|u\bigl(|y|^{\frac{b}{2-b}}y\bigr)\bigr|^{p-2}u\bigl(|y|^{\frac{b}{2-b}}y\bigr)\varphi\bigl(|y|^{\frac{b}{2-b}}y\bigr)|y|^{\frac{bN}{2-b}}\,dy=\frac{2}{2-b}\int_{\mathbb{R}^N}|y|^{\frac{2s}{2-b}}K\bigl(|y|^{\frac{b}{2-b}}y\bigr)|v(y)|^{p-2}v(y)\psi(y)\,dy.$$

Therefore,

$$\begin{aligned}
&\int_{\mathbb{R}^N}\nabla u\cdot\nabla\varphi\,dx+\int_{\mathbb{R}^N}V(x)u\varphi\,dx-\int_{\mathbb{R}^N}K(x)|u|^{p-2}u\varphi\,dx\\
&\quad=\frac{2}{2-b}\Biggl(\int_{\mathbb{R}^N}\sum_{i,j=1}^N A_{ij}(y)\frac{\partial v}{\partial y_i}\frac{\partial\psi}{\partial y_j}\,dy+C_b\int_{\mathbb{R}^N}\frac{v\psi}{|y|^2}\,dy+\int_{\mathbb{R}^N}|y|^{\frac{2b}{2-b}}V\bigl(|y|^{\frac{b}{2-b}}y\bigr)v\psi\,dy-\int_{\mathbb{R}^N}|y|^{\frac{2s}{2-b}}K\bigl(|y|^{\frac{b}{2-b}}y\bigr)|v|^{p-2}v\psi\,dy\Biggr)\\
&\quad=\frac{2}{2-b}\Biggl(\int_{\mathbb{R}^N}\sum_{i,j=1}^N A_{ij}(y)\frac{\partial v}{\partial y_i}\frac{\partial\psi}{\partial y_j}\,dy+\int_{\mathbb{R}^N}V^*(y)v\psi\,dy-\int_{\mathbb{R}^N}K^*(y)|v|^{p-2}v\psi\,dy\Biggr)=0.
\end{aligned}$$

This completes the proof. □

This theorem implies that the problem of looking for solutions of (1.1) can be reduced to a problem of looking for solutions of (2.9).
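The change of variables behind Theorem 2.2 can also be sanity-checked numerically: for a radial profile, (2.12) says that $\int g(|x|)\,dx$ transforms with the factor $\frac{2}{2-b}|y|^{bN/(2-b)}$. The sketch below (sample profile $g$ and values $N$, $b$ are ours; the common angular factor is dropped on both sides) compares the two radial integrals:

```python
import numpy as np

N, b = 3, -1.0
g = lambda r: np.exp(-r)                       # any rapidly decaying radial profile

# left: radial reduction of the integral of g(|x|) over R^N
r = np.linspace(1e-6, 60.0, 1_000_000)
lhs = np.sum(g(r) * r ** (N - 1)) * (r[1] - r[0])

# right: same integral in R = |y|, using dx = (2/(2-b)) R^{2N/(2-b)-1} dR dσ from (2.12)
R = np.linspace(1e-6, 60.0 ** ((2 - b) / 2), 1_000_000)
rhs = (2 / (2 - b)) * np.sum(g(R ** (2 / (2 - b))) * R ** (2 * N / (2 - b) - 1)) * (R[1] - R[0])

assert abs(lhs - rhs) < 1e-2                   # both approximate Gamma(3) = 2
```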

3 The variational functional for Eq. (2.9)

The following inequality is a variant of the Hardy inequality.

Lemma 3.1 If $v\in H^1(\mathbb{R}^N)$, then

$$\int_{\mathbb{R}^N}\frac{|x\cdot\nabla v|^2}{|x|^2}\,dx\ge\frac{(N-2)^2}{4}\int_{\mathbb{R}^N}\frac{|v|^2}{|x|^2}\,dx.$$
(3.1)

Proof We only give the proof of (3.1) for $v\in C_0^\infty(\mathbb{R}^N)$, since $C_0^\infty(\mathbb{R}^N)$ is dense in $H^1(\mathbb{R}^N)$. For $v\in C_0^\infty(\mathbb{R}^N)$, we have the following identity:

$$|v(x)|^2=-\int_1^\infty\frac{d}{d\lambda}|v(\lambda x)|^2\,d\lambda=-2\int_1^\infty v(\lambda x)\bigl(x\cdot\nabla v(\lambda x)\bigr)\,d\lambda.$$

By using the Hölder inequality, it follows that

$$\begin{aligned}
\int_{\mathbb{R}^N}\frac{|v(x)|^2}{|x|^2}\,dx
&=-2\int_1^\infty\int_{\mathbb{R}^N}\frac{v(\lambda x)}{|x|^2}\bigl(x\cdot\nabla v(\lambda x)\bigr)\,dx\,d\lambda
=-2\int_1^\infty\frac{d\lambda}{\lambda^{N-1}}\int_{\mathbb{R}^N}\frac{v(x)}{|x|^2}\bigl(x\cdot\nabla v(x)\bigr)\,dx\\
&=-\frac{2}{N-2}\int_{\mathbb{R}^N}\frac{v(x)}{|x|^2}\bigl(x\cdot\nabla v(x)\bigr)\,dx
\le\frac{2}{N-2}\Bigl(\int_{\mathbb{R}^N}\frac{v^2(x)}{|x|^2}\,dx\Bigr)^{1/2}\Bigl(\int_{\mathbb{R}^N}\frac{|x\cdot\nabla v|^2}{|x|^2}\,dx\Bigr)^{1/2}.
\end{aligned}$$

Then we conclude that

$$\int_{\mathbb{R}^N}\frac{|x\cdot\nabla v|^2}{|x|^2}\,dx\ge\frac{(N-2)^2}{4}\int_{\mathbb{R}^N}\frac{|v|^2}{|x|^2}\,dx.$$

 □
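For radial $v$ the left-hand side of (3.1) reduces to the classical Hardy integrand, since $x\cdot\nabla v/|x|=v'(|x|)$. A numerical spot-check with a Gaussian (the sample function and grid are ours):

```python
import numpy as np

N = 3
r = np.linspace(1e-6, 12.0, 400_000)
dr = r[1] - r[0]
v = np.exp(-r ** 2)                      # radial test function v(x) = exp(-|x|^2)
vp = -2.0 * r * np.exp(-r ** 2)          # v'(r); for radial v, (x . grad v)/|x| = v'(|x|)
lhs = np.sum(vp ** 2 * r ** (N - 1)) * dr                  # ∝ ∫ |x.grad v|^2 / |x|^2 dx
rhs = (N - 2) ** 2 / 4 * np.sum(v ** 2 * r ** (N - 3)) * dr
assert lhs >= rhs                        # (3.1) holds, with room to spare for this v
```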

From the definition of $A_{ij}(x)$ (see (2.3)), it is easy to verify that, for $u\in H^1(\mathbb{R}^N)$,

$$\int_{\mathbb{R}^N}\sum_{i,j=1}^N A_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial u}{\partial x_j}\,dx=\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx.$$
(3.2)

Lemma 3.2 There exist constants $C_1>0$ and $C_2>0$ such that, for every $u\in H^1(\mathbb{R}^N)$,

$$C_1\|u\|^2\le\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx\le C_2\|u\|^2.$$

Proof From conditions (A1) and (A2), we deduce that there exists a constant $C>0$ such that

$$|x|^{\frac{2b}{2-b}}\,V\bigl(|x|^{\frac{b}{2-b}}x\bigr)\le C\bigl(1+|x|^{-2}\bigr),\quad x\in\mathbb{R}^N\setminus\{0\}.$$
(3.3)

Since

$$\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx=\int_{\mathbb{R}^N}|x|^{\frac{2b}{2-b}}V\bigl(|x|^{\frac{b}{2-b}}x\bigr)|u|^2\,dx+C_b\int_{\mathbb{R}^N}\frac{|u|^2}{|x|^2}\,dx,$$

by (3.3) and the classical Hardy inequality (see, e.g., [13])

$$\frac{(N-2)^2}{4}\int_{\mathbb{R}^N}\frac{|u|^2}{|x|^2}\,dx\le\int_{\mathbb{R}^N}|\nabla u|^2\,dx,\quad u\in H^1(\mathbb{R}^N),$$

we deduce that there exists a constant $C>0$ such that

$$\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx\le C\|u\|^2.$$

This, together with the fact that $\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx\le\int_{\mathbb{R}^N}|\nabla u|^2\,dx$, yields that there exists a constant $C_2>0$ such that

$$\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx\le C_2\|u\|^2,\quad u\in H^1(\mathbb{R}^N).$$
(3.4)

If $0<b<2$, then $\frac{b^2}{4}-b<0$ and

$$\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx\ge\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}|\nabla u|^2\,dx=\Bigl(1-\frac b2\Bigr)^2\int_{\mathbb{R}^N}|\nabla u|^2\,dx.$$
(3.5)

In this case, $C_b=\frac b4(1-\frac b4)(N-2)^2>0$ and

$$\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx=\int_{\mathbb{R}^N}|x|^{\frac{2b}{2-b}}V\bigl(|x|^{\frac{b}{2-b}}x\bigr)|u|^2\,dx+C_b\int_{\mathbb{R}^N}\frac{|u|^2}{|x|^2}\,dx\ge\int_{\mathbb{R}^N}|x|^{\frac{2b}{2-b}}V\bigl(|x|^{\frac{b}{2-b}}x\bigr)|u|^2\,dx.$$
(3.6)

Conditions (A1) and (A2) imply that there exists a constant $C>0$ such that

$$\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\int_{\mathbb{R}^N}|x|^{\frac{2b}{2-b}}V\bigl(|x|^{\frac{b}{2-b}}x\bigr)u^2\,dx\ge C\int_{\mathbb{R}^N}u^2\,dx.$$
(3.7)

Combining (3.5)-(3.7) yields that there exists a constant $C_1>0$ such that

$$\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx\ge C_1\|u\|^2,\quad u\in H^1(\mathbb{R}^N).$$
(3.8)

If $b<0$, (3.7) still holds. Noting that $C_b=-\bigl(\frac{b^2}{4}-b\bigr)\frac{(N-2)^2}{4}$, from Lemma 3.1 and (3.7) we deduce that there exists a constant $C_1>0$ such that, for every $u\in H^1(\mathbb{R}^N)$,

$$\begin{aligned}
&\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx\\
&\quad=\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\Bigl(\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx-\frac{(N-2)^2}{4}\int_{\mathbb{R}^N}\frac{|u|^2}{|x|^2}\,dx\Bigr)+\int_{\mathbb{R}^N}|x|^{\frac{2b}{2-b}}V\bigl(|x|^{\frac{b}{2-b}}x\bigr)|u|^2\,dx\\
&\quad\ge\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\int_{\mathbb{R}^N}|x|^{\frac{2b}{2-b}}V\bigl(|x|^{\frac{b}{2-b}}x\bigr)|u|^2\,dx\ge C_1\|u\|^2.
\end{aligned}$$
(3.9)

Then the desired result of this lemma follows from (3.4), (3.8) and (3.9) immediately. □
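The last step of the proof relies on the algebraic identity $C_b=-\bigl(\frac{b^2}{4}-b\bigr)\frac{(N-2)^2}{4}$, which is what lets the $C_b|x|^{-2}$ part of $V^*$ absorb the Hardy term from Lemma 3.1 when $b<0$. A one-line numerical check (sample values are ours):

```python
for b in [-3.0, -1.0, 0.5, 1.5]:
    for N in [3, 4, 7]:
        C_b = (b / 4) * (1 - b / 4) * (N - 2) ** 2        # definition (1.7)
        assert abs(C_b + (b ** 2 / 4 - b) * (N - 2) ** 2 / 4) < 1e-12
```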

This lemma implies that

$$\|u\|_A=\Bigl(\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{|x\cdot\nabla u|^2}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)|u|^2\,dx\Bigr)^{1/2}$$
(3.10)

is a norm equivalent to the standard norm in $H^1(\mathbb{R}^N)$. We denote the inner product associated with $\|\cdot\|_A$ by $(\cdot,\cdot)_A$, i.e.,

$$(u,v)_A=\int_{\mathbb{R}^N}\nabla u\cdot\nabla v\,dx+\int_{\mathbb{R}^N}V^*(x)uv\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{(x\cdot\nabla u)(x\cdot\nabla v)}{|x|^2}\,dx.$$
(3.11)

By the Sobolev inequality, we have

$$S_A:=\inf_{u\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\|u\|_A^2}{\bigl(\int_{\mathbb{R}^N}|u|^p\,dx\bigr)^{2/p}}>0$$
(3.12)

and

$$\|u\|_A\ge S_A^{1/2}\Bigl(\int_{\mathbb{R}^N}|u|^p\,dx\Bigr)^{1/p},\quad u\in H^1(\mathbb{R}^N).$$
(3.13)

By conditions (A1) and (A2), if $0<b<2$, then $K^*$ is bounded on $\mathbb{R}^N$. Therefore, by (3.13), there exists $C>0$ such that

$$\Bigl(\int_{\mathbb{R}^N}K^*(x)(u^+)^p\,dx\Bigr)^{1/p}\le C\|u\|_A,\quad u\in H^1(\mathbb{R}^N).$$
(3.14)

However, if $b<0$, then $K^*$ has a singularity at $x=0$; more precisely,

$$K^*(x)\sim|x|^{\frac{2s}{2-b}}K(0),\quad\text{as } |x|\to0.$$
(3.15)

Recall that $p=2(N-2s/b)/(N-2)$ and that $\frac{2s}{2-b}>-\frac{2s}{b}$ if $b<0$. Then, by the Hardy-Sobolev inequality (see, for example, [14, Lemma 3.2]), we deduce that there exists $C>0$ such that (3.14) still holds. Therefore, the functional

$$J(u)=\frac12\|u\|_A^2-\frac1p\int_{\mathbb{R}^N}K^*(x)(u^+)^p\,dx,\quad u\in H^1(\mathbb{R}^N),$$
(3.16)

is a $C^2$ functional on $H^1(\mathbb{R}^N)$. Moreover, it is easy to check that the Gateaux derivative of $J$ is

$$\langle J'(u),h\rangle=(u,h)_A-\int_{\mathbb{R}^N}K^*(x)(u^+)^{p-1}h\,dx,\quad u,h\in H^1(\mathbb{R}^N),$$

and the critical points of $J$ are nonnegative solutions of (2.9).

4 Some minimizing problems

For $\theta=(\theta_1,\dots,\theta_N)\in\mathbb{R}^N$ with $|\theta|=1$, let

$$B_{ij}(\theta)=\delta_{ij}+\Bigl(\frac{b^2}{4}-b\Bigr)\theta_i\theta_j,\quad i,j=1,\dots,N.$$
(4.1)

By this definition, we have, for $u\in H^1(\mathbb{R}^N)$,

$$\int_{\mathbb{R}^N}\sum_{i,j=1}^N B_{ij}(\theta)\frac{\partial u}{\partial x_i}\frac{\partial u}{\partial x_j}\,dx=\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}|\theta\cdot\nabla u|^2\,dx.$$
(4.2)

From

$$\Bigl(1+\Bigl|\frac{b^2}{4}-b\Bigr|\Bigr)\int_{\mathbb{R}^N}|\nabla u|^2\,dx\ge\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}|\theta\cdot\nabla u|^2\,dx\ge\begin{cases}\bigl(1-b/2\bigr)^2\int_{\mathbb{R}^N}|\nabla u|^2\,dx,&0<b<2,\\[2pt] \int_{\mathbb{R}^N}|\nabla u|^2\,dx,&b<0,\end{cases}$$

we deduce that the norm defined by

$$\|u\|_\theta:=\Bigl(\int_{\mathbb{R}^N}|\nabla u|^2\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}|\theta\cdot\nabla u|^2\,dx+a\int_{\mathbb{R}^N}|u|^2\,dx\Bigr)^{1/2}$$
(4.3)

is equivalent to the standard norm in $H^1(\mathbb{R}^N)$. The inner product corresponding to $\|\cdot\|_\theta$ is

$$(u,v)_\theta=\int_{\mathbb{R}^N}\nabla u\cdot\nabla v\,dx+a\int_{\mathbb{R}^N}uv\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}(\theta\cdot\nabla u)(\theta\cdot\nabla v)\,dx.$$

Lemma 4.1 The infimum

$$\inf_{u\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\|u\|_\theta^2}{\bigl(\int_{\mathbb{R}^N}|u|^p\,dx\bigr)^{2/p}}$$
(4.4)

is independent of $\theta\in\mathbb{R}^N$ with $|\theta|=1$.

Proof In this proof, we always view a vector in $\mathbb{R}^N$ as a $1\times N$ matrix, and we use $A^T$ to denote the transpose of a matrix $A$.

For any $\theta,\theta'\in\mathbb{R}^N$ with $|\theta|=|\theta'|=1$, let $G$ be an $N\times N$ orthogonal matrix such that $\theta G^T=\theta'$. For any $u\in H^1(\mathbb{R}^N)$, let $v(x)=u(xG)$, $x\in\mathbb{R}^N$. The assumption that $G$ is an $N\times N$ orthogonal matrix implies that $GG^T=I$, where $I$ is the $N\times N$ identity matrix. Then it is easy to check that

$$\int_{\mathbb{R}^N}|v|^2\,dx=\int_{\mathbb{R}^N}|u|^2\,dx,\qquad\int_{\mathbb{R}^N}|v|^p\,dx=\int_{\mathbb{R}^N}|u|^p\,dx.$$
(4.5)

Note that

$$\nabla v(x)=(\nabla u)(xG)\,G.$$
(4.6)

By $GG^T=I$, we have

$$|\nabla v(x)|^2=\nabla v(x)\bigl(\nabla v(x)\bigr)^T=(\nabla u)(xG)\,GG^T\bigl((\nabla u)(xG)\bigr)^T=\bigl|(\nabla u)(xG)\bigr|^2.$$

It follows that

$$\int_{\mathbb{R}^N}|\nabla v(x)|^2\,dx=\int_{\mathbb{R}^N}\bigl|(\nabla u)(xG)\bigr|^2\,dx=\int_{\mathbb{R}^N}|\nabla u(x)|^2\,dx.$$
(4.7)

By (4.6) and $\theta G^T=\theta'$, we get that

$$\sum_{i=1}^N\theta_i\frac{\partial v}{\partial x_i}=\theta\bigl((\nabla u)(xG)\,G\bigr)^T=\theta G^T\bigl((\nabla u)(xG)\bigr)^T=\theta'\bigl((\nabla u)(xG)\bigr)^T=\sum_{i=1}^N\theta_i'\,\frac{\partial u}{\partial y_i}(xG).$$

It follows that

$$\int_{\mathbb{R}^N}|\theta\cdot\nabla v|^2\,dx=\int_{\mathbb{R}^N}\Bigl|\sum_{i=1}^N\theta_i\frac{\partial v}{\partial x_i}\Bigr|^2dx=\int_{\mathbb{R}^N}\Bigl|\sum_{i=1}^N\theta_i'\frac{\partial u}{\partial y_i}(xG)\Bigr|^2dx=\int_{\mathbb{R}^N}\Bigl|\sum_{i=1}^N\theta_i'\frac{\partial u}{\partial x_i}\Bigr|^2dx=\int_{\mathbb{R}^N}|\theta'\cdot\nabla u|^2\,dx.$$
(4.8)

By (4.5), (4.7) and (4.8), we get that $\|v\|_\theta^2=\|u\|_{\theta'}^2$. This together with (4.5) leads to the result of this lemma. □

Since the infimum (4.4) is independent of $\theta\in\mathbb{R}^N$ with $|\theta|=1$, we denote it by $S$.

Lemma 4.2 Let $S_p$ be the infimum in (1.9). Then $S=(1-b/2)^{\frac{p-2}{p}}S_p$.

Proof Choosing $\theta=(1,0,\dots,0)$ in $\|\cdot\|_\theta$, we have

$$\|u\|_\theta^2=\Bigl(1-\frac b2\Bigr)^2\int_{\mathbb{R}^N}\Bigl|\frac{\partial u}{\partial x_1}\Bigr|^2dx+\sum_{i=2}^N\int_{\mathbb{R}^N}\Bigl|\frac{\partial u}{\partial x_i}\Bigr|^2dx+a\int_{\mathbb{R}^N}u^2\,dx.$$

By Lemma 4.1, we have

$$S=\inf_{u\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\bigl(1-\frac b2\bigr)^2\int_{\mathbb{R}^N}\bigl|\frac{\partial u}{\partial x_1}\bigr|^2dx+\sum_{i=2}^N\int_{\mathbb{R}^N}\bigl|\frac{\partial u}{\partial x_i}\bigr|^2dx+a\int_{\mathbb{R}^N}u^2\,dx}{\bigl(\int_{\mathbb{R}^N}|u|^p\,dx\bigr)^{2/p}}.$$

Let

$$v(x)=u\bigl((1-b/2)x_1,x_2,\dots,x_N\bigr),\quad x\in\mathbb{R}^N.$$

Then

$$\frac{\bigl(1-\frac b2\bigr)^2\int_{\mathbb{R}^N}\bigl|\frac{\partial u}{\partial x_1}\bigr|^2dx+\sum_{i=2}^N\int_{\mathbb{R}^N}\bigl|\frac{\partial u}{\partial x_i}\bigr|^2dx+a\int_{\mathbb{R}^N}u^2\,dx}{\bigl(\int_{\mathbb{R}^N}|u|^p\,dx\bigr)^{2/p}}=\Bigl(1-\frac b2\Bigr)^{\frac{p-2}{p}}\,\frac{\int_{\mathbb{R}^N}|\nabla v|^2\,dx+a\int_{\mathbb{R}^N}v^2\,dx}{\bigl(\int_{\mathbb{R}^N}|v|^p\,dx\bigr)^{2/p}}.$$

It follows that

$$S=\Bigl(1-\frac b2\Bigr)^{\frac{p-2}{p}}\inf_{v\in H^1(\mathbb{R}^N)\setminus\{0\}}\frac{\int_{\mathbb{R}^N}|\nabla v|^2\,dx+a\int_{\mathbb{R}^N}v^2\,dx}{\bigl(\int_{\mathbb{R}^N}|v|^p\,dx\bigr)^{2/p}}=\Bigl(1-\frac b2\Bigr)^{\frac{p-2}{p}}S_p.$$

 □
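The scaling identity in the proof above can be verified in closed form for a Gaussian, $u(x)=e^{-|x|^2/2}$, whose integrals are explicit (all sample constants below are ours; `num`/`den` denote the numerator and denominator of the two Rayleigh quotients):

```python
import math

N, a, b = 3, 1.0, -1.0                    # sample data; any N >= 3, b < 2, b != 0 works
p = 3.2                                   # sample exponent with 2 < p < 2N/(N-2) = 6 here
c = 1.0 - b / 2.0
pi = math.pi
# for u(x) = exp(-|x|^2/2): ∫|∂u/∂x_i|^2 = pi^{N/2}/2, ∫u^2 = pi^{N/2}, ∫|u|^p = (2 pi/p)^{N/2}
num_u = c ** 2 * pi ** (N / 2) / 2 + (N - 1) * pi ** (N / 2) / 2 + a * pi ** (N / 2)
den_u = (2 * pi / p) ** (N / p)           # = (∫|u|^p dx)^{2/p}
# for v(x) = u(c x_1, x_2, ..., x_N), computed independently:
num_v = (c / 2) * pi ** (N / 2) + (N - 1) * pi ** (N / 2) / (2 * c) + a * pi ** (N / 2) / c
den_v = ((2 * pi / p) ** (N / 2) / c) ** (2 / p)
assert abs(num_u / den_u - c ** ((p - 2) / p) * num_v / den_v) < 1e-9
```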

Since the functionals $\|u\|_\theta^2$ and $\int_{\mathbb{R}^N}|u|^p\,dx$ are invariant under translations, the same argument as in the proof of [11, Theorem 1.34] yields that there exists a positive minimizer $U_\theta$ for the infimum $S$. Moreover, by the Lagrange multiplier rule, it is a solution of

$$-\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(B_{ij}(\theta)\frac{\partial u}{\partial y_i}\Bigr)+au=S(u^+)^{p-1}\quad\text{in } \mathbb{R}^N,$$

and $(S/\mu)^{1/(p-2)}U_\theta$ is a solution of

$$-\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\Bigl(B_{ij}(\theta)\frac{\partial u}{\partial y_i}\Bigr)+au=\mu(u^+)^{p-1}\quad\text{in } \mathbb{R}^N.$$
(4.9)

In the next section, we shall show that Eq. (4.9) is the ‘limit’ equation of

$$-\sum_{i,j=1}^N\frac{\partial}{\partial x_j}\Bigl(A_{ij}(x)\frac{\partial u}{\partial x_i}\Bigr)+V^*(x)u=K^*(x)(u^+)^{p-1}\quad\text{in } \mathbb{R}^N.$$
(4.10)

It is easy to verify that

$$J_\theta(u)=\frac12\|u\|_\theta^2-\frac{\mu}{p}\int_{\mathbb{R}^N}(u^+)^p\,dx,\quad u\in H^1(\mathbb{R}^N),$$
(4.11)

is a $C^2$ functional on $H^1(\mathbb{R}^N)$, that the Gateaux derivative of $J_\theta$ is

$$\langle J_\theta'(u),h\rangle=(u,h)_\theta-\mu\int_{\mathbb{R}^N}(u^+)^{p-1}h\,dx,\quad u,h\in H^1(\mathbb{R}^N),$$

and that the critical points of this functional are solutions of (4.9).

Lemma 4.3 Let $\theta\in\mathbb{R}^N$ satisfy $|\theta|=1$. If $u\neq0$ is a critical point of $J_\theta$, then

$$J_\theta(u)\ge\Bigl(\frac12-\frac1p\Bigr)\mu^{-\frac{2}{p-2}}S^{\frac{p}{p-2}}.$$
(4.12)

Proof Since $u$ is a critical point of $J_\theta$, we have

$$0=\langle J_\theta'(u),u\rangle=\|u\|_\theta^2-\mu\int_{\mathbb{R}^N}(u^+)^p\,dx.$$
(4.13)

It follows that

$$J_\theta(u)=\Bigl(\frac12-\frac1p\Bigr)\mu\int_{\mathbb{R}^N}(u^+)^p\,dx.$$
(4.14)

Since $u\neq0$, by $\|u\|_\theta^2=\mu\int_{\mathbb{R}^N}(u^+)^p\,dx$ and $\|u\|_\theta^2\ge S\bigl(\int_{\mathbb{R}^N}(u^+)^p\,dx\bigr)^{2/p}$, we get that

$$\int_{\mathbb{R}^N}(u^+)^p\,dx\ge(S/\mu)^{\frac{p}{p-2}}.$$

This together with (4.14) yields the result of this lemma. □
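The arithmetic at the end of this proof — substituting the lower bound $\int(u^+)^p\,dx\ge(S/\mu)^{p/(p-2)}$ into (4.14) — can be spot-checked numerically (the sample values of $S$, $\mu$, $p$ are ours):

```python
S, mu, p = 1.7, 0.9, 3.0                      # sample values with p > 2
Ip_min = (S / mu) ** (p / (p - 2))            # lower bound on ∫(u^+)^p dx from (4.13) and the definition of S
for Ip in [Ip_min, 2 * Ip_min, 10 * Ip_min]:
    J_theta = (0.5 - 1 / p) * mu * Ip         # J_theta(u) as in (4.14)
    assert J_theta >= (0.5 - 1 / p) * mu ** (-2 / (p - 2)) * S ** (p / (p - 2)) - 1e-12
```

Equality is attained exactly at the minimal value `Ip_min`, which recovers the level bound (4.12).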

5 The Palais-Smale condition for the functional J

Recall that $J$ is the functional defined by (3.16). By a $(PS)_c$ sequence of $J$, we mean a sequence $\{u_n\}\subset H^1(\mathbb{R}^N)$ such that $J(u_n)\to c$ and $J'(u_n)\to0$ in $H^{-1}(\mathbb{R}^N)$ as $n\to\infty$, where $H^{-1}(\mathbb{R}^N)$ denotes the dual space of $H^1(\mathbb{R}^N)$. We say that $J$ satisfies the $(PS)_c$ condition if every $(PS)_c$ sequence of $J$ contains a subsequence that converges in $H^1(\mathbb{R}^N)$.

Our main result in this section reads as follows.

Theorem 5.1 Under assumptions (A1) and (A2), let $\{u_n\}\subset H^1(\mathbb{R}^N)$ be a $(PS)_c$ sequence of $J$. Then, replacing $\{u_n\}$ if necessary by a subsequence, there exist a solution $u^0\in H^1(\mathbb{R}^N)$ of Eq. (4.10), a finite sequence $\{\theta^l\in\mathbb{R}^N : |\theta^l|=1,\ 1\le l\le k\}$, $k$ functions $\{u^l : 1\le l\le k\}\subset H^1(\mathbb{R}^N)$ and $k$ sequences $\{y_n^l\}\subset\mathbb{R}^N$ satisfying:

(i) $-\sum_{i,j=1}^N\frac{\partial}{\partial y_j}\bigl(B_{ij}(\theta^l)\frac{\partial u^l}{\partial y_i}\bigr)+au^l=\mu\bigl((u^l)^+\bigr)^{p-1}$ in $\mathbb{R}^N$;

(ii) $|y_n^l|\to\infty$ and $|y_n^l-y_n^{l'}|\to\infty$ for $l\neq l'$, as $n\to\infty$;

(iii) $\bigl\|u_n-u^0-\sum_{l=1}^k u^l(\cdot-y_n^l)\bigr\|\to0$ as $n\to\infty$;

(iv) $J(u^0)+\sum_{l=1}^k J_{\theta^l}(u^l)=c$.

This theorem gives a precise representation of the $(PS)_c$ sequences of the functional $J$. Through it, partial compactness for $J$ can be regained (see Corollary 5.8).

To prove this theorem, we need some lemmas. Our proof of this theorem is inspired by that of [11, Theorem 8.4].

Lemma 5.2 Let $u\in H^1(\mathbb{R}^N)$. Then, for any sequence $\{y_n\}\subset\mathbb{R}^N$,

$$\lim_{R\to\infty}\sup_n\int_{|x|>R}K^*(x+y_n)|u|^p\,dx=0.$$

If $|y_n|\to\infty$ as $n\to\infty$, then

$$\lim_{n\to\infty}\int_{\mathbb{R}^N}\bigl|K^*(x+y_n)-\mu\bigr|\,|u|^p\,dx=0.$$

Proof If $0<b<2$, then $K^*$ is bounded on $\mathbb{R}^N$, and the result of this lemma is obvious. If $b<0$, then $K^*(x)\sim|x|^{\frac{2s}{2-b}}K(0)$ as $|x|\to0$. Since $\frac{2s}{2-b}>-\frac{2s}{b}$, by Lemma 3.2 of [14], the map $v\mapsto (K^*)^{1/p}v$ from $H^1(\mathbb{R}^N)$ to $L^p_{\mathrm{loc}}(\mathbb{R}^N)$ is compact. Therefore, for any $\epsilon>0$, there exists $\delta_\epsilon>0$ such that

$$\sup_n\int_{|x|\le\delta_\epsilon}K^*(x)\bigl|u(x-y_n)\bigr|^p\,dx\le\epsilon.$$

And there exists $D(\epsilon)>0$, depending only on $\epsilon$, such that $K^*(x)\le D(\epsilon)$ for $|x|\ge\delta_\epsilon$. Then, for every $n$,

$$\int_{|x|>R}K^*(x+y_n)|u|^p\,dx\le\int_{\{x\,:\,|x+y_n|\le\delta_\epsilon,\ |x|>R\}}K^*(x+y_n)|u|^p\,dx+\int_{\{x\,:\,|x+y_n|>\delta_\epsilon,\ |x|>R\}}K^*(x+y_n)|u|^p\,dx\le\epsilon+D(\epsilon)\int_{|x|>R}|u|^p\,dx.$$

It follows that $\limsup_{R\to\infty}\sup_n\int_{|x|>R}K^*(x+y_n)|u|^p\,dx\le\epsilon$. Now let $\epsilon\to0$.

Using the same argument as above, for any $\epsilon>0$, there exist $\delta_\epsilon$ and $D(\epsilon)$ such that

$$\sup_n\int_{|x+y_n|\le\delta_\epsilon}\bigl|K^*(x+y_n)-\mu\bigr|\,|u|^p\,dx\le\epsilon$$

and

$$\bigl|K^*(x+y_n)-\mu\bigr|\,|u|^p\le\bigl(D(\epsilon)+\mu\bigr)|u|^p,\quad |x+y_n|\ge\delta_\epsilon.$$

Since $|y_n|\to\infty$, we have $\lim_{n\to\infty}K^*(x+y_n)=\mu$ for a.e. $x$. Then, using the Lebesgue dominated convergence theorem and the above two inequalities, we get that

$$\limsup_{n\to\infty}\int_{\mathbb{R}^N}\bigl|K^*(x+y_n)-\mu\bigr|\,|u|^p\,dx\le\epsilon.$$

Let $\epsilon\to0$. Then we get the desired result of this lemma. □

Lemma 5.3 Let $\rho>0$. If $\{u_n\}$ is bounded in $H^1(\mathbb{R}^N)$ and

$$\sup_{y\in\mathbb{R}^N}\int_{B(y,\rho)}|u_n|^2\,dx\to0,\quad n\to\infty,$$
(5.1)

then $(K^*)^{1/p}u_n\to0$ in $L^p(\mathbb{R}^N)$.

Proof Since $\frac{2s}{2-b}>-\frac{2s}{b}$, by Lemma 3.2 of [14], the map $v\mapsto (K^*)^{1/p}v$ from $H^1(\mathbb{R}^N)$ to $L^p_{\mathrm{loc}}(\mathbb{R}^N)$ is compact. Therefore, for any $\epsilon>0$, there exists $\delta_\epsilon>0$ such that

$$\sup_n\int_{|x|\le\delta_\epsilon}K^*(x)|u_n|^p\,dx\le\epsilon.$$

And there exists $D(\epsilon)>0$, depending only on $\epsilon$, such that $K^*(x)\le D(\epsilon)$ for $|x|\ge\delta_\epsilon$. By (5.1) and the Lions lemma (see, for example, [11, Lemma 1.21]), we get that

$$\int_{|x|\ge\delta_\epsilon}K^*(x)|u_n|^p\,dx\le D(\epsilon)\int_{\mathbb{R}^N}|u_n|^p\,dx\to0,\quad n\to\infty.$$

Therefore, $\limsup_{n\to\infty}\int_{\mathbb{R}^N}K^*(x)|u_n|^p\,dx\le\epsilon$. Now let $\epsilon\to0$. □

Lemma 5.4 Let $\{y_n\}\subset\mathbb{R}^N$. If $u_n\rightharpoonup u$ in $H^1(\mathbb{R}^N)$, then

$$K^*(x+y_n)(u_n^+)^{p-1}-K^*(x+y_n)\bigl((u_n-u)^+\bigr)^{p-1}-K^*(x+y_n)(u^+)^{p-1}\to0\quad\text{in } H^{-1}(\mathbb{R}^N).$$

One can follow the proof of [11, Lemma 8.1] step by step, using Lemma 5.2, to prove this lemma.

The following lemma is a variant of the Brézis–Lieb lemma (see [15]); its proof is similar to that of [11, Lemma 1.32].

Lemma 5.5 Let $\{u_n\}\subset H^1(\mathbb{R}^N)$ and $\{y_n\}\subset\mathbb{R}^N$. If

(a) $\{u_n\}$ is bounded in $H^1(\mathbb{R}^N)$, and

(b) $u_n\to u$ a.e. on $\mathbb{R}^N$,

then

$$\lim_{n\to\infty}\int_{\mathbb{R}^N}K^*(x+y_n)\bigl|(u_n^+)^p-\bigl((u_n-u)^+\bigr)^p-(u^+)^p\bigr|\,dx=0.$$

Proof Let

$$j(t)=\begin{cases}t^p,& t\ge0,\\ 0,& t<0.\end{cases}$$

Then $j$ is a convex function. From [15, Lemma 3], we have that for any $\epsilon>0$ there exists $C(\epsilon)>0$ such that, for all $a,b\in\mathbb{R}$,

$$\bigl|j(a+b)-j(a)\bigr|\le\epsilon j(a)+C(\epsilon)j(b).$$
(5.2)

Hence

$$f_n^\epsilon:=\Bigl(K^*(x+y_n)\bigl|(u_n^+)^p-\bigl((u_n-u)^+\bigr)^p-(u^+)^p\bigr|-\epsilon K^*(x+y_n)\bigl((u_n-u)^+\bigr)^p\Bigr)^+\le\bigl(1+C(\epsilon)\bigr)K^*(x+y_n)(u^+)^p.$$

By Lemma 3.2 of [14], the map $v\mapsto (K^*)^{1/p}v$ from $H^1(\mathbb{R}^N)$ to $L^p_{\mathrm{loc}}(\mathbb{R}^N)$ is compact. We get that there exists $\delta_\epsilon>0$ such that, for any $n$,

$$\int_{|x+y_n|<\delta_\epsilon}f_n^\epsilon\,dx<\epsilon.$$
(5.3)

And there exists $D(\epsilon)>0$, depending only on $\epsilon$, such that $K^*(x)\le D(\epsilon)$ for $|x|\ge\delta_\epsilon$. Then

$$f_n^\epsilon\le\bigl(1+C(\epsilon)\bigr)D(\epsilon)(u^+)^p,\quad |x+y_n|\ge\delta_\epsilon.$$

Since $f_n^\epsilon\to0$ a.e. on $\mathbb{R}^N$, the Lebesgue dominated convergence theorem gives $\int_{|x+y_n|\ge\delta_\epsilon}f_n^\epsilon\,dx\to0$, $n\to\infty$. This together with (5.3) yields

$$\limsup_{n\to\infty}\int_{\mathbb{R}^N}f_n^\epsilon\,dx\le\epsilon.$$

The rest of the proof is the same as that of [11, Lemma 1.32]. □

Lemma 5.6 If

$$u_n\rightharpoonup u\ \text{in } H^1(\mathbb{R}^N),\qquad u_n\to u\ \text{a.e. on } \mathbb{R}^N,\qquad J(u_n)\to c,\qquad J'(u_n)\to0\ \text{in } H^{-1}(\mathbb{R}^N),$$

then $J'(u)=0$ in $H^{-1}(\mathbb{R}^N)$ and $v_n:=u_n-u$ satisfies

$$\|v_n\|_A^2=\|u_n\|_A^2-\|u\|_A^2+o(1),\qquad J(v_n)\to c-J(u),\qquad J'(v_n)\to0\ \text{in } H^{-1}(\mathbb{R}^N).$$

Proof (1) Since $u_n\rightharpoonup u$ in $H^1(\mathbb{R}^N)$, we get that, as $n\to\infty$,

$$\|v_n\|_A^2-\|u_n\|_A^2=(u_n-u,u_n-u)_A-\|u_n\|_A^2=\|u\|_A^2-2(u_n,u)_A\to-\|u\|_A^2.$$

Therefore,

$$\|v_n\|_A^2=\|u_n\|_A^2-\|u\|_A^2+o(1).$$
(5.4)

(2) Lemma 5.5 implies

$$\int_{\mathbb{R}^N}K^*(x)(v_n^+)^p\,dx=\int_{\mathbb{R}^N}K^*(x)(u_n^+)^p\,dx-\int_{\mathbb{R}^N}K^*(x)(u^+)^p\,dx+o(1).$$
(5.5)

By (5.4), (5.5) and the assumption $J(u_n)\to c$, we get that

$$J(v_n)\to c-J(u),\quad n\to\infty.$$

(3) Since $J'(u_n)\to0$ in $H^{-1}(\mathbb{R}^N)$ and $u_n\rightharpoonup u$, it is easy to verify that $J'(u)=0$. For $h\in H^1(\mathbb{R}^N)$,

$$\langle J'(v_n),h\rangle=(v_n,h)_A-\int_{\mathbb{R}^N}K^*(x)(v_n^+)^{p-1}h\,dx=(u_n,h)_A-(u,h)_A-\int_{\mathbb{R}^N}K^*(x)(v_n^+)^{p-1}h\,dx.$$
(5.6)

By Lemma 5.4, we have

$$\sup_{\|h\|\le1}\Bigl|\int_{\mathbb{R}^N}K^*(x)(v_n^+)^{p-1}h\,dx-\int_{\mathbb{R}^N}K^*(x)(u_n^+)^{p-1}h\,dx+\int_{\mathbb{R}^N}K^*(x)(u^+)^{p-1}h\,dx\Bigr|\to0,\quad n\to\infty.$$
(5.7)

Combining (5.6) and (5.7) leads to $J'(v_n)=J'(u_n)-J'(u)+o(1)$. Then, by $J'(u_n)\to0$ in $H^{-1}(\mathbb{R}^N)$ and $J'(u)=0$, we obtain that $J'(v_n)\to0$ in $H^{-1}(\mathbb{R}^N)$. □

Lemma 5.7 If $|y_n|\to\infty$ and, as $n\to\infty$,

$$u_n(\cdot+y_n)\rightharpoonup u\ \text{in } H^1(\mathbb{R}^N),\qquad u_n(\cdot+y_n)\to u\ \text{a.e. on } \mathbb{R}^N,\qquad J(u_n)\to c,\qquad J'(u_n)\to0\ \text{in } H^{-1}(\mathbb{R}^N),$$

then there exists $\theta\in\mathbb{R}^N$ with $|\theta|=1$ such that $J_\theta'(u)=0$, and $v_n=u_n-u(\cdot-y_n)$ satisfies

$$\|v_n\|^2=\|u_n\|^2-\|u\|^2+o(1),\qquad J(v_n)\to c-J_\theta(u),\qquad J'(v_n)\to0\ \text{in } H^{-1}(\mathbb{R}^N).$$

Proof We divide the proof into several steps.

(1) Since $u_n(\cdot+y_n)\rightharpoonup u$ in $H^1(\mathbb{R}^N)$, it is clear that

$$\|v_n\|^2=\bigl\|v_n(\cdot+y_n)\bigr\|^2=\bigl\|u_n(\cdot+y_n)\bigr\|^2+\|u\|^2-2\bigl(u_n(\cdot+y_n),u\bigr)=\|u_n\|^2-\|u\|^2+o(1).$$

(2) For any $h\in H^1(\mathbb{R}^N)$,

$$\langle J'(u_n),h(\cdot-y_n)\rangle=\bigl(u_n,h(\cdot-y_n)\bigr)_A-\int_{\mathbb{R}^N}K^*(x)(u_n^+)^{p-1}h(\cdot-y_n)\,dx.$$
(5.8)

By the definition of the inner product $(\cdot,\cdot)_A$ (see (3.11)), we have

$$\begin{aligned}
\bigl(u_n,h(\cdot-y_n)\bigr)_A={}&\int_{\mathbb{R}^N}\nabla u_n\cdot\nabla h(\cdot-y_n)\,dx+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\frac{(x\cdot\nabla u_n)\bigl(x\cdot\nabla h(\cdot-y_n)\bigr)}{|x|^2}\,dx+\int_{\mathbb{R}^N}V^*(x)u_nh(\cdot-y_n)\,dx\\
={}&\int_{\mathbb{R}^N}\nabla u_n(\cdot+y_n)\cdot\nabla h\,dx+a\int_{\mathbb{R}^N}u_n(\cdot+y_n)h\,dx+\int_{\mathbb{R}^N}\bigl(V^*(x+y_n)-a\bigr)u_n(\cdot+y_n)h\,dx\\
&+\Bigl(\frac{b^2}{4}-b\Bigr)\int_{\mathbb{R}^N}\Biggl(\frac{x/|y_n|+y_n/|y_n|}{\bigl|x/|y_n|+y_n/|y_n|\bigr|}\cdot\nabla u_n(\cdot+y_n)\Biggr)\Biggl(\frac{x/|y_n|+y_n/|y_n|}{\bigl|x/|y_n|+y_n/|y_n|\bigr|}\cdot\nabla h\Biggr)\,dx\\
:={}&\mathrm{I}+\mathrm{II}+\mathrm{III}.
\end{aligned}$$
(5.9)

Since $u_n(\cdot+y_n)\rightharpoonup u$ in $H^1(\mathbb{R}^N)$, we have

$$\mathrm{I}=\int_{\mathbb{R}^N}\nabla u_n(\cdot+y_n)\cdot\nabla h\,dx+a\int_{\mathbb{R}^N}u_n(\cdot+y_n)h\,dx\to\int_{\mathbb{R}^N}\nabla u\cdot\nabla h\,dx+a\int_{\mathbb{R}^N}uh\,dx,\quad n\to\infty.$$
(5.10)

By assumption (A2) and the definition of $V^*$, we have $\lim_{|x|\to\infty}V^*(x)=a$. This yields

$$\sup_n\int_{|x|\ge R}\bigl|V^*(x)-a\bigr|\,\bigl|h(x-y_n)\bigr|^2\,dx\to0,\quad R\to\infty.$$

Moreover, (2.8) together with the fact that $|y_n|\to\infty$ yields that, for any fixed $R>0$,

$$\int_{|x|<R}\bigl|V^*(x)-a\bigr|\,\bigl|h(\cdot-y_n)\bigr|^2\,dx\le C\Bigl(\int_{|x|<R}\bigl|\nabla h(\cdot-y_n)\bigr|^2\,dx+\int_{|x|<R}\bigl|h(\cdot-y_n)\bigr|^2\,dx\Bigr)\to0,\quad n\to\infty.$$

Combining the above two limits leads to

$$\int_{\mathbb{R}^N}\bigl|V^*(x+y_n)-a\bigr|\,|h|^2\,dx\to0,\quad n\to\infty.$$
(5.11)

By (5.11) and the Hölder inequality, we have

$$|\mathrm{II}|=\Bigl|\int_{\mathbb{R}^N}\bigl(V^*(x+y_n)-a\bigr)u_n(\cdot+y_n)h\,dx\Bigr|\le\Bigl(\int_{\mathbb{R}^N}\bigl|V^*(x+y_n)-a\bigr|\,u_n^2(\cdot+y_n)\,dx\Bigr)^{\frac12}\Bigl(\int_{\mathbb{R}^N}\bigl|V^*(x+y_n)-a\bigr|\,h^2\,dx\Bigr)^{\frac12}\le C\Bigl(\int_{\mathbb{R}^N}\bigl|V^*(x+y_n)-a\bigr|\,h^2\,dx\Bigr)^{\frac12}\to0,\quad n\to\infty.$$
(5.12)

Since $|\nabla h|\in L^2(\mathbb{R}^N)$, for any $\epsilon>0$ there exists $R_\epsilon>0$ such that

$$\int_{\mathbb{R}^N\setminus\{|x|<R_\epsilon\}}|\nabla h|^2\,dx<\epsilon.$$

It follows that

$$\int_{\mathbb{R}^N\setminus\{|x|<R_\epsilon\}}\frac{\bigl|\bigl(x/|y_n|+y_n/|y_n|\bigr)\cdot\nabla h\bigr|^2}{\bigl|x/|y_n|+y_n/|y_n|\bigr|^2}\,dx\le\int_{\mathbb{R}^N\setminus\{|x|<R_\epsilon\}}|\nabla h|^2\,dx<\epsilon.$$
(5.13)

Then

| R N { | x | < R ϵ } ( x | y n | + y n | y n | | x | y n | + y n | y n | | u n ( + y n ) ) ( x | y n