
Open Access

On the ℛ-boundedness for the two phase problem: compressible-incompressible model problem

Boundary Value Problems 2014, 2014:141

https://doi.org/10.1186/s13661-014-0141-3

Received: 3 March 2014

Accepted: 27 May 2014

Published: 24 September 2014

Abstract

The situation in this paper is that the Stokes equations for a compressible viscous fluid flow in the upper half-space are coupled, via inhomogeneous interface conditions, with the Stokes equations for an incompressible viscous fluid flow in the lower half-space; this is the model problem for the evolution of compressible and incompressible viscous fluid flows with a sharp interface. We show the existence of ℛ-bounded solution operators for the corresponding generalized resolvent problem, which implies the generation of an analytic semigroup and maximal L_p-L_q regularity for the corresponding time dependent problem with the help of Weis' operator valued Fourier multiplier theorem. The problem was studied by Denisova (Interfaces Free Bound. 2(3):283-312, 2000) under some restriction on the viscosity coefficients, and one of our purposes is to eliminate that assumption.

MSC: 35Q35, 76T10.

Keywords

ℛ-boundedness; generalized resolvent problem; model problem; Stokes equations; compressible-incompressible two phase problem

1 Introduction

This paper is concerned with the evolution of compressible and incompressible viscous fluids separated by a sharp interface. Typical examples of the physical interpretation of our problem are the evolution of a bubble in an incompressible fluid flow, or a drop in a volume of gas. The problem is formulated as follows: Let Ω_± be two domains. The region Ω_+ is occupied by a compressible barotropic viscous fluid and the region Ω_− by an incompressible viscous fluid. Let Γ_± and S_± be the boundaries of Ω_± such that Γ_± ∩ S_± = ∅. We assume that Γ_+ = Γ_− and S_+ ∩ S_− = ∅. We may assume that one of S_± is an empty set, or that both of S_± are empty sets. Let Γ_t, S_t^±, and Ω_t^± be the time evolutions of Γ = Γ_+ = Γ_−, S_±, and Ω_±, respectively, where t is the time variable. We assume that the two fluids are immiscible, so that Ω_t^+ ∩ Ω_t^− = ∅ for any t ≥ 0. Moreover, we assume that no phase transitions occur, and for mathematical simplicity we do not take into account the surface tension on the interface Γ_t and the free boundary S_t^−. Thus, the motion of the fluids is governed by the following system of equations:
\[
\begin{cases}
\rho_+(\partial_t u_+ + u_+\cdot\nabla u_+) - \mathrm{Div}\,S_+(u_+, P(\rho_+)) = 0, \quad \partial_t\rho_+ + \mathrm{div}\,(\rho_+ u_+) = 0 & \text{in } \Omega_t^+,\\
\rho_-(\partial_t u_- + u_-\cdot\nabla u_-) - \mathrm{Div}\,S_-(u_-, \pi_-) = 0, \quad \mathrm{div}\,u_- = 0 & \text{in } \Omega_t^-,\\
S_+(u_+, P(\rho_+))\,n_t|_{\Gamma_t}^+ - S_-(u_-, \pi_-)\,n_t|_{\Gamma_t}^- = 0, \quad u_+|_{\Gamma_t}^+ - u_-|_{\Gamma_t}^- = 0,\\
u_+|_{S_+} = 0, \quad S_-(u_-, \pi_-)\,n_t'|_{S_t^-} = 0
\end{cases}
\]
(1.1)
for t ∈ (0, T), subject to the initial conditions
\[
(u_+, \rho_+)|_{t=0} = (u_{+0}, \rho_{+0}) \ \text{in } \Omega_+, \qquad u_-|_{t=0} = u_{-0} \ \text{in } \Omega_-.
\]
(1.2)
Here, ∂_t = ∂/∂t, ρ_− is a positive constant denoting the mass density of the reference domain Ω_−, P is a pressure function, and u_± = (u_{±1}, …, u_{±N}) (N ≥ 2), ρ_+, and π_− are the unknown velocities, scalar mass density, and scalar pressure, respectively. Moreover, S_± are stress tensors defined by
\[
S_+(u_+, \pi_+) = \mu_+ D(u_+) + (\nu_+ - \mu_+)\,\mathrm{div}\,u_+\,\mathbf{I} - \pi_+\mathbf{I}, \qquad
S_-(u_-, \pi_-) = \mu_- D(u_-) - \pi_-\mathbf{I},
\]
(1.3)
where D(v) denotes the doubled strain tensor whose (i, j) component is D_{ij}(v) = ∂_i v_j + ∂_j v_i with ∂_i = ∂/∂x_i, and we set div v = ∑_{j=1}^N ∂_j v_j and v·∇ = ∑_{j=1}^N v_j ∂_j for any vector of functions v = (v_1, …, v_N). Also, for any matrix field K with (i, j) components K_{ij}, the quantity Div K is an N-vector whose i-th component is ∑_{j=1}^N ∂_j K_{ij}. Finally, I stands for the N × N identity matrix, n_t for the unit normal to Γ_t, n_t′ for the unit outward normal to S_t^−, and μ_± and ν_+ are the first and second viscosity coefficients, respectively, which are assumed to be constant and to satisfy the condition
μ ± > 0 , ν + > 0 ,
(1.4)
and f|_{Γ_t}^± and f|_{S_t^-} are defined by
\[
f|_{\Gamma_t}^{\pm}(x_0) = \lim_{\substack{x \to x_0 \\ x \in \Omega_t^{\pm}}} f(x) \quad (x_0 \in \Gamma_t), \qquad
f|_{S_t^-}(x_0) = \lim_{\substack{x \to x_0 \\ x \in \Omega_t^-}} f(x) \quad (x_0 \in S_t^-).
\]
Aside from the dynamical system (1.1), the following kinematic conditions on Γ_t and S_t^- are satisfied:
\[
\Gamma_t = \{x \in \mathbf{R}^N \mid x = x(\xi, t)\ (\xi \in \Gamma)\}, \qquad
S_t^- = \{x \in \mathbf{R}^N \mid x = x(\xi, t)\ (\xi \in S_-)\}.
\]
(1.5)
Here, x = x(ξ, t) is the solution to the Cauchy problem
\[
\frac{dx}{dt} = u(x, t) \quad (t > 0), \qquad x|_{t=0} = \xi \in \overline{\Omega}
\]

with u(x, t) = u_+(x, t) for x ∈ Ω_t^+ and u(x, t) = u_−(x, t) for x ∈ Ω_t^−. This expresses the fact that the interface Γ_t and the free boundary S_t^- consist of the same fluid particles for all t > 0: particles do not leave them, and no particles from the interiors of Ω_t^± are incident on them. In particular, we exclude mass transportation through the interface Γ_t, because we assume that the two fluids are immiscible.
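The Lagrangian description above can be illustrated numerically. The following is a minimal sketch (not from the paper) of tracking a particle along dx/dt = u(x, t) by forward Euler; for a divergence-free shear flow that is tangential to the plane {x_N = 0}, a particle starting on that plane stays on it, mirroring the invariance of the interface under the particle flow.

```python
import numpy as np

def track_particle(u, xi, t_end, n_steps=1000):
    """Forward-Euler integration of the Lagrangian flow dx/dt = u(x, t),
    x(0) = xi, i.e. the particle path x(xi, t) that defines Gamma_t and S_t."""
    x = np.asarray(xi, dtype=float)
    dt = t_end / n_steps
    t = 0.0
    for _ in range(n_steps):
        x = x + dt * np.asarray(u(x, t), dtype=float)
        t += dt
    return x

# A divergence-free shear flow u = (x_2, 0) in the plane: the line
# {x_2 = 0} is invariant, so a particle starting on it stays on it,
# just as interface particles stay on the interface.
u = lambda x, t: (x[1], 0.0)
x_final = track_particle(u, [1.0, 0.0], t_end=1.0)
```

With the particle starting at (1, 0) on the invariant line, the Euler steps never move it, so the computed path stays on {x_2 = 0} exactly.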

Denisova [1] proved a local-in-time unique existence theorem for problem (1.1) with surface tension on Γ_t under the assumptions that μ_+ < μ_−, that μ_−/ρ_− < μ_+/R with some positive constant R, and that Ω_+ is bounded and Ω_− = R^N ∖ Ω̄_+. Here, ρ_− is a positive constant describing the mass density of the reference body Ω_−. Thus, in [1], both of S_± are empty sets. The purpose of our study is to prove a local-in-time unique existence theorem in a general uniform domain under assumption (1.4). In particular, the assumption on the viscosity coefficients is weaker than that of Denisova [1] and is the one widely accepted in the study of fluid dynamics.

As for related topics on the two phase problem for viscous fluid flows, the incompressible-incompressible case has been studied in [2]–[11] and the compressible-compressible case in [12], [13], as far as the authors know.

To prove a local in time existence theorem for (1.1), we transform (1.1) to the equations in fixed domains Ω ± by using the Lagrange transform (cf. Denisova [1]), so that the key step is to prove the maximal regularity for the linearized problem
\[
\begin{cases}
\hat\gamma_0^+\,\partial_t u_+ - \mathrm{Div}\,S_+(u_+, \hat\gamma_2^+ p_+) = g_+, \quad \partial_t p_+ + \hat\gamma_1^+\,\mathrm{div}\,u_+ = f_+ & \text{in } \Omega_+,\\
\gamma_0^-\,\partial_t u_- - \mathrm{Div}\,S_-(u_-, p_-) = g_-, \quad \mathrm{div}\,u_- = f_- & \text{in } \Omega_-,\\
S_+(u_+, \hat\gamma_2^+ p_+)\,n|_{\Gamma}^+ - S_-(u_-, p_-)\,n|_{\Gamma}^- = h, \quad u_+|_{\Gamma}^+ - u_-|_{\Gamma}^- = 0,\\
u_+|_{S_+} = 0, \quad S_-(u_-, p_-)\,n'|_{S_-} = h'
\end{cases}
\]
(1.6)
for any t ∈ (0, T), subject to the initial conditions (1.2), where f|_Γ^±(x_0) = lim_{x → x_0, x ∈ Ω_±} f(x) for x_0 ∈ Γ. Here, γ_0^- is a positive constant and the γ̂_i^+ (i = 0, 1, 2) are functions defined on Ω̄_+ such that
\[
\omega_0 \le \hat\gamma_i^+(x) \le \omega_1 \quad (x \in \overline{\Omega}_+), \qquad \nabla\hat\gamma_i^+ \in L_r(\Omega_+)
\]
for i = 0, 1, 2 with some positive constants ω_0 and ω_1 and some exponent r ∈ (N, ∞), and γ_0^- describes the mass density of the fluid occupying Ω_−. Our strategy for obtaining the maximal L_p-L_q result for (1.6) is to show the existence of an ℛ-bounded solution operator ℛ(λ) for the corresponding generalized resolvent problem:
\[
\begin{cases}
\hat\gamma_0^+\lambda\hat u_+ - \mathrm{Div}\,S_+(\hat u_+, \hat\gamma_2^+\hat p_+) = \hat g_+, \quad \lambda\hat p_+ + \hat\gamma_1^+\,\mathrm{div}\,\hat u_+ = \hat f_+ & \text{in } \Omega_+,\\
\gamma_0^-\lambda\hat u_- - \mathrm{Div}\,S_-(\hat u_-, \hat p_-) = \hat g_-, \quad \mathrm{div}\,\hat u_- = \hat f_- & \text{in } \Omega_-,\\
S_+(\hat u_+, \hat\gamma_2^+\hat p_+)\,n|_{\Gamma}^+ - S_-(\hat u_-, \hat p_-)\,n|_{\Gamma}^- = \hat h, \quad \hat u_+|_{\Gamma}^+ - \hat u_-|_{\Gamma}^- = 0,\\
\hat u_+|_{S_+} = 0, \quad S_-(\hat u_-, \hat p_-)\,n'|_{S_-} = \hat h'.
\end{cases}
\]
(1.7)
Here, f̂ denotes the Laplace transform of f with respect to t. In fact, the solutions û_± and p̂_± are represented by
\[
(\hat u_\pm, \hat p_\pm) = \mathcal R(\lambda)\big(\hat f_\pm(\lambda), \hat g_\pm(\lambda), \hat h(\lambda), \hat h'(\lambda)\big),
\]
so that, roughly speaking, we can represent the solutions (u_±(t), p_±(t)) of the non-stationary problem (1.6) by
\[
(u_\pm(t), p_\pm(t)) = \mathcal L^{-1}\big[\mathcal R(\lambda)\big(\hat f_\pm(\lambda), \hat g_\pm(\lambda), \hat h(\lambda), \hat h'(\lambda)\big)\big](t)
\]
with the inverse Laplace transform ℒ^{-1}. Thus, we get the maximal L_p-L_q regularity result
\[
\int_0^\infty e^{-p\gamma t}\Big\{\|(p_+(\cdot, t), \partial_t p_+(\cdot, t))\|_{W_q^1(\Omega_+)} + \sum_{\ell = \pm}\big(\|\partial_t u_\ell(\cdot, t)\|_{L_q(\Omega_\ell)} + \|u_\ell(\cdot, t)\|_{W_q^2(\Omega_\ell)}\big)\Big\}^p\,dt
\le C\,\{\text{suitable norms of the initial data and right members in (1.6)}\} \quad (1 < p, q < \infty)
\]

for some positive constants γ and C, with the help of the Weis operator valued Fourier multiplier theorem [14]. To construct an ℛ-bounded solution operator for (1.7), problem (1.7) is reduced locally to model problems in a neighborhood of an interface point, an interior point, or a boundary point by means of the localization technique and a partition of unity. The model problems for the interior point and the boundary point have been studied, but the model problem for the interface point was studied only by Denisova [1], under some restriction on the viscosity coefficients. Moreover, she studied the problem in the L_2 framework, where the Plancherel theorem is applicable. Our final goal, however, is to treat the nonlinear problem (1.1) under (1.4) and (1.5) in the maximal L_p-L_q regularity class, so we need different ideas. The core of our approach is the construction of an ℛ-bounded solution operator for (1.7). In this paper we construct the ℛ-bounded solution operator for the model problem, and in the forthcoming paper [15] we construct an ℛ-bounded solution operator for (1.7) in a domain. Moreover, in [15] the maximal L_p-L_q regularity in a domain is derived automatically with the help of the Weis operator valued Fourier multiplier theorem, so that a local in time unique existence theorem is proved by the usual contraction mapping principle based on the maximal L_p-L_q regularity.

Now we formulate the problem studied in this paper and state the main results. Let R_+^N, R_−^N, and R_0^N be the upper half-space, the lower half-space, and their common boundary, defined by
\[
\mathbf{R}_\pm^N = \{x = (x_1, \dots, x_N) \in \mathbf{R}^N \mid \pm x_N > 0\}, \qquad
\mathbf{R}_0^N = \{x = (x_1, \dots, x_N) \in \mathbf{R}^N \mid x_N = 0\}.
\]
In this paper, we consider the following model problem:
\[
\begin{cases}
\lambda u_+ - (\gamma_0^+)^{-1}\,\mathrm{Div}\,S_+(u_+, \gamma_2^+ p_+) = g_+, \quad \lambda p_+ + \gamma_1^+\,\mathrm{div}\,u_+ = f_+ & \text{in } \mathbf{R}_+^N,\\
\lambda u_- - (\gamma_0^-)^{-1}\,\mathrm{Div}\,S_-(u_-, p_-) = g_-, \quad \mathrm{div}\,u_- = 0 & \text{in } \mathbf{R}_-^N,\\
S_+(u_+, \gamma_2^+ p_+)\,n|_{x_N = 0+} - S_-(u_-, p_-)\,n|_{x_N = 0-} = h, \quad u_+|_{x_N = 0+} - u_-|_{x_N = 0-} = k & \text{on } \mathbf{R}_0^N.
\end{cases}
\]
(1.8)
Throughout the paper, n = (0, …, 0, −1), and γ_0^±, γ_1^+, and γ_2^+ are fixed positive constants for which condition (1.4) holds. Substituting the relation p_+ = (f_+ − γ_1^+ div u_+)λ^{-1} into the equations in (1.8), we have
\[
\lambda u_+ - (\gamma_0^+)^{-1}\,\mathrm{Div}\big[\mu_+ D(u_+) + (\nu_+ - \mu_+ + \gamma_1^+\gamma_2^+\lambda^{-1})\,\mathrm{div}\,u_+\,\mathbf{I}\big] = g_+ - (\gamma_0^+)^{-1}\gamma_2^+\lambda^{-1}\nabla f_+,
\]
\[
\big(\mu_+ D(u_+) + (\nu_+ - \mu_+ + \gamma_1^+\gamma_2^+\lambda^{-1})\,\mathrm{div}\,u_+\,\mathbf{I}\big)n|_{x_N = 0+} - S_-(u_-, p_-)\,n|_{x_N = 0-} = h + \gamma_2^+\lambda^{-1}f_+ n.
\]
Thus, renaming g_+ − (γ_0^+)^{-1}γ_2^+λ^{-1}∇f_+ and h + γ_2^+λ^{-1}f_+ n as g_+ and h, respectively, and defining S_δ^+(u_+) by
\[
S_\delta^+(u_+) = \mu_+ D(u_+) + (\nu_+ - \mu_+ + \delta)\,\mathrm{div}\,u_+\,\mathbf{I},
\]
(1.9)
we mainly consider the following problem:
\[
\begin{cases}
\lambda u_+ - (\gamma_0^+)^{-1}\,\mathrm{Div}\,S_\delta^+(u_+) = g_+ & \text{in } \mathbf{R}_+^N,\\
\lambda u_- - (\gamma_0^-)^{-1}\,\mathrm{Div}\,S_-(u_-, p_-) = g_-, \quad \mathrm{div}\,u_- = 0 & \text{in } \mathbf{R}_-^N,\\
S_\delta^+(u_+)\,n|_{x_N = 0+} - S_-(u_-, p_-)\,n|_{x_N = 0-} = h, \quad u_+|_{x_N = 0+} - u_-|_{x_N = 0-} = k & \text{on } \mathbf{R}_0^N.
\end{cases}
\]
(1.10)

Here, δ is not only γ_1^+γ_2^+λ^{-1} but is also allowed to be a more general complex number. More precisely, we consider the following three cases for δ and λ:

(C1) δ = γ_1^+γ_2^+λ^{-1}, λ ∈ Σ_{ϵ,λ_0} ∩ K_ϵ.

(C2) δ ∈ Σ_ϵ with Re δ < 0, λ ∈ C with |λ| ≥ λ_0 and Re λ ≥ −|Re δ/Im δ| |Im λ|.

(C3) δ ∈ Σ_ϵ with Re δ ≥ 0, λ ∈ C with |λ| ≥ λ_0 and Re λ ≥ −λ_0 |Im λ|.

Here, Σ_ϵ = {λ ∈ C ∖ {0} : |arg λ| ≤ π − ϵ} with 0 < ϵ < π/2, Σ_{ϵ,λ_0} = {λ ∈ Σ_ϵ : |λ| ≥ λ_0}, and
\[
K_\epsilon = \{\lambda \in \mathbf{C} : (\mathrm{Re}\,\lambda + \gamma_1^+\gamma_2^+\nu_+^{-1} + \epsilon)^2 + (\mathrm{Im}\,\lambda)^2 \ge (\gamma_1^+\gamma_2^+\nu_+^{-1} + \epsilon)^2\}.
\]
(1.11)
We define Γ_{ϵ,λ_0} by
\[
\Gamma_{\epsilon,\lambda_0} =
\begin{cases}
\Sigma_{\epsilon,\lambda_0} \cap K_\epsilon & \text{in case (C1)},\\
\{\lambda \in \mathbf{C} : |\lambda| \ge \lambda_0,\ \mathrm{Re}\,\lambda \ge -|\mathrm{Re}\,\delta/\mathrm{Im}\,\delta|\,|\mathrm{Im}\,\lambda|\} & \text{in case (C2)},\\
\{\lambda \in \mathbf{C} : |\lambda| \ge \lambda_0,\ \mathrm{Re}\,\lambda \ge -\lambda_0\,|\mathrm{Im}\,\lambda|\} & \text{in case (C3)}.
\end{cases}
\]
(1.12)

The case (C1) is used to prove the existence of ℛ-bounded solution operators for (1.8), while the cases (C2) and (C3) are used in a homotopy argument proving the exponential stability of the analytic semigroup in a bounded domain. Such a homotopy argument already appeared in [16] and [17] in the non-slip condition case. In (C2), we note that Im δ ≠ 0 whenever δ ∈ Σ_ϵ with Re δ < 0.

In case (C1), |δ| = |γ_1^+γ_2^+λ^{-1}| ≤ γ_1^+γ_2^+λ_0^{-1}. On the other hand, in cases (C2) and (C3), we assume that |δ| ≤ δ_0 for some δ_0 > 0. Thus, in all cases we assume that
\[
|\delta| \le \max(\gamma_1^+\gamma_2^+\lambda_0^{-1}, \delta_0).
\]
(1.13)

We may include the case γ_1^+γ_2^+ = 0 in (1.9), which corresponds to the Lamé system. We may also consider the case div u_- = f_- in (1.8) under the condition that f_- ∈ W_q^1(R_-^N) and f_- = div F_- with some F_- ∈ L_q(R_-^N)^N. In fact, we first solve the equation div u_- = f_- in R_-^N, which transfers the problem to the case f_- = 0 (cf. Shibata [18], Section 3). Thus, for simplicity, we only consider the case f_- = 0 in this paper.

Before stating our main results, we introduce several symbols and function spaces used throughout the paper. For the derivatives of a scalar f and an N-vector g = (g_1, …, g_N), we use the following symbols:
\[
\nabla f = (\partial_1 f, \dots, \partial_N f), \quad \nabla^2 f = (\partial_i\partial_j f \mid i, j = 1, \dots, N), \quad
\nabla g = (\partial_i g_j \mid i, j = 1, \dots, N), \quad \nabla^2 g = (\partial_i\partial_j g_k \mid i, j, k = 1, \dots, N).
\]

For any Banach space X with norm ‖·‖_X, X^d denotes the d-fold product space of X, while its norm is denoted by ‖·‖_X instead of ‖·‖_{X^d} for the sake of simplicity. For any domain D, L_q(D) and W_q^m(D) denote the usual Lebesgue and Sobolev spaces, while ‖·‖_{L_q(D)} and ‖·‖_{W_q^m(D)} denote their norms, respectively. We set Ŵ_q^1(R_-^N) = {θ ∈ L_{q,loc}(R_-^N) : ∇θ ∈ L_q(R_-^N)^N}. For any two Banach spaces X and Y, ℒ(X, Y) denotes the set of all bounded linear operators from X into Y, and Hol(U, X) denotes the set of all X-valued holomorphic functions defined on U. The letter C denotes a generic constant, and C_{a,b,…} denotes a constant depending on a, b, … ; the values of C and C_{a,b,…} may change from line to line. N and C denote the sets of all natural numbers and complex numbers, respectively, and we set N_0 = N ∪ {0}. For any multi-index α = (α_1, …, α_N) ∈ N_0^N, we set ∂_x^α = (∂/∂x_1)^{α_1} ⋯ (∂/∂x_N)^{α_N}.

We introduce the definition of ℛ-boundedness.

Definition 1.1

A family of operators 𝒯 ⊂ ℒ(X, Y) is called ℛ-bounded on ℒ(X, Y) if there exist constants C > 0 and q ∈ [1, ∞) such that for any n ∈ N, {T_j}_{j=1}^n ⊂ 𝒯, {x_j}_{j=1}^n ⊂ X, and any sequence {r_j(u)}_{j=1}^n of independent, symmetric, {−1, 1}-valued random variables on [0, 1], we have the inequality
\[
\Big\{\int_0^1 \Big\|\sum_{j=1}^n r_j(u)T_j x_j\Big\|_Y^q\,du\Big\}^{1/q}
\le C\Big\{\int_0^1 \Big\|\sum_{j=1}^n r_j(u)x_j\Big\|_X^q\,du\Big\}^{1/q}.
\]

The smallest such C is called the ℛ-bound of 𝒯 and is denoted by ℛ_{ℒ(X,Y)}(𝒯).
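Definition 1.1 can be probed numerically. The sketch below (illustrative, not from the paper) estimates the two Rademacher averages in the defining inequality by Monte Carlo for the simplest case X = Y = R and scalar multiplication operators T_j x = t_j x, for which the ℛ-bound is just sup_j |t_j|.

```python
import numpy as np

rng = np.random.default_rng(0)

def rademacher_average(vectors, n_samples=4000, q=2):
    """Monte Carlo estimate of {E || sum_j r_j v_j ||^q}^{1/q} for
    independent symmetric +-1 (Rademacher) variables r_1, ..., r_n."""
    vs = np.asarray(vectors, dtype=float)
    total = 0.0
    for _ in range(n_samples):
        r = rng.choice([-1.0, 1.0], size=vs.shape[0])
        total += np.linalg.norm(r @ vs) ** q
    return (total / n_samples) ** (1.0 / q)

# Scalar multiplication operators T_j x = t_j x on X = Y = R:
# here the R-bound is sup_j |t_j|, so the defining inequality
# must hold with C = max |t_j|.
t = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
lhs = rademacher_average((t * x)[:, None])   # randomized sum of T_j x_j
rhs = rademacher_average(x[:, None])         # randomized sum of x_j
C = np.abs(t).max()
```

For q = 2 and scalars, independence gives E|∑ r_j t_j x_j|² = ∑ t_j²x_j² ≤ (max_j |t_j|)² ∑ x_j², so the inequality holds with C = max_j |t_j|, which the Monte Carlo estimates confirm with a comfortable margin.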

The following theorem is our main result in this paper.

Theorem 1.2

Let 1 < q < ∞, 0 < ϵ < π/2, and λ_0 > 0. Let Γ_{ϵ,λ_0} be the set defined in (1.12). Let X_q and 𝒳_q be the sets defined by
\[
X_q = \{(g_+, g_-, h, k) \mid g_\pm \in L_q(\mathbf{R}_\pm^N)^N,\ h \in W_q^1(\mathbf{R}^N)^N,\ k \in W_q^2(\mathbf{R}^N)^N\},
\]
\[
\mathcal X_q = \{F = (F_1^+, F_1^-, F_2, F_3, F_4, F_5, F_6) \mid F_1^\pm \in L_q(\mathbf{R}_\pm^N)^N,\ F_2, F_5 \in L_q(\mathbf{R}^N)^{N^2},\ F_3, F_6 \in L_q(\mathbf{R}^N)^N,\ F_4 \in L_q(\mathbf{R}^N)^{N^3}\}.
\]
Then there exist operator families
\[
\mathcal A_\pm(\lambda) \in \mathrm{Hol}\big(\Gamma_{\epsilon,\lambda_0}, \mathcal L(\mathcal X_q, W_q^2(\mathbf{R}_\pm^N)^N)\big), \qquad
\mathcal P(\lambda) \in \mathrm{Hol}\big(\Gamma_{\epsilon,\lambda_0}, \mathcal L(\mathcal X_q, \hat W_q^1(\mathbf{R}_-^N))\big)
\]
such that u_± = 𝒜_±(λ)F_λ(g_+, g_-, h, k) and p_- = 𝒫(λ)F_λ(g_+, g_-, h, k) solve problem (1.10) uniquely for any (g_+, g_-, h, k) ∈ X_q and λ ∈ Γ_{ϵ,λ_0}, where F_λ(g_+, g_-, h, k) = (g_+, g_-, ∇h, λ^{1/2}h, ∇²k, λ^{1/2}∇k, λk).

Moreover, there exists a constant C depending on ϵ, q, and N such that
\[
\mathcal R_{\mathcal L(\mathcal X_q, L_q(\mathbf{R}_\pm^N)^{\tilde N})}\big(\{(\tau\partial_\tau)^\ell(G_\lambda\mathcal A_\pm(\lambda)) \mid \lambda \in \Gamma_{\epsilon,\lambda_0}\}\big) \le C \quad (\ell = 0, 1),
\]
\[
\mathcal R_{\mathcal L(\mathcal X_q, L_q(\mathbf{R}_-^N)^N)}\big(\{(\tau\partial_\tau)^\ell(\nabla\mathcal P(\lambda)) \mid \lambda \in \Gamma_{\epsilon,\lambda_0}\}\big) \le C \quad (\ell = 0, 1)
\]
(1.14)

with Ñ = N³ + N² + 2N and λ = γ + iτ, where G_λ is the operator defined by G_λu = (λu, γu, λ^{1/2}∇u, ∇²u).

Setting p_+ = λ^{-1}(f_+ − γ_1^+ div u_+) in (1.8), we immediately obtain the following theorem concerning problem (1.8) with the help of Theorem 1.2.

Theorem 1.3

Let 1 < q < ∞, 0 < ϵ < π/2, and λ_0 > 0. Let Γ_{ϵ,λ_0} be the set defined in (1.12). Set
\[
Y_q = \{(f_+, g_+, g_-, h, k) \mid f_+ \in W_q^1(\mathbf{R}_+^N),\ (g_+, g_-, h, k) \in X_q\},
\]
\[
\mathcal Y_q = \{(F_0, F_1^+, F_1^-, F_2, F_3, F_4, F_5, F_6) \mid F_0 \in W_q^1(\mathbf{R}_+^N),\ (F_1^+, F_1^-, F_2, F_3, F_4, F_5, F_6) \in \mathcal X_q\}.
\]
Then there exist operator families
\[
\mathcal P_+(\lambda) \in \mathrm{Hol}\big(\Gamma_{\epsilon,\lambda_0}, \mathcal L(\mathcal Y_q, W_q^1(\mathbf{R}_+^N))\big), \quad
\mathcal U_\pm(\lambda) \in \mathrm{Hol}\big(\Gamma_{\epsilon,\lambda_0}, \mathcal L(\mathcal Y_q, W_q^2(\mathbf{R}_\pm^N)^N)\big), \quad
\mathcal P(\lambda) \in \mathrm{Hol}\big(\Gamma_{\epsilon,\lambda_0}, \mathcal L(\mathcal Y_q, \hat W_q^1(\mathbf{R}_-^N))\big)
\]
such that for any (f_+, g_+, g_-, h, k) ∈ Y_q and λ ∈ Γ_{ϵ,λ_0},
\[
p_+ = \mathcal P_+(\lambda)F_\lambda(f_+, g_+, g_-, h, k), \qquad
u_\pm = \mathcal U_\pm(\lambda)F_\lambda(f_+, g_+, g_-, h, k), \qquad
p_- = \mathcal P(\lambda)F_\lambda(f_+, g_+, g_-, h, k)
\]
solve problem (1.8) uniquely, where F_λ(f_+, g_+, g_-, h, k) = (f_+, g_+, g_-, ∇h, λ^{1/2}h, ∇²k, λ^{1/2}∇k, λk).

Moreover, there exists a constant C depending on ϵ, λ_0, q, and N such that
\[
\mathcal R_{\mathcal L(\mathcal Y_q, W_q^1(\mathbf{R}_+^N)^2)}\big(\{(\tau\partial_\tau)^\ell((\lambda, \gamma)\mathcal P_+(\lambda)) \mid \lambda \in \Gamma_{\epsilon,\lambda_0}\}\big) \le C \quad (\ell = 0, 1),
\]
\[
\mathcal R_{\mathcal L(\mathcal Y_q, L_q(\mathbf{R}_\pm^N)^{\tilde N})}\big(\{(\tau\partial_\tau)^\ell(G_\lambda\mathcal U_\pm(\lambda)) \mid \lambda \in \Gamma_{\epsilon,\lambda_0}\}\big) \le C \quad (\ell = 0, 1),
\]
\[
\mathcal R_{\mathcal L(\mathcal Y_q, L_q(\mathbf{R}_-^N)^N)}\big(\{(\tau\partial_\tau)^\ell(\nabla\mathcal P(\lambda)) \mid \lambda \in \Gamma_{\epsilon,\lambda_0}\}\big) \le C \quad (\ell = 0, 1).
\]
(1.15)

2 Solution formulas for the model problem

To prove Theorem 1.2, first we consider problem (1.10) with g ± = 0 in this section as a model problem, that is, we consider the following equations:
\[
\begin{cases}
\lambda u_+ - (\gamma_0^+)^{-1}\,\mathrm{Div}\,S_\delta^+(u_+) = 0 & \text{in } \mathbf{R}_+^N,\\
\lambda u_- - (\gamma_0^-)^{-1}\,\mathrm{Div}\,S_-(u_-, p_-) = 0, \quad \mathrm{div}\,u_- = 0 & \text{in } \mathbf{R}_-^N,\\
S_\delta^+(u_+)\,n|_{x_N = 0+} - S_-(u_-, p_-)\,n|_{x_N = 0-} = h, \quad u_+|_{x_N = 0+} - u_-|_{x_N = 0-} = k & \text{on } \mathbf{R}_0^N.
\end{cases}
\]
(2.1)
Let v̂ = F_{x′}[v](ξ′, x_N) denote the partial Fourier transform with respect to the tangential variable x′ = (x_1, …, x_{N−1}), with dual variable ξ′ = (ξ_1, …, ξ_{N−1}), defined by F_{x′}[v](ξ′, x_N) = ∫_{R^{N−1}} e^{−ix′·ξ′} v(x′, x_N) dx′. Using the formulas
\[
\mathrm{Div}\,S_\delta^+(u_+) = \mu_+\Delta u_+ + (\nu_+ + \delta)\nabla\,\mathrm{div}\,u_+, \qquad
\mathrm{Div}\,S_-(u_-, p_-) = \mu_-\Delta u_- - \nabla p_-
\]
and applying the partial Fourier transform to (2.1), we transfer problem (2.1) to the ordinary differential equations
\[
\begin{cases}
\lambda\hat u_{+j} + (\gamma_0^+)^{-1}\mu_+|\xi'|^2\hat u_{+j} - (\gamma_0^+)^{-1}\mu_+D_N^2\hat u_{+j} - (\gamma_0^+)^{-1}(\nu_+ + \delta)i\xi_j\big(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\big) = 0 & (x_N > 0),\\
\lambda\hat u_{+N} + (\gamma_0^+)^{-1}\mu_+|\xi'|^2\hat u_{+N} - (\gamma_0^+)^{-1}\mu_+D_N^2\hat u_{+N} - (\gamma_0^+)^{-1}(\nu_+ + \delta)D_N\big(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\big) = 0 & (x_N > 0),\\
\lambda\hat u_{-j} + (\gamma_0^-)^{-1}\mu_-|\xi'|^2\hat u_{-j} - (\gamma_0^-)^{-1}\mu_-D_N^2\hat u_{-j} + (\gamma_0^-)^{-1}i\xi_j\hat p_- = 0 & (x_N < 0),\\
\lambda\hat u_{-N} + (\gamma_0^-)^{-1}\mu_-|\xi'|^2\hat u_{-N} - (\gamma_0^-)^{-1}\mu_-D_N^2\hat u_{-N} + (\gamma_0^-)^{-1}D_N\hat p_- = 0 & (x_N < 0),\\
i\xi'\cdot\hat u_-' + D_N\hat u_{-N} = 0 & (x_N < 0),
\end{cases}
\]
(2.2)
subject to the boundary conditions
\[
\begin{cases}
-\mu_+\big(D_N\hat u_{+j} + i\xi_j\hat u_{+N}\big)|_{x_N = 0+} + \mu_-\big(D_N\hat u_{-j} + i\xi_j\hat u_{-N}\big)|_{x_N = 0-} = \hat h_j(0),\\
-\big\{2\mu_+D_N\hat u_{+N} + (\nu_+ - \mu_+ + \delta)\big(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\big)\big\}|_{x_N = 0+} + \big(2\mu_-D_N\hat u_{-N} - \hat p_-\big)|_{x_N = 0-} = \hat h_N(0),\\
\hat u_{+J}(0+) - \hat u_{-J}(0-) = \hat k_J(0),
\end{cases}
\]
(2.3)
where D_N = d/dx_N and iξ′·v̂′ = ∑_{ℓ=1}^{N−1} iξ_ℓ v̂_ℓ for v = (v_1, …, v_{N−1}, v_N). Here and in the following, j and J run from 1 through N − 1 and N, respectively. Applying the divergence to the first and second equations in (2.1), we have λ div u_+ − (γ_0^+)^{-1}(μ_+ + ν_+ + δ)Δ div u_+ = 0 in R_+^N and Δp_- = 0 in R_-^N, so that
\[
\big(\lambda - (\gamma_0^+)^{-1}(\mu_+ + \nu_+ + \delta)\Delta\big)\big(\lambda - (\gamma_0^+)^{-1}\mu_+\Delta\big)u_+ = 0 \ \text{in } \mathbf{R}_+^N, \qquad
\big(\lambda - (\gamma_0^-)^{-1}\mu_-\Delta\big)\Delta u_- = 0 \ \text{in } \mathbf{R}_-^N.
\]
Thus, the characteristic roots of (2.2) are
\[
A_+ = \sqrt{\gamma_0^+(\mu_+ + \nu_+ + \delta)^{-1}\lambda + A^2}, \qquad
B_\pm = \sqrt{\gamma_0^\pm\mu_\pm^{-1}\lambda + A^2}, \qquad A = |\xi'|.
\]
(2.4)
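For concreteness, the roots in (2.4) can be evaluated numerically. The sketch below (with illustrative parameter values, not taken from the paper) uses the principal branch of the complex square root, which has positive real part for c·λ + A² with c > 0 and λ in the sector Σ_ϵ, so that the exponentials e^{−B_+x_N}, e^{−A_+x_N} (x_N > 0) and e^{B_−x_N} (x_N < 0) appearing below decay.

```python
import numpy as np

def char_roots(lam, A, mu_p=1.0, nu_p=1.0, mu_m=2.0,
               gam0_p=1.0, gam0_m=1.0, delta=0.0):
    """Characteristic roots (2.4).  np.sqrt takes the principal branch,
    whose real part is positive for c*lam + A**2 with c > 0 and lam in
    the sector |arg lam| <= pi - eps.
    All parameter values here are illustrative, not from the paper."""
    A_plus = np.sqrt(gam0_p / (mu_p + nu_p + delta) * lam + A ** 2)
    B_plus = np.sqrt(gam0_p / mu_p * lam + A ** 2)
    B_minus = np.sqrt(gam0_m / mu_m * lam + A ** 2)
    return A_plus, B_plus, B_minus

lam = 4.0 * np.exp(1j * 3 * np.pi / 4)   # |arg lam| = 3*pi/4 < pi - eps
Ap, Bp, Bm = char_roots(lam, A=1.0)
```

Since c·λ + A² avoids the negative real axis for such λ, all three roots have strictly positive real part.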

To state our solution formulas for problem (2.2)-(2.3), we introduce some classes of multipliers.

Definition 2.1

Let s be a real number and let Γ_{ϵ,λ_0} be the set defined in (1.12). Set
\[
\tilde\Gamma_{\epsilon,\lambda_0} = \{(\lambda, \xi') \mid \lambda = \gamma + i\tau \in \Gamma_{\epsilon,\lambda_0},\ \xi' = (\xi_1, \dots, \xi_{N-1}) \in \mathbf{R}^{N-1}\setminus\{0\}\}.
\]
Let m(λ, ξ′) be a function defined on Γ̃_{ϵ,λ_0}.

(1) m(λ, ξ′) is called a multiplier of order s with type 1 if for any multi-index κ′ = (κ_1, …, κ_{N−1}) ∈ N_0^{N−1} and (λ, ξ′) ∈ Γ̃_{ϵ,λ_0} there exists a constant C_{κ′} depending on κ′, λ_0, ϵ, μ_±, ν_+, γ_0^-, and γ_i^+ (i = 0, 1, 2) such that we have the estimates
\[
|\partial_{\xi'}^{\kappa'}m(\lambda, \xi')| \le C_{\kappa'}(|\lambda|^{1/2} + A)^{s - |\kappa'|}, \qquad
|\partial_{\xi'}^{\kappa'}(\tau\partial_\tau m(\lambda, \xi'))| \le C_{\kappa'}(|\lambda|^{1/2} + A)^{s - |\kappa'|}.
\]
(2.5)

(2) m(λ, ξ′) is called a multiplier of order s with type 2 if for any multi-index κ′ = (κ_1, …, κ_{N−1}) ∈ N_0^{N−1} and (λ, ξ′) ∈ Γ̃_{ϵ,λ_0} there exists a constant C_{κ′} depending on κ′, λ_0, ϵ, μ_±, ν_+, γ_0^-, and γ_i^+ (i = 0, 1, 2) such that we have the estimates
\[
|\partial_{\xi'}^{\kappa'}m(\lambda, \xi')| \le C_{\kappa'}(|\lambda|^{1/2} + A)^{s}A^{-|\kappa'|}, \qquad
|\partial_{\xi'}^{\kappa'}(\tau\partial_\tau m(\lambda, \xi'))| \le C_{\kappa'}(|\lambda|^{1/2} + A)^{s}A^{-|\kappa'|}.
\]
(2.6)

Let ℳ_{s,i} be the set of all multipliers of order s with type i (i = 1, 2).

Obviously, the ℳ_{s,i} are vector spaces over C. Moreover, since (|λ|^{1/2} + A)^{−|α′|} ≤ A^{−|α′|}, by the Leibniz rule we have the following lemma immediately.

Lemma 2.2

Let s 1 , s 2 be two real numbers. Then the following three assertions hold.
(1) Given m_i ∈ ℳ_{s_i,1} (i = 1, 2), we have m_1m_2 ∈ ℳ_{s_1+s_2,1}.

(2) Given m_1 ∈ ℳ_{s_1,1} and m_2 ∈ ℳ_{s_2,2}, we have m_1m_2 ∈ ℳ_{s_1+s_2,2}.

(3) Given m_i ∈ ℳ_{s_i,2} (i = 1, 2), we have m_1m_2 ∈ ℳ_{s_1+s_2,2}.

Remark 2.3

We see easily that iξ_j ∈ ℳ_{1,2} (j = 1, …, N − 1), A ∈ ℳ_{1,2}, and A^{-1} ∈ ℳ_{−1,2}. Especially, iξ_j/A ∈ ℳ_{0,2}. Moreover, ℳ_{s,1} ⊂ ℳ_{s,2} for any s ∈ R.

In this section we show the following solution formulas for problem (2.2)-(2.3):
\[
\hat u_{+J} = \sum_{k=1}^4\hat u_{+J}^k, \qquad \hat u_{-J} = \sum_{k=1}^3\hat u_{-J}^k, \qquad
\hat p_- = e^{Ax_N}\sum_{\ell=1}^N\big[p_{\ell,0}\hat h_\ell(0) + p_{\ell,1}\hat k_\ell(0)\big],
\]
\[
\hat u_{\pm J}^1 = A\mathcal M_\pm(x_N)\sum_{\ell=1}^N\big[R_{J\ell,1}^\pm\hat h_\ell(0) + R_{J\ell,0}^\pm\hat k_\ell(0)\big], \qquad
\hat u_{\pm J}^2 = Ae^{\mp B_\pm x_N}\sum_{\ell=1}^N\big[S_{J\ell,2}^\pm\hat h_\ell(0) + S_{J\ell,1}^\pm\hat k_\ell(0)\big],
\]
\[
\hat u_{\pm J}^3 = e^{\mp B_\pm x_N}\big[T_{J,1}^\pm\hat h_J(0) + T_{J,0}^\pm\hat k_J(0)\big], \qquad
\hat u_{+j}^4 = 0, \qquad \hat u_{+N}^4 = A_+\mathcal M_+(x_N)U_{N,0}^+\hat k_N(0)
\]
(2.7)
with
\[
R_{J\ell,1}^\pm \in \mathcal M_{-1,2}, \quad R_{J\ell,0}^\pm \in \mathcal M_{0,2}, \quad
S_{J\ell,2}^\pm \in \mathcal M_{-2,2}, \quad S_{J\ell,1}^\pm \in \mathcal M_{-1,2}, \quad
T_{J,1}^\pm \in \mathcal M_{-1,1}, \quad T_{J,0}^\pm \in \mathcal M_{0,1}, \quad
U_{N,0}^+ \in \mathcal M_{0,1}, \quad p_{\ell,0} \in \mathcal M_{0,2}, \quad p_{\ell,1} \in \mathcal M_{1,2}.
\]
(2.8)
Here and in the following, ℳ_±(x_N) denote the Stokes kernels defined by
\[
\mathcal M_+(x_N) = \frac{e^{-B_+x_N} - e^{-A_+x_N}}{B_+ - A_+}, \qquad
\mathcal M_-(x_N) = \frac{e^{B_-x_N} - e^{Ax_N}}{B_- - A}.
\]
(2.9)
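Numerically, the kernel ℳ_+(x_N) in (2.9) has a removable singularity at B_+ = A_+ (its limit there is −x_N e^{−A_+x_N}), so a direct evaluation of the difference quotient loses accuracy when B_+ ≈ A_+. A stable evaluation — an implementation detail not discussed in the paper, shown here for real arguments — rewrites it via φ(z) = (e^z − 1)/z:

```python
import numpy as np

def stokes_kernel_plus(xN, A_plus, B_plus):
    """M_+(x_N) = (e^{-B_+ x_N} - e^{-A_+ x_N}) / (B_+ - A_+) from (2.9),
    shown here for real arguments.  The quotient has a removable
    singularity at B_+ = A_+ with limit -x_N e^{-A_+ x_N}; rewriting it
    with phi(z) = (e^z - 1)/z avoids cancellation when B_+ ~ A_+."""
    z = -(B_plus - A_plus) * xN
    if abs(z) < 1e-8:
        phi = 1.0 + z / 2.0          # Taylor: (e^z - 1)/z = 1 + z/2 + ...
    else:
        phi = np.expm1(z) / z
    return -xN * np.exp(-A_plus * xN) * phi

v = stokes_kernel_plus(0.5, 1.0, 2.0)          # well-separated roots
w = stokes_kernel_plus(0.5, 1.0, 1.0 + 1e-12)  # nearly equal roots
```

The identity behind the rewrite is (e^{−Bx} − e^{−Ax})/(B − A) = −x e^{−Ax} φ(−(B − A)x), which is exact for all B ≠ A and extends continuously through B = A.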
From now on, we prove (2.7). We seek solutions û_{±J} of problem (2.2)-(2.3) of the forms
\[
\hat u_{+J} = \alpha_{+J}\big(e^{-B_+x_N} - e^{-A_+x_N}\big) + \beta_{+J}e^{-B_+x_N}, \qquad
\hat u_{-J} = \alpha_{-J}\big(e^{B_-x_N} - e^{Ax_N}\big) + \beta_{-J}e^{B_-x_N}, \qquad
\hat p_- = \gamma e^{Ax_N}.
\]
(2.10)
Using the symbols B_±, we write (2.2) as follows:
\[
\begin{cases}
\mu_+B_+^2\hat u_{+j} - \mu_+D_N^2\hat u_{+j} - (\nu_+ + \delta)i\xi_j\big(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\big) = 0 & (x_N > 0),\\
\mu_+B_+^2\hat u_{+N} - \mu_+D_N^2\hat u_{+N} - (\nu_+ + \delta)D_N\big(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\big) = 0 & (x_N > 0),\\
\mu_-B_-^2\hat u_{-j} - \mu_-D_N^2\hat u_{-j} + i\xi_j\hat p_- = 0 & (x_N < 0),\\
\mu_-B_-^2\hat u_{-N} - \mu_-D_N^2\hat u_{-N} + D_N\hat p_- = 0 & (x_N < 0),\\
i\xi'\cdot\hat u_-' + D_N\hat u_{-N} = 0 & (x_N < 0).
\end{cases}
\]
(2.11)
Substituting the formulas of û_{±J} in (2.10) into (2.11) and equating the coefficients of e^{∓B_±x_N}, e^{−A_+x_N}, and e^{Ax_N}, we have
\[
\begin{aligned}
&\mu_+(A_+^2 - B_+^2)\alpha_{+j} + (\nu_+ + \delta)i\xi_j\big(i\xi'\cdot\alpha_+' - A_+\alpha_{+N}\big) = 0,\\
&\mu_+(A_+^2 - B_+^2)\alpha_{+N} - (\nu_+ + \delta)A_+\big(i\xi'\cdot\alpha_+' - A_+\alpha_{+N}\big) = 0,\\
&i\xi'\cdot\alpha_+' - \alpha_{+N}B_+ + i\xi'\cdot\beta_+' - \beta_{+N}B_+ = 0,\\
&\mu_-(A^2 - B_-^2)\alpha_{-j} + i\xi_j\gamma = 0, \qquad
\mu_-(A^2 - B_-^2)\alpha_{-N} + A\gamma = 0,\\
&i\xi'\cdot\alpha_-' + \alpha_{-N}B_- + i\xi'\cdot\beta_-' + \beta_{-N}B_- = 0, \qquad
i\xi'\cdot\alpha_-' + A\alpha_{-N} = 0.
\end{aligned}
\]
(2.12)
First, we represent iξ′·α_±′, α_{±N}, and γ in terms of iξ′·β_±′ and β_{±N}. Namely, it follows from (2.12) that
\[
\begin{aligned}
i\xi'\cdot\alpha_+' &= \frac{A^2}{A_+B_+ - A^2}\big(i\xi'\cdot\beta_+' - B_+\beta_{+N}\big), &
\alpha_{+N} &= \frac{A_+}{A_+B_+ - A^2}\big(i\xi'\cdot\beta_+' - B_+\beta_{+N}\big),\\
i\xi'\cdot\alpha_-' &= \frac{A}{B_- - A}\big(i\xi'\cdot\beta_-' + B_-\beta_{-N}\big), &
\alpha_{-N} &= -\frac{1}{B_- - A}\big(i\xi'\cdot\beta_-' + B_-\beta_{-N}\big),\\
\gamma &= -\mu_-\frac{A + B_-}{A}\big(i\xi'\cdot\beta_-' + B_-\beta_{-N}\big).
\end{aligned}
\]
(2.13)
Substituting the relations
\[
\hat u_{\pm J}(0) = \beta_{\pm J}, \qquad
D_N\hat u_{+J}(0) = (A_+ - B_+)\alpha_{+J} - B_+\beta_{+J}, \qquad
D_N\hat u_{-J}(0) = (B_- - A)\alpha_{-J} + B_-\beta_{-J}
\]
into (2.3), we have
\[
\begin{aligned}
&\beta_{+J} = \beta_{-J} + \hat k_J(0),\\
&\mu_+\big((B_+ - A_+)\alpha_{+j} + B_+\beta_{+j} - i\xi_j\beta_{+N}\big) + \mu_-\big((B_- - A)\alpha_{-j} + B_-\beta_{-j} + i\xi_j\beta_{-N}\big) = \hat h_j(0),\\
&2\mu_+\big((B_+ - A_+)\alpha_{+N} + B_+\beta_{+N}\big) + (\nu_+ - \mu_+ + \delta)\big({-i\xi'\cdot\beta_+'} + (B_+ - A_+)\alpha_{+N} + B_+\beta_{+N}\big)\\
&\qquad + 2\mu_-\big((B_- - A)\alpha_{-N} + B_-\beta_{-N}\big) - \gamma = \hat h_N(0).
\end{aligned}
\]
(2.14)
Using (2.14) and (2.13), we have
\[
\begin{aligned}
i\xi'\cdot\hat h'(0) &= L_{11}^+\big(i\xi'\cdot\beta_+'\big) + L_{11}^-\big(i\xi'\cdot\beta_-'\big) + L_{12}^+A\beta_{+N} + L_{12}^-A\beta_{-N},\\
A\hat h_N(0) &= L_{21}^+\big(i\xi'\cdot\beta_+'\big) + L_{21}^-\big(i\xi'\cdot\beta_-'\big) + L_{22}^+A\beta_{+N} + L_{22}^-\beta_{-N}
\end{aligned}
\]
with
\[
\begin{aligned}
L_{11}^+ &= \frac{\mu_+A_+(B_+^2 - A^2)}{A_+B_+ - A^2}, \qquad L_{11}^- = \mu_-(A + B_-), \qquad
L_{12}^+ = \frac{\mu_+A(2A_+B_+ - A^2 - B_+^2)}{A_+B_+ - A^2}, \qquad L_{12}^- = \mu_-(B_- - A),\\
L_{21}^+ &= A\Big\{\frac{2\mu_+A_+(B_+ - A_+)}{A_+B_+ - A^2} - \frac{(\nu_+ - \mu_+ + \delta)(A_+^2 - A^2)}{A_+B_+ - A^2}\Big\}, \qquad L_{21}^- = \mu_-(B_- - A),\\
L_{22}^+ &= \frac{(\mu_+ + \nu_+ + \delta)B_+(A_+^2 - A^2)}{A_+B_+ - A^2}, \qquad L_{22}^- = \mu_-(A + B_-)B_-.
\end{aligned}
\]
(2.15)
As is seen in Section 4, we have
\[
L_{11}^+ \in \mathcal M_{1,1}, \quad L_{11}^- \in \mathcal M_{1,2}, \quad L_{12}^\pm \in \mathcal M_{1,2}, \quad
L_{21}^\pm \in \mathcal M_{1,2}, \quad L_{22}^+ \in \mathcal M_{1,1}, \quad L_{22}^- \in \mathcal M_{2,2}.
\]
(2.16)
Noting the relation β_{+J} = β_{-J} + k̂_J(0), and setting
\[
L_{11} = L_{11}^+ + L_{11}^-, \quad L_{12} = L_{12}^+ + L_{12}^-, \quad L_{21} = L_{21}^+ + L_{21}^-, \quad L_{22} = L_{22}^+A + L_{22}^-, \qquad
\mathbf{L} = \begin{pmatrix} L_{11} & AL_{12} \\ L_{21} & L_{22} \end{pmatrix},
\]
(2.17)
we have
\[
\mathbf{L}\begin{pmatrix} i\xi'\cdot\beta_-' \\ \beta_{-N} \end{pmatrix} = \begin{pmatrix} \hat H'(0) \\ \hat H_N(0) \end{pmatrix}
\]
(2.18)
with
\[
\hat H'(0) = i\xi'\cdot\hat h'(0) - L_{11}^+\,i\xi'\cdot\hat k'(0) - L_{12}^+A\hat k_N(0), \qquad
\hat H_N(0) = A\hat h_N(0) - L_{21}^+\,i\xi'\cdot\hat k'(0) - AL_{22}^+\hat k_N(0).
\]
By Lemma 2.2 and (2.16), we see that
\[
L_{11} \in \mathcal M_{1,2}, \qquad L_{12} \in \mathcal M_{1,2}, \qquad L_{21} \in \mathcal M_{1,2}, \qquad L_{22} \in \mathcal M_{2,2}.
\]
(2.19)
The most important fact of this paper is that det L ≠ 0 for any (λ, ξ′) ∈ Γ̃_{ϵ,λ_0} and that
\[
(\det\mathbf{L})^{-1} \in \mathcal M_{-3,2}.
\]
(2.20)
This fact is proved in Section 5, which is the highlight of this paper. Since
\[
\mathbf{L}^{-1} = \frac{1}{\det\mathbf{L}}\begin{pmatrix} L_{22} & -AL_{12} \\ -L_{21} & L_{11} \end{pmatrix},
\]
we have
\[
\begin{aligned}
i\xi'\cdot\beta_-' &= \frac{1}{\det\mathbf{L}}\big(L_{22}\hat H'(0) - AL_{12}\hat H_N(0)\big)\\
&= \frac{1}{\det\mathbf{L}}\big(L_{22}\,i\xi'\cdot\hat h'(0) - A^2L_{12}\hat h_N(0) + (AL_{12}L_{21}^+ - L_{11}^+L_{22})\,i\xi'\cdot\hat k'(0) + (AL_{12}L_{22}^+ - L_{12}^+L_{22})A\hat k_N(0)\big),\\
\beta_{-N} &= \frac{1}{\det\mathbf{L}}\big(L_{11}\hat H_N(0) - L_{21}\hat H'(0)\big)\\
&= \frac{1}{\det\mathbf{L}}\big({-L_{21}}\,i\xi'\cdot\hat h'(0) + AL_{11}\hat h_N(0) + (L_{11}^+L_{21} - L_{11}L_{21}^+)\,i\xi'\cdot\hat k'(0) + (L_{12}^+L_{21} - L_{11}L_{22}^+)A\hat k_N(0)\big).
\end{aligned}
\]
(2.21)
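The inversion step in (2.21) is just Cramer's rule for the 2×2 matrix L of (2.17). As a sanity check (with arbitrary illustrative entries, not the actual symbols L_{11}, …), the explicit inverse agrees with a generic linear solve:

```python
import numpy as np

def solve_lopatinski(L11, AL12, L21, L22, H_prime, H_N):
    """Solve the 2x2 system (2.18) by the explicit inverse used in (2.21):
    L^{-1} = (1/det L) [[L22, -A*L12], [-L21, L11]].  AL12 stands for the
    (1,2) entry A*L12 of the matrix L in (2.17)."""
    det = L11 * L22 - AL12 * L21
    beta_tan = (L22 * H_prime - AL12 * H_N) / det   # i xi' . beta_-'
    beta_N = (L11 * H_N - L21 * H_prime) / det      # beta_-N
    return beta_tan, beta_N

# Cross-check against a generic linear solve with arbitrary entries.
L = np.array([[2.0 + 1.0j, 0.5], [1.0, 3.0 - 1.0j]])
H = np.array([1.0 + 0.0j, 2.0])
bt, bN = solve_lopatinski(L[0, 0], L[0, 1], L[1, 0], L[1, 1], H[0], H[1])
ref = np.linalg.solve(L, H)
```

The explicit inverse is preferable here because it exposes the factor (det L)^{-1}, whose multiplier class ℳ_{−3,2} in (2.20) drives all the estimates.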
Writing iξ′·k̂′(0) = A∑_{ℓ=1}^{N−1}(iξ_ℓ/A)k̂_ℓ(0) and using the relation β_{+J} = β_{-J} + k̂_J(0), by (2.21) we have
\[
i\xi'\cdot\beta_+' - B_+\beta_{+N} = -B_+\hat k_N(0) + \sum_{\ell=1}^N A\big(P_{\ell,1}^+\hat h_\ell(0) + P_{\ell,0}^+\hat k_\ell(0)\big), \qquad
i\xi'\cdot\beta_-' + B_-\beta_{-N} = \sum_{\ell=1}^N A\big(P_{\ell,1}^-\hat h_\ell(0) + P_{\ell,0}^-\hat k_\ell(0)\big)
\]
(2.22)
with
\[
\begin{aligned}
P_{\ell,1}^+ &= (L_{22} + B_+L_{21})\frac{i\xi_\ell}{A\det\mathbf{L}}, \qquad
P_{N,1}^+ = -\frac{AL_{12} + B_+L_{11}}{\det\mathbf{L}},\\
P_{\ell,0}^+ &= \big(AL_{12}L_{21}^+ - L_{22}L_{11}^+ - B_+(L_{11}^+L_{21} - L_{11}L_{21}^+)\big)\frac{i\xi_\ell}{A\det\mathbf{L}} + \frac{i\xi_\ell}{A},\\
P_{N,0}^+ &= \frac{AL_{12}L_{22}^+ - L_{12}^+L_{22} - B_+(L_{12}^+L_{21} - L_{11}L_{22}^+)}{\det\mathbf{L}},\\
P_{\ell,1}^- &= (L_{22} - B_-L_{21})\frac{i\xi_\ell}{A\det\mathbf{L}}, \qquad
P_{N,1}^- = -\frac{AL_{12} - B_-L_{11}}{\det\mathbf{L}},\\
P_{\ell,0}^- &= \big(AL_{12}L_{21}^+ - L_{22}L_{11}^+ + B_-(L_{11}^+L_{21} - L_{11}L_{21}^+)\big)\frac{i\xi_\ell}{A\det\mathbf{L}},\\
P_{N,0}^- &= \frac{AL_{12}L_{22}^+ - L_{12}^+L_{22} + B_-(L_{12}^+L_{21} - L_{11}L_{22}^+)}{\det\mathbf{L}}
\end{aligned}
\]
for ℓ = 1, …, N − 1. By Lemma 2.2, (2.16), (2.19), and (2.20), we have
\[
P_{J,1}^\pm \in \mathcal M_{-1,2}, \qquad P_{J,0}^\pm \in \mathcal M_{0,2}.
\]
(2.23)
By (2.13) we have
\[
\hat p_-(x_N) = -\mu_-\frac{A + B_-}{A}\big(i\xi'\cdot\beta_-' + B_-\beta_{-N}\big)e^{Ax_N}
= -\mu_-(A + B_-)\sum_{\ell=1}^N\big(P_{\ell,1}^-\hat h_\ell(0) + P_{\ell,0}^-\hat k_\ell(0)\big)e^{Ax_N},
\]

so that setting p_{ℓ,0} = −μ_-(A + B_-)P_{ℓ,1}^- and p_{ℓ,1} = −μ_-(A + B_-)P_{ℓ,0}^-, we have the formula for p̂_-(x_N) in (2.7).

By (2.12), we have
\[
\begin{aligned}
(B_+ - A_+)\alpha_{+j} &= \frac{(\nu_+ + \delta)i\xi_j}{\mu_+(A_+ + B_+)}\big(i\xi'\cdot\alpha_+' - A_+\alpha_{+N}\big), &
(B_+ - A_+)\alpha_{+N} &= -\frac{(\nu_+ + \delta)A_+}{\mu_+(A_+ + B_+)}\big(i\xi'\cdot\alpha_+' - A_+\alpha_{+N}\big),\\
(B_- - A)\alpha_{-j} &= -\frac{i\xi_j}{A}\big(i\xi'\cdot\beta_-' + B_-\beta_{-N}\big), &
(B_- - A)\alpha_{-N} &= -\big(i\xi'\cdot\beta_-' + B_-\beta_{-N}\big).
\end{aligned}
\]
Since iξ′·α_+′ − A_+α_{+N} = \frac{A^2 - A_+^2}{A_+B_+ - A^2}(iξ′·β_+′ − B_+β_{+N}) as follows from (2.13), by (2.22) we have
\[
\begin{aligned}
(B_+ - A_+)\alpha_{+j} &= -\frac{(\nu_+ + \delta)i\xi_jB_+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat k_N(0)
+ \frac{(\nu_+ + \delta)i\xi_j}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,A\sum_{\ell=1}^N\big(P_{\ell,1}^+\hat h_\ell(0) + P_{\ell,0}^+\hat k_\ell(0)\big),\\
(B_+ - A_+)\alpha_{+N} &= \frac{(\nu_+ + \delta)A_+B_+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat k_N(0)
- \frac{(\nu_+ + \delta)A_+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,A\sum_{\ell=1}^N\big(P_{\ell,1}^+\hat h_\ell(0) + P_{\ell,0}^+\hat k_\ell(0)\big),\\
(B_- - A)\alpha_{-j} &= -\frac{i\xi_j}{A}\,A\sum_{\ell=1}^N\big(P_{\ell,1}^-\hat h_\ell(0) + P_{\ell,0}^-\hat k_\ell(0)\big), \qquad
(B_- - A)\alpha_{-N} = -A\sum_{\ell=1}^N\big(P_{\ell,1}^-\hat h_\ell(0) + P_{\ell,0}^-\hat k_\ell(0)\big)
\end{aligned}
\]
(2.24)
for j = 1, …, N − 1. By (2.24) we have
\[
\begin{aligned}
(B_+ - A_+)\alpha_{+j} = A\Big[&\sum_{\ell=1}^N\frac{(\nu_+ + \delta)i\xi_jP_{\ell,1}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat h_\ell(0)
+ \sum_{\ell=1}^{N-1}\frac{(\nu_+ + \delta)i\xi_jP_{\ell,0}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat k_\ell(0)\\
&+ \frac{\nu_+ + \delta}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\Big({-\frac{i\xi_j}{A}B_+} + i\xi_jP_{N,0}^+\Big)\hat k_N(0)\Big],\\
(B_+ - A_+)\alpha_{+N} = \frac{(\nu_+ + \delta)A_+B_+}{\mu_+(A_+ + B_+)}&\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat k_N(0)
- A\Big[\sum_{\ell=1}^N\frac{(\nu_+ + \delta)A_+P_{\ell,1}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat h_\ell(0)
+ \sum_{\ell=1}^N\frac{(\nu_+ + \delta)A_+P_{\ell,0}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat k_\ell(0)\Big].
\end{aligned}
\]
Since (e^{-B_+x_N} − e^{-A_+x_N})α_{+J} = ℳ_+(x_N)(B_+ − A_+)α_{+J}, setting
\[
\begin{aligned}
R_{j\ell,1}^+ &= \frac{(\nu_+ + \delta)i\xi_jP_{\ell,1}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}, \qquad
R_{j\ell,0}^+ = \frac{(\nu_+ + \delta)i\xi_jP_{\ell,0}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2},\\
R_{jN,0}^+ &= \frac{\nu_+ + \delta}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\Big({-\frac{i\xi_j}{A}B_+} + i\xi_jP_{N,0}^+\Big),\\
R_{N\ell,1}^+ &= -\frac{(\nu_+ + \delta)A_+P_{\ell,1}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}, \qquad
R_{N\ell,0}^+ = -\frac{(\nu_+ + \delta)A_+P_{\ell,0}^+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2},\\
U_{N,0}^+ &= \frac{(\nu_+ + \delta)B_+}{\mu_+(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}
\end{aligned}
\]
for ℓ = 1, …, N and j = 1, …, N − 1, we have û_{+J}^1 and û_{+N}^4 in (2.7). As is seen in Section 4 below, we have
\[
A_+ \in \mathcal M_{1,1}, \qquad B_+ \in \mathcal M_{1,1}, \qquad (A_+ + B_+)^{-1} \in \mathcal M_{-1,1}, \qquad
\frac{A^2 - A_+^2}{A_+B_+ - A^2} \in \mathcal M_{0,1},
\]
(2.25)

which, combined with (2.23), furnishes R_{Jℓ,1}^+ ∈ ℳ_{−1,2}, R_{Jℓ,0}^+ ∈ ℳ_{0,2}, and U_{N,0}^+ ∈ ℳ_{0,1}.

Analogously, in view of (2.24), setting
\[
R_{j\ell,1}^- = -\frac{i\xi_j}{A}P_{\ell,1}^-, \qquad R_{j\ell,0}^- = -\frac{i\xi_j}{A}P_{\ell,0}^-, \qquad
R_{N\ell,1}^- = -P_{\ell,1}^-, \qquad R_{N\ell,0}^- = -P_{\ell,0}^-
\]

for ℓ = 1, …, N and j = 1, …, N − 1, we have û_{-J}^1(x_N) in (2.7). By (2.23) and (2.25), we have R_{Jℓ,1}^- ∈ ℳ_{−1,2} and R_{Jℓ,0}^- ∈ ℳ_{0,2}.

Using (2.21), we represent β_{-N} by
\[
\beta_{-N} = A\sum_{\ell=1}^N\big(Q_{\ell,2}\hat h_\ell(0) + Q_{\ell,1}\hat k_\ell(0)\big)
\]
(2.26)
with
\[
Q_{\ell,2} = -\frac{L_{21}\,i\xi_\ell}{A\det\mathbf{L}}, \qquad Q_{N,2} = \frac{L_{11}}{\det\mathbf{L}}, \qquad
Q_{\ell,1} = \frac{(L_{11}^+L_{21} - L_{11}L_{21}^+)i\xi_\ell}{A\det\mathbf{L}}, \qquad
Q_{N,1} = \frac{L_{12}^+L_{21} - L_{11}L_{22}^+}{\det\mathbf{L}}
\]
for ℓ = 1, …, N − 1. By Lemma 2.2, (2.16), (2.19), and (2.20), we have
\[
Q_{J,2} \in \mathcal M_{-2,2}, \qquad Q_{J,1} \in \mathcal M_{-1,2}.
\]
(2.27)

In particular, noting that β_{+N} = k̂_N(0) + β_{-N} and setting S_{Nℓ,2}^± = Q_{ℓ,2}, S_{Nℓ,1}^± = Q_{ℓ,1} (ℓ = 1, …, N), T_{N,1}^± = 0, T_{N,0}^+ = 1, and T_{N,0}^- = 0, we have the û_{±N}^2 and û_{±N}^3 in (2.7); by (2.27), S_{Nℓ,2}^± ∈ ℳ_{−2,2}, S_{Nℓ,1}^± ∈ ℳ_{−1,2}, T_{N,1}^± ∈ ℳ_{−1,1}, and T_{N,0}^± ∈ ℳ_{0,1} for ℓ = 1, …, N.

From (2.14) it follows that
\[
\hat h_j(0) = \mu_+B_+\beta_{+j} + \mu_-B_-\beta_{-j} + \mu_+(B_+ - A_+)\alpha_{+j} + \mu_-(B_- - A)\alpha_{-j} - (\mu_+\beta_{+N} - \mu_-\beta_{-N})i\xi_j.
\]
Noting that β_{+J} = β_{-J} + k̂_J(0), we have
\[
\beta_{\pm j} = \frac{\hat h_j(0)}{\mu_+B_+ + \mu_-B_-} \pm \frac{\mu_\mp B_\mp\,\hat k_j(0)}{\mu_+B_+ + \mu_-B_-} + \frac{\mu_+i\xi_j\,\hat k_N(0)}{\mu_+B_+ + \mu_-B_-}
- \frac{\mu_+(B_+ - A_+)}{\mu_+B_+ + \mu_-B_-}\alpha_{+j} - \frac{\mu_-(B_- - A)}{\mu_+B_+ + \mu_-B_-}\alpha_{-j} + \frac{(\mu_+ - \mu_-)i\xi_j}{\mu_+B_+ + \mu_-B_-}\beta_{-N},
\]
which, combined with (2.24) and (2.26), furnishes
\[
\begin{aligned}
\beta_{\pm j} = {}&\frac{\hat h_j(0)}{\mu_+B_+ + \mu_-B_-} \pm \frac{\mu_\mp B_\mp\,\hat k_j(0)}{\mu_+B_+ + \mu_-B_-} + \frac{\mu_+i\xi_j\,\hat k_N(0)}{\mu_+B_+ + \mu_-B_-}
+ \frac{(\nu_+ + \delta)i\xi_jB_+}{(\mu_+B_+ + \mu_-B_-)(B_+ + A_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,\hat k_N(0)\\
&- \frac{(\nu_+ + \delta)i\xi_j}{(\mu_+B_+ + \mu_-B_-)(B_+ + A_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\,A\sum_{\ell=1}^N\big(P_{\ell,1}^+\hat h_\ell(0) + P_{\ell,0}^+\hat k_\ell(0)\big)\\
&+ \frac{\mu_-i\xi_j}{(\mu_+B_+ + \mu_-B_-)A}\,A\sum_{\ell=1}^N\big(P_{\ell,1}^-\hat h_\ell(0) + P_{\ell,0}^-\hat k_\ell(0)\big)
+ \frac{(\mu_+ - \mu_-)i\xi_j}{\mu_+B_+ + \mu_-B_-}\,A\sum_{\ell=1}^N\big(Q_{\ell,2}\hat h_\ell(0) + Q_{\ell,1}\hat k_\ell(0)\big).
\end{aligned}
\]
(2.28)
Thus, we set
\[
\begin{aligned}
S_{j\ell,2}^\pm &= -\frac{(\nu_+ + \delta)i\xi_jP_{\ell,1}^+}{(\mu_+B_+ + \mu_-B_-)(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}
+ \frac{\mu_-i\xi_jP_{\ell,1}^-}{(\mu_+B_+ + \mu_-B_-)A} + \frac{(\mu_+ - \mu_-)i\xi_jQ_{\ell,2}}{\mu_+B_+ + \mu_-B_-},\\
S_{j\ell,1}^\pm &= -\frac{(\nu_+ + \delta)i\xi_jP_{\ell,0}^+}{(\mu_+B_+ + \mu_-B_-)(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}
+ \frac{\mu_-i\xi_jP_{\ell,0}^-}{(\mu_+B_+ + \mu_-B_-)A} + \frac{(\mu_+ - \mu_-)i\xi_jQ_{\ell,1}}{\mu_+B_+ + \mu_-B_-},\\
S_{jN,1}^\pm &= \frac{\mu_+i\xi_j}{(\mu_+B_+ + \mu_-B_-)A} + \frac{(\nu_+ + \delta)i\xi_jB_+}{(\mu_+B_+ + \mu_-B_-)(A_+ + B_+)A}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}\\
&\quad - \frac{(\nu_+ + \delta)i\xi_jP_{N,0}^+}{(\mu_+B_+ + \mu_-B_-)(A_+ + B_+)}\,\frac{A^2 - A_+^2}{A_+B_+ - A^2}
+ \frac{\mu_-i\xi_jP_{N,0}^-}{(\mu_+B_+ + \mu_-B_-)A} + \frac{(\mu_+ - \mu_-)i\xi_jQ_{N,1}}{\mu_+B_+ + \mu_-B_-},\\
T_{j,1}^\pm &= \frac{1}{\mu_+B_+ + \mu_-B_-}, \qquad
T_{j,0}^+ = \frac{\mu_-B_-}{\mu_+B_+ + \mu_-B_-}, \qquad
T_{j,0}^- = -\frac{\mu_+B_+}{\mu_+B_+ + \mu_-B_-},
\end{aligned}
\]
so that we have the û_{±j}^2 and û_{±j}^3 in (2.7). Moreover, as is seen in Section 4, we have
\[
(\mu_+B_+ + \mu_-B_-)^{-1} \in \mathcal M_{-1,1},
\]
(2.29)

so that, by (2.23), (2.25), (2.27), and (2.29), we have S_{jℓ,2}^± ∈ ℳ_{−2,2}, S_{jℓ,1}^± ∈ ℳ_{−1,2}, T_{j,1}^± ∈ ℳ_{−1,1}, and T_{j,0}^± ∈ ℳ_{0,1}. This completes the proof of (2.7).

To construct our solution operator from the solution formulas in (2.7), first of all we observe that the following formulas due to Volevich hold:
\[
a(\xi', x_N)\hat h(\xi', 0) = -\int_0^{\pm\infty}\big\{(\partial_Na)(\xi', x_N + y_N)\hat h(\xi', y_N) + a(\xi', x_N + y_N)(\partial_N\hat h)(\xi', y_N)\big\}\,dy_N,
\]
where ∂_j = ∂/∂x_j. Using the identity 1 = γ_0^±λ/(μ_±B_±²) − ∑_{m=1}^{N−1}(iξ_m)(iξ_m)/B_±², we write
\[
a(\xi', x_N)\hat h(\xi', 0) = -\int_0^{\pm\infty}a(\xi', x_N + y_N)(\partial_N\hat h)(\xi', y_N)\,dy_N
- \int_0^{\pm\infty}(\partial_Na)(\xi', x_N + y_N)\frac{\gamma_0^\pm\lambda^{1/2}}{\mu_\pm B_\pm^2}\,\lambda^{1/2}\hat h(\xi', y_N)\,dy_N
+ \sum_{\ell=1}^{N-1}\int_0^{\pm\infty}(\partial_Na)(\xi', x_N + y_N)\frac{i\xi_\ell}{B_\pm^2}\,\widehat{\partial_\ell h}(\xi', y_N)\,dy_N.
\]
Let F_{ξ′}^{-1} denote the inverse partial Fourier transform with respect to the ξ′ variable, and let f_2 and f_3 = (f_{31}, …, f_{3N}) be the variables corresponding to λ^{1/2}h and ∇h = (∂_1h, …, ∂_Nh). If we define A_±[a](f_2, f_3) by
\[
\begin{aligned}
\mathcal A_\pm[a](f_2, f_3) = {}&-\int_0^{\pm\infty}\mathcal F_{\xi'}^{-1}\big[a(\xi', x_N + y_N)\hat f_{3N}(\xi', y_N)\big]\,dy_N
- \int_0^{\pm\infty}\mathcal F_{\xi'}^{-1}\Big[(\partial_Na)(\xi', x_N + y_N)\frac{\gamma_0^\pm\lambda^{1/2}}{\mu_\pm B_\pm^2}\hat f_2(\xi', y_N)\Big]\,dy_N\\
&+ \sum_{\ell=1}^{N-1}\int_0^{\pm\infty}\mathcal F_{\xi'}^{-1}\Big[(\partial_Na)(\xi', x_N + y_N)\frac{i\xi_\ell}{B_\pm^2}\hat f_{3\ell}(\xi', y_N)\Big]\,dy_N,
\end{aligned}
\]
(2.30)
then we have
\[
\mathcal F_{\xi'}^{-1}\big[a(\xi', x_N)\hat h(\xi', 0)\big] = \mathcal A_\pm[a]\big(\lambda^{1/2}h, \nabla h\big).
\]
(2.31)
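The Volevich formula above can be checked numerically in the scalar + case: for a and h decaying at infinity, the boundary value a(x_N)h(0) equals minus the integral of the derivative of the product along the half-line, by the fundamental theorem of calculus. The exponentials below are illustrative choices, not symbols from the paper.

```python
import numpy as np

# Volevich's trick in the scalar + case: for a, h decaying at infinity,
#   a(xN) * h(0) = -int_0^oo d/dy [ a(xN + y) h(y) ] dy
#                = -int_0^oo [ a'(xN + y) h(y) + a(xN + y) h'(y) ] dy,
# trading a boundary trace for integrals over the half-line.
a = lambda s: np.exp(-2.0 * s)
da = lambda s: -2.0 * np.exp(-2.0 * s)
h = lambda s: np.exp(-3.0 * s)
dh = lambda s: -3.0 * np.exp(-3.0 * s)

xN = 0.5
y = np.linspace(0.0, 20.0, 200001)
f = da(xN + y) * h(y) + a(xN + y) * dh(y)
rhs = -float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))   # trapezoid rule
lhs = float(a(xN) * h(0.0))
```

The point of the trick in the text is that, after this rewriting, only tangential derivatives and λ^{1/2}-weighted traces of the data appear, which is what makes the multiplier estimates of Definition 2.1 applicable.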
Analogously, using the identity 1 = γ_0^±λ/(μ_±B_±²) − ∑_{m=1}^{N−1}(iξ_m)(iξ_m)/B_±², we write
a ( ξ , x N ) k ˆ ( ξ , 0 ) = 0 ± a ( ξ , x N + y N ) [ γ 0 ± λ 1 / 2 N k ˆ ( ξ , y N ) μ ± B ± 2 = 1 N 1 i ξ N k ˆ ( ξ , y N ) B ± ] d y N 0 ± ( N a ) ( ξ , x N + y N ) [ γ 0 ± λ k ˆ ( ξ