
On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions

Abstract

Sampling theory states that a function may be recovered from its values at a suitable discrete set of points, provided the function satisfies certain conditions. In this paper we consider a Dirac system which contains an eigenparameter appearing linearly in one boundary condition, in addition to an internal point of discontinuity. We closely follow the analysis of Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green's matrix and the eigen-vector-function expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green's matrix of the problem. In the special case when the problem is continuous, the results obtained coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9).

MSC:34L16, 94A20, 65L15.

1 Introduction

Sampling theory is one of the most powerful tools in signal analysis. It is of great importance in signal processing to reconstruct (recover) a signal (function) from its values at a discrete sequence of points (samples). If this aim is achieved, an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band-limited, the sampling process can be done via the celebrated Whittaker-Shannon-Kotel'nikov (WSK) sampling theorem [1-3]. By a band-limited signal with band width σ, σ > 0, i.e., a signal containing no frequencies higher than σ/2π cycles per second (cps), we mean a function in the Paley-Wiener space B²_σ of entire functions of exponential type at most σ which are L²(ℝ)-functions when restricted to ℝ. In other words, f(t) ∈ B²_σ if there exists g(·) ∈ L²(−σ,σ) such that, cf. [4, 5],

f(t) = (1/√(2π)) ∫_{−σ}^{σ} e^{ixt} g(x) dx.
(1.1)

The WSK sampling theorem states [6, 7]: if f(t) ∈ B²_σ, then it is completely determined by its values at the points t_k = kπ/σ, k ∈ ℤ, by means of the formula

f(t) = Σ_{k=−∞}^{∞} f(t_k) sinc σ(t − t_k),  t ∈ ℂ,
(1.2)

where

sinc t := { (sin t)/t, t ≠ 0;  1, t = 0 }.
(1.3)

The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of ℂ.
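As a numerical illustration of (1.2) (our own minimal sketch, not part of the original analysis; the test signal and the truncation level |k| ≤ 200 are our own choices):

```python
import numpy as np

sigma = 5.0                                     # band width of the signal
t_nodes = np.arange(-200, 201) * np.pi / sigma  # WSK sampling points t_k = k*pi/sigma

def f(t):
    # A test signal in the Paley-Wiener space B_sigma^2: sinc(sigma*t) and its
    # shift are both band-limited with band width sigma.
    # Note np.sinc(x) = sin(pi x)/(pi x), so the paper's sinc(u) is np.sinc(u/np.pi).
    return np.sinc(sigma * t / np.pi) + 0.5 * np.sinc(sigma * (t - 1.0) / np.pi)

def wsk_reconstruct(t):
    # Truncated version of the sampling series (1.2): sum over |k| <= 200.
    return sum(f(tk) * np.sinc(sigma * (t - tk) / np.pi) for tk in t_nodes)

t = 0.37
err = abs(wsk_reconstruct(t) - f(t))            # only truncation error remains
```

The residual `err` comes purely from truncating the series; it shrinks as more sampling points are included.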

The WSK sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem which is known in some literature as the Paley-Wiener theorem [5] gives a sampling theorem with a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [8] and Kadec [9], it could be named after Paley and Wiener who first derived the theorem in a more restrictive form; see [6, 7, 10] for more details.

The Paley-Wiener theorem states that if {t_k}, k ∈ ℤ, is a sequence of real numbers such that

D := sup_{k∈ℤ} |t_k − kπ/σ| < π/(4σ),
(1.4)

and G is the entire function defined by

G(t) := (t − t_0) Π_{k=1}^{∞} (1 − t/t_k)(1 − t/t_{−k}),
(1.5)

then, for any function of the form (1.1), we have

f(t) = Σ_{k∈ℤ} f(t_k) G(t)/(G′(t_k)(t − t_k)),  t ∈ ℂ.
(1.6)

The series (1.6) converges uniformly on compact subsets of ℂ.

The WSK sampling theorem is a special case of this theorem because if we choose t_k = kπ/σ (so that t_{−k} = −t_k), then

G(t) = t Π_{k=1}^{∞} (1 − t/t_k)(1 + t/t_k) = t Π_{k=1}^{∞} (1 − (tσ/π)²/k²) = (sin tσ)/σ,  G′(t_k) = (−1)^k.

The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to ℂ for functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion.
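The displayed identity G(t) = (sin tσ)/σ can be checked numerically by truncating the infinite product; the parameter values and the truncation at 10^5 factors below are our own illustrative choices:

```python
import numpy as np

sigma, t = 2.0, 0.8
k = np.arange(1, 100001)                       # truncate the product at 10^5 factors
# G(t) = t * prod_{k>=1} (1 - (t*sigma/pi)^2 / k^2), which should equal sin(t*sigma)/sigma.
G = t * np.prod(1.0 - (t * sigma / np.pi) ** 2 / k ** 2)
target = np.sin(t * sigma) / sigma
err = abs(G - target)
```

The truncated product converges like O(1/N), so the residual is already tiny at this truncation level.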

The second extension of the WSK sampling theorem is Kramer's theorem [11], which states the following. Let I be a finite closed interval and K(·,t): I×ℂ → ℂ a function continuous in t such that K(·,t) ∈ L²(I) for all t ∈ ℂ. Let {t_k}_{k∈ℤ} be a sequence of real numbers such that {K(·,t_k)}_{k∈ℤ} is a complete orthogonal set in L²(I). Suppose that

f(t) = ∫_I K(x,t) g(x) dx,

where g(·) ∈ L²(I). Then

f(t) = Σ_{k∈ℤ} f(t_k) [∫_I K(x,t) K̄(x,t_k) dx] / ‖K(·,t_k)‖²_{L²(I)}.
(1.7)

Series (1.7) converges uniformly wherever ‖K(·,t)‖_{L²(I)}, as a function of t, is bounded. In this theorem sampling representations are given for integral transforms whose kernels are more general than exp(ixt). Kramer's theorem is also a generalization of the WSK theorem: if we take K(x,t) = e^{ixt}, I = [−σ,σ], t_k = kπ/σ, then (1.7) reduces to (1.2).
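For the exponential kernel just mentioned, the Kramer coefficient in (1.7) reduces to the sinc function appearing in (1.2). A small quadrature check (the parameter values are our own illustrative choices):

```python
import numpy as np

sigma = 3.0
k = 1
tk, t = k * np.pi / sigma, 0.9                 # a sampling node and an evaluation point

# Midpoint rule for int_{-sigma}^{sigma} K(x,t)*conj(K(x,tk)) dx with K(x,t) = exp(i*x*t).
N = 200000
dx = 2.0 * sigma / N
x = -sigma + (np.arange(N) + 0.5) * dx
numerator = np.sum(np.exp(1j * x * (t - tk))) * dx
norm_sq = 2.0 * sigma                          # ||K(.,tk)||^2 = int |e^{i x tk}|^2 dx

coeff = (numerator / norm_sq).real
target = np.sinc(sigma * (t - tk) / np.pi)     # the paper's sinc(sigma*(t - tk))
err = abs(coeff - target)
```

So the Kramer expansion with this kernel reproduces the WSK series term by term, as claimed.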

The relationship between these two extensions of the WSK sampling theorem has been investigated extensively. Starting from a function-theoretic approach, cf. [12], it was proved in [13] that if K(x,t), x ∈ I, t ∈ ℂ, satisfies some analyticity conditions, then Kramer's sampling formula (1.7) turns out to be a Lagrange interpolation one; see also [14-16]. In another direction, it was shown that Kramer's expansion (1.7) could be written as a Lagrange-type interpolation formula if K(·,t) and t_k were extracted from ordinary differential operators; see the survey [17] and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with Dirac systems with an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works on sampling associated with eigenproblems involving an eigenparameter in the boundary conditions are few; see, e.g., [18-20]. Papers on sampling with discontinuous eigenproblems are also few; see [21-24]. However, as far as we know, sampling theories associated with Dirac systems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist; our investigation appears to be the first in that direction. To achieve our aim we briefly study the spectral analysis of the problem, and then we derive two sampling theorems using solutions and Green's matrix, respectively.

2 The eigenvalue problem

In this section, we define our boundary value problem and state some of its properties. We consider the Dirac system

u2′(x) − p1(x) u1(x) = λ u1(x),  u1′(x) + p2(x) u2(x) = −λ u2(x),  x ∈ [−1,0) ∪ (0,1],
(2.1)
U1(u) := sin α u1(−1) − cos α u2(−1) = 0,
(2.2)
U2(u) := (a1 + λ sin β) u1(1) − (a2 + λ cos β) u2(1) = 0
(2.3)

and transmission conditions

U3(u) := u1(0⁻) − δ u1(0⁺) = 0,
(2.4)
U4(u) := u2(0⁻) − δ u2(0⁺) = 0,
(2.5)

where λ ∈ ℂ; the real-valued functions p1(·) and p2(·) are continuous on [−1,0) and (0,1] and have finite limits p1(0±) := lim_{x→0±} p1(x) and p2(0±) := lim_{x→0±} p2(x); a1, a2, δ ∈ ℝ; α, β ∈ [0,π); δ ≠ 0 and ρ := a1 cos β − a2 sin β > 0.

In [24] the authors discussed problem (2.1)-(2.5) but with the condition sin β u1(1) − cos β u2(1) = 0 instead of (2.3). To formulate an operator-theoretic approach to problem (2.1)-(2.5), we define the Hilbert space H = L²(−1,1)² ⊕ ℂ with the inner product, see [19, 20],

⟨U(·), V(·)⟩_H := ∫_{−1}^{0} uᵀ(x) v̄(x) dx + δ² ∫_{0}^{1} uᵀ(x) v̄(x) dx + (δ²/ρ) z w̄,
(2.6)

where ᵀ denotes the matrix transpose,

U(x) = (u(x), z)ᵀ, V(x) = (v(x), w)ᵀ ∈ H,  u(x) = (u1(x), u2(x))ᵀ, v(x) = (v1(x), v2(x))ᵀ,

u_i(·), v_i(·) ∈ L²(−1,1), i = 1,2, and z, w ∈ ℂ. For convenience, we put

(R_a(u(x)), R_β(u(x)))ᵀ := (a1 u1(1) − a2 u2(1),  sin β u1(1) − cos β u2(1))ᵀ.
(2.7)

Equation (2.1) can be written as

ℓ(u) := A u′(x) − P(x) u(x) = λ u(x),
(2.8)

where

A = [ 0  1 ; −1  0 ],  P(x) = [ p1(x)  0 ; 0  p2(x) ],  u(x) = (u1(x), u2(x))ᵀ.
(2.9)

For functions u(x) defined on [−1,0) ∪ (0,1] and having finite limits u(0±) := lim_{x→0±} u(x), we denote by u⁽¹⁾(x) and u⁽²⁾(x) the functions

u⁽¹⁾(x) = { u(x), x ∈ [−1,0);  u(0⁻), x = 0 },  u⁽²⁾(x) = { u(x), x ∈ (0,1];  u(0⁺), x = 0 },
(2.10)

which are defined on Γ1 := [−1,0] and Γ2 := [0,1], respectively.

In the following lemma, we prove that the eigenvalues of problem (2.1)-(2.5) are real.

Lemma 2.1 The eigenvalues of problem (2.1)-(2.5) are real.

Proof Assume the contrary that λ 0 is a nonreal eigenvalue of problem (2.1)-(2.5). Let ( u 1 ( x ) u 2 ( x ) ) be a corresponding (non-trivial) eigenfunction. By (2.1), we have

d/dx {u1(x) ū2(x) − ū1(x) u2(x)} = (λ̄0 − λ0) {|u1(x)|² + |u2(x)|²},  x ∈ [−1,0) ∪ (0,1].

Integrating the above equation over [−1,0] and [0,1], we obtain

(λ̄0 − λ0) ∫_{−1}^{0} (|u1(x)|² + |u2(x)|²) dx = u1(0⁻)ū2(0⁻) − ū1(0⁻)u2(0⁻) − [u1(−1)ū2(−1) − ū1(−1)u2(−1)],
(2.11)
(λ̄0 − λ0) ∫_{0}^{1} (|u1(x)|² + |u2(x)|²) dx = u1(1)ū2(1) − ū1(1)u2(1) − [u1(0⁺)ū2(0⁺) − ū1(0⁺)u2(0⁺)].
(2.12)

Then from (2.2), (2.3) and transmission conditions, we have, respectively,

u1(−1)ū2(−1) − ū1(−1)u2(−1) = 0,
u1(1)ū2(1) − ū1(1)u2(1) = −ρ(λ̄0 − λ0)|u2(1)|² / |a1 + λ0 sin β|²

and

u1(0⁻)ū2(0⁻) − ū1(0⁻)u2(0⁻) = δ²[u1(0⁺)ū2(0⁺) − ū1(0⁺)u2(0⁺)].

Since λ 0 λ ¯ 0 , it follows from the last three equations and (2.11), (2.12) that

∫_{−1}^{0}(|u1(x)|² + |u2(x)|²) dx + δ² ∫_{0}^{1}(|u1(x)|² + |u2(x)|²) dx = −ρδ²|u2(1)|² / |a1 + λ0 sin β|².
(2.13)

This contradicts the conditions ∫_{−1}^{0}(|u1(x)|² + |u2(x)|²) dx + δ² ∫_{0}^{1}(|u1(x)|² + |u2(x)|²) dx > 0 and ρ > 0. Consequently, λ0 must be real. □

Let D(A) ⊂ H be the set of all U(x) = (u(x), R_β(u(x)))ᵀ ∈ H such that u1⁽ⁱ⁾(·), u2⁽ⁱ⁾(·) are absolutely continuous on Γ_i, i = 1,2, ℓ(u) ∈ L²(−1,1)², sin α u1(−1) − cos α u2(−1) = 0 and u_i(0⁻) − δ u_i(0⁺) = 0, i = 1,2. Define the operator A: D(A) → H by

A (u(x), R_β(u(x)))ᵀ = (ℓ(u), −R_a(u(x)))ᵀ,  (u(x), R_β(u(x)))ᵀ ∈ D(A).
(2.14)

The operator A is symmetric in H. Indeed, for U(·), V(·) ∈ D(A),

⟨AU(·), V(·)⟩_H = ∫_{−1}^{0} (ℓ(u(x)))ᵀ v̄(x) dx + δ² ∫_{0}^{1} (ℓ(u(x)))ᵀ v̄(x) dx − (δ²/ρ) R_a(u(x)) R̄_β(v(x))
= ∫_{−1}^{0} (u2′(x) − p1(x)u1(x)) v̄1(x) dx − ∫_{−1}^{0} (u1′(x) + p2(x)u2(x)) v̄2(x) dx + δ² ∫_{0}^{1} (u2′(x) − p1(x)u1(x)) v̄1(x) dx − δ² ∫_{0}^{1} (u1′(x) + p2(x)u2(x)) v̄2(x) dx − (δ²/ρ) R_a(u(x)) R̄_β(v(x)).
Integrating by parts, this equals
−∫_{−1}^{0} u2(x)(v̄1′(x) + p2(x)v̄2(x)) dx + ∫_{−1}^{0} u1(x)(v̄2′(x) − p1(x)v̄1(x)) dx − δ² ∫_{0}^{1} u2(x)(v̄1′(x) + p2(x)v̄2(x)) dx + δ² ∫_{0}^{1} u1(x)(v̄2′(x) − p1(x)v̄1(x)) dx
+ (u2(0⁻)v̄1(0⁻) − u1(0⁻)v̄2(0⁻)) − (u2(−1)v̄1(−1) − u1(−1)v̄2(−1)) + δ²(u2(1)v̄1(1) − u1(1)v̄2(1)) − δ²(u2(0⁺)v̄1(0⁺) − u1(0⁺)v̄2(0⁺)) − (δ²/ρ) R_a(u(x)) R̄_β(v(x)).
The terms at x = −1 vanish by (2.2), the terms at x = 0± cancel by the transmission conditions, and the terms at x = 1 combine with the last term to give −(δ²/ρ) R_β(u(x)) R̄_a(v(x)). Hence
⟨AU(·), V(·)⟩_H = ∫_{−1}^{0} uᵀ(x) (ℓ(v(x)))‾ dx + δ² ∫_{0}^{1} uᵀ(x) (ℓ(v(x)))‾ dx − (δ²/ρ) R_β(u(x)) R̄_a(v(x)) = ⟨U(·), AV(·)⟩_H.

The operator A: D(A) → H and the eigenvalue problem (2.1)-(2.5) have the same eigenvalues; therefore they are equivalent in this respect.

Lemma 2.2 Let λ and μ be two different eigenvalues of problem (2.1)-(2.5). Then the corresponding eigenfunctions u(x) and v(x) of this problem satisfy the following equality:

∫_{−1}^{0} uᵀ(x) v(x) dx + δ² ∫_{0}^{1} uᵀ(x) v(x) dx = (δ²/(λρ)) R_a(u(x)) R_β(v(x)).
(2.15)

Proof Equation (2.15) follows immediately from the orthogonality of the corresponding eigenelements:

U(x) = (u(x), R_β(u(x)))ᵀ, V(x) = (v(x), R_β(v(x)))ᵀ ∈ H,  u(x) = (u1(x), u2(x))ᵀ, v(x) = (v1(x), v2(x))ᵀ.

 □

Now we construct a special fundamental system of solutions of equation (2.1) for λ not an eigenvalue. Consider the following initial value problem:

u2′(x) − p1(x) u1(x) = λ u1(x),  u1′(x) + p2(x) u2(x) = −λ u2(x),  x ∈ (−1,0),
(2.16)
u1(−1) = cos α,  u2(−1) = sin α.
(2.17)

By virtue of Theorem 1.1 in [25], this problem has a unique solution u= ( ϕ 11 ( x , λ ) ϕ 21 ( x , λ ) ) , which is an entire function of λC for each fixed x[1,0]. Similarly, employing the same method as in the proof of Theorem 1.1 in [25], we see that the problem

u2′(x) − p1(x) u1(x) = λ u1(x),  u1′(x) + p2(x) u2(x) = −λ u2(x),  x ∈ (0,1),
(2.18)
u1(1) = a2 + λ cos β,  u2(1) = a1 + λ sin β
(2.19)

has a unique solution u= ( χ 12 ( x , λ ) χ 22 ( x , λ ) ) , which is an entire function of parameter λ for each fixed x[0,1].

Now the functions φi2(x,λ) and χi1(x,λ) are defined in terms of φi1(x,λ) and χi2(x,λ), i = 1,2, respectively, as follows. The initial value problem

u2′(x) − p1(x) u1(x) = λ u1(x),  u1′(x) + p2(x) u2(x) = −λ u2(x),  x ∈ (0,1),
(2.20)
u1(0) = (1/δ) φ11(0⁻,λ),  u2(0) = (1/δ) φ21(0⁻,λ),
(2.21)

has a unique solution u= ( ϕ 12 ( x , λ ) ϕ 22 ( x , λ ) ) for each λC.

Similarly, the following problem also has a unique solution u= ( χ 11 ( x , λ ) χ 21 ( x , λ ) ) :

u2′(x) − p1(x) u1(x) = λ u1(x),  u1′(x) + p2(x) u2(x) = −λ u2(x),  x ∈ (−1,0),
(2.22)
u1(0) = δ χ12(0⁺,λ),  u2(0) = δ χ22(0⁺,λ).
(2.23)

Let us construct two basic solutions of equation (2.1) as follows:

ϕ(,λ)= ( ϕ 1 ( , λ ) ϕ 2 ( , λ ) ) ,χ(,λ)= ( χ 1 ( , λ ) χ 2 ( , λ ) ) ,

where

φ1(x,λ) = { φ11(x,λ), x ∈ [−1,0);  φ12(x,λ), x ∈ (0,1] },  φ2(x,λ) = { φ21(x,λ), x ∈ [−1,0);  φ22(x,λ), x ∈ (0,1] },
(2.24)
χ1(x,λ) = { χ11(x,λ), x ∈ [−1,0);  χ12(x,λ), x ∈ (0,1] },  χ2(x,λ) = { χ21(x,λ), x ∈ [−1,0);  χ22(x,λ), x ∈ (0,1] }.
(2.25)

Then

R_β(χ(x,λ)) = −ρ.
(2.26)

By virtue of equations (2.21) and (2.23), these solutions satisfy both transmission conditions (2.4) and (2.5). These functions are entire in λ for all x ∈ [−1,0) ∪ (0,1].

Let W(ϕ,χ)(,λ) denote the Wronskian of ϕ(,λ) and χ(,λ) defined in [[26], p.194], i.e.,

W(φ,χ)(·,λ) := | φ1(·,λ)  φ2(·,λ) ; χ1(·,λ)  χ2(·,λ) | = φ1(·,λ)χ2(·,λ) − φ2(·,λ)χ1(·,λ).

It is obvious that the Wronskians

ω_i(λ) := W(φ,χ)(x,λ) = φ1i(x,λ)χ2i(x,λ) − φ2i(x,λ)χ1i(x,λ),  x ∈ Γ_i, i = 1,2,
(2.27)

are independent of x ∈ Γ_i and are entire functions of λ. Taking into account (2.21) and (2.23), a short calculation gives

ω 1 (λ)= δ 2 ω 2 (λ)

for each λC.
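In the special case p1 = p2 = 0 the solutions φ and χ can be written down in closed form (the system then reduces to a plane rotation), which makes the identity ω1(λ) = δ²ω2(λ) easy to check numerically. The parameter values below are our own illustrative choice:

```python
import numpy as np

alpha, beta, a1, a2, delta = 0.3, 0.2, 1.0, 0.0, 1.5  # rho = a1*cos(beta) - a2*sin(beta) > 0
lam = 2.7

def rot(u1, u2, dx):
    # Solution operator of u2' = lam*u1, u1' = -lam*u2 (the system with p1 = p2 = 0):
    # (u1,u2) rotates by the angle lam*dx.
    c, s = np.cos(lam * dx), np.sin(lam * dx)
    return u1 * c - u2 * s, u1 * s + u2 * c

def phi(x):
    # phi solves (2.16)-(2.17) on [-1,0) and continues through the jump (2.21) on (0,1].
    if x <= 0:
        return rot(np.cos(alpha), np.sin(alpha), x + 1.0)
    l1, l2 = rot(np.cos(alpha), np.sin(alpha), 1.0)      # phi(0-)
    return rot(l1 / delta, l2 / delta, x)

def chi(x):
    # chi solves (2.18)-(2.19) on (0,1] and continues through the jump (2.23) on [-1,0).
    if x >= 0:
        return rot(a2 + lam * np.cos(beta), a1 + lam * np.sin(beta), x - 1.0)
    r1, r2 = rot(a2 + lam * np.cos(beta), a1 + lam * np.sin(beta), -1.0)  # chi(0+)
    return rot(delta * r1, delta * r2, x)

def wronskian(x):
    p1_, p2_ = phi(x)
    c1_, c2_ = chi(x)
    return p1_ * c2_ - p2_ * c1_

w1, w2 = wronskian(-0.4), wronskian(0.6)       # omega_1 and omega_2, up to rounding
```

The Wronskian is constant on each of Γ1, Γ2, and the two constants differ exactly by the factor δ².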

Corollary 2.3 The zeros of the functions ω 1 (λ) and ω 2 (λ) coincide.

Then we may introduce the characteristic function ω(λ) as

ω(λ):= ω 1 (λ)= δ 2 ω 2 (λ).
(2.28)

In the following lemma, we show that all eigenvalues of problem (2.1)-(2.5) are simple.

Lemma 2.4 All eigenvalues of problem (2.1)-(2.5) are just zeros of the function ω(λ). Moreover, every zero of ω(λ) has multiplicity one.

Proof Since the functions φ1(x,λ) and φ2(x,λ) satisfy the boundary condition (2.2) and both transmission conditions (2.4) and (2.5), to find the eigenvalues of problem (2.1)-(2.5) we have to insert the functions φ1(x,λ) and φ2(x,λ) into the boundary condition (2.3) and find the roots of the resulting equation.

By (2.1) we obtain for λ,μC, λμ,

d/dx {φ1(x,λ)φ2(x,μ) − φ1(x,μ)φ2(x,λ)} = (μ − λ){φ1(x,λ)φ1(x,μ) + φ2(x,λ)φ2(x,μ)}.

Integrating the above equation over [−1,0] and [0,1], and taking into account the initial conditions (2.17), (2.21) and (2.23), we obtain

δ²(φ12(1,λ)φ22(1,μ) − φ12(1,μ)φ22(1,λ)) = (μ − λ)(∫_{−1}^{0}(φ11(x,λ)φ11(x,μ) + φ21(x,λ)φ21(x,μ)) dx + δ² ∫_{0}^{1}(φ12(x,λ)φ12(x,μ) + φ22(x,λ)φ22(x,μ)) dx).
(2.29)

Dividing both sides of (2.29) by (μ − λ) and letting μ → λ, we arrive at the relation

δ²(φ22(1,λ) ∂φ12(1,λ)/∂λ − φ12(1,λ) ∂φ22(1,λ)/∂λ) = −(∫_{−1}^{0}(|φ11(x,λ)|² + |φ21(x,λ)|²) dx + δ² ∫_{0}^{1}(|φ12(x,λ)|² + |φ22(x,λ)|²) dx).
(2.30)

We now show that the equation

ω(λ) = δ² W(φ,χ)(1,λ) = δ²((a1 + λ sin β)φ12(1,λ) − (a2 + λ cos β)φ22(1,λ)) = 0
(2.31)

has only simple roots. Assume the converse, i.e., that equation (2.31) has a double root λ*. Then the following two equations hold:

(a1 + λ* sin β) φ12(1,λ*) − (a2 + λ* cos β) φ22(1,λ*) = 0,
(2.32)
sin β φ12(1,λ*) + (a1 + λ* sin β) ∂φ12(1,λ*)/∂λ − cos β φ22(1,λ*) − (a2 + λ* cos β) ∂φ22(1,λ*)/∂λ = 0.
(2.33)

Since ρ ≠ 0 and λ* is real, (a1 + λ* sin β)² + (a2 + λ* cos β)² ≠ 0. Let a1 + λ* sin β ≠ 0. From (2.32) and (2.33),

φ12(1,λ*) = [(a2 + λ* cos β)/(a1 + λ* sin β)] φ22(1,λ*),
∂φ12(1,λ*)/∂λ = ρ φ22(1,λ*)/(a1 + λ* sin β)² + [(a2 + λ* cos β)/(a1 + λ* sin β)] ∂φ22(1,λ*)/∂λ.

Combining (2.34) and (2.30), with λ = λ*, we obtain

ρδ²(φ22(1,λ*))²/(a1 + λ* sin β)² = −(∫_{−1}^{0}(|φ11(x,λ*)|² + |φ21(x,λ*)|²) dx + δ² ∫_{0}^{1}(|φ12(x,λ*)|² + |φ22(x,λ*)|²) dx),
(2.35)

contradicting the assumption ρ > 0. The other case, a2 + λ* cos β ≠ 0, can be treated similarly, and the proof is complete. □

Let {λn}_{n=−∞}^{∞} denote the sequence of zeros of ω(λ). Then

Φ(x,λn) := (φ(x,λn), R_β(φ(x,λn)))ᵀ
(2.36)

are the corresponding eigenvectors of the operator A. Since A is symmetric, then it is easy to show that the following orthogonality relation holds:

Φ ( , λ n ) , Φ ( , λ m ) H =0for nm.
(2.37)

Here {φ(·,λn)}_{n=−∞}^{∞} is a sequence of eigen-vector-functions of (2.1)-(2.5) corresponding to the eigenvalues {λn}_{n=−∞}^{∞}. We denote by Ψ(x,λn) the normalized eigenvectors of A, i.e.,

Ψ(x, λ n ):= Φ ( x , λ n ) Φ ( , λ n ) H =( ψ ( x , λ n ) R β ( ψ ( x , λ n ) ) ).
(2.38)

Since χ(,λ) satisfies (2.3)-(2.5), then the eigenvalues are also determined via

cos α χ21(−1,λ) − sin α χ11(−1,λ) = ω(λ).
(2.39)

Therefore {χ(·,λn)}_{n=−∞}^{∞} is another set of eigen-vector-functions, related to {φ(·,λn)}_{n=−∞}^{∞} by

χ(x,λn) = c_n φ(x,λn),  x ∈ [−1,0) ∪ (0,1], n ∈ ℤ,
(2.40)

where c_n are non-zero constants, since all eigenvalues are simple. Since the eigenvalues are all real, we can take the eigen-vector-functions to be real-valued.

Now we derive asymptotic formulae for the eigenvalues {λn}_{n=−∞}^{∞} and the eigen-vector-functions {φ(·,λn)}_{n=−∞}^{∞}. We transform equations (2.1), (2.17), (2.21) and (2.24) into integral equations, see [26], as follows:

φ11(x,λ) = cos(λ(x+1) + α) − ∫_{−1}^{x} sin λ(x−t) p1(t) φ11(t,λ) dt − ∫_{−1}^{x} cos λ(x−t) p2(t) φ21(t,λ) dt,
(2.41)
φ21(x,λ) = sin(λ(x+1) + α) + ∫_{−1}^{x} cos λ(x−t) p1(t) φ11(t,λ) dt − ∫_{−1}^{x} sin λ(x−t) p2(t) φ21(t,λ) dt,
(2.42)
φ12(x,λ) = −(1/δ) φ21(0⁻,λ) sin(λx) + (1/δ) φ11(0⁻,λ) cos(λx) − ∫_{0}^{x} sin λ(x−t) p1(t) φ12(t,λ) dt − ∫_{0}^{x} cos λ(x−t) p2(t) φ22(t,λ) dt,
(2.43)
φ22(x,λ) = (1/δ) φ11(0⁻,λ) sin(λx) + (1/δ) φ21(0⁻,λ) cos(λx) + ∫_{0}^{x} cos λ(x−t) p1(t) φ12(t,λ) dt − ∫_{0}^{x} sin λ(x−t) p2(t) φ22(t,λ) dt.
(2.44)

For |λ| → ∞ the following estimates hold uniformly with respect to x, x ∈ [−1,0) ∪ (0,1], cf. [[25], p.55]:

φ11(x,λ) = cos(λ(x+1) + α) + O(1/λ),
(2.45)
φ21(x,λ) = sin(λ(x+1) + α) + O(1/λ),
(2.46)
φ12(x,λ) = −(1/δ) φ21(0⁻,λ) sin(λx) + (1/δ) φ11(0⁻,λ) cos(λx) + O(1/λ),
(2.47)
φ22(x,λ) = (1/δ) φ11(0⁻,λ) sin(λx) + (1/δ) φ21(0⁻,λ) cos(λx) + O(1/λ).
(2.48)

Now we find an asymptotic formula of the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation

(a1 + λ sin β) φ12(1,λ) − (a2 + λ cos β) φ22(1,λ) = 0,
(2.49)

then from the estimates (2.47), (2.48) and (2.49), we get

λ sin β [−(1/δ) φ21(0⁻,λ) sin λ + (1/δ) φ11(0⁻,λ) cos λ] − λ cos β [(1/δ) φ11(0⁻,λ) sin λ + (1/δ) φ21(0⁻,λ) cos λ] + O(1) = 0,

which can be written as

(1/δ) φ11(0⁻,λ) sin(λ − β) + (1/δ) φ21(0⁻,λ) cos(λ − β) + O(1/λ) = 0.
(2.50)

Then, from (2.45) and (2.46), equation (2.50) has the form

sin(2λ + α − β) + O(1/λ) = 0.
(2.51)

For large |λ|, equation (2.51) obviously has solutions which, as is not hard to see, have the form

2λn + α − β = nπ + δn,  n = 0, ±1, ±2, … .
(2.52)

Inserting these values in (2.51), we find that sin δ n =O( 1 n ), i.e., δ n =O( 1 n ). Thus we obtain the following asymptotic formula for the eigenvalues:

λn = (nπ + β − α)/2 + O(1/n),  n = 0, ±1, ±2, … .
(2.53)
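The asymptotics (2.53) can be observed numerically in the explicitly solvable case p1 = p2 = 0 (our own illustrative setting; the parameter values are ours), where the characteristic function reduces to the closed form ω(λ) = δ[(a1 + λ sin β)cos(2λ + α) − (a2 + λ cos β)sin(2λ + α)]:

```python
import numpy as np
from scipy.optimize import brentq

alpha, beta, a1, a2, delta = 0.3, 0.2, 1.0, 0.0, 1.5

def omega(lam):
    # Characteristic function (2.31) for p1 = p2 = 0, using the closed-form values
    # phi12(1,lam) = cos(2*lam+alpha)/delta and phi22(1,lam) = sin(2*lam+alpha)/delta.
    return delta * ((a1 + lam * np.sin(beta)) * np.cos(2 * lam + alpha)
                    - (a2 + lam * np.cos(beta)) * np.sin(2 * lam + alpha))

# Bracket the zero of omega near the asymptotic value (n*pi + beta - alpha)/2
# and measure the gap, which should shrink like O(1/n).
gaps = []
for n in (10, 40, 160):
    approx = (n * np.pi + beta - alpha) / 2.0
    root = brentq(omega, approx - 0.5, approx + 0.5)
    gaps.append(abs(root - approx))
```

The computed gaps decrease roughly in proportion to 1/n, in agreement with (2.53).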

Using the formulae (2.53), we obtain the following asymptotic formulae for the eigen-vector-functions ϕ(, λ n ):

φ(x,λn) = { (cos(λn(x+1) + α) + O(1/n), sin(λn(x+1) + α) + O(1/n))ᵀ, x ∈ [−1,0);  ((1/δ)cos(λn(x+1) + α) + O(1/n), (1/δ)sin(λn(x+1) + α) + O(1/n))ᵀ, x ∈ (0,1],
(2.54)

where

φ(x,λn) = { (φ11(x,λn), φ21(x,λn))ᵀ, x ∈ [−1,0);  (φ12(x,λn), φ22(x,λn))ᵀ, x ∈ (0,1] }.
(2.55)

3 Green’s matrix and expansion theorem

Let F(·) = (f(·), w)ᵀ, where f(·) = (f1(·), f2(·))ᵀ, be a continuous vector-valued function. To study the completeness of the eigenvectors of A, and hence the completeness of the eigen-vector-functions of (2.1)-(2.5), we derive Green's function of problem (2.1)-(2.5) as well as the resolvent of A. Indeed, let λ not be an eigenvalue of A and consider the inhomogeneous problem

(A − λI)U(x) = F(x),  U(x) = (u(x), R_β(u(x)))ᵀ,

where I is the identity operator. Since

(A − λI)U(x) = (ℓ(u), −R_a(u(x)))ᵀ − λ(u(x), R_β(u(x)))ᵀ = (f(x), w)ᵀ,

then we have

u2′(x) − {p1(x) + λ} u1(x) = f1(x),  u1′(x) + {p2(x) + λ} u2(x) = −f2(x),  x ∈ [−1,0) ∪ (0,1],
(3.1)
w = −R_a(u(x)) − λ R_β(u(x)),
(3.2)

and the boundary conditions (2.2), (2.4) and (2.5), where λ is not an eigenvalue of problem (2.1)-(2.5).

Now, we can represent the general solution of (3.1) in the following form:

u(x,λ) = { A1 (φ11(x,λ), φ21(x,λ))ᵀ + B1 (χ11(x,λ), χ21(x,λ))ᵀ, x ∈ [−1,0);  A2 (φ12(x,λ), φ22(x,λ))ᵀ + B2 (χ12(x,λ), χ22(x,λ))ᵀ, x ∈ (0,1].
(3.3)

Applying the standard method of variation of constants to (3.3), the functions A1(x,λ), B1(x,λ) and A2(x,λ), B2(x,λ) satisfy the linear systems

A1′(x,λ) φ21(x,λ) + B1′(x,λ) χ21(x,λ) = f1(x),  A1′(x,λ) φ11(x,λ) + B1′(x,λ) χ11(x,λ) = −f2(x),  x ∈ [−1,0),
(3.4)

and

A2′(x,λ) φ22(x,λ) + B2′(x,λ) χ22(x,λ) = f1(x),  A2′(x,λ) φ12(x,λ) + B2′(x,λ) χ12(x,λ) = −f2(x),  x ∈ (0,1].
(3.5)

Since λ is not an eigenvalue, ω(λ) ≠ 0, and each of the linear systems (3.4) and (3.5) has a unique solution, which leads to

A1(x,λ) = (1/ω(λ)) ∫_{x}^{0} χᵀ(ξ,λ) f(ξ) dξ + A1,  B1(x,λ) = (1/ω(λ)) ∫_{−1}^{x} φᵀ(ξ,λ) f(ξ) dξ + B1,  x ∈ [−1,0),
(3.6)
A2(x,λ) = (δ²/ω(λ)) ∫_{x}^{1} χᵀ(ξ,λ) f(ξ) dξ + A2,  B2(x,λ) = (δ²/ω(λ)) ∫_{0}^{x} φᵀ(ξ,λ) f(ξ) dξ + B2,  x ∈ (0,1],
(3.7)

where A 1 , A 2 , B 1 and B 2 are arbitrary constants, and

ϕ(ξ,λ)= { ( ϕ 11 ( ξ , λ ) ϕ 21 ( ξ , λ ) ) , ξ [ 1 , 0 ) , ( ϕ 12 ( ξ , λ ) ϕ 22 ( ξ , λ ) ) , ξ ( 0 , 1 ] , χ(ξ,λ)= { ( χ 11 ( ξ , λ ) χ 21 ( ξ , λ ) ) , ξ [ 1 , 0 ) , ( χ 12 ( ξ , λ ) χ 22 ( ξ , λ ) ) , ξ ( 0 , 1 ] .

Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1)

u(x,λ) = { (φ(x,λ)/ω(λ)) ∫_{x}^{0} χᵀ(ξ,λ) f(ξ) dξ + (χ(x,λ)/ω(λ)) ∫_{−1}^{x} φᵀ(ξ,λ) f(ξ) dξ + A1 φ(x,λ) + B1 χ(x,λ), x ∈ [−1,0);  (δ² φ(x,λ)/ω(λ)) ∫_{x}^{1} χᵀ(ξ,λ) f(ξ) dξ + (δ² χ(x,λ)/ω(λ)) ∫_{0}^{x} φᵀ(ξ,λ) f(ξ) dξ + A2 φ(x,λ) + B2 χ(x,λ), x ∈ (0,1].
(3.8)

Then from (2.2), (3.2) and the transmission conditions (2.4) and (2.5), we get

A1 = (δ²/ω(λ)) ∫_{0}^{1} χᵀ(ξ,λ) f(ξ) dξ − wδ²/ω(λ),  B1 = 0,  A2 = −wδ²/ω(λ),  B2 = (1/ω(λ)) ∫_{−1}^{0} φᵀ(ξ,λ) f(ξ) dξ.

Then (3.8) can be written as

u(x,λ) = −(wδ²/ω(λ)) φ(x,λ) + (χ(x,λ)/ω(λ)) ∫_{−1}^{x} a(ξ) φᵀ(ξ,λ) f(ξ) dξ + (φ(x,λ)/ω(λ)) ∫_{x}^{1} a(ξ) χᵀ(ξ,λ) f(ξ) dξ,  x ∈ [−1,0) ∪ (0,1],
(3.9)

where

a(ξ) = { 1, ξ ∈ [−1,0);  δ², ξ ∈ (0,1] },
(3.10)

which can be written as

u(x,λ) = −(wδ²/ω(λ)) φ(x,λ) + ∫_{−1}^{1} a(ξ) G(x,ξ,λ) f(ξ) dξ,
(3.11)

where

G(x,ξ,λ) = (1/ω(λ)) { χ(x,λ) φᵀ(ξ,λ), −1 ≤ ξ ≤ x ≤ 1, x ≠ 0, ξ ≠ 0;  φ(x,λ) χᵀ(ξ,λ), −1 ≤ x ≤ ξ ≤ 1, x ≠ 0, ξ ≠ 0 }.
(3.12)

Expanding (3.12) we obtain the concrete form

G(x,ξ,λ) = (1/ω(λ)) { [ φ1(ξ,λ)χ1(x,λ)  φ2(ξ,λ)χ1(x,λ) ; φ1(ξ,λ)χ2(x,λ)  φ2(ξ,λ)χ2(x,λ) ], −1 ≤ ξ ≤ x ≤ 1, x ≠ 0, ξ ≠ 0;  [ φ1(x,λ)χ1(ξ,λ)  φ1(x,λ)χ2(ξ,λ) ; φ2(x,λ)χ1(ξ,λ)  φ2(x,λ)χ2(ξ,λ) ], −1 ≤ x ≤ ξ ≤ 1, x ≠ 0, ξ ≠ 0 }.
(3.13)

The matrix G(x,ξ,λ) is called Green's matrix of problem (2.1)-(2.5). Obviously, G(x,ξ,λ) is a meromorphic function of λ for every (x,ξ) ∈ ([−1,0) ∪ (0,1])², with simple poles only at the eigenvalues. Although Green's matrix looks as simple as that of continuous Dirac systems, cf., e.g., [25, 26], it is rather complicated because of the transmission conditions (see the example at the end of this paper). Therefore

U(x) = (A − λI)^{−1} F(x) = ( −(wδ²/ω(λ)) φ(x,λ) + ∫_{−1}^{1} a(ξ) G(x,ξ,λ) f(ξ) dξ,  R_β(u(x)) )ᵀ.
(3.14)

Lemma 3.1 The operator A is self-adjoint in H.

Proof Since A is a symmetric densely defined operator, it suffices to show that the deficiency spaces are the null spaces, and hence A = A*. Indeed, if F(x) = (f(x), w)ᵀ ∈ H and λ is a non-real number, then taking

U(x) = (u(x), z)ᵀ = ( −(wδ²/ω(λ)) φ(x,λ) + ∫_{−1}^{1} a(ξ) G(x,ξ,λ) f(ξ) dξ,  R_β(u(x)) )ᵀ

implies that UD(A). Since G(x,ξ,λ) satisfies the conditions (2.2)-(2.5), then (AλI)U(x)=F(x). Now we prove that the inverse of (AλI) exists. If AU(x)=λU(x), then

(λ̄ − λ)⟨U(·), U(·)⟩_H = ⟨U(·), λU(·)⟩_H − ⟨λU(·), U(·)⟩_H = ⟨U(·), AU(·)⟩_H − ⟨AU(·), U(·)⟩_H = 0  (since A is symmetric).

Since λ ∉ ℝ, we have λ̄ − λ ≠ 0. Thus ⟨U(·), U(·)⟩_H = 0, i.e., U = 0. Then R(λ;A) := (A − λI)^{−1}, the resolvent operator of A, exists. Thus

R(λ;A)F= ( A λ I ) 1 F=U.

Take λ = ±i. The domains of (A − iI)^{−1} and (A + iI)^{−1} are exactly H. Consequently, the ranges of (A − iI) and (A + iI) are also H. Hence the deficiency spaces of A are

N_{−i} := N(A* + iI) = R(A − iI)^⊥ = H^⊥ = {0},
N_{i} := N(A* − iI) = R(A + iI)^⊥ = H^⊥ = {0}.

Hence A is self-adjoint. □

The next theorem is an eigenfunction expansion theorem. The proof is exactly similar to that of Levitan and Sargsjan derived in [[25], pp.67-77]; see also [2629].

Theorem 3.2

(i) For U(·) ∈ H,

‖U(·)‖²_H = Σ_{n=−∞}^{∞} |⟨U(·), Ψn(·)⟩_H|².
(3.15)

(ii) For U(·) ∈ D(A),

U(x) = Σ_{n=−∞}^{∞} ⟨U(·), Ψn(·)⟩_H Ψn(x),
(3.16)

the series being absolutely and uniformly convergent in the first component on [−1,0) ∪ (0,1], and absolutely convergent in the second component.

4 The sampling theorems

The first sampling theorem of this section associated with the boundary value problem (2.1)-(2.5) is the following theorem.

Theorem 4.1 Let f(x) = (f1(x), f2(x))ᵀ, with f1(·), f2(·) ∈ L²(−1,1). For λ ∈ ℂ, let

F(λ) = ∫_{−1}^{0} fᵀ(x) φ(x,λ) dx + δ² ∫_{0}^{1} fᵀ(x) φ(x,λ) dx,
(4.1)

where φ(·,λ) is the solution defined above. Then F(λ) is an entire function of exponential type that can be reconstructed from its values at the points {λn}_{n=−∞}^{∞} via the sampling formula

F(λ) = Σ_{n=−∞}^{∞} F(λn) ω(λ)/((λ − λn) ω′(λn)).
(4.2)

The series (4.2) converges absolutely on ℂ and uniformly on any compact subset of ℂ, and ω(λ) is the entire function defined in (2.28).

Proof The relation (4.1) can be rewritten in the form

F(λ) = ⟨F(·), Φ(·,λ)⟩_H = ∫_{−1}^{0} fᵀ(x) φ(x,λ) dx + δ² ∫_{0}^{1} fᵀ(x) φ(x,λ) dx,  λ ∈ ℂ,
(4.3)

where

F(x) = (f(x), 0)ᵀ,  Φ(x,λ) = (φ(x,λ), R_β(φ(x,λ)))ᵀ ∈ H.

Since both F() and Φ(,λ) are in H, then they have the Fourier expansions

F(x) = Σ_{n=−∞}^{∞} f̂(n) Φ(x,λn)/‖Φ(·,λn)‖²_H,  Φ(x,λ) = Σ_{n=−∞}^{∞} ⟨Φ(·,λ), Φ(·,λn)⟩_H Φ(x,λn)/‖Φ(·,λn)‖²_H,
(4.4)

where λC and f ˆ (n) are the Fourier coefficients

f̂(n) = ⟨F(·), Φ(·,λn)⟩_H = ∫_{−1}^{0} fᵀ(x) φ(x,λn) dx + δ² ∫_{0}^{1} fᵀ(x) φ(x,λn) dx = F(λn).
(4.5)

Applying Parseval’s identity to (4.3), we obtain

F(λ) = Σ_{n=−∞}^{∞} F(λn) ⟨Φ(·,λ), Φ(·,λn)⟩_H / ‖Φ(·,λn)‖²_H,  λ ∈ ℂ.
(4.6)

Now we calculate ⟨Φ(·,λ), Φ(·,λn)⟩_H and ‖Φ(·,λn)‖_H for λ ∈ ℂ, n ∈ ℤ. To prove expansion (4.2), we need to show that

⟨Φ(·,λ), Φ(·,λn)⟩_H / ‖Φ(·,λn)‖²_H = ω(λ)/((λ − λn) ω′(λn)),  n ∈ ℤ, λ ∈ ℂ.
(4.7)

Indeed, let λC and nZ be fixed. By the definition of the inner product of H, we have

⟨Φ(·,λ), Φ(·,λn)⟩_H = ∫_{−1}^{0} φᵀ(x,λ) φ(x,λn) dx + δ² ∫_{0}^{1} φᵀ(x,λ) φ(x,λn) dx + (δ²/ρ) R_β(φ(x,λ)) R_β(φ(x,λn)).
(4.8)

From Green’s identity, see [[25], p.51], we have

(λn − λ)[∫_{−1}^{0} φᵀ(x,λ) φ(x,λn) dx + δ² ∫_{0}^{1} φᵀ(x,λ) φ(x,λn) dx] = W(φ(0⁻,λ), φ(0⁻,λn)) − W(φ(−1,λ), φ(−1,λn)) − δ² W(φ(0⁺,λ), φ(0⁺,λn)) + δ² W(φ(1,λ), φ(1,λn)).
(4.9)

Then (4.9), together with the initial conditions (2.17) and (2.21) (the terms at x = −1 vanish and the terms at x = 0 cancel), implies

∫_{−1}^{0} φᵀ(x,λ) φ(x,λn) dx + δ² ∫_{0}^{1} φᵀ(x,λ) φ(x,λn) dx = δ² W(φ(1,λ), φ(1,λn))/(λn − λ).
(4.10)

From (2.40), (2.19) and (2.7), we have

W(φ(1,λ), φ(1,λn)) = φ12(1,λ) φ22(1,λn) − φ12(1,λn) φ22(1,λ) = c_n^{−1}[φ12(1,λ) χ22(1,λn) − χ12(1,λn) φ22(1,λ)] = c_n^{−1}[(a1 + λn sin β) φ12(1,λ) − (a2 + λn cos β) φ22(1,λ)] = c_n^{−1}[ω(λ)/δ² + (λn − λ) R_β(φ(x,λ))].
(4.11)

Also, from (2.40) we have

(δ²/ρ) R_β(φ(x,λ)) R_β(φ(x,λn)) = (δ²/(c_n ρ)) R_β(φ(x,λ)) R_β(χ(x,λn)).
(4.12)

Then from (2.26) and (4.12) we obtain

(δ²/ρ) R_β(φ(x,λ)) R_β(φ(x,λn)) = −(δ²/c_n) R_β(φ(x,λ)).
(4.13)

Substituting from (4.10), (4.11) and (4.13) into (4.8), we get

⟨Φ(·,λ), Φ(·,λn)⟩_H = c_n^{−1} ω(λ)/(λn − λ).
(4.14)

Letting λ → λn in (4.14), and using that the zeros of ω(λ) are simple, we get

⟨Φ(·,λn), Φ(·,λn)⟩_H = ‖Φ(·,λn)‖²_H = −c_n^{−1} ω′(λn).
(4.15)

Since λ ∈ ℂ and n ∈ ℤ are arbitrary, (4.14) and (4.15) hold for all λ ∈ ℂ and all n ∈ ℤ. Therefore, from (4.14) and (4.15), we get (4.7). Hence (4.2) is proved with pointwise convergence on ℂ. Now we investigate the convergence of (4.2). First we prove that it is absolutely convergent on ℂ. Using the Cauchy-Schwarz inequality, for λ ∈ ℂ,

Σ_{k=−∞}^{∞} |F(λk) ω(λ)/((λ − λk) ω′(λk))| ≤ (Σ_{k=−∞}^{∞} |⟨F(·), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2} × (Σ_{k=−∞}^{∞} |⟨Φ(·,λ), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2}.
(4.16)

Since F(·), Φ(·,λ) ∈ H, the two series on the right-hand side of (4.16) converge. Thus series (4.2) converges absolutely on ℂ. As for uniform convergence, let M ⊂ ℂ be compact, λ ∈ M and N > 0. Define ν_N(λ) to be

ν_N(λ) := |F(λ) − Σ_{k=−N}^{N} F(λk) ω(λ)/((λ − λk) ω′(λk))|.
(4.17)

Using the same method developed above, we get

ν_N(λ) ≤ (Σ_{|k|>N} |⟨F(·), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2} (Σ_{|k|>N} |⟨Φ(·,λ), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2}.
(4.18)

Therefore

ν_N(λ) ≤ ‖Φ(·,λ)‖_H (Σ_{|k|>N} |⟨F(·), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2}.
(4.19)

Since [1,1]×M is compact, then we can find a positive constant C M such that

‖Φ(·,λ)‖_H ≤ C_M for all λ ∈ M.
(4.20)

Then

ν_N(λ) ≤ C_M (Σ_{|k|>N} |⟨F(·), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2}
(4.21)

uniformly on M. In view of Parseval's equality,

(Σ_{|k|>N} |⟨F(·), Φ(·,λk)⟩_H|²/‖Φ(·,λk)‖²_H)^{1/2} → 0  as N → ∞.

Thus ν_N(λ) → 0 uniformly on M. Hence (4.2) converges uniformly on M. Thus F(λ) is an entire function. From the relation

|F(λ)| ≤ ∫_{−1}^{0} |f1(x)||φ11(x,λ)| dx + ∫_{−1}^{0} |f2(x)||φ21(x,λ)| dx + δ² ∫_{0}^{1} |f1(x)||φ12(x,λ)| dx + δ² ∫_{0}^{1} |f2(x)||φ22(x,λ)| dx,  λ ∈ ℂ,

and the fact that ϕ i j (,λ), i,j=1,2, are entire functions of exponential type, we conclude that F(λ) is of exponential type. □
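To see Theorem 4.1 at work, consider once more the special case p1 = p2 = 0, where φ, ω and the transform (4.1) of a constant f are all available in closed form. All parameter choices and the truncation range below are our own, and the small residual is pure truncation error of the series (4.2):

```python
import numpy as np
from scipy.optimize import brentq

alpha, beta, a1, a2, delta = 0.3, 0.2, 1.0, 0.0, 1.5

def omega(lam):
    # Characteristic function (2.31) for p1 = p2 = 0.
    return delta * ((a1 + lam * np.sin(beta)) * np.cos(2 * lam + alpha)
                    - (a2 + lam * np.cos(beta)) * np.sin(2 * lam + alpha))

def F(lam):
    # Transform (4.1) of f = (1, 0)^T, integrated in closed form using
    # phi11 = cos(lam(x+1)+alpha) on [-1,0) and phi12 = cos(lam(x+1)+alpha)/delta on (0,1].
    return ((np.sin(lam + alpha) - np.sin(alpha)) / lam
            + delta * (np.sin(2 * lam + alpha) - np.sin(lam + alpha)) / lam)

# Eigenvalues: simple real zeros of omega, located via sign changes on a fine grid.
grid = np.arange(-160.0, 160.0, 0.02)
vals = omega(grid)
eigs = [brentq(omega, grid[i], grid[i + 1])
        for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]

def omega_prime(lam, h=1e-6):
    return (omega(lam + h) - omega(lam - h)) / (2.0 * h)

lam_star = 0.7                                 # an arbitrary non-eigenvalue point
series = sum(F(ln) * omega(lam_star) / ((lam_star - ln) * omega_prime(ln))
             for ln in eigs)
rel_err = abs(series - F(lam_star)) / abs(F(lam_star))
```

With a few hundred eigenvalues the truncated series (4.2) reproduces F(λ*) to high relative accuracy, as the theorem predicts.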

Remark 4.2 To see that expansion (4.2) is a Lagrange-type interpolation, we may replace ω(λ) by the canonical product

ω̃(λ) = (λ − λ0) Π_{n=1}^{∞} (1 − λ/λn)(1 − λ/λ_{−n}).
(4.22)

From Hadamard’s factorization theorem, see [4], ω(λ)=h(λ) ω ˜ (λ), where h(λ) is an entire function with no zeros. Thus,

$$\frac{\omega(\lambda)}{\omega'(\lambda_{n})}=\frac{h(\lambda)\,\tilde{\omega}(\lambda)}{h(\lambda_{n})\,\tilde{\omega}'(\lambda_{n})}$$

and (4.1), (4.2) remain valid for the function F(λ)/h(λ). Hence

$$F(\lambda)=\sum_{n=-\infty}^{\infty}F(\lambda_{n})\,\frac{h(\lambda)\,\tilde{\omega}(\lambda)}{h(\lambda_{n})\,\tilde{\omega}'(\lambda_{n})\,(\lambda-\lambda_{n})}.$$
(4.23)

We may redefine (4.1) by taking the kernel $\tilde{\phi}(\cdot,\lambda):=\phi(\cdot,\lambda)/h(\lambda)$ to get

$$\tilde{F}(\lambda)=\frac{F(\lambda)}{h(\lambda)}=\sum_{n=-\infty}^{\infty}\tilde{F}(\lambda_{n})\,\frac{\tilde{\omega}(\lambda)}{(\lambda-\lambda_{n})\,\tilde{\omega}'(\lambda_{n})}.$$
(4.24)
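Hadamard's factorization in Remark 4.2 can be illustrated numerically. Assuming the model zero set $\lambda_{0}=0$, $\lambda_{\pm n}=\pm n$ (illustrative only, not the spectrum of the problem), the canonical product (4.22) converges to $\sin(\pi\lambda)/\pi$, so $\omega(\lambda)=\sin(\pi\lambda)$ factors with the constant zero-free function $h(\lambda)=\pi$:

```python
import math

def omega_tilde(lam, N):
    # Truncated canonical product for the model zero set {0, +-1, +-2, ...},
    # i.e. lambda_0 = 0, lambda_n = n, lambda_{-n} = -n.
    prod = lam
    for n in range(1, N + 1):
        prod *= (1.0 - lam / n) * (1.0 + lam / n)
    return prod

# Euler's product gives sin(pi*lam) = pi * lam * prod_n (1 - lam^2/n^2),
# so the limit of the truncated product is sin(pi*lam)/pi.
lam = 0.4
approx = omega_tilde(lam, 50000)
exact = math.sin(math.pi * lam) / math.pi
print(approx, exact)
```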

The next theorem is devoted to giving vector-type interpolation sampling expansions associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined in terms of Green's matrix. As we see in (3.12), Green's matrix $G(x,\xi,\lambda)$ of problem (2.1)-(2.5) has simple poles at $\{\lambda_{k}\}_{k=-\infty}^{\infty}$. Define the function $G(x,\lambda)$ to be $G(x,\lambda):=\omega(\lambda)G(x,\xi_{0},\lambda)$, where $\xi_{0}\in[-1,0)\cup(0,1]$ is a fixed point and $\omega(\lambda)$ is the function defined in (2.28) or the canonical product (4.22).

Theorem 4.3 Let $f(x)=\begin{pmatrix}f_{1}(x)\\ f_{2}(x)\end{pmatrix}\in H$. Let $F(\lambda)=\begin{pmatrix}F_{1}(\lambda)\\ F_{2}(\lambda)\end{pmatrix}$ be the vector-valued transform

$$F(\lambda)=\int_{-1}^{0}G(x,\lambda)\,\bar{f}(x)\,dx+\delta^{2}\int_{0}^{1}G(x,\lambda)\,\bar{f}(x)\,dx.$$
(4.25)

Then F(λ) is a vector-valued entire function of exponential type that admits the vector-valued sampling expansion

$$F(\lambda)=\sum_{n=-\infty}^{\infty}F(\lambda_{n})\,\frac{\omega(\lambda)}{(\lambda-\lambda_{n})\,\omega'(\lambda_{n})}.$$
(4.26)

The vector-valued series (4.26) converges absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$. Here (4.26) means

$$F_{1}(\lambda)=\sum_{n=-\infty}^{\infty}F_{1}(\lambda_{n})\,\frac{\omega(\lambda)}{(\lambda-\lambda_{n})\,\omega'(\lambda_{n})},\qquad F_{2}(\lambda)=\sum_{n=-\infty}^{\infty}F_{2}(\lambda_{n})\,\frac{\omega(\lambda)}{(\lambda-\lambda_{n})\,\omega'(\lambda_{n})},$$
(4.27)

where both series converge absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$.

Proof The integral transform (4.25) can be written as

$$F(\lambda)=\bigl\langle\mathbf{G}(\cdot,\lambda),\mathbf{F}(\cdot)\bigr\rangle_{H},\qquad \mathbf{F}(x)=\begin{pmatrix}f(x)\\ 0\end{pmatrix},\qquad \mathbf{G}(x,\lambda)=\begin{pmatrix}G(x,\lambda)\\ R_{\beta}\bigl(G(x,\lambda)\bigr)\end{pmatrix}\in H.$$
(4.28)

Applying Parseval’s identity to (4.28) with respect to { Φ ( , λ n ) } n = , we obtain

$$F(\lambda)=\sum_{n=-\infty}^{\infty}\frac{\bigl\langle\mathbf{G}(\cdot,\lambda),\Phi(\cdot,\lambda_{n})\bigr\rangle_{H}\,\overline{\bigl\langle\mathbf{F}(\cdot),\Phi(\cdot,\lambda_{n})\bigr\rangle}_{H}}{\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}}.$$
(4.29)

Let $\lambda\in\mathbb{C}$ be such that $\lambda\ne\lambda_{n}$ for all $n\in\mathbb{Z}$. Since each $\Phi(\cdot,\lambda_{n})$ is an eigenvector of $A$, we have

$$(A-\lambda I)\Phi(x,\lambda_{n})=(\lambda_{n}-\lambda)\Phi(x,\lambda_{n}).$$

Thus

$$(A-\lambda I)^{-1}\Phi(x,\lambda_{n})=\frac{1}{\lambda_{n}-\lambda}\,\Phi(x,\lambda_{n}).$$
(4.30)
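Identity (4.30) is just the action of the resolvent on an eigenvector. A finite-dimensional sketch with a hypothetical symmetric $2\times2$ matrix (not the operator $A$ of the paper):

```python
# Finite-dimensional analogue of (4.30): if A v = mu * v and lam != mu,
# then (A - lam*I)^{-1} v = v / (mu - lam).
A = [[2.0, 1.0], [1.0, 2.0]]      # eigenpair: mu = 3 with v = (1, 1)
v = [1.0, 1.0]
mu, lam = 3.0, 0.5

# Solve (A - lam*I) w = v by Cramer's rule for the 2x2 system.
a, b = A[0][0] - lam, A[0][1]
c, d = A[1][0], A[1][1] - lam
det = a * d - b * c
w = [(v[0] * d - b * v[1]) / det, (a * v[1] - c * v[0]) / det]

expected = 1.0 / (mu - lam)       # each component of w should equal this
print(w, expected)
```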

From (3.14) and (4.30) we obtain

$$\frac{\delta^{2}R_{\beta}\bigl(\phi(x,\lambda_{n})\bigr)}{\omega(\lambda)}\,\phi(\xi_{0},\lambda)+\int_{-1}^{1}a(x)\,G(x,\xi_{0},\lambda)\,\phi(x,\lambda_{n})\,dx=\frac{1}{\lambda_{n}-\lambda}\,\phi(\xi_{0},\lambda_{n}).$$
(4.31)

Then, using (2.26) and (2.40) in (4.31), we get

$$\frac{\rho\,\delta^{2}c_{n}^{-1}}{\omega(\lambda)}\,\phi(\xi_{0},\lambda)+\int_{-1}^{1}a(x)\,G(x,\xi_{0},\lambda)\,\phi(x,\lambda_{n})\,dx=\frac{1}{\lambda_{n}-\lambda}\,\phi(\xi_{0},\lambda_{n}).$$
(4.32)

Multiplying by $\omega(\lambda)$ and using the definition of $G(x,\lambda)$, equation (4.32) can be rewritten as

$$\rho\,\delta^{2}c_{n}^{-1}\,\phi(\xi_{0},\lambda)+\int_{-1}^{0}G(x,\lambda)\,\phi(x,\lambda_{n})\,dx+\delta^{2}\int_{0}^{1}G(x,\lambda)\,\phi(x,\lambda_{n})\,dx=\frac{\omega(\lambda)}{\lambda_{n}-\lambda}\,\phi(\xi_{0},\lambda_{n}).$$
(4.33)

The definition of $\mathbf{G}(\cdot,\lambda)$ implies

$$\bigl\langle\mathbf{G}(\cdot,\lambda),\Phi(\cdot,\lambda_{n})\bigr\rangle_{H}=\int_{-1}^{0}G(x,\lambda)\,\phi(x,\lambda_{n})\,dx+\delta^{2}\int_{0}^{1}G(x,\lambda)\,\phi(x,\lambda_{n})\,dx+\frac{1}{\rho}\,R_{\beta}\bigl(G(x,\lambda)\bigr)\,R_{\beta}\bigl(\phi(x,\lambda_{n})\bigr).$$
(4.34)

Moreover, from (3.12) we have

$$R_{\beta}\bigl(G(x,\lambda)\bigr)=\phi(\xi_{0},\lambda)\,R_{\beta}\bigl(\chi(x,\lambda)\bigr).$$
(4.35)

Substituting (4.35), (2.26) and (2.40) into (4.34), we obtain

$$\bigl\langle\mathbf{G}(\cdot,\lambda),\Phi(\cdot,\lambda_{n})\bigr\rangle_{H}=\int_{-1}^{0}G(x,\lambda)\,\phi(x,\lambda_{n})\,dx+\delta^{2}\int_{0}^{1}G(x,\lambda)\,\phi(x,\lambda_{n})\,dx+\rho\,\delta^{2}c_{n}^{-1}\,\phi(\xi_{0},\lambda).$$
(4.36)

Combining (4.36) and (4.33) yields

$$\bigl\langle\mathbf{G}(\cdot,\lambda),\Phi(\cdot,\lambda_{n})\bigr\rangle_{H}=\frac{\omega(\lambda)}{\lambda_{n}-\lambda}\,\phi(\xi_{0},\lambda_{n}).$$
(4.37)

Taking the limit as $\lambda\to\lambda_{n}$ in (4.28), we get

$$F(\lambda_{n})=\lim_{\lambda\to\lambda_{n}}\bigl\langle\mathbf{G}(\cdot,\lambda),\mathbf{F}(\cdot)\bigr\rangle_{H}=\lim_{\lambda\to\lambda_{n}}\sum_{k=-\infty}^{\infty}\frac{\bigl\langle\mathbf{G}(\cdot,\lambda),\Phi(\cdot,\lambda_{k})\bigr\rangle_{H}\,\bigl\langle\Phi(\cdot,\lambda_{k}),\mathbf{F}(\cdot)\bigr\rangle_{H}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}.$$
(4.38)

Making use of (4.37), we may rewrite (4.38), for $\xi_{0}\in[-1,0)\cup(0,1]$, as

$$F(\lambda_{n})=\begin{pmatrix}F_{1}(\lambda_{n})\\ F_{2}(\lambda_{n})\end{pmatrix}=\begin{pmatrix}\displaystyle\lim_{\lambda\to\lambda_{n}}\sum_{k=-\infty}^{\infty}\frac{\omega(\lambda)}{\lambda_{k}-\lambda}\,\phi_{1}(\xi_{0},\lambda_{k})\,\frac{\bigl\langle\Phi(\cdot,\lambda_{k}),\mathbf{F}(\cdot)\bigr\rangle_{H}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\\[3mm] \displaystyle\lim_{\lambda\to\lambda_{n}}\sum_{k=-\infty}^{\infty}\frac{\omega(\lambda)}{\lambda_{k}-\lambda}\,\phi_{2}(\xi_{0},\lambda_{k})\,\frac{\bigl\langle\Phi(\cdot,\lambda_{k}),\mathbf{F}(\cdot)\bigr\rangle_{H}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\end{pmatrix}=\begin{pmatrix}\displaystyle-\omega'(\lambda_{n})\,\phi_{1}(\xi_{0},\lambda_{n})\,\frac{\bigl\langle\Phi(\cdot,\lambda_{n}),\mathbf{F}(\cdot)\bigr\rangle_{H}}{\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}}\\[3mm] \displaystyle-\omega'(\lambda_{n})\,\phi_{2}(\xi_{0},\lambda_{n})\,\frac{\bigl\langle\Phi(\cdot,\lambda_{n}),\mathbf{F}(\cdot)\bigr\rangle_{H}}{\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}}\end{pmatrix}.$$
(4.39)

The interchange of the limit and summation is justified by the asymptotic behavior of $\Phi(x,\lambda_{n})$ and that of $\omega(\lambda)$. If $\phi_{1}(\xi_{0},\lambda_{n})\ne0$ and $\phi_{2}(\xi_{0},\lambda_{n})\ne0$, then (4.39) gives

$$\frac{\overline{\bigl\langle\mathbf{F}(\cdot),\Phi(\cdot,\lambda_{n})\bigr\rangle}_{H}}{\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}}=-\frac{F_{1}(\lambda_{n})}{\omega'(\lambda_{n})\,\phi_{1}(\xi_{0},\lambda_{n})},\qquad \frac{\overline{\bigl\langle\mathbf{F}(\cdot),\Phi(\cdot,\lambda_{n})\bigr\rangle}_{H}}{\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}}=-\frac{F_{2}(\lambda_{n})}{\omega'(\lambda_{n})\,\phi_{2}(\xi_{0},\lambda_{n})}.$$
(4.40)

Combining (4.37), (4.40) and (4.29), we get (4.27) under the assumption that $\phi_{1}(\xi_{0},\lambda_{n})\ne0$ and $\phi_{2}(\xi_{0},\lambda_{n})\ne0$ for all $n$. If $\phi_{i}(\xi_{0},\lambda_{n})=0$ for some $n$, $i=1$ or $2$, the same expansions hold with $F_{i}(\lambda_{n})=0$. The convergence properties as well as the analytic and growth properties can be established as in Theorem 4.1 above. □
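The key limit behind the last step of (4.39) is $\omega(\lambda)/(\lambda_{n}-\lambda)\to-\omega'(\lambda_{n})$ as $\lambda\to\lambda_{n}$, since $\omega(\lambda_{n})=0$. A quick numerical check with a model function (the choices $\omega(\lambda)=\sin(\pi\lambda)$ and $\lambda_{n}=1$ are illustrative only):

```python
import math

# Model data: omega(lam) = sin(pi*lam) has a zero at lam_n = 1,
# and -omega'(1) = -pi*cos(pi) = pi.
omega = lambda lam: math.sin(math.pi * lam)
lam_n = 1.0
eps = 1e-6
val = omega(lam_n + eps) / (lam_n - (lam_n + eps))   # omega(lam)/(lam_n - lam) near lam_n
minus_omega_prime = -math.pi * math.cos(math.pi * lam_n)
print(val, minus_omega_prime)
```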

We now give an example illustrating the previous results.

Example 4.1

Consider the system

$$u_{2}'-p(x)u_{1}=\lambda u_{1},\qquad u_{1}'+p(x)u_{2}=-\lambda u_{2},\quad x\in[-1,0)\cup(0,1],$$
(4.41)
$$\sin\alpha\,u_{1}(-1)-\cos\alpha\,u_{2}(-1)=0,$$
(4.42)
$$(\cos\beta+\lambda\sin\beta)\,u_{1}(1)+(\sin\beta-\lambda\cos\beta)\,u_{2}(1)=0$$
(4.43)

and transmission conditions

$$u_{1}(0^{-})-\delta u_{1}(0^{+})=0,$$
(4.44)
$$u_{2}(0^{-})-\delta u_{2}(0^{+})=0.$$
(4.45)

This problem is a special case of problem (2.1)-(2.5) when $p_{1}(x)=p_{2}(x)=p(x)$ and $a_{1}=\cos\beta$, $a_{2}=\sin\beta$. Then $\rho=1>0$. For simplicity, we define

$$P_{1}(x):=\int_{-1}^{x}p(t)\,dt,\qquad P_{2}(x):=\int_{x}^{1}p(t)\,dt,\quad x\in[-1,0)\cup(0,1].$$

In the notation of the previous sections, the solutions $\phi(\cdot,\lambda)$ and $\chi(\cdot,\lambda)$ are

$$\phi(x,\lambda)=\begin{cases}\begin{pmatrix}\cos[\zeta_{1}(x,\lambda)]\\ \sin[\zeta_{1}(x,\lambda)]\end{pmatrix}, & x\in[-1,0),\\[3mm] \begin{pmatrix}\frac{1}{\delta}\cos[\zeta_{1}(x,\lambda)]\\ \frac{1}{\delta}\sin[\zeta_{1}(x,\lambda)]\end{pmatrix}, & x\in(0,1],\end{cases}$$
(4.46)
$$\chi(x,\lambda)=\begin{cases}\begin{pmatrix}\delta\bigl(-\sin[\zeta_{2}(x,\lambda)]+\lambda\cos[\zeta_{2}(x,\lambda)]\bigr)\\ \delta\bigl(\cos[\zeta_{2}(x,\lambda)]+\lambda\sin[\zeta_{2}(x,\lambda)]\bigr)\end{pmatrix}, & x\in[-1,0),\\[3mm] \begin{pmatrix}-\sin[\zeta_{2}(x,\lambda)]+\lambda\cos[\zeta_{2}(x,\lambda)]\\ \cos[\zeta_{2}(x,\lambda)]+\lambda\sin[\zeta_{2}(x,\lambda)]\end{pmatrix}, & x\in(0,1],\end{cases}$$
(4.47)

where

$$\zeta_{1}(x,\lambda):=\lambda(x+1)+P_{1}(x)+\alpha,\qquad \zeta_{2}(x,\lambda):=\lambda(x-1)-P_{2}(x)+\beta.$$
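One can check numerically that $\phi$ in (4.46) solves system (4.41) on $[-1,0)$ and meets the boundary condition at $x=-1$. The potential $p(x)=x^{2}$ and the values of $\lambda$, $\alpha$ below are arbitrary illustrative choices:

```python
import math

# Verify by central differences that u = (cos z1, sin z1), with
# z1(x) = lam*(x+1) + P1(x) + alpha and z1' = lam + p, satisfies
# u2' - p*u1 = lam*u1 and u1' + p*u2 = -lam*u2 on [-1, 0).
lam, alpha = 1.7, 0.3
p  = lambda x: x * x
P1 = lambda x: (x ** 3 + 1.0) / 3.0          # integral of p from -1 to x
z1 = lambda x: lam * (x + 1.0) + P1(x) + alpha
u1 = lambda x: math.cos(z1(x))
u2 = lambda x: math.sin(z1(x))

h, x = 1e-6, -0.4                            # interior point of [-1, 0)
du1 = (u1(x + h) - u1(x - h)) / (2.0 * h)
du2 = (u2(x + h) - u2(x - h)) / (2.0 * h)
r1 = du2 - p(x) * u1(x) - lam * u1(x)        # residual of the first equation
r2 = du1 + p(x) * u2(x) + lam * u2(x)        # residual of the second equation
print(r1, r2)
```

At $x=-1$ we have $\zeta_{1}(-1,\lambda)=\alpha$, so $\phi(-1,\lambda)=(\cos\alpha,\sin\alpha)^{T}$ satisfies the boundary condition there.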

The eigenvalues are the solutions of the equation

$$\omega(\lambda)=\delta^{2}\left[\frac{1}{\delta}(\cos\beta+\lambda\sin\beta)\cos[\zeta_{1}(1,\lambda)]+\frac{1}{\delta}(\sin\beta-\lambda\cos\beta)\sin[\zeta_{1}(1,\lambda)]\right]=0,$$

which can be rewritten as

$$\cos[\zeta_{1}(1,\lambda)-\beta]-\lambda\sin[\zeta_{1}(1,\lambda)-\beta]=0.$$
(4.48)
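Eigenvalues can be located numerically as roots of (4.48). In the unperturbed case $p\equiv0$ with $\alpha=\beta=0$ (illustrative choices), $\zeta_{1}(1,\lambda)=2\lambda$ and (4.48) becomes $\cos2\lambda-\lambda\sin2\lambda=0$; bisection on an interval where the left-hand side changes sign gives a root:

```python
import math

# With p = 0 and alpha = beta = 0, zeta_1(1, lam) = 2*lam,
# so (4.48) reads cos(2*lam) - lam*sin(2*lam) = 0.
def w(lam):
    return math.cos(2.0 * lam) - lam * math.sin(2.0 * lam)

def bisect(a, b, tol=1e-12):
    # plain bisection; assumes w changes sign on [a, b]
    fa = w(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * w(m) <= 0.0:
            b = m
        else:
            a, fa = m, w(m)
    return 0.5 * (a + b)

root = bisect(0.1, 1.0)           # w(0.1) > 0 > w(1.0)
print(root)
```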

Green’s function of problem (4.41)-(4.45) is given by

$$G(x,\xi,\lambda)=\frac{1}{\delta\bigl[\cos[\zeta_{1}(1,\lambda)-\beta]-\lambda\sin[\zeta_{1}(1,\lambda)-\beta]\bigr]}$$
$$\times\begin{cases}\begin{pmatrix}\delta\cos[\zeta_{1}(\xi,\lambda)]\bigl(-\sin[\zeta_{2}(x,\lambda)]+\lambda\cos[\zeta_{2}(x,\lambda)]\bigr) & \delta\sin[\zeta_{1}(\xi,\lambda)]\bigl(-\sin[\zeta_{2}(x,\lambda)]+\lambda\cos[\zeta_{2}(x,\lambda)]\bigr)\\ \delta\cos[\zeta_{1}(\xi,\lambda)]\bigl(\cos[\zeta_{2}(x,\lambda)]+\lambda\sin[\zeta_{2}(x,\lambda)]\bigr) & \delta\sin[\zeta_{1}(\xi,\lambda)]\bigl(\cos[\zeta_{2}(x,\lambda)]+\lambda\sin[\zeta_{2}(x,\lambda)]\bigr)\end{pmatrix}, & -1\le\xi\le x<0,\\[3mm] \begin{pmatrix}\delta\cos[\zeta_{1}(x,\lambda)]\bigl(-\sin[\zeta_{2}(\xi,\lambda)]+\lambda\cos[\zeta_{2}(\xi,\lambda)]\bigr) & \delta\cos[\zeta_{1}(x,\lambda)]\bigl(\cos[\zeta_{2}(\xi,\lambda)]+\lambda\sin[\zeta_{2}(\xi,\lambda)]\bigr)\\ \delta\sin[\zeta_{1}(x,\lambda)]\bigl(-\sin[\zeta_{2}(\xi,\lambda)]+\lambda\cos[\zeta_{2}(\xi,\lambda)]\bigr) & \delta\sin[\zeta_{1}(x,\lambda)]\bigl(\cos[\zeta_{2}(\xi,\lambda)]+\lambda\sin[\zeta_{2}(\xi,\lambda)]\bigr)\end{pmatrix}, & -1\le x\le\xi<0,\end{cases}$$
( cos [ ζ 1 ( ξ , λ ) ] ( sin [ ζ 2