Open Access

Forward scattering on the line with a transfer condition

Boundary Value Problems 2013, 2013:255

https://doi.org/10.1186/1687-2770-2013-255

Received: 18 June 2013

Accepted: 1 November 2013

Published: 25 November 2013

Abstract

We consider scattering on the line with a transfer condition at the origin. Under suitable growth conditions on the potential, the spectrum consists of a finite number of eigenvalues, which are negative real numbers, together with continuous spectrum which coincides with the non-negative real half-line. Asymptotics are provided for the Jost solutions. Conditions characterizing the transfer conditions which result in self-adjoint problems are found. Properties of the scattering coefficient linking it to the spectrum are given.

MSC:34L25, 47N50, 34B10, 34B40.

Keywords

scattering; transfer condition; asymptotics; Jost solutions; spectrum

1 Introduction

The mathematical analysis of scattering theory has been a major area of interest in mathematics and physics research since the latter half of the twentieth century. In this work we investigate forward scattering for the differential equation
\[
\ell y := -\frac{d^{2}y}{dx^{2}} + q(x)y = \zeta^{2}y \quad \text{on } (-\infty,0)\cup(0,\infty),
\]
(1.1)
in \(L^{2}(-\infty,0)\oplus L^{2}(0,\infty) = L^{2}(\mathbb{R})\) with the point transfer condition
\[
\begin{bmatrix} y(0^{+}) \\ y'(0^{+}) \end{bmatrix} = M \begin{bmatrix} y(0^{-}) \\ y'(0^{-}) \end{bmatrix}.
\]
(1.2)
Here the entries of M are taken to be real, and \(q \in L^{2}(\mathbb{R})\) is assumed to be real-valued and to obey the growth condition
\[
\int_{-\infty}^{\infty} (1+|x|)\,|q(x)|\,dx < \infty.
\]
(1.3)
Note that (1.3) gives that \(q \in L^{1}(\mathbb{R})\). Denote \(f(0^{+}) := \lim_{t\downarrow 0} f(t)\) and \(f(0^{-}) := \lim_{t\uparrow 0} f(t)\). The operator L in \(L^{2}(\mathbb{R})\) is defined by
\[
Ly = \ell y
\]
(1.4)
on \(\mathbb{R}\setminus\{0\}\) for y in the domain D(L) of L, where
\[
D(L) = \bigl\{ y \mid y, \ell y \in L^{2}(\mathbb{R});\ y|_{(-\infty,0)}^{(j)},\, y|_{(0,\infty)}^{(j)} \in AC,\ j = 0,1;\ y \text{ obeys (1.2)} \bigr\}.
\]
(1.5)
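To make the role of the transfer condition (1.2) concrete, the following sketch (illustrative only: it uses the free potential \(q \equiv 0\) and a hypothetical diagonal transfer matrix with \(\det M = 1\), neither taken from this paper) numerically integrates (1.1) up to the origin, applies (1.2) to the solution data, and then continues the solution on the right half-line.

```python
import numpy as np

def rk4_step(f, x, y, h):
    # One classical Runge-Kutta step for the first-order system y' = f(x, y).
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(q, zeta2, y0, dy0, x_start, x_end, n=2000):
    # Integrate -y'' + q(x) y = zeta^2 y, i.e. y'' = (q(x) - zeta2) y,
    # as a first-order system for (y, y').
    f = lambda x, Y: np.array([Y[1], (q(x) - zeta2) * Y[0]])
    Y = np.array([y0, dy0], dtype=float)
    h = (x_end - x_start) / n
    x = x_start
    for _ in range(n):
        Y = rk4_step(f, x, Y, h)
        x += h
    return Y

# Free potential and an illustrative transfer matrix with det M = 1.
q = lambda x: 0.0
M = np.array([[2.0, 0.0], [0.0, 0.5]])

# Solve on (-1, 0) starting from the exact data of y(x) = cos x at x = -1.
left = propagate(q, 1.0, np.cos(-1.0), -np.sin(-1.0), -1.0, 0.0)
right0 = M @ left                                       # apply (1.2) at the origin
right = propagate(q, 1.0, right0[0], right0[1], 0.0, 1.0)  # continue on (0, 1)
```

With \(q \equiv 0\) and \(\zeta^{2} = 1\) the left-hand solution is \(\cos x\), so the data at \(0^{-}\) is \((1,0)\); the matrix rescales the value and the slope, and the continuation on \((0,\infty)\) is \(2\cos x\).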

Only point transfer matrices at the origin will be considered, and henceforth we will refer to them as transfer matrices. In [1] conditions for self-adjointness and limit point/limit circle criteria are considered for a more general problem.

In the physical context, the transfer matrix represents a change of medium which affects the incident wave as represented by components of the matrix. Livšic in [[2], p.7] gives a concrete physical example of a scattering problem of the form given above. He considers a uniform, infinite string, attached at the point x = 0 to a transverse spring. The behaviour at the point of attachment is described by what we have called the transfer matrix M.

Our transfer matrices will be real constant transfer matrices, i.e., all components will be constants. The theory could be extended to eigenparameter-dependent transfer matrices; this will be considered in future studies. Gordon and Pearson, in [3], characterized the self-adjoint constant point transfer matrices as well as eigenparameter-dependent ones.

The Jost solutions of (1.1) represent oscillations moving left or right on \(\mathbb{R}\). The scattering data are defined in terms of these solutions. The classical Jost solutions correspond to the case where the transfer matrix M is the identity. In particular, asymptotic approximations to the classical Jost solutions and their derivatives, as well as some basic relations regarding them, are given. The Jost solutions for the scattering problem with a transfer condition (\(M \neq I\)) are then expressed in terms of the classical Jost solutions. Consequently, we consider functional analytic aspects of the operator L such as the location of eigenvalues and continuous spectrum, the multiplicity of eigenvalues, and properties of the scattering coefficients/scattering matrix and the reflection coefficient. We show that the operator L has finitely many eigenvalues, all of which are negative and simple, and that the non-negative real half-line \([0,\infty)\) is the continuous spectrum of L.

This paper is structured as follows. The Jost solutions of the scattering problem (1.1) and (1.2) are defined in Section 2. In addition, some basic properties and asymptotic approximations of the classical Jost solutions and their derivatives are recalled. In Section 3, the scattering problem is reformulated as a system spectral problem, and we prove that L is self-adjoint if and only if det M = 1 . Moreover, it is shown that all the eigenvalues are negative and that the entire non-negative real half-line is the continuous spectrum of L. In Section 4, since the scattering problem (1.1) and (1.2) can be considered as two half-line problems, the Jost solutions are expressed in terms of the classical Jost solutions. Asymptotics for the scattering coefficients are determined, and we prove that the problem has finitely many eigenvalues all of which are simple. In the final section, Section 5, we give a relationship between the norming constants and the derivative of the scattering coefficient.

The results obtained in this paper provide the foundation needed to consider the associated inverse problem; this will be the topic of a subsequent paper.

2 Preliminaries

In this section, asymptotic solutions \(y(x,\zeta)\), \((x,\zeta) \in \mathbb{R}\times\overline{\mathbb{C}}^{+}\), will be developed for equation (1.1) for large values of \(|x| + |\zeta|\), where \(\overline{\mathbb{C}}^{+} = \{\xi + i\eta \mid \xi,\eta\in\mathbb{R},\ \eta \ge 0\}\). The focus of this section will be on the so-called Jost solutions of (1.1) and their derivatives. The Jost solutions \(f_{+,M}\) and \(f_{-,M}\) of (1.1) and (1.2) are the solutions defined by their asymptotic behaviour at ±∞ as follows.

Definition 2.1 [[4], p.297]

The Jost solutions \(f_{+,M}(x,\zeta)\) and \(f_{-,M}(x,\zeta)\) are the solutions of (1.1) and (1.2) with
\[
\lim_{x\to\infty} e^{-i\zeta x} f_{+,M}(x,\zeta) = 1, \qquad \lim_{x\to-\infty} e^{i\zeta x} f_{-,M}(x,\zeta) = 1.
\]
When M = I , the subscript M will be dropped. In this case, the existence and asymptotic behaviour of the Jost solutions are well known, see [5, 6]. In particular,
\[
f_{+}(x,\zeta) = e^{i\zeta x} + O\!\left(\frac{C(x)\rho(x)e^{-\eta x}}{1+|\zeta|}\right),
\]
(2.1)
\[
\frac{df_{+}}{dx}(x,\zeta) = i\zeta e^{i\zeta x} - \int_{x}^{\infty}\cos(\zeta(x-\tau))q(\tau)e^{i\zeta\tau}\,d\tau + O\!\left(\frac{C(x)\rho(x)\sigma(x)e^{-\eta x}}{1+|\zeta|}\right)
\]
(2.2)
and
\[
f_{-}(x,\zeta) = e^{-i\zeta x} + O\!\left(\frac{C(x)\tilde{\rho}(x)e^{\eta x}}{1+|\zeta|}\right),
\]
(2.3)
\[
\frac{df_{-}}{dx}(x,\zeta) = -i\zeta e^{-i\zeta x} + \int_{-\infty}^{x}\cos(\zeta(x-\tau))q(\tau)e^{-i\zeta\tau}\,d\tau + O\!\left(\frac{C(x)\tilde{\rho}(x)\tilde{\sigma}(x)e^{\eta x}}{1+|\zeta|}\right),
\]
(2.4)
as \(|x| + |\zeta| \to \infty\), where \(\eta = \Im(\zeta)\). Here, \(C(x)\) is a non-negative, non-increasing function of x and
\[
\rho(x) = \int_{x}^{\infty}(1+|\tau|)|q(\tau)|\,d\tau, \qquad \tilde{\rho}(x) = \int_{-\infty}^{x}(1+|\tau|)|q(\tau)|\,d\tau,
\]
\[
\sigma(x) = \int_{x}^{\infty}|q(\tau)|\,d\tau, \qquad \tilde{\sigma}(x) = \int_{-\infty}^{x}|q(\tau)|\,d\tau.
\]
Let ζ be real and denote \(\zeta = \xi \in \mathbb{R}\). Then, as \(\bar{\xi} = \xi\), \(\bar{f}_{+}(x,\xi)\) obeys equations (1.1) and (1.2) with M = I. Taking the conjugate of the integral equation
\[
f_{+}(x,\xi) = e^{i\xi x} - \int_{x}^{\infty}\frac{\sin(\xi(x-\tau))}{\xi}q(\tau)f_{+}(\tau,\xi)\,d\tau,
\]
(2.5)
which defines \(f_{+}\), gives
\[
\bar{f}_{+}(x,\xi) = e^{-i\xi x} - \int_{x}^{\infty}\frac{\sin(-\xi(x-\tau))}{-\xi}q(\tau)\bar{f}_{+}(\tau,\xi)\,d\tau.
\]
From the above two expressions and the uniqueness of the solution of (2.5), we have
\[
\bar{f}_{+}(x,\xi) = f_{+}(x,-\xi), \quad \xi\in\mathbb{R}.
\]
(2.6)
The asymptotic results given earlier in this section obviously hold for the conjugate solution \(\bar{f}_{+}(x,\xi)\), but with the simplification that η = 0 in this case. In particular,
\[
\bar{f}_{+}(x,\xi) = e^{-i\xi x} + O\!\left(\frac{C(x)\rho(x)}{1+|\xi|}\right),
\]
(2.7)
\[
\frac{d\bar{f}_{+}}{dx}(x,\xi) = -i\xi e^{-i\xi x} - \int_{x}^{\infty}\cos(\xi(x-\tau))q(\tau)e^{-i\xi\tau}\,d\tau + O\!\left(\frac{C(x)\rho(x)\sigma(x)}{1+|\xi|}\right),
\]
(2.8)

where C ( x ) is non-increasing and ρ and σ are as defined earlier.

Note that the Wronskian, \(W[f_{+},\bar{f}_{+}]\), of \(f_{+}\) and \(\bar{f}_{+}\) for \(\xi\in\mathbb{R}\) and \(x\in\mathbb{R}\) is given by \(W[f_{+},\bar{f}_{+}](x,\xi) = -2i\xi\), see [7]. Thus \(f_{+}(x,\xi)\) and \(\bar{f}_{+}(x,\xi) = f_{+}(x,-\xi)\) are linearly independent for \(\xi = \zeta \in \mathbb{R}\setminus\{0\}\).

3 Nature of the spectrum

Here we consider the scattering problem on the line with transfer condition (1.2) at x = 0. In order for (1.1) and (1.2) to be self-adjoint in \(L^{2}(-\infty,0)\oplus L^{2}(0,\infty) = L^{2}(\mathbb{R})\), the transfer matrix M will have to be of a certain form. Here we restrict our attention to the most interesting case of M invertible; the case of M non-invertible will be considered elsewhere. The scattering problem can then be treated as two classical half-line problems joined at the origin by matrix condition (1.2).

The operator eigenvalue problem associated with L of (1.4) can be reformulated as a system eigenvalue problem as follows. Let \(y_{1}(t) = y(t)\), \(y_{2}(t) = y(-t)\) for t > 0, let \(Y(t) = \begin{pmatrix} y_{1}(t) \\ y_{2}(t)\end{pmatrix}\), and consider the differential operator in \(L^{2}(0,\infty)\oplus L^{2}(0,\infty)\) given by
\[
TY := -\frac{d^{2}Y}{dx^{2}} + QY = \zeta^{2}Y,
\]
(3.1)
where \(Q(t) = \begin{pmatrix} q(t) & 0 \\ 0 & q(-t)\end{pmatrix}\). The domain of T is given by
\[
D(T) = \bigl\{ Y \mid Y, TY \in (L^{2}(0,\infty))^{2};\ Y, Y' \in AC;\ UY(0) = VY'(0) \bigr\},
\]
(3.2)
where \(U = \begin{pmatrix} -1 & m_{11} \\ 0 & m_{21}\end{pmatrix}\) and \(V = \begin{pmatrix} 0 & m_{12} \\ 1 & m_{22}\end{pmatrix}\). Here \(m_{ij}\), for i, j = 1, 2, are the entries of the transfer matrix M. As the norm on \(L^{2}(0,\infty)\oplus L^{2}(0,\infty)\), we take
\[
\|Y\|^{2} = \int_{0}^{\infty} Y^{T}\bar{Y}\,dx.
\]
It should be noted here that L and T are unitarily equivalent by the map \(\Psi : L^{2}(\mathbb{R}) \to (L^{2}(0,\infty))^{2}\) given by \(\Psi y = \begin{pmatrix} y_{1} \\ y_{2}\end{pmatrix}\), where
\[
y(t) = \begin{cases} y_{1}(t), & t > 0, \\ y_{2}(-t), & t < 0. \end{cases}
\]

It is easily verified that Ψ is a bijective isometry between L 2 ( R ) and ( L 2 ( 0 , ) ) 2 with the additional properties that Ψ ( D ( L ) ) = D ( T ) and Ψ 1 T Ψ = L .
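That Ψ preserves norms can be seen numerically. The sketch below (a minimal illustration; the sample function y is an arbitrary choice, not from the paper) folds a function on the line into the pair \((y_{1}, y_{2})\) and compares the \(L^{2}\) norms on both sides of the map.

```python
import numpy as np

def trapezoid(f_vals, x_vals):
    # Composite trapezoidal rule on a given grid.
    dx = np.diff(x_vals)
    return np.sum(dx * (f_vals[:-1] + f_vals[1:]) / 2.0)

# Sketch of the folding map Psi y = (y1, y2), y1(t) = y(t), y2(t) = y(-t), t > 0.
# The sample function y is an arbitrary illustrative choice decaying at infinity.
y = lambda x: np.exp(-x**2) * (x + 1.0)

x = np.linspace(-20.0, 20.0, 400001)       # grid on the whole line
norm_line = trapezoid(y(x)**2, x)          # ||y||^2 over R

t = np.linspace(0.0, 20.0, 200001)         # grid on the half-line
y1, y2 = y(t), y(-t)
norm_system = trapezoid(y1**2 + y2**2, t)  # ||Psi y||^2 in (L^2(0, inf))^2
```

The two quadratures use the same nodes after folding, so the computed norms agree to machine precision, illustrating that Ψ is an isometry.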

The transfer matrix scattering problem can now be posed as
\[
TY = \zeta^{2}Y, \quad Y \in D(T).
\]
(3.3)
Let \(F, G \in D(T)\). Define
\[
S(F,G) := \langle TF, G\rangle - \langle F, TG\rangle \quad \text{for } F, G \in D(T),
\]
where
\[
\langle F, G\rangle = \int_{0}^{\infty} F(x)^{T}\bar{G}(x)\,dx.
\]
(3.4)

We now obtain conditions on the transfer matrix which ensure that T is a self-adjoint operator. We begin by defining the minimal and maximal operators associated with T.

The minimal operator associated with T is \(T_{0}\), which is given by
\[
D(T_{0}) = \bigl\{ F \in (L^{2}(0,\infty))^{2} \mid F, F' \in AC;\ TF \in (L^{2}(0,\infty))^{2};\ F(0) = F'(0) = 0 \bigr\}
\]
with
\[
T_{0}F = TF \quad \text{for } F \in D(T_{0}).
\]
The maximal operator associated with T is \(T_{\max} = T_{0}^{*}\), where
\[
D(T_{\max}) = \bigl\{ F \in (L^{2}(0,\infty))^{2} \mid F, F' \in AC;\ TF \in (L^{2}(0,\infty))^{2} \bigr\}.
\]

Note that \(T_{0}\) has deficiency indices (2,2).

Theorem 3.1 If \(\det M \neq 0\), then the operator L is self-adjoint if and only if \(\det M = 1\).

Proof We will prove the result for the operator T and, consequently, it will be true for the operator L.

Let \(F = \begin{pmatrix} f_{1} \\ f_{2}\end{pmatrix}, G = \begin{pmatrix} g_{1} \\ g_{2}\end{pmatrix} \in (C^{2}(0,\infty))^{2}\) be such that \(F(x) = G(x) = 0\) for all \(x \ge K\), for some K > 0. Then, since \(q \in L^{2}(\mathbb{R})\), \(F, G \in D(T_{\max})\). Also,
\[
\langle T_{\max}F, G\rangle - \langle F, T_{\max}G\rangle = \langle TF, G\rangle - \langle F, TG\rangle = S(F,G)
= \int_{0}^{\infty} \bigl(f_{1}\bar{g}_{1}'' - f_{1}''\bar{g}_{1} + f_{2}\bar{g}_{2}'' - f_{2}''\bar{g}_{2}\bigr)\,dx
= \bigl(f_{1}'\bar{g}_{1} - f_{1}\bar{g}_{1}' + f_{2}'\bar{g}_{2} - f_{2}\bar{g}_{2}'\bigr)(0).
\]

As the deficiency indices of \(T_{0}\) are (2,2), we need to restrict the domain of \(T_{\max}\) by two boundary conditions to ensure self-adjointness, see Weidmann [[8], p.53]. For the operator to be self-adjoint with two linear domain conditions, they must ensure that \(\bigl(f_{1}'\bar{g}_{1} - f_{1}\bar{g}_{1}' + f_{2}'\bar{g}_{2} - f_{2}\bar{g}_{2}'\bigr)(0) = 0\).

We now show what the above condition implies in terms of the transfer matrix M:
\[
S(F,G) = \bigl(F'^{T}\bar{G} - F^{T}\bar{G}'\bigr)(0) = \bigl[F^{T}, F'^{T}\bigr]\begin{pmatrix} 0 & -I \\ I & 0\end{pmatrix}\begin{bmatrix} \bar{G} \\ \bar{G}' \end{bmatrix}(0).
\]

Denote \(J = \begin{pmatrix} 0 & -I \\ I & 0\end{pmatrix}\); then we have that \(J^{T} = -J\) and \(J^{2} = -I\).

Let U and V be as in (3.2). If \(F, G \in D(T)\), then
\[
\bar{U}\bar{G}(0) = \bar{V}\bar{G}'(0)
\]
and
\[
F^{T}(0)U^{T} = F'^{T}(0)V^{T}.
\]
So
\[
\begin{bmatrix} F \\ F' \end{bmatrix}(0),\ \begin{bmatrix} G \\ G' \end{bmatrix}(0) \in \left\{ \alpha\begin{bmatrix} \det M \\ m_{22} \\ 0 \\ m_{21} \end{bmatrix} + \beta\begin{bmatrix} 0 \\ -m_{12} \\ \det M \\ -m_{11} \end{bmatrix} : \alpha, \beta \in \mathbb{C} \right\},
\]
where \([\det M, m_{22}, 0, m_{21}]^{T}\) and \([0, -m_{12}, \det M, -m_{11}]^{T}\) are linearly independent. Thus we require
\[
\begin{bmatrix} \det M & 0 \\ m_{22} & -m_{12} \\ 0 & \det M \\ m_{21} & -m_{11} \end{bmatrix}^{T} \begin{bmatrix} 0 & -I \\ I & 0 \end{bmatrix} \begin{bmatrix} \det M & 0 \\ m_{22} & -m_{12} \\ 0 & \det M \\ m_{21} & -m_{11} \end{bmatrix} = \underline{0}.
\]

Thus \((\det M)^{2} = -m_{12}m_{21} + m_{11}m_{22} = \det M\), giving that \(\det M = 1\) since M is invertible. □

It should be noted that in [1] a condition on the transfer matrix M is given for self-adjointness; however, for the case of det M 0 , where the entries of M are real, the proof presented above is substantially simpler.

Throughout the remainder of the paper, we will now assume that det M = 1 .
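The boundary-form computation in the proof of Theorem 3.1 is easy to check numerically. The sketch below is illustrative only: it assumes the sign convention in which the two basis boundary vectors are \((\det M, m_{22}, 0, m_{21})^{T}\) and \((0, -m_{12}, \det M, -m_{11})^{T}\), builds the 4×2 matrix N with these columns, and verifies that \(N^{T}JN = 0\) precisely when \(\det M = 1\).

```python
import numpy as np

def boundary_form(M):
    # Boundary form N^T J N from the proof of Theorem 3.1, with
    # J = [[0, -I], [I, 0]] and N the matrix of basis boundary vectors.
    d = np.linalg.det(M)
    m11, m12 = M[0]
    m21, m22 = M[1]
    N = np.array([[d, 0.0],
                  [m22, -m12],
                  [0.0, d],
                  [m21, -m11]])
    I2 = np.eye(2)
    J = np.block([[np.zeros((2, 2)), -I2], [I2, np.zeros((2, 2))]])
    return N.T @ J @ N

# Illustrative matrices: det M_good = 4 - 3 = 1, det M_bad = 4.
M_good = np.array([[2.0, 3.0], [1.0, 2.0]])
M_bad = np.array([[2.0, 0.0], [0.0, 2.0]])
```

By direct computation the form is antisymmetric with off-diagonal entry \(\det M - (\det M)^{2}\), so it vanishes for `M_good` and not for `M_bad`, mirroring the conclusion of the theorem.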

Theorem 3.2 Let det M = 1 . All eigenvalues (if any) of L as defined in (1.4), (1.5), and consequently of T, are negative.

Proof For \(\lambda = \zeta^{2} \in \mathbb{R}\setminus\{0\}\), \(f_{+}\) and \(\bar{f}_{+}\) constitute a linearly independent set of solutions of equation (1.1). If L had a positive eigenvalue \(\lambda = \xi^{2} > 0\), where ξ > 0, then L would have an eigenfunction of the form
\[
F(x,\xi) = c_{1}f_{+}(x,\xi) + c_{2}\bar{f}_{+}(x,\xi)
\]
for x > 0. From (2.1) and (2.7),
\[
f_{+}(x,\xi) = e^{i\xi x} + O\!\left(\frac{C(x)\rho(x)}{1+|\xi|}\right), \qquad \bar{f}_{+}(x,\xi) = e^{-i\xi x} + O\!\left(\frac{C(x)\rho(x)}{1+|\xi|}\right)
\]
for \(|\xi| + x\) large, \(\xi\in\mathbb{R}\), x > 0. Hence
\[
F(x,\xi) \notin L^{2}(0,\infty)
\]

if \(|c_{1}| + |c_{2}| \neq 0\), and so L has no positive eigenvalues.

From the definition of \(f_{+}(x,0)\), we have that \(f_{+}(x,0) \to 1\) as \(x\to\infty\), so \(f_{+}(x,0) \notin L^{2}(0,\infty)\). Observe that
\[
g(x) := f_{+}(x,0)\int_{c}^{x}\frac{d\tau}{(f_{+}(\tau,0))^{2}}
\]

is a solution of (1.1) which is asymptotic to x as \(x\to\infty\), [[7], p.173], and therefore linearly independent of \(f_{+}(x,0)\). But \(af_{+}(x,0) + bg(x)\) is asymptotic to \(a + bx\) as \(x\to\infty\), and thus not in \(L^{2}(0,\infty)\) unless \(|a| + |b| = 0\). Hence λ = 0 is not an eigenvalue of L. □

We now study the continuous spectrum.

Theorem 3.3 The continuous spectrum of T (and thus of L) is [ 0 , ) .

Proof From [[9], p.66] or [[10], p.251], as T is self-adjoint, the spectrum of T, \(\sigma(T)\), consists of continuous and point spectrum only, i.e., T has no residual spectrum. In addition, the continuous spectrum of T is real. By [6] the essential spectrum of the minimal operator generated by T is \([0,\infty)\), and this is the same as the essential spectrum of T, see [10]. Thus, as T has no residual spectrum, we have that the continuous spectrum of T is \([0,\infty)\). □

4 Jost solutions with a transfer condition

We now consider the Jost solutions to problem (3.3). Since this problem can be considered as two half-line problems, solutions to (3.3) can be given in terms of the classical Jost solutions f + ( x , ζ ) and f ( x , ζ ) , i.e., when M = I .

The solutions \(f_{+,M}(x,\zeta)\) and \(f_{-,M}(x,\zeta)\) of Definition 2.1 can be expressed as
\[
f_{+,M}(x,\zeta) := \begin{cases} f_{+}(x,\zeta), & x > 0, \\ h_{1}(x,\zeta), & x < 0, \end{cases}
\]
(4.1)
\[
f_{-,M}(x,\zeta) := \begin{cases} f_{-}(x,\zeta), & x < 0, \\ h_{2}(x,\zeta), & x > 0, \end{cases}
\]
(4.2)
where \(h_{1}(x,\zeta)\) is the solution of (1.1) on \((-\infty,0)\) obeying the condition
\[
\begin{pmatrix} h_{1}(0^{-},\zeta) \\ h_{1}'(0^{-},\zeta) \end{pmatrix} = M^{-1}\begin{pmatrix} f_{+}(0^{+},\zeta) \\ f_{+}'(0^{+},\zeta) \end{pmatrix},
\]
and \(h_{2}(x,\zeta)\) is the solution of (1.1) on \((0,\infty)\) obeying the condition
\[
\begin{pmatrix} h_{2}(0^{+},\zeta) \\ h_{2}'(0^{+},\zeta) \end{pmatrix} = M\begin{pmatrix} f_{-}(0^{-},\zeta) \\ f_{-}'(0^{-},\zeta) \end{pmatrix}.
\]

Here, it should be noted that the existence of an extension of \(f_{+}\) from \((0,\infty)\) to the solution \(f_{+,M}\) on \(\mathbb{R}\setminus\{0\}\) relies on M being non-singular.

As in the classical case, for \(\zeta = \xi \in \mathbb{R}\), we may find the conjugate Jost solution. In this case, for \(\bar{f}_{+,M}(x,\xi)\), we have
\[
\bar{f}_{+,M}(x,\xi) = \begin{cases} \bar{f}_{+}(x,\xi) = f_{+}(x,-\xi), & x > 0, \\ \bar{h}_{1}(x,\xi) = h_{1}(x,-\xi), & x < 0, \end{cases}
\]
(4.3)

where the transfer condition holds at x = 0 .

Since \(f_{+,M}(x,\xi)\) and \(\bar{f}_{+,M}(x,\xi)\) are equal to \(f_{+}(x,\xi)\) and \(\bar{f}_{+}(x,\xi)\) on \((0,\infty)\), we see that \(f_{+,M}\) and \(\bar{f}_{+,M}\) are linearly independent on \((0,\infty)\) for all \(\xi\in\mathbb{R}\setminus\{0\}\). Therefore \(h_{2}\) can be expressed as a linear combination of \(f_{+,M}\) and \(\bar{f}_{+,M}\) on \((0,\infty)\) with coefficients \(A(\xi)\) and \(B(\xi)\), giving that on \((0,\infty)\)
\[
f_{-,M}(x,\xi) = A(\xi)\bar{f}_{+,M}(x,\xi) + B(\xi)f_{+,M}(x,\xi).
\]
(4.4)

Note that (4.4) also holds on \((-\infty,0)\) as \(\det M = 1\).

For \(\Im(\zeta) \ge 0\), \(\zeta \neq 0\), we extend \(A(\xi)\) to \(\overline{\mathbb{C}}^{+}\) by
\[
A(\zeta) = -\frac{1}{2i\zeta}W\bigl[f_{+,M}(x,\zeta), f_{-,M}(x,\zeta)\bigr],
\]
(4.5)
and for \(\xi\in\mathbb{R}\setminus\{0\}\),
\[
B(\xi) = \frac{1}{2i\xi}W\bigl[\bar{f}_{+,M}(x,\xi), f_{-,M}(x,\xi)\bigr].
\]
(4.6)
Theorem 4.1 For \(\xi\in\mathbb{R}\setminus\{0\}\), \(A(\xi)\) and \(B(\xi)\) satisfy the following equality:
\[
|A(\xi)|^{2} - |B(\xi)|^{2} = 1.
\]
Proof We begin by obtaining an expression for the solution \(f_{+,M}(x,\xi)\) in terms of the conjugate solutions \(f_{-,M}(x,\xi)\) and \(\bar{f}_{-,M}(x,\xi)\) for \(\xi\in\mathbb{R}\setminus\{0\}\). In a similar manner to the classical case, we obtain
\[
\det\begin{bmatrix} \bar{f}_{-,M} & f_{-,M} \\ \bar{f}_{-,M}' & f_{-,M}' \end{bmatrix} = W[\bar{f}_{-,M}, f_{-,M}] = -2i\xi.
\]
Thus \(\bar{f}_{-,M}(x,\xi)\) and \(f_{-,M}(x,\xi)\) are linearly independent for \(\xi\in\mathbb{R}\setminus\{0\}\) and consequently
\[
f_{+,M}(x,\xi) = G_{1}(\xi)\bar{f}_{-,M}(x,\xi) + G_{2}(\xi)f_{-,M}(x,\xi),
\]
(4.7)
where \(G_{1}(\xi), G_{2}(\xi)\) are independent of x. Equation (4.7) and its x-derivative give the matrix equation
\[
\begin{bmatrix} f_{+,M} \\ f_{+,M}' \end{bmatrix} = \begin{bmatrix} \bar{f}_{-,M} & f_{-,M} \\ \bar{f}_{-,M}' & f_{-,M}' \end{bmatrix}\begin{bmatrix} G_{1} \\ G_{2} \end{bmatrix},
\]
which has solution
\[
\begin{bmatrix} G_{1} \\ G_{2} \end{bmatrix} = \begin{bmatrix} \bar{f}_{-,M} & f_{-,M} \\ \bar{f}_{-,M}' & f_{-,M}' \end{bmatrix}^{-1}\begin{bmatrix} f_{+,M} \\ f_{+,M}' \end{bmatrix} = \frac{i}{2\xi}\begin{bmatrix} f_{-,M}' & -f_{-,M} \\ -\bar{f}_{-,M}' & \bar{f}_{-,M} \end{bmatrix}\begin{bmatrix} f_{+,M} \\ f_{+,M}' \end{bmatrix}.
\]
Thus, from (4.5),
\[
G_{1}(\xi) = \frac{i}{2\xi}\bigl(f_{+,M}f_{-,M}' - f_{-,M}f_{+,M}'\bigr) = \frac{i}{2\xi}W[f_{+,M}, f_{-,M}] = A(\xi),
\]
and, by (4.6),
\[
G_{2}(\xi) = \frac{i}{2\xi}\bigl(\bar{f}_{-,M}f_{+,M}' - \bar{f}_{-,M}'f_{+,M}\bigr) = \frac{i}{2\xi}W[\bar{f}_{-,M}, f_{+,M}] = -\bar{B}(\xi).
\]
Combining the expressions for \(G_{1}(\xi)\) and \(G_{2}(\xi)\) with (4.7) gives
\[
f_{+,M}(x,\xi) = A(\xi)\bar{f}_{-,M}(x,\xi) - \bar{B}(\xi)f_{-,M}(x,\xi).
\]
(4.8)
Substituting (4.8) into (4.4) gives
\[
f_{-,M}(x,\xi) = A(\xi)\bar{f}_{+,M}(x,\xi) + B(\xi)f_{+,M}(x,\xi)
= A(\xi)\bigl[\bar{A}(\xi)f_{-,M}(x,\xi) - B(\xi)\bar{f}_{-,M}(x,\xi)\bigr] + B(\xi)\bigl[A(\xi)\bar{f}_{-,M}(x,\xi) - \bar{B}(\xi)f_{-,M}(x,\xi)\bigr]
= \bigl(|A(\xi)|^{2} - |B(\xi)|^{2}\bigr)f_{-,M}(x,\xi).
\]
Now, since \(f_{-,M}(x,\xi) \not\equiv 0\) for \(\xi\in\mathbb{R}\setminus\{0\}\), \(x\in\mathbb{R}\),
\[
|A(\xi)|^{2} - |B(\xi)|^{2} = 1.
\]

 □
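Theorem 4.1 can be checked directly in the simplest setting. The sketch below is illustrative only: it assumes the free potential \(q \equiv 0\), so the classical Jost solutions are plain exponentials, together with an arbitrary transfer matrix of determinant 1; A and B are then evaluated in closed form from the Wronskians at \(0^{+}\).

```python
import numpy as np

def A_B_free(M, xi):
    # q = 0: f_+(x) = e^{i xi x}, f_-(x) = e^{-i xi x}.  Transfer f_- across
    # the origin via (1.2) and evaluate the Wronskians at x = 0+.
    fp0, dfp0 = 1.0, 1j * xi                  # f_+(0), f_+'(0)
    fm = M @ np.array([1.0, -1j * xi])        # (f_{-,M}, f_{-,M}')(0+)
    W_plus = fp0 * fm[1] - dfp0 * fm[0]       # W[f_{+,M}, f_{-,M}](0+)
    W_bar = np.conj(fp0) * fm[1] - np.conj(dfp0) * fm[0]  # W[conj(f_{+,M}), f_{-,M}]
    A = -W_plus / (2j * xi)
    B = W_bar / (2j * xi)
    return A, B

M = np.array([[1.5, 0.2], [2.5, 1.0]])  # illustrative entries; det M = 1.5 - 0.5 = 1
for xi in [0.3, 1.0, 7.5]:
    A, B = A_B_free(M, xi)                # |A|^2 - |B|^2 should equal 1
```

A short computation shows that here \(|A|^{2} - |B|^{2} = \det M\), so the identity of Theorem 4.1 holds exactly for every real \(\xi \neq 0\) once \(\det M = 1\).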

The reflection coefficient can be defined for this case as
\[
R(\xi) = \frac{B(\xi)}{A(\xi)}, \quad \xi\in\mathbb{R}.
\]
(4.9)
By Theorem 3.2, all eigenvalues \(\zeta^{2}\) of the scattering problem (1.1)-(1.2) on the line are negative. So, for each eigenvalue, ζ is of the form
\[
\zeta = i\eta \quad \text{for some } \eta\in\mathbb{R}^{+}.
\]
Let \(\zeta = i\eta\), \(\eta\in\mathbb{R}^{+}\). Then, for large x > 0, \(f_{+,M}(x,i\eta) = f_{+}(x,i\eta) = e^{-\eta x}(1 + o(1))\) by (2.1). So there exists a such that \(|f_{+}(x,i\eta)| > 0\) for all \(x \ge a\). Direct computation gives that
\[
y(x) = f_{+}(x)\int_{a}^{x}\frac{dt}{f_{+}^{2}(t)} = \frac{e^{\eta x}}{2\eta}\bigl(1 + o(1)\bigr)
\]
is a solution of differential equation (1.1) on \([a,\infty)\) which is not in \(L^{2}[a,\infty)\). Elementary existence theory for differential equations gives that y can be extended to a solution of differential equation (1.1) with transfer condition (1.2) on \((-\infty,0)\cup(0,\infty)\). Since the transfer matrix M is invertible and (1.1) is of second order, the solution space of (1.1) with transfer condition (1.2) on \((-\infty,0)\cup(0,\infty)\) is 2-dimensional. Combining these facts gives that the geometric multiplicity of an eigenvalue is 1. Thus, if \(\zeta^{2}\) is an eigenvalue, then for some \(k\in\mathbb{C}\setminus\{0\}\), \(f_{-,M}(x,\zeta) = kf_{+,M}(x,\zeta)\) for all \(x\in\mathbb{R}\setminus\{0\}\). Consequently, \(\zeta^{2}\) is an eigenvalue if and only if \(W[f_{-,M}, f_{+,M}](x,\zeta) = 0\), i.e., from (4.5), \(A(\zeta) = 0\). Hence, for \(\zeta = i\eta\), \(\eta\in\mathbb{R}^{+}\), with \(\zeta^{2}\) an eigenvalue of (3.1)-(3.2), we have
\[
A(i\eta) = 0.
\]
(4.10)

Conversely, if \(A(\zeta) = 0\), then \(f_{-,M}(x,\zeta)\) and \(f_{+,M}(x,\zeta)\) are linearly dependent, making \(f_{+,M}, f_{-,M} \in L^{2}(-\infty,\infty)\). Thus \(\zeta^{2}\) is an eigenvalue and \(\zeta = i\eta\) for some \(\eta\in\mathbb{R}^{+}\).
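The eigenvalue condition \(A(i\eta) = 0\) can be illustrated on a hypothetical model (not an example from the paper): with \(q \equiv 0\) and the determinant-1 transfer matrix \(M = \begin{pmatrix}1 & 0 \\ c & 1\end{pmatrix}\), which mimics a δ-type point interaction of strength c, a direct Wronskian computation gives \(A(i\eta) = \bigl(m_{12}\eta^{2} + (m_{11}+m_{22})\eta + m_{21}\bigr)/(2\eta)\), and for c < 0 its unique positive root recovers the familiar single bound state at \(\zeta^{2} = -c^{2}/4\).

```python
import numpy as np

def A_free(M, zeta):
    # A(zeta) for q = 0 via the Wronskian at x = 0+ (closed form).
    m11, m12 = M[0]
    m21, m22 = M[1]
    return -(m21 - 1j * zeta * (m11 + m22) - zeta**2 * m12) / (2j * zeta)

# delta-type point interaction: M = [[1, 0], [c, 1]], det M = 1 (illustrative).
c = -2.0
M = [[1.0, 0.0], [c, 1.0]]

# Scan A(i eta) on the positive imaginary axis for a sign change.
etas = np.linspace(0.01, 5.0, 50000)
vals = np.real([A_free(M, 1j * e) for e in etas])
k = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
eta_root = 0.5 * (etas[k] + etas[k + 1])
eigenvalue = -eta_root**2     # should approximate -c^2/4
```

Here \(A(i\eta) = (2\eta + c)/(2\eta)\), so the root sits at \(\eta = -c/2 = 1\) and the eigenvalue is \(-1 = -c^{2}/4\), in line with the discussion above.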

Theorem 4.2 The function A ( ζ ) has a finite set of zeros.

Proof We know, from the above reasoning, that if \(A(\zeta) = 0\), then \(\zeta = i\eta\) for some \(\eta\in\mathbb{R}^{+}\). It should be noted that \(A(\zeta)\) is analytic in the open upper half-plane and continuous up to the boundary.

We now compute the asymptotic form of \(A(\zeta)\). For \(|\zeta|\) large in the upper half-plane, taking \(x \to 0^{+}\) in (4.5), we obtain
\[
A(\zeta) = -\frac{1}{2i\zeta}\det\begin{bmatrix} f_{+,M}(0^{+},\zeta) & f_{-,M}(0^{+},\zeta) \\ f_{+,M}'(0^{+},\zeta) & f_{-,M}'(0^{+},\zeta) \end{bmatrix}
= -\frac{1}{2i\zeta}\det\begin{bmatrix} \begin{pmatrix} f_{+,M}(0^{+},\zeta) \\ f_{+,M}'(0^{+},\zeta) \end{pmatrix} & M\begin{pmatrix} f_{-}(0^{-},\zeta) \\ f_{-}'(0^{-},\zeta) \end{pmatrix} \end{bmatrix}
\]
\[
= -\frac{1}{2i\zeta}\det\begin{bmatrix} f_{+}(0,\zeta) & m_{11}f_{-}(0,\zeta) + m_{12}f_{-}'(0,\zeta) \\ f_{+}'(0,\zeta) & m_{21}f_{-}(0,\zeta) + m_{22}f_{-}'(0,\zeta) \end{bmatrix}
\]
\[
= -\frac{1}{2i\zeta}\Bigl\{ f_{+}(0,\zeta)\bigl[m_{21}f_{-}(0,\zeta) + m_{22}f_{-}'(0,\zeta)\bigr] - f_{+}'(0,\zeta)\bigl[m_{11}f_{-}(0,\zeta) + m_{12}f_{-}'(0,\zeta)\bigr]\Bigr\}.
\]
Substituting into the above the asymptotic approximations to the Jost solutions for large values of \(|\zeta|\), we get, for \(\Im(\zeta) \ge 0\),
\[
A(\zeta) = -\frac{m_{12}}{2}i\zeta + \frac{m_{11} + m_{22}}{2} + \frac{m_{12}}{2}\int_{-\infty}^{\infty}\cos(\zeta\tau)q(\tau)e^{i\zeta|\tau|}\,d\tau + O\!\left(\frac{1}{1+|\zeta|}\right).
\]
(4.11)

Here, we observe that \(|\cos(\zeta\tau)e^{i\zeta|\tau|}| \le 1\) for \(\Im(\zeta) \ge 0\) and \(q \in L^{1}(\mathbb{R})\); thus the integral term in (4.11) is uniformly bounded in the upper half-plane. In addition, as \(\det M = 1\) and M has real entries, \(m_{12}\) and \(m_{11} + m_{22}\) cannot simultaneously be zero. Hence \(|A(\zeta)|\) is bounded away from zero for large \(|\zeta|\) in the upper half-plane, giving that \(A(\zeta)\) does not have an unbounded sequence of zeros.

Using the same approach as in [6], by Theorem 4.1, \(\frac{1}{|A(\zeta)|}\) is bounded in a neighbourhood of the real axis, so the zeros of \(A(\zeta)\) have no finite accumulation point and thus form a finite set. □

From Theorem 4.2 and the reasoning preceding Theorem 4.2, we obtain the following corollary.

Corollary 4.3 The scattering problem with a transfer condition has a finite number of eigenvalues, all of which are simple.

We note for reference the following properties of B ( ξ ) .

From (4.6), we have
\[
B(\xi) = \frac{1}{2i\xi}W\bigl[\bar{f}_{+,M}(x,\xi), f_{-,M}(x,\xi)\bigr], \quad \xi\in\mathbb{R}\setminus\{0\}.
\]
Taking \(x \to 0^{+}\), we obtain
\[
B(\xi) = \frac{1}{2i\xi}\det\begin{bmatrix} \bar{f}_{+,M}(0^{+},\xi) & f_{-,M}(0^{+},\xi) \\ \bar{f}_{+,M}'(0^{+},\xi) & f_{-,M}'(0^{+},\xi) \end{bmatrix} = \frac{1}{2i\xi}\det\begin{bmatrix} \begin{pmatrix} \bar{f}_{+}(0,\xi) \\ \bar{f}_{+}'(0,\xi) \end{pmatrix} & M\begin{pmatrix} f_{-}(0,\xi) \\ f_{-}'(0,\xi) \end{pmatrix} \end{bmatrix}.
\]
For \(|\xi|\) large, we get
\[
B(\xi) = -\frac{m_{12}}{2}i\xi + \frac{m_{11} - m_{22}}{2} + \frac{m_{12}}{2}\left(\int_{-\infty}^{0} - \int_{0}^{\infty}\right)\cos(\xi\tau)q(\tau)e^{-i\xi\tau}\,d\tau + O\!\left(\frac{1}{1+|\xi|}\right).
\]
(4.12)

5 Norming constants and the zeros of A ( ζ )

Denote the eigenvalues \(\zeta^{2} = -\eta^{2}\), where \(\zeta = i\eta\), \(\eta\in(0,\infty)\), ordering \(0 < \eta_{1} < \eta_{2} < \cdots < \eta_{N}\). We define the norming constants \(c_{k}\) by
\[
\frac{1}{c_{k}} = \int_{-\infty}^{\infty}|f_{+,M}(x,i\eta_{k})|^{2}\,dx = \int_{-\infty}^{0}|h_{1}(x,i\eta_{k})|^{2}\,dx + \int_{0}^{\infty}|f_{+}(x,i\eta_{k})|^{2}\,dx.
\]
(5.1)
Theorem 5.1 The zeros of \(A(\zeta)\) are simple. In addition, if \(\zeta^{2} = -\eta_{k}^{2}\) is an eigenvalue of the scattering problem with a transfer condition, then
\[
\left.\frac{dA(\zeta)}{d\zeta}\right|_{\zeta = i\eta_{k}} = -\frac{id_{k}}{c_{k}},
\]

where \(f_{-,M}(x,i\eta_{k}) = d_{k}f_{+,M}(x,i\eta_{k})\) for all \(x\in\mathbb{R}\setminus\{0\}\) and \(d_{k} \neq 0\).

Proof Differentiating (4.5), we get
\[
\frac{dA}{d\zeta} = -\frac{1}{\zeta}A(\zeta) + \frac{i}{2\zeta}W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right] + \frac{i}{2\zeta}W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right],
\]
where the right-hand side is independent of x. So, since \(A(i\eta_{k}) = 0\),
\[
\frac{dA}{d\zeta}(i\eta_{k}) = \frac{1}{2\eta_{k}}W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right](x,i\eta_{k}) + \frac{1}{2\eta_{k}}W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right](x,i\eta_{k}).
\]
The functions \(f_{+,M}\) and \(f_{-,M}\) obey \(-f'' + qf = \zeta^{2}f\), so differentiating this equation with respect to ζ gives
\[
-\frac{\partial^{2}}{\partial x^{2}}\frac{\partial f_{+,M}}{\partial\zeta} + q(x)\frac{\partial f_{+,M}}{\partial\zeta} = 2\zeta f_{+,M} + \zeta^{2}\frac{\partial f_{+,M}}{\partial\zeta}
\]
and similarly for \(f_{-,M}\). Thus
\[
\frac{d}{dx}W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right] = 2\zeta f_{+,M}f_{-,M}, \quad x \neq 0
\]
(5.2)
and
\[
\frac{d}{dx}W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right] = -2\zeta f_{+,M}f_{-,M}, \quad x \neq 0.
\]
(5.3)
Let y > x > 0; integrating (5.2) over the interval [x, y] gives
\[
W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right](y) - W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right](x) = 2\zeta\int_{x}^{y}f_{+,M}f_{-,M}\,d\tau,
\]
(5.4)
where we should keep in mind that \(f_{-,M} = h_{2}\) on the positive semi-axis, see (4.2). Similarly, for y < x < 0, integrating (5.3) over the interval [y, x], we get
\[
W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right](x) - W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right](y) = -2\zeta\int_{y}^{x}f_{+,M}f_{-,M}\,d\tau.
\]
(5.5)
Let Γ be a square contour (with side length ε) oriented anticlockwise around \(i\eta_{k}\) (see Figure 1), where ε > 0 is sufficiently small that \(\eta_{k} > \varepsilon\) and \(\varepsilon < \eta_{i+1} - \eta_{i}\) for all \(i = 1,\ldots,N-1\).
Figure 1

The square contour with centre i η k .

By Cauchy’s integral representation for analytic functions and their derivatives, we have
\[
\frac{\partial f_{+,M}}{\partial\zeta}(x,i\eta_{k}) = \frac{1}{2\pi i}\oint_{\Gamma}\frac{f_{+,M}(x,z)}{(z - i\eta_{k})^{2}}\,dz.
\]
Since \(\frac{1}{|z - i\eta_{k}|^{2}} \le \frac{4}{\varepsilon^{2}}\) for \(z\in\Gamma\), we have
\[
\left|\frac{\partial f_{+,M}}{\partial\zeta}(x,i\eta_{k})\right| \le \frac{1}{2\pi}\cdot\frac{4}{\varepsilon^{2}}\cdot 4\varepsilon\max_{z\in\Gamma}|f_{+,M}(x,z)| = \frac{8}{\pi\varepsilon}\max_{z\in\Gamma}|f_{+,M}(x,z)|.
\]
From (2.1), for x > 0, we obtain
\[
\left|\frac{\partial f_{+,M}}{\partial\zeta}(x,i\eta_{k})\right| \le \frac{8}{\pi\varepsilon}\max_{z\in\Gamma}\left(e^{-\Im(z)x} + \left|\frac{C(x)\rho(x)e^{-\Im(z)x}}{1+|z|}\right|\right),
\]
which is bounded on each interval of the form \([y,\infty)\). Similarly, as \(f_{+,M}(x,\zeta)\) is analytic in the upper half-plane and twice differentiable with respect to x on \(\mathbb{R}\setminus\{0\}\), we have
\[
\frac{d}{dx}\frac{\partial f_{+,M}}{\partial\zeta}(x,i\eta_{k}) = \frac{\partial f_{+,M}'}{\partial\zeta}(x,i\eta_{k}).
\]
Proceeding as above, we obtain
\[
\left|\frac{\partial f_{+,M}'}{\partial\zeta}(x,i\eta_{k})\right| \le \frac{8}{\pi\varepsilon}\max_{z\in\Gamma}|f_{+,M}'(x,z)|,
\]

which by (2.2) is bounded on each interval of the form \([y,\infty)\). In exactly the same way, it can be shown that \(\frac{\partial f_{-,M}}{\partial\zeta}(x,i\eta_{k})\) and \(\frac{\partial f_{-,M}'}{\partial\zeta}(x,i\eta_{k})\) are bounded functions on each interval of the form \((-\infty,y]\). Now, for \(\zeta^{2}\) an eigenvalue, we have, by the reasoning preceding Theorem 4.2, that \(\zeta = i\eta_{k}\) for \(\eta_{k}\in\mathbb{R}^{+}\) and \(f_{-,M}(x,i\eta_{k}) = d_{k}f_{+,M}(x,i\eta_{k})\) for some \(d_{k}\in\mathbb{C}\setminus\{0\}\).

Thus
\[
W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right](y,i\eta_{k}) = W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, d_{k}f_{+,M}\right](y,i\eta_{k}) \to 0 \quad \text{as } y\to\infty
\]
and
\[
W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right](y,i\eta_{k}) = W\!\left[\frac{1}{d_{k}}f_{-,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right](y,i\eta_{k}) \to 0 \quad \text{as } y\to-\infty.
\]
Letting \(y\to\pm\infty\) in (5.4) and (5.5), from the above equations, we get
\[
W\!\left[\frac{\partial f_{+,M}}{\partial\zeta}, f_{-,M}\right](x,i\eta_{k}) = -2i\eta_{k}\int_{x}^{\infty}f_{+,M}f_{-,M}\big|_{\zeta=i\eta_{k}}\,d\tau, \quad x > 0,
\]
and
\[
W\!\left[f_{+,M}, \frac{\partial f_{-,M}}{\partial\zeta}\right](x,i\eta_{k}) = -2i\eta_{k}\int_{-\infty}^{x}f_{+,M}f_{-,M}\big|_{\zeta=i\eta_{k}}\,d\tau, \quad x < 0.
\]
Hence
\[
\frac{dA}{d\zeta}(i\eta_{k}) = \frac{1}{2\eta_{k}}(-2i\eta_{k})\left(\int_{x}^{\infty}f_{+,M}f_{-,M}\big|_{\zeta=i\eta_{k}}\,d\tau + \int_{-\infty}^{x}f_{+,M}f_{-,M}\big|_{\zeta=i\eta_{k}}\,d\tau\right)
\]
for each x > 0. So, letting \(x\to 0^{+}\), we obtain
\[
\frac{dA}{d\zeta}(i\eta_{k}) = -i\left(\int_{0}^{\infty}f_{+,M}f_{-,M}\big|_{\zeta=i\eta_{k}}\,dx + \int_{-\infty}^{0}f_{+,M}f_{-,M}\big|_{\zeta=i\eta_{k}}\,dx\right) = -id_{k}\int_{-\infty}^{\infty}f_{+,M}^{2}\big|_{\zeta=i\eta_{k}}\,dx = -\frac{id_{k}}{c_{k}} \neq 0
\]

by (5.1), and the zeros of \(A(\zeta)\) are simple. □

Note that the theorem given above is especially useful when solving the associated (matrix) inverse scattering problem using the approach given in, say, [7], since it provides an explicit relationship between the norming constants \(c_{k}\) and the constants \(d_{k}\) in terms of the derivative of \(A(\zeta)\).
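The relationship of Theorem 5.1 can be verified on a hypothetical δ-type example (not from the paper): with \(q \equiv 0\) and \(M = \begin{pmatrix}1 & 0 \\ c & 1\end{pmatrix}\), c = −2, the only eigenvalue sits at \(\eta_{k} = 1\), the eigenfunction is \(e^{-|x|}\), so \(c_{k} = 1\) and \(d_{k} = 1\), and a numerical derivative of the closed-form \(A(\zeta)\) should return \(-id_{k}/c_{k} = -i\).

```python
import numpy as np

def A_free(M, zeta):
    # A(zeta) for q = 0 via the Wronskian at x = 0+ (closed form).
    m11, m12 = M[0]
    m21, m22 = M[1]
    return -(m21 - 1j * zeta * (m11 + m22) - zeta**2 * m12) / (2j * zeta)

def trapezoid(f_vals, x_vals):
    dx = np.diff(x_vals)
    return np.sum(dx * (f_vals[:-1] + f_vals[1:]) / 2.0)

# Illustrative delta-type interaction: eigenvalue at eta_k = 1 with
# eigenfunction e^{-|x|}, hence d_k = 1 and 1/c_k = int_R e^{-2|x|} dx = 1.
c = -2.0
M = [[1.0, 0.0], [c, 1.0]]
eta_k, d_k = 1.0, 1.0

x = np.linspace(0.0, 30.0, 300001)
inv_ck = 2.0 * trapezoid(np.exp(-2.0 * x), x)   # norming integral (5.1)
c_k = 1.0 / inv_ck

h = 1e-5                                        # central difference in zeta
dA = (A_free(M, 1j * eta_k + h) - A_free(M, 1j * eta_k - h)) / (2 * h)
# Theorem 5.1 predicts dA/dzeta(i eta_k) = -i d_k / c_k
```

Since \(A\) is analytic near \(i\eta_{k}\), a central difference along the real direction gives the complex derivative, and the computed value matches the predicted \(-i\) to high accuracy.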

The inverse scattering problem, building on the results obtained here, will be considered in the sequel to this paper.

Declarations

Acknowledgements

The authors would like to thank the referees for their useful comments and suggestions. The first author was supported by NRF Grant No. IFR2011040100017. The second author was partially supported by Foundation for Polish Science, Programme Homing 2009/9. The third author was supported by NRF Grant No. IFR2011032400120.

Authors’ Affiliations

(1)
School of Mathematics, University of the Witwatersrand
(2)
Faculty of Applied Mathematics, AGH University of Science and Technology

References

  1. Buschmann D, Stolz G, Weidmann J: One-dimensional Schrödinger operators with local point interactions. J. Reine Angew. Math. 1995, 467: 169-186.
  2. Livšic MS: Operators, Oscillations, Waves (Open Systems). Am. Math. Soc., Providence; 1973.
  3. Gordon NA, Pearson DB: Point transfer matrices for the Schrödinger equation: the algebraic theory. Proc. R. Soc. Edinb. A 1999, 129: 717-732. 10.1017/S030821050001310X
  4. Chadan K, Sabatier PC: Inverse Problems in Quantum Scattering Theory. Springer, Berlin; 1977.
  5. Freiling G, Yurko V: Inverse Sturm-Liouville Problems and Their Applications. Nova Science Publishers, New York; 2001.
  6. Marčenko VA: Sturm-Liouville Operators and Applications: Revised Edition. Am. Math. Soc., Providence; 2011.
  7. Hsieh P, Sibuya Y: Basic Theory of Ordinary Differential Equations. Springer, Berlin; 1999.
  8. Weidmann J: Spectral Theory of Ordinary Differential Operators. Springer, Berlin; 1980.
  9. Goldberg S: Unbounded Linear Operators, Theory and Applications. McGraw-Hill, New York; 1966.
  10. Weidmann J: Linear Operators in Hilbert Spaces. Springer, Berlin; 1980.

Copyright

© Currie et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.