Open Access

Computing eigenvalues and Hermite interpolation for Dirac systems with eigenparameter in boundary conditions

Boundary Value Problems 2013, 2013:36

DOI: 10.1186/1687-2770-2013-36

Received: 8 November 2012

Accepted: 5 February 2013

Published: 21 February 2013

Abstract

Eigenvalue problems with the eigenparameter appearing in the boundary conditions usually have a complicated characteristic determinant whose zeros cannot be computed explicitly. In this paper we use the derivative sampling theorem ('Hermite interpolations') to compute approximate values of the eigenvalues of Dirac systems with the eigenvalue parameter in one or two boundary conditions. We use recently derived estimates for the truncation and amplitude errors to compute error bounds, and from these computable error bounds we obtain eigenvalue enclosures. Examples with tables and illustrative figures are given. The numerical examples given at the end of the paper also provide comparisons with the classical sinc method of Annaby and Tharwat (BIT Numer. Math. 47:699-713, 2007) and show that the Hermite interpolation method gives remarkably better results.

MSC: 34L16, 94A20, 65L15.

Keywords

Dirac systems; eigenvalue problems with eigenparameter in the boundary conditions; Hermite interpolations; truncation error; amplitude error; sinc methods

1 Introduction

Let $\sigma > 0$ and let $PW^2_\sigma$ be the Paley-Wiener space of all entire functions of exponential type $\sigma$ whose restrictions to the real line belong to $L^2(\mathbb{R})$. Assume that $f(t) \in PW^2_\sigma \subset PW^2_{2\sigma}$. Then $f(t)$ can be reconstructed via the Hermite-type sampling series
$$f(t) = \sum_{n=-\infty}^{\infty} \left[ f\Big(\frac{n\pi}{\sigma}\Big) S_n^2(t) + f'\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma t - n\pi)}{\sigma}\, S_n(t) \right], \tag{1.1}$$
where $S_n(t)$ is the sequence of sinc functions
$$S_n(t) := \begin{cases} \dfrac{\sin(\sigma t - n\pi)}{\sigma t - n\pi}, & t \neq \dfrac{n\pi}{\sigma}, \\[4pt] 1, & t = \dfrac{n\pi}{\sigma}. \end{cases} \tag{1.2}$$
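To make the series concrete, here is a minimal numerical sketch of (1.1)-(1.2); the choices $\sigma = \pi$ and the test function $f(t) = \sin(\pi t)/(\pi t)$ are illustrative assumptions for the demonstration, not data from the paper.

```python
import numpy as np

sigma = np.pi  # band limit; the sample nodes n*pi/sigma are then the integers

def S(n, t):
    """S_n(t) = sin(sigma*t - n*pi)/(sigma*t - n*pi), with S_n(n*pi/sigma) = 1."""
    return np.sinc((sigma * t - n * np.pi) / np.pi)  # np.sinc(u) = sin(pi*u)/(pi*u)

def f(t):
    """Test function sin(pi*t)/(pi*t): entire, of exponential type pi, in L^2(R)."""
    return np.sinc(t)

def f_prime(t):
    """Derivative of the test function (equal to 0 at t = 0 by Taylor expansion)."""
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0.0, 1.0, t)
    return np.where(t == 0.0, 0.0, (np.cos(np.pi * t) - np.sinc(t)) / safe)

def hermite_partial_sum(t, N):
    """Truncated derivative (Hermite-type) sampling series, the f_N(t) of (1.4)."""
    total = 0.0
    for n in range(-N, N + 1):
        tn = n * np.pi / sigma  # sample node
        total += f(tn) * S(n, t) ** 2 \
               + f_prime(tn) * np.sin(sigma * t - n * np.pi) / sigma * S(n, t)
    return total
```

At a sample node the series reproduces the sample exactly; between nodes the truncated sum converges to $f(t)$ as $N$ grows.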
Series (1.1) converges absolutely and uniformly on $\mathbb{R}$, cf. [1-4]. Series (1.1) is sometimes called the derivative sampling theorem. Our task is to use formula (1.1) to compute the eigenvalues of Dirac systems numerically. This approach is a fully new technique that uses the recently obtained estimates for the truncation and amplitude errors associated with (1.1), cf. [5]. Both types of errors normally appear in numerical techniques that use interpolation procedures. In the following we summarize these estimates. The truncation error associated with (1.1) is defined to be
$$R_N(f)(t) := f(t) - f_N(t), \quad N \in \mathbb{Z}^+,\ t \in \mathbb{R}, \tag{1.3}$$
where $f_N(t)$ is the truncated series
$$f_N(t) = \sum_{|n| \le N} \left[ f\Big(\frac{n\pi}{\sigma}\Big) S_n^2(t) + f'\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma t - n\pi)}{\sigma}\, S_n(t) \right]. \tag{1.4}$$
It is proved in [5] that if $f(t) \in PW^2_\sigma$ and $f(t)$ is sufficiently smooth, in the sense that there exists $k \in \mathbb{Z}^+$ such that $t^k f(t) \in L^2(\mathbb{R})$, then, for $t \in \mathbb{R}$, $|t| < N\pi/\sigma$, we have
$$|R_N(f)(t)| \le T_{N,k,\sigma}(t) := \frac{\xi_{k,\sigma} E_k |\sin \sigma t|^2}{\sqrt{3}\,(N+1)^k} \left( \frac{1}{(N\pi - \sigma t)^{3/2}} + \frac{1}{(N\pi + \sigma t)^{3/2}} \right) + \frac{\xi_{k,\sigma} (\sigma E_k + k E_{k-1}) |\sin \sigma t|^2}{\sigma (N+1)^k} \left( \frac{1}{N\pi - \sigma t} + \frac{1}{N\pi + \sigma t} \right), \tag{1.5}$$
where the constants $E_k$ and $\xi_{k,\sigma}$ are given by
$$E_k := \int_{-\infty}^{\infty} |t^k f(t)|^2 \, dt, \qquad \xi_{k,\sigma} := \frac{\sigma^{k+1/2}}{\pi^{k+1} \sqrt{1 - 4^{-k}}}. \tag{1.6}$$
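The decay of the truncation error (1.3) as $N$ grows can be observed directly by evaluating the truncated series against a known band-limited function; as before, $\sigma = \pi$ and $f(t) = \sin(\pi t)/(\pi t)$ are illustrative assumptions of this sketch.

```python
import numpy as np

sigma = np.pi

def S(n, t):
    return np.sinc((sigma * t - n * np.pi) / np.pi)

def f(t):
    return np.sinc(t)

def f_prime(t):
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0.0, 1.0, t)
    return np.where(t == 0.0, 0.0, (np.cos(np.pi * t) - np.sinc(t)) / safe)

def truncation_error(N, grid=np.linspace(-2.0, 2.0, 81)):
    """max_t |f(t) - f_N(t)| over a grid well inside (-N*pi/sigma, N*pi/sigma)."""
    fN = np.zeros_like(grid)
    for n in range(-N, N + 1):
        tn = n * np.pi / sigma
        fN += f(tn) * S(n, grid) ** 2 \
            + f_prime(tn) * np.sin(sigma * grid - n * np.pi) / sigma * S(n, grid)
    return np.max(np.abs(f(grid) - fN))
```

The observed error shrinks as $N$ increases, in line with the $(N+1)^{-k}$ factor in (1.5).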
The amplitude error occurs when approximate samples are used instead of the exact ones, which we cannot compute. It is defined to be
$$A(\varepsilon, f)(t) = \sum_{n=-\infty}^{\infty} \left[ \Big\{ f\Big(\frac{n\pi}{\sigma}\Big) - \tilde{f}\Big(\frac{n\pi}{\sigma}\Big) \Big\} S_n^2(t) + \Big\{ f'\Big(\frac{n\pi}{\sigma}\Big) - \tilde{f}'\Big(\frac{n\pi}{\sigma}\Big) \Big\} \frac{\sin(\sigma t - n\pi)}{\sigma}\, S_n(t) \right], \quad t \in \mathbb{R}, \tag{1.7}$$
where $\tilde{f}(\frac{n\pi}{\sigma})$ and $\tilde{f}'(\frac{n\pi}{\sigma})$ are approximate samples of $f(\frac{n\pi}{\sigma})$ and $f'(\frac{n\pi}{\sigma})$, respectively. Let us assume that the differences $\varepsilon_n := f(\frac{n\pi}{\sigma}) - \tilde{f}(\frac{n\pi}{\sigma})$, $\varepsilon'_n := f'(\frac{n\pi}{\sigma}) - \tilde{f}'(\frac{n\pi}{\sigma})$, $n \in \mathbb{Z}$, are bounded by a positive number $\varepsilon$, i.e., $|\varepsilon_n|, |\varepsilon'_n| \le \varepsilon$. If $f(t) \in PW^2_\sigma$ satisfies the natural decay conditions
$$|f(t)| \le M_f, \quad t \in \mathbb{R}, \tag{1.8}$$
$$|f(t)| \le \frac{M_f}{|t|^{\omega+1}}, \quad t \in \mathbb{R} \setminus \{0\}, \tag{1.9}$$
$0 < \omega \le 1$, then for $0 < \varepsilon \le \min\{\pi/\sigma, \sigma/\pi, 1/e\}$, we have, [5],
$$A(\varepsilon, f) \le \frac{4 e^{1/4}}{\sigma(\omega+1)} \left\{ 3e(1+\sigma) + \big( (\pi/\sigma) A + M_f \big) \rho(\varepsilon) + \big( \sigma + 2 + \log 2 \big) M_f \right\} \varepsilon \log(1/\varepsilon), \tag{1.10}$$
where
$$A := \frac{3\sigma}{\pi} \Big( |f(0)| + M_f \Big(\frac{\sigma}{\pi}\Big)^{\omega} \Big), \qquad \rho(\varepsilon) := \gamma + 10 \log(1/\varepsilon), \tag{1.11}$$

and $\gamma := \lim_{n\to\infty} \big[ \sum_{k=1}^{n} \frac{1}{k} - \log n \big] \approx 0.577216$ is the Euler-Mascheroni constant.
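The amplitude error can be simulated in isolation by perturbing exact samples by amounts bounded by $\varepsilon$ and measuring how far the truncated series moves; the uniform perturbation model, $\sigma = \pi$, and the test function below are assumptions made only for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N, eps = np.pi, 30, 1e-8

def S(n, t):
    return np.sinc((sigma * t - n * np.pi) / np.pi)

def f(t):
    return np.sinc(t)

def f_prime(t):
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0.0, 1.0, t)
    return np.where(t == 0.0, 0.0, (np.cos(np.pi * t) - np.sinc(t)) / safe)

def partial_sum(t, d_sample, d_deriv):
    """Truncated Hermite series built from (possibly perturbed) samples."""
    total = 0.0
    for i, n in enumerate(range(-N, N + 1)):
        tn = n * np.pi / sigma
        total += (f(tn) + d_sample[i]) * S(n, t) ** 2 \
               + (f_prime(tn) + d_deriv[i]) * np.sin(sigma * t - n * np.pi) / sigma * S(n, t)
    return total

# perturbations bounded by eps, mimicking inexactly computed samples
d1 = rng.uniform(-eps, eps, 2 * N + 1)
d2 = rng.uniform(-eps, eps, 2 * N + 1)
zero = np.zeros(2 * N + 1)
amplitude_err = abs(partial_sum(0.3, d1, d2) - partial_sum(0.3, zero, zero))
```

The deviation stays of the order of $\varepsilon$ (up to a slowly growing factor), which is the behaviour that the bound (1.10) quantifies.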

The classical [6] sampling theorem of Whittaker, Kotel'nikov and Shannon (WKS) for $f \in PW^2_\sigma$ is the series representation
$$f(t) = \sum_{n=-\infty}^{\infty} f\Big(\frac{n\pi}{\sigma}\Big) S_n(t), \quad t \in \mathbb{R}, \tag{1.12}$$
where the convergence is absolute and uniform on $\mathbb{R}$, and uniform on compact subsets of $\mathbb{C}$, cf. [6-8]. Series (1.12), which is of Lagrange interpolation type, has been used to compute eigenvalues of second-order eigenvalue problems; see, e.g., [9-15]. The use of (1.12) in numerical analysis is known as the sinc method established by Stenger, cf. [16-18]. In [10, 12], the authors applied (1.12) and the regularized sinc method to compute eigenvalues of Dirac systems, with a derivation of the error estimates as given in [19, 20]. In [12] the Dirac system has an eigenparameter appearing in the boundary conditions. The aim of this paper is to investigate the possibility of using Hermite interpolations, rather than Lagrange interpolations, to compute the eigenvalues numerically. Notice that, by the Paley-Wiener theorem [21], $f \in PW^2_\sigma$ if and only if there is $g(\cdot) \in L^2(-\sigma, \sigma)$ such that
$$f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\sigma}^{\sigma} g(x) e^{ixt} \, dx. \tag{1.13}$$
Therefore $f'(t) \in PW^2_\sigma$, i.e., $f'(t)$ also has an expansion of the form (1.12). However, $f'(t)$ can also be obtained by term-by-term differentiation of (1.12),
$$f'(t) = \sum_{n=-\infty}^{\infty} f\Big(\frac{n\pi}{\sigma}\Big) S'_n(t), \tag{1.14}$$
see [6, p. 52] for convergence. Thus the use of Hermite interpolations will not cost any additional computational effort, since the samples $f(\frac{n\pi}{\sigma})$ are used to compute both $f(t)$ and $f'(t)$ according to (1.12) and (1.14), respectively.
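A small sketch of (1.14): the derivative is recovered from the undifferentiated samples via the closed-form derivative of the sinc function. The choices $\sigma = \pi$ and $f(t) = \sin(\pi t)/(\pi t)$ are again illustrative assumptions.

```python
import numpy as np

sigma = np.pi

def S_prime(n, t):
    """Derivative of S_n(t) = sin(x)/x with x = sigma*t - n*pi (chain-rule factor sigma)."""
    x = sigma * t - n * np.pi
    if abs(x) < 1e-8:
        return -sigma * x / 3.0  # Taylor: (sin x / x)' = -x/3 + O(x^3)
    return sigma * (np.cos(x) / x - np.sin(x) / x ** 2)

def f(t):
    return np.sinc(t)  # sin(pi*t)/(pi*t)

def f_prime_series(t, N=40):
    """Term-by-term differentiated WKS series (1.14), truncated at |n| <= N."""
    return sum(f(n * np.pi / sigma) * S_prime(n, t) for n in range(-N, N + 1))
```

For this test function $f'(1/2) = -4/\pi$, and the truncated series reproduces it using only the samples $f(n)$, with no derivative samples required.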

Consider the Dirac system consisting of the system of differential equations
$$u'_2(x) - r_1(x) u_1(x) = \lambda u_1(x), \qquad u'_1(x) + r_2(x) u_2(x) = -\lambda u_2(x), \quad x \in [0,1], \tag{1.15}$$
and the boundary conditions
$$(\alpha_1 + \lambda \alpha'_1) u_1(0) - (\alpha_2 + \lambda \alpha'_2) u_2(0) = 0, \tag{1.16}$$
$$(\beta_1 + \lambda \beta'_1) u_1(1) + (\beta_2 + \lambda \beta'_2) u_2(1) = 0, \tag{1.17}$$
where $r_1(\cdot), r_2(\cdot) \in L^1(0,1)$ and $\alpha_i, \beta_i, \alpha'_i, \beta'_i \in \mathbb{R}$, $i = 1, 2$, satisfy
$$\big( (\alpha'_1, \alpha'_2) = (0,0) \ \text{or} \ \alpha'_1 \alpha_2 - \alpha_1 \alpha'_2 > 0 \big) \quad \text{and} \quad \big( (\beta'_1, \beta'_2) = (0,0) \ \text{or} \ \beta'_1 \beta_2 - \beta_1 \beta'_2 > 0 \big). \tag{1.18}$$

The eigenvalue problem (1.15)-(1.17) will be denoted by $\Gamma(r, \alpha, \beta, \alpha', \beta')$ when $(\alpha'_1, \alpha'_2) \neq (0,0) \neq (\beta'_1, \beta'_2)$. It is a Dirac system in which the eigenparameter $\lambda$ appears linearly in both boundary conditions. The classical problem, when $\alpha'_1 = \alpha'_2 = \beta'_1 = \beta'_2 = 0$, which we denote by $\Gamma(r, \alpha, \beta, 0, 0)$, is studied in the monographs of Levitan and Sargsjan [22, 23]. Annaby and Tharwat [24] used the Hermite-type sampling series (1.1) to compute the eigenvalues of problem $\Gamma(r, \alpha, \beta, 0, 0)$ numerically. In [25], Kerimov proved that $\Gamma(r, \alpha, \beta, \alpha', \beta')$ has a denumerable set of real and simple eigenvalues with $\pm\infty$ as the only limit points. Similar results are established in [26] for the problem in which the eigenparameter appears in one condition only, i.e., when $\alpha'_1 = \alpha'_2 = 0$ and $(\beta'_1, \beta'_2) \neq (0,0)$, or equivalently when $(\alpha'_1, \alpha'_2) \neq (0,0)$ and $\beta'_1 = \beta'_2 = 0$; sampling theorems have also been established there. These problems will be denoted by $\Gamma(r, \alpha, \beta, 0, \beta')$ and $\Gamma(r, \alpha, \beta, \alpha', 0)$, respectively. The aim of the present work is to compute the eigenvalues of $\Gamma(r, \alpha, \beta, \alpha', \beta')$ and $\Gamma(r, \alpha, \beta, 0, \beta')$ numerically by Hermite interpolations, with an error analysis. The method is based on the sampling theorem (Hermite interpolations), but applied to regularized functions, hence avoiding any (multiple) integration and keeping the number of terms in the cardinal series manageable. It has been demonstrated that the method is capable of delivering higher-order estimates of the eigenvalues at a very low cost; see [24]. In Sections 2 and 3 we derive the Hermite interpolation technique for computing the eigenvalues of Dirac systems, with error estimates, and briefly derive some necessary asymptotics of their spectral quantities. The last section contains three worked examples, accompanied by figures and by numerical comparisons with the Lagrange interpolation (sinc) method.

2 Treatment of $\Gamma(r, \alpha, \beta, \alpha', \beta')$

In this section we derive approximate values of the eigenvalues of $\Gamma(r, \alpha, \beta, \alpha', \beta')$. Recall that $\Gamma(r, \alpha, \beta, \alpha', \beta')$ has a denumerable set of real and simple eigenvalues, cf. [25]. Let $\varphi(\cdot, \lambda) = (\varphi_1(\cdot, \lambda), \varphi_2(\cdot, \lambda))^{\top}$, where $^{\top}$ denotes the transpose, be the solution of (1.15) satisfying the initial conditions
$$\varphi_1(0, \lambda) = \alpha_2 + \lambda \alpha'_2, \qquad \varphi_2(0, \lambda) = \alpha_1 + \lambda \alpha'_1. \tag{2.1}$$
Since $\varphi(\cdot, \lambda)$ satisfies (1.16), the eigenvalues of the problem $\Gamma(r, \alpha, \beta, \alpha', \beta')$ are the zeros of the function
$$\Delta(\lambda) := (\beta_1 + \lambda \beta'_1) \varphi_1(1, \lambda) + (\beta_2 + \lambda \beta'_2) \varphi_2(1, \lambda). \tag{2.2}$$
Similarly to [22, p. 220], $\varphi_1(\cdot, \lambda)$ and $\varphi_2(\cdot, \lambda)$ satisfy the system of integral equations
$$\varphi_1(x, \lambda) = (\alpha_2 + \lambda \alpha'_2) \cos \lambda x - (\alpha_1 + \lambda \alpha'_1) \sin \lambda x - T_1 \varphi_1(x, \lambda) - \widetilde{T}_2 \varphi_2(x, \lambda), \tag{2.3}$$
$$\varphi_2(x, \lambda) = (\alpha_2 + \lambda \alpha'_2) \sin \lambda x + (\alpha_1 + \lambda \alpha'_1) \cos \lambda x + \widetilde{T}_1 \varphi_1(x, \lambda) - T_2 \varphi_2(x, \lambda), \tag{2.4}$$
where $T_i$ and $\widetilde{T}_i$, $i = 1, 2$, are the Volterra operators defined by
$$T_i u(x, \lambda) := \int_0^x \sin \lambda(x-t)\, r_i(t)\, u(t, \lambda) \, dt, \qquad \widetilde{T}_i u(x, \lambda) := \int_0^x \cos \lambda(x-t)\, r_i(t)\, u(t, \lambda) \, dt, \quad i = 1, 2. \tag{2.5}$$
For convenience, we define the constants
$$c_1 := \max\{|\alpha_1| + |\alpha_2|,\, |\alpha'_1| + |\alpha'_2|\}, \quad c_2 := \int_0^1 \big[ |r_1(t)| + |r_2(t)| \big] \, dt, \quad c_3 := c_1 c_2, \quad c_4 := c_3 \exp(c_2), \quad c_5 := \max\{|\beta_1| + |\beta_2|,\, |\beta'_1| + |\beta'_2|\}, \quad c_6 := c_4 c_5. \tag{2.6}$$
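The constants in (2.6) are straightforward to evaluate; a sketch for the potentials $r_1(x) = r_2(x) = x^2$ used later, with hypothetical boundary-condition coefficients chosen only for this demonstration.

```python
import numpy as np

# hypothetical coefficient values for the demonstration (assumptions, not from the text)
alpha  = (0.0, 1.0)   # (alpha_1, alpha_2)
alphap = (1.0, 0.0)   # (alpha_1', alpha_2')
beta   = (0.0, 1.0)   # (beta_1, beta_2)
betap  = (1.0, 0.0)   # (beta_1', beta_2')

def r1(x): return x ** 2
def r2(x): return x ** 2

# trapezoidal approximation of c2 = int_0^1 (|r1| + |r2|) dt
x = np.linspace(0.0, 1.0, 100001)
vals = np.abs(r1(x)) + np.abs(r2(x))
dx = x[1] - x[0]
c2 = float(np.sum((vals[:-1] + vals[1:]) * dx / 2.0))

c1 = max(abs(alpha[0]) + abs(alpha[1]), abs(alphap[0]) + abs(alphap[1]))
c3 = c1 * c2
c4 = c3 * np.exp(c2)
c5 = max(abs(beta[0]) + abs(beta[1]), abs(betap[0]) + abs(betap[1]))
c6 = c4 * c5
```

For these sample data $c_2 = 2/3$, so $c_6 = \frac{2}{3} e^{2/3}$.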
Define $h_1(\cdot, \lambda)$ and $h_2(\cdot, \lambda)$ to be
$$h_1(x, \lambda) := -T_1 \varphi_1(x, \lambda) - \widetilde{T}_2 \varphi_2(x, \lambda), \qquad h_2(x, \lambda) := \widetilde{T}_1 \varphi_1(x, \lambda) - T_2 \varphi_2(x, \lambda). \tag{2.7}$$
As in [12] we split $\Delta(\lambda)$ into two parts via
$$\Delta(\lambda) := G(\lambda) + S(\lambda), \tag{2.8}$$
where $G(\lambda)$ is the known part,
$$G(\lambda) := (\beta_1 + \lambda \beta'_1) \big[ (\alpha_2 + \lambda \alpha'_2) \cos \lambda - (\alpha_1 + \lambda \alpha'_1) \sin \lambda \big] + (\beta_2 + \lambda \beta'_2) \big[ (\alpha_2 + \lambda \alpha'_2) \sin \lambda + (\alpha_1 + \lambda \alpha'_1) \cos \lambda \big], \tag{2.9}$$
and $S(\lambda)$ is the unknown one,
$$S(\lambda) := (\beta_1 + \lambda \beta'_1)\, h_1(1, \lambda) + (\beta_2 + \lambda \beta'_2)\, h_2(1, \lambda). \tag{2.10}$$
The functions $h_1(x, \lambda)$ and $h_2(x, \lambda)$ are entire in $\lambda$ for each $x \in [0,1]$, and $S(\lambda)$ is an entire function of $\lambda$ satisfying, cf. [12],
$$|S(\lambda)| \le c_6 (1 + |\lambda|)^2 e^{|\Im \lambda|}, \quad \lambda \in \mathbb{C}. \tag{2.11}$$
The analyticity of $S(\lambda)$ and estimate (2.11) are not adequate to prove that $S(\lambda)$ lies in a Paley-Wiener space. To solve this problem we multiply $S(\lambda)$ by a regularization factor. Let $\theta > 0$ and $m \in \mathbb{Z}^+$, $m \ge 4$, be fixed, and let $F_{\theta,m}(\lambda)$ be the function
$$F_{\theta,m}(\lambda) := \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m S(\lambda), \quad \lambda \in \mathbb{C}. \tag{2.12}$$
We choose $\theta$ sufficiently small that $|\theta\lambda| < \pi$; more specific choices of $m$ and $\theta$ will be given later on. Then $F_{\theta,m}(\lambda)$, see [12], is an entire function of $\lambda$ which satisfies the estimate
$$|F_{\theta,m}(\lambda)| \le \frac{c_0^m c_6 (1 + |\lambda|)^2}{(1 + \theta|\lambda|)^m}\, e^{|\Im \lambda|(1 + m\theta)}, \quad \lambda \in \mathbb{C}. \tag{2.13}$$
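A brief numerical sketch of the regularization factor in (2.12) and of the resulting type $\sigma = 1 + m\theta$; the parameter values $\theta = 1/10$, $m = 10$ match one of the choices used in the examples below.

```python
import numpy as np

theta, m = 0.1, 10          # one of the parameter choices used in Section 4
sigma = 1.0 + m * theta     # exponential type of the regularized function

def reg_factor(lam):
    """(sin(theta*lam)/(theta*lam))^m, with the removable singularity at lam = 0."""
    return np.sinc(theta * lam / np.pi) ** m
```

The factor equals 1 at $\lambda = 0$ and decays at least like $(\theta|\lambda|)^{-m}$ on the real line, which is what turns the polynomially growing $S(\lambda)$ into a Paley-Wiener function.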
Moreover, $\lambda^{m-3} F_{\theta,m}(\lambda) \in L^2(\mathbb{R})$ and
$$E_{m-3}(F_{\theta,m}) = \int_{-\infty}^{\infty} |\lambda^{m-3} F_{\theta,m}(\lambda)|^2 \, d\lambda \le 2 c_0^m c_6 \xi_0, \tag{2.14}$$
where
$$\xi_0 := \frac{1}{\theta^{2m-1}} \left( \frac{3 + 2m^2 - 6\theta + 6\theta^2 + 4m\theta - 5m}{4m^3 - 12m^2 + 11m - 3} + \frac{6\theta^3 (\theta + 2m - 5)}{(4m^3 - 12m^2 + 11m - 3)(m-2)(2m-5)} \right).$$
What we have just proved is that $F_{\theta,m}(\lambda)$ belongs to the Paley-Wiener space $PW^2_\sigma$ with $\sigma = 1 + m\theta$. Since $F_{\theta,m}(\lambda) \in PW^2_\sigma \subset PW^2_{2\sigma}$, we can reconstruct $F_{\theta,m}(\lambda)$ via the sampling formula
$$F_{\theta,m}(\lambda) = \sum_{n=-\infty}^{\infty} \left[ F_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) S_n^2(\lambda) + F'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma\lambda - n\pi)}{\sigma}\, S_n(\lambda) \right]. \tag{2.15}$$
Let $N \in \mathbb{Z}^+$, $N > m$, and approximate $F_{\theta,m}(\lambda)$ by its truncated series $F_{\theta,m,N}(\lambda)$, where
$$F_{\theta,m,N}(\lambda) := \sum_{n=-N}^{N} \left[ F_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) S_n^2(\lambda) + F'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma\lambda - n\pi)}{\sigma}\, S_n(\lambda) \right]. \tag{2.16}$$
Since all the eigenvalues are real, from now on we restrict ourselves to $\lambda \in \mathbb{R}$. Since $\lambda^{m-3} F_{\theta,m}(\lambda) \in L^2(\mathbb{R})$, the truncation error, cf. (1.5), is given for $|\lambda| < N\pi/\sigma$ by
$$|F_{\theta,m}(\lambda) - F_{\theta,m,N}(\lambda)| \le T_{N,m-3,\sigma}(\lambda), \tag{2.17}$$
where $T_{N,m-3,\sigma}(\lambda)$ is the bound (1.5) with $k = m-3$ and with the constants computed for $F_{\theta,m}$:
$$T_{N,m-3,\sigma}(\lambda) := \frac{\xi_{m-3,\sigma} E_{m-3} |\sin \sigma\lambda|^2}{\sqrt{3}\,(N+1)^{m-3}} \left( \frac{1}{(N\pi - \sigma\lambda)^{3/2}} + \frac{1}{(N\pi + \sigma\lambda)^{3/2}} \right) + \frac{\xi_{m-3,\sigma} \big( \sigma E_{m-3} + (m-3) E_{m-4} \big) |\sin \sigma\lambda|^2}{\sigma (N+1)^{m-3}} \left( \frac{1}{N\pi - \sigma\lambda} + \frac{1}{N\pi + \sigma\lambda} \right). \tag{2.18}$$
The samples $\{F_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{F'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ are not, in general, known explicitly. So we approximate them by solving numerically $8N + 4$ initial value problems at the nodes $\{\frac{n\pi}{\sigma}\}_{n=-N}^{N}$. Let $\{\widetilde{F}_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{\widetilde{F}'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ be the resulting approximations of these samples. We now define $\widetilde{F}_{\theta,m,N}(\lambda)$, which approximates $F_{\theta,m,N}(\lambda)$:
$$\widetilde{F}_{\theta,m,N}(\lambda) := \sum_{n=-N}^{N} \left[ \widetilde{F}_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) S_n^2(\lambda) + \widetilde{F}'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma\lambda - n\pi)}{\sigma}\, S_n(\lambda) \right]. \tag{2.19}$$
Using standard methods for solving initial value problems, we may assume that for $|n| < N$,
$$\Big| F_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) - \widetilde{F}_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \Big| < \varepsilon, \qquad \Big| F'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) - \widetilde{F}'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \Big| < \varepsilon \tag{2.20}$$
for a sufficiently small $\varepsilon$. From (2.13) we can see that $F_{\theta,m}(\lambda)$ satisfies condition (1.9) when $m \ge 4$, and therefore, whenever $0 < \varepsilon \le \min\{\pi/\sigma, \sigma/\pi, 1/e\}$, we have
$$|F_{\theta,m,N}(\lambda) - \widetilde{F}_{\theta,m,N}(\lambda)| \le A(\varepsilon), \quad \lambda \in \mathbb{R}, \tag{2.21}$$
where there is a positive constant $M_{F_{\theta,m}}$ for which, cf. (1.10),
$$A(\varepsilon) := \frac{2 e^{1/4}}{\sigma} \left\{ 3e(1+\sigma) + \Big( \frac{\pi}{\sigma} A + M_{F_{\theta,m}} \Big) \rho(\varepsilon) + \big( \sigma + 2 + \log 2 \big) M_{F_{\theta,m}} \right\} \varepsilon \log(1/\varepsilon). \tag{2.22}$$
Here
$$A := \frac{3\sigma}{\pi} \Big( |F_{\theta,m}(0)| + \frac{\sigma}{\pi} M_{F_{\theta,m}} \Big), \qquad \rho(\varepsilon) := \gamma + 10 \log(1/\varepsilon).$$
In the following we use the technique of [27], where only the truncation error analysis is considered, to determine enclosure intervals for the eigenvalues; see also [24, 28]. Let $\lambda^*$ be an eigenvalue with $|\theta\lambda^*| < \pi$; that is,
$$\Delta(\lambda^*) = G(\lambda^*) + \left( \frac{\sin \theta\lambda^*}{\theta\lambda^*} \right)^m F_{\theta,m}(\lambda^*) = 0.$$
Then it follows that
$$G(\lambda^*) + \left( \frac{\sin \theta\lambda^*}{\theta\lambda^*} \right)^m \widetilde{F}_{\theta,m,N}(\lambda^*) = \left( \frac{\sin \theta\lambda^*}{\theta\lambda^*} \right)^m \big( \widetilde{F}_{\theta,m,N}(\lambda^*) - F_{\theta,m}(\lambda^*) \big),$$
and so
$$\left| G(\lambda^*) + \left( \frac{\sin \theta\lambda^*}{\theta\lambda^*} \right)^m \widetilde{F}_{\theta,m,N}(\lambda^*) \right| \le \left| \frac{\sin \theta\lambda^*}{\theta\lambda^*} \right|^m \big( T_{N,m-3,\sigma}(\lambda^*) + A(\varepsilon) \big).$$
Since $G(\lambda) + (\frac{\sin\theta\lambda}{\theta\lambda})^m \widetilde{F}_{\theta,m,N}(\lambda)$ is computable and $|\frac{\sin\theta\lambda}{\theta\lambda}|^m (T_{N,m-3,\sigma}(\lambda) + A(\varepsilon))$ has a computable upper bound, we can define an enclosure for $\lambda^*$ by solving the following system of inequalities:
$$-\left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-3,\sigma}(\lambda) + A(\varepsilon) \big) \le G(\lambda) + \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m \widetilde{F}_{\theta,m,N}(\lambda) \le \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-3,\sigma}(\lambda) + A(\varepsilon) \big). \tag{2.23}$$
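Numerically, solving a system of inequalities of the form (2.23) amounts to locating the two abscissae where the computable function crosses the curves $\pm(\text{bound})$. A generic bisection sketch, stated under an explicit monotonicity assumption; the toy function and constant bound used to exercise it are assumptions, not data from the paper.

```python
def enclosure(F, bound, lo, hi, iters=200):
    """Return [a_minus, a_plus] with |F| <= bound on it, assuming F is increasing on
    [lo, hi], has a simple zero there, F(lo) < -bound(lo) and F(hi) > bound(hi)."""
    def root(g, a, b):
        ga = g(a)
        for _ in range(iters):
            mid = 0.5 * (a + b)
            gm = g(mid)
            if ga * gm <= 0.0:
                b = mid
            else:
                a, ga = mid, gm
        return 0.5 * (a + b)
    a_minus = root(lambda x: F(x) + bound(x), lo, hi)  # crossing with -bound
    a_plus = root(lambda x: F(x) - bound(x), lo, hi)   # crossing with +bound
    return a_minus, a_plus
```

The returned interval plays the role of $I_{\varepsilon,N}$: it contains the zero of $F$ whenever $|F|$ exceeds the bound outside it.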
Its solution is an interval containing $\lambda^*$, over which the graph of
$$G(\lambda) + \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m \widetilde{F}_{\theta,m,N}(\lambda)$$
is squeezed between the graphs of
$$-\left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-3,\sigma}(\lambda) + A(\varepsilon) \big) \tag{2.24}$$
and
$$\left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-3,\sigma}(\lambda) + A(\varepsilon) \big). \tag{2.25}$$
Using the fact that $\widetilde{F}_{\theta,m,N}(\lambda) \to F_{\theta,m}(\lambda)$ uniformly over any compact set, and since $\lambda^*$ is a simple root, we obtain, for large $N$ and sufficiently small $\varepsilon$,
$$\frac{\partial}{\partial \lambda} \left( G(\lambda) + \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m \widetilde{F}_{\theta,m,N}(\lambda) \right) \neq 0$$
in a neighborhood of $\lambda^*$. Hence the graph of $G(\lambda) + (\frac{\sin\theta\lambda}{\theta\lambda})^m \widetilde{F}_{\theta,m,N}(\lambda)$ intersects the graphs (2.24) and (2.25) at two points with abscissae $a_-(\lambda^*, N, \varepsilon) \le a_+(\lambda^*, N, \varepsilon)$, and the solution of the system of inequalities (2.23) is the interval
$$I_{\varepsilon,N} := [a_-(\lambda^*, N, \varepsilon),\, a_+(\lambda^*, N, \varepsilon)],$$

and in particular $\lambda^* \in I_{\varepsilon,N}$. Summarizing the above discussion, we arrive at the following lemma, which is similar to that of [27] for Sturm-Liouville problems.

Lemma 2.1 For any eigenvalue $\lambda^*$, we can find $N_0 \in \mathbb{Z}^+$ and a sufficiently small $\varepsilon$ such that $\lambda^* \in I_{\varepsilon,N}$ for $N > N_0$. Moreover,
$$[a_-(\lambda^*, N, \varepsilon),\, a_+(\lambda^*, N, \varepsilon)] \to \{\lambda^*\} \quad \text{as } N \to \infty \text{ and } \varepsilon \to 0. \tag{2.26}$$
Proof Since all eigenvalues of $\Gamma(r, \alpha, \beta, \alpha', \beta')$ are simple, for large $N$ and sufficiently small $\varepsilon$ we have $\frac{\partial}{\partial\lambda} \big( G(\lambda) + (\frac{\sin\theta\lambda}{\theta\lambda})^m \widetilde{F}_{\theta,m,N}(\lambda) \big) \neq 0$ in a neighborhood of $\lambda^*$. Choose $N_0$ such that
$$G(\lambda) + \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m \widetilde{F}_{\theta,m,N_0}(\lambda) = \pm \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N_0,m-3,\sigma}(\lambda) + A(\varepsilon) \big)$$
has two distinct solutions, which we denote by $a_-(\lambda^*, N_0, \varepsilon) \le a_+(\lambda^*, N_0, \varepsilon)$. The decay $T_{N,m-3,\sigma}(\lambda) \to 0$ as $N \to \infty$ and $A(\varepsilon) \to 0$ as $\varepsilon \to 0$ ensures the existence of the solutions $a_-(\lambda^*, N, \varepsilon)$ and $a_+(\lambda^*, N, \varepsilon)$ as $N \to \infty$ and $\varepsilon \to 0$. For the second point, recall that $\widetilde{F}_{\theta,m,N}(\lambda) \to F_{\theta,m}(\lambda)$ as $N \to \infty$ and $\varepsilon \to 0$. Hence, taking the limit, we obtain
$$G(a_\pm) + \left( \frac{\sin \theta a_\pm}{\theta a_\pm} \right)^m F_{\theta,m}(a_\pm) = \lim_{N \to \infty,\, \varepsilon \to 0}\ \pm \left| \frac{\sin \theta a_\pm}{\theta a_\pm} \right|^m \big( T_{N,m-3,\sigma}(a_\pm) + A(\varepsilon) \big) = 0,$$

that is, $\Delta(a_+) = \Delta(a_-) = 0$. This leads us to conclude that $a_+ = a_- = \lambda^*$, since $\lambda^*$ is a simple root. □

Let $\widetilde{\Delta}_N(\lambda) := G(\lambda) + (\frac{\sin\theta\lambda}{\theta\lambda})^m \widetilde{F}_{\theta,m,N}(\lambda)$. Then (2.17) and (2.21) imply
$$|\Delta(\lambda) - \widetilde{\Delta}_N(\lambda)| \le \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-3,\sigma}(\lambda) + A(\varepsilon) \big), \quad |\lambda| < \frac{N\pi}{\sigma}. \tag{2.27}$$
Therefore $\theta$ and $m$ must be chosen so that, for $|\lambda| < \frac{N\pi}{\sigma}$,
$$m \ge 4, \qquad |\theta\lambda| < \pi.$$

Let $\lambda^*$ be an eigenvalue and $\lambda_N$ its approximation. Thus $\Delta(\lambda^*) = 0$ and $\widetilde{\Delta}_N(\lambda_N) = 0$. From (2.27) we have $|\widetilde{\Delta}_N(\lambda^*)| \le |\frac{\sin\theta\lambda^*}{\theta\lambda^*}|^m (T_{N,m-3,\sigma}(\lambda^*) + A(\varepsilon))$. Now we estimate the error $|\lambda^* - \lambda_N|$ for an eigenvalue $\lambda^*$.

Theorem 2.2 Let $\lambda^*$ be an eigenvalue of $\Gamma(r, \alpha, \beta, \alpha', \beta')$. For sufficiently large $N$ we have the following estimate:
$$|\lambda^* - \lambda_N| < \frac{ \left| \frac{\sin \theta\lambda_N}{\theta\lambda_N} \right|^m \big( T_{N,m-3,\sigma}(\lambda_N) + A(\varepsilon) \big) }{ \inf_{\zeta \in I_{\varepsilon,N}} |\Delta'(\zeta)| }. \tag{2.28}$$

Moreover, $|\lambda^* - \lambda_N| \to 0$ when $N \to \infty$ and $\varepsilon \to 0$.

Proof Since $\Delta(\lambda_N) - \widetilde{\Delta}_N(\lambda_N) = \Delta(\lambda_N) - \Delta(\lambda^*)$, (2.27) with $\lambda$ replaced by $\lambda_N$ gives
$$|\Delta(\lambda_N) - \Delta(\lambda^*)| \le \left| \frac{\sin \theta\lambda_N}{\theta\lambda_N} \right|^m \big( T_{N,m-3,\sigma}(\lambda_N) + A(\varepsilon) \big). \tag{2.29}$$
Using the mean value theorem yields that, for some $\zeta \in J_{\varepsilon,N} := [\min(\lambda^*, \lambda_N), \max(\lambda^*, \lambda_N)]$,
$$|(\lambda^* - \lambda_N)\, \Delta'(\zeta)| \le \left| \frac{\sin \theta\lambda_N}{\theta\lambda_N} \right|^m \big( T_{N,m-3,\sigma}(\lambda_N) + A(\varepsilon) \big), \quad \zeta \in J_{\varepsilon,N} \subseteq I_{\varepsilon,N}. \tag{2.30}$$

Since the eigenvalues are simple, for sufficiently large $N$ we have $\inf_{\zeta \in I_{\varepsilon,N}} |\Delta'(\zeta)| > 0$, and we get (2.28). The rest of the proof follows from the fact that $\widetilde{\Delta}_N(\lambda)$ converges uniformly to $\Delta(\lambda)$ on compact subsets of $\mathbb{R}$ and that $A(\varepsilon) \to 0$ as $\varepsilon \to 0$. □

3 The case of $\Gamma(r, \alpha, \beta, 0, \beta')$

This section briefly gives a treatment similar to that of the previous section for the eigenvalue problem $\Gamma(r, \alpha, \beta, 0, \beta')$ introduced in Section 1 above. Notice that condition (1.18) implies that the analysis of problem $\Gamma(r, \alpha, \beta, 0, \beta')$ is not included in that of $\Gamma(r, \alpha, \beta, \alpha', \beta')$. Let $\psi(\cdot, \lambda) = (\psi_1(\cdot, \lambda), \psi_2(\cdot, \lambda))^{\top}$ be the solution of (1.15) satisfying the initial conditions
$$\psi_1(0, \lambda) = \alpha_2, \qquad \psi_2(0, \lambda) = \alpha_1. \tag{3.1}$$
Therefore the eigenvalues of the problem in question are the zeros of the function
$$\Omega(\lambda) := (\beta_1 + \lambda \beta'_1)\, \psi_1(1, \lambda) + (\beta_2 + \lambda \beta'_2)\, \psi_2(1, \lambda). \tag{3.2}$$
Similarly to [22, p. 220], $\psi(\cdot, \lambda)$ satisfies the system of integral equations
$$\psi_1(x, \lambda) = \alpha_2 \cos \lambda x - \alpha_1 \sin \lambda x - T_1 \psi_1(x, \lambda) - \widetilde{T}_2 \psi_2(x, \lambda), \tag{3.3}$$
$$\psi_2(x, \lambda) = \alpha_2 \sin \lambda x + \alpha_1 \cos \lambda x + \widetilde{T}_1 \psi_1(x, \lambda) - T_2 \psi_2(x, \lambda), \tag{3.4}$$
where $T_i$ and $\widetilde{T}_i$, $i = 1, 2$, are the Volterra operators defined in (2.5) above. Define $g_1(\cdot, \lambda)$ and $g_2(\cdot, \lambda)$ to be
$$g_1(x, \lambda) := -T_1 \psi_1(x, \lambda) - \widetilde{T}_2 \psi_2(x, \lambda), \qquad g_2(x, \lambda) := \widetilde{T}_1 \psi_1(x, \lambda) - T_2 \psi_2(x, \lambda). \tag{3.5}$$
As in [12] we split $\Omega(\lambda)$ into
$$\Omega(\lambda) := K(\lambda) + U(\lambda), \tag{3.6}$$
where $K(\lambda)$ is the known part,
$$K(\lambda) := (\beta_1 + \lambda \beta'_1)(\alpha_2 \cos \lambda - \alpha_1 \sin \lambda) + (\beta_2 + \lambda \beta'_2)(\alpha_1 \cos \lambda + \alpha_2 \sin \lambda), \tag{3.7}$$
and $U(\lambda)$ is the unknown one,
$$U(\lambda) := (\beta_1 + \lambda \beta'_1)\, g_1(1, \lambda) + (\beta_2 + \lambda \beta'_2)\, g_2(1, \lambda). \tag{3.8}$$
Then $U(\lambda)$ is an entire function of $\lambda$ satisfying, see [12],
$$|U(\lambda)| \le c_6 (1 + |\lambda|) e^{|\Im \lambda|}, \quad \lambda \in \mathbb{C}. \tag{3.9}$$
Define $R_{\theta,m}(\lambda)$ to be
$$R_{\theta,m}(\lambda) := \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m U(\lambda), \quad \lambda \in \mathbb{C}, \tag{3.10}$$
where $\theta$ is sufficiently small that $|\theta\lambda| < \pi$, and $m$ is as in the previous section, but now with $m \ge 3$. Hence
$$|R_{\theta,m}(\lambda)| \le \frac{c_0^m c_6 (1 + |\lambda|)}{(1 + \theta|\lambda|)^m}\, e^{|\Im \lambda|(1 + m\theta)}, \quad \lambda \in \mathbb{C}, \tag{3.11}$$
and $\lambda^{m-2} R_{\theta,m}(\lambda) \in L^2(\mathbb{R})$ with
$$E_{m-2}(R_{\theta,m}) = \int_{-\infty}^{\infty} |\lambda^{m-2} R_{\theta,m}(\lambda)|^2 \, d\lambda \le c_0^m c_6 \omega_0, \tag{3.12}$$
where
$$\omega_0 := \frac{2\,(3 - 5m + 2m^2 - 3\theta + 2m\theta + \theta^2)}{\theta^{2m-1}\,(-3 + 11m - 12m^2 + 4m^3)}.$$
Thus $R_{\theta,m}(\lambda)$ belongs to the Paley-Wiener space $PW^2_\sigma$ with $\sigma = 1 + m\theta$. Since $R_{\theta,m}(\lambda) \in PW^2_\sigma \subset PW^2_{2\sigma}$, we can reconstruct $R_{\theta,m}(\lambda)$ via the sampling formula
$$R_{\theta,m}(\lambda) = \sum_{n=-\infty}^{\infty} \left[ R_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) S_n^2(\lambda) + R'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma\lambda - n\pi)}{\sigma}\, S_n(\lambda) \right]. \tag{3.13}$$
Let $N \in \mathbb{Z}^+$, $N > m$, and approximate $R_{\theta,m}(\lambda)$ by its truncated series $R_{\theta,m,N}(\lambda)$, where
$$R_{\theta,m,N}(\lambda) := \sum_{n=-N}^{N} \left[ R_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) S_n^2(\lambda) + R'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma\lambda - n\pi)}{\sigma}\, S_n(\lambda) \right]. \tag{3.14}$$
Since all the eigenvalues are real, from now on we restrict ourselves to $\lambda \in \mathbb{R}$. Since $\lambda^{m-2} R_{\theta,m}(\lambda) \in L^2(\mathbb{R})$, the truncation error, cf. (1.5), is given for $|\lambda| < N\pi/\sigma$ by
$$|R_{\theta,m}(\lambda) - R_{\theta,m,N}(\lambda)| \le T_{N,m-2,\sigma}(\lambda), \tag{3.15}$$
where $T_{N,m-2,\sigma}(\lambda)$ is the bound (1.5) with $k = m-2$ and with the constants computed for $R_{\theta,m}$:
$$T_{N,m-2,\sigma}(\lambda) := \frac{\xi_{m-2,\sigma} E_{m-2} |\sin \sigma\lambda|^2}{\sqrt{3}\,(N+1)^{m-2}} \left( \frac{1}{(N\pi - \sigma\lambda)^{3/2}} + \frac{1}{(N\pi + \sigma\lambda)^{3/2}} \right) + \frac{\xi_{m-2,\sigma} \big( \sigma E_{m-2} + (m-2) E_{m-3} \big) |\sin \sigma\lambda|^2}{\sigma (N+1)^{m-2}} \left( \frac{1}{N\pi - \sigma\lambda} + \frac{1}{N\pi + \sigma\lambda} \right). \tag{3.16}$$
The samples $\{R_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{R'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ are not, in general, known explicitly. So we approximate them by solving numerically $8N + 4$ initial value problems at the nodes $\{\frac{n\pi}{\sigma}\}_{n=-N}^{N}$. Let $\{\widetilde{R}_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{\widetilde{R}'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ be the resulting approximations. We now define $\widetilde{R}_{\theta,m,N}(\lambda)$, which approximates $R_{\theta,m,N}(\lambda)$:
$$\widetilde{R}_{\theta,m,N}(\lambda) := \sum_{n=-N}^{N} \left[ \widetilde{R}_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) S_n^2(\lambda) + \widetilde{R}'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \frac{\sin(\sigma\lambda - n\pi)}{\sigma}\, S_n(\lambda) \right]. \tag{3.17}$$
Using standard methods for solving initial value problems, we may assume that for $|n| < N$,
$$\Big| R_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) - \widetilde{R}_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \Big| < \varepsilon, \qquad \Big| R'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) - \widetilde{R}'_{\theta,m}\Big(\frac{n\pi}{\sigma}\Big) \Big| < \varepsilon \tag{3.18}$$
for a sufficiently small $\varepsilon$. From (3.11) we can see that $R_{\theta,m}(\lambda)$ satisfies condition (1.9) when $m \ge 3$, and therefore, whenever $0 < \varepsilon \le \min\{\pi/\sigma, \sigma/\pi, 1/e\}$, we have
$$|R_{\theta,m,N}(\lambda) - \widetilde{R}_{\theta,m,N}(\lambda)| \le A(\varepsilon), \quad \lambda \in \mathbb{R}, \tag{3.19}$$
where there is a positive constant $M_{R_{\theta,m}}$ for which, cf. (1.10),
$$A(\varepsilon) := \frac{2 e^{1/4}}{\sigma} \left\{ 3e(1+\sigma) + \Big( \frac{\pi}{\sigma} A + M_{R_{\theta,m}} \Big) \rho(\varepsilon) + \big( \sigma + 2 + \log 2 \big) M_{R_{\theta,m}} \right\} \varepsilon \log(1/\varepsilon). \tag{3.20}$$
Here
$$A := \frac{3\sigma}{\pi} \Big( |R_{\theta,m}(0)| + \frac{\sigma}{\pi} M_{R_{\theta,m}} \Big), \qquad \rho(\varepsilon) := \gamma + 10 \log(1/\varepsilon).$$

As in the above section, we have the following lemma.

Lemma 3.1 For any eigenvalue $\lambda^*$ of the problem $\Gamma(r, \alpha, \beta, 0, \beta')$, we can find $N_0 \in \mathbb{Z}^+$ and a sufficiently small $\varepsilon$ such that $\lambda^* \in I_{\varepsilon,N}$ for $N > N_0$, where
$$I_{\varepsilon,N} := [b_-(\lambda^*, N, \varepsilon),\, b_+(\lambda^*, N, \varepsilon)],$$
and $b_-$, $b_+$ are the solutions of the inequalities
$$-\left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-2,\sigma}(\lambda) + A(\varepsilon) \big) \le \widetilde{\Omega}_N(\lambda) \le \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-2,\sigma}(\lambda) + A(\varepsilon) \big). \tag{3.21}$$
Moreover,
$$[b_-(\lambda^*, N, \varepsilon),\, b_+(\lambda^*, N, \varepsilon)] \to \{\lambda^*\} \quad \text{as } N \to \infty \text{ and } \varepsilon \to 0. \tag{3.22}$$
Let $\widetilde{\Omega}_N(\lambda) := K(\lambda) + (\frac{\sin\theta\lambda}{\theta\lambda})^m \widetilde{R}_{\theta,m,N}(\lambda)$. Then (3.15) and (3.19) imply
$$|\Omega(\lambda) - \widetilde{\Omega}_N(\lambda)| \le \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-2,\sigma}(\lambda) + A(\varepsilon) \big), \quad |\lambda| < \frac{N\pi}{\sigma}. \tag{3.23}$$
Therefore $\theta$ and $m$ must be chosen so that, for $|\lambda| < \frac{N\pi}{\sigma}$,
$$m \ge 3, \qquad |\theta\lambda| < \pi.$$

Let $\lambda^*$ be an eigenvalue and $\lambda_N$ its approximation. Thus $\Omega(\lambda^*) = 0$ and $\widetilde{\Omega}_N(\lambda_N) = 0$. From (3.23) we have $|\widetilde{\Omega}_N(\lambda^*)| \le |\frac{\sin\theta\lambda^*}{\theta\lambda^*}|^m (T_{N,m-2,\sigma}(\lambda^*) + A(\varepsilon))$. Estimating the error $|\lambda^* - \lambda_N|$ as in the previous section, we obtain the following result.

Theorem 3.2 Let $\lambda^*$ be an eigenvalue of the problem $\Gamma(r, \alpha, \beta, 0, \beta')$. For sufficiently large $N$ we have the following estimate:
$$|\lambda^* - \lambda_N| < \frac{ \left| \frac{\sin \theta\lambda_N}{\theta\lambda_N} \right|^m \big( T_{N,m-2,\sigma}(\lambda_N) + A(\varepsilon) \big) }{ \inf_{\zeta \in I_{\varepsilon,N}} |\Omega'(\zeta)| }. \tag{3.24}$$

Moreover, $|\lambda^* - \lambda_N| \to 0$ when $N \to \infty$ and $\varepsilon \to 0$.

In the following section we take $\theta = 1/(N - m)$, so that $\sigma = 1 + m\theta = N/(N-m)$; this choice keeps the sampled region $|\lambda| < N\pi/\sigma$ away from the first singularity of $(\frac{\sin\theta\lambda_N}{\theta\lambda_N})^{-1}$.

4 Examples

This section includes three detailed worked examples illustrating the above technique, accompanied by comparisons with the sinc method derived in [12]. It is clearly seen that the Hermite interpolation method gives remarkably better results. The first two examples are computed in [12] with the classical sinc method, and there $r_1(x) = r_2(x)$; in the last example, where the eigenvalues cannot be computed in closed form, $r_1(x) \neq r_2(x)$. By $E_S$ and $E_H$ we mean the absolute errors associated with the results of the classical sinc method and of our new method (Hermite interpolations), respectively. We indicate in these examples the effect of the amplitude error on the method by determining enclosure intervals for different values of $\varepsilon$. We also indicate the effect of the parameters $m$ and $\theta$ by several choices. Each example is exhibited via figures that accurately illustrate the procedure near some of the approximated eigenvalues. More explanations are given below. Recall that $a_\pm(\lambda)$ and $b_\pm(\lambda)$ are defined by
$$a_\pm(\lambda) := \pm \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-3,\sigma}(\lambda) + A(\varepsilon) \big), \tag{4.1}$$
$$b_\pm(\lambda) := \pm \left| \frac{\sin \theta\lambda}{\theta\lambda} \right|^m \big( T_{N,m-2,\sigma}(\lambda) + A(\varepsilon) \big), \tag{4.2}$$
respectively. Recall also that the enclosure intervals $I_{\varepsilon,N} := [a_-, a_+]$ and $I_{\varepsilon,N} := [b_-, b_+]$ are determined by solving
$$G(\lambda) + \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m \widetilde{F}_{\theta,m,N}(\lambda) = a_\pm(\lambda), \tag{4.3}$$
$$K(\lambda) + \left( \frac{\sin \theta\lambda}{\theta\lambda} \right)^m \widetilde{R}_{\theta,m,N}(\lambda) = b_\pm(\lambda), \tag{4.4}$$

respectively. We would like to mention that Mathematica has been used to obtain the exact values in the examples where the eigenvalues cannot be computed concretely; it is also used in rounding the exact eigenvalues.

Example 1

The boundary value problem
$$u'_2(x) - x^2 u_1(x) = \lambda u_1(x), \qquad u'_1(x) + x^2 u_2(x) = -\lambda u_2(x), \quad x \in [0,1], \tag{4.5}$$
$$\lambda u_1(0) - u_2(0) = 0, \qquad \lambda u_1(1) + u_2(1) = 0, \tag{4.6}$$
is a special case of the problem $\Gamma(r, \alpha, \beta, \alpha', \beta')$ with $r_1(x) = r_2(x) = x^2$, $\alpha_1 = \alpha'_2 = \beta_1 = \beta'_2 = 0$, $\alpha'_1 = 1$ and $\alpha_2 = \beta'_1 = \beta_2 = 1$. Here the characteristic function is
$$\Delta(\lambda) := 2\lambda \cos\Big( \frac{1}{3} + \lambda \Big) - (\lambda^2 - 1) \sin\Big( \frac{1}{3} + \lambda \Big). \tag{4.7}$$
The function $G(\lambda)$ is
$$G(\lambda) := 2\lambda \cos \lambda + (1 - \lambda^2) \sin \lambda. \tag{4.8}$$
As is clearly seen, the eigenvalues cannot be computed explicitly. Five tables indicate the application of our technique to this problem and the effect of $\varepsilon$, $\theta$ and $m$ (Tables 1, 2, 3, 4 and 5). By exact we mean the zeros of $\Delta(\lambda)$ computed by Mathematica.
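As a cross-check of (4.7), the system (4.5) can be integrated by a standard RK4 scheme and the resulting characteristic function compared with the closed form. The initial values $\varphi_1(0,\lambda) = 1$, $\varphi_2(0,\lambda) = \lambda$ and the combination $\lambda\varphi_1(1,\lambda) + \varphi_2(1,\lambda)$ encode the boundary data of this example as read here; they are assumptions of this sketch rather than formulas quoted from the paper.

```python
import numpy as np

def rhs(x, y, lam):
    # Example 1: r1(x) = r2(x) = x^2, so u1' = -(lam + x^2) u2 and u2' = (lam + x^2) u1
    w = lam + x * x
    return np.array([-w * y[1], w * y[0]])

def phi_at_one(lam, steps=2000):
    """RK4 integration over [0, 1] of the solution with phi1(0) = 1, phi2(0) = lam."""
    y = np.array([1.0, lam])
    h = 1.0 / steps
    for k in range(steps):
        x = k * h
        k1 = rhs(x, y, lam)
        k2 = rhs(x + h / 2, y + h / 2 * k1, lam)
        k3 = rhs(x + h / 2, y + h / 2 * k2, lam)
        k4 = rhs(x + h, y + h * k3, lam)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def Delta_numeric(lam):
    u1, u2 = phi_at_one(lam)
    return lam * u1 + u2

def Delta_exact(lam):
    """Closed form (4.7)."""
    th = lam + 1.0 / 3.0
    return 2 * lam * np.cos(th) - (lam ** 2 - 1) * np.sin(th)

def eigenvalue_between(lo, hi, f=Delta_exact, iters=100):
    """Bisection for a simple zero of f with a sign change on [lo, hi]."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)
```

The numerically integrated characteristic function agrees with the closed form, and bisection on (4.7) reproduces the tabulated eigenvalue $\lambda_0 \approx 1.12232$.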
Table 1: $N = 20$, $m = 10$, $\theta = 1/10$

| $\lambda_k$ | Sinc $\lambda_{k,N}$ | Exact $\lambda_k$ | Hermite $\lambda_{k,N}$ | $E_S$ | $E_H$ |
|---|---|---|---|---|---|
| $\lambda_{-2}$ | −1.505786875767961 | −1.5057868758327264 | −1.5057868758327246 | 6.47653 × 10⁻¹¹ | 1.77636 × 10⁻¹⁵ |
| $\lambda_{-1}$ | −0.11141619186432938 | −0.11141619146375636 | −0.11141619146375908 | 4.00573 × 10⁻¹⁰ | 2.72005 × 10⁻¹⁵ |
| $\lambda_0$ | 1.1223201536675476 | 1.1223201551741047 | 1.1223201551741295 | 1.50656 × 10⁻⁹ | 2.4869 × 10⁻¹⁴ |
| $\lambda_1$ | 3.3830704087110752 | 3.383070408212596 | 3.3830704082125935 | 4.98479 × 10⁻¹⁰ | 2.66454 × 10⁻¹⁵ |

Table 2: $N = 20$, $m = 15$, $\theta = 1/5$

| $\lambda_k$ | Sinc $\lambda_{k,N}$ | Exact $\lambda_k$ | Hermite $\lambda_{k,N}$ |
|---|---|---|---|
| $\lambda_{-2}$ | −1.5057868758327237144550336 | −1.5057868758327218561623117 | −1.5057868758327237144491654 |
| $\lambda_{-1}$ | −0.1114161914637569327965574 | −0.1114161914637563667829627 | −0.1114161914637563668056111 |
| $\lambda_0$ | 1.1223201551741129577354075 | 1.1223201551741041543767735 | 1.1223201551741041544693398 |
| $\lambda_1$ | 3.3830704082126126090125379 | 3.3830704082125963004202471 | 3.3830704082125963003644934 |

Table 3: Absolute error $|\lambda_k - \lambda_{k,N}|$ for $N = 20$, $m = 15$, $\theta = 1/5$

| | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_0$ | $\lambda_1$ |
|---|---|---|---|---|
| $E_S$ | 1.858 × 10⁻¹⁵ | 5.660 × 10⁻¹⁶ | 8.803 × 10⁻¹⁵ | 1.630 × 10⁻¹⁴ |
| $E_H$ | 5.868 × 10⁻²¹ | 2.265 × 10⁻²⁰ | 9.257 × 10⁻²⁰ | 5.575 × 10⁻²⁰ |

Table 4: For $N = 20$, $m = 10$ and $\theta = 1/10$, the exact eigenvalues $\lambda_k$ all lie inside the intervals $[a_-, a_+]$ for the different values of $\varepsilon$

| $\lambda_k$ | Exact $\lambda_k$ | $[a_-, a_+]$, $\varepsilon = 10^{-10}$ | $[a_-, a_+]$, $\varepsilon = 10^{-15}$ |
|---|---|---|---|
| $\lambda_{-2}$ | −1.5057868758327264 | [−1.650349, −1.220683] | [−1.508403, −1.502664] |
| $\lambda_{-1}$ | −0.11141619146375636 | [−0.179803, −0.071447] | [−0.130019, −0.100199] |
| $\lambda_0$ | 1.1223201551741047 | [0.429491, 1.314588] | [0.884579, 1.206467] |
| $\lambda_1$ | 3.383070408212596 | [3.314309, 3.464923] | [3.369349, 3.400197] |

Here $E_7(F_{\theta,m}) = 3.05294 \times 10^{11}$, $E_6(F_{\theta,m}) = 2.53419 \times 10^{9}$, $\omega = 1$, $M_{F_{\theta,m}} = 3.56048 \times 10^{6}$.

Table 5: With $N = 20$, $m = 15$ and $\theta = 1/5$, the eigenvalues $\lambda_k$ all lie inside the intervals $[a_-, a_+]$ for the different values of $\varepsilon$

| $\lambda_k$ | Exact $\lambda_k$ | $[a_-, a_+]$, $\varepsilon = 10^{-10}$ | $[a_-, a_+]$, $\varepsilon = 10^{-15}$ |
|---|---|---|---|
| $\lambda_{-2}$ | −1.5057868758327218561623117 | [−1.652755, −1.334613] | [−1.505894, −1.505678] |
| $\lambda_{-1}$ | −0.1114161914637563667829627 | [−0.331996, 0.121731] | [−0.111834, −0.111003] |
| $\lambda_0$ | 1.1223201551741041543767735 | [0.923906, 1.285003] | [1.120633, 1.124014] |
| $\lambda_1$ | 3.3830704082125963004202471 | [3.241846, 3.533914] | [3.382059, 3.384093] |

Here $E_{12}(F_{\theta,m}) = 1.61064 \times 10^{13}$, $E_{11}(F_{\theta,m}) = 1.71043 \times 10^{11}$, $\omega = 1$, $M_{F_{\theta,m}} = 3.98665 \times 10^{6}$.

Figures 1 and 2 illustrate the comparison between $\Delta(\lambda)$ and $\widetilde{\Delta}_N(\lambda)$ for different values of $m$ and $\theta$. Figures 3 and 4, for $N = 20$, $m = 10$ and $\theta = 1/10$, illustrate the enclosure intervals for $\varepsilon = 10^{-10}$ and $\varepsilon = 10^{-15}$, respectively. Figures 5 and 6 illustrate the enclosure intervals for $\varepsilon = 10^{-10}$ and $\varepsilon = 10^{-15}$, respectively, but for $m = 15$, $\theta = 1/5$.

Figure 1: $\Delta(\lambda)$, $\widetilde{\Delta}_N(\lambda)$ with $N = 20$, $m = 10$ and $\theta = 1/10$.

Figure 2: $\Delta(\lambda)$, $\widetilde{\Delta}_N(\lambda)$ with $N = 20$, $m = 15$ and $\theta = 1/5$.

Figure 3: $a_+$, $\Delta(\lambda)$, $a_-$ with $N = 20$, $m = 10$, $\theta = 1/10$ and $\varepsilon = 10^{-10}$.

Figure 4: $a_+$, $\Delta(\lambda)$, $a_-$ with $N = 20$, $m = 10$, $\theta = 1/10$ and $\varepsilon = 10^{-15}$.

Figure 5: $a_+$, $\Delta(\lambda)$, $a_-$ with $N = 20$, $m = 15$, $\theta = 1/5$ and $\varepsilon = 10^{-10}$.

Figure 6: $a_+$, $\Delta(\lambda)$, $a_-$ with $N = 20$, $m = 15$, $\theta = 1/5$ and $\varepsilon = 10^{-15}$.

Example 2

The Dirac system
$$u'_2(x) - x^2 u_1(x) = \lambda u_1(x), \qquad u'_1(x) + x^2 u_2(x) = -\lambda u_2(x), \quad x \in [0,1], \tag{4.9}$$
$$u_1(0) = 0, \qquad \lambda u_1(1) + u_2(1) = 0, \tag{4.10}$$
is a special case of the problem treated in the previous section with $r_1(x) = r_2(x) = x^2$, $\alpha_1 = \beta'_1 = 1$, $\alpha_2 = \beta_1 = \beta'_2 = 0$ and $\beta_2 = 1$. The characteristic function is
$$\Omega(\lambda) := \cos\Big( \frac{1}{3} + \lambda \Big) - \lambda \sin\Big( \frac{1}{3} + \lambda \Big). \tag{4.11}$$
The function $K(\lambda)$ is
$$K(\lambda) := \cos \lambda - \lambda \sin \lambda. \tag{4.12}$$
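The zeros of (4.11) can be checked directly by bisection; the bracketing intervals used below were chosen by inspecting the sign changes of $\Omega$ and are assumptions of this sketch.

```python
import math

def Omega(lam):
    """Characteristic function (4.11) of Example 2."""
    th = 1.0 / 3.0 + lam
    return math.cos(th) - lam * math.sin(th)

def bisect_root(f, lo, hi, iters=100):
    """Bisection for a simple zero of f with a sign change on [lo, hi]."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)
```

Bisection on $[0, 1]$ and $[2.5, 3.5]$ reproduces the tabulated eigenvalues $\lambda_0 \approx 0.65652$ and $\lambda_1 \approx 3.11856$ of Table 6.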
As in the previous example, Figures 7, 8, 9, 10, 11 and 12 illustrate the results of Tables 6, 7, 8, 9 and 10. Figures 7 and 8 illustrate the comparison between Ω ( λ ) and Ω ˜ N ( λ ) for different values of m and θ. Figures 9 and 10, for N = 20 , m = 6 and θ = 1 / 14 , illustrate the enclosure intervals for ε = 10 10 and ε = 10 15 , respectively. Also, Figures 11 and 12 illustrate the enclosure intervals for ε = 10 10 and ε = 10 15 , respectively, but for m = 12 , θ = 1 / 8 .
Figure 7. Ω(λ), Ω̃_N(λ) with N = 20, m = 6 and θ = 1/14.

Figure 8. Ω(λ), Ω̃_N(λ) with N = 20, m = 12 and θ = 1/8.

Figure 9. b_+, Ω(λ), b_- with N = 20, m = 6, θ = 1/14 and ε = 10^{-10}.

Figure 10. b_+, Ω(λ), b_- with N = 20, m = 6, θ = 1/14 and ε = 10^{-15}.

Figure 11. b_+, Ω(λ), b_- with N = 20, m = 12, θ = 1/8 and ε = 10^{-10}.

Figure 12. b_+, Ω(λ), b_- with N = 20, m = 12, θ = 1/8 and ε = 10^{-15}.

Table 6. N = 20, m = 6, θ = 1/14

| λ_k | Sinc λ_{k,N} | Exact λ_k | Hermite λ_{k,N} | E_S | E_H |
|---|---|---|---|---|---|
| λ_{-2} | −3.7364320716761927 | −3.736432198331617 | −3.7364321983463715 | 1.26655 × 10^{-7} | 1.47544 × 10^{-11} |
| λ_{-1} | −1.0801974353048152 | −1.0801976714797825 | −1.0801976714531203 | 2.36175 × 10^{-7} | 2.66622 × 10^{-11} |
| λ_0 | 0.6565189567613093 | 0.6565187872152198 | 0.6565187872029187 | 1.69546 × 10^{-7} | 1.23012 × 10^{-11} |
| λ_1 | 3.118561532798614 | 3.1185614501648167 | 3.1185614501681216 | 4.98479 × 10^{-8} | 3.30491 × 10^{-12} |

Table 7. N = 20, m = 12, θ = 1/8

| λ_k | Sinc λ_{k,N} | Exact λ_k | Hermite λ_{k,N} |
|---|---|---|---|
| λ_{-2} | −3.736432198332202082929465 | −3.736432198331617091212013 | −3.736432198331617091189782 |
| λ_{-1} | −1.080197671476027921290673 | −1.080197671479782493157863 | −1.080197671479782493947136 |
| λ_0 | 0.6565187872242083579354743 | 0.6565187872152199183983102 | 0.6565187872152199230592640 |
| λ_1 | 3.118561450158043898832776 | 3.118561450164816849643922 | 3.118561450164816845810261 |

Table 8. Absolute error |λ_k − λ_{k,N}| for N = 20, m = 12, θ = 1/8

| | λ_{-2} | λ_{-1} | λ_0 | λ_1 |
|---|---|---|---|---|
| E_S | 5.849 × 10^{-13} | 3.755 × 10^{-12} | 8.988 × 10^{-12} | 6.773 × 10^{-12} |
| E_H | 2.223 × 10^{-20} | 7.893 × 10^{-19} | 4.661 × 10^{-18} | 3.834 × 10^{-18} |

Table 9. For N = 20, m = 6 and θ = 1/14, the exact solutions λ_k are all inside the interval [b_-, b_+] for different values of ε

| λ_k | Exact λ_k | [b_-, b_+], ε = 10^{-10} | [b_-, b_+], ε = 10^{-15} |
|---|---|---|---|
| λ_{-2} | −3.736432198331617091212013 | [−3.881037, −3.476447] | [−3.836682, −3.557513] |
| λ_{-1} | −1.080197671479782493157863 | [−1.435432, −0.665868] | [−1.365324, −0.760935] |
| λ_0 | 0.6565187872152199183983102 | [0.410872, 1.116247] | [0.492155, 1.004381] |
| λ_1 | 3.118561450164816849643922 | [2.884061, 3.390359] | [2.940901, 3.331955] |

E_4(R_{θ,m}) = 2.9056 × 10^{-7}, E_3(R_{θ,m}) = 2.29859 × 10^{-6}, ω = 1, M_{R_{θ,m}} = 98845.4.
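The enclosure intervals [b_-, b_+] reported in Tables 9 and 10 arise from the computable error bound on the approximate characteristic function: the exact zero must lie where the approximation is within the bound of zero. A minimal sketch of the idea (using the exact Ω(λ) of (4.11) together with a hypothetical uniform bound delta = 0.1 in place of the paper's computable bound) brackets λ_0 as follows.

```python
import math

def omega(lam):
    # Characteristic function (4.11).
    return math.cos(1.0 / 3.0 + lam) - lam * math.sin(1.0 / 3.0 + lam)

def bisect(g, a, b, tol=1e-12):
    # Plain bisection; requires g(a) and g(b) to have opposite signs.
    ga = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        gm = g(m)
        if ga * gm <= 0.0:
            b = m
        else:
            a, ga = m, gm
    return 0.5 * (a + b)

delta = 0.1  # hypothetical uniform bound on |Omega - approximation|
# Omega is strictly decreasing on [0.5, 1.0], so the points where it crosses
# +delta and -delta bracket its zero lambda_0 from below and above.
b_minus = bisect(lambda x: omega(x) - delta, 0.5, 1.0)
b_plus = bisect(lambda x: omega(x) + delta, 0.5, 1.0)
print(b_minus, b_plus)  # an enclosure containing lambda_0 = 0.65651878...
```

Shrinking delta (in the paper, by increasing N and m) shrinks the enclosure, which is exactly the effect visible when passing from ε = 10^{-10} to ε = 10^{-15} in the tables.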

Table 10. With N = 20, m = 12 and θ = 1/8, the λ_k are all inside the interval [b_-, b_+] for different values of ε

| λ_k | Exact λ_k | [b_-, b_+], ε = 10^{-10} | [b_-, b_+], ε = 10^{-15} |
|---|---|---|---|
| λ_{-2} | −3.736432198331617 | [−4.1011429, −3.3717065] | [−3.7364598, −3.7364045] |
| λ_{-1} | −1.0801976714797825 | [−1.5078873, −0.4433678] | [−1.0808585, −1.07952734] |
| λ_0 | 0.6565187872152198 | [0.0168549, 1.1086918] | [0.6528005, 0.6602210] |
| λ_1 | 3.1185614501648167 | [2.7401391, 3.1185614] | [3.1157222, 3.1214041] |

E_10(R_{θ,m}) = 6.2724 × 10^{-12}, E_9(R_{θ,m}) = 8.21004 × 10^{-11}, ω = 1, M_{R_{θ,m}} = 501421.

Example 3

The boundary value problem (4.13) with the boundary conditions (4.14) is a special case of the problem Γ(r, α, β, 0, β′) when r_1(x) = x, r_2(x) = 1, α_2 = β′_1 = β′_2 = 1 and α_1 = β_1 = β_2 = 0. Here the characteristic function is
Ω(λ) := [AiryAiPrime[λ(1−λ)^{1/3}] AiryBi[λ(1−λ)^{1/3}] − AiryAi[λ(1−λ)^{1/3}] AiryBiPrime[λ(1−λ)^{1/3}]]^{−1}
× {λ(1−λ)^{2/3} AiryAi[(1+λ)(1−λ)^{1/3}] AiryBi[λ(1−λ)^{1/3}] + AiryAiPrime[(λ+1)(1−λ)^{1/3}] AiryBi[λ(1−λ)^{1/3}]
− AiryAiPrime[λ(1−λ)^{1/3}] (λ(1−λ)^{2/3} AiryBi[(λ+1)(1−λ)^{1/3}] + AiryBiPrime[(λ+1)(1−λ)^{1/3}])},
(4.15)
where AiryAi[z] and AiryBi[z] denote the Airy functions Ai(z) and Bi(z), respectively, and AiryAiPrime[z] and AiryBiPrime[z] denote their derivatives. The function K(λ) will be
K(λ) := cos λ − λ sin λ.
(4.16)
Figures 13 and 14 and Tables 11 and 12 illustrate the application of the method to this problem.
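For readers reproducing (4.15) outside Mathematica: assuming SciPy is available, scipy.special.airy evaluates Ai, Ai′, Bi and Bi′ in a single call, and the Wronskian identity Ai(z)Bi′(z) − Ai′(z)Bi(z) = 1/π gives a quick sanity check on the four returned values. This snippet is a usage illustration only, not part of the paper's computation.

```python
import math
from scipy.special import airy  # returns the tuple (Ai, Ai', Bi, Bi')

z = 0.7
ai, aip, bi, bip = airy(z)

# The Wronskian of Ai and Bi is constant: Ai(z)*Bi'(z) - Ai'(z)*Bi(z) = 1/pi.
wronskian = ai * bip - aip * bi
print(wronskian, 1.0 / math.pi)
```

The same four values are exactly what the Mathematica names AiryAi, AiryAiPrime, AiryBi and AiryBiPrime in (4.15) denote, so the characteristic function can be assembled termwise from them.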
Figure 13. b_+, Ω(λ), b_- with N = 20, m = 16, θ = 1/4 and ε = 10^{-12}.

Figure 14. b_+, Ω(λ), b_- with N = 20, m = 16, θ = 1/4 and ε = 10^{-15}.

Table 11. N = 20, m = 16, θ = 1/4

| λ_k | Exact λ_k | λ_{k,N} | E_H |
|---|---|---|---|
| λ_{-2} | −3.1976270593385675784857858037 | −3.1976270593385675784857498452 | 3.596 × 10^{-23} |
| λ_{-1} | −0.64351783872891518984316280760 | −0.64351783872891518984316309998 | 2.924 × 10^{-25} |
| λ_0 | 1.4487204290456776077365351429 | 1.4487204290456776077365176362 | 1.751 × 10^{-23} |
| λ_1 | 3.8015200831700579923508826075 | 3.8015200831700579923509045951 | 2.199 × 10^{-23} |

Table 12. With N = 20, m = 16 and θ = 1/4, the λ_k are all inside the interval [b_-, b_+] for different values of ε

| λ_k | Exact λ_k | [b_-, b_+], ε = 10^{-10} | [b_-, b_+], ε = 10^{-15} |
|---|---|---|---|
| λ_{-2} | −3.1976270593385675784857858037 | [−3.30255437, −3.11013060] | [−3.19791869, −3.19733846] |
| λ_{-1} | −0.64351783872891518984316280760 | [−0.67219637, −0.61489406] | [−0.64356572, −0.64346999] |
| λ_0 | 1.4487204290456776077365351429 | [1.40795687, 1.49107473] | [1.44812338, 1.44932224] |
| λ_1 | 3.8015200831700579923508826075 | [3.60636554, 4.19453907] | [3.80103975, 3.80200804] |

E_14(R_{θ,m}) = 2.16956 × 10^{-13}, E_13(R_{θ,m}) = 5.61116 × 10^{-12}, ω = 1, M_{R_{θ,m}} = 3.15557 × 10^6.

Declarations

Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The author, therefore, acknowledges with thanks DSR technical and financial support.

Authors’ Affiliations

(1)
Department of Mathematics, Faculty of Science, King Abdulaziz University
(2)
Department of Mathematics, Faculty of Science, Beni-Suef University

References

1. Grozev GR, Rahman QI: Reconstruction of entire functions from irregularly spaced sample points. Can. J. Math. 1996, 48: 777-793. doi:10.4153/CJM-1996-040-7
2. Higgins JR, Schmeisser G, Voss JJ: The sampling theorem and several equivalent results in analysis. J. Comput. Anal. Appl. 2000, 2: 333-371.
3. Hinsen G: Irregular sampling of bandlimited L^p-functions. J. Approx. Theory 1993, 72: 346-364. doi:10.1006/jath.1993.1027
4. Jagerman D, Fogel L: Some general aspects of the sampling theorem. IRE Trans. Inf. Theory 1956, 2: 139-146. doi:10.1109/TIT.1956.1056821
5. Annaby MH, Asharabi RM: Error analysis associated with uniform Hermite interpolations of bandlimited functions. J. Korean Math. Soc. 2010, 47: 1299-1316. doi:10.4134/JKMS.2010.47.6.1299
6. Higgins JR: Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press, Oxford; 1996.
7. Butzer PL, Schmeisser G, Stens RL: An introduction to sampling analysis. In Nonuniform Sampling: Theory and Practice. Edited by: Marvasti F. Kluwer Academic, New York; 2001: 17-121.
8. Butzer PL, Higgins JR, Stens RL: Sampling theory of signal analysis. In Development of Mathematics 1950-2000. Birkhäuser, Basel; 2000: 193-234.
9. Annaby MH, Asharabi RM: On sinc-based method in computing eigenvalues of boundary-value problems. SIAM J. Numer. Anal. 2008, 46: 671-690. doi:10.1137/060664653
10. Annaby MH, Tharwat MM: On the computation of the eigenvalues of Dirac systems. Calcolo 2012, 49: 221-240. doi:10.1007/s10092-011-0052-y
11. Annaby MH, Tharwat MM: On computing eigenvalues of second-order linear pencils. IMA J. Numer. Anal. 2007, 27: 366-380.
12. Annaby MH, Tharwat MM: Sinc-based computations of eigenvalues of Dirac systems. BIT Numer. Math. 2007, 47: 699-713. doi:10.1007/s10543-007-0154-8
13. Boumenir A, Chanane B: Eigenvalues of S-L systems using sampling theory. Appl. Anal. 1996, 62: 323-334. doi:10.1080/00036819608840486
14. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Sturm-Liouville problems with parameter dependent boundary conditions using sinc method. Numer. Algorithms 2012. doi:10.1007/s11075-012-9609-3
15. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Dirac system using sinc method with error analysis. Int. J. Comput. Math. 2012, 89: 2061-2080. doi:10.1080/00207160.2012.700112
16. Lund J, Bowers K: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia; 1992.
17. Stenger F: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 1981, 23: 156-224.
18. Stenger F: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York; 1993.
19. Butzer PL, Splettstösser W, Stens RL: The sampling theorem and linear prediction in signal analysis. Jahresber. Dtsch. Math.-Ver. 1988, 90: 1-70.
20. Jagerman D: Bounds for truncation error of the sampling expansion. SIAM J. Appl. Math. 1966, 14: 714-723. doi:10.1137/0114060
21. Boas RP: Entire Functions. Academic Press, New York; 1954.
22. Levitan BM, Sargsjan IS: Introduction to Spectral Theory: Self-Adjoint Ordinary Differential Operators. Translations of Mathematical Monographs 39. Am. Math. Soc., Providence; 1975.
23. Levitan BM, Sargsjan IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht; 1991.
24. Annaby MH, Tharwat MM: The Hermite interpolation approach for computing eigenvalues of Dirac systems. Math. Comput. Model. 2012. doi:10.1016/j.mcm.2012.07.025
25. Kerimov NB: A boundary value problem for the Dirac system with a spectral parameter in the boundary conditions. Differ. Equ. 2002, 38: 164-174. doi:10.1023/A:1015368926127
26. Annaby MH, Tharwat MM: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. 2011, 36: 291-317. doi:10.1007/s12190-010-0404-9
27. Boumenir A: Higher approximation of eigenvalues by the sampling method. BIT Numer. Math. 2000, 40: 215-225. doi:10.1023/A:1022334806027
28. Tharwat MM, Bhrawy AH: Computation of eigenvalues of discontinuous Dirac system using Hermite interpolation technique. Adv. Differ. Equ. 2012. doi:10.1186/1687-1847-2012-59

Copyright

© Tharwat; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.