Research | Open Access

Approximation of eigenvalues of boundary value problems

Boundary Value Problems 2014, 2014:51

https://doi.org/10.1186/1687-2770-2014-51

  • Received: 18 December 2013
  • Accepted: 27 February 2014

Abstract

In the present paper we apply a sinc-Gaussian technique to compute approximate values of the eigenvalues of discontinuous Dirac systems that contain an eigenvalue parameter in one boundary condition, with transmission conditions at the point of discontinuity. The error of this method decays exponentially in the number of involved samples, so the accuracy of the new technique is higher than that of the classical sinc-method. Worked numerical examples, with tables and illustrative figures, are given at the end of the paper and show that the method yields better results.

MSC: 34L16, 94A20, 65L15.

Keywords

  • sinc-Gaussian
  • sinc-method
  • Dirac systems
  • transmission conditions
  • discontinuous boundary value problems
  • truncation and amplitude errors

1 Introduction

Consider the discontinuous Dirac system consisting of the system of differential equations
$$\begin{pmatrix} y_2'(x) - r_1(x)\,y_1(x) \\ -y_1'(x) + r_2(x)\,y_2(x) \end{pmatrix} = \begin{pmatrix} \lambda y_1(x) \\ \lambda y_2(x) \end{pmatrix}, \quad x \in [-1,0)\cup(0,1],$$
(1.1)
with boundary conditions
$$U_1(y) := \sin\alpha\, y_1(-1) - \cos\alpha\, y_2(-1) = 0,$$
(1.2)
$$U_2(y) := (a_1 + \lambda\sin\beta)\, y_1(1) - (a_2 + \lambda\cos\beta)\, y_2(1) = 0$$
(1.3)
and transmission conditions
$$U_3(y) := y_1(0^-) - \delta\, y_1(0^+) = 0,$$
(1.4)
$$U_4(y) := y_2(0^-) - \delta\, y_2(0^+) = 0,$$
(1.5)

where $\lambda \in \mathbb{C}$; $y = \binom{y_1}{y_2}$; the real-valued functions $r_1(\cdot)$ and $r_2(\cdot)$ are continuous on $[-1,0)$ and $(0,1]$ and have the finite limits $r_1(0^\pm) := \lim_{x\to 0^\pm} r_1(x)$, $r_2(0^\pm) := \lim_{x\to 0^\pm} r_2(x)$; $a_1, a_2, \delta \in \mathbb{R}$; $\alpha, \beta \in [0,\pi)$; $\delta \neq 0$; and $\rho := a_1\cos\beta - a_2\sin\beta > 0$. The aim of the present work is to compute the eigenvalues of (1.1)-(1.5) numerically by the sinc-Gaussian technique, with an analysis of the truncation and amplitude errors.

Sampling theory is one of the most important mathematical tools used in communication engineering since it enables engineers to reconstruct signals from some of their sampled data. A fundamental result in information theory is the Whittaker-Kotel’nikov-Shannon (WKS) sampling theorem [1-3]. It states that any $f \in B_\sigma^2$, $\sigma > 0$, where
$$B_\sigma^2 := \Bigl\{ f : f \text{ entire},\ |f(\mu)| \le C e^{\sigma|\mu|},\ \int_{\mathbb{R}} |f(\mu)|^2\,d\mu < \infty \Bigr\},$$
can be reconstructed from its sampled values $\{f(n\pi/\sigma) : n \in \mathbb{Z}\}$ by the formula
$$f(\lambda) = \sum_{n\in\mathbb{Z}} f(n\pi/\sigma)\,\operatorname{sinc}(\sigma\lambda - n\pi), \quad \lambda \in \mathbb{C},$$
(1.6)
where
$$\operatorname{sinc}(\lambda) := \begin{cases} \dfrac{\sin\lambda}{\lambda}, & \lambda \neq 0, \\ 1, & \lambda = 0. \end{cases}$$
(1.7)
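As a numerical aside (not part of the original paper), the truncated cardinal series (1.6) is easy to try out; the sketch below, assuming numpy, reconstructs a shifted sinc function, which lies in $B_\pi^2$, from its integer samples. The slow $O(|\lambda|^{-1})$ decay of the sinc kernel discussed next is what limits the accuracy of such truncations.

```python
import numpy as np

# Illustration of the truncated WKS reconstruction (1.6) with sigma = pi,
# so the sample points n*pi/sigma are the integers. The function f below
# is entire of exponential type pi and square integrable on R, hence in
# the Paley-Wiener space B_pi^2. (np.sinc(x) = sin(pi x)/(pi x).)
a = 0.5                       # arbitrary shift, keeps the samples nontrivial

def f(t):
    return np.sinc(t - a)

def wks_truncated(lam, N):
    """Truncated cardinal series: sum_{|n|<=N} f(n) sinc(pi*lam - n*pi)."""
    n = np.arange(-N, N + 1)
    return np.sum(f(n) * np.sinc(lam - n))

lam = 0.25
err = abs(wks_truncated(lam, 500) - f(lam))
print(err)   # truncation error decays only algebraically in N
```

Even with 1001 samples the error is only of order $10^{-4}$, which is the slow decay the Gaussian multiplier below is designed to cure.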
Series (1.6) converges absolutely and uniformly on compact subsets of $\mathbb{C}$, and uniformly on $\mathbb{R}$; cf. [4]. Expansion (1.6) is used in several approximation problems which are known as sinc-methods; see, e.g., [5-8]. In particular, the sinc-method is used to approximate eigenvalues of boundary value problems; see, for example, [9-12]. The sinc-method has a slow rate of decay at infinity, as slow as $O(|\lambda|^{-1})$. There have been several attempts to improve the rate of decay. One of the interesting ways is to multiply the sinc-function in (1.6) by a kernel function; see, e.g., [13-15]. Let $h \in (0, \pi/\sigma]$ and $\gamma \in (0, \pi - h\sigma)$. Assume that $\Phi \in B_\gamma^2$ with $\Phi(0) = 1$; then for $g \in B_\sigma^2$ we have the expansion [16]
$$g(\lambda) = \sum_{n=-\infty}^{\infty} g(nh)\,\operatorname{sinc}(h^{-1}\pi\lambda - n\pi)\,\Phi(h^{-1}\lambda - n).$$
(1.8)
The speed of convergence of the series in (1.8) is determined by the decay of $|\Phi(\lambda)|$. But the decay of an entire function of exponential type cannot be as fast as $e^{-c|\lambda|}$ as $|\lambda| \to \infty$, for some positive $c$ [16]. In [17], Qian introduced the following regularized sampling formula. For $h \in (0, \pi/\sigma]$, $N \in \mathbb{N}$ and $r > 0$, Qian defined the operator [17]
$$(G_{h,N} g)(\lambda) = \sum_{n\in\mathbb{Z}_N(\lambda)} g(nh)\,\operatorname{sinc}(h^{-1}\pi\lambda - n\pi)\, G\Bigl(\frac{\lambda - nh}{\sqrt{2r}\,h}\Bigr), \quad \lambda \in \mathbb{R},$$
(1.9)
where $G(t) := \exp(-t^2)$ is the Gaussian function, $\mathbb{Z}_N(x) := \{ n \in \mathbb{Z} : |[h^{-1}x] - n| \le N \}$, and $[x]$ denotes the integer part of $x \in \mathbb{R}$; see also [18, 19]. Qian also derived the following error bound. If $g \in B_\sigma^2$, $h \in (0, \pi/\sigma]$ and $a := \min\{ r(\pi - h\sigma),\ (N-2)/r \} \ge 1$, then [17, 18]
$$|g(\lambda) - (G_{h,N} g)(\lambda)| \le \frac{2\sqrt{\sigma\pi}\,\|g\|_2}{\pi^2 a^2}\bigl(2\sqrt{\pi}\,a + e^{3/2} r^2\bigr)\, e^{-a^2/2}, \quad \lambda \in \mathbb{R}.$$
(1.10)
In [16] Schmeisser and Stenger extended the operator (1.9) to the complex domain $\mathbb{C}$. For $\sigma > 0$, $h \in (0, \pi/\sigma]$ and $\omega := (\pi - h\sigma)/2$, they defined the operator [16]
$$(G_{h,N} g)(\lambda) := \sum_{n\in\mathbb{Z}_N(\lambda)} g(nh)\,\operatorname{sinc}(h^{-1}\pi\lambda - n\pi)\, G\bigl(\sqrt{\omega/N}\, h^{-1}(\lambda - nh)\bigr),$$
(1.11)
where $\mathbb{Z}_N(\lambda) := \{ n \in \mathbb{Z} : |[h^{-1}\Re\lambda + 1/2] - n| \le N \}$ and $N \in \mathbb{N}$. Note that the summation limits in (1.11) depend on the real part of λ. Schmeisser and Stenger [16] proved that if g is an entire function such that
$$|g(\xi + i\eta)| \le \phi(|\xi|)\, e^{\sigma|\eta|}, \quad \xi, \eta \in \mathbb{R},$$
(1.12)
where $\phi$ is a non-decreasing, non-negative function on $[0,\infty)$ and $\sigma \ge 0$, then for $h \in (0, \pi/\sigma)$, $\omega := (\pi - h\sigma)/2$, $N \in \mathbb{N}$ and $|\Re\lambda| < Nh$, we have
$$|g(\lambda) - (G_{h,N} g)(\lambda)| \le 2\,|\sin(h^{-1}\pi\lambda)|\,\phi\bigl(|\Re\lambda| + h(N+1)\bigr)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_N(h^{-1}\Im\lambda), \quad \lambda \in \mathbb{C},$$
(1.13)
where
$$\beta_N(t) := \cosh(2\omega t) + \frac{2\, e^{\omega t^2/N}}{\sqrt{\pi\omega N}\,\bigl[1 - (t/N)^2\bigr]} + \frac{1}{2}\Bigl[\frac{e^{2\omega t}}{e^{2\pi(N-t)} - 1} + \frac{e^{-2\omega t}}{e^{2\pi(N+t)} - 1}\Bigr].$$
(1.14)
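For real λ the operator (1.11) is straightforward to implement; the sketch below (an added illustration, assuming numpy, with the Gaussian weight written as $\exp(-\omega(\lambda-nh)^2/(Nh^2))$, our reading of the kernel in (1.11)) checks the exponential accuracy promised by (1.13) on $g(\lambda) = \cos(2\lambda)$, which satisfies (1.12) with $\sigma = 2$ and $\phi \equiv 1$.

```python
import numpy as np

# Sketch of the sinc-Gaussian operator (1.11) restricted to real lambda.
sigma, h, N = 2.0, 0.5, 20
omega = (np.pi - h * sigma) / 2.0        # omega = (pi - h*sigma)/2

def g(t):
    return np.cos(2.0 * t)               # entire of type 2, |g| <= e^{2|Im|}

def sinc_gauss(lam):
    n0 = int(np.floor(lam / h + 0.5))    # [h^{-1} lam + 1/2]
    n = np.arange(n0 - N, n0 + N + 1)    # Z_N(lam)
    weight = np.exp(-omega * (lam - n * h) ** 2 / (N * h ** 2))
    return np.sum(g(n * h) * np.sinc(lam / h - n) * weight)

lam = 0.9
err = abs(sinc_gauss(lam) - g(lam))
print(err)   # should decay roughly like exp(-omega*N), here ~ e^{-21.4}
```

With only $N = 20$ the error is already near machine-level, in line with the $e^{-\omega N}$ factor in (1.13); a plain truncated sinc sum of the same length would be far less accurate.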
The amplitude error arises when the exact samples $g(nh)$ in (1.11) are replaced by approximations $\tilde g(nh)$. We assume that the $\tilde g(nh)$ are close to $g(nh)$, i.e., there is a sufficiently small $\varepsilon > 0$ such that
$$\sup_{n\in\mathbb{Z}_N(\lambda)} |g(nh) - \tilde g(nh)| < \varepsilon.$$
(1.15)
Let $h \in (0, \pi/\sigma)$, $\omega := (\pi - h\sigma)/2$ and $N \in \mathbb{N}$ be fixed. The authors in [20] proved that if (1.15) holds, then for $|\Re\lambda| < Nh$ we have
$$|(G_{h,N} g)(\lambda) - (G_{h,N} \tilde g)(\lambda)| \le A_{\varepsilon,N}(\lambda),$$
(1.16)
where
$$A_{\varepsilon,N}(\lambda) := 2\varepsilon\, e^{\omega/4}\sqrt{N}\,\Bigl(1 + \sqrt{N/(\pi\omega)}\Bigr)\exp\bigl((\omega + \pi)\, h^{-1}|\Im\lambda|\bigr).$$
(1.17)
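The amplitude bound can be checked empirically. The sketch below (added here, assuming numpy, and using our reading of the constant (1.17); for real λ the exponential factor equals 1) perturbs the samples by at most ε and compares the spread of the two sinc-Gaussian sums with $A_{\varepsilon,N}(0)$.

```python
import numpy as np

# Numerical check of the amplitude-error bound (1.16)-(1.17): perturb the
# samples of g by at most eps and compare the two sinc-Gaussian sums.
rng = np.random.default_rng(0)
sigma, h, N, eps = 2.0, 0.5, 20, 1e-4
omega = (np.pi - h * sigma) / 2.0

def g(t):
    return np.cos(2.0 * t)

def sinc_gauss(lam, samples):
    n0 = int(np.floor(lam / h + 0.5))
    n = np.arange(n0 - N, n0 + N + 1)
    weight = np.exp(-omega * (lam - n * h) ** 2 / (N * h ** 2))
    return np.sum(samples(n * h) * np.sinc(lam / h - n) * weight)

def g_tilde(t):                  # noisy samples with |g - g_tilde| < eps
    return g(t) + rng.uniform(-eps, eps, size=np.shape(t))

lam = 0.9
diff = abs(sinc_gauss(lam, g) - sinc_gauss(lam, g_tilde))
A = 2.0 * eps * np.exp(omega / 4.0) * np.sqrt(N) * (1.0 + np.sqrt(N / (np.pi * omega)))
print(diff, A)   # the observed spread sits well below the bound A
```

In practice the observed deviation is of order ε itself, comfortably inside the bound, which is what makes the scheme of Section 3 robust against the errors committed when the samples are produced by an ODE solver.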

Without an eigenparameter appearing in any of the boundary conditions, Tharwat et al. computed approximate eigenvalues of the discontinuous Dirac system studied in [22] by Hermite interpolation [21] and by the regularized sinc-method [12]. In the regularized sinc-method, as in the Hermite interpolation method, the basic idea is the following: the eigenvalues are characterized as the zeros of an analytic function $F(\lambda)$ which can be written in the form $F(\lambda) = f_0(\lambda) + f(\lambda)$, where $f_0(\lambda)$ is a known part. The ingenuity of the approach lies in choosing $F(\lambda)$ so that the unknown part $f(\lambda)$ belongs to $B_\sigma^2$ and can therefore be approximated by the WKS sampling theorem once its values at some equally spaced points are known; see [9-12]. Recall that in the regularized sinc and Hermite interpolation methods it is necessary that $f(\lambda)$ be an $L^2$-function. In this paper we use the sinc-Gaussian sampling formula (1.11) to compute the eigenvalues of (1.1)-(1.5) numerically. As expected, the new method reduces the error bounds remarkably (see the examples in Section 4). Here, too, the basic idea is to write the function of eigenvalues as the sum of two terms, one known and the other unknown, the unknown term being an entire function of exponential type which satisfies (1.12) but is not necessarily an $L^2$-function. We then approximate the unknown part using (1.11) and obtain better results. We would like to mention that papers on computing eigenvalues by the sinc-Gaussian method are few; see [20, 23-25]. In Sections 2 and 3 we derive the sinc-Gaussian technique for computing the eigenvalues of (1.1)-(1.5) with error estimates. The last section contains some illustrative examples.

2 Preliminaries

In this section we derive approximate values of the eigenvalues of problem (1.1)-(1.5). Recall that problem (1.1)-(1.5) has a denumerable set of real and simple eigenvalues; cf. [26], see also [22, 27-29]. Let
$$y(\cdot,\lambda) = \begin{pmatrix} y_1(\cdot,\lambda) \\ y_2(\cdot,\lambda) \end{pmatrix}, \qquad y_i(x,\lambda) = \begin{cases} y_{i1}(x,\lambda), & x \in [-1,0), \\ y_{i2}(x,\lambda), & x \in (0,1], \end{cases} \quad i = 1,2,$$
(2.1)
be the solution of (1.1) satisfying the following initial conditions:
$$\begin{pmatrix} y_{11}(-1,\lambda) & y_{12}(0^+,\lambda) \\ y_{21}(-1,\lambda) & y_{22}(0^+,\lambda) \end{pmatrix} = \begin{pmatrix} \cos\alpha & \delta^{-1} y_{11}(0^-,\lambda) \\ \sin\alpha & \delta^{-1} y_{21}(0^-,\lambda) \end{pmatrix}.$$
(2.2)
In [26], Tharwat proved the existence and uniqueness of the solution determined by (2.2). Since $y(\cdot,\lambda)$ satisfies (1.2), the eigenvalues of problem (1.1)-(1.5) are the zeros of the function (see Lemma 2.4 of [26], p.8)
$$\Delta(\lambda) = \delta^2\bigl((a_1 + \lambda\sin\beta)\, y_{12}(1,\lambda) - (a_2 + \lambda\cos\beta)\, y_{22}(1,\lambda)\bigr).$$
(2.3)
Notice that both $y(\cdot,\lambda)$ and $\Delta(\lambda)$ are entire functions of λ, and $y(\cdot,\lambda)$ satisfies the system of integral equations (cf. [26])
$$y_{11}(x,\lambda) = \cos\bigl(\lambda(x+1) + \alpha\bigr) - S_{-1,1}\, y_{11}(x,\lambda) - \tilde S_{-1,2}\, y_{21}(x,\lambda),$$
(2.4)
$$y_{21}(x,\lambda) = \sin\bigl(\lambda(x+1) + \alpha\bigr) + \tilde S_{-1,1}\, y_{11}(x,\lambda) - S_{-1,2}\, y_{21}(x,\lambda),$$
(2.5)
$$y_{12}(x,\lambda) = \frac{1}{\delta}\, y_{11}(0^-,\lambda)\cos(\lambda x) - \frac{1}{\delta}\, y_{21}(0^-,\lambda)\sin(\lambda x) - S_{0,1}\, y_{12}(x,\lambda) - \tilde S_{0,2}\, y_{22}(x,\lambda),$$
(2.6)
$$y_{22}(x,\lambda) = \frac{1}{\delta}\, y_{11}(0^-,\lambda)\sin(\lambda x) + \frac{1}{\delta}\, y_{21}(0^-,\lambda)\cos(\lambda x) + \tilde S_{0,1}\, y_{12}(x,\lambda) - S_{0,2}\, y_{22}(x,\lambda),$$
(2.7)
where $S_{-1,i}$, $\tilde S_{-1,i}$, $S_{0,i}$ and $\tilde S_{0,i}$, $i = 1,2$, are the Volterra integral operators defined by
$$S_{-1,i}\,\varphi(x,\lambda) := \int_{-1}^{x} \sin\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt, \qquad \tilde S_{-1,i}\,\varphi(x,\lambda) := \int_{-1}^{x} \cos\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt,$$
$$S_{0,i}\,\varphi(x,\lambda) := \int_{0}^{x} \sin\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt, \qquad \tilde S_{0,i}\,\varphi(x,\lambda) := \int_{0}^{x} \cos\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt.$$
For convenience, we define the constants
$$c_1 := \int_{-1}^{0} \bigl[|r_1(t)| + |r_2(t)|\bigr]\,dt, \qquad c_2 := c_1 e^{c_1}, \qquad c_3 := \int_{0}^{1} \bigl[|r_1(t)| + |r_2(t)|\bigr]\,dt,$$
$$c_4 := c_2 + \frac{2}{|\delta|}(1 + c_2), \qquad c_5 := \max\bigl\{ |a_1| + |a_2|,\ |\sin\beta| + |\cos\beta| \bigr\}.$$
(2.8)
Define $z_{-1,i}(\cdot,\lambda)$ and $z_{0,i}(\cdot,\lambda)$, $i = 1,2$, by
$$z_{-1,1}(x,\lambda) := S_{-1,1}\, y_{11}(x,\lambda) + \tilde S_{-1,2}\, y_{21}(x,\lambda), \qquad z_{-1,2}(x,\lambda) := \tilde S_{-1,1}\, y_{11}(x,\lambda) - S_{-1,2}\, y_{21}(x,\lambda),$$
(2.9)
$$z_{0,1}(x,\lambda) := S_{0,1}\, y_{12}(x,\lambda) + \tilde S_{0,2}\, y_{22}(x,\lambda), \qquad z_{0,2}(x,\lambda) := \tilde S_{0,1}\, y_{12}(x,\lambda) - S_{0,2}\, y_{22}(x,\lambda).$$
(2.10)
Lemma 2.1 The functions $z_{-1,1}(x,\lambda)$ and $z_{-1,2}(x,\lambda)$ are entire in λ for any fixed $x \in [-1,0)$ and satisfy the growth condition
$$|z_{-1,1}(x,\lambda)|,\ |z_{-1,2}(x,\lambda)| \le 2 c_2\, e^{|\lambda|(x+1)}, \quad \lambda \in \mathbb{C}.$$
(2.11)
Proof Since $z_{-1,1}(x,\lambda) = S_{-1,1}\, y_{11}(x,\lambda) + \tilde S_{-1,2}\, y_{21}(x,\lambda)$, from (2.4) and (2.5) we obtain
$$z_{-1,1}(x,\lambda) = S_{-1,1}\cos\bigl(\lambda(x+1)+\alpha\bigr) + \tilde S_{-1,2}\sin\bigl(\lambda(x+1)+\alpha\bigr) - S_{-1,1}\, z_{-1,1}(x,\lambda) + \tilde S_{-1,2}\, z_{-1,2}(x,\lambda).$$
Using the inequalities $|\sin z| \le e^{|z|}$ and $|\cos z| \le e^{|z|}$ for $z \in \mathbb{C}$ leads, for $\lambda \in \mathbb{C}$, to
$$\begin{aligned} |z_{-1,1}(x,\lambda)| &\le \bigl|S_{-1,1}\cos\bigl(\lambda(x+1)+\alpha\bigr)\bigr| + \bigl|\tilde S_{-1,2}\sin\bigl(\lambda(x+1)+\alpha\bigr)\bigr| + \bigl|S_{-1,1}\, z_{-1,1}(x,\lambda)\bigr| + \bigl|\tilde S_{-1,2}\, z_{-1,2}(x,\lambda)\bigr| \\ &\le 2\, e^{|\lambda|(x+1)} \int_{-1}^{x} \bigl[|r_1(t)| + |r_2(t)|\bigr]\,dt + e^{|\lambda|(x+1)} \int_{-1}^{x} \bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr]\, e^{-|\lambda|(t+1)}\,dt \\ &\le 2 c_1\, e^{|\lambda|(x+1)} + e^{|\lambda|(x+1)} \int_{-1}^{x} \bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr]\, e^{-|\lambda|(t+1)}\,dt. \end{aligned}$$
The above inequality can be reduced to
$$e^{-|\lambda|(x+1)}\, |z_{-1,1}(x,\lambda)| \le 2 c_1 + \int_{-1}^{x} \bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr]\, e^{-|\lambda|(t+1)}\,dt.$$
(2.12)
Similarly, we can prove that
$$e^{-|\lambda|(x+1)}\, |z_{-1,2}(x,\lambda)| \le 2 c_1 + \int_{-1}^{x} \bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr]\, e^{-|\lambda|(t+1)}\,dt.$$
(2.13)

Then, from (2.12), (2.13) and Lemma 3.1 of [28], p.204, we obtain (2.11). □

In a similar manner one proves the following lemma for $z_{0,1}(\cdot,\lambda)$ and $z_{0,2}(\cdot,\lambda)$.

Lemma 2.2 The functions $z_{0,1}(x,\lambda)$ and $z_{0,2}(x,\lambda)$ are entire in λ for any fixed $x \in (0,1]$ and satisfy the growth condition
$$|z_{0,1}(x,\lambda)|,\ |z_{0,2}(x,\lambda)| \le 2 c_3 c_4\, e^{|\lambda|(x+1)}, \quad \lambda \in \mathbb{C}.$$
(2.14)
Proof Since $z_{0,1}(x,\lambda) = S_{0,1}\, y_{12}(x,\lambda) + \tilde S_{0,2}\, y_{22}(x,\lambda)$, from (2.6) and (2.7) we obtain
$$\begin{aligned} z_{0,1}(x,\lambda) ={}& \frac{1}{\delta}\, y_{11}(0^-,\lambda)\, S_{0,1}\cos(\lambda x) - \frac{1}{\delta}\, y_{21}(0^-,\lambda)\, S_{0,1}\sin(\lambda x) - S_{0,1}\, z_{0,1}(x,\lambda) \\ &+ \frac{1}{\delta}\, y_{11}(0^-,\lambda)\, \tilde S_{0,2}\sin(\lambda x) + \frac{1}{\delta}\, y_{21}(0^-,\lambda)\, \tilde S_{0,2}\cos(\lambda x) + \tilde S_{0,2}\, z_{0,2}(x,\lambda). \end{aligned}$$
Then, from (2.4), (2.5) and Lemma 2.1, we get
$$\begin{aligned} |z_{0,1}(x,\lambda)| \le{}& \frac{1}{|\delta|}\, |y_{11}(0^-,\lambda)|\, \bigl|S_{0,1}\cos(\lambda x)\bigr| + \frac{1}{|\delta|}\, |y_{21}(0^-,\lambda)|\, \bigl|S_{0,1}\sin(\lambda x)\bigr| + \bigl|S_{0,1}\, z_{0,1}(x,\lambda)\bigr| \\ &+ \frac{1}{|\delta|}\, |y_{11}(0^-,\lambda)|\, \bigl|\tilde S_{0,2}\sin(\lambda x)\bigr| + \frac{1}{|\delta|}\, |y_{21}(0^-,\lambda)|\, \bigl|\tilde S_{0,2}\cos(\lambda x)\bigr| + \bigl|\tilde S_{0,2}\, z_{0,2}(x,\lambda)\bigr| \\ \le{}& 2\Bigl(c_2 + \frac{2}{|\delta|}(1 + c_2)\Bigr) c_3\, e^{|\lambda|(x+1)} = 2 c_3 c_4\, e^{|\lambda|(x+1)}. \end{aligned}$$
Similarly, we can prove that
$$|z_{0,2}(x,\lambda)| \le 2 c_3 c_4\, e^{|\lambda|(x+1)}.$$

 □

3 The numerical scheme

In this section we derive the method of computing the eigenvalues of problem (1.1)-(1.5) numerically. The basic idea of the scheme is to split $\Delta(\lambda)$ into two parts, a known part $K(\lambda)$ and an unknown one $U(\lambda)$. We then approximate $U(\lambda)$ using (1.11) to obtain an approximation of $\Delta(\lambda)$, and then compute its approximate zeros. We first split $\Delta(\lambda)$ as follows:
$$\Delta(\lambda) := K(\lambda) + U(\lambda),$$
(3.1)
where $U(\lambda)$ is the unknown part involving the integral operators,
$$\begin{aligned} U(\lambda) :={}& \delta\bigl[a_2\sin\lambda - a_1\cos\lambda + \lambda\sin(\lambda - \beta)\bigr]\, z_{-1,1}(0^-,\lambda) \\ &- \delta\bigl[a_1\sin\lambda + a_2\cos\lambda + \lambda\cos(\lambda - \beta)\bigr]\, z_{-1,2}(0^-,\lambda) \\ &- \delta^2\bigl[(a_1 + \lambda\sin\beta)\, z_{0,1}(1,\lambda) + (a_2 + \lambda\cos\beta)\, z_{0,2}(1,\lambda)\bigr], \end{aligned}$$
(3.2)
and $K(\lambda)$ is the known part
$$K(\lambda) := \delta\bigl[a_1\cos(2\lambda + \alpha) - a_2\sin(2\lambda + \alpha) - \lambda\sin(2\lambda + \alpha - \beta)\bigr].$$
(3.3)

Then, from Lemma 2.1 and Lemma 2.2, we have the following result.

Lemma 3.1 The function $U(\lambda)$ is entire in λ, and the following estimate holds:
$$|U(\lambda)| \le \phi(\lambda)\, e^{2|\lambda|},$$
(3.4)
where
$$\phi(\lambda) := M(1 + |\lambda|), \qquad M := 2|\delta|\, c_5\bigl(c_2 + |\delta|\, c_3 c_4\bigr).$$
(3.5)
Proof From (3.2) we have
$$\begin{aligned} |U(\lambda)| \le{}& |\delta|\bigl[|a_2||\sin\lambda| + |a_1||\cos\lambda| + |\lambda||\sin(\lambda-\beta)|\bigr]\, |z_{-1,1}(0^-,\lambda)| \\ &+ |\delta|\bigl[|a_1||\sin\lambda| + |a_2||\cos\lambda| + |\lambda||\cos(\lambda-\beta)|\bigr]\, |z_{-1,2}(0^-,\lambda)| \\ &+ \delta^2\bigl[(|a_1| + |\lambda||\sin\beta|)\, |z_{0,1}(1,\lambda)| + (|a_2| + |\lambda||\cos\beta|)\, |z_{0,2}(1,\lambda)|\bigr]. \end{aligned}$$

Using the inequalities $|\sin\lambda| \le e^{|\lambda|}$ and $|\cos\lambda| \le e^{|\lambda|}$ for $\lambda \in \mathbb{C}$, Lemma 2.1 and Lemma 2.2 imply (3.4). □

Thus $U(\lambda)$ is an entire function of exponential type $\sigma = 2$. In the following we let $\lambda \in \mathbb{R}$, since all eigenvalues are real. We now approximate $U(\lambda)$ using the operator (1.11) with $h \in (0, \pi/2)$ and $\omega := (\pi - 2h)/2$; from (1.13) we obtain
$$|U(\lambda) - (G_{h,N} U)(\lambda)| \le T_{h,N}(\lambda),$$
(3.6)
where
$$T_{h,N}(\lambda) := 2\,|\sin(h^{-1}\pi\lambda)|\,\phi\bigl(|\lambda| + h(N+1)\bigr)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_N(0), \quad \lambda \in \mathbb{R}.$$
(3.7)
The samples $U(nh) = \Delta(nh) - K(nh)$, $n \in \mathbb{Z}_N(\lambda)$, cannot in general be computed explicitly. We approximate them numerically by solving the initial value problems defined by (1.1) and (2.2), obtaining the approximate values $\tilde U(nh) = \tilde\Delta(nh) - K(nh)$, $n \in \mathbb{Z}_N(\lambda)$. Here we use the computer algebra system Mathematica to obtain approximate solutions with the required accuracy; a separate study of the effect of different numerical schemes and of the computational cost would nevertheless be interesting. Accordingly, we have the explicit expansion
$$(G_{h,N} \tilde U)(\lambda) := \sum_{n\in\mathbb{Z}_N(\lambda)} \tilde U(nh)\,\operatorname{sinc}(h^{-1}\pi\lambda - n\pi)\, G\bigl(\sqrt{\omega/N}\, h^{-1}(\lambda - nh)\bigr).$$
(3.8)
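To make the sample-generation step concrete, the sketch below (added here, not from the paper, and using the data of Example 4.1 below rather than Mathematica) computes one sample $\tilde\Delta(\lambda)$ by integrating the left initial value problem with a classical RK4 stepper, applying the transmission conditions at 0, integrating the right problem, and forming $\Delta(\lambda)$ from (2.3); the result is checked against the closed-form characteristic function of Example 4.1.

```python
import numpy as np

# One sample Delta~(lambda) for the data of Example 4.1:
# alpha = beta = pi/2, a1 = 1, a2 = -1, delta = 2, r = x on [-1,0), x^2 on (0,1].
alpha, beta, a1, a2, delta = np.pi / 2, np.pi / 2, 1.0, -1.0, 2.0

def rk4(f, y, x0, x1, steps):
    """Classical 4th-order Runge-Kutta integration of y' = f(x, y)."""
    step = (x1 - x0) / steps
    x = x0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + step / 2, y + step / 2 * k1)
        k3 = f(x + step / 2, y + step / 2 * k2)
        k4 = f(x + step, y + step * k3)
        y = y + step / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += step
    return y

def delta_numeric(lam, steps=2000):
    # System (1.1): y1' = -(lam + r) y2,  y2' = (lam + r) y1.
    f_left = lambda x, y: np.array([-(lam + x) * y[1], (lam + x) * y[0]])
    f_right = lambda x, y: np.array([-(lam + x**2) * y[1], (lam + x**2) * y[0]])
    y = rk4(f_left, np.array([np.cos(alpha), np.sin(alpha)]), -1.0, 0.0, steps)
    y = y / delta                                    # transmission conditions
    y = rk4(f_right, y, 0.0, 1.0, steps)
    return delta**2 * ((a1 + lam * np.sin(beta)) * y[0]
                       - (a2 + lam * np.cos(beta)) * y[1])

def delta_exact(lam):                                # Example 4.1, eq. (4.6)
    u = 1.0 / 6.0 - 2.0 * lam
    return 2.0 * (np.cos(u) + (1.0 + lam) * np.sin(u))

lam = 0.5
print(delta_numeric(lam), delta_exact(lam))   # should agree closely
```

The agreement between the ODE-based sample and the closed form is far below the amplitude tolerance ε used in Section 4, so the hypotheses of (1.15) are easy to satisfy with a standard solver.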
Therefore we get (cf. (1.16))
$$|(G_{h,N} U)(\lambda) - (G_{h,N} \tilde U)(\lambda)| \le A_{\varepsilon,N}(0), \quad \lambda \in \mathbb{R}.$$
(3.9)
Now let $\tilde\Delta_N(\lambda) := K(\lambda) + (G_{h,N} \tilde U)(\lambda)$. From (3.6) and (3.9) we obtain
$$|\Delta(\lambda) - \tilde\Delta_N(\lambda)| \le T_{h,N}(\lambda) + A_{\varepsilon,N}(0), \quad \lambda \in \mathbb{R}.$$
(3.10)
Let $\lambda^*$ be an eigenvalue and $\lambda_N$ its desired approximation, i.e., $\Delta(\lambda^*) = 0$ and $\tilde\Delta_N(\lambda_N) = 0$. From (3.10) we have $|\tilde\Delta_N(\lambda^*)| \le T_{h,N}(\lambda^*) + A_{\varepsilon,N}(0)$. Define the curves
$$a_\pm(\lambda) = \tilde\Delta_N(\lambda) \pm \bigl(T_{h,N}(\lambda) + A_{\varepsilon,N}(0)\bigr).$$
(3.11)
The curves $a_+(\lambda)$ and $a_-(\lambda)$ enclose the curve of $\Delta(\lambda)$ for suitably large N. Hence the closure interval is determined by solving $a_\pm(\lambda) = 0$, which gives an interval
$$I_{\varepsilon,N} := [a_-, a_+].$$

It is worthwhile to mention that the simplicity of the eigenvalues guarantees the existence of the approximate eigenvalues, i.e., of the $\lambda_N$ for which $\tilde\Delta_N(\lambda_N) = 0$. Next we estimate the error $|\lambda^* - \lambda_N|$ for an eigenvalue $\lambda^*$.

Theorem 3.2 Let $\lambda^*$ be an eigenvalue of (1.1)-(1.5) and let $\lambda_N$ be its approximation. Then, for $\lambda^* \in \mathbb{R}$, we have the following estimate:
$$|\lambda^* - \lambda_N| < \frac{T_{h,N}(\lambda_N) + A_{\varepsilon,N}(0)}{\inf_{\zeta\in I_{\varepsilon,N}} |\Delta'(\zeta)|},$$
(3.12)

where the interval $I_{\varepsilon,N}$ is defined above.

Proof Replacing λ by $\lambda_N$ in (3.10), we obtain
$$|\Delta(\lambda_N) - \Delta(\lambda^*)| < T_{h,N}(\lambda_N) + A_{\varepsilon,N}(0),$$
(3.13)
where we have used $\tilde\Delta_N(\lambda_N) = \Delta(\lambda^*) = 0$. Using the mean value theorem yields, for some $\zeta \in J_{\varepsilon,N} := [\min(\lambda^*, \lambda_N), \max(\lambda^*, \lambda_N)]$,
$$\bigl|(\lambda^* - \lambda_N)\,\Delta'(\zeta)\bigr| \le T_{h,N}(\lambda_N) + A_{\varepsilon,N}(0), \quad \zeta \in J_{\varepsilon,N} \subset I_{\varepsilon,N}.$$
(3.14)

Since $\lambda^*$ is simple and N is sufficiently large, $\inf_{\zeta\in I_{\varepsilon,N}} |\Delta'(\zeta)| > 0$, and we get (3.12). □

4 Numerical examples

This section includes two examples illustrating the sinc-Gaussian method; in both, the method gives remarkably accurate results. We indicate the effect of the amplitude error by determining enclosure intervals for different values of ε, and the effect of N and h by several choices of these parameters. We would like to mention that Mathematica has been used to obtain the exact values for these examples, where the eigenvalues cannot be computed concretely, and to round the exact eigenvalues. Each example is accompanied by figures that illustrate the procedure near some of the approximated eigenvalues. More explanations are given below.

Example 4.1 Consider the system
$$y_2'(x) - r(x)\,y_1(x) = \lambda y_1(x), \qquad -y_1'(x) + r(x)\,y_2(x) = \lambda y_2(x), \quad x \in [-1,0)\cup(0,1],$$
(4.1)
$$y_1(-1) = 0, \qquad (1 + \lambda)\,y_1(1) + y_2(1) = 0,$$
(4.2)
$$y_1(0^-) - 2\,y_1(0^+) = 0, \qquad y_2(0^-) - 2\,y_2(0^+) = 0.$$
(4.3)
Here
$$r_1(x) = r_2(x) = r(x) = \begin{cases} x, & x \in [-1,0), \\ x^2, & x \in (0,1], \end{cases}$$
(4.4)
$\alpha = \beta = \frac{\pi}{2}$, $a_1 = 1$, $a_2 = -1$ and $\delta = 2$. Direct calculations give
$$K(\lambda) = 2\bigl(\cos[2\lambda] - (1 + \lambda)\sin[2\lambda]\bigr)$$
(4.5)
and
$$\Delta(\lambda) = 2\Bigl(\cos\Bigl[\frac{1}{6} - 2\lambda\Bigr] + (1 + \lambda)\sin\Bigl[\frac{1}{6} - 2\lambda\Bigr]\Bigr).$$
As is clearly seen, the eigenvalues cannot be computed explicitly. The following three tables (Tables 1, 2, 3) indicate the application of our technique to this problem and the effect of ε. By exact we mean the zeros of Δ ( λ ) computed by Mathematica.
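Although the zeros of (4.6) have no closed form, they are easy to locate numerically; the sketch below (an added illustration, not from the paper, which uses plain bisection rather than Mathematica's root finder) recovers two of the "exact" values quoted in Table 1.

```python
import numpy as np

# Locating two zeros of the characteristic function (4.6) by bisection,
# for comparison with the "exact" column of Table 1.

def char(lam):
    u = 1.0 / 6.0 - 2.0 * lam
    return 2.0 * (np.cos(u) + (1.0 + lam) * np.sin(u))

def bisect(f, a, b, tol=1e-13):
    """Simple bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

lam0 = bisect(char, 0.3, 0.5)    # near the tabulated 0.3944055848645847
lam1 = bisect(char, 1.7, 1.9)    # near the tabulated 1.8242788740449205
print(lam0, lam1)
```

The bisection roots reproduce the tabulated values of $\lambda_0$ and $\lambda_1$ to full precision, confirming the closed form (4.6).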
Table 1 The approximation $\lambda_{k,N}$ and the exact solution $\lambda_k$ for different choices of h and N

|  | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_{0}$ | $\lambda_{1}$ |
|---|---|---|---|---|
| Exact $\lambda_k$ | −1.9050594725435388 | −0.8005149927957496 | 0.3944055848645847 | 1.8242788740449205 |
| $\lambda_{k,N}$: h = 0.8, ω = 0.7714, N = 10 | −1.9050945328700728 | −0.8005149844410676 | 0.3943794190610962 | 1.8242617833701285 |
| $\lambda_{k,N}$: h = 0.8, ω = 0.7714, N = 20 | −1.9050594925575182 | −0.8005149927903844 | 0.39440557044475477 | 1.8242788645444055 |
| $\lambda_{k,N}$: h = 0.2, ω = 1.3714, N = 10 | −1.9050594937724747 | −0.8005149927866473 | 0.3944055855507727 | 1.8242788693330168 |
| $\lambda_{k,N}$: h = 0.2, ω = 1.3714, N = 20 | −1.905059472543563 | −0.8005149927957529 | 0.39440558486458616 | 1.824278874044914 |

Table 2 Absolute error $|\lambda_k - \lambda_{k,N}|$

|  | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_{0}$ | $\lambda_{1}$ |
|---|---|---|---|---|
| h = 0.8, N = 10 | 3.50603 × 10^{-5} | 8.35468 × 10^{-9} | 2.61658 × 10^{-5} | 1.70907 × 10^{-5} |
| h = 0.8, N = 20 | 2.0014 × 10^{-8} | 5.36515 × 10^{-12} | 1.44198 × 10^{-8} | 9.50052 × 10^{-9} |
| h = 0.2, N = 10 | 2.12289 × 10^{-8} | 9.10227 × 10^{-12} | 6.86188 × 10^{-10} | 4.7119 × 10^{-9} |
| h = 0.2, N = 20 | 2.42029 × 10^{-14} | 3.33067 × 10^{-15} | 3.33067 × 10^{-15} | 6.43929 × 10^{-15} |

Table 3 For N = 20 and h = 0.2, the exact solutions $\lambda_k$ are all inside the interval $[a_-, a_+]$ for different values of ε

|  | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_{0}$ | $\lambda_{1}$ |
|---|---|---|---|---|
| Exact $\lambda_k$ | −1.9050594725435388 | −0.8005149927957496 | 0.3944055848645847 | 1.8242788740449205 |
| $I_{\varepsilon,N}$, ε = 10^{-2} | [−1.9270913, −1.8826416] | [−0.8259079, −0.7752883] | [0.3752245, 0.4132930] | [1.8121593, 1.8363114] |
| $I_{\varepsilon,N}$, ε = 10^{-5} | [−1.9050816, −1.9050372] | [−0.8005402, −0.8004897] | [0.3944246, 0.3944055] | [1.8242667, 1.8242909] |

Figures 1 and 2 illustrate the enclosure intervals dominating $\lambda_{-2}$ for N = 20, h = 0.2 and ε = 10^{-2}, ε = 10^{-5}, respectively. The middle curve represents $\Delta(\lambda)$, while the upper and lower curves represent $a_+(\lambda)$ and $a_-(\lambda)$, respectively. We notice that for ε = 10^{-5} the two bounding curves are almost identical. Similarly, Figures 3 and 4 illustrate the enclosure intervals dominating $\lambda_1$ for h = 0.2, N = 20 and ε = 10^{-2}, ε = 10^{-5}, respectively.
Figure 1 The enclosure interval dominating $\lambda_{-2}$ for h = 0.2, N = 20 and ε = 10^{-2}.

Figure 2 The enclosure interval dominating $\lambda_{-2}$ for h = 0.2, N = 20 and ε = 10^{-5}.

Figure 3 The enclosure interval dominating $\lambda_1$ for h = 0.2, N = 20 and ε = 10^{-2}.

Figure 4 The enclosure interval dominating $\lambda_1$ for h = 0.2, N = 20 and ε = 10^{-5}.

Example 4.2 In this example we consider the system
$$y_2'(x) - r(x)\,y_1(x) = \lambda y_1(x), \qquad -y_1'(x) + r(x)\,y_2(x) = \lambda y_2(x), \quad x \in [-1,0)\cup(0,1],$$
(4.7)
$$\sqrt{3}\, y_1(-1) - y_2(-1) = 0, \qquad \Bigl(1 + \frac{1}{2}\lambda\Bigr) y_1(1) - \Bigl(1 + \frac{\sqrt{3}}{2}\lambda\Bigr) y_2(1) = 0,$$
(4.8)
$$y_1(0^-) - \sqrt{3}\, y_1(0^+) = 0, \qquad y_2(0^-) - \sqrt{3}\, y_2(0^+) = 0,$$
(4.9)
where
$$r_1(x) = r_2(x) = r(x) = \begin{cases} x + 1, & x \in [-1,0), \\ x, & x \in (0,1], \end{cases}$$
(4.10)
$a_1 = a_2 = 1$, $\alpha = \frac{\pi}{3}$, $\beta = \frac{\pi}{6}$ and $\delta = \sqrt{3}$. Direct calculations give
$$K(\lambda) = \sqrt{3}\,\Bigl[\cos\Bigl[\frac{\pi}{3} + 2\lambda\Bigr] - \lambda\sin\Bigl[\frac{\pi}{6} + 2\lambda\Bigr] - \sin\Bigl[\frac{\pi}{3} + 2\lambda\Bigr]\Bigr]$$
(4.11)
and
$$\Delta(\lambda) = -\frac{\sqrt{3}}{2}\Bigl[\bigl(-1 + \sqrt{3} + \lambda\bigr)\cos[1 + 2\lambda] + \bigl(1 + \sqrt{3} + \sqrt{3}\,\lambda\bigr)\sin[1 + 2\lambda]\Bigr].$$
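As in the first example, the zeros of (4.12) can be cross-checked independently; the sketch below (added here, not from the paper) locates $\lambda_0$ by bisection for comparison with the "exact" column of Table 4.

```python
import numpy as np

# Bisection on the characteristic function (4.12) of Example 4.2.

def char(lam):
    s3 = np.sqrt(3.0)
    u = 1.0 + 2.0 * lam
    return -(s3 / 2.0) * ((s3 - 1.0 + lam) * np.cos(u)
                          + (1.0 + s3 + s3 * lam) * np.sin(u))

def bisect(f, a, b, tol=1e-13):
    """Simple bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

lam0 = bisect(char, 0.8, 1.0)    # near the tabulated 0.8894376317278696
print(lam0)
```

The root agrees with the tabulated $\lambda_0$, confirming the closed form (4.12).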
Tables 4 and 5 give the exact eigenvalues $\{\lambda_k\}_{k=-2}^{1}$ and their approximations for different values of h, N and ε. In Table 6 we give the absolute error for different values of h and N.
Table 4 The approximation $\lambda_{k,N}$ and the exact solution $\lambda_k$ for different choices of h and N

|  | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_{0}$ | $\lambda_{1}$ |
|---|---|---|---|---|
| Exact $\lambda_k$ | −1.443241990338957 | −0.5507950329405884 | 0.8894376317278696 | 2.427882996831557 |
| $\lambda_{k,N}$: h = 0.6, ω = 0.9714, N = 10 | −1.4432116741528003 | −0.5507870771754422 | 0.8894392796056301 | 2.4278845029050586 |
| $\lambda_{k,N}$: h = 0.6, ω = 0.9714, N = 20 | −1.4432419877352867 | −0.5507950322921764 | 0.8894376316344037 | 2.427882996941257 |
| $\lambda_{k,N}$: h = 0.1, ω = 1.4714, N = 10 | −1.443241954240034 | −0.5507950143369327 | 0.8894376262695777 | 2.4278830194325765 |
| $\lambda_{k,N}$: h = 0.1, ω = 1.4714, N = 20 | −1.4432419903389377 | −0.5507950329405837 | 0.88943763172786576 | 2.4278829968315647 |

Table 5 For N = 20 and h = 0.1, the exact solutions $\lambda_k$ are all inside the interval $[a_-, a_+]$ for different values of ε

|  | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_{0}$ | $\lambda_{1}$ |
|---|---|---|---|---|
| Exact $\lambda_k$ | −1.443241990338957 | −0.5507950329405884 | 0.8894376317278696 | 2.427882996831557 |
| $I_{\varepsilon,N}$, ε = 10^{-2} | [−1.4716489, −1.4144426] | [−0.5736938, −0.5287366] | [0.8789632, 0.8998212] | [2.4214822, 2.4342626] |
| $I_{\varepsilon,N}$, ε = 10^{-5} | [−1.4432705, −1.4432134] | [−0.5508174, −0.5507725] | [0.8894272, 0.8894480] | [2.4278766, 2.4278893] |

Table 6 Absolute error $|\lambda_k - \lambda_{k,N}|$

|  | $\lambda_{-2}$ | $\lambda_{-1}$ | $\lambda_{0}$ | $\lambda_{1}$ |
|---|---|---|---|---|
| h = 0.6, N = 10 | 3.03162 × 10^{-5} | 7.95577 × 10^{-6} | 1.64788 × 10^{-6} | 1.50607 × 10^{-6} |
| h = 0.6, N = 20 | 2.60367 × 10^{-9} | 6.48412 × 10^{-10} | 9.34659 × 10^{-11} | 1.097 × 10^{-10} |
| h = 0.1, N = 10 | 3.60989 × 10^{-8} | 1.86037 × 10^{-8} | 5.45829 × 10^{-9} | 2.2601 × 10^{-8} |
| h = 0.1, N = 20 | 1.93179 × 10^{-14} | 4.66294 × 10^{-15} | 3.88578 × 10^{-15} | 7.54952 × 10^{-15} |

Here Figures 5, 6, 7 and 8 illustrate the enclosure intervals dominating $\lambda_0$ and $\lambda_1$ for h = 0.1, N = 20 and ε = 10^{-2}, ε = 10^{-5}, respectively.
Figure 5 The enclosure interval dominating $\lambda_0$ for h = 0.1, N = 20 and ε = 10^{-2}.

Figure 6 The enclosure interval dominating $\lambda_0$ for h = 0.1, N = 20 and ε = 10^{-5}.

Figure 7 The enclosure interval dominating $\lambda_1$ for h = 0.1, N = 20 and ε = 10^{-2}.

Figure 8 The enclosure interval dominating $\lambda_1$ for h = 0.1, N = 20 and ε = 10^{-5}.

Declarations

Acknowledgements

This research was supported by a grant from the Institute of Scientific Research at Umm Al-Qura University, Saudi Arabia.

Authors’ Affiliations

(1)
Department of Mathematics, Faculty of Science, King Abdulaziz University, North Jeddah, Saudi Arabia
(2)
Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt
(3)
Department of Mathematics, University College, Umm Al-Qura University, P.O. Box 8140, Makkah, Saudi Arabia

References

  1. Kotel’nikov V: On the carrying capacity of the ‘ether’ and wire in telecommunications. In Material for the First All-Union Conference on Questions of Communications. Izd. Red. Upr. Svyazi RKKA, Moscow; 1933:55-64. (Russian)
  2. Shannon CE: Communications in the presence of noise. Proc. IRE 1949, 37: 10-21.
  3. Whittaker ET: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb., Sect. A 1915, 35: 181-194.
  4. Butzer PL, Schmeisser G, Stens RL: An introduction to sampling analysis. In Nonuniform Sampling: Theory and Practices. Edited by: Marvasti F. Kluwer Academic, New York; 2001:17-121.
  5. Kowalski M, Sikorski K, Stenger F: Selected Topics in Approximation and Computation. Oxford University Press, London; 1995.
  6. Lund J, Bowers K: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia; 1992.
  7. Stenger F: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 1981, 23: 156-224.
  8. Stenger F: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York; 1993.
  9. Boumenir A: Higher approximation of eigenvalues by the sampling method. BIT Numer. Math. 2000, 40: 215-225. 10.1023/A:1022334806027
  10. Boumenir A: Sampling and eigenvalues of non-self-adjoint Sturm-Liouville problems. SIAM J. Sci. Comput. 2001, 23: 219-229. 10.1137/S1064827500374078
  11. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Sturm-Liouville problems with parameter dependent boundary conditions using sinc method. Numer. Algorithms 2013, 63: 27-48. 10.1007/s11075-012-9609-3
  12. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Dirac system using sinc method with error analysis. Int. J. Comput. Math. 2012, 89: 2061-2080. 10.1080/00207160.2012.700112
  13. Butzer PL, Stens RL: A modification of the Whittaker-Kotel’nikov-Shannon sampling series. Aequ. Math. 1985, 28: 305-311. 10.1007/BF02189424
  14. Gervais R, Rahman QI, Schmeisser G: A bandlimited function simulating a duration-limited one. In Approximation Theory and Functional Analysis. Edited by: Butzer PL, Stens RL. Birkhäuser, Basel; 1984:355-362.
  15. Stens RL: Sampling by generalized kernels. In Sampling Theory in Fourier and Signal Analysis: Advanced Topics. Edited by: Higgins JR, Stens RL. Oxford University Press, Oxford; 1999:130-157.
  16. Schmeisser G, Stenger F: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process. 2007, 6: 199-221.
  17. Qian L: On the regularized Whittaker-Kotel’nikov-Shannon sampling formula. Proc. Am. Math. Soc. 2002, 131: 1169-1176.
  18. Qian L, Creamer DB: A modification of the sampling series with a Gaussian multiplier. Sampl. Theory Signal Image Process. 2006, 5: 1-20.
  19. Qian L, Creamer DB: Localized sampling in the presence of noise. Appl. Math. Lett. 2006, 19: 351-355. 10.1016/j.aml.2005.05.013
  20. Annaby MH, Asharabi RM: Computing eigenvalues of boundary-value problems using sinc-Gaussian method. Sampl. Theory Signal Image Process. 2008, 7: 293-312.
  21. Tharwat MM, Bhrawy AH: Computation of eigenvalues of discontinuous Dirac system using Hermite interpolation technique. Adv. Differ. Equ. 2012. 10.1186/1687-1847-2012-59
  22. Tharwat MM, Yildirim A, Bhrawy AH: Sampling of discontinuous Dirac systems. Numer. Funct. Anal. Optim. 2013, 34: 323-348. 10.1080/01630563.2012.693565
  23. Bhrawy AH, Tharwat MM, Al-Fhaid A: Numerical algorithms for computing eigenvalues of discontinuous Dirac system using sinc-Gaussian method. Abstr. Appl. Anal. 2012. 10.1155/2012/925134
  24. Annaby MH, Tharwat MM: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math. 2013, 63: 129-137.
  25. Tharwat MM, Bhrawy AH, Alofi AS: Approximation of eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions. Bound. Value Probl. 2013. 10.1186/1687-2770-2013-132
  26. Tharwat MM: On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions. Bound. Value Probl. 2013. 10.1186/1687-2770-2013-65
  27. Levitan BM, Sargsjan IS: Introduction to Spectral Theory: Self Adjoint Ordinary Differential Operators. Translations of Mathematical Monographs 39. Am. Math. Soc., Providence; 1975.
  28. Levitan BM, Sargsjan IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht; 1991.
  29. Tharwat MM: Discontinuous Sturm-Liouville problems and associated sampling theories. Abstr. Appl. Anal. 2011. 10.1155/2011/610232
