Approximation of eigenvalues of boundary value problems

Abstract

In the present paper we apply a sinc-Gaussian technique to compute approximate values of the eigenvalues of discontinuous Dirac systems which contain an eigenvalue parameter in one boundary condition, with transmission conditions at the point of discontinuity. The error of this method decays exponentially in the number of involved samples, so the accuracy of the new technique is higher than that of the classical sinc-method. Worked numerical examples, with tables and illustrative figures, are given at the end of the paper and show that the method yields better results.

MSC: 34L16, 94A20, 65L15.

1 Introduction

Consider the discontinuous Dirac system which consists of the system of differential equations

\[
\begin{pmatrix} y_2'(x) - r_1(x)\,y_1(x) \\ -\bigl(y_1'(x) + r_2(x)\,y_2(x)\bigr) \end{pmatrix}
=
\begin{pmatrix} \lambda\, y_1(x) \\ \lambda\, y_2(x) \end{pmatrix},
\qquad x \in [-1,0)\cup(0,1],
\]
(1.1)

with boundary conditions

\[
U_1(y) := \sin\alpha\, y_1(-1) - \cos\alpha\, y_2(-1) = 0,
\]
(1.2)
\[
U_2(y) := (a_1 + \lambda\sin\beta)\, y_1(1) - (a_2 + \lambda\cos\beta)\, y_2(1) = 0
\]
(1.3)

and transmission conditions

\[
U_3(y) := y_1(0^-) - \delta\, y_1(0^+) = 0,
\]
(1.4)
\[
U_4(y) := y_2(0^-) - \delta\, y_2(0^+) = 0,
\]
(1.5)

where λ ∈ ℂ; y = (y₁, y₂)ᵀ; the real-valued functions r₁(·) and r₂(·) are continuous in [−1,0) and (0,1] and have the finite limits r₁(0^±) := lim_{x→0^±} r₁(x), r₂(0^±) := lim_{x→0^±} r₂(x); a₁, a₂, δ ∈ ℝ; α, β ∈ [0,π); δ ≠ 0 and ρ := a₁ cos β − a₂ sin β > 0. The aim of the present work is to compute the eigenvalues of (1.1)-(1.5) numerically by the sinc-Gaussian technique, together with an error analysis covering the truncation error and the amplitude error.

Sampling theory is one of the most important mathematical tools used in communication engineering, since it enables engineers to reconstruct signals from some of their sampled data. A fundamental result in information theory is the Whittaker-Kotel’nikov-Shannon (WKS) sampling theorem [1–3]. It states that any f ∈ B_σ², σ > 0, where

\[
B_\sigma^2 := \Bigl\{ f : f \text{ entire},\ |f(\mu)| \le C e^{\sigma|\mu|},\ \int_{\mathbb{R}} |f(\mu)|^2\,d\mu < \infty \Bigr\},
\]

can be reconstructed from its sampled values {f(nπ/σ):nZ} by the formula

\[
f(\lambda) = \sum_{n\in\mathbb{Z}} f(n\pi/\sigma)\,\operatorname{sinc}(\sigma\lambda - n\pi), \qquad \lambda\in\mathbb{C},
\]
(1.6)

where

\[
\operatorname{sinc}(\lambda) := \begin{cases} \dfrac{\sin\lambda}{\lambda}, & \lambda \neq 0, \\[4pt] 1, & \lambda = 0. \end{cases}
\]
(1.7)
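As a quick numerical illustration of (1.6)-(1.7), the following sketch reconstructs a function in B₁² from its samples with a truncated WKS series. The test function and the truncation level are our own choices, not part of the paper.

```python
import numpy as np

def wks(f, sigma, lam, N=200):
    """Truncated WKS series (1.6); np.sinc(x) = sin(pi*x)/(pi*x),
    so np.sinc(sigma*lam/pi - n) equals sinc(sigma*lam - n*pi) of (1.7)."""
    n = np.arange(-N, N + 1)
    return float(np.sum(f(n * np.pi / sigma) * np.sinc(sigma * lam / np.pi - n)))

def f(mu):
    """Fejer-type test function (sin(mu/2)/(mu/2))^2, entire of type 1 and in L^2."""
    mu = np.asarray(mu, dtype=float)
    den = np.where(mu == 0.0, 1.0, mu / 2.0)   # safe denominator at mu = 0
    return np.where(mu == 0.0, 1.0, (np.sin(mu / 2.0) / den) ** 2)

err = abs(wks(f, 1.0, 1.234) - float(f(1.234)))
```

Because the sinc kernel decays only like O(1/|λ|), hundreds of terms are needed even for modest accuracy here; this is the slow convergence that the sinc-Gaussian technique of this paper is designed to overcome.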

Series (1.6) converges absolutely and uniformly on compact subsets of ℂ, and uniformly on ℝ, cf. [4]. Expansion (1.6) is used in several approximation problems which are known as sinc-methods; see, e.g., [5–8]. In particular, the sinc-method is used to approximate eigenvalues of boundary value problems; see, for example, [9–12]. The sinc function has a slow rate of decay at infinity, as slow as O(|λ|⁻¹). There have been several attempts to improve the rate of decay. One of the interesting ways is to multiply the sinc function in (1.6) by a kernel function; see, e.g., [13–15]. Let h ∈ (0, π/σ] and γ ∈ (0, π − hσ). Assume that Φ ∈ B_γ² with Φ(0) = 1; then for g ∈ B_σ² we have the expansion [16]

\[
g(\lambda) = \sum_{n=-\infty}^{\infty} g(nh)\,\operatorname{sinc}\bigl(h^{-1}\pi\lambda - n\pi\bigr)\,\Phi\bigl(h^{-1}\lambda - n\bigr).
\]
(1.8)

The speed of convergence of the series in (1.8) is determined by the decay of |Φ(λ)|. But the decay of an entire function of exponential type cannot be as fast as e^{−c|x|} as |x| → ∞, for some positive c [16]. In [17], Qian introduced the following regularized sampling formula. For h ∈ (0, π/σ], N ∈ ℕ and r > 0, Qian defined the operator [17]

\[
(\mathcal{G}_{h,N}\, g)(\lambda) = \sum_{n\in\mathbb{Z}_N(\lambda)} g(nh)\,\operatorname{sinc}\bigl(h^{-1}\pi\lambda - n\pi\bigr)\, G\!\left(\frac{\lambda - nh}{\sqrt{2rh}}\right), \qquad \lambda\in\mathbb{R},
\]
(1.9)

where G(t) := exp(−t²) is the Gaussian function, ℤ_N(x) := {n ∈ ℤ : |[h⁻¹x] − n| ≤ N} and [x] denotes the integer part of x ∈ ℝ; see also [18, 19]. Qian also derived the following error bound. If g ∈ B_σ², h ∈ (0, π/σ] and a := min{r(π − hσ), (N − 2)/r} ≥ 1, then [17, 18]

\[
\bigl|g(\lambda) - (\mathcal{G}_{h,N}\, g)(\lambda)\bigr| \le \frac{2\sqrt{\sigma}}{\pi}\,\|g\|_2\,\frac{\sqrt{\pi}}{2a^2}\Bigl(2\sqrt{\pi}\,a + e^{3/2} r^2\Bigr) e^{-a^2/2}, \qquad \lambda\in\mathbb{R}.
\]
(1.10)

In [16] Schmeisser and Stenger extended the operator (1.9) to the complex domain ℂ. For σ > 0, h ∈ (0, π/σ] and ω := (π − hσ)/2, they defined the operator [16]

\[
(\mathcal{G}_{h,N}\, g)(\lambda) := \sum_{n\in\mathbb{Z}_N(\lambda)} g(nh)\,\operatorname{sinc}\bigl(h^{-1}\pi\lambda - n\pi\bigr)\, G\!\left(\frac{\sqrt{\omega}\,(\lambda - nh)}{\sqrt{N}\,h}\right),
\]
(1.11)

where ℤ_N(λ) := {n ∈ ℤ : |[h⁻¹ℜλ + 1/2] − n| ≤ N} and N ∈ ℕ. Note that the summation limits in (1.11) depend on the real part of λ. Schmeisser and Stenger [16] proved that if g is an entire function such that

\[
|g(\xi + i\eta)| \le \phi(|\xi|)\, e^{\sigma|\eta|}, \qquad \xi, \eta \in \mathbb{R},
\]
(1.12)

where ϕ is a non-decreasing, non-negative function on [0,∞) and σ ≥ 0, then for h ∈ (0, π/σ), ω := (π − hσ)/2, N ∈ ℕ and |ℑλ| < Nh, we have

\[
\bigl|g(\lambda) - (\mathcal{G}_{h,N}\, g)(\lambda)\bigr| \le 2\,\bigl|\sin\bigl(h^{-1}\pi\lambda\bigr)\bigr|\,\phi\bigl(|\Re\lambda| + h(N+1)\bigr)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_N\bigl(h^{-1}\Im\lambda\bigr), \qquad \lambda\in\mathbb{C},
\]
(1.13)

where

\[
\beta_N(t) := \cosh(2\omega t) + \frac{2\, e^{-\omega t^2/N}}{\sqrt{\pi\omega N}\,\bigl[1 - (t/N)^2\bigr]} + \frac{1}{2}\left[\frac{e^{2\omega t}}{e^{2\pi(N-t)} - 1} + \frac{e^{-2\omega t}}{e^{2\pi(N+t)} - 1}\right].
\]
(1.14)
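The exponential decay in N promised by (1.13) can be checked numerically. The sketch below applies the operator (1.11) to the toy function g(λ) = cos(2λ), which satisfies (1.12) with σ = 2 and ϕ ≡ 1. The exact form of the Gaussian factor, exp(−ω(λ − nh)²/(Nh²)), is our reading of the garbled formula, and the whole snippet is an illustration, not the authors' code.

```python
import numpy as np

def sinc_gauss(g, lam, h, N, sigma):
    """Sinc-Gaussian operator (1.11) for real lam (assumed Gaussian factor)."""
    omega = (np.pi - h * sigma) / 2.0
    k = int(np.floor(lam / h + 0.5))            # centre of the window Z_N(lam)
    n = np.arange(k - N, k + N + 1)
    return float(np.sum(g(n * h)
                        * np.sinc(lam / h - n)   # = sinc(h^{-1} pi lam - n pi)
                        * np.exp(-omega * (lam - n * h) ** 2 / (N * h * h))))

g = lambda t: np.cos(2.0 * t)                    # entire of type 2, phi = 1
lam, h = 0.77, 0.5
errs = [abs(g(lam) - sinc_gauss(g, lam, h, N, 2.0)) for N in (5, 10, 15)]
```

With ω = (π − 1)/2 the error should shrink roughly like e^{−ωN}, i.e. by several orders of magnitude between N = 5 and N = 15, in contrast with the polynomial decay of the plain sinc series.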

The amplitude error arises when the exact values g(nh) in (1.11) are replaced by approximations g̃(nh). We assume that the g̃(nh) are close to the g(nh), i.e., there is a sufficiently small ε > 0 such that

\[
\sup_{n\in\mathbb{Z}_N(\lambda)} \bigl|g(nh) - \tilde g(nh)\bigr| < \varepsilon.
\]
(1.15)

Let h ∈ (0, π/σ), ω := (π − hσ)/2 and N ∈ ℕ be fixed numbers. The authors in [20] proved that if (1.15) holds, then for |ℑλ| < Nh we have

\[
\bigl|(\mathcal{G}_{h,N}\, g)(\lambda) - (\mathcal{G}_{h,N}\, \tilde g)(\lambda)\bigr| \le A_{\varepsilon,N}(\lambda),
\]
(1.16)

where

\[
A_{\varepsilon,N}(\lambda) = 2\varepsilon\, e^{\omega/(4N)}\left(1 + \sqrt{\frac{N}{\omega\pi}}\right)\exp\bigl((\omega + \pi)\, h^{-1} |\Im\lambda|\bigr).
\]
(1.17)

Without an eigenparameter in the boundary conditions, Tharwat et al. computed approximations to the eigenvalues of the discontinuous Dirac system studied in [22] by Hermite interpolation in [21] and by the regularized sinc-method in [12]. In the regularized sinc-method, as in the Hermite interpolation method, the basic idea is as follows. The eigenvalues are characterized as the zeros of an analytic function F(λ) which can be written in the form F(λ) = f₀(λ) + f(λ), where f₀(λ) is a known part. The ingenuity of the approach lies in choosing the function F(λ) so that the unknown part f(λ) belongs to B_σ² and can therefore be approximated by the WKS sampling theorem once its values at some equally spaced points are known; see [9–12]. Recall that, in the regularized sinc and Hermite interpolation methods, it is necessary that f(λ) is an L²-function. In this paper we use the sinc-Gaussian sampling formula (1.11) to compute the eigenvalues of (1.1)-(1.5) numerically. As expected, the new method reduces the error bounds remarkably (see the examples in Section 4). Here, too, the basic idea is to write the function of eigenvalues as the sum of two terms, one known and the other unknown, where the unknown term is an entire function of exponential type satisfying (1.12); in other words, it is not necessarily an L²-function. We then approximate the unknown part using (1.11) and obtain better results. We would like to mention that papers on computing eigenvalues by the sinc-Gaussian method are few; see [20, 23–25]. In Sections 2 and 3 we derive the sinc-Gaussian technique for computing the eigenvalues of (1.1)-(1.5) with error estimates. The last section contains some illustrative examples.

2 Preliminaries

In this section we derive approximate values of the eigenvalues of problem (1.1)-(1.5). Recall that problem (1.1)-(1.5) has a denumerable set of real and simple eigenvalues, cf. [26]; see also [22, 2729]. Let

\[
y(\cdot,\lambda) = \begin{pmatrix} y_1(\cdot,\lambda) \\ y_2(\cdot,\lambda) \end{pmatrix}, \qquad
y_i(x,\lambda) = \begin{cases} y_{i1}(x,\lambda), & x \in [-1,0), \\ y_{i2}(x,\lambda), & x \in (0,1], \end{cases} \qquad i = 1,2,
\]
(2.1)

be the solution of (1.1) satisfying the following initial conditions:

\[
\begin{pmatrix} y_{11}(-1,\lambda) & y_{12}(0^+,\lambda) \\ y_{21}(-1,\lambda) & y_{22}(0^+,\lambda) \end{pmatrix}
=
\begin{pmatrix} \cos\alpha & \delta^{-1}\, y_{11}(0^-,\lambda) \\ \sin\alpha & \delta^{-1}\, y_{21}(0^-,\lambda) \end{pmatrix}.
\]
(2.2)

In [26], Tharwat proved the existence and uniqueness of this solution. Since y(·,λ) satisfies (1.2), the eigenvalues of problem (1.1)-(1.5) are the zeros of the function (see Lemma 2.4 of [26, p.8])

\[
\Delta(\lambda) = \delta^2\bigl((a_1 + \lambda\sin\beta)\, y_{12}(1,\lambda) - (a_2 + \lambda\cos\beta)\, y_{22}(1,\lambda)\bigr).
\]
(2.3)

Notice that both y(·,λ) and Δ(λ) are entire functions of λ, and y(·,λ) satisfies the system of integral equations (cf. [26])

\[
y_{11}(x,\lambda) = \cos\bigl(\lambda(x+1)+\alpha\bigr) - S_{-1,1}\, y_{11}(x,\lambda) - \tilde S_{-1,2}\, y_{21}(x,\lambda),
\]
(2.4)
\[
y_{21}(x,\lambda) = \sin\bigl(\lambda(x+1)+\alpha\bigr) + \tilde S_{-1,1}\, y_{11}(x,\lambda) - S_{-1,2}\, y_{21}(x,\lambda),
\]
(2.5)
\[
y_{12}(x,\lambda) = \frac{1}{\delta}\, y_{11}(0^-,\lambda)\cos(\lambda x) - \frac{1}{\delta}\, y_{21}(0^-,\lambda)\sin(\lambda x) - S_{0,1}\, y_{12}(x,\lambda) - \tilde S_{0,2}\, y_{22}(x,\lambda),
\]
(2.6)
\[
y_{22}(x,\lambda) = \frac{1}{\delta}\, y_{11}(0^-,\lambda)\sin(\lambda x) + \frac{1}{\delta}\, y_{21}(0^-,\lambda)\cos(\lambda x) + \tilde S_{0,1}\, y_{12}(x,\lambda) - S_{0,2}\, y_{22}(x,\lambda),
\]
(2.7)

where S_{−1,i}, S̃_{−1,i}, S_{0,i} and S̃_{0,i}, i = 1,2, are the Volterra integral operators defined by

\[
\begin{aligned}
S_{-1,i}\,\varphi(x,\lambda) &:= \int_{-1}^{x} \sin\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt, &
\tilde S_{-1,i}\,\varphi(x,\lambda) &:= \int_{-1}^{x} \cos\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt, \\
S_{0,i}\,\varphi(x,\lambda) &:= \int_{0}^{x} \sin\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt, &
\tilde S_{0,i}\,\varphi(x,\lambda) &:= \int_{0}^{x} \cos\lambda(x-t)\, r_i(t)\,\varphi(t,\lambda)\,dt.
\end{aligned}
\]

For convenience, we define the constants

\[
\begin{aligned}
c_1 &:= \int_{-1}^{0}\bigl[|r_1(t)| + |r_2(t)|\bigr]\,dt, & c_2 &:= c_1 \exp(c_1), \\
c_3 &:= \int_{0}^{1}\bigl[|r_1(t)| + |r_2(t)|\bigr]\,dt, & c_4 &:= c_2 + \frac{2}{|\delta|}(1 + c_2), \\
c_5 &:= \max\bigl\{|a_1| + |a_2|,\ |\sin\beta| + |\cos\beta|\bigr\}.
\end{aligned}
\]
(2.8)

Define z_{−1,i}(·,λ) and z_{0,i}(·,λ), i = 1,2, by

\[
z_{-1,1}(x,\lambda) := S_{-1,1}\, y_{11}(x,\lambda) + \tilde S_{-1,2}\, y_{21}(x,\lambda), \qquad
z_{-1,2}(x,\lambda) := \tilde S_{-1,1}\, y_{11}(x,\lambda) - S_{-1,2}\, y_{21}(x,\lambda),
\]
(2.9)
\[
z_{0,1}(x,\lambda) := S_{0,1}\, y_{12}(x,\lambda) + \tilde S_{0,2}\, y_{22}(x,\lambda), \qquad
z_{0,2}(x,\lambda) := \tilde S_{0,1}\, y_{12}(x,\lambda) - S_{0,2}\, y_{22}(x,\lambda).
\]
(2.10)

Lemma 2.1 The functions z_{−1,1}(x,λ) and z_{−1,2}(x,λ) are entire in λ for any fixed x ∈ [−1,0) and satisfy the growth condition

\[
|z_{-1,1}(x,\lambda)|,\ |z_{-1,2}(x,\lambda)| \le 2c_2\, e^{|\Im\lambda|(x+1)}, \qquad \lambda\in\mathbb{C}.
\]
(2.11)

Proof Since z_{−1,1}(x,λ) = S_{−1,1} y_{11}(x,λ) + S̃_{−1,2} y_{21}(x,λ), from (2.4) and (2.5) we obtain

\[
z_{-1,1}(x,\lambda) = S_{-1,1}\cos\bigl(\lambda(x+1)+\alpha\bigr) + \tilde S_{-1,2}\sin\bigl(\lambda(x+1)+\alpha\bigr) - S_{-1,1}\, z_{-1,1}(x,\lambda) + \tilde S_{-1,2}\, z_{-1,2}(x,\lambda).
\]

Using the inequalities |sin z| ≤ e^{|ℑz|} and |cos z| ≤ e^{|ℑz|}, z ∈ ℂ, leads for λ ∈ ℂ to

\[
\begin{aligned}
|z_{-1,1}(x,\lambda)| &\le \bigl|S_{-1,1}\cos(\lambda(x+1)+\alpha)\bigr| + \bigl|\tilde S_{-1,2}\sin(\lambda(x+1)+\alpha)\bigr| + \bigl|S_{-1,1}\, z_{-1,1}(x,\lambda)\bigr| + \bigl|\tilde S_{-1,2}\, z_{-1,2}(x,\lambda)\bigr| \\
&\le 2 e^{|\Im\lambda|(x+1)} \int_{-1}^{x}\bigl[|r_1(t)| + |r_2(t)|\bigr]\,dt + e^{|\Im\lambda|(x+1)} \int_{-1}^{x}\bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr] e^{-|\Im\lambda|(t+1)}\,dt \\
&\le 2 c_1 e^{|\Im\lambda|(x+1)} + e^{|\Im\lambda|(x+1)} \int_{-1}^{x}\bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr] e^{-|\Im\lambda|(t+1)}\,dt.
\end{aligned}
\]

The above inequality can be reduced to

\[
e^{-|\Im\lambda|(x+1)}\,|z_{-1,1}(x,\lambda)| \le 2c_1 + \int_{-1}^{x}\bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr] e^{-|\Im\lambda|(t+1)}\,dt.
\]
(2.12)

Similarly, we can prove that

\[
e^{-|\Im\lambda|(x+1)}\,|z_{-1,2}(x,\lambda)| \le 2c_1 + \int_{-1}^{x}\bigl[|r_1(t)|\,|z_{-1,1}(t,\lambda)| + |r_2(t)|\,|z_{-1,2}(t,\lambda)|\bigr] e^{-|\Im\lambda|(t+1)}\,dt.
\]
(2.13)

Then (2.12), (2.13) and Lemma 3.1 of [28, p.204] yield (2.11). □

In a similar manner one proves the following lemma for z_{0,1}(·,λ) and z_{0,2}(·,λ).

Lemma 2.2 The functions z_{0,1}(x,λ) and z_{0,2}(x,λ) are entire in λ for any fixed x ∈ (0,1] and satisfy the growth condition

\[
|z_{0,1}(x,\lambda)|,\ |z_{0,2}(x,\lambda)| \le 2 c_3 c_4\, e^{|\Im\lambda|(x+1)}, \qquad \lambda\in\mathbb{C}.
\]
(2.14)

Proof Since z_{0,1}(x,λ) = S_{0,1} y_{12}(x,λ) + S̃_{0,2} y_{22}(x,λ), from (2.6) and (2.7) we obtain

\[
\begin{aligned}
z_{0,1}(x,\lambda) ={}& \frac{1}{\delta}\, y_{11}(0^-,\lambda)\, S_{0,1}\cos(\lambda x) - \frac{1}{\delta}\, y_{21}(0^-,\lambda)\, S_{0,1}\sin(\lambda x) - S_{0,1}\, z_{0,1}(x,\lambda) \\
&+ \frac{1}{\delta}\, y_{11}(0^-,\lambda)\, \tilde S_{0,2}\sin(\lambda x) + \frac{1}{\delta}\, y_{21}(0^-,\lambda)\, \tilde S_{0,2}\cos(\lambda x) + \tilde S_{0,2}\, z_{0,2}(x,\lambda).
\end{aligned}
\]

Then from (2.4), (2.5) and Lemma 2.1 we get

\[
\begin{aligned}
|z_{0,1}(x,\lambda)| \le{}& \frac{1}{|\delta|}\,\bigl|y_{11}(0^-,\lambda)\bigr|\,\bigl|S_{0,1}\cos(\lambda x)\bigr| + \frac{1}{|\delta|}\,\bigl|y_{21}(0^-,\lambda)\bigr|\,\bigl|S_{0,1}\sin(\lambda x)\bigr| + \bigl|S_{0,1}\, z_{0,1}(x,\lambda)\bigr| \\
&+ \frac{1}{|\delta|}\,\bigl|y_{11}(0^-,\lambda)\bigr|\,\bigl|\tilde S_{0,2}\sin(\lambda x)\bigr| + \frac{1}{|\delta|}\,\bigl|y_{21}(0^-,\lambda)\bigr|\,\bigl|\tilde S_{0,2}\cos(\lambda x)\bigr| + \bigl|\tilde S_{0,2}\, z_{0,2}(x,\lambda)\bigr| \\
\le{}& 2\left(c_2 + \frac{2}{|\delta|}(1 + c_2)\right) c_3\, e^{|\Im\lambda|(x+1)} = 2 c_3 c_4\, e^{|\Im\lambda|(x+1)}.
\end{aligned}
\]

Similarly, we can prove that

\[
|z_{0,2}(x,\lambda)| \le 2 c_3 c_4\, e^{|\Im\lambda|(x+1)}.
\]

 □

3 The numerical scheme

In this section we derive the method of computing the eigenvalues of problem (1.1)-(1.5) numerically. The basic idea of the scheme is to split Δ(λ) into two parts, a known part K(λ) and an unknown one U(λ). We then approximate U(λ) using (1.11), obtain an approximation of Δ(λ), and compute its zeros. We first split Δ(λ) as follows:

Δ(λ):=K(λ)+U(λ),
(3.1)

where U(λ) is the unknown part involving integral operators

\[
\begin{aligned}
U(\lambda) :={}& \delta\bigl[a_2\sin\lambda - a_1\cos\lambda + \lambda\sin(\lambda-\beta)\bigr]\, z_{-1,1}(0,\lambda) \\
&- \delta\bigl[a_1\sin\lambda + a_2\cos\lambda + \lambda\cos(\lambda-\beta)\bigr]\, z_{-1,2}(0,\lambda) \\
&- \delta^2\bigl[(a_1 + \lambda\sin\beta)\, z_{0,1}(1,\lambda) + (a_2 + \lambda\cos\beta)\, z_{0,2}(1,\lambda)\bigr]
\end{aligned}
\]
(3.2)

and K(λ) is the known part

\[
K(\lambda) := \delta\bigl[a_1\cos(2\lambda+\alpha) - a_2\sin(2\lambda+\alpha) - \lambda\sin(2\lambda+\alpha-\beta)\bigr].
\]
(3.3)

Then, from Lemma 2.1 and Lemma 2.2, we have the following result.

Lemma 3.1 The function U(λ) is entire in λ and the following estimate holds:

\[
|U(\lambda)| \le \phi(|\lambda|)\, e^{2|\Im\lambda|},
\]
(3.4)

where

\[
\phi(\lambda) := M(1 + |\lambda|), \qquad M := 2|\delta|\, c_5\bigl(c_2 + |\delta|\, c_3 c_4\bigr).
\]
(3.5)

Proof From (3.2) we have

\[
\begin{aligned}
|U(\lambda)| \le{}& |\delta|\bigl[|a_2|\,|\sin\lambda| + |a_1|\,|\cos\lambda| + |\lambda|\,|\sin(\lambda-\beta)|\bigr]\,\bigl|z_{-1,1}(0,\lambda)\bigr| \\
&+ |\delta|\bigl[|a_1|\,|\sin\lambda| + |a_2|\,|\cos\lambda| + |\lambda|\,|\cos(\lambda-\beta)|\bigr]\,\bigl|z_{-1,2}(0,\lambda)\bigr| \\
&+ \delta^2\bigl[(|a_1| + |\lambda|\,|\sin\beta|)\,\bigl|z_{0,1}(1,\lambda)\bigr| + (|a_2| + |\lambda|\,|\cos\beta|)\,\bigl|z_{0,2}(1,\lambda)\bigr|\bigr].
\end{aligned}
\]

Using the inequalities |sin λ| ≤ e^{|ℑλ|} and |cos λ| ≤ e^{|ℑλ|}, λ ∈ ℂ, together with Lemma 2.1 and Lemma 2.2, implies (3.4). □

Thus U(λ) is an entire function of exponential type 2. In the following we let λ ∈ ℝ, since all eigenvalues are real. We now approximate the function U(λ) using the operator (1.11), where h ∈ (0, π/2) and ω := (π − 2h)/2; then from (1.13) we obtain

\[
\bigl|U(\lambda) - (\mathcal{G}_{h,N}\, U)(\lambda)\bigr| \le T_{h,N}(\lambda),
\]
(3.6)

where

\[
T_{h,N}(\lambda) := 2\,\bigl|\sin\bigl(h^{-1}\pi\lambda\bigr)\bigr|\,\phi\bigl(|\lambda| + h(N+1)\bigr)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_N(0), \qquad \lambda\in\mathbb{R}.
\]
(3.7)

The samples U(nh) = Δ(nh) − K(nh), n ∈ ℤ_N(λ), cannot be computed explicitly in the general case. We approximate these samples numerically by solving the initial value problems defined by (1.1) and (2.2) to obtain the approximate values Ũ(nh) = Δ̃(nh) − K(nh), n ∈ ℤ_N(λ). Here we use the computer algebra system Mathematica to obtain approximate solutions with the required accuracy; a separate study of the effect of different numerical schemes and of the computational cost would nevertheless be interesting. Accordingly, we have the explicit expansion

\[
(\mathcal{G}_{h,N}\,\tilde U)(\lambda) := \sum_{n\in\mathbb{Z}_N(\lambda)} \tilde U(nh)\,\operatorname{sinc}\bigl(h^{-1}\pi\lambda - n\pi\bigr)\, G\!\left(\frac{\sqrt{\omega}\,(\lambda - nh)}{\sqrt{N}\,h}\right).
\]
(3.8)

Therefore we get (cf. (1.16))

\[
\bigl|(\mathcal{G}_{h,N}\, U)(\lambda) - (\mathcal{G}_{h,N}\,\tilde U)(\lambda)\bigr| \le A_{\varepsilon,N}(0), \qquad \lambda\in\mathbb{R},
\]
(3.9)

Now let Δ ˜ N (λ):=K(λ)+( G h , N U ˜ )(λ). From (3.6) and (3.9) we obtain

\[
\bigl|\Delta(\lambda) - \tilde\Delta_N(\lambda)\bigr| \le T_{h,N}(\lambda) + A_{\varepsilon,N}(0), \qquad \lambda\in\mathbb{R}.
\]
(3.10)
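The whole scheme of this section can be sketched in a few lines. In the snippet below, Δ, K and hence U = Δ − K form a manufactured toy splitting with U entire of type 2 and linear growth on ℝ (so that (1.12) holds); the Gaussian factor exp(−ω(λ − nh)²/(Nh²)) in the operator is our reading of (1.11). The approximate characteristic function Δ̃_N = K + G_{h,N}Ũ is then fed to a bisection routine.

```python
import numpy as np

def sinc_gauss(u, lam, h, N, sigma=2.0):
    """Sinc-Gaussian operator (1.11)/(3.8) for real lam (assumed Gaussian factor)."""
    omega = (np.pi - h * sigma) / 2.0
    k = int(np.floor(lam / h + 0.5))             # centre of the window Z_N(lam)
    n = np.arange(k - N, k + N + 1)
    return float(np.sum(u(n * h) * np.sinc(lam / h - n)
                        * np.exp(-omega * (lam - n * h) ** 2 / (N * h * h))))

# Manufactured splitting Delta = K + U: U(t) = (1+t) sin(2t) plays the role
# of the unknown part; only samples of Delta and the closed form of K are
# used to build the approximation, as in the text.
K = lambda t: np.cos(2.0 * t)
Delta = lambda t: np.cos(2.0 * t) + (1.0 + t) * np.sin(2.0 * t)
U_tilde = lambda t: Delta(t) - K(t)              # samples of the unknown part

h, N = 0.5, 20
Delta_N = lambda t: K(t) + sinc_gauss(U_tilde, t, h, N)

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a)*f(b) < 0."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Delta changes sign on [1.3, 1.4]; compare the zero of Delta with the
# zero of its sinc-Gaussian reconstruction Delta_N.
root_exact = bisect(Delta, 1.3, 1.4)
root_approx = bisect(Delta_N, 1.3, 1.4)
```

With N = 20 and h = 0.5 the truncation error of the operator is of order e^{−ωN}, so the zero of Δ̃_N agrees with the zero of Δ to far more digits than any plot can show.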

Let λ* be an eigenvalue and λ_N its desired approximation, i.e., Δ(λ*) = 0 and Δ̃_N(λ_N) = 0. From (3.10) we have |Δ̃_N(λ*)| ≤ T_{h,N}(λ*) + A_{ε,N}(0). Define the curves

\[
a_\pm(\lambda) = \tilde\Delta_N(\lambda) \pm \bigl(T_{h,N}(\lambda) + A_{\varepsilon,N}(0)\bigr).
\]
(3.11)

The curves a₊(λ), a₋(λ) enclose the curve of Δ(λ) for suitably large N. Hence the closure interval is determined by solving a_±(λ) = 0, which gives an interval

\[
I_{\varepsilon,N} := [a_-, a_+].
\]

It is worth mentioning that the simplicity of the eigenvalues guarantees the existence of the approximate eigenvalues, i.e., of the λ_N for which Δ̃_N(λ_N) = 0. Next we estimate the error |λ* − λ_N| for the eigenvalue λ*.

Theorem 3.2 Let λ* be an eigenvalue of (1.1)-(1.5) and let λ_N be its approximation. Then, for λ ∈ ℝ, we have the following estimate:

\[
|\lambda^* - \lambda_N| < \frac{T_{h,N}(\lambda_N) + A_{\varepsilon,N}(0)}{\inf_{\zeta\in I_{\varepsilon,N}} |\Delta'(\zeta)|},
\]
(3.12)

where the interval I ε , N is defined above.

Proof Replacing λ by λ N in (3.10), we obtain

\[
\bigl|\Delta(\lambda_N) - \Delta(\lambda^*)\bigr| < T_{h,N}(\lambda_N) + A_{\varepsilon,N}(0),
\]
(3.13)

where we have used Δ̃_N(λ_N) = Δ(λ*) = 0. Using the mean value theorem yields that for some ζ ∈ J_{ε,N} := [min(λ*, λ_N), max(λ*, λ_N)],

\[
\bigl|(\lambda^* - \lambda_N)\,\Delta'(\zeta)\bigr| < T_{h,N}(\lambda_N) + A_{\varepsilon,N}(0), \qquad \zeta \in J_{\varepsilon,N} \subset I_{\varepsilon,N}.
\]
(3.14)

Since λ* is simple and N is sufficiently large, inf_{ζ∈I_{ε,N}} |Δ′(ζ)| > 0, and we get (3.12). □

4 Numerical examples

This section includes two examples illustrating the sinc-Gaussian method; it is clearly seen that the method gives remarkably good results. In these two examples we indicate the effect of the amplitude error on the method by determining enclosure intervals for different values of ε, and we indicate the effect of N and h by several choices. We would like to mention that Mathematica has been used to obtain the "exact" values in these examples, since the eigenvalues cannot be computed in closed form; Mathematica is also used in rounding off the exact eigenvalues. Each example is presented via figures that illustrate the procedure near some of the approximated eigenvalues. More explanations are given below.

Example 4.1 Consider the system

\[
y_2'(x) - r(x)\, y_1(x) = \lambda y_1(x), \qquad -\bigl(y_1'(x) + r(x)\, y_2(x)\bigr) = \lambda y_2(x), \qquad x \in [-1,0)\cup(0,1],
\]
(4.1)
\[
y_1(-1) = 0, \qquad (1+\lambda)\, y_1(1) + y_2(1) = 0,
\]
(4.2)
\[
y_1(0^-) - 2\, y_1(0^+) = 0, \qquad y_2(0^-) - 2\, y_2(0^+) = 0.
\]
(4.3)

Here

\[
r_1(x) = r_2(x) = r(x) = \begin{cases} x, & x \in [-1,0), \\ x^2, & x \in (0,1], \end{cases}
\]
(4.4)

α = β = π/2, a₁ = 1, a₂ = −1 and δ = 2. Direct calculations give

\[
K(\lambda) = 2\bigl(\cos[2\lambda] - (1+\lambda)\sin[2\lambda]\bigr)
\]
(4.5)

and

\[
\Delta(\lambda) = 2\Bigl(\cos\Bigl[\frac{1}{6} - 2\lambda\Bigr] + (1+\lambda)\sin\Bigl[\frac{1}{6} - 2\lambda\Bigr]\Bigr).
\]
(4.6)

As is clearly seen, the zeros of Δ(λ) cannot be computed in closed form. The following three tables (Tables 1, 2, 3) indicate the application of our technique to this problem and the effect of ε. By "exact" we mean the zeros of Δ(λ) computed by Mathematica.
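Since the characteristic function (4.6) is explicit, its zeros, the "exact" eigenvalues referred to in the tables, can also be located independently of Mathematica. The sketch below scans for sign changes and refines each bracket by bisection; Δ here is our reconstruction of (4.6) from the garbled source, so treat the snippet as illustrative.

```python
import numpy as np

def Delta(lam):
    """Characteristic function (4.6) of Example 4.1 (as reconstructed)."""
    th = 1.0 / 6.0 - 2.0 * lam
    return 2.0 * (np.cos(th) + (1.0 + lam) * np.sin(th))

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a)*f(b) < 0."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# bracket eigenvalues by a sign-change scan, then refine each bracket
xs = np.linspace(-5.0, 15.0, 4001)
vals = Delta(xs)
eigs = [bisect(Delta, xs[i], xs[i + 1])
        for i in range(len(xs) - 1) if vals[i] * vals[i + 1] < 0.0]
```

For large λ the zeros behave like those of sin(1/6 − 2λ), i.e. they are asymptotically spaced by π/2, which is a useful sanity check on the scan.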

Table 1 The approximation λ_{k,N} and the exact solution λ_k for different choices of h and N
Table 2 Absolute error |λ_k − λ_{k,N}|
Table 3 For N = 20 and h = 0.2, the exact solutions λ_k are all inside the interval [a₋, a₊] for different values of ε

Figures 1 and 2 illustrate the enclosure intervals dominating λ_2 for N = 20, h = 0.2 and ε = 10⁻², ε = 10⁻⁵, respectively. The middle curve represents Δ(λ), while the upper and lower curves represent a₊(λ) and a₋(λ), respectively. We notice that for ε = 10⁻⁵ the two curves are almost identical. Similarly, Figures 3 and 4 illustrate the enclosure intervals dominating λ_1 for h = 0.2, N = 20 and ε = 10⁻², ε = 10⁻⁵, respectively.

Figure 1 The enclosure interval dominating λ_2 for h = 0.2, N = 20 and ε = 10⁻².

Figure 2 The enclosure interval dominating λ_2 for h = 0.2, N = 20 and ε = 10⁻⁵.

Figure 3 The enclosure interval dominating λ_1 for h = 0.2, N = 20 and ε = 10⁻².

Figure 4 The enclosure interval dominating λ_1 for h = 0.2, N = 20 and ε = 10⁻⁵.

Example 4.2 In this example we consider the system

\[
y_2'(x) - r(x)\, y_1(x) = \lambda y_1(x), \qquad -\bigl(y_1'(x) + r(x)\, y_2(x)\bigr) = \lambda y_2(x), \qquad x \in [-1,0)\cup(0,1],
\]
(4.7)
\[
\sqrt{3}\, y_1(-1) - y_2(-1) = 0, \qquad \Bigl(1 + \frac{1}{2}\lambda\Bigr)\, y_1(1) - \Bigl(1 + \frac{\sqrt{3}}{2}\lambda\Bigr)\, y_2(1) = 0,
\]
(4.8)
\[
y_1(0^-) - \sqrt{3}\, y_1(0^+) = 0, \qquad y_2(0^-) - \sqrt{3}\, y_2(0^+) = 0,
\]
(4.9)

where

\[
r_1(x) = r_2(x) = r(x) = \begin{cases} x+1, & x \in [-1,0), \\ x, & x \in (0,1], \end{cases}
\]
(4.10)

a₁ = a₂ = 1, α = π/3, β = π/6 and δ = √3. Direct calculations give

\[
K(\lambda) = \sqrt{3}\,\Bigl[\cos\Bigl[\frac{\pi}{3} + 2\lambda\Bigr] - \lambda\sin\Bigl[\frac{\pi}{6} + 2\lambda\Bigr] - \sin\Bigl[\frac{\pi}{3} + 2\lambda\Bigr]\Bigr]
\]
(4.11)

and

\[
\Delta(\lambda) = -\frac{\sqrt{3}}{2}\Bigl[\bigl(-1 + \sqrt{3} + \lambda\bigr)\cos[1 + 2\lambda] + \bigl(1 + \sqrt{3} + \sqrt{3}\,\lambda\bigr)\sin[1 + 2\lambda]\Bigr].
\]
(4.12)
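Because several signs and radicals in (4.7)-(4.12) had to be reconstructed from the garbled source (in particular we read δ = √3), it is worth cross-checking the closed form (4.12) against a direct numerical integration of the initial value problem defined by (1.1) and (2.2), which is exactly how the samples Ũ(nh) are produced in Section 3. The RK4 integrator below solves y₁′ = −(λ + r(x))y₂, y₂′ = (λ + r(x))y₁, our reading of system (4.7), on the two subintervals, applies the transmission conditions at 0, and evaluates (2.3); the two evaluations of Δ(λ) should agree to high accuracy.

```python
import numpy as np

def rk4(f, y, x0, x1, steps=2000):
    """Classical fourth-order Runge-Kutta from x0 to x1."""
    h = (x1 - x0) / steps
    x = x0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2.0, y + h / 2.0 * k1)
        k3 = f(x + h / 2.0, y + h / 2.0 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x += h
    return y

def Delta_numeric(lam, delta=np.sqrt(3.0)):
    alpha, beta = np.pi / 3.0, np.pi / 6.0
    # left interval: r(x) = x + 1, initial values (cos alpha, sin alpha) as in (2.2)
    f1 = lambda x, y: np.array([-(lam + x + 1.0) * y[1], (lam + x + 1.0) * y[0]])
    yl = rk4(f1, np.array([np.cos(alpha), np.sin(alpha)]), -1.0, 0.0)
    # transmission conditions (4.9): y(0+) = y(0-)/delta; right interval: r(x) = x
    f2 = lambda x, y: np.array([-(lam + x) * y[1], (lam + x) * y[0]])
    yr = rk4(f2, yl / delta, 0.0, 1.0)
    # characteristic function (2.3) with a1 = a2 = 1
    return delta ** 2 * ((1.0 + lam * np.sin(beta)) * yr[0]
                         - (1.0 + lam * np.cos(beta)) * yr[1])

def Delta_closed(lam):
    """Closed form (4.12), as reconstructed."""
    s3 = np.sqrt(3.0)
    return -(s3 / 2.0) * ((-1.0 + s3 + lam) * np.cos(1.0 + 2.0 * lam)
                          + (1.0 + s3 + s3 * lam) * np.sin(1.0 + 2.0 * lam))
```

The agreement of the two evaluations over a range of λ gives some confidence that the reconstructed constants are internally consistent.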

Tables 4 and 5 give the exact eigenvalues {λ_k}_{k=−2}^{1} and their approximations for different values of h, N and ε. In Table 6 we give the absolute error for different values of h and N.

Table 4 The approximation λ_{k,N} and the exact solution λ_k for different choices of h and N
Table 5 For N = 20 and h = 0.1, the exact solutions λ_k are all inside the interval [a₋, a₊] for different values of ε
Table 6 Absolute error |λ_k − λ_{k,N}|

Here Figures 5, 6, 7 and 8 illustrate the enclosure intervals dominating λ_0 and λ_1 for h = 0.1, N = 20 and ε = 10⁻², ε = 10⁻⁵, respectively.

Figure 5 The enclosure interval dominating λ_0 for h = 0.1, N = 20 and ε = 10⁻².

Figure 6 The enclosure interval dominating λ_0 for h = 0.1, N = 20 and ε = 10⁻⁵.

Figure 7 The enclosure interval dominating λ_1 for h = 0.1, N = 20 and ε = 10⁻².

Figure 8 The enclosure interval dominating λ_1 for h = 0.1, N = 20 and ε = 10⁻⁵.

References

1. Kotel’nikov V: On the carrying capacity of the ‘ether’ and wire in telecommunications. In Material for the First All-Union Conference on Questions of Communications. Izd. Red. Upr. Svyazi RKKA, Moscow; 1933:55-64. (Russian)
2. Shannon CE: Communications in the presence of noise. Proc. IRE 1949, 37: 10-21.
3. Whittaker ET: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb., Sect. A 1915, 35: 181-194.
4. Butzer PL, Schmeisser G, Stens RL: An introduction to sampling analysis. In Nonuniform Sampling: Theory and Practices. Edited by: Marvasti F. Kluwer Academic, New York; 2001:17-121.
5. Kowalski M, Sikorski K, Stenger F: Selected Topics in Approximation and Computation. Oxford University Press, London; 1995.
6. Lund J, Bowers K: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia; 1992.
7. Stenger F: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 1981, 23: 156-224.
8. Stenger F: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York; 1993.
9. Boumenir A: Higher approximation of eigenvalues by the sampling method. BIT Numer. Math. 2000, 40: 215-225. doi:10.1023/A:1022334806027
10. Boumenir A: Sampling and eigenvalues of non-self-adjoint Sturm-Liouville problems. SIAM J. Sci. Comput. 2001, 23: 219-229. doi:10.1137/S1064827500374078
11. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Sturm-Liouville problems with parameter dependent boundary conditions using sinc method. Numer. Algorithms 2013, 63: 27-48. doi:10.1007/s11075-012-9609-3
12. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Dirac system using sinc method with error analysis. Int. J. Comput. Math. 2012, 89: 2061-2080. doi:10.1080/00207160.2012.700112
13. Butzer PL, Stens RL: A modification of the Whittaker-Kotel’nikov-Shannon sampling series. Aequ. Math. 1985, 28: 305-311. doi:10.1007/BF02189424
14. Gervais R, Rahman QI, Schmeisser G: A bandlimited function simulating a duration-limited one. In Approximation Theory and Functional Analysis. Edited by: Butzer PL, Stens RL. Birkhäuser, Basel; 1984:355-362.
15. Stens RL: Sampling by generalized kernels. In Sampling Theory in Fourier and Signal Analysis: Advanced Topics. Edited by: Higgins JR, Stens RL. Oxford University Press, Oxford; 1999:130-157.
16. Schmeisser G, Stenger F: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process. 2007, 6: 199-221.
17. Qian L: On the regularized Whittaker-Kotel’nikov-Shannon sampling formula. Proc. Am. Math. Soc. 2002, 131: 1169-1176.
18. Qian L, Creamer DB: A modification of the sampling series with a Gaussian multiplier. Sampl. Theory Signal Image Process. 2006, 5: 1-20.
19. Qian L, Creamer DB: Localized sampling in the presence of noise. Appl. Math. Lett. 2006, 19: 351-355. doi:10.1016/j.aml.2005.05.013
20. Annaby MH, Asharabi RM: Computing eigenvalues of boundary-value problems using sinc-Gaussian method. Sampl. Theory Signal Image Process. 2008, 7: 293-312.
21. Tharwat MM, Bhrawy AH: Computation of eigenvalues of discontinuous Dirac system using Hermite interpolation technique. Adv. Differ. Equ. 2012. doi:10.1186/1687-1847-2012-59
22. Tharwat MM, Yildirim A, Bhrawy AH: Sampling of discontinuous Dirac systems. Numer. Funct. Anal. Optim. 2013, 34: 323-348. doi:10.1080/01630563.2012.693565
23. Bhrawy AH, Tharwat MM, Al-Fhaid A: Numerical algorithms for computing eigenvalues of discontinuous Dirac system using sinc-Gaussian method. Abstr. Appl. Anal. 2012. doi:10.1155/2012/925134
24. Annaby MH, Tharwat MM: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math. 2013, 63: 129-137.
25. Tharwat MM, Bhrawy AH, Alofi AS: Approximation of eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions. Bound. Value Probl. 2013. doi:10.1186/1687-2770-2013-132
26. Tharwat MM: On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions. Bound. Value Probl. 2013. doi:10.1186/1687-2770-2013-65
27. Levitan BM, Sargsjan IS: Introduction to Spectral Theory: Self Adjoint Ordinary Differential Operators. Translations of Mathematical Monographs 39. Am. Math. Soc., Providence; 1975.
28. Levitan BM, Sargsjan IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht; 1991.
29. Tharwat MM: Discontinuous Sturm-Liouville problems and associated sampling theories. Abstr. Appl. Anal. 2011. doi:10.1155/2011/610232

Acknowledgements

This research was supported by a grant from the Institute of Scientific Research at Umm Al-Qura University, Saudi Arabia.

Author information

Corresponding author

Correspondence to Mohammed M Tharwat.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors have equal contributions to each part of this article. All the authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Tharwat, M.M., Al-Harbi, S.M. Approximation of eigenvalues of boundary value problems. Bound Value Probl 2014, 51 (2014). https://doi.org/10.1186/1687-2770-2014-51
