First, let

be the Fourier transform of

:

Taking the Fourier transform of (1.2), we obtain a family of problems parameterized by

:

The solution can easily be verified to be

Following the idea of the Fourier method, we consider (2.2) only for

by cutting off high frequency and define a regularized solution:

where

is the characteristic function of the interval

. The solution

can be found by using the inverse Fourier transform. Define the regularized solution with measured data

by

. The difference between the exact solution

and the regularized solution

can be divided into two parts:

We rewrite (2.2) as a system of ordinary differential equations:

Letting

, we can rewrite (2.7) as

where the matrix

is the one in (2.7). The reason for using

instead of

in the definition of

is that with this choice, the matrix

is normal and hence can be diagonalized by a unitary matrix. The eigenvalues of

are

. Thus we can factorize

as

where

is a unitary matrix,

It follows that the solution of the system (2.8) can be written as

Since

is unitary,

and therefore

where

denotes both the Euclidean norm in the complex vector space

and the subordinate matrix norm. Recalling

in (2.11), we obtain

This inequality is valid for all
and we can integrate over
and use Parseval's theorem to obtain estimates for
and
in the
-norm. First we will prove a bound on the difference between any two regularized solutions (2.4). We have errors in the measured
and
. These two cases are treated separately.
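Before turning to the lemmas, the cutoff operation defining the regularized solution (2.4) can be sketched numerically. The grid, the test signal, and the cutoff level below are illustrative assumptions, not quantities from the text; only the truncation of high frequencies itself mirrors the characteristic-function multiplication in (2.3)–(2.4).

```python
import numpy as np

def truncate_frequencies(g, dx, xi_max):
    """Regularize data g by zeroing all Fourier modes with |xi| > xi_max.

    This mimics multiplying g_hat by the characteristic function of the
    interval [-xi_max, xi_max]: the high frequencies, which the ill-posed
    problem amplifies exponentially, are simply discarded.
    """
    g_hat = np.fft.fft(g)
    xi = 2.0 * np.pi * np.fft.fftfreq(len(g), d=dx)  # angular frequency grid
    g_hat[np.abs(xi) > xi_max] = 0.0                 # apply the cutoff
    return np.fft.ifft(g_hat).real

# Illustrative data: a smooth signal plus a high-frequency perturbation.
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
dx = x[1] - x[0]
g_noisy = np.sin(x) + 0.01 * np.sin(200.0 * x)
g_reg = truncate_frequencies(g_noisy, dx, xi_max=10.0)
```

With the cutoff at `xi_max = 10`, the mode at frequency 200 is removed while the smooth component at frequency 1 passes through unchanged.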

Lemma 2.1.

Assume that one has two regularized solutions

and

defined by (2.4), with the Cauchy data

and

. Then

Proof.

The function

satisfies the differential equation; thus

solves (2.2), with initial data given by

and thus

satisfies inequality (2.12). We have

If

, then

holds. By inserting (2.15) into (2.16) we get

If we insert

and integrate over

, then we obtain

Thus (2.13) holds.

Lemma 2.2.

Assume that one has two regularized solutions

and

defined by (2.4), with the Cauchy data

and

. Then

Proof.

The function

satisfies the differential equation, with initial data given by

and thus inequality (2.12) holds. It follows that

and thus, for

,

By integrating over the interval

, we get

Since

is equal to zero outside the interval

, we can extend the integrals. Inserting

we get

This is precisely (2.19).

Next we prove that, for the regularized problem, the solution depends stably on the data. By using the two previous lemmas, we get the following.

Lemma 2.3 (stability).

Assume that

is the regularized solution (2.4), with exact data

and

is the regularized solution with noisy data

; then

Proof.

Let

be a regularized solution defined by (2.4), with data

. Then by Lemma 2.1,

By the triangle inequality,

This completes the proof.

By Lemma 2.3 the regularized solution depends continuously on the data. Next we derive a bound on the truncation error when we neglect frequencies
in (2.3). So far we have not used any information about
for
we assume that the Helmholtz equation
is valid in a large interval
. By imposing a priori bounds on the solution at
and at
, we obtain an estimate of the difference between the exact solution
and a regularized solution
with "cutoff" level
. This is a convergence result in the sense that
as
for the case of exact data
. The following estimate holds.

Lemma 2.4 (convergence).

Suppose that

is the solution of the problem (2.2), and that the Helmholtz equation is valid for

. Then the difference between

and a regularized solution

can be estimated:

where the constant

is defined by

Proof.

The solution of (2.2) can be written in the following form:

where

and

can be determined from the boundary conditions:

Solving for

and

we find that the solution can be written as:

We make the observation that

Using the expression (2.33) and the triangle inequality, we get

The first term on the right-hand side satisfies

Similarly, the second term can be estimated:

By combining these two expressions, we have

Thus the proof is complete.

Remark 2.5.

The constant
is well defined, but its value has to be estimated. From a numerical computation we conclude that
.

Remark 2.6.

When solving (2.2) numerically we need Cauchy data
, along the line
. The most natural way to obtain this is to use two thermocouples, located at
, and
, and compute
by solving a well-posed problem for the Helmholtz equation in the interval
. Hence it is natural to assume knowledge about
at a second point.

Let us summarize what we have so far. The constant

in Lemma 2.4 is unchanged. The propagated data error is estimated using Lemma 2.3:

and the truncation error is estimated using Lemma 2.4:

These two results can be combined into an error estimate for the spectral method. This is demonstrated in two examples.
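The combination is the triangle-inequality splitting already used in (2.6) and in the proof of Lemma 2.3. Writing $u$ for the exact solution, $v$ for the regularized solution with exact data, and $v^\delta$ for the regularized solution with noisy data (generic symbols, since the original notation is elided above), the total error splits as

\[
\|u - v^\delta\| \;\le\; \|u - v\| \;+\; \|v - v^\delta\|,
\]

where the first term is the truncation error bounded by Lemma 2.4 and the second is the propagated data error bounded by Lemma 2.3, so the two bounds add to give the total estimate.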

Example 2.7.

Suppose that

, and that we have an estimate of the noise level,

. If

and if we choose

, then by expression (2.42),

where we have assumed that

and used the bound

. By expression (2.41),

Thus we obtain an error estimate:

Note that, under these assumptions,
can be used as a rule for selecting the regularization parameter.
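The explicit selection rule is elided above, but a commonly used choice for Fourier-cutoff regularization of this type (an assumption here, not read off the text) picks the cutoff so that the exponential growth factor at the cutoff frequency exactly matches the signal-to-noise ratio $E/\delta$:

```python
import math

def cutoff_level(k, E, delta):
    """A standard cutoff rule for Fourier regularization (an assumption here,
    not the paper's elided formula): choose xi_max so that the growth factor
    exp(sqrt(xi_max**2 - k**2)) at the cutoff equals the ratio E/delta
    of the a priori bound to the noise level.
    """
    return math.sqrt(k**2 + math.log(E / delta)**2)

# Illustrative parameter values (assumptions):
xi_max = cutoff_level(k=10.0, E=1.0, delta=1e-3)
```

With this choice the amplification of the highest retained frequency is exactly `E/delta`, so the propagated data error stays of the same order as the a priori bound on the solution.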

Example 2.8.

Suppose that the Helmholtz equation is valid in the interval

and that we have a priori bounds

and

. Furthermore, we assume that the measured data satisfies

and

. Then we have the estimates:

By balancing these two components
and
, we can find a suitable value for the regularization parameter.
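The balancing step can be sketched as a one-dimensional root find: the propagated data error bound grows with the cutoff while the truncation error bound decays, so they cross exactly once. The two model bounds below (growth and decay rates, and all parameter values) are illustrative assumptions standing in for the elided expressions of Lemmas 2.3 and 2.4.

```python
import math

# Illustrative model bounds (assumptions, not the lemmas' exact expressions):
# the propagated data error grows with the cutoff, the truncation error decays.
k, E, delta = 10.0, 1.0, 1e-3

def data_error(xi):
    """Model bound delta * exp(sqrt(xi^2 - k^2)): increasing in xi."""
    return delta * math.exp(math.sqrt(max(xi**2 - k**2, 0.0)))

def truncation_error(xi):
    """Model bound E * exp(-sqrt(xi^2 - k^2)): decreasing in xi."""
    return E * math.exp(-math.sqrt(max(xi**2 - k**2, 0.0)))

def balance(lo=10.0, hi=100.0, iters=200):
    """Bisect for the cutoff at which the two error bounds are equal."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if data_error(mid) < truncation_error(mid):
            lo = mid  # data error still smaller: move the cutoff up
        else:
            hi = mid  # data error dominates: move the cutoff down
    return 0.5 * (lo + hi)

xi_star = balance()
```

At the balanced cutoff the two bounds agree, and the total error estimate is minimized up to a factor of two, since the sum of the two bounds is at most twice their common value there.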