
An inverse spectral problem for the matrix Sturm-Liouville operator on the half-line

Abstract

The matrix Sturm-Liouville operator with an integrable potential on the half-line is considered. We study the inverse spectral problem that consists in recovering this operator from its Weyl matrix. We provide necessary and sufficient conditions for a meromorphic matrix function to be the Weyl matrix of a non-self-adjoint matrix Sturm-Liouville operator. We also investigate the self-adjoint case and obtain a characterization of the spectral data as a corollary of our general result.

1 Introduction and main results

Inverse spectral problems consist in recovering differential operators from their spectral characteristics. Such problems arise in many areas of science and engineering, e.g., quantum mechanics, geophysics, astrophysics, and electronics. The most complete results have been obtained in the theory of inverse spectral problems for scalar Sturm-Liouville operators \(-y'' + q(x) y\) (see the monographs [1-4] and the references therein). The greatest progress in the study of Sturm-Liouville operators on the half-line was achieved by Marchenko [1]. He studied the inverse problem of recovering a non-self-adjoint locally integrable potential from the generalized spectral function by the method of the transformation operator. We also mention that Marchenko solved the inverse scattering problem on the half-line. Later Yurko showed that the inverse problem by the generalized spectral function is equivalent to the inverse problem by the generalized Weyl function [4]. These problems are closely related to the inverse problem for the wave equation \(u_{tt} = u_{xx} - q(x) u\). When the potential is integrable on the half-line, the generalized Weyl function turns into the ordinary Weyl function. Yurko studied inverse problems for the Sturm-Liouville operator with a potential from \(L(0, \infty)\) by the Weyl function and, in the self-adjoint case, by the spectral data. He developed a constructive algorithm for the solution of these problems and obtained necessary and sufficient conditions on the corresponding spectral characteristics; the details are presented in [4]. In this paper, we generalize his results to the matrix case.

Research on inverse matrix Sturm-Liouville problems started in connection with their applications in quantum mechanics [5]. Matrix Sturm-Liouville equations can also be used to describe the propagation of seismic [6] and electromagnetic waves [7]. Another important application is the integration of matrix nonlinear evolution equations such as the matrix KdV and Boomeron equations [8]. The theory of matrix Sturm-Liouville problems has been actively developed during the last twenty years. Trace formulas, eigenvalue asymptotics and some other aspects of direct problems were studied in the papers [9-13]. The works [14-18] contain the results of the most recent investigations of inverse problems for matrix Sturm-Liouville operators on a finite interval.

For the matrix Sturm-Liouville operator on the half-line, Agranovich and Marchenko [5] carried out extensive research on the inverse scattering problem, using the transformation operator method [1, 2]. Freiling and Yurko [19] started the investigation of the inverse spectral problem for the non-self-adjoint matrix Sturm-Liouville operator. They proved the uniqueness theorem and provided a constructive algorithm for the solution of the inverse problem by the so-called Weyl matrix (the generalization of the scalar Weyl function [1, 4]). Their approach is based on the method of spectral mappings (see [4, 20]), whose main ingredient is contour integration in the complex plane of the spectral parameter λ. We mention that a related inverse problem for the matrix wave equation was investigated in [21].

In this paper, we study the inverse problem for the matrix Sturm-Liouville operator on the half-line by the Weyl matrix. We present the necessary and sufficient conditions for the solvability of the inverse problem in the general non-self-adjoint situation. As a particular case, we consider the self-adjoint problem, and get the necessary and sufficient conditions on the spectral data of the self-adjoint operator. Our method is based on the approach of [19].

We proceed to the formulation of the problem. Consider the boundary value problem \(L = L(Q(x), h)\) for the matrix Sturm-Liouville equation

$$\begin{aligned}& \ell Y: = -Y''+ Q(x) Y = \lambda Y, \quad x > 0, \end{aligned}$$
(1)
$$\begin{aligned}& U(Y) := Y'(0) - h Y(0) = 0. \end{aligned}$$
(2)

Here, \(Y (x) = [y_{k}(x)]_{k = \overline{1, m}}\) is a column vector, λ is the spectral parameter, \(Q(x) = [Q_{jk}(x)]_{j, k = 1}^{m}\) is an \(m \times m\) matrix function with entries from \(L(0, \infty)\), and \(h = [h_{jk}]_{j, k = 1}^{m}\), where \(h_{jk}\) are complex numbers.

Let \(\lambda= \rho^{2}\), \(\rho= \sigma+ i \tau\), and let for definiteness \(\tau:= \operatorname{Im} \rho\ge0\). Denote by \(\Phi(x, \lambda) = [\Phi_{jk}(x, \lambda)]_{j, k = 1}^{m}\) the matrix solution of equation (1) satisfying the conditions \(U(\Phi) = I_{m}\) (\(I_{m}\) is the \(m \times m\) unit matrix) and \(\Phi(x, \lambda) = O(\exp(i \rho x))\) as \(x \to\infty\), for \(\rho\in \Omega:= \{ \rho\colon\operatorname{Im} \rho\ge0, \rho\ne0\}\). Denote \(M(\lambda) = \Phi(0, \lambda)\). We call the matrix functions \(\Phi(x, \lambda)\) and \(M(\lambda)\) the Weyl solution and the Weyl matrix of L, respectively. Below we show that the singularities of \(\Phi(x, \lambda)\) and \(M(\lambda)\) coincide with the spectrum of the problem L. The Weyl functions and their generalizations often appear in applications and in purely mathematical problems for various classes of differential operators. In this paper, we use the Weyl matrix as the main spectral characteristic and study the following problem.
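For instance, in the unperturbed case \(Q(x) \equiv0_{m}\) one can check directly that

$$\Phi(x, \lambda) = \exp(i \rho x) (i \rho I_{m} - h)^{-1},\qquad M(\lambda) = (i \rho I_{m} - h)^{-1} \quad \bigl(\det(i \rho I_{m} - h) \ne0\bigr), $$

since \(U(\Phi) = (i \rho I_{m} - h) (i \rho I_{m} - h)^{-1} = I_{m}\) and \(\Phi(x, \lambda) = O(\exp(i \rho x))\) as \(x \to\infty\).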

Inverse problem 1

Given the Weyl matrix \(M(\lambda)\), construct the potential Q and the coefficient h.

The paper is organized as follows. In Section 2, we present the most important properties of the Weyl matrix and briefly describe the solution of Inverse problem 1 given in [19]. By the method of spectral mappings, the nonlinear inverse problem is transformed into a linear equation in a Banach space of continuous matrix functions. In Section 3, we use this solution to obtain our main result, necessary and sufficient conditions for the solvability of Inverse problem 1. In the general non-self-adjoint situation, one has to require the solvability of the main equation in the necessary and sufficient conditions. Of course, it is not always easy to check this requirement, but one cannot avoid it even for the scalar Sturm-Liouville operator (examples are provided in [4]). Therefore we are particularly interested in the special cases when the solvability of the main equation can be easily checked. First of all, there is the self-adjoint case, studied in Sections 4 and 5. We introduce the spectral data and obtain their characterization. We also consider finite perturbations of the spectrum in Section 6. In this case, the main equation turns into a linear algebraic system, and one can easily verify its solvability.

2 Preliminaries

In this section, we provide the properties of the Weyl matrix and the algorithm for the solution of Inverse problem 1 by the method of spectral mappings. We give the results without proofs; see [5, 19] for more details.

We start by introducing the notation. We consider the space of complex column m-vectors \(\mathbb{C}^{m}\) with the norm

$$\| Y \| = \max_{1 \le j \le m} |y_{j}|,\quad Y = [y_{j}]_{j = \overline{1, m}}, $$

the space of complex \(m \times m\) matrices \(\mathbb{C}^{m \times m}\) with the corresponding induced norm

$$\| A \| = \max_{1 \le j \le m} \sum_{k = 1}^{m} |a_{jk} |,\quad A = [a_{jk}]_{j, k = \overline{1, m}}. $$

The symbols \(I_{m}\) and \(0_{m}\) are used for the unit \(m \times m\) matrix and the zero \(m \times m\) matrix, respectively. The symbol † denotes the conjugate transpose.

We use the notation \(\mathcal{A}(\mathcal{I}; \mathbb{C}^{m \times m})\) for the class of matrix functions \(F(x) = [f_{jk}(x)]_{j, k = \overline{1, m}}\) with entries \(f_{jk}(x)\) belonging to the class \(\mathcal{A}(\mathcal{I})\) of scalar functions; the symbol \(\mathcal{I}\) stands for an interval or a segment. For example, the potential Q belongs to the class \(L((0, \infty); \mathbb {C}^{m \times m})\).

Denote by Π the λ-plane with a cut \(\lambda\ge0\), and \(\Pi_{1}= \overline{\Pi} \backslash\{ 0 \}\); note that here Π and \(\Pi_{1}\) must be considered as subsets of the Riemann surface of the square root function. Then, under the map \(\rho\to\rho^{2} = \lambda\), \(\Pi_{1}\) corresponds to the domain \(\Omega= \{\rho\colon\operatorname{Im} \rho \ge0, \rho\ne0 \}\).

Let us introduce the matrix Jost solution \(e(x, \rho)\). Equation (1) has a unique matrix solution \(e(x, \rho) = [e_{jk}(x, \rho)]_{j, k = 1}^{m}\), \(\rho\in\Omega\), \(x \ge0\), satisfying the integral equation

$$ e(x, \rho) = \exp(i \rho x) I_{m} - \frac{1}{2 i \rho} \int_{x}^{\infty} \bigl(\exp \bigl(i \rho(x - t)\bigr) - \exp\bigl(i \rho(t - x)\bigr)\bigr) Q(t) e(t, \rho) \,dt. $$
(3)

The matrix function \(e(x, \rho)\) has the following properties:

(i1):

For \(x \to\infty\), \(\nu= 0, 1\), and each fixed \(\delta> 0\),

$$ e^{(\nu)}(x, \rho) = (i \rho)^{\nu} \exp(i \rho x) \bigl(I_{m} + o(1)\bigr), $$
(4)

uniformly in \(\Omega_{\delta} := \{ \operatorname{Im} \rho\ge0, |\rho| \ge\delta\}\).

(i2):

For \(\rho\to\infty\), \(\rho\in\Omega\), \(\nu= 0, 1\),

$$ e^{(\nu)}(x, \rho) = (i \rho)^{\nu} \exp(i \rho x) \biggl( I_{m} + \frac{\omega (x)}{i \rho} + o\bigl(\rho^{-1}\bigr) \biggr),\quad \omega(x) := - \frac{1}{2} \int_{x}^{\infty} Q(t) \,dt, $$
(5)

uniformly for \(x \ge0\).

(i3):

For each fixed \(x \ge0\) and \(\nu= 0, 1\), the matrix functions \(e^{(\nu)}(x, \rho)\) are analytic for \(\operatorname{Im} \rho> 0\) and continuous for \(\rho\in\Omega\).

(i4):

For \(\rho\in\mathbb{R} \backslash\{ 0\}\) the columns of the matrix functions \(e(x, \rho)\) and \(e(x, -\rho)\) form a fundamental system of solutions for equation (1).

The construction of the Jost solution in the matrix case was given in the Appendix of [22] for an even more general situation of the matrix pencil. In principle, the proof is not significantly different from the similar proof in the scalar case (see [4, Section 2]).
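For completeness, we sketch the standard estimate behind properties (i1)-(i3). Setting \(z(x, \rho) := \exp(-i \rho x) e(x, \rho)\), equation (3) becomes

$$z(x, \rho) = I_{m} - \frac{1}{2 i \rho} \int_{x}^{\infty} \bigl(1 - \exp\bigl(2 i \rho(t - x)\bigr)\bigr) Q(t) z(t, \rho) \,dt, $$

and, since \(|1 - \exp(2 i \rho(t - x))| \le2\) for \(\operatorname{Im} \rho\ge0\), \(t \ge x\), the method of successive approximations yields

$$\bigl\Vert z(x, \rho) - I_{m} \bigr\Vert \le\exp \biggl( \frac{1}{|\rho|} \int_{x}^{\infty} \bigl\Vert Q(t) \bigr\Vert \,dt \biggr) - 1,\quad \rho\in\Omega, $$

which implies, in particular, the case \(\nu= 0\) of (4) and the analyticity of \(e(x, \rho)\) for \(\operatorname{Im} \rho> 0\).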

Along with L we consider the problem \(L^{*} = L^{*}(Q(x), h)\) in the form

$$\begin{aligned}& \ell^{*} Z := -Z'' + Z Q(x) = \lambda Z, \quad x > 0, \end{aligned}$$
(6)
$$\begin{aligned}& U^{*}(Z) := Z'(0) - Z(0)h = 0, \end{aligned}$$
(7)

where Z is a row vector. Denote \(\langle Z, Y \rangle:= Z'Y - Z Y'\). If \(Y(x, \lambda)\) and \(Z(x, \lambda)\) satisfy equations (1) and (6), respectively, then

$$ \frac{d}{dx} \bigl\langle Z(x, \lambda), Y(x, \lambda) \bigr\rangle = 0, $$
(8)

so the expression \(\langle Z(x, \lambda), Y(x, \lambda) \rangle\) does not depend on x.
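Indeed, using equations (1) and (6), we obtain

$$\frac{d}{dx} \bigl\langle Z(x, \lambda), Y(x, \lambda) \bigr\rangle = Z'' Y - Z Y'' = (Z Q - \lambda Z) Y - Z (Q Y - \lambda Y) = 0_{m}. $$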

One can easily construct the Jost solution \(e^{*}(x, \rho)\) of equation (6) satisfying the integral equation

$$ e^{*}(x, \rho) = \exp(i \rho x) I_{m} - \frac{1}{2 i \rho} \int_{x}^{\infty} \bigl(\exp\bigl(i \rho(x - t)\bigr) - \exp\bigl(i \rho(t - x)\bigr) \bigr) e^{*}(t, \rho) Q(t) \,dt $$
(9)

and the same properties (i1)-(i4) as \(e(x, \rho)\).

If \(\rho\in\mathbb{R} \backslash\{0\}\), then

$$ \bigl\langle e^{*}(x, -\rho), e(x, \rho) \bigr\rangle = - 2 i \rho I_{m}. $$
(10)

Indeed, by virtue of (8), the expression \(\langle e^{*}(x, -\rho), e(x, \rho) \rangle\) does not depend on x. So one can take a limit as \(x \to\infty\) and use asymptotics (4) in order to derive (10).
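More precisely, by the asymptotics (4) for \(e(x, \rho)\) and \(e^{*}(x, -\rho)\),

$$\bigl\langle e^{*}(x, -\rho), e(x, \rho) \bigr\rangle = {e^{*}}'(x, -\rho) e(x, \rho) - e^{*}(x, -\rho) e'(x, \rho) \to- i \rho I_{m} - i \rho I_{m} = -2 i \rho I_{m},\quad x \to\infty. $$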

Denote \(u(\rho) := U(e(x, \rho)) = e'(0, \rho) - h e(0, \rho)\), \(\Delta(\rho) = \det u(\rho)\). By property (i3) of the Jost solution, the functions \(u(\rho)\) and \(\Delta(\rho)\) are analytic for \(\operatorname{Im} \rho> 0\) and continuous for \(\rho\in\Omega\).

Introduce the sets

$$\begin{aligned}& \Lambda= \bigl\{ \lambda= \rho^{2} \colon\rho\in\Omega, \Delta(\rho) = 0 \bigr\} , \\& \Lambda' = \bigl\{ \lambda= \rho^{2} \colon \operatorname{Im} \rho> 0, \Delta (\rho) = 0 \bigr\} , \\& \Lambda'' = \bigl\{ \lambda= \rho^{2} \colon\operatorname{Im} \rho= 0, \rho\ne0, \Delta(\rho) = 0 \bigr\} . \end{aligned}$$

It is known (see [19]) that the spectrum of the boundary value problem L consists of the positive half-line \(\{ \lambda\colon \lambda\ge0 \}\) and the discrete bounded set \(\Lambda= \Lambda' \cup\Lambda''\). The set of all nonzero eigenvalues coincides with the at most countable set \(\Lambda'\). The points of \(\Lambda''\) are called spectral singularities of L.

One can easily show that the Weyl solution and the Weyl matrix admit the following representations:

$$\begin{aligned}& \Phi(x, \lambda) = e(x, \rho) \bigl(u(\rho)\bigr)^{-1}, \end{aligned}$$
(11)
$$\begin{aligned}& M(\lambda) = e(0, \rho) \bigl(u(\rho)\bigr)^{-1}. \end{aligned}$$
(12)
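Indeed, for \(\Delta(\rho) \ne0\) the right-hand side of (11) is a solution of equation (1); it satisfies \(U(e(x, \rho) (u(\rho))^{-1}) = u(\rho) (u(\rho))^{-1} = I_{m}\) and, by (4), the estimate \(O(\exp(i \rho x))\) as \(x \to\infty\). Hence it coincides with the Weyl solution \(\Phi(x, \lambda)\), and (12) follows by setting \(x = 0\).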

Clearly, singularities of the Weyl matrix \(M(\lambda)\) coincide with the zeros of \(\Delta(\rho)\).

Lemma 1

([19, 22])

The Weyl matrix is analytic in Π outside the countable bounded set of poles \(\Lambda'\), and continuous in \(\Pi_{1}\) outside the bounded set Λ. For \(|\rho| \to\infty\), \(\rho\in\Omega\),

$$ M(\lambda) = \frac{1}{i\rho} \biggl( I_{m} + \frac{h}{i\rho} + \frac {\kappa (\rho)}{\rho} \biggr),\quad \kappa(\rho) = -i \int _{0}^{\infty} Q(t) e^{2 i \rho t} \,dt + O\bigl( \rho^{-2}\bigr). $$
(13)

Let \(\varphi(x, \lambda) = [\varphi_{jk}(x, \lambda)]_{j,k =1}^{m}\) and \(S(x, \lambda) = [S_{jk}(x, \lambda)]_{j, k = 1}^{m}\) be the matrix solutions of equation (1) under the initial conditions \(\varphi(0, \lambda) = I_{m}\), \(\varphi'(0, \lambda) = h\), \(S(0, \lambda) = 0_{m}\), \(S'(0, \lambda) = I_{m}\). For each fixed \(x \ge0\), these matrix functions are entire in the λ-plane. Further we also need the following relation:

$$ \Phi(x, \lambda) = S(x, \lambda) + \varphi(x, \lambda) M( \lambda). $$
(14)
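Relation (14) is verified by comparing the Cauchy data at \(x = 0\): the right-hand side of (14) is a solution of equation (1) with

$$S(0, \lambda) + \varphi(0, \lambda) M(\lambda) = M(\lambda) = \Phi(0, \lambda),\qquad S'(0, \lambda) + \varphi'(0, \lambda) M(\lambda) = I_{m} + h M(\lambda) = \Phi'(0, \lambda), $$

where we used \(U(\Phi) = I_{m}\); hence it coincides with \(\Phi(x, \lambda)\).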

Symmetrically, one can introduce the matrix solutions \(\Phi^{*}(x, \lambda)\), \(S^{*}(x, \lambda)\) and \(\varphi^{*}(x, \lambda)\) of equation (6) and the Weyl matrix \(M^{*}(\lambda) := \Phi ^{*}(0, \lambda )\) of the problem \(L^{*}\). Then

$$ \Phi^{*}(x, \lambda) = S^{*}(x, \lambda) + M^{*}(\lambda) \varphi^{*}(x, \lambda). $$
(15)

By virtue of (8), the expression \(\langle\Phi^{*}(x, \lambda ), \Phi (x, \lambda) \rangle\) does not depend on x. Since by the boundary conditions

$$\begin{aligned}& \bigl\langle \Phi^{*}(x, \lambda), \Phi(x, \lambda) \bigr\rangle _{x = 0} = U^{*}\bigl(\Phi^{*}\bigr) \Phi (0, \lambda) - \Phi^{*}(0, \lambda) U(\Phi) = M(\lambda) - M^{*}(\lambda), \\& \lim_{x \to\infty} \bigl\langle \Phi^{*}(x, \lambda), \Phi(x, \lambda) \bigr\rangle = 0_{m},\quad \operatorname{Im} \rho> 0, \end{aligned}$$

we have \(M(\lambda) \equiv M^{*}(\lambda)\).

Now we proceed to the constructive solution of Inverse problem 1. Let the Weyl matrix \(M(\lambda)\) of the boundary value problem \(L = L(Q, h)\) be given. Choose an arbitrary model problem \(\tilde{L} = L(\tilde{Q}, \tilde{h})\) in the same form as L, but with other coefficients. We agree that if a certain symbol γ denotes an object related to L, then the corresponding symbol \(\tilde{\gamma}\) with tilde denotes the analogous object related to \(\tilde{L}\). We consider also the problem \(\tilde{L}^{*} = L^{*}(\tilde{Q}, \tilde{h})\).

Denote

$$M^{\pm}(\lambda) := \lim_{z \to0, \operatorname{Re} z > 0} M(\lambda\pm i z), \qquad V(\lambda) := \frac{1}{2\pi i} \bigl(M^{-}(\lambda) - M^{+}(\lambda)\bigr),\quad \lambda> 0. $$

Suppose that the following condition is fulfilled:

$$ \int_{\rho^{*}}^{\infty} \rho^{4} \bigl\Vert {\hat{V}}(\lambda) \bigr\Vert ^{2} \, d\rho< \infty,\quad \hat{V} := V - \tilde{V} $$
(16)

for some \(\rho^{*} > 0\). For example, if \(Q \in L_{2}((0, \infty); \mathbb {C}^{m \times m})\), then \(\kappa(\rho)\) in (13) is an \(L_{2}\)-function. Therefore one can take any problem \(\tilde{L}\) with a potential from \(L((0, \infty); \mathbb{C}^{m \times m}) \cap L_{2}((0, \infty); \mathbb{C}^{m \times m})\) and \(\tilde{h} = h\) in order to satisfy (16).

Introduce auxiliary functions

$$ \begin{aligned} &\tilde{D}(x, \lambda, \mu) = \frac{\langle\tilde{\varphi}^{*}(x, \mu ), \tilde{\varphi} (x, \lambda) \rangle}{\lambda- \mu} = \int_{0}^{x} \tilde{\varphi}^{*}(t, \mu) \tilde{ \varphi}(t, \lambda) \,dt, \\ &\hat{M}(\mu) = M(\mu) - \tilde{M}(\mu), \qquad \tilde{r}(x, \lambda, \mu) = \hat{M}(\mu ) \tilde{D}(x, \lambda, \mu). \end{aligned} $$
(17)

Let \(\gamma'\) be a bounded closed contour in λ-plane encircling the set of singularities \(\Lambda\cup\tilde{\Lambda}\cup\{ 0 \}\), let \(\gamma''\) be the two-sided cut along the ray \(\{ \lambda\colon \lambda> 0, \lambda\notin\operatorname{int} \gamma' \}\), and let \(\gamma= \gamma' \cup\gamma''\) be a contour with the counter-clockwise circuit (see Figure 1). By contour integration over the contour γ, Freiling and Yurko [19] have obtained the following result.

Figure 1. Contour γ.

Theorem 1

For each fixed \(x \ge0\), the following relation holds:

$$ \tilde{\varphi}(x, \lambda) = \varphi(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma} \varphi(x, \mu) \tilde{r}(x, \lambda, \mu) \,d\mu, $$
(18)

which is called the main equation of Inverse problem 1. This equation is uniquely solvable (with respect to \(\varphi(x, \lambda)\)) in the Banach space B of matrix functions \(z(\lambda) = [z_{jk}(\lambda)]_{j, k = 1}^{m}\) that are continuous and bounded on γ, equipped with the norm \(\| z \|_{B} = \sup_{\lambda\in\gamma}\max_{j, k = \overline{1, m}} |z_{jk}(\lambda)|\).

Corollary 1

The analogous relation is valid for \(\Phi(x, \lambda)\):

$$ \tilde{\Phi}(x, \lambda) = \Phi(x, \lambda) + \frac{1}{2\pi i} \int_{\gamma} \varphi(x, \mu)\hat{M}(\mu) \frac{\langle\tilde{\varphi}^{*}(x, \mu ), \tilde{\Phi} (x, \lambda) \rangle}{\lambda- \mu} \,d \mu,\quad \lambda\in J_{\gamma}, $$
(19)

where \(J_{\gamma}:= \{\lambda\colon\lambda\notin\gamma\cup\operatorname {int} \gamma' \}\).

Proof

Following the proof of Theorem 4.1 from [19], we define a block matrix of spectral mappings \(P(x, \lambda) = [P_{jk}(x, \lambda)]_{j, k = 1, 2}\) by the relation

$$ P(x, \lambda) \begin{bmatrix} \tilde{\varphi}(x, \lambda) & \tilde{\Phi}(x, \lambda ) \\ \tilde{\varphi} '(x, \lambda) & \tilde{\Phi}'(x, \lambda) \end{bmatrix} = \begin{bmatrix} \varphi(x, \lambda) & \Phi(x, \lambda) \\ \varphi '(x, \lambda) & \Phi'(x, \lambda) \end{bmatrix} . $$

In particular,

$$\Phi(x, \lambda) = P_{11}(x, \lambda) \tilde{\Phi}(x, \lambda) + P_{12}(x, \lambda) \tilde{\Phi}'(x, \lambda). $$

Substituting formulas (4.4) from [19],

$$P_{1k}(x, \lambda) = \delta_{1k} I_{m} + \frac{1}{2 \pi i} \int_{\gamma} \frac {P_{1k}(x, \mu)}{\lambda- \mu} \,d\mu, \quad \lambda\in J_{\gamma}, $$

where \(\delta_{jk}\) is the Kronecker delta, we get

$$ \Phi(x, \lambda) = \tilde{\Phi}(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma} \frac {P_{11}(x, \mu) \tilde{\Phi}(x, \lambda) + P_{12}(x, \mu) \tilde{\Phi}'(x, \lambda )}{\lambda- \mu} \,d\mu,\quad \lambda\in J_{\gamma}. $$
(20)

Note that the matrix functions \(\Phi(x, \lambda)\) and \(\tilde{\Phi} (x, \lambda)\) do not have singularities in \(J_{\gamma}\).

By virtue of relations (3.12) from [19] we have

$$\begin{aligned}& P_{11}(x, \mu) = \varphi(x, \mu) \tilde{\Phi^{*}}'(x, \mu) - \Phi (x, \mu) \tilde{\varphi^{*}}'(x, \mu), \\& P_{12}(x, \mu) = \Phi(x, \mu) \tilde{\varphi}^{*}(x, \mu) - \varphi (x, \mu) \tilde{\Phi}^{*}(x, \mu). \end{aligned}$$

Substitute these relations into (20) and group the terms as follows:

$$\begin{aligned} \Phi(x, \lambda) =& \tilde{\Phi}(x, \lambda) + \frac{1}{2\pi i} \int _{\gamma} \bigl( \varphi(x, \mu) \bigl( \tilde{ \Phi^{*}}'(x, \mu) \tilde{\Phi}(x, \lambda) - \tilde{\Phi}^{*}(x, \mu) \tilde{\Phi}'(x, \lambda) \bigr) \\ &{}-\Phi(x, \mu) \bigl( \tilde{\varphi^{*}}'(x, \mu) \tilde{ \Phi}(x, \lambda) - \tilde{\varphi}^{*}(x, \mu) \tilde{\Phi}'(x, \lambda) \bigr) \bigr)\frac{d \mu }{\lambda- \mu} \\ =&\tilde{\Phi}(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma} \frac {\varphi(x, \mu) \langle\tilde{\Phi}^{*}(x, \mu), \tilde{\Phi}(x, \lambda)\rangle- \Phi(x, \mu) \langle\tilde{\varphi}^{*}(x, \mu), \tilde{\Phi}(x, \lambda) \rangle }{\lambda- \mu}\,d\mu. \end{aligned}$$

If one expands \(\Phi(x, \mu)\) and \(\tilde{\Phi}^{*}(x, \mu)\), using (14) and (15), the terms with \(S(x, \mu)\) and \(\tilde{S}^{*}(x, \mu)\) will be analytic inside the contour and vanish by the Cauchy theorem. Therefore we get

$$\begin{aligned} \Phi(x, \lambda) =& \tilde{\Phi}(x, \lambda) + \frac{1}{2\pi i} \\ &{}\times\int _{\gamma} \frac {\varphi(x, \mu) \tilde{M}^{*}(\mu) \langle\tilde{\varphi}^{*}(x, \mu), \tilde{\Phi} (x, \lambda)\rangle - \varphi(x, \mu) M(\mu) \langle\tilde{\varphi}^{*}(x, \mu), \tilde{\Phi}(x, \lambda) \rangle}{\lambda- \mu}\,d\mu. \end{aligned}$$

Since \(\tilde{M}^{*}(\mu) \equiv\tilde{M}(\mu)\), we arrive at (19). □

Solving the main equation (18), one gets the matrix function \(\varphi(x, \lambda)\) and can follow the algorithm from [19] to recover the original problem L. But further we need an alternative way to construct the potential Q and the coefficient h.

Let

$$ \varepsilon_{0}(x) = \frac{1}{2 \pi i} \int _{\gamma} \varphi(x, \mu ) \hat{M}(\mu) \tilde{\varphi}^{*}(x, \mu) \,d\mu,\qquad \varepsilon(x) = - 2 \varepsilon_{0}'(x). $$
(21)

Then, similarly to [4, Section 2.2], one can obtain the relations

$$ Q(x) = \tilde{Q}(x) + \varepsilon(x),\qquad h = \tilde{h} - \varepsilon_{0}(0). $$
(22)

Using formulas (21), (22), one can construct Q and h by the solution of the main equation (18) and solve Inverse problem 1.
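Thus the constructive solution of Inverse problem 1 can be summarized as follows: (i) choose a model problem \(\tilde{L} = L(\tilde{Q}, \tilde{h})\) so that condition (16) holds, and calculate \(\hat{M}(\mu)\), \(\tilde{D}(x, \lambda, \mu)\) and \(\tilde{r}(x, \lambda, \mu)\) via (17); (ii) for each fixed \(x \ge0\), solve the main equation (18) in B and find \(\varphi(x, \lambda)\), \(\lambda\in\gamma\); (iii) construct \(\varepsilon_{0}(x)\) and \(\varepsilon(x)\) by (21); (iv) calculate \(Q(x)\) and h by (22).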

3 Necessary and sufficient conditions

In this section, we give the necessary and sufficient conditions in a very general form, with the requirement of solvability of the main equation.

Denote by W the class of the matrix functions \(M(\lambda)\), satisfying the conditions of Lemma 1, namely

(i1):

\(M(\lambda)\) is analytic in Π outside the countable bounded set of poles \(\Lambda'\), and continuous in \(\Pi_{1}\) outside the bounded set Λ;

(i2):

\(M(\lambda)\) enjoys the asymptotic representation

$$ M(\lambda) = \frac{1}{i \rho} \biggl( I_{m} + \frac{h}{i \rho} + o\bigl(\rho^{-1}\bigr) \biggr),\quad |\rho| \to \infty, \rho\in\Omega. $$
(23)

Theorem 2

For the matrix function \(M(\lambda) \in W\) to be the Weyl matrix of some boundary value problem L of the form (1), (2), it is necessary and sufficient that the following conditions be satisfied.

  1. There exists a model problem \(\tilde{L}\) such that (16) holds.

  2. For each fixed \(x \ge0\), the main equation (18) is uniquely solvable.

  3. \(\varepsilon(x) \in L((0, \infty); \mathbb{C}^{m \times m})\), where \(\varepsilon (x)\) was defined in (21).

Similarly, one can study classes of potentials Q with a higher degree of smoothness; then the potential \(\tilde{Q}\) of the model problem and ε should belong to the same classes.

Proof

The necessity of conditions 1 and 3 is obvious, while the necessity of condition 2 is contained in Theorem 1. So it remains to prove the sufficiency: the potential Q and the coefficient h constructed by formulas (22) form a problem L whose Weyl matrix coincides with the given \(M(\lambda)\).

Step 1. Let \(M \in W\), the model problem \(\tilde{L}\) satisfy condition 1, \(\varphi(x, \lambda)\) be the solution of the main equation (18), and Q, h be constructed via (22). Let us prove that

$$ \ell\varphi(x, \lambda) = \lambda\varphi(x, \lambda), $$
(24)

where the differential expression was defined in (1).

Differentiating (18) twice with respect to x and using (17), we get

$$\begin{aligned} \ell\tilde{\varphi}(x, \lambda) =& \ell\varphi(x, \lambda) + \frac {1}{2 \pi i} \int_{\gamma } \ell\varphi(x, \mu) \tilde{r}(x, \lambda, \mu) \,d \mu \\ &{}- \frac{1}{2 \pi i} \int_{\gamma } \bigl( 2 \varphi'(x, \mu) \hat{M}(\mu) \tilde{\varphi}^{*}(x, \mu) \tilde{ \varphi}(x, \lambda) + \varphi (x, \mu) \hat{M}(\mu) \bigl( \tilde{ \varphi}^{*}(x, \mu) \tilde{\varphi}(x, \lambda) \bigr)' \bigr) \,d \mu. \end{aligned}$$

Since by (22)

$$\begin{aligned} Q(x) =& \tilde{Q}(x) - 2 \varepsilon_{0}'(x) \\ =& \tilde{Q}(x) - \frac{1}{2 \pi i} \int_{\gamma} \bigl( 2 \varphi'(x, \mu) \hat{M}(\mu) \tilde{\varphi}^{*}(x, \mu) + 2 \varphi(x, \mu) \hat{M}(\mu) \tilde{\varphi^{*}}'(x,\mu) \bigr) \,d \mu, \end{aligned}$$

we obtain

$$\begin{aligned} \tilde{\ell}\tilde{\varphi}(x, \lambda) =& \ell\varphi(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma} \ell\varphi(x, \mu) \tilde{r}(x, \lambda, \mu) \,d\mu \\ &{}+ \frac{1}{2\pi i} \int_{\gamma} \varphi(x, \mu) \hat{M}(\mu) \bigl\langle \tilde{\varphi} ^{*}(x, \mu), \tilde{\varphi}(x, \lambda) \bigr\rangle \,d\mu. \end{aligned}$$

Taking (17) and the relation \(\tilde{\ell}\tilde{\varphi}= \lambda \tilde{\varphi}\) into account, we conclude that

$$\lambda\tilde{\varphi}(x, \lambda) = \ell\varphi(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma } \ell\varphi(x, \mu) \tilde{r}(x, \lambda, \mu) \,d\mu+ \frac{1}{2 \pi i} \int_{\gamma} (\lambda- \mu) \varphi(x, \mu) \tilde{r}(x, \lambda, \mu)\,d\mu. $$

Substituting (18) into this relation, we arrive at the equation

$$ \eta(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma} \eta(x, \mu) \tilde{r}(x, \lambda, \mu) \,d\mu= 0_{m},\quad \lambda\in \gamma, $$
(25)

with respect to \(\eta(x, \lambda) = \ell\varphi(x, \lambda) - \lambda\varphi(x, \lambda)\).

Suppose \(\int_{\lambda^{*}}^{\infty} \rho\| \hat{V}(\lambda) \| \,d\lambda< \infty\) (in the general case, under assumption (16), we have \(\int_{\lambda^{*}}^{\infty} \| \hat{V}(\lambda) \| \,d\lambda< \infty \), \(\lambda^{*} = (\rho ^{*})^{2}\)). Then, using the same arguments as in the scalar case [4], one can show that the matrix function \(\eta(x, \lambda)\) belongs to the Banach space B for each fixed \(x \ge0\). Consider the operator \(\tilde{R}(x) \colon B \to B\) acting in the following way:

$$z(\lambda) \tilde{R}(x) = \frac{1}{2 \pi i} \int_{\gamma} z( \mu) \tilde{r}(x, \lambda , \mu) \,d\mu. $$
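In this notation, the main equation (18) and the homogeneous equation (25) take the operator form

$$\varphi(x, \cdot) \bigl(I + \tilde{R}(x)\bigr) = \tilde{\varphi}(x, \cdot),\qquad \eta(x, \cdot) \bigl(I + \tilde{R}(x)\bigr) = 0, $$

where I is the identity operator in B.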

Here and below in similar situations, we write an operator to the right of an operand, because the action of the operator involves noncommutative matrix multiplication in that order. For each fixed \(x \ge0\), the operator \(\tilde{R}(x)\) is compact; therefore, it follows from the unique solvability of the main equation (18) that the corresponding homogeneous equation (25) has only the trivial solution. Hence \(\eta(x, \lambda) \equiv0\), and (24) is proved.

Step 2. In the general case, when (16) holds, the proof of equality (24) is more complicated, so we only outline the main ideas. Introduce contours \(\gamma_{N} = \gamma\cap \{ |\lambda| \le N^{2} \}\) and consider operators

$$\tilde{R}_{N}(x) \colon B \to B, \qquad z(\lambda) \tilde{R}_{N}(x) = \frac{1}{2 \pi i} \int_{\gamma_{N}} z( \mu) \tilde{r}(x, \lambda, \mu) \,d\mu. $$

The sequence \(\{ \tilde{R}_{N}(x) \}\) converges to \(\tilde{R}(x)\) in the operator norm. In view of the unique solvability of the main equation, the operator \((I + \tilde{R}(x))\) is invertible for each fixed \(x \ge 0\). So, for sufficiently large values of N, the operators \((I + \tilde{R}_{N}(x))\) are also invertible, and the equations

$$\tilde{\varphi}(x, \lambda) = \varphi_{N}(x, \lambda) + \frac{1}{2 \pi i} \int_{\gamma_{N}} \varphi _{N}(x, \mu) \tilde{r}(x, \lambda, \mu) \,d\mu $$

have unique solutions \(\varphi_{N}(x, \lambda)\). Since \(\int_{\lambda^{*}}^{N^{2}} \rho\| \hat{V}(\lambda)\| \,d\lambda< \infty\), one can repeat the arguments of Step 1 for the matrix functions \(\varphi_{N}(x, \lambda)\), and prove the relations \(-\varphi_{N}''(x, \lambda) + Q_{N}(x) \varphi_{N}(x, \lambda) = \lambda \varphi_{N}(x, \lambda)\) with matrix potentials \(Q_{N}(x) = Q(x) + \varepsilon_{N}(x)\), where

$$\varepsilon_{N}(x) = -2 \biggl( \frac{1}{2 \pi i} \int _{\gamma_{N}} \varphi_{N}(x, \mu) \hat{M}(\mu) \tilde{ \varphi}^{*}(x, \mu) \,d\mu \biggr)'. $$

The sequence \(\{ \varphi_{N}(x, \lambda) \}\) converges to \(\varphi(x, \lambda)\) uniformly with respect to x and λ on compact sets, and the sequence \(\{ Q_{N}(x) \}\) converges to \(Q(x)\) in L-norm on every bounded interval. These facts yield (24).

Analogously one can prove the relation \(\ell\Phi(x, \lambda) = \lambda\Phi(x, \lambda)\) for the matrix function \(\Phi(x, \lambda)\) constructed via (19).

Step 3. Substituting \(x = 0\) into the main equation (18), we get \(\varphi(0, \lambda) = I_{m}\). Differentiating the main equation with respect to x, we obtain

$$\begin{aligned} \tilde{\varphi}'(x, \lambda) =& \varphi'(x, \lambda) + \frac{1}{2\pi i} \int_{\gamma} \varphi '(x, \mu) \tilde{r}(x, \lambda, \mu) \,d\mu \\ &{}+ \frac{1}{2 \pi i} \int _{\gamma} \varphi(x, \mu) \hat{M}(\mu) \tilde{\varphi}^{*}(x, \mu) \tilde{\varphi}(x, \lambda) \,d\mu. \end{aligned}$$

Taking (17), (21) and (22) into account, we obtain

$$\varphi'(0, \lambda) = \tilde{\varphi}'(0, \lambda) - \frac{1}{2 \pi i} \int_{\gamma} \varphi (0, \mu) \hat{M}(\mu) \tilde{\varphi}^{*}(0, \mu) \,d\mu= \tilde{h} - \varepsilon_{0}(0) = h. $$

Similarly, using (19), one can check that \(U(\Phi) = I_{m}\).

The following standard estimates are valid for \(\nu= 0, 1\):

$$\begin{aligned}& \bigl\Vert \varphi^{(\nu)}(x, \mu)\bigr\Vert , \bigl\Vert \varphi^{*(\nu)}(x, \mu) \bigr\Vert \le C |\theta|^{\nu },\quad \mu= \theta^{2} \in\gamma, \\& \bigl\Vert \tilde{\Phi}^{(\nu)}(x, \lambda) \bigr\Vert \le C | \rho|^{\nu- 1} \exp \bigl(-\vert \tau \vert x\bigr),\quad \lambda\notin \tilde{\Lambda}\cup\{ 0 \}. \end{aligned}$$

In view of (23), \(\hat{M}(\mu) = O(\mu^{-1})\), \(|\mu| \to\infty \), \(\mu\in\Pi\). Taking \(\lambda\notin\operatorname{int} \gamma\) and substituting these estimates into (19), we derive

$$\bigl\Vert \Phi(x, \lambda) \exp(- i \rho x) \bigr\Vert \le C \biggl( 1 + \int_{\lambda^{*}}^{\infty} \frac{d\mu}{\mu|\lambda- \mu|} \biggr) \le C_{1}. $$

Thus, we have \(\Phi(x, \lambda) = O(\exp(i \rho x))\), so \(\Phi(x, \lambda)\), constructed via (19), is the Weyl solution of the problem \(L(Q, h)\).

It follows from (19) that

$$\Phi(0, \lambda) = \tilde{M}(\lambda) + \frac{1}{2 \pi i} \int _{\gamma} \frac{\hat{M}(\mu)}{\lambda- \mu}\,d\mu. $$

Since \(\hat{M}\) is analytic in \(J_{\gamma}\) and, by (23), \(\hat{M}(\mu) = O(\mu^{-1})\) as \(|\mu| \to\infty\), the Cauchy integral formula, applied between γ and a circle \(|\mu| = R\) with \(R \to\infty\), yields

$$\hat{M}(\lambda) = \frac{1}{2\pi i} \int_{\gamma} \frac{\hat{M}(\mu )}{\lambda- \mu} \,d\mu. $$

Consequently, \(\Phi(0, \lambda) = M(\lambda)\), i.e., the given matrix \(M(\lambda)\) is the Weyl matrix of the constructed boundary value problem \(L(Q, h)\). □

4 Self-adjoint case: properties of the spectral data

In this section, we assume that the boundary value problem L is self-adjoint: \(Q(x) = Q^{\dagger}(x)\) a.e. on \((0, \infty)\), \(h = h^{\dagger}\). We show that L possesses the following Properties (i1)-(i6). Similar facts for the Dirichlet boundary condition were proved in [5].

Property (i1)

The problem L does not have spectral singularities: \(\Lambda'' = \varnothing\).

Proof

We have to prove that \(\det u(\rho) \ne0\) for \(\rho\in\mathbb{R} \backslash\{ 0 \}\). In view of (3) and (9),

$$\begin{aligned} e^{\dagger}(x, \rho) =& \exp(- i\rho x) I_{m} + \frac{1}{2 i \rho} \int_{x}^{\infty} \bigl(\exp\bigl(-i \rho(x - t) \bigr) - \exp\bigl(i \rho(x - t)\bigr) \bigr) e^{\dagger}(t, \rho) Q(t) \,dt \\ =& e^{*}(x, - \rho) \end{aligned}$$

for \(\rho\in\mathbb{R} \backslash\{ 0 \}\), therefore \(u^{\dagger}(\rho) = u^{*}(-\rho)\) for such ρ. Suppose that there exist a real \(\rho_{0} \ne0\) and a nonzero vector a such that \(u(\rho_{0}) a = 0\) and, consequently, \(a^{\dagger} u^{*}(- \rho_{0}) = 0\). On the one hand,

$$a^{\dagger} \bigl\langle e^{*}(x, -\rho_{0}), e(x, \rho_{0}) \bigr\rangle a = \bigl[ a^{\dagger} u^{*}(- \rho_{0}) \bigr] e(0, \rho_{0}) a - a^{\dagger} e^{*}(0, - \rho_{0}) \bigl[u(\rho_{0}) a\bigr] = 0. $$

On the other hand, using (10), we obtain

$$a^{\dagger} \bigl\langle e^{*}(x, -\rho_{0}), e(x, \rho_{0}) \bigr\rangle a = -2 i\rho _{0} a^{\dagger} a \ne0. $$

So we arrive at a contradiction, which proves the property. □

Property (i2)

All the nonzero eigenvalues are real and negative: \(\lambda_{k} = \rho_{k}^{2} < 0\), \(\rho_{k} = i \tau_{k}\), \(\tau_{k} > 0\).

Indeed, the eigenvalues of L are real because of the self-adjointness. In view of [19, Theorem 2.4], they cannot be positive.

Property (i3)

The poles of the matrix function \((u(\rho ))^{-1}\) in the upper half-plane are simple. (They coincide with \(i \tau_{k}\).)

Proof

Start from the relations

$$\begin{aligned}& -e''(x, \rho) + Q(x) e(x, \rho) = \rho^{2} e(x, \rho), \end{aligned}$$
(26)
$$\begin{aligned}& -{e^{*}}''(x, \rho) + e^{*}(x, \rho) Q(x) = \rho^{2} e^{*}(x, \rho). \end{aligned}$$
(27)

Differentiate (27) with respect to ρ, multiply the result by \(e(x, \rho)\) from the right, and subtract (26) multiplied by \(\frac{d}{d\rho}e^{*}(x, \rho)\) from the left:

$$ \frac{d}{d \rho} e^{*}(x, \rho) e''(x, \rho) - \frac{d}{d\rho} {e^{*}}''(x, \rho) e(x, \rho) = 2 \rho e^{*}(x, \rho) e(x, \rho). $$

Note that the left-hand side of this relation equals \(-\frac {d}{dx}\langle\frac{d}{d\rho}e^{*}(x, \rho), e(x, \rho)\rangle\). Integrating by x from 0 to ∞, we get

$$-{\biggl.\biggl\langle \frac{d}{d\rho}e^{*}(x, \rho), e(x, \rho)\biggr\rangle \biggr|_{0}^{\infty}} = 2 \rho\int_{0}^{\infty} e^{*}(x, \rho) e(x, \rho) \,dx. $$

Let \(\operatorname{Im} \rho> 0\). Then the matrix functions \(e(x, \rho)\), \(\frac {d}{d\rho}e^{*}(x, \rho)\) and their x-derivatives tend to zero as \(x \to\infty\). Consequently, we obtain

$$ \frac{d}{d\rho}u^{*}(\rho) e(0, \rho) - \frac{d}{d\rho}e^{*}(0, \rho) u(\rho) = 2 \rho\int_{0}^{\infty} e^{*}(x, \rho) e(x, \rho) \,dx. $$
(28)

Let ρ be equal to \(\rho_{0} = \sqrt{\lambda_{0}}\), where \(\lambda _{0}\) is an eigenvalue, and \(u(\rho_{0}) a = 0\), \(a \ne0\). For purely imaginary ρ, one has \(e^{*}(x, \rho) = e^{\dagger}(x, \rho)\) and \(u^{*}(\rho) = u^{\dagger }(\rho )\). So we derive from (28)

$$ -a^{\dagger} \frac{d}{d\rho}u^{\dagger}( \rho_{0}) e(0, \rho_{0}) a = 2 \rho _{0} a^{\dagger} \int_{0}^{\infty} e^{\dagger}(x, \rho_{0}) e(x, \rho _{0}) \,dx\, a \ne0. $$
(29)

In order to prove the simplicity of the poles for \((u(\rho))^{-1}\), we adapt Lemma 2.2.1 from [5]:

The inverse \((u(\rho))^{-1}\) has a simple pole at \(\rho= \rho_{0}\) if and only if the relations

$$ u(\rho_{0}) a = 0, \qquad u(\rho_{0}) b + \frac{d}{d\rho} u(\rho_{0}) a = 0, $$
(30)

where a and b are constant vectors, yield \(a = 0\).

Let vectors a and b satisfy (30). Then

$$-a^{\dagger} \frac{d}{d\rho}u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a = b^{\dagger} u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a. $$

Since

$$\bigl\langle e^{*}(x, \rho), e(x, \rho) \bigr\rangle = \bigl\langle e^{*}(x, \rho), e(x, \rho) \bigr\rangle _{x = \infty} = 0, \quad \operatorname{Im} \rho> 0, $$

one has

$$b^{\dagger} u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a = b^{\dagger} e^{*}(0, \rho _{0}) u(\rho_{0}) a = 0, $$

but this contradicts (29). So \(a = 0\), and the square root \(\rho_{0} = \sqrt{\lambda_{0}}\) of every eigenvalue is a simple pole of \((u(\rho))^{-1}\). □

The next properties hold under the additional condition

$$ \int_{0}^{\infty} x \bigl\Vert Q(x) \bigr\Vert \,dx < \infty. $$
(31)

Property (i4)

The number of eigenvalues is finite.

Proof

We prove the assertion by contradiction. Suppose that there is an infinite sequence \(\{ \lambda_{k} \}_{k = 1}^{\infty}\) of negative eigenvalues, \(\rho_{k} = \sqrt{\lambda_{k}}\), and let \(\{ Y_{k}(x) \}_{k = 1}^{\infty}\) be an orthogonal sequence of corresponding vector eigenfunctions. Note that there can be multiple eigenvalues; their multiplicities are finite and equal to \(m - \operatorname{rank} u(\rho_{k})\). A multiple eigenvalue occurs in the sequence \(\{ \lambda_{k}\}_{k = 1}^{\infty}\) several times, with different eigenfunctions \(Y_{k}(x)\). The eigenfunctions can be represented in the form \(Y_{k}(x) = e(x, \rho_{k}) N_{k}\), \(\| N_{k} \| = 1\).

Using the orthogonality of the eigenfunctions, we obtain for \(k \ne n\),

$$\begin{aligned} 0 =& \int_{0}^{\infty} Y_{k}^{\dagger}(x) Y_{n}(x) \,dx \\ =& N_{k}^{\dagger}\int_{A}^{\infty} e^{\dagger}(x, \rho_{k}) e(x, \rho_{n}) \,dx\, N_{n} + \int_{0}^{A} Y_{k}^{\dagger}(x) Y_{k}(x) \,dx \\ &{}+ \int_{0}^{A} Y^{\dagger}_{k}(x) \bigl(Y_{n}(x) - Y_{k}(x)\bigr) \,dx =: \mathcal{I}_{1} + \mathcal{I}_{2} + \mathcal{I}_{3}. \end{aligned}$$
(32)

Similarly to the scalar case [4, Theorem 2.3.4], one can show that \(e(x, \rho_{k}) = \exp(-\tau_{k} x) (I_{m} + \alpha_{k}(x))\), where \(\| \alpha_{k}(x) \| \le\frac{1}{8}\) for \(x \ge A\), for all \(k \ge1\) and for sufficiently large A. Consequently,

$$\begin{aligned} \mathcal{I}_{1} =& N_{k}^{\dagger} \int _{A}^{\infty} \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta_{kn}(x)\bigr) \,dx\, N_{k} \\ &{}+ N_{k}^{\dagger} \int_{A}^{\infty} \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta _{kn}(x)\bigr) \,dx \, (N_{n} - N_{k}), \quad \bigl\Vert \beta_{kn}(x) \bigr\Vert \le \frac{3}{8}. \end{aligned}$$

Since the vectors \(N_{k}\) belong to the unit sphere, one can choose a convergent subsequence \(\{ N_{k_{s}} \}_{s = 1}^{\infty}\). Further we consider \(N_{k}\) and \(N_{n}\) from such a subsequence. Then, for sufficiently large k and n, we have

$$\begin{aligned}& \biggl\vert N_{k}^{\dagger} \int_{A}^{\infty} \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta_{kn}(x)\bigr) \,dx \, (N_{n} - N_{k}) \biggr\vert \\& \quad \le\frac{3 \exp(-(\tau_{k} + \tau_{n}) A)}{2 (\tau_{k} + \tau_{n})} \| N_{n} - N_{k} \| \le \frac{\exp(-(\tau_{k} + \tau_{n}) A)}{8 (\tau_{k} + \tau_{n})}. \end{aligned}$$

Hence

$$\mathcal{I}_{1} \ge\frac{\exp(-(\tau_{k} + \tau_{n}) A)}{2 (\tau_{k} + \tau _{n})} \ge\frac{\exp(-2AT)}{4T}, \quad T := \max_{k} \tau_{k}. $$

Clearly, \(\mathcal{I}_{2} \ge0\). Using arguments similar to the proof of [4, Theorem 2.3.4], one can show that \(\mathcal{I}_{3}\) tends to zero as \(k, n \to\infty\). Thus, for sufficiently large k and n, \(\mathcal{I}_{1} + \mathcal {I}_{2} + \mathcal{I}_{3} > 0\), which contradicts (32). Hence, the number of negative eigenvalues is finite. □

Property (i5)

\(\lambda= 0\) is not an eigenvalue of L.

Proof

It was proved in [5] that if condition (31) holds, the Jost solution \(e(x, \rho)\) exists for \(\rho= 0\). So equation (1) for \(\lambda= 0\) has the solution \(e(x, 0) = I_{m} + o(1)\) as \(x \to\infty\). One can easily check that the matrix function

$$z(x) = e(x, 0) \int_{0}^{x} \bigl(e^{*}(t, 0) e(t, 0) \bigr)^{-1} \,dt $$

is also a solution of (1) for \(\lambda= 0\), and it enjoys the asymptotic representation \(z(x) = x (I_{m} + o(1))\) as \(x \to\infty\). Thus, the columns of the matrices \(e(x, 0)\) and \(z(x)\) form a fundamental system of solutions of equation (1) for \(\lambda = 0\). If \(\lambda= 0\) were an eigenvalue of L, then the corresponding vector eigenfunction would have an expansion \(Y_{0}(x) = e(x, 0) a + z(x) b\), where a and b are some constant vectors. But in view of the asymptotic behavior of \(e(x, 0)\) and \(z(x)\), one has \(\lim_{x \to\infty} Y_{0}(x) = 0\) if and only if \(a = b = 0\). So \(\lambda= 0\) is not an eigenvalue of L. □

Property (i6)

\(\rho(u(\rho))^{-1} = O(1)\) and \(M(\lambda) = O(\rho^{-1})\) as \(\rho\to0\), \(\rho\in\Omega\).

Proof

Consider the matrix function \(g(\rho) = 2 i \rho(u(\rho))^{-1}\). It follows from (10) that

$$u^{*}(-\rho) e(0, \rho) - e^{*}(0, -\rho) u(\rho) = -2 i \rho I_{m}. $$

Hence

$$g(\rho) = e^{*}(0, -\rho) - u^{*}(-\rho) e(0, \rho) \bigl(u(\rho) \bigr)^{-1}. $$

In view of (12) and the equality \(M(\lambda) \equiv M^{*}(\lambda)\), one has

$$e(0, \rho) \bigl(u(\rho)\bigr)^{-1} = M(\lambda) = \bigl(u^{*}(\rho) \bigr)^{-1} e^{*}(0, \rho). $$

Consequently,

$$ g(\rho) = e^{*}(0, -\rho) - \xi(\rho) e^{*}(0, \rho),\quad \xi(\rho) := u^{*}(-\rho) \bigl(u^{*}(\rho)\bigr)^{-1}. $$
(33)

Let \(\rho\in\mathbb{R} \backslash\{ 0 \}\). Expand the solution \(\varphi(x, \lambda)\) in terms of the fundamental system of solutions, with some matrix coefficients \(A(\rho)\) and \(B(\rho)\):

$$\begin{aligned}& \varphi(x, \lambda) = e(x, \rho) A(\rho) + e(x, -\rho) B(\rho), \end{aligned}$$
(34)
$$\begin{aligned}& \varphi'(x, \lambda) = e'(x, \rho) A( \rho) + e'(x, -\rho) B(\rho). \end{aligned}$$
(35)

Multiplying (34) by \({e^{*}}'(x, \mp\rho)\) and (35) by \(e^{*}(x, \mp\rho)\) from the left, subtracting and using (10), one can derive

$$\begin{aligned}& A(\rho) = -\frac{1}{2 i \rho} \bigl( {e^{*}}'(x, -\rho)\varphi(x, \lambda) - e^{*}(x, -\rho) \varphi'(x, \lambda) \bigr), \\& B(\rho) = \frac{1}{2 i \rho} \bigl( {e^{*}}'(x, \rho) \varphi(x, \lambda) - e^{*}(x, \rho) \varphi'(x, \lambda) \bigr). \end{aligned}$$

Since \(A(\rho)\) and \(B(\rho)\) do not depend on x, one can take \(x = 0\) and obtain

$$A(\rho) = -\frac{1}{2 i \rho} u^{*}(-\rho),\qquad B(\rho) = \frac{1}{2 i \rho} u^{*}(\rho). $$

Finally,

$$\varphi(x, \lambda) = -\frac{1}{2 i \rho} \bigl( e(x, \rho) u^{*}(-\rho) - e(x, - \rho) u^{*}(\rho) \bigr). $$

Since \(U(\varphi) = 0\), we get

$$u(\rho) u^{*}(-\rho) = u(-\rho) u^{*}(\rho). $$

Therefore

$$\xi(\rho) = u^{*}(-\rho) \bigl(u^{*}(\rho)\bigr)^{-1} = \bigl(u(\rho) \bigr)^{-1} u(-\rho). $$

One can easily show that \(u^{*}(\rho) = u^{\dagger}(-\rho)\) for real ρ and, consequently, \(\xi(\rho)\) is a unitary matrix for \(\rho\in\mathbb{R} \backslash\{ 0 \}\). Then it follows from (33) that the matrix function \(g(\rho)\) is bounded for \(\rho\in\mathbb{R} \backslash\{ 0 \}\).
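The unitarity of \(\xi(\rho)\) can be checked directly: for real ρ one has \(\xi(\rho) = u^{\dagger}(\rho) (u^{\dagger}(-\rho))^{-1}\), while, by the previous relation, \(\xi(\rho) = (u(\rho))^{-1} u(-\rho)\), so that

$$\xi^{\dagger}(\rho) \xi(\rho) = \bigl(u(-\rho)\bigr)^{-1} u(\rho) \bigl(u(\rho )\bigr)^{-1} u(-\rho) = I_{m}. $$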

Consider the region \(\mathcal{D} := \{ \rho\colon\operatorname{Im} \rho> 0, |\rho| < \tau^{*} \}\), where \(\tau^{*}\) is a number less than all the \(\tau_{k}\) (by Property (i4), there are finitely many of them). Obviously, \(g(\rho)\) is analytic in \(\mathcal{D}\) and continuous in \(\overline{\mathcal{D}} \backslash\{ 0 \}\). If it is also analytic at zero, then \(\rho= 0\) is a removable singularity, \(g(\rho)\) is continuous in \(\overline{\mathcal{D}}\), and so \(g(\rho) = O(1)\). In the general case, one can approximate the potential \(Q(x)\) by the sequence of potentials

$$Q_{\beta}(x) = \begin{cases} Q(x), &0 \le x \le\beta, \\ 0,& x > \beta, \end{cases} $$

and use the technique from [5] (see Lemma 2.4.1).

Since under condition (31) the Jost solution exists for \(\rho= 0\), we have \(e(0, \rho) = O(1)\) as \(\rho\to0\). Taking (12) and \(g(\rho) = O(1)\) into account, we arrive at \(M(\lambda) = O(\rho^{-1})\), \(\rho\to0\). □

We combine the properties of the Weyl matrix in the next theorem.

Theorem 3

Let \(L = L(Q, h)\), \(Q = Q^{\dagger}\), \(h = h^{\dagger}\), \(Q \in L((0, \infty); \mathbb{C}^{m \times m})\), and let condition (31) hold. Then the Weyl matrix \(M(\lambda)\) of this problem is analytic in Π outside the finite set of simple poles \(\Lambda' = \{ \lambda_{k} \}_{k = 1}^{P}\), \(\lambda_{k} = \rho_{k}^{2} < 0\), and continuous in \(\Pi _{1}\backslash \Lambda\). Moreover,

$$\begin{aligned}& \alpha_{k} := \mathop{\operatorname{Res}}_{\lambda= \lambda_{k}} M(\lambda) \ge 0, \quad k = \overline{1, P}, \\& M(\lambda) = O\bigl(\rho^{-1}\bigr), \quad \rho\to0. \end{aligned}$$

The matrix function \(\rho V(\lambda)\) is continuous and bounded for \(\lambda> 0\) and \(V(\lambda) > 0\) for \(\lambda> 0\).

Proof

Fix an eigenvalue \(\lambda_{k}\), \(k = \overline{1, P}\). Consider the two representations (11) and (14) of \(\Phi(x, \lambda)\) and take the residue of each of them at the pole \(\lambda_{k}\). Then we obtain the relation

$$ \varphi(x, \lambda_{k}) \alpha_{k} = e(x, \rho_{k}) u_{k},\quad u_{k} := 2 \rho_{k} \mathop{\operatorname{Res}}_{\rho= \rho_{k}} \bigl(u(\rho) \bigr)^{-1}. $$
(36)

Note that the columns of the left-hand side and the right-hand side of (36) are vector eigenfunctions, corresponding to the eigenvalue \(\lambda_{k}\).

Further we consider ρ such that \(\operatorname{Re} \rho= 0\), \(\operatorname{Im} \rho> 0\). It is easy to check that

$$\bigl\langle e^{*}(x, \rho_{k}), e(x, \rho) \bigr\rangle _{x = \infty} - \bigl\langle e^{*}(x, \rho_{k}), e(x, \rho) \bigr\rangle _{x = 0} = (\lambda- \lambda_{k}) \int_{0}^{\infty} e^{*}(x, \rho_{k}) e(x, \rho) \,dx. $$

Using asymptotics (4) for \(e(x, \rho)\) and \(e^{*}(x, \rho _{k})\), we get

$$\lim_{\lambda\to\lambda_{k}} \frac{1}{\lambda- \lambda_{k}} \bigl\langle e^{*}(x, \rho_{k}), e(x, \rho) \bigr\rangle _{x = \infty} = 0_{m}. $$

By virtue of the self-adjointness, \(e^{*}(x, \rho) = e^{\dagger}(x, \rho )\), \(\varphi^{*}(x, \lambda) = \varphi^{\dagger}(x, \lambda)\), \(\lambda< 0\). Therefore

$$\mathcal{S} := u_{k}^{\dagger} \int_{0}^{\infty} e^{\dagger}(x, \rho _{k}) e(x, \rho_{k}) \,dx\, u_{k} = - \lim_{\lambda\to\lambda_{k}} \frac{u_{k}^{\dagger} \langle e^{\dagger}(x, \rho_{k}), e(x, \rho) \rangle_{x = 0} u_{k}}{\lambda- \lambda_{k}}. $$

Substituting (36), we obtain

$$\begin{aligned} \mathcal{S} &= - \lim_{\lambda\to\lambda_{k}} \frac{\alpha _{k}^{\dagger} ({\varphi ^{\dagger}}'(0, \lambda_{k}) e(0, \rho) - \varphi^{\dagger}(0, \lambda_{k}) e'(0, \rho) )}{\lambda- \lambda_{k}} \cdot\lim _{\lambda\to\lambda_{k}}(\lambda- \lambda_{k}) \bigl(u(\rho) \bigr)^{-1} \\ &=-\alpha_{k}^{\dagger} \lim_{\rho\to\rho_{k}} \bigl(h e(0, \rho) - e'(0, \rho)\bigr) \bigl(u(\rho)\bigr)^{-1} = \alpha_{k}^{\dagger}. \end{aligned}$$

Obviously, \(\mathcal{S} = \mathcal{S}^{\dagger} \ge0\). Hence \(\alpha_{k} = \alpha_{k}^{\dagger} \ge0\).

Now consider \(V(\lambda) = \frac{1}{2 \pi i} (M^{-}(\lambda) - M^{+}(\lambda))\), \(\lambda> 0\). Taking the relations (12) and \(M(\lambda) = M^{*}(\lambda)\) into account, we have

$$M^{-}(\lambda) = \bigl(u^{*}(-\rho)\bigr)^{-1} e^{*}(0, -\rho),\qquad M^{+}( \lambda) = e(0, \rho) \bigl(u(\rho)\bigr)^{-1},\quad \rho> 0. $$

Consequently,

$$V(\lambda) = -\frac{1}{2 \pi i} \bigl(u^{*}(-\rho)\bigr)^{-1} \bigl\langle e^{*}(x, -\rho), e(x, \rho) \bigr\rangle \bigl(u(\rho)\bigr)^{-1}. $$

Substituting (10), we get

$$V(\lambda) = \frac{\rho}{\pi} \bigl(u^{*}(-\rho)\bigr)^{-1} \bigl(u( \rho)\bigr)^{-1}. $$

For real values of ρ, one has \(e^{*}(x, -\rho) = e^{\dagger}(x, \rho )\), \(u^{*}(-\rho) = u^{\dagger}(\rho)\). Since in the self-adjoint case the set of spectral singularities \(\Lambda''\) is empty, \(\det u(\rho) \ne0\), \(\rho\in\mathbb{R} \backslash\{ 0 \}\). Hence \(V(\lambda) = V^{\dagger}(\lambda) > 0\).

The remaining assertions of the theorem do not need a proof. □

We call the collection \(( \{ V(\lambda)\}_{\lambda> 0}, \{ \lambda_{k}, \alpha_{k} \} _{k = 1}^{P} )\) the spectral data of L. Similarly to the scalar case (see [4]), the Weyl matrix is uniquely determined by the spectral data:

$$ M(\lambda) = \int_{0}^{\infty} \frac{V(\mu)}{\lambda- \mu} \,d\mu+ \sum_{k = 1}^{P} \frac{\alpha_{k}}{\lambda- \lambda_{k}},\quad \lambda\in\Pi\backslash \Lambda'. $$
(37)
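As a simple illustration, consider the unperturbed self-adjoint problem \(L(0_{m}, h)\), \(h = h^{\dagger}\). In this case \(e(x, \rho) = \exp(i \rho x) I_{m}\), \(u(\rho) = i \rho I_{m} - h\) and \(M(\lambda) = (i \rho I_{m} - h)^{-1}\). Writing the spectral decomposition \(h = \sum_{j} \mu_{j} P_{j}\) with eigenvalues \(\mu_{j}\) and orthogonal spectral projections \(P_{j}\), a direct computation gives the spectral data

$$\lambda_{k} = -\mu_{k}^{2},\qquad \alpha_{k} = 2 |\mu_{k}| P_{k} \quad (\mu_{k} < 0),\qquad V(\lambda) = \frac{\rho}{\pi} \bigl( \rho^{2} I_{m} + h^{2} \bigr)^{-1},\quad \lambda= \rho^{2} > 0, $$

so that the eigenvalues of L correspond to the negative eigenvalues of h, in agreement with Theorem 3: \(\alpha_{k} \ge0\), \(V(\lambda) > 0\) and \(M(\lambda) = O(\rho^{-1})\) as \(\rho\to0\).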

5 Self-adjoint case: the inverse problem

Now we are going to apply the general results of Section 3 to the self-adjoint case.

Let us rewrite the main equation (18) of the inverse problem in terms of the spectral data. Denote

$$\begin{aligned} \begin{aligned} &\lambda_{n0} = \lambda_{n}, \qquad \lambda_{n1} = \tilde{\lambda}_{n},\qquad \alpha_{n0} = \alpha_{n},\qquad \alpha_{n1} = \tilde{\alpha}_{n}, \\ &\varphi_{ni}(x) = \varphi(x, \lambda_{ni}),\qquad \tilde{ \varphi}_{ni}(x) = \tilde{\varphi}(x, \lambda_{ni}),\qquad \mathcal{M} = \bigl\{ (n, 0) \bigr\} _{n = 1}^{P} \cup\bigl\{ (n, 1) \bigr\} _{n = 1}^{\tilde{P}}. \end{aligned} \end{aligned}$$

Then the main equation (18) can be transformed into the system of equations

$$\begin{aligned}& \tilde{\varphi}(x, \lambda) = \varphi(x, \lambda) + \int _{0}^{\infty } \varphi(x, \mu) \hat{V}(\mu) \tilde{D}(x, \lambda, \mu) \,d\mu \\& \hphantom{\tilde{\varphi}(x, \lambda) =}{}+\sum_{k = 1}^{P} \varphi_{k0}(x) \alpha_{k0} \tilde{D}(x, \lambda, \lambda_{k0}) - \sum_{k = 1}^{\tilde{P}} \varphi_{k1}(x) \alpha_{k1} \tilde{D}(x, \lambda, \lambda_{k1}),\quad \lambda> 0, \end{aligned}$$
(38)
$$\begin{aligned}& \tilde{\varphi}_{ni}(x) = \varphi_{ni}(x) + \int _{0}^{\infty} \varphi (x, \mu) \hat{V}(\mu) \tilde{D}(x, \lambda_{ni}, \mu) \,d\mu \\& \hphantom{\tilde{\varphi}_{ni}(x) =}{} +\sum_{k = 1}^{P} \varphi_{k0}(x) \alpha_{k0} \tilde{D}(x, \lambda _{ni}, \lambda_{k0}) - \sum_{k = 1}^{\tilde{P}} \varphi_{k1}(x) \alpha_{k1} \tilde{D}(x, \lambda_{ni}, \lambda_{k1}),\quad (n, i) \in \mathcal{M}, \end{aligned}$$
(39)

with respect to the element \(\psi(x) := (\{ \varphi(x, \lambda ) \}_{\lambda> 0}, \{ \varphi_{ni}(x) \}_{(n, i) \in\mathcal{M}} )\) of the Banach space \(B_{S}\) of pairs \((F_{1}, F_{2})\), where

$$F_{1} \in C\bigl((0, \infty); \mathbb{C}^{m \times m}\bigr),\qquad F_{2} = \{ f_{ni} \}_{(n, i) \in\mathcal{M}},\quad f_{ni} \in\mathbb{C}^{m \times m}, $$

with the norm

$$\bigl\Vert (F_{1}, F_{2}) \bigr\Vert _{B_{S}} = \max \Bigl( \sup_{\lambda> 0} \bigl\Vert F_{1}(\lambda) \bigr\Vert , \max_{(n, i) \in\mathcal{M}} \| f_{ni} \| \Bigr). $$

System (38)-(39) has the form \(\psi(x) (I + \tilde{R}(x)) = \tilde{\psi}(x)\), where \(\tilde{R}(x) \colon B_{S} \to B_{S}\) is a linear compact operator for each fixed \(x \ge0\). In the necessity part, the main equation (18) is uniquely solvable, so the equivalent system (38)-(39) is uniquely solvable, and the operator \((I + \tilde{R}(x))\) has a bounded inverse. Now we are going to prove that all these facts follow from some simple properties of the spectral data.

We will say that data \(( \{ V(\lambda) \}_{\lambda> 0}, \{ \lambda_{k}, \alpha_{k} \} _{k = 1}^{P} )\) belong to the class Sp if

(i1):

\(\lambda_{k}\) are distinct negative numbers,

(i2):

\(\alpha_{k}\) are nonzero Hermitian matrices, \(\alpha_{k} \ge0\),

(i3):

the \(m \times m\) matrix function \(\rho V(\lambda)\) is continuous and bounded for \(\lambda> 0\), \(V(\lambda) > 0\), and \(M(\lambda) = O(\rho^{-1})\) as \(\rho\to0\), where \(M(\lambda)\) is defined by (37),

(i4):

there exists a model problem \(\tilde{L}\) such that (16) holds.

Note that the spectral data of any self-adjoint boundary value problem \(L(Q, h)\) belong to Sp.

Lemma 2

Let data \(( \{ V(\lambda) \}_{\lambda> 0}, \{ \lambda_{k}, \alpha_{k} \}_{k = 1}^{P} )\) belong to Sp. Then, for each fixed \(x \ge0\), system (38)-(39) is uniquely solvable. In other words, the operator \((I + \tilde{R}(x))\) is invertible.

Proof

Fix \(x \ge0\). The operator \(\tilde{R}(x)\) is compact, so it is sufficient to prove that the homogeneous system corresponding to (38)-(39) has only the trivial solution. Let \((\{ \beta(x, \lambda) \}_{\lambda> 0}, \{ \beta_{ni}(x) \} _{(n, i) \in \mathcal{M}} ) \in B_{S}\) be a solution of the homogeneous system

$$\begin{aligned}& \beta(x, \lambda) + \int_{0}^{\infty} \beta(x, \mu) \hat{V}(\mu) \tilde{D}(x, \lambda , \mu) \,d\mu \\& \quad {}+\sum_{k = 1}^{P} \beta_{k0}(x) \alpha_{k0} \tilde{D}(x, \lambda, \lambda_{k0}) - \sum_{k = 1}^{\tilde{P}} \beta_{k1}(x) \alpha_{k1} \tilde{D}(x, \lambda, \lambda _{k1}) = 0_{m}, \quad \lambda> 0, \\& \beta_{ni}(x) + \int_{0}^{\infty} \beta(x, \mu) \hat{V}(\mu) \tilde{D}(x, \lambda _{ni}, \mu) \,d\mu \\& \qquad {}+\sum_{k = 1}^{P} \beta_{k0}(x) \alpha_{k0} \tilde{D}(x, \lambda_{ni}, \lambda_{k0}) - \sum_{k = 1}^{\tilde{P}} \beta_{k1}(x) \alpha_{k1} \tilde{D}(x, \lambda_{ni}, \lambda_{k1}) \\& \quad = 0_{m},\quad (n, i) \in\mathcal{M}. \end{aligned}$$
(40)

Note that formula (40) gives an analytic continuation of the matrix function \(\beta(x, \lambda)\) to the whole λ-plane. Clearly, \(\beta(x, \lambda_{ni}) = \beta_{ni}(x)\).

Using the standard estimate \(\| \tilde{D}(x, \lambda, \lambda_{kj}) \| \le\frac {C}{|\rho|} \exp(|\tau|x)\) (see [4, 19]) together with (16), one can show that

$$ \bigl\Vert \beta(x, \lambda) \bigr\Vert \le\frac{C}{|\rho|} \exp\bigl(\vert \tau \vert x\bigr). $$
(41)

Define the function

$$\begin{aligned} \Gamma(x, \lambda) =& -\int_{0}^{\infty} \beta(x, \mu) \hat{V}(\mu) \frac{\langle \tilde{\varphi}^{*}(x, \mu), \tilde{\Phi}(x, \lambda) \rangle}{\lambda - \mu} \,d\mu \\ &{}- \sum_{k = 1}^{P} \beta_{k0}(x) \alpha_{k0} \frac{\langle\tilde{\varphi} ^{*}_{k0}(x), \tilde{\Phi}(x, \lambda) \rangle}{\lambda- \lambda _{k0}} + \sum_{k = 1}^{\tilde{P}} \beta_{k1}(x) \alpha_{k1} \frac{\langle \tilde{\varphi} ^{*}_{k1}(x), \tilde{\Phi}(x, \lambda) \rangle}{\lambda- \lambda_{k1}}. \end{aligned}$$
(42)

Using the relations \(\tilde{\Phi}(x, \lambda) = \tilde{S}(x, \lambda) + \tilde{\varphi} (x, \lambda) \tilde{M}(\lambda)\) and (40), one can easily derive the following formula:

$$\begin{aligned} \Gamma(x, \lambda) =& \beta(x, \lambda) \tilde{M}(\lambda) - \int _{0}^{\infty} \beta(x, \mu ) \hat{V}(\mu) \frac{\langle\tilde{\varphi}^{*}(x, \mu), \tilde{S}(x, \lambda) \rangle}{\lambda- \mu } \,d\mu \\ &{}-\sum_{k = 1}^{P} \beta_{k0}(x) \alpha_{k0} \frac{\langle\tilde{\varphi} ^{*}_{k0}(x), \tilde{S}(x, \lambda) \rangle}{\lambda- \lambda_{k0}} + \sum_{k = 1}^{\tilde{P}} \beta_{k1}(x) \alpha_{k1} \frac{\langle \tilde{\varphi} ^{*}_{k1}(x), \tilde{S}(x, \lambda) \rangle}{\lambda- \lambda_{k1}}. \end{aligned}$$

Since \(\langle\tilde{\varphi}^{*}(x, \mu), \tilde{S}(x, \lambda) \rangle_{x = 0} = -I_{m}\), we have

$$\frac{\langle\tilde{\varphi}^{*}(x, \mu), \tilde{S}(x, \lambda) \rangle}{\lambda- \mu } = -\frac {I_{m}}{\lambda- \mu} + \int_{0}^{x} \tilde{\varphi}^{*}(t, \mu) \tilde{S}(t, \lambda) \,dt. $$

Consequently, we can represent \(\Gamma(x, \lambda)\) in the following form:

$$\begin{aligned} \Gamma(x, \lambda) =& \beta(x, \lambda) \tilde{M}(\lambda) + \int _{0}^{\infty} \frac{\beta (x, \mu) \hat{V}(\mu)}{\lambda- \mu} \,d\mu \\ &{}+ \sum _{k = 1}^{P} \frac{\beta_{k0}(x) \alpha_{k0}}{\lambda- \lambda _{k0}} - \sum _{k = 1}^{\tilde{P}} \frac{\beta_{k1}(x) \alpha_{k1}}{\lambda- \lambda_{k1}} + \Gamma_{1}(x, \lambda), \end{aligned}$$

where the matrix function \(\Gamma_{1}(x, \lambda)\) is entire in λ, since \(\tilde{S}(x, \lambda)\) is entire. Taking (37) into account, we obtain

$$\Gamma(x, \lambda) = \beta(x, \lambda) M(\lambda) + \Gamma_{1}(x, \lambda) - \Gamma_{2}(x, \lambda), $$

where

$$\begin{aligned} \Gamma_{2}(x, \lambda) =& \int_{0}^{\infty} \frac{(\beta(x, \lambda) - \beta(x, \mu)) \hat{V}(\mu)}{\lambda- \mu}\,d\mu+ \sum_{k = 1}^{P} \frac{(\beta(x, \lambda) - \beta_{k0}(x)) \alpha _{k0}}{\lambda- \lambda_{k0}} \\ &{}- \sum_{k = 1}^{\tilde{P}} \frac{(\beta(x, \lambda) - \beta_{k1}(x)) \alpha _{k1}}{\lambda- \lambda_{k1}}. \end{aligned}$$

Obviously, the function \(\Gamma_{2}(x, \lambda)\) is entire in λ. Therefore, the function \(\Gamma(x, \lambda)\) has simple poles at the points \(\Lambda'\) and

$$ \mathop{\operatorname{Res}}_{\lambda= \lambda_{k0}} \Gamma(x, \lambda) = \beta_{k0}(x) \alpha_{k0}, \quad k = \overline{1, P}. $$

Furthermore,

$$\frac{1}{2 \pi i} \bigl( \Gamma^{-}(\lambda) - \Gamma^{+}(\lambda) \bigr) = \beta (x, \lambda) V(\lambda),\qquad \Gamma^{\pm}(\lambda) := \lim _{z \to0, \operatorname{Re} z > 0} \Gamma (\lambda\pm i z), \quad \lambda> 0. $$

Using (41) and the standard asymptotics for \(\tilde{\varphi}^{*}(x, \mu)\) and \(\tilde{\Phi}(x, \lambda)\), one arrives at the estimate

$$ \bigl\Vert \Gamma(x, \lambda) \bigr\Vert \le C | \rho|^{-2} \exp\bigl(-\vert \tau \vert x\bigr),\quad |\lambda| \to \infty. $$
(43)

Introduce the matrix function \(\mathcal{B}(x, \lambda) := \Gamma(x, \lambda) \beta^{\dagger}(x, \bar{\lambda})\), and consider the integral

$$\mathcal{I} := \frac{1}{2 \pi i} \int_{\gamma_{R}^{0}} \mathcal{B}(x, \lambda) \,d\lambda $$

over the contour \(\gamma_{R}^{0} := ( \gamma\cap\{ \lambda\colon|\lambda| \le R\} ) \cup\{ \lambda\colon|\lambda| = R \}\) (see Figure 2). For a sufficiently large radius R, \(\mathcal{I} = 0_{m}\) by the Cauchy theorem. In view of estimates (41), (43), we have \(\| \mathcal{B}(x, \lambda) \| \le C |\rho|^{-3}\), \(|\lambda| \to\infty\). Hence

$$\lim_{R \to\infty} \frac{1}{2 \pi i} \int_{|\lambda| = R} \mathcal{B}(x, \lambda) \,d\lambda= 0_{m},\qquad \frac{1}{2 \pi i} \int_{\gamma} \mathcal{B}(x, \lambda) \,d\lambda= 0_{m}. $$

The last integral over γ can be calculated by the residue theorem. It equals

$$\sum_{k = 1}^{P} \beta_{k0}(x) \alpha_{k0} \beta_{k0}^{\dagger}(x) + \frac{1}{2 \pi i} \int_{|\lambda| = \varepsilon} \mathcal{B}(x, \lambda) \,d\lambda+ \int_{\varepsilon}^{\infty} \beta(x, \lambda) V(\lambda) \beta^{\dagger}(x, \lambda) \,d\lambda, $$

where \(\varepsilon> 0\) is sufficiently small. Note that since \(M(\lambda) = O(\rho ^{-1})\) as \(\lambda\to0\), the second term in the sum tends to zero as \(\varepsilon\to0\). Finally, we obtain

$$\sum_{k = 1}^{P} \beta_{k0}(x) \alpha_{k0} \beta _{k0}^{\dagger}(x) + \int _{0}^{\infty} \beta(x, \lambda) V(\lambda) \beta^{\dagger}(x, \lambda) \,d\lambda= 0_{m}. $$

Since \(\alpha_{k0} \ge0\), \(V(\lambda) > 0\), we get \(\beta_{k0}(x) \alpha_{k0} = 0_{m}\), \(\beta(x, \lambda) V(\lambda) = 0_{m}\), and \(\beta(x, \lambda) = 0\) for \(\lambda> 0\). Since \(\beta(x, \lambda )\) is an entire function of λ, we conclude that \(\beta(x, \lambda) \equiv0_{m}\). Consequently, \(\beta_{ni}(x) = \beta(x, \lambda_{ni}) = 0_{m}\) for \((n, i) \in\mathcal{M}\). Thus, the homogeneous system has only the trivial solution, so system (38)-(39) is uniquely solvable.

Figure 2. Contour \(\gamma^{0}_{R}\).

 □

Solving the main equation, one can construct the following matrix functions:

$$ \begin{aligned} &\varepsilon_{0}(x) := \int _{0}^{\infty} \varphi(x, \mu) \hat{V}(\mu) \tilde{\varphi}^{*}(x, \mu) \,d\mu+ \sum_{k = 1}^{P} \varphi_{k0}(x) \alpha_{k0} \tilde{\varphi}^{*}_{k0}(x) - \sum_{k = 1}^{\tilde{P}} \varphi_{k1}(x) \alpha_{k1} \tilde{\varphi} ^{*}_{k1}(x), \\ &\varepsilon(x) := -2 \varepsilon_{0}'(x), \end{aligned} $$
(44)

and then recover \(Q(x)\) and h via (22). Theorem 2 and Lemma 2 yield the following theorem.

Theorem 4

For data \(S := ( \{ V(\lambda)\}_{\lambda > 0}, \{ \lambda_{k}, \alpha_{k} \}_{k = 1}^{P} )\) to be the spectral data of some self-adjoint boundary value problem \(L(Q, h)\), \(Q = Q^{\dagger}\), \(h = h^{\dagger}\), satisfying (31), it is necessary and sufficient that S belong to the class Sp and that \((1 + x) \varepsilon(x) \in L((0, \infty); \mathbb{C}^{m \times m})\), where \(\varepsilon(x)\) is constructed via (44) from the unique solution \(\varphi(x, \lambda)\) of system (38)-(39).

6 Perturbation of the discrete spectrum

Return to the general non-self-adjoint problem and consider one more particular case, in which the solvability of the main equation (18) can easily be checked. Let the problem \(\tilde{L}\) be given, and let \(\tilde{M}(\lambda)\) be its Weyl matrix. Consider the matrix function

$$ M(\lambda) = \tilde{M}(\lambda) + \sum _{k = 1}^{P} \sum_{\nu= 1}^{m_{k}} \frac{\alpha _{k\nu}}{(\lambda- \lambda_{k})^{\nu}}, $$
(45)

where \(\lambda_{k} \in\mathbb{C}\) are some distinct numbers and \(\alpha_{k\nu} \in\mathbb{C}^{m \times m}\), \(k = \overline{1, P}\), \(\nu= \overline{1, m_{k}}\). Then \(\hat{V}(\lambda) = 0_{m}\), and by virtue of the residue theorem, the main equation (18) takes the form

$$\tilde{\varphi}(x, \lambda) = \varphi(x, \lambda) + \sum _{k = 1}^{P} \sum_{i = 0}^{m_{k} - 1} \frac{\partial^{i}}{\partial\lambda^{i}} \varphi(x, \lambda_{k}) \sum _{\nu= i + 1}^{m_{k}} \alpha_{k\nu} \tilde{D}_{\langle0, \nu- i - 1 \rangle}(x, \lambda, \lambda_{k}), $$

where \(\tilde{D}_{\langle i, j \rangle}(x, \lambda, \mu) := \frac{\partial^{i + j}}{\partial\lambda^{i}\, \partial\mu^{j}} \tilde{D}(x, \lambda, \mu)\). Differentiating this relation \(s\) times with respect to λ and putting \(\lambda = \lambda_{n}\), we arrive at the following system of linear algebraic equations with respect to the unknowns \(\{ \frac{\partial^{s}}{\partial\lambda^{s}} \varphi(x, \lambda_{n}) \}\):

$$ \frac{\partial^{s}}{\partial\lambda^{s}} \tilde{\varphi}(x, \lambda_{n}) = \frac{\partial^{s}}{\partial\lambda^{s}} \varphi(x, \lambda_{n}) + \sum _{k = 1}^{P} \sum_{i = 0}^{m_{k} - 1} \frac{\partial^{i}}{\partial\lambda^{i}} \varphi(x, \lambda_{k}) \sum _{\nu= i + 1}^{m_{k}} \alpha_{k\nu} \tilde{D}_{\langle s, \nu- i - 1 \rangle}(x, \lambda_{n}, \lambda_{k}), $$
(46)

\(n = \overline{1, P}\), \(s = \overline{0, m_{n} - 1}\). System (46) has a unique solution if and only if its determinant is not zero. Having the solution of (46), one can construct

$$ \varepsilon_{0}(x) = \sum_{k =1}^{P} \sum_{i = 0}^{m_{k} - 1} \frac {\partial ^{i}}{\partial\lambda^{i}} \varphi(x, \lambda_{k}) \sum_{\nu= i + 1}^{m_{k}} \alpha_{k \nu} \frac{\partial^{\nu- i - 1}}{\partial\lambda^{\nu- i - 1}} \tilde{\varphi}^{*}(x, \lambda_{k}),\qquad \varepsilon(x) = -2 \varepsilon_{0}'(x), $$
(47)

and then find \(Q(x)\) and h via (22).
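To illustrate, consider the simplest case \(P = 1\), \(m_{1} = 1\) of a single simple pole (a worked example, not contained in the original argument; \(I_{m}\) denotes the \(m \times m\) identity matrix). System (46) reduces to the single matrix equation

$$\tilde{\varphi}(x, \lambda_{1}) = \varphi(x, \lambda_{1}) \bigl( I_{m} + \alpha_{11} \tilde{D}(x, \lambda_{1}, \lambda_{1}) \bigr), $$

so the corresponding determinant condition reads \(\det ( I_{m} + \alpha_{11} \tilde{D}(x, \lambda_{1}, \lambda_{1}) ) \ne 0\) for all \(x \ge 0\), and (47) becomes

$$\varphi(x, \lambda_{1}) = \tilde{\varphi}(x, \lambda_{1}) \bigl( I_{m} + \alpha_{11} \tilde{D}(x, \lambda_{1}, \lambda_{1}) \bigr)^{-1}, \qquad \varepsilon_{0}(x) = \varphi(x, \lambda_{1}) \alpha_{11} \tilde{\varphi}^{*}(x, \lambda_{1}). $$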

Theorem 5

For a matrix function \(M(\lambda)\) of the form (45) to be the Weyl matrix of some boundary value problem L, it is necessary and sufficient that the determinant of system (46) be nonzero and that \(\varepsilon(x) \in L((0, \infty); \mathbb{C}^{m \times m})\), where \(\varepsilon(x)\) is defined by (47).

An example in [4, Section 2.3.2] shows that, even in the simple case of a finite perturbation, the condition \(\varepsilon(x) \in L((0, \infty); \mathbb{C}^{m \times m})\) is essential and cannot be omitted. Hence it is crucial in Theorems 2, 4, and 5.
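As a purely numerical illustration of this finite-perturbation step (not part of the original paper), the following Python sketch checks the determinant condition and solves the single-simple-pole case of system (46) at a fixed point x. The matrices phi_tilde, phi_tilde_star, alpha11, and D11 are hypothetical stand-ins for \(\tilde{\varphi}(x, \lambda_{1})\), \(\tilde{\varphi}^{*}(x, \lambda_{1})\), \(\alpha_{11}\), and \(\tilde{D}(x, \lambda_{1}, \lambda_{1})\), which in practice are computed from the known model problem \(\tilde{L}\).

import numpy as np

def solve_simple_pole(phi_tilde, phi_tilde_star, alpha11, D11):
    """Single-simple-pole case (P = 1, m_1 = 1) of system (46) at a fixed x:
    phi_tilde = phi (I + alpha11 D11), hence phi = phi_tilde (I + alpha11 D11)^(-1),
    and epsilon_0 = phi alpha11 phi_tilde_star as in (47). All inputs are m x m matrices."""
    m = alpha11.shape[0]
    A = np.eye(m) + alpha11 @ D11
    if abs(np.linalg.det(A)) < 1e-12:
        raise ValueError("determinant condition for system (46) fails at this x")
    phi = phi_tilde @ np.linalg.inv(A)      # unique solution of the reduced system
    eps0 = phi @ alpha11 @ phi_tilde_star   # epsilon_0(x) from (47)
    return phi, eps0

# Hypothetical 2 x 2 data at one point x (illustration only).
rng = np.random.default_rng(0)
phi_tilde = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
phi_tilde_star = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
alpha11 = np.diag([0.5, 0.2]).astype(complex)
D11 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

phi, eps0 = solve_simple_pole(phi_tilde, phi_tilde_star, alpha11, D11)
print(np.round(eps0, 3))

In an actual reconstruction one would repeat this solve on a grid of x values and then differentiate \(\varepsilon_{0}\) numerically to obtain \(\varepsilon(x) = -2 \varepsilon_{0}'(x)\).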

References

  1. Marchenko, VA: Sturm-Liouville Operators and Their Applications. Naukova Dumka, Kiev (1977) (Russian); English transl.: Birkhäuser, Basel (1986)

  2. Levitan, BM: Inverse Sturm-Liouville Problems. Nauka, Moscow (1984) (Russian); English transl.: VNU Sci. Press, Utrecht (1987)

  3. Pöschel, J, Trubowitz, E: Inverse Spectral Theory. Academic Press, New York (1987)

  4. Freiling, G, Yurko, V: Inverse Sturm-Liouville Problems and Their Applications. Nova Science Publishers, Huntington (2001)

  5. Agranovich, ZS, Marchenko, VA: The Inverse Problem of Scattering Theory. KSU, Kharkov (1960) (Russian); English transl.: Gordon & Breach, New York (1963)

  6. Beals, R, Henkin, GM, Novikova, NN: The inverse boundary problem for the Rayleigh system. J. Math. Phys. 36(12), 6688-6708 (1995)

  7. Boutet de Monvel, A, Shepelsky, D: Inverse scattering problem for anisotropic media. J. Math. Phys. 36(7), 3443-3453 (1995)

  8. Calogero, F, Degasperis, A: Nonlinear evolution equations solvable by the inverse spectral transform II. Nuovo Cimento B 39(1), 1-54 (1977)

  9. Yang, C-F: Trace formulae for the matrix Schrödinger equation with energy-dependent potential. J. Math. Anal. Appl. 393, 526-533 (2012)

  10. Carlson, R: Large eigenvalues and trace formulas for matrix Sturm-Liouville problems. SIAM J. Math. Anal. 30(5), 949-962 (1999)

  11. Carlson, R: Eigenvalue estimates and trace formulas for the matrix Hill’s equation. J. Differ. Equ. 167, 211-244 (2000)

  12. Chern, H-H: On the eigenvalues of some vectorial Sturm-Liouville eigenvalue problems. arXiv:math/9902019 [math.SP]

  13. Dwyer, HI, Zettl, A: Computing eigenvalues of regular Sturm-Liouville problems. Electron. J. Differ. Equ. 1994, 6 (1994)

  14. Malamud, MM: Uniqueness of the matrix Sturm-Liouville equation given a part of the monodromy matrix, and Borg type results. In: Sturm-Liouville Theory, pp. 237-270. Birkhäuser, Basel (2005)

  15. Yurko, VA: Inverse problems for the matrix Sturm-Liouville equation on a finite interval. Inverse Problems 22, 1139-1149 (2006)

  16. Chelkak, D, Korotyaev, E: Weyl-Titchmarsh functions of vector-valued Sturm-Liouville operators on the unit interval. J. Funct. Anal. 257, 1546-1588 (2009)

  17. Mykytyuk, YV, Trush, NS: Inverse spectral problems for Sturm-Liouville operators with matrix-valued potentials. Inverse Problems 26, 015009 (2010)

  18. Bondarenko, N: An inverse problem for the non-self-adjoint matrix Sturm-Liouville operator. arXiv:1407.3581 [math.SP]

  19. Freiling, G, Yurko, V: An inverse problem for the non-self-adjoint matrix Sturm-Liouville equation on the half-line. J. Inverse Ill-Posed Probl. 15, 785-798 (2007)

  20. Yurko, VA: Method of Spectral Mappings in the Inverse Problem Theory. Inverse and Ill-Posed Problems Series. VSP, Utrecht (2002)

  21. Avdonin, SA, Belishev, MI, Ivanov, SA: Boundary control and a matrix inverse problem for the equation \(u_{tt}-u_{xx}+V(x)u=0\). Mat. Sb. 182(3), 307-331 (1991)

  22. Bondarenko, N, Freiling, G: An inverse problem for the quadratic pencil of non-self-adjoint matrix operators on the half-line. J. Inverse Ill-Posed Probl. 22, 467-495 (2014)

Acknowledgements

This work was supported by Grant 1.1436.2014K of the Russian Ministry of Education and Science and by Grants 13-01-00134 and 14-01-31042 of the Russian Foundation for Basic Research.

Author information

Correspondence to Natalia Bondarenko.

Additional information

Competing interests

The author declares that she has no competing interests.

Author’s contributions

The author alone obtained the results and prepared the manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

About this article

Cite this article

Bondarenko, N. An inverse spectral problem for the matrix Sturm-Liouville operator on the half-line. Bound Value Probl 2015, 15 (2015). https://doi.org/10.1186/s13661-014-0275-3
