In this section, we assume that the boundary value problem L is self-adjoint: \(Q(x) = Q^{\dagger}(x)\) a.e. on \((0, \infty)\), \(h = h^{\dagger}\). We show that its spectrum has the following Properties (i1)-(i6). Similar facts for the Dirichlet boundary condition were proved in [5].
Property (i1)
The problem
L
does not have spectral singularities: \(\Lambda'' = \varnothing\).
Proof
We have to prove that \(\det u(\rho) \ne0\) for \(\rho\in\mathbb{R} \backslash\{ 0 \}\). In view of (3) and (9),
$$\begin{aligned} e^{\dagger}(x, \rho) =& \exp(- i\rho x) I_{m} + \frac{1}{2 i \rho} \int_{x}^{\infty} \bigl(\exp\bigl(-i \rho(x - t) \bigr) - \exp\bigl(i \rho(x - t)\bigr) \bigr) e^{\dagger}(t, \rho) Q(t) \,dt \\ =& e^{*}(x, - \rho) \end{aligned}$$
for \(\rho\in\mathbb{R} \backslash\{ 0 \}\), therefore \(u^{\dagger}(\rho) = u^{*}(-\rho)\) for such ρ. Suppose that there exist a real \(\rho_{0} \ne0\) and a nonzero vector a such that \(u(\rho_{0}) a = 0\) and, consequently, \(a^{\dagger} u^{*}(- \rho_{0}) = 0\). On the one hand,
$$a^{\dagger} \bigl\langle e^{*}(x, -\rho_{0}), e(x, \rho_{0}) \bigr\rangle a = \bigl[ a^{\dagger} u^{*}(- \rho_{0}) \bigr] e(0, \rho_{0}) a - a^{\dagger} e^{*}(0, - \rho_{0}) \bigl[u(\rho_{0}) a\bigr] = 0. $$
On the other hand, using (10), we obtain
$$a^{\dagger} \bigl\langle e^{*}(x, -\rho_{0}), e(x, \rho_{0}) \bigr\rangle a = -2 i\rho _{0} a^{\dagger} a \ne0. $$
Thus we arrive at a contradiction, which proves the property. □
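The relation \(u^{\dagger}(\rho) = u^{*}(-\rho)\) used above can be written out in one line. This is a minimal sketch, assuming (as in the preceding sections, not restated here) that \(u(\rho) = e'(0, \rho) - h e(0, \rho)\) and \(u^{*}(\rho) = {e^{*}}'(0, \rho) - e^{*}(0, \rho) h\):
$$u^{\dagger}(\rho) = \bigl(e'(0, \rho) - h e(0, \rho) \bigr)^{\dagger} = {e^{\dagger}}'(0, \rho) - e^{\dagger}(0, \rho) h = {e^{*}}'(0, -\rho) - e^{*}(0, -\rho) h = u^{*}(-\rho), \quad \rho\in\mathbb{R} \backslash\{ 0 \}. $$
The second equality uses \(h = h^{\dagger}\), and the third one uses the relation \(e^{\dagger}(x, \rho) = e^{*}(x, -\rho)\) obtained above.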
Property (i2)
All the nonzero eigenvalues are real and negative: \(\lambda_{k} = \rho_{k}^{2} < 0\), \(\rho_{k} = i \tau_{k}\), \(\tau_{k} > 0\).
Indeed, the eigenvalues of L are real because of the self-adjointness. In view of [19, Theorem 2.4], they cannot be positive.
Property (i3)
The poles of the matrix function \((u(\rho ))^{-1}\) in the upper half-plane are simple. (They coincide with \(i \tau_{k}\).)
Proof
Start from the relations
$$\begin{aligned}& -e''(x, \rho) + Q(x) e(x, \rho) = \rho^{2} e(x, \rho), \end{aligned}$$
(26)
$$\begin{aligned}& -{e^{*}}''(x, \rho) + e^{*}(x, \rho) Q(x) = \rho^{2} e^{*}(x, \rho). \end{aligned}$$
(27)
Differentiate (27) with respect to ρ, multiply the result by \(e(x, \rho)\) from the right, and subtract (26) multiplied by \(\frac{d}{d\rho}e^{*}(x, \rho)\) from the left:
$$ \frac{d}{d \rho} e^{*}(x, \rho) e''(x, \rho) - \frac{d}{d\rho} {e^{*}}''(x, \rho) e(x, \rho) = 2 \rho e^{*}(x, \rho) e(x, \rho). $$
Note that the left-hand side of this relation equals \(-\frac {d}{dx}\langle\frac{d}{d\rho}e^{*}(x, \rho), e(x, \rho)\rangle\). Integrating with respect to x from 0 to ∞, we get
$$-{\biggl.\biggl\langle \frac{d}{d\rho}e^{*}(x, \rho), e(x, \rho)\biggr\rangle \biggr|_{0}^{\infty}} = 2 \rho\int_{0}^{\infty} e^{*}(x, \rho) e(x, \rho) \,dx. $$
Let \(\operatorname{Im} \rho> 0\). Then the matrix functions \(e(x, \rho)\), \(\frac {d}{d\rho}e^{*}(x, \rho)\) and their x-derivatives tend to zero as \(x \to\infty\). Consequently, we obtain
$$ \frac{d}{d\rho}u^{*}(\rho) e(0, \rho) - \frac{d}{d\rho}e^{*}(0, \rho) u(\rho) = 2 \rho\int_{0}^{\infty} e^{*}(x, \rho) e(x, \rho) \,dx. $$
(28)
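For clarity: the left-hand side of (28) is the bracket \(\langle\frac{d}{d\rho}e^{*}(x, \rho), e(x, \rho)\rangle\) evaluated at \(x = 0\). This is a short check, assuming \(u(\rho) = e'(0, \rho) - h e(0, \rho)\), \(u^{*}(\rho) = {e^{*}}'(0, \rho) - e^{*}(0, \rho) h\) and the bracket convention \(\langle Y, Z \rangle = Y' Z - Y Z'\) (none of which is restated in this section); the terms containing h cancel:
$$\frac{d}{d\rho}u^{*}(\rho) e(0, \rho) - \frac{d}{d\rho}e^{*}(0, \rho) u(\rho) = \frac{d}{d\rho}{e^{*}}'(0, \rho) e(0, \rho) - \frac{d}{d\rho}e^{*}(0, \rho) e'(0, \rho) = \biggl\langle \frac{d}{d\rho}e^{*}(x, \rho), e(x, \rho)\biggr\rangle \bigg|_{x = 0}. $$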
Let ρ be equal to \(\rho_{0} = \sqrt{\lambda_{0}}\), where \(\lambda _{0}\) is an eigenvalue, and \(u(\rho_{0}) a = 0\), \(a \ne0\). For purely imaginary ρ, one has \(e^{*}(x, \rho) = e^{\dagger}(x, \rho)\) and \(u^{*}(\rho) = u^{\dagger }(\rho )\). So we derive from (28)
$$ -a^{\dagger} \frac{d}{d\rho}u^{\dagger}( \rho_{0}) e(0, \rho_{0}) a = 2 \rho _{0} a^{\dagger} \int_{0}^{\infty} e^{\dagger}(x, \rho_{0}) e(x, \rho _{0}) \,dx\, a \ne0. $$
(29)
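The minus sign in (29) can be traced as follows; here \(\frac{d}{d\rho}u^{\dagger}(\rho_{0})\) is understood as the adjoint of the derivative, \((\frac{d}{d\rho}u(\rho_{0}))^{\dagger}\), consistently with its use after (30) below. This is a brief sketch under the assumption \(e^{*}(x, \rho) = (e(x, -\bar{\rho}))^{\dagger}\) for \(\operatorname{Im} \rho\ge0\), which reduces to the relation from the proof of Property (i1) on the real axis. Since \(-\bar{\rho} = \rho\) on the imaginary axis, expanding both sides near \(\rho_{0}\) gives
$$\frac{d}{d\rho} u^{*}(\rho) \bigg|_{\rho= \rho_{0}} = -\biggl(\frac{d}{d\rho} u(\rho) \bigg|_{\rho= \rho_{0}} \biggr)^{\dagger} = -\frac{d}{d\rho}u^{\dagger}(\rho_{0}), $$
and multiplying (28) at \(\rho= \rho_{0}\) by \(a^{\dagger}\) from the left and by a from the right, with \(u(\rho_{0}) a = 0\), yields (29).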
In order to prove the simplicity of the poles for \((u(\rho))^{-1}\), we adapt Lemma 2.2.1 from [5]:
The inverse \((u(\rho))^{-1}\) has a simple pole at \(\rho= \rho_{0}\) if and only if the relations
$$ u(\rho_{0}) a = 0, \qquad u(\rho_{0}) b + \frac{d}{d\rho} u(\rho_{0}) a = 0, $$
(30)
where a and b are constant vectors, yield \(a = 0\).
Let vectors a and b satisfy (30). Then
$$-a^{\dagger} \frac{d}{d\rho}u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a = b^{\dagger} u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a. $$
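This display is pure algebra from the second relation in (30); a minimal check, writing \(\frac{d}{d\rho}u^{\dagger}(\rho_{0})\) for \((\frac{d}{d\rho}u(\rho_{0}))^{\dagger}\):
$$\frac{d}{d\rho} u(\rho_{0}) a = -u(\rho_{0}) b \quad\Longrightarrow\quad a^{\dagger} \frac{d}{d\rho}u^{\dagger}(\rho_{0}) = -b^{\dagger} u^{\dagger}(\rho_{0}) \quad\Longrightarrow\quad -a^{\dagger} \frac{d}{d\rho}u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a = b^{\dagger} u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a. $$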
Since
$$\bigl\langle e^{*}(x, \rho), e(x, \rho) \bigr\rangle = \bigl\langle e^{*}(x, \rho), e(x, \rho) \bigr\rangle _{x = \infty} = 0, \quad \operatorname{Im} \rho> 0, $$
one has
$$b^{\dagger} u^{\dagger}(\rho_{0}) e(0, \rho_{0}) a = b^{\dagger} e^{*}(0, \rho _{0}) u(\rho_{0}) a = 0, $$
but this contradicts (29). So \(a = 0\), and the square root \(\rho_{0} = \sqrt{\lambda_{0}}\) of every eigenvalue \(\lambda_{0}\) is a simple pole of \((u(\rho))^{-1}\). □
The next properties hold under the additional condition
$$ \int_{0}^{\infty} x \bigl\Vert Q(x) \bigr\Vert \,dx < \infty. $$
(31)
Property (i4)
The number of eigenvalues is finite.
Proof
We prove the assertion by contradiction. Suppose that there is an infinite sequence \(\{ \lambda_{k} \}_{k = 1}^{\infty}\) of negative eigenvalues, \(\rho_{k} = \sqrt{\lambda_{k}}\), and let \(\{ Y_{k}(x) \}_{k = 1}^{\infty}\) be an orthogonal sequence of corresponding vector eigenfunctions. Note that there can be multiple eigenvalues; their multiplicities are finite and equal to \(m - \operatorname{rank} u(\rho_{k})\). A multiple eigenvalue occurs in the sequence \(\{ \lambda_{k}\}_{k = 1}^{\infty}\) several times, with different eigenfunctions \(Y_{k}(x)\). The eigenfunctions can be represented in the form \(Y_{k}(x) = e(x, \rho_{k}) N_{k}\), \(\| N_{k} \| = 1\).
Using the orthogonality of the eigenfunctions, we obtain for \(k \ne n\),
$$\begin{aligned} 0 =& \int_{0}^{\infty} Y_{k}^{\dagger}(x) Y_{n}(x) \,dx \\ =& N_{k}^{\dagger}\int_{A}^{\infty} e^{\dagger}(x, \rho_{k}) e(x, \rho_{n}) \,dx\, N_{n} + \int_{0}^{A} Y_{k}^{\dagger}(x) Y_{k}(x) \,dx \\ &{}+ \int_{0}^{A} Y^{\dagger}_{k}(x) \bigl(Y_{n}(x) - Y_{k}(x)\bigr) \,dx =: \mathcal{I}_{1} + \mathcal{I}_{2} + \mathcal{I}_{3}. \end{aligned}$$
(32)
Similarly to the scalar case [4, Theorem 2.3.4], one can show that \(e(x, \rho_{k}) = \exp(-\tau_{k} x) (I_{m} + \alpha_{k}(x))\), where \(\| \alpha_{k}(x) \| \le\frac{1}{8}\) for \(x \ge A\), for all \(k \ge1\) and for sufficiently large A. Consequently,
$$\begin{aligned} \mathcal{I}_{1} =& N_{k}^{\dagger} \int _{A}^{\infty} \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta_{kn}(x)\bigr) \,dx\, N_{k} \\ &{}+ N_{k}^{\dagger} \int_{A}^{\infty} \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta _{kn}(x)\bigr) \,dx \, (N_{n} - N_{k}), \quad \bigl\Vert \beta_{kn}(x) \bigr\Vert \le \frac{3}{8}. \end{aligned}$$
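The bound on \(\beta_{kn}\) is a crude estimate based on the representation of the Jost solutions above; a short check, assuming only \(\Vert \alpha_{k}(x) \Vert \le\frac{1}{8}\), \(\Vert \alpha_{n}(x) \Vert \le\frac{1}{8}\) and that the matrix norm is submultiplicative and invariant under †:
$$e^{\dagger}(x, \rho_{k}) e(x, \rho_{n}) = \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta_{kn}(x)\bigr),\qquad \beta_{kn}(x) = \alpha_{k}^{\dagger}(x) + \alpha_{n}(x) + \alpha_{k}^{\dagger}(x) \alpha_{n}(x), $$
so that \(\Vert \beta_{kn}(x) \Vert \le\frac{1}{8} + \frac{1}{8} + \frac{1}{64} < \frac{3}{8}\) for \(x \ge A\).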
Since the vectors \(N_{k}\) belong to the unit sphere, one can choose a convergent subsequence \(\{ N_{k_{s}} \}_{s = 1}^{\infty}\). Further we consider \(N_{k}\) and \(N_{n}\) from such a subsequence. Then, for sufficiently large k and n, we have
$$\begin{aligned}& \biggl\vert N_{k}^{\dagger} \int_{A}^{\infty} \exp\bigl(-(\tau_{k} + \tau_{n})x\bigr) \bigl(I_{m} + \beta_{kn}(x)\bigr) \,dx \, (N_{n} - N_{k}) \biggr\vert \\& \quad \le\frac{3 \exp(-(\tau_{k} + \tau_{n}) A)}{2 (\tau_{k} + \tau_{n})} \| N_{n} - N_{k} \| \le \frac{\exp(-(\tau_{k} + \tau_{n}) A)}{8 (\tau_{k} + \tau_{n})}. \end{aligned}$$
Hence
$$\mathcal{I}_{1} \ge\frac{\exp(-(\tau_{k} + \tau_{n}) A)}{2 (\tau_{k} + \tau _{n})} \ge\frac{\exp(-2AT)}{4T}, \quad T := \max_{k} \tau_{k}. $$
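The first inequality above combines the lower bound for the first term of \(\mathcal{I}_{1}\) (using \(N_{k}^{\dagger}(I_{m} + \beta_{kn}(x)) N_{k} \ge1 - \frac{3}{8}\)) with the estimate of the second term from the previous display; a short arithmetic check:
$$\mathcal{I}_{1} \ge\biggl(1 - \frac{3}{8}\biggr) \frac{\exp(-(\tau_{k} + \tau_{n}) A)}{\tau_{k} + \tau_{n}} - \frac{\exp(-(\tau_{k} + \tau_{n}) A)}{8 (\tau_{k} + \tau_{n})} = \frac{\exp(-(\tau_{k} + \tau_{n}) A)}{2 (\tau_{k} + \tau_{n})}. $$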
Clearly, \(\mathcal{I}_{2} \ge0\). Using arguments similar to those in the proof of [4, Theorem 2.3.4], one can show that \(\mathcal{I}_{3}\) tends to zero as \(k, n \to\infty\). Thus, for sufficiently large k and n, \(\mathcal{I}_{1} + \mathcal {I}_{2} + \mathcal{I}_{3} > 0\), which contradicts (32). Hence, the number of negative eigenvalues is finite. □
Property (i5)
\(\lambda= 0\) is not an eigenvalue of L.
Proof
It was proved in [5] that if condition (31) holds, the Jost solution \(e(x, \rho)\) exists for \(\rho= 0\). So equation (1) for \(\lambda= 0\) has the solution \(e(x, 0) = I_{m} + o(1)\) as \(x \to\infty\). One can easily check that the matrix function
$$z(x) = e(x, 0) \int_{0}^{x} \bigl(e^{*}(t, 0) e(t, 0) \bigr)^{-1} \,dt $$
is also a solution of (1) for \(\lambda= 0\), and it enjoys the asymptotic representation \(z(x) = x (I_{m} + o(1))\) as \(x \to\infty\). Thus, the columns of the matrices \(e(x, 0)\) and \(z(x)\) form a fundamental system of solutions of equation (1) for \(\lambda = 0\). If \(\lambda= 0\) were an eigenvalue of L, then the corresponding vector eigenfunction would have an expansion \(Y_{0}(x) = e(x, 0) a + z(x) b\), where a and b are constant vectors. But in view of the asymptotic behavior of \(e(x, 0)\) and \(z(x)\), one has \(\lim_{x \to\infty} Y_{0}(x) = 0\) if and only if \(a = b = 0\). So \(\lambda= 0\) is not an eigenvalue of L. □
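The 'easy check' that z(x) solves (1) for \(\lambda= 0\) can be sketched as follows, under the additional assumptions (not verified here) that \(e(t, 0)\) is invertible on the interval of integration and that \({e^{*}}'(x, 0) e(x, 0) = e^{*}(x, 0) e'(x, 0)\):
$$z'(x) = e'(x, 0) \int_{0}^{x} \bigl(e^{*}(t, 0) e(t, 0)\bigr)^{-1} \,dt + e(x, 0) \bigl(e^{*}(x, 0) e(x, 0)\bigr)^{-1}, $$
$$\frac{d}{dx} \Bigl[ e(x, 0) \bigl(e^{*}(x, 0) e(x, 0)\bigr)^{-1} \Bigr] = -e'(x, 0) \bigl(e^{*}(x, 0) e(x, 0)\bigr)^{-1}, $$
so that \(z''(x) = e''(x, 0) \int_{0}^{x} (e^{*}(t, 0) e(t, 0))^{-1} \,dt = Q(x) z(x)\) by (26) with \(\rho= 0\).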
Property (i6)
\(\rho(u(\rho ))^{-1} = O(1)\) and \(M(\lambda) = O(\rho^{-1})\) as \(\rho\to0\), \(\rho\in\Omega\).
Proof
Consider the matrix function \(g(\rho) = 2 i \rho(u(\rho))^{-1}\). It follows from (10) that
$$u^{*}(-\rho) e(0, \rho) - e^{*}(0, -\rho) u(\rho) = -2 i \rho I_{m}. $$
Hence
$$g(\rho) = e^{*}(0, -\rho) - u^{*}(-\rho) e(0, \rho) \bigl(u(\rho) \bigr)^{-1}. $$
In view of (12) and the equality \(M(\lambda) \equiv M^{*}(\lambda)\), one has
$$e(0, \rho) \bigl(u(\rho)\bigr)^{-1} = M(\lambda) = \bigl(u^{*}(\rho) \bigr)^{-1} e^{*}(0, \rho). $$
Consequently,
$$ g(\rho) = e^{*}(0, -\rho) - \xi(\rho) e^{*}(0, \rho),\quad \xi(\rho) := u^{*}(-\rho) \bigl(u^{*}(\rho)\bigr)^{-1}. $$
(33)
Let \(\rho\in\mathbb{R} \backslash\{ 0 \}\). Expand the solution \(\varphi(x, \lambda)\) with respect to the fundamental system of solutions \(\{ e(x, \rho), e(x, -\rho) \}\) with some matrix coefficients \(A(\rho)\) and \(B(\rho)\):
$$\begin{aligned}& \varphi(x, \lambda) = e(x, \rho) A(\rho) + e(x, -\rho) B(\rho), \end{aligned}$$
(34)
$$\begin{aligned}& \varphi'(x, \lambda) = e'(x, \rho) A( \rho) + e'(x, -\rho) B(\rho). \end{aligned}$$
(35)
Multiplying (34) by \({e^{*}}'(x, \mp\rho)\) and (35) by \(e^{*}(x, \mp\rho)\) from the left, subtracting, and using (10), one can derive
$$\begin{aligned}& A(\rho) = -\frac{1}{2 i \rho} \bigl( {e^{*}}'(x, -\rho)\varphi(x, \lambda) - e^{*}(x, -\rho) \varphi'(x, \lambda) \bigr), \\& B(\rho) = \frac{1}{2 i \rho} \bigl( {e^{*}}'(x, \rho) \varphi(x, \lambda) - e^{*}(x, \rho) \varphi'(x, \lambda) \bigr). \end{aligned}$$
Since \(A(\rho)\) and \(B(\rho)\) do not depend on x, one can take \(x = 0\) and obtain
$$A(\rho) = -\frac{1}{2 i \rho} u^{*}(-\rho),\qquad B(\rho) = \frac{1}{2 i \rho} u^{*}(\rho). $$
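A brief sketch of these formulas, assuming (consistently with (10)) that \(\langle e^{*}(x, -\rho), e(x, \rho) \rangle= -2 i \rho I_{m}\) and \(\langle e^{*}(x, -\rho), e(x, -\rho) \rangle= 0\), and assuming the standard initial conditions \(\varphi(0, \lambda) = I_{m}\), \(\varphi'(0, \lambda) = h\) together with \(u^{*}(\rho) = {e^{*}}'(0, \rho) - e^{*}(0, \rho) h\) (not restated in this section): substituting (34) and (35),
$${e^{*}}'(x, -\rho) \varphi(x, \lambda) - e^{*}(x, -\rho) \varphi'(x, \lambda) = \bigl\langle e^{*}(x, -\rho), e(x, \rho) \bigr\rangle A(\rho) + \bigl\langle e^{*}(x, -\rho), e(x, -\rho) \bigr\rangle B(\rho) = -2 i \rho A(\rho), $$
and taking \(x = 0\) gives
$$A(\rho) = -\frac{1}{2 i \rho} \bigl({e^{*}}'(0, -\rho) - e^{*}(0, -\rho) h \bigr) = -\frac{1}{2 i \rho} u^{*}(-\rho). $$
The formula for \(B(\rho)\) is obtained in the same way with ρ and −ρ interchanged.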
Finally,
$$\varphi(x, \lambda) = -\frac{1}{2 i \rho} \bigl( e(x, \rho) u^{*}(-\rho) - e(x, - \rho) u^{*}(\rho) \bigr). $$
Since \(U(\varphi) = 0\), we get
$$u(\rho) u^{*}(-\rho) = u(-\rho) u^{*}(\rho). $$
Therefore
$$\xi(\rho) = u^{*}(-\rho) \bigl(u^{*}(\rho)\bigr)^{-1} = \bigl(u(\rho) \bigr)^{-1} u(-\rho). $$
One can easily show that \(u^{*}(\rho) = u^{\dagger}(-\rho)\) for real ρ and, consequently, \(\xi(\rho)\) is a unitary matrix for \(\rho\in\mathbb{R} \backslash\{ 0 \}\). Then it follows from (33) that the matrix function \(g(\rho)\) is bounded for \(\rho\in\mathbb{R} \backslash\{ 0 \}\).
Consider the region \(\mathcal{D} := \{ \rho\colon\operatorname{Im} \rho> 0, |\rho| < \tau^{*} \}\), where \(\tau^{*}\) is a positive number less than all the \(\tau_{k}\) (by Property (i4), there are finitely many of them). Obviously, \(g(\rho)\) is analytic in \(\mathcal{D}\) and continuous in \(\overline{\mathcal{D}} \backslash\{ 0 \}\). If \(g(\rho)\) is also analytic at zero, then \(\rho= 0\) is a removable singularity, \(g(\rho)\) is continuous in \(\overline{\mathcal{D}}\), and so \(g(\rho) = O(1)\). In the general case, one can approximate the potential \(Q(x)\) by the sequence of potentials
$$Q_{\beta}(x) = \begin{cases} Q(x), &0 \le x \le\beta, \\ 0,& x > \beta, \end{cases} $$
and use the technique from [5] (see Lemma 2.4.1).
Since under condition (31) the Jost solution exists for \(\rho= 0\), we have \(e(0, \rho) = O(1)\) as \(\rho\to0\). Taking (12) and \(g(\rho) = O(1)\) into account, we arrive at \(M(\lambda) = O(\rho^{-1})\), \(\rho\to0\). □
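In detail, the last step only combines the relation \(M(\lambda) = e(0, \rho)(u(\rho))^{-1}\) from (12) with the definition \(g(\rho) = 2 i \rho(u(\rho))^{-1}\) and the boundedness established above:
$$M(\lambda) = e(0, \rho) \bigl(u(\rho)\bigr)^{-1} = \frac{1}{2 i \rho} e(0, \rho) g(\rho) = O\bigl(\rho^{-1}\bigr), \quad \rho\to0,\ \rho\in\Omega. $$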
We combine the properties of the Weyl matrix in the next theorem.
Theorem 3
Let \(L = L(Q, h)\), \(Q = Q^{\dagger}\), \(h = h^{\dagger}\), \(Q \in L((0, \infty); \mathbb{C}^{m \times m})\), and let condition (31) hold. Then the Weyl matrix \(M(\lambda)\) of this problem is analytic in Π outside the finite set of simple poles \(\Lambda' = \{ \lambda_{k} \}_{k = 1}^{P}\), \(\lambda_{k} = \rho_{k}^{2} < 0\), and continuous in \(\Pi _{1}\backslash \Lambda\). Moreover,
$$\begin{aligned}& \alpha_{k} := \mathop{\operatorname{Res}}_{\lambda= \lambda_{k}} M(\lambda) \ge 0, \quad k = \overline{1, P}, \\& M(\lambda) = O\bigl(\rho^{-1}\bigr), \quad \rho\to0. \end{aligned}$$
The matrix function \(\rho V(\lambda)\) is continuous and bounded for \(\lambda> 0\), and \(V(\lambda) > 0\) for \(\lambda> 0\).
Proof
Fix an eigenvalue \(\lambda_{k}\), \(k = \overline{1, P}\). Consider the two representations (11) and (14) of \(\Phi(x, \lambda)\) and take the residue of each of them at the pole \(\lambda_{k}\). Then we obtain the relation
$$ \varphi(x, \lambda_{k}) \alpha_{k} = e(x, \rho_{k}) u_{k},\quad u_{k} := 2 \rho_{k} \mathop{\operatorname{Res}}_{\rho= \rho_{k}} \bigl(u(\rho) \bigr)^{-1}. $$
(36)
Note that the columns of the left-hand side and the right-hand side of (36) are vector eigenfunctions, corresponding to the eigenvalue \(\lambda_{k}\).
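The factor \(2 \rho_{k}\) in the definition of \(u_{k}\) reflects the change of the spectral variable from λ to \(\rho= \sqrt{\lambda}\); a one-line check for a simple pole:
$$\mathop{\operatorname{Res}}_{\lambda= \lambda_{k}} \bigl(u(\rho)\bigr)^{-1} = \lim_{\lambda\to\lambda_{k}} (\lambda- \lambda_{k}) \bigl(u(\rho)\bigr)^{-1} = \lim_{\rho\to\rho_{k}} (\rho- \rho_{k}) (\rho+ \rho_{k}) \bigl(u(\rho)\bigr)^{-1} = 2 \rho_{k} \mathop{\operatorname{Res}}_{\rho= \rho_{k}} \bigl(u(\rho)\bigr)^{-1} = u_{k}. $$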
Further we consider ρ such that \(\operatorname{Re} \rho= 0\), \(\operatorname{Im} \rho> 0\). It is easy to check that
$$\bigl\langle e^{*}(x, \rho_{k}), e(x, \rho) \bigr\rangle _{x = \infty} - \bigl\langle e^{*}(x, \rho_{k}), e(x, \rho) \bigr\rangle _{x = 0} = (\lambda- \lambda_{k}) \int_{0}^{\infty} e^{*}(x, \rho_{k}) e(x, \rho) \,dx. $$
Using asymptotics (4) for \(e(x, \rho)\) and \(e^{*}(x, \rho _{k})\), we get
$$\lim_{\lambda\to\lambda_{k}} \frac{1}{\lambda- \lambda_{k}} \bigl\langle e^{*}(x, \rho_{k}), e(x, \rho) \bigr\rangle _{x = \infty} = 0_{m}. $$
By virtue of the self-adjointness, \(e^{*}(x, \rho) = e^{\dagger}(x, \rho )\), \(\varphi^{*}(x, \lambda) = \varphi^{\dagger}(x, \lambda)\), \(\lambda< 0\). Therefore
$$\mathcal{S} := u_{k}^{\dagger} \int_{0}^{\infty} e^{\dagger}(x, \rho _{k}) e(x, \rho_{k}) \,dx\, u_{k} = - \lim_{\lambda\to\lambda_{k}} \frac{u_{k}^{\dagger} \langle e^{\dagger}(x, \rho_{k}), e(x, \rho) \rangle _{x = 0} u_{k}}{\lambda- \lambda_{k}}. $$
Substituting (36), we obtain
$$\begin{aligned} \mathcal{S} &= - \lim_{\lambda\to\lambda_{k}} \frac{\alpha _{k}^{\dagger} ({\varphi ^{\dagger}}'(0, \lambda_{k}) e(0, \rho) - \varphi^{\dagger}(0, \lambda_{k}) e'(0, \rho) )}{\lambda- \lambda_{k}} \cdot\lim _{\lambda\to\lambda_{k}}(\lambda- \lambda_{k}) \bigl(u(\rho) \bigr)^{-1} \\ &=-\alpha_{k}^{\dagger} \lim_{\rho\to\rho_{k}} \bigl(h e(0, \rho) - e'(0, \rho)\bigr) \bigl(u(\rho)\bigr)^{-1} = \alpha_{k}^{\dagger}. \end{aligned}$$
Obviously, \(\mathcal{S} = \mathcal{S}^{\dagger} \ge0\). Hence \(\alpha_{k} = \alpha_{k}^{\dagger} \ge0\).
Now consider \(V(\lambda) = \frac{1}{2 \pi i} (M^{-}(\lambda) - M^{+}(\lambda))\), \(\lambda> 0\). Taking the relations (12) and \(M(\lambda) = M^{*}(\lambda)\) into account, we have
$$M^{-}(\lambda) = \bigl(u^{*}(-\rho)\bigr)^{-1} e^{*}(0, -\rho),\qquad M^{+}( \lambda) = e(0, \rho) \bigl(u(\rho)\bigr)^{-1},\quad \rho> 0. $$
Consequently,
$$V(\lambda) = -\frac{1}{2 \pi i} \bigl(u^{*}(-\rho)\bigr)^{-1} \bigl\langle e^{*}(x, -\rho), e(x, \rho) \bigr\rangle \bigl(u(\rho)\bigr)^{-1}. $$
Substituting (10), we get
$$V(\lambda) = \frac{\rho}{\pi} \bigl(u^{*}(-\rho)\bigr)^{-1} \bigl(u( \rho)\bigr)^{-1}. $$
For real values of ρ, one has \(e^{*}(x, -\rho) = e^{\dagger}(x, \rho )\), \(u^{*}(-\rho) = u^{\dagger}(\rho)\). Since in the self-adjoint case the set of spectral singularities \(\Lambda''\) is empty, \(\det u(\rho) \ne0\), \(\rho\in\mathbb{R} \backslash\{ 0 \}\). Hence \(V(\lambda) = V^{\dagger}(\lambda) > 0\).
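In detail, for \(\lambda> 0\) (that is, \(\rho> 0\)), combining the last display with \(u^{*}(-\rho) = u^{\dagger}(\rho)\) gives a manifestly positive definite expression:
$$V(\lambda) = \frac{\rho}{\pi} \bigl(u^{\dagger}(\rho)\bigr)^{-1} \bigl(u(\rho)\bigr)^{-1} = \frac{\rho}{\pi} \bigl(u(\rho) u^{\dagger}(\rho)\bigr)^{-1} > 0, $$
since \(\det u(\rho) \ne0\) for \(\rho\in\mathbb{R} \backslash\{ 0 \}\).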
The remaining assertions of the theorem follow directly from Properties (i1)-(i6) and the relations obtained above. □
We call the collection \(( \{ V(\lambda)\}_{\lambda> 0}, \{ \lambda_{k}, \alpha_{k} \} _{k = 1}^{P} )\) the spectral data of L. Similarly to the scalar case (see [4]), the Weyl matrix is uniquely determined by the spectral data:
$$ M(\lambda) = \int_{0}^{\infty} \frac{V(\mu)}{\lambda- \mu} \,d\mu+ \sum_{k = 1}^{P} \frac{\alpha_{k}}{\lambda- \lambda_{k}},\quad \lambda\in\Pi\backslash \Lambda'. $$
(37)