Open Access

Self-adjoint higher order differential operators with eigenvalue parameter dependent boundary conditions

Boundary Value Problems 2015, 2015:79

Received: 22 January 2015

Accepted: 30 April 2015

Published: 16 May 2015


Abstract

Eigenvalue problems for even order regular quasi-differential equations with boundary conditions which depend linearly on the eigenvalue parameter λ can be represented by an operator polynomial \(L(\lambda)=\lambda^{2}M-i\lambda K-A\), where M is a self-adjoint operator. Necessary and sufficient conditions are given under which K and A are also self-adjoint.


Keywords: higher order differential operator; eigenvalue dependent boundary conditions; self-adjoint boundary conditions; quasi-derivative



1 Introduction

In order to solve linear partial differential equations of the form
$$\frac{\partial^{2}u}{\partial t^{2}}+\mathcal{A}u=0, $$
where \(\mathcal{A}\) is a linear differential operator with respect to the variable x on an interval I, the separation of variables method \(u(x,t)=y(x)e^{i\omega t}\) leads to
$$\omega^{2}y=\mathcal{A}y. $$
For t-independent boundary conditions \(Bu=0\), setting \(\lambda =\omega^{2}\), the operator theoretic realization leads to an eigenvalue problem for an operator A in the Lebesgue space \(L^{2}(I)\) with domain
$$\mathscr{D}(A)= \bigl\{ y\in L^{2}(I):\mathcal{A}y\in L^{2}(I), By=0 \bigr\} . $$
Such problems are well studied, and of particular importance is the case that A is self-adjoint. Many applications in physics and engineering can be represented by such self-adjoint operators.
However, problems like the Regge problem and the vibrating beam problem have boundary conditions which involve first order partial derivatives with respect to t, or their mathematical model leads to an eigenvalue problem with the eigenvalue parameter \(\lambda=\omega\) occurring linearly in the boundary conditions. Such problems have an operator representation of the form
$$ L(\lambda) =\lambda^{2}M-i\lambda K-A $$
in a Hilbert space \(H=L^{2}(I)\oplus\mathbb{C}^{k}\), where k is the number of eigenvalue dependent boundary conditions.

In general, the spectrum of L is no longer real, but it still has some particularly nice properties if K, M, A are self-adjoint with \(M\ge0\) and \(K\ge0\), the resolvent set of L is nonempty, and L has a compact resolvent: the spectrum is symmetric with respect to the imaginary axis, and eigenvalues with negative imaginary parts must lie on the imaginary axis. In this situation, the operators M and K are quite simple bounded self-adjoint operators. However, the operator A is determined by three ingredients: the differential expression \(\mathcal{A}\), the parameter independent boundary conditions as homogeneous boundary conditions for A, and the parameter dependent boundary conditions as an inhomogeneous part of A. Hence one cannot make use of the criteria for self-adjointness in the case of parameter independent boundary conditions. Rather, the parameter dependent case is a proper extension of the parameter independent case.

For parameter independent boundary conditions, i.e., \(k=0\), characterizations of self-adjointness for A in the case of formally symmetric even order quasi-differential expressions are known both for the regular and the singular cases, see [1] and in particular [1], Theorem 6 for the regular case. The simplest formulation of these self-adjointness conditions makes use of quasi-derivatives, and we will henceforth mostly use quasi-derivatives \(y^{[j]}\) rather than derivatives \(y^{(j)}\). For the definition of the quasi-derivatives \(y^{[j]}\), we refer the reader to (2.2)-(2.5), see also Remark 3.2.

Some special cases of self-adjoint boundary conditions for regular 2nth order differential equations with \(k>0\) are known. In [2], the second order problem related to the Regge problem was investigated, whereas the fourth order differential equation \(y^{(4)}-(gy')'\) related to a vibrating beam was dealt with in [3], where the boundary conditions are of the form
$$ B_{j}(\lambda)y=y^{[p_{j}]}(a_{j})+\lambda \beta_{j}y^{[q_{j}]}(a_{j}), \quad j=1,\ldots,4, $$
with exactly one boundary condition depending on λ. A classification of all self-adjoint boundary conditions of the form (1.2) was obtained in [4]. A corresponding result for sixth order differential equations was given in [5].

In this paper we consider 2nth order quasi-differential equations and derive necessary and sufficient conditions for 2n boundary conditions of the form (1.2) to generate self-adjoint operators K and A.

In Section 2 we give a precise definition of the boundary value problem and the quadratic operator pencil L associated with it. In Section 3 we derive necessary and sufficient conditions for K to be self-adjoint and for A to be symmetric. In Section 4 it is shown that A is self-adjoint if A is symmetric.

2 The eigenvalue problem

We first summarize some basic facts about quasi-differential equations for the convenience of the reader. For a more comprehensive discussion of quasi-differential equations, the reader is referred to [6] and to [7] in the scalar case and to [8, 9] for the general case with matrix coefficients.

Let \(I=(a, b)\) be an interval with \(-\infty< a <b <\infty\), and let m be a positive integer. For a given set S, \(M_{m}(S)\) denotes the set of \(m\times m\) matrices with entries from S. Let
$$\begin{aligned} Z_{m}(I):={}& \bigl\{ G=(g_{r,s})_{r,s=1}^{m} \in M_{m} \bigl(L^{1}(I) \bigr), \\ &{}g_{r,r+1} \mbox{ invertible a.e. for } 1\le r\le m-1, g_{r,s}=0 \mbox{ for } 2\le r+1< s\le m \bigr\} , \end{aligned}$$
where \(L^{1}(I)\) denotes the complex-valued Lebesgue integrable functions on I.
For \(G\in Z_{m}(I)\), define
$$\begin{aligned} Q_{0}:= \{y:I\to\mathbb{C}, y \mbox{ measurable}\} \end{aligned}$$
$$ y^{[0]}:=y,\quad y\in Q_{0}. $$
Inductively, for \(r=1,\dots,m\), we define
$$\begin{aligned}& Q_{r}= \bigl\{ y\in Q_{r-1} : y^{[r-1]}\in AC(I) \bigr\} , \end{aligned}$$
$$\begin{aligned}& y^{[r]}=g_{r,r+1}^{-1} \Biggl(y^{[r-1]'}-\sum _{s=1}^{r}g_{r,s}y^{[s-1]} \Biggr),\quad y\in Q_{r}, \end{aligned}$$
where \(g_{m,m+1}:=1\) and where \(AC(I)\) denotes the set of complex-valued functions which are absolutely continuous on I. Finally we set
$$\begin{aligned} \mathcal{A}y:=i^{m}y^{[m]},\quad y\in Q_{m}. \end{aligned}$$
The expression \(\mathcal{A}=\mathcal{A}_{G}\) is called the quasi-differential expression associated with G, and the function \(y^{[r]}\), \(0\le r\le m\), is called the rth quasi-derivative of y. We also write \(\mathscr{D}(\mathcal{A})\) for \(Q_{m}\).

Observe that the quasi-derivatives defined in (2.5) depend on G. However, since we are only going to deal with a single quasi-differential equation, we will not indicate this dependence explicitly.

In the remainder of the paper, we assume that \(m=2n\) is an even positive integer, that \(G=(g_{r,s})_{r,s=1}^{2n}\in Z_{2n}(I)\), and that \(w:I\to\mathbb{R}\) is positive a.e. and satisfies \(w\in L^{1}(I)\).

Together with (2.6) we consider the boundary conditions \(B_{j}(\lambda)y=0\), \(j=1,\dots,2n\), taken at the endpoint a for \(j=1,\dots,n\) and at the endpoint b for \(j=n+1,\dots, 2n\). We assume for simplicity that
$$ B_{j}(\lambda)y=y^{[p_{j}]}(a_{j})+i \lambda\beta_{j} y^{[q_{j}]}(a_{j}), $$
where \(a_{j}=a \) for \(j=1,\dots,n\), \(a_{j}=b\) for \(j=n+1,\dots,2n\), \(\beta _{j}\in\mathbb{C}\) and \(0\le p_{j},q_{j}\le2n-1\). Of course, the numbers \(q_{j}\) are ambiguous and irrelevant in case \(\beta_{j}=0\).
The differential expression (2.6) and the boundary conditions (2.7) define the eigenvalue problem
$$\begin{aligned} &(-1)^{n}y^{[2n]}=\lambda^{2}wy, \end{aligned}$$
$$\begin{aligned} &B_{j}(\lambda)y=0,\quad j=1,\dots,2n. \end{aligned}$$
We put
$$\begin{aligned}& \Theta_{1}= \bigl\{ j\in\{1,\dots,2n\}: \beta_{j}\neq0 \bigr\} ,\qquad \Theta_{0}= \{1,\dots,2n \}\setminus\Theta_{1}, \\& \Theta_{r}^{a}=\Theta_{r}\cap\{1,\dots, n \}, \qquad \Theta_{r}^{b}=\Theta_{r}\cap\{n+1,\dots,2n \},\quad \mbox{for } r=0,1, \end{aligned}$$
$$\begin{aligned} k=|\Theta_{1}|. \end{aligned}$$

Assumption 2.1

We assume that the numbers \(p_{1},\dots,p_{n},q_{j}\) for \(j\in\Theta_{1}^{a}\) are distinct and that the numbers \(p_{n+1}, \dots ,p_{2n},q_{j}\) for \(j\in\Theta_{1}^{b}\) are distinct.

Assumption 2.1 means that for any pair \((r, a_{j})\) the term \(y^{[r]}(a_{j})\) occurs at most once in the boundary conditions (2.7).

For \(j\in\Theta_{1}\), we choose \(\alpha_{j}\in\mathbb{R}\) and \(\varepsilon_{j}\in\mathbb{C}\) such that \(\beta_{j}=\alpha_{j}\varepsilon_{j}\).

For \(y\in\mathscr{D}(\mathcal{A})\), we define \(Y_{R}= \bigl ({\scriptsize\begin{matrix} Y(a)\cr Y(b)\end{matrix}} \bigr ) \) with \(Y= (y^{[0]},y^{[1]},\dots,y^{[2n-1]} )^{\mathsf{T}}\). We denote the collection of the 2n boundary conditions (2.9) by U and define the following matrices related to U:
$$ \begin{aligned} &U_{r}Y_{R}= \bigl(y^{[p_{j}]}(a_{j}) \bigr)_{j\in\Theta_{r}},\quad r=0,1, \\ &V_{1}Y_{R}= \bigl(\varepsilon_{j}y^{[q_{j}]}(a_{j}) \bigr)_{j\in\Theta_{1}}, \end{aligned} \quad\mbox{where } y\in \mathscr{D}( \mathcal{A}). $$

Remark 2.2

In case that \(\Theta_{r}=\emptyset\) for \(r=0\) or \(r=1\), the corresponding matrix \(U_{r}\) will be identified with the ‘zero’ operator from \(\mathbb{C}^{2n}\) into \(\{0\}\).

The weighted Lebesgue space \(L^{2}(I,w)\) is the Hilbert space of all equivalence classes of complex-valued measurable functions f such that \((f,f)_{w}:=\int_{I}w(x)|f(x)|^{2}\,dx<\infty\). For convenience we define the operator \(\mathcal{A}_{\max}\) on \(L^{2}(I,w)\) by
$$\mathscr{D}(\mathcal{A}_{\max})= \bigl\{ y\in L^{2}(I,w):w^{-1} \mathcal{A}y\in L^{2}(I,w) \bigr\} , \quad\mathcal{A}_{\max}y =w^{-1}\mathcal{A}y. $$
We will associate the quadratic operator pencil
$$\begin{aligned} L(\lambda)=\lambda^{2}M-i\lambda K-A(U) \end{aligned}$$
in the space \(L^{2}(I,w)\oplus\mathbb{C}^{k}\) with problem (2.8), (2.9), where
$$\begin{aligned} M= \begin{pmatrix} I&0\\ 0&0 \end{pmatrix} \quad\mbox{and}\quad K= \begin{pmatrix} 0&0\\ 0&K_{0} \end{pmatrix} \quad\mbox{with }K_{0}=\operatorname {diag}(\alpha_{j}:j\in\Theta_{1}). \end{aligned}$$
The operator \(A(U)\) in \(L^{2}(I,w)\oplus\mathbb{C}^{k}\) is defined by
$$\begin{aligned} \mathscr{D} \bigl(A(U) \bigr)&= \Biggl\{ \widetilde{y}= \begin{pmatrix} y\\ V_{1}Y_{R} \end{pmatrix} :y\in\mathscr{D}(\mathcal{A}_{\max}), U_{0}Y_{R}=0 \Biggr\} , \\ \bigl(A(U) \bigr)\widetilde{y}&= \begin{pmatrix} \mathcal{A}_{\max}y\\ U_{1}Y_{R} \end{pmatrix},\quad \widetilde{y}\in \mathscr{D} \bigl(A(U) \bigr). \end{aligned}$$
It is easy to see that a function \(y\in\mathscr{D}(\mathcal{A}_{\max})\) satisfies \(\mathcal{A}y=\lambda^{2}wy\) and \(B_{j}(\lambda)y=0\) for \(j=1,\dots,2n\) if and only if there is \(c\in\mathbb{C}^{k}\) with \((y,c)^{\mathsf{T}}\in \mathscr{D}(A(U))\) such that \(L(\lambda)(y,c)^{\mathsf{T}}=0\). In this case c is uniquely determined by y. Indeed, if \(y\in\mathscr{D}(\mathcal{A}_{\max})\) with \(\mathcal {A}y=\lambda^{2}wy\) and \(B_{j}(\lambda)y=0\) for \(j=1,\dots,2n\), then \(U_{0}Y_{R}=0\) shows that \((y,V_{1}Y_{R})^{\mathsf{T}} \in\mathscr{D}(A(U))\) and
$$L(\lambda) \begin{pmatrix} y\\ V_{1}Y_{R} \end{pmatrix} = \begin{pmatrix} \lambda^{2}y-\mathcal{A}_{\max} y\\ -i\lambda K_{0}V_{1}Y_{R}-U_{1}Y_{R} \end{pmatrix}. $$
Clearly, the first component is 0, and so is the second component since
$$i\lambda K_{0}V_{1}Y_{R}+U_{1}Y_{R} =i\lambda K_{0} \bigl(\varepsilon_{j}y^{[q_{j}]}(a_{j}) \bigr)_{j\in\Theta_{1}}+ \bigl(y^{[p_{j}]}(a_{j}) \bigr)_{j\in\Theta_{1}}= \bigl(B_{j}(\lambda)y \bigr)_{j\in\Theta_{1}}. $$
Hence the operator pencil L is an operator realization of the eigenvalue problem (2.8), (2.9).

It is clear that M and K are bounded self-adjoint operators and that M is non-negative. The operator \(A(U)\) is not self-adjoint, in general, and we will give necessary and sufficient conditions for the operator \(A(U)\) to be self-adjoint.

3 Symmetry conditions for \(A(U)\)

We will denote the canonical inner product in \(L^{2}(I,w)\oplus\mathbb {C}^{k}\) by \(\langle\cdot,\cdot\rangle\).

The Lagrange form of \(A(U)\) is defined by
$$\begin{aligned} F_{U}(\tilde{y},\tilde{z})= \bigl\langle A(U)\tilde{y},\tilde{z} \bigr\rangle - \bigl\langle \tilde{y},A(U)\tilde{z} \bigr\rangle , \quad\tilde{y}, \tilde{z}\in \mathscr{D} \bigl(A(U) \bigr). \end{aligned}$$
The operator \(A(U)\) is symmetric if and only if its Lagrange form is identically zero. For this it is necessary that \(\mathcal{A}\) is formally symmetric, and for the remainder of this paper we therefore make the following assumption.

Assumption 3.1

We assume that
$$G =-CG^{*}C^{-1}, $$
where
$$\begin{aligned} C= \bigl((-1)^{r}\delta_{r,2n+1-s} \bigr)_{r,s=1}^{2n} \end{aligned}$$
and δ is the Kronecker delta. Note that \(C^{-1}=-C\).
It is easy to verify that Assumption 3.1 holds if and only if
$$ g_{r,s}=(-1)^{r+s+1}\overline{g}_{2n+1-s,2n+1-r}, \quad r,s=1,\dots,2n. $$

Remark 3.2

Classical formally self-adjoint differential expressions are of the form
$$(-1)^{n}\sum_{j=0}^{n} \bigl(g_{j}y^{(j)} \bigr)^{(j)} $$
with \(g_{j} \in C^{j}[0,a]\) for \(j=0,\dots,n\) and invertible \(g_{n}\). It is easy to verify that this is a quasi-differential equation with quasi-derivatives
$$\begin{aligned} &y^{[r]}= y^{(r)},\quad r=0,\dots,n-1, \\ &y^{[n]}=g_{n}y^{(n)}, \\ &y^{[r]}={y^{[r-1]}}'+g_{2n-r}y^{[2n-r]}, \quad r=n+1,\dots,2n. \end{aligned}$$
The corresponding matrix \(G=(g_{r,s})_{r,s=1}^{2n}\) has the entries \(g_{r,r+1}=1\) for \(r=1,\dots,n-1\) and \(r=n+1,\dots, 2n-1\), \(g_{n,n+1}=g_{n}^{-1}\), \(g_{r,2n-r+1}=-g_{2n-r}\) for \(r=n+1,\dots,2n\), while all other entries are zero. It is easy to see that Assumption 3.1 holds in this case if and only if \(g_{j}=\overline{g_{j}}\) for \(j=0,\dots,n\), so that the formal self-adjointness condition reduces to the well-known condition that all \(g_{j}\), \(j=0,\dots,n\), are real-valued functions.
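The entrywise symmetry condition (3.2) is easy to test numerically. The following minimal sketch (our own illustration; the helper name and the concrete coefficients are assumptions, not from the paper) checks it for the second order case \(n=1\) of Remark 3.2, where \(G\) has the entries \(g_{1,2}=g_{1}^{-1}\) and \(g_{2,1}=-g_{0}\): the condition holds precisely when \(g_{0}\) and \(g_{1}\) are real.

```python
import numpy as np

def symmetry_condition_holds(G):
    """Entrywise check of g_{r,s} = (-1)^(r+s+1) * conj(g_{2n+1-s,2n+1-r})
    (1-indexed), written here with 0-indexed numpy arrays."""
    m = G.shape[0]  # m = 2n
    return all(
        np.isclose(G[r, s],
                   (-1) ** ((r + 1) + (s + 1) + 1) * np.conj(G[m - 1 - s, m - 1 - r]))
        for r in range(m) for s in range(m)
    )

g0, g1 = 2.0, 3.0
G_real = np.array([[0, 1 / g1], [-g0, 0]], dtype=complex)       # real g0, g1
G_imag = np.array([[0, 1 / (1j * g1)], [-1j * g0, 0]])          # imaginary g0, g1

print(symmetry_condition_holds(G_real))   # True
print(symmetry_condition_holds(G_imag))   # False
```

With real coefficients the condition reduces exactly to the reality statement at the end of Remark 3.2.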
From [10], Lemma 3.3 we know that the Lagrange identity
$$ \bigl(w^{-1}\mathcal{A}y,z \bigr)_{w}- \bigl(y,w^{-1} \mathcal{A}z \bigr)_{w}=Z_{R}^{*}DY_{R}, \quad y,z\in \mathscr{D}(\mathcal{A}_{\max}) $$
holds, where
$$\begin{aligned} D=(-1)^{n} \begin{pmatrix} C&0\\ 0&-C \end{pmatrix} . \end{aligned}$$

Proposition 3.3

The Lagrange form \(F_{U}\) of \(A(U)\) has the representation
$$\begin{aligned} F_{U}(\tilde{y},\tilde{z})=Z_{R}^{*}WY_{R},\quad \tilde{y},\tilde{z}\in\mathscr{D} \bigl(A(U) \bigr), \end{aligned}$$
where
$$\begin{aligned} W=D+ \bigl(V_{1}^{*}U_{1}-U_{1}^{*}V_{1} \bigr). \end{aligned}$$


Proof

Let \(\tilde{y},\tilde{z}\in\mathscr{D}(A(U))\). Then
$$\begin{aligned} F_{U}(\tilde{y},\tilde{z})&= \bigl(w^{-1}\mathcal{A}y,z \bigr)_{w}+(V_{1}Z_{R})^{*}U_{1}Y_{R} - \bigl(y,w^{-1}\mathcal{A}z \bigr)_{w}-(U_{1}Z_{R})^{*}V_{1}Y_{R}, \end{aligned}$$
and an application of the Lagrange identity (3.3) completes the proof of the proposition. □

By definition, an operator in a Hilbert space is symmetric if and only if its Lagrange form is identically zero. Hence we have the following.

Corollary 3.4

The differential operator \(A(U)\) is symmetric if and only if \(Z_{R}^{*}WY_{R}=0\) for all \(\widetilde{y},\widetilde{z}\in\mathscr{D}(A(U))\).

The nullspace and range of a matrix M are denoted by \(N(M)\) and \(R(M)\), respectively.

Proposition 3.5

The differential operator \(A(U)\) is symmetric if and only if \(W(N(U_{0}))\subset(N(U_{0}))^{\perp}\).


Proof

From [10], Corollary 5.5 we know that
$$\begin{aligned} \bigl\{ Y_{R} : y\in\mathscr{D}(\mathcal{A}_{\max }) \bigr\} =\mathbb{C}^{4n}. \end{aligned}$$
Hence \(\{Y_{R}:\widetilde{y}\in\mathscr{D}(A(U)) \}=N(U_{0})\). An application of Corollary 3.4 completes the proof. □

Corollary 3.6

If \(A(U)\) is symmetric, then \(\operatorname {rank}W=2(2n-k)\) and \(W(N(U_{0}))=(N(U_{0}))^{\perp}\).


Proof

Since \(\dim(N(U_{0}))^{\perp}=\operatorname {rank}U_{0}=2n-k\), we have
$$\begin{aligned} 2n-k\ge\dim W \bigl(N(U_{0}) \bigr)\ge\dim N(U_{0})-(4n-\operatorname {rank}W)=-2n+k+\operatorname {rank}W. \end{aligned}$$
Hence \(\operatorname {rank}W\le2(2n-k)\). Since \(V_{1}^{*}U_{1}-U_{1}^{*}V_{1}\) has 2k non-zero entries and D is invertible, \(\operatorname {rank}W\ge2(2n -k )\) and \(\operatorname {rank}W=2(2n -k )\) follows. In this case, all the inequalities of (3.7) are equalities and \(\dim W(N(U_{0}))=\dim(N(U_{0}))^{\perp}\) holds. Thus it follows from Proposition 3.5 that \(W(N(U_{0}))=(N(U_{0}))^{\perp}\). □

In view of Corollary 3.6, we may assume that \(\operatorname {rank}W=2(2n -k )\) when investigating the symmetry of \(A(U)\). Since \((N(U_{0}))^{\perp }=R(U_{0}^{*})\), see [11], Theorem IV.5.13, Proposition 3.5 and Corollary 3.6 lead to the following.

Corollary 3.7

Let \(\operatorname {rank}W=2(2n -k )\). Then the differential operator \(A(U)\) is symmetric if and only if \(W(N(U_{0}))=R(U_{0}^{*})\).

We now give an explicit description for the condition \(\operatorname {rank}W=2(2n -k )\).

Proposition 3.8

\(\operatorname {rank}W=2(2n -k )\) if and only if the following conditions hold:
  1. For \(s\in\Theta_{1}\), \(p_{s}+q_{s}=2n-1\);
  2. For \(s\in\Theta_{1}^{(a)}\), \(\varepsilon_{s}=(-1)^{q_{s}+n}\);
  3. For \(s \in\Theta_{1}^{(b)}\), \(\varepsilon_{s}=(-1)^{q_{s}+n+1}\).



Proof

Note that
$$\begin{aligned} V_{1}^{*}U_{1}-U_{1}^{*}V_{1} = \begin{pmatrix} V_{2}&0\\ 0&V_{3} \end{pmatrix} , \end{aligned}$$
$$\begin{aligned} &V_{2}=\sum_{s\in\Theta_{1}^{(a)}} (\overline{ \varepsilon _{s}}\delta_{i,q_{s}+1} \delta_{j,p_{s}+1}-\varepsilon_{s} \delta_{i,p_{s}+1}\delta_{j,q_{s}+1} )_{i,j=1}^{2n}, \\ &V_{3}=\sum_{s\in\Theta_{1}^{(b)}} (\overline{ \varepsilon_{s}}\delta_{i,q_{s}+1}\delta_{j,p_{s}+1} - \varepsilon_{s} \delta_{i,p_{s}+1}\delta_{j,q_{s}+1} )_{i,j=1}^{2n}. \end{aligned}$$
Since D has exactly one non-zero entry in each row and column and \(V_{1}^{*}U_{1}-U_{1}^{*}V_{1}\) has exactly 2k non-zero entries, it follows that \(\operatorname {rank}W=2(2n -k )\) if and only if each non-zero entry of \(V_{2}\) cancels a non-zero entry of \((-1)^{n-1}C\) and each non-zero entry of \(V_{3}\) cancels a non-zero entry of \((-1)^{n}C\). Since the non-zero entries of C are in rows i and columns j such that \(i+j=2n+1\), we obtain that \(\operatorname {rank}W=2(2n-k)\) if and only if conditions 1, 2, and 3 are satisfied. □
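Proposition 3.8 can be illustrated on the smallest non-trivial example. The sketch below is our own second order case (\(n=1\), not from the paper): the λ-independent condition \(y(a)=0\) together with the λ-dependent condition \(y^{[1]}(b)+i\lambda\beta y(b)=0\), so \(p=1\), \(q=0\), \(k=1\). Choosing ε according to condition 3 gives \(\operatorname{rank}W=2(2n-k)=2\), while the wrong sign destroys the cancellation and yields full rank.

```python
import numpy as np

n, k = 1, 1
C = np.array([[0, -1], [1, 0]])
# D = (-1)^n diag(C, -C) acting on Y_R = (y(a), y^[1](a), y(b), y^[1](b))
D = (-1) ** n * np.block([[C, np.zeros((2, 2))], [np.zeros((2, 2)), -C]])

U1 = np.array([[0, 0, 0, 1]])            # picks out y^[1](b)
def W_for(eps):
    V1 = eps * np.array([[0, 0, 1, 0]])  # picks out eps * y^[0](b)
    return D + V1.T @ U1 - U1.T @ V1

# condition 3 of Proposition 3.8: eps = (-1)^(q+n+1) = (-1)^(0+1+1) = 1
print(np.linalg.matrix_rank(W_for(+1)))  # 2 = 2(2n-k)
print(np.linalg.matrix_rank(W_for(-1)))  # 4
```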

Corollary 3.9

The boundary eigenvalue problem (2.8), (2.9) has an operator pencil representation (2.12) with self-adjoint operator K and symmetric operator \(A(U)\) if and only if
  1. \(\beta_{j}\in\mathbb{R}\) and \(p_{j}+q_{j}=2n-1\) for all \(j\in \Theta_{1}\);
  2. \(W(N(U_{0}))=R(U_{0}^{*})\).



Proof

We have seen in Proposition 3.8 that three sets of conditions have to be satisfied in order that the necessary condition \(\operatorname {rank}W = 2(2n-k)\) for symmetry of \(A(U)\) holds. Conditions 2 and 3 can always be satisfied if we put \(\alpha_{j}=\beta_{j} (-1)^{q_{j}+n}\) for \(j\in\Theta_{1}^{a}\) and \(\alpha _{j}=\beta_{j} (-1)^{q_{j}+n+1}\) for \(j\in\Theta_{1}^{b}\), and for K to be self-adjoint it is therefore necessary and sufficient that the \(\beta_{j}\) are real. The remaining conditions now follow easily from Proposition 3.8 and Corollary 3.7. □

We could now give explicit conditions for symmetry of \(A(U)\) in terms of the boundary conditions (2.7). However, we will see in the next section that \(A(U)\) is self-adjoint if and only if it is symmetric. In order to avoid duplication we will therefore postpone deriving these explicit conditions to the next section.

4 Self-adjointness conditions for \(A(U)\)

From Corollary 3.9 we know that for self-adjointness of K and \(A(U)\) the condition \(\beta_{j}\in\mathbb{R}\) for all \(j\in\Theta_{1}\) is necessary. Hence we require without loss of generality that the numbers \(\varepsilon_{s}\) for \(s\in\Theta_{1}\) are chosen as in Proposition 3.8, conditions 2 and 3.

Assumption 4.1

For \(s\in\Theta_{1}^{(a)}\), let \(\varepsilon_{s}=(-1)^{q_{s}+n}\), and for \(s \in\Theta_{1}^{(b)}\), let \(\varepsilon_{s}=(-1)^{q_{s}+n+1}\).

For convenience, we set
$$\begin{aligned}& \widetilde{p}_{j}=p_{j}+1, \widetilde{q}_{j}=q_{j}+1 \quad\mbox{for }j=1,\dots,n, \\& \widetilde{p}_{j}=p_{j}+2n+1, \widetilde{q}_{j}=q_{j}+2n+1 \quad\mbox{for }j=n+1,\dots,2n. \end{aligned}$$
The range \(R(U_{r}^{*})\) of \(U_{r}^{*}\) for \(r=0,1\) is the span of all standard unit vectors \(e_{\tilde{p}_{j}}\) in \(\mathbb{C}^{4n}\) with \(j\in\Theta_{r}\), and \(R(V_{1}^{*})\) is the span of all standard unit vectors \(e_{\tilde{q}_{j}}\) in \(\mathbb{C}^{4n}\) with \(j\in\Theta_{1}\). Hence it follows from Assumptions 2.1 and 4.1 that
$$\begin{aligned}& U_{0}U_{0}^{*}=\operatorname{id}_{\mathbb{C}^{2n-k}},\qquad U_{1}U_{1}^{*}=\operatorname{id}_{\mathbb {C}^{k}},\qquad V_{1}V_{1}^{*}= \operatorname{id}_{\mathbb{C}^{k}}, \end{aligned}$$
$$\begin{aligned}& U_{1}U_{0}^{*}=0,\qquad V_{1}U_{0}^{*}=0, \qquad U_{1}V_{1}^{*}=0. \end{aligned}$$
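The identities (4.1) and (4.2) can be confirmed directly in the simplest case. The sketch below is our own example with \(2n=2\), \(k=1\) (the boundary conditions \(y(a)=0\) and \(y^{[1]}(b)+i\lambda\beta y(b)=0\) are an assumption for illustration); \(U_{0}\), \(U_{1}\), \(V_{1}\) are row vectors acting on \(Y_{R}=(y(a),y^{[1]}(a),y(b),y^{[1]}(b))^{\mathsf{T}}\).

```python
import numpy as np

U0 = np.array([[1, 0, 0, 0]])   # lambda-independent condition y(a) = 0
U1 = np.array([[0, 0, 0, 1]])   # y^[1](b), i.e. p_2 = 1
V1 = np.array([[0, 0, 1, 0]])   # eps_2 * y(b) with eps_2 = (-1)^(0+1+1) = 1

# (4.1): the rows are orthonormal
assert np.array_equal(U0 @ U0.T, np.eye(1))   # U0 U0* = id_{C^{2n-k}}
assert np.array_equal(U1 @ U1.T, np.eye(1))   # U1 U1* = id_{C^k}
assert np.array_equal(V1 @ V1.T, np.eye(1))   # V1 V1* = id_{C^k}

# (4.2): distinct quasi-derivative positions give orthogonality
assert np.all(U1 @ U0.T == 0) and np.all(V1 @ U0.T == 0) and np.all(U1 @ V1.T == 0)
```

The identities hold precisely because Assumptions 2.1 and 4.1 force the positions \(\widetilde{p}_{j}\), \(\widetilde{q}_{j}\) to be distinct standard unit vector indices.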

Theorem 4.2

The operator \(A(U)\) is densely defined, the domain \(\mathscr{D}((A(U))^{*})\) of its adjoint \((A(U))^{*}\) is the set of all \(\widetilde{z}= \bigl ({\scriptsize\begin{matrix} z\cr d \end{matrix}} \bigr ) \) in \(L^{2}(I,w)\oplus\mathbb{C}^{k }\) such that there is \(c\in\mathbb{C}^{k}\) such that \(z\in\mathscr {D}(\mathcal{A}_{\max})\) and
$$ D^{*}Z_{R}+U_{1}^{*}d-V_{1}^{*}c\in R \bigl(U_{0}^{*} \bigr). $$
For \(\widetilde{z}= \bigl ({\scriptsize\begin{matrix} z\cr d \end{matrix}} \bigr ) \in\mathscr{D}((A(U))^{*})\), the vectors d and c are uniquely determined by z, namely, \(d=-U_{1}D^{*}Z_{R}\) and \(c=V_{1}D^{*}Z_{R}\), and
$$ \bigl(A(U) \bigr)^{*}\widetilde{z}= \begin{pmatrix} \mathcal{A}_{\max}z\\ V_{1}D^{*}Z_{R} \end{pmatrix}. $$


Proof

By definition of the adjoint (possibly as a linear relation), \(\widetilde{z}= \bigl ({\scriptsize\begin{matrix}z\cr d \end{matrix}} \bigr ) \in L^{2}(I,w)\oplus\mathbb{C}^{k }\) belongs to \(\mathscr{D}((A(U))^{*})\) if and only if there is \(\widetilde{u}= \bigl ({\scriptsize\begin{matrix}u\cr c \end{matrix}} \bigr ) \in L^{2}(I,w)\oplus\mathbb{C}^{k }\) such that for all \(\widetilde{y}= \bigl ({\scriptsize\begin{matrix} y\cr V_{1}Y_{R} \end{matrix}} \bigr ) \in\mathscr{D}(A(U))\) the identity
$$ \bigl\langle A(U)\widetilde{y},\widetilde{z} \bigr\rangle =\langle \widetilde{y},\widetilde{u} \rangle $$
holds.
Hence let \(\widetilde{z},\widetilde{u}\in L^{2}(I,w)\oplus\mathbb{C}^{k }\) such that (4.5) holds for all \(\widetilde{y}\in\mathscr{D}(A(U))\). If y has compact support in I, then (4.5) reduces to
$$(\mathcal{A}_{\max}y,z)_{w}=(y,u)_{w}. $$
This, the formal symmetry Assumption 3.1 and [10], Theorem 4.2 show that \(z\in\mathscr {D}(\mathcal{A}_{\max})\) and \(\mathcal{A}_{\max}z=u\). We can now conclude that (4.5) holds if and only if
$$(\mathcal{A}_{\max} y,z)_{w}+d^{*}U_{1}Y_{R}=(y, \mathcal{A}_{\max}z)_{w}+c^{*}V_{1}Y_{R}. $$
In view of the Lagrange identity (3.3), the above is equivalent to
$$Z_{R}^{*}DY_{R}+d^{*}U_{1}Y_{R}=c^{*}V_{1}Y_{R}. $$
Since \(\{Y_{R}:\widetilde{y}\in\mathscr{D}(A(U)) \}=N(U_{0})\), it follows that (4.5) is equivalent to \(z\in\mathscr{D}(\mathcal{A}_{\max})\), \(u=\mathcal{A}_{\max}z\) and
$$ D^{*}Z_{R}+U_{1}^{*}d-V_{1}^{*}c\in N(U_{0})^{\perp}= R \bigl(U_{0}^{*} \bigr). $$
Applying \(U_{1}\) and \(V_{1}\), respectively, to (4.6) and observing (4.1) and (4.2) it follows that d and c are uniquely given by \(d=-U_{1}D^{*}Z_{R}\) and \(c=V_{1}D^{*}Z_{R}\). From the uniqueness of u and c we see that \((A(U))^{*}\) is not only a linear relation but a linear operator, so that \(A(U)\) is densely defined. □

Remark 4.3

The matrix D is invertible and
$$\begin{aligned} D^{-1}=-D=D^{*}, \end{aligned}$$
see [8], (2.7).
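Remark 4.3 is easy to confirm numerically. The following sketch (our own code, not from [8]) builds D from its definition and checks \(D^{2}=-I\), which is equivalent to \(D^{-1}=-D\), and \(D^{\mathsf{T}}=-D\) (so \(D^{*}=-D\), since D is real), for several values of n.

```python
import numpy as np

def D_matrix(n):
    """D = (-1)^n diag(C, -C) with C = ((-1)^r delta_{r,2n+1-s}), 1-indexed."""
    m = 2 * n
    C = np.array([[(-1) ** (r + 1) if r + s == m - 1 else 0
                   for s in range(m)] for r in range(m)])
    Z = np.zeros((m, m), dtype=int)
    return (-1) ** n * np.block([[C, Z], [Z, -C]])

for n in (1, 2, 3):
    D = D_matrix(n)
    assert np.array_equal(D @ D, -np.eye(4 * n, dtype=int))  # D^2 = -I, so D^{-1} = -D
    assert np.array_equal(D.T, -D)                           # D^* = -D
```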

Proposition 4.4

Assume that \(\operatorname {rank}W=2(2n-k)\). Then \(U_{1}D=V_{1}\) and \(V_{1}D=-U_{1}\).


Proof

By definition of \(U_{1}\) and D we can write
$$U_{1}D=(-1)^{n} \begin{pmatrix} U_{1}^{a}C&0\\ 0&-U_{1}^{b}C \end{pmatrix} , $$
where \(U_{1}^{\alpha}=(\delta_{j,p_{i}+1})_{i\in\Theta_{1}^{\alpha},j=1,\dots ,2n}\) for \(\alpha=a,b\). In view of Proposition 3.8 we conclude that
$$\begin{aligned} U_{1}^{\alpha}C&= \bigl(\delta_{2n+1-j,p_{i}+1}(-1)^{p_{i}+1} \bigr)_{i\in\Theta _{1}^{\alpha},j=1,\dots,2n} \\ &= \bigl(\delta_{j,q_{i}+1}(-1)^{q_{i}} \bigr)_{i\in\Theta_{1}^{\alpha},j=1,\dots ,2n} . \end{aligned}$$
Hence \(U_{1}D=V_{1}\), and (4.7) gives \(V_{1}D=U_{1}D^{2}=-U_{1}\). □

Proposition 4.5

If \(A(U)\) is symmetric, then \(A(U)\) is self-adjoint.


Proof

We have to show that \(\mathscr{D}((A(U))^{*})\subset\mathscr{D}(A(U))\). By Theorem 4.2, \(\mathscr{D}((A(U))^{*})\) is the set of all \(\bigl ({\scriptsize\begin{matrix} z\cr V_{1}Z_{R} \end{matrix}} \bigr ) \) such that \(z\in\mathscr{D}(\mathcal{A}_{\max})\) and \(D^{*}Z_{R}+U_{1}^{*}d-V_{1}^{*}c\in R(U_{0}^{*})\). But Theorem 4.2, Proposition 4.4 and (4.7) imply
$$\begin{aligned} D^{*}Z_{R}-V_{1}^{*}c+U_{1}^{*}d&=D^{*}Z_{R}-V_{1}^{*}V_{1}D^{*}Z_{R}-U_{1}^{*}U_{1}D^{*}Z_{R} \\ &=-DZ_{R}-V_{1}^{*}U_{1}Z_{R}+U_{1}^{*}V_{1}Z_{R} \\ &=-WZ_{R}, \end{aligned}$$
so that \(\mathscr{D}((A(U))^{*})\subset\mathscr{D}(A(U))\) if and only if \(W^{-1}(R(U_{0}^{*}))\subset N(U_{0})\).
We know that \(\operatorname {rank}U_{0}=2n-k\) and \(\dim N(U_{0})=4n-\operatorname {rank}U_{0}=2n+k\), whereas \(\dim N(W)=4n-\operatorname {rank}W=2k\) by Corollary 3.6. Altogether, we conclude
$$\dim W^{-1} \bigl(R \bigl(U_{0}^{*} \bigr) \bigr)\le\dim N(W)+ \operatorname {rank}U_{0}=2n+k=\dim N(U_{0}). $$
But from Corollary 3.7 we conclude that \(N(U_{0})\subset W^{-1}(R(U_{0}^{*}))\), and it follows that \(N(U_{0})= W^{-1}(R(U_{0}^{*}))\). □

Proposition 4.6

Assume \(\operatorname {rank}W=2(2n-k)\). Then \(W(N(U_{0}))=R(U_{0}^{*})\) if and only if
  (i) \(p_{s}+p_{r}\ne2n-1\) for all \(r,s\in\Theta_{0}^{a}\);
  (ii) \(p_{s}+p_{r}\ne2n-1\) for all \(r,s\in\Theta_{0}^{b}\).



Proof

Defining for \(c=a,b\),
$$\begin{aligned}& M_{c}=\operatorname {span}\bigl\{ e_{p_{j}+1}:j\in\Theta_{0}^{c} \bigr\} \subset\mathbb{C}^{2n},\quad c=a,b, \\& N_{c}=\mathbb{C}^{2n}\ominus M_{c}=\operatorname {span}\bigl\{ e_{j}:j\in\{1,\dots,2n\}\setminus \bigl\{ p_{s}+1:s\in \Theta_{0}^{c} \bigr\} \bigr\} \subset\mathbb{C}^{2n}, \\& W_{a}=(-1)^{n}C+V_{2},\qquad W_{b}=(-1)^{n+1}C+V_{3}, \end{aligned}$$
where \(V_{2}\) and \(V_{3}\) are as in (3.7), it follows that
$$R \bigl(U_{0}^{*} \bigr)= \Biggl\{ \begin{pmatrix} u_{a}\\ u_{b} \end{pmatrix} :u_{a} \in M_{a}, u_{b}\in M_{b} \Biggr\} ,\qquad N(U_{0})= \Biggl\{ \begin{pmatrix} u_{a}\\ u_{b} \end{pmatrix} :u_{a}\in N_{a}, u_{b}\in N_{b} \Biggr\} , $$
$$W=D+V_{1}^{*}U_{1}- U_{1}^{*}V_{1}= \begin{pmatrix} W_{a}&0\\ 0&W_{b} \end{pmatrix} $$
in view of (3.5) and (3.8). Therefore \(W(N(U_{0}))=R(U_{0}^{*})\) if and only if \(W_{c}(N_{c})=M_{c}\) for \(c=a,b\). Now let \(c\in\{a,b\}\). From Proposition 3.8 and its proof we find for \(j\in\{1,\dots ,2n\}\) that
$$\begin{aligned} W_{c}(e_{j})&= \begin{cases} \pm e_{2n+1-j} &\mbox{if }j\in\{1,\dots,2n\}\setminus\{ p_{s}+1,q_{s}+1:s\in\Theta_{1}^{c}\},\\ 0&\mbox{if }j\in\{p_{s}+1,q_{s}+1:s\in\Theta_{1}^{c}\}. \end{cases} \end{aligned}$$
Observing condition 1 in Proposition 3.8 it follows that
$$\begin{aligned} W_{c}(N_{c})={}&\operatorname {span}\bigl\{ e_{2n+1-j}:j\in\{1, \dots,2n\} \\ &{} \setminus \bigl( \bigl\{ p_{s}+1,q_{s}+1:s\in \Theta_{1}^{c} \bigr\} \cup \bigl\{ p_{s}+1:s\in \Theta_{0}^{c} \bigr\} \bigr) \bigr\} \\ ={}&\operatorname {span}\bigl\{ e_{j}:j\in\{1,\dots,2n\} \\ &{}\setminus \bigl( \bigl\{ p_{s}+1,q_{s}+1:s\in \Theta_{1}^{c} \bigr\} \cup \bigl\{ 2n-p_{s}:s\in \Theta_{0}^{c} \bigr\} \bigr) \bigr\} . \end{aligned}$$
Hence \(W_{c}(N_{c})=M_{c}\) holds if and only if the sets
$$\Psi_{1}^{c}:= \bigl\{ p_{s}+1,q_{s}+1:s \in\Theta_{1}^{c} \bigr\} \cup \bigl\{ 2n-p_{s}:s \in \Theta_{0}^{c} \bigr\} \quad\mbox{and}\quad \Psi_{0}^{c}:= \bigl\{ p_{s}+1:s\in \Theta_{0}^{c} \bigr\} $$
are complementary subsets of \(\{1,\dots,2n\}\). But by Assumption 2.1 and condition 1 in Proposition 3.8 the listed elements in \(\Psi_{0}^{c}\) as well as in \(\Psi_{1}^{c}\) are mutually distinct, so that the sets \(\Psi_{0}^{c}\) and \(\Psi_{1}^{c}\) are complementary if and only if they are disjoint. It is clear that this latter property holds if and only if \(2n-p_{j}\notin\Psi_{0}^{c}\) for all \(j\in\Theta_{0}^{c}\). This completes the proof of the proposition. □

Theorem 4.7

The boundary eigenvalue problem (2.8), (2.9) has an operator pencil representation (2.12) with self-adjoint operators K and \(A(U)\) if and only if
  1. \(\beta_{j}\in\mathbb{R}\) and \(p_{j}+q_{j}=2n-1\) for all \(j\in \Theta_{1}\);
  2. \(p_{s}+p_{r}\ne2n-1\) for all \(r,s\in\Theta_{0}^{a}\);
  3. \(p_{s}+p_{r}\ne2n-1\) for all \(r,s\in\Theta_{0}^{b}\).



Proof

This theorem is an immediate consequence of Corollary 3.9 and Propositions 4.5 and 4.6. □
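The conditions of Theorem 4.7 are purely combinatorial, so they can be checked mechanically for any given set of boundary conditions. The sketch below is our own code (the function name, the tuple encoding and the beam-type example are assumptions, not from the paper): each boundary condition is a triple \((p_{j},q_{j},\beta_{j})\), with \(\beta_{j}=0\) marking a λ-independent condition, the first n at the endpoint a and the last n at b.

```python
def satisfies_theorem_4_7(n, conds):
    """conds: 2n tuples (p, q, beta); beta = 0 marks a lambda-independent
    condition (q is then irrelevant and may be None)."""
    two_n = 2 * n
    theta1 = [(p, q, b) for (p, q, b) in conds if b != 0]
    theta0a = [p for (p, _, b) in conds[:n] if b == 0]
    theta0b = [p for (p, _, b) in conds[n:] if b == 0]
    cond1 = all(complex(b).imag == 0 and p + q == two_n - 1 for (p, q, b) in theta1)
    cond2 = all(ps + pr != two_n - 1 for ps in theta0a for pr in theta0a)
    cond3 = all(ps + pr != two_n - 1 for ps in theta0b for pr in theta0b)
    return cond1 and cond2 and cond3

# fourth order (n = 2) beam-type example: y(a)=0, y^[2](a)=0, y^[2](b)=0,
# y^[3](b) + i*lambda*beta*y(b) = 0 with real beta
good = [(0, None, 0), (2, None, 0), (2, None, 0), (3, 0, 1.0)]
bad  = [(0, None, 0), (3, None, 0), (2, None, 0), (3, 0, 1.0)]  # 0 + 3 = 2n - 1 at a

print(satisfies_theorem_4_7(2, good))  # True
print(satisfies_theorem_4_7(2, bad))   # False
```

In the second example the two λ-independent conditions at a have \(p_{s}+p_{r}=3=2n-1\), violating condition 2, so the associated \(A(U)\) is not self-adjoint.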



Acknowledgements

This research was partially supported by a grant from the NRF of South Africa, grant number 80956.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Mathematics, University of the Witwatersrand


  1. Wang, A, Sun, J, Zettl, A: Characterization of domains of self-adjoint ordinary differential operators. J. Differ. Equ. 246, 1600-1622 (2009)
  2. Pivovarchik, V, van der Mee, C: The inverse generalized Regge problem. Inverse Probl. 17, 1831-1845 (2001)
  3. Möller, M, Pivovarchik, V: Spectral properties of a fourth order differential equation. J. Anal. Appl. 25, 341-366 (2006)
  4. Möller, M, Zinsou, B: Self-adjoint fourth order differential operators with eigenvalue parameter dependent boundary conditions. Quaest. Math. 34, 393-406 (2011)
  5. Möller, M, Zinsou, B: Sixth order differential operators with eigenvalue dependent boundary conditions. Appl. Anal. Discrete Math. 7(2), 378-389 (2013)
  6. Zettl, A: Formally self-adjoint quasi-differential operators. Rocky Mt. J. Math. 5, 453-474 (1975)
  7. Everitt, WN, Zettl, A: Generalized symmetric ordinary differential expressions I: the general theory. Nieuw Arch. Wiskd. 27, 363-397 (1979)
  8. Möller, M, Zettl, A: Symmetric differential operators and their Friedrichs extension. J. Differ. Equ. 115(1), 50-69 (1995). doi:10.1006/jdeq.1995.1003
  9. Frentzen, H: Equivalence, adjoints and symmetry of quasi-differential expressions with matrix-valued coefficients and polynomials in them. Proc. R. Soc. Edinb., Sect. A 92, 123-146 (1982)
  10. Möller, M, Zettl, A: Semi-boundedness of ordinary differential operators. J. Differ. Equ. 115(1), 24-49 (1995). doi:10.1006/jdeq.1995.1002
  11. Kato, T: Perturbation Theory for Linear Operators. Springer, Berlin (1966)


© Möller and Zinsou. 2015