Open Access

Self-adjoint fourth order differential operators with eigenvalue parameter dependent and periodic boundary conditions

Boundary Value Problems 2017, 2017:33

https://doi.org/10.1186/s13661-017-0768-y

Received: 19 October 2016

Accepted: 6 March 2017

Published: 14 March 2017

Abstract

Fourth order eigenvalue problems with periodic and separated boundary conditions are considered. One of the separated boundary conditions depends linearly on the eigenvalue parameter λ. These problems can be represented by an operator polynomial \(L(\lambda)=\lambda ^{2}M-i\alpha\lambda K-A\), where \(\alpha>0\) and M and K are self-adjoint operators. Necessary and sufficient conditions are given for the operator A to be self-adjoint.

Keywords

fourth order differential operators; eigenvalue dependent boundary conditions; periodic boundary conditions; quadratic operator pencil; self-adjoint operators

MSC

34B07; 34B08; 47E05

1 Introduction

Higher order linear differential equations occur in applications, with or without the eigenvalue parameter in the boundary conditions. Such problems are realized as operator polynomials, also called operator pencils. The study of higher order eigenvalue problems is developing slowly but steadily. Some recent developments on higher order problems whose boundary conditions may depend on the eigenvalue parameter, including asymptotics of the eigenvalues, can be found in [1–14].

The generalized Regge problem and the problem of small transversal vibrations of a homogeneous beam that is compressed or stretched have boundary conditions with partial first derivatives with respect to the time variable t. The self-adjoint sixth order problem [8], the self-adjoint higher order problems [9], and the fourth order Birkhoff regular problems [10] have boundary conditions of the same type. The mathematical model of these problems leads to eigenvalue problems with the eigenvalue parameter λ occurring linearly in the boundary conditions. Such problems have an operator representation of the form
$$ L(\lambda)=\lambda^{2}M-i\lambda K-A $$
(1.1)
in the Hilbert space \(H=L_{2}(I)\oplus\mathbb{C}^{k}\), where I is an interval, k is the number of eigenvalue dependent boundary conditions, and M, K, and A are the coefficient operators.

A classification of separated eigenvalue boundary conditions of 2nth order problems for which all the coefficient operators of the operator pencil (1.1) are self-adjoint is given in [9], Theorem 4.7, while an equivalent classification for fourth order problems is given in [4]. The boundary conditions investigated in [4, 9] are all separated. Möller and Pivovarchik [15] give necessary and sufficient conditions for an operator to be self-adjoint in terms of the null and image spaces of matrices defined by any type of boundary conditions for a 2nth order differential equation.

In this paper we extend the work of [1], using the same fourth order differential equation (3.1), to a class of boundary conditions in which two boundary conditions are periodic or anti-periodic at the end points, while the remaining two boundary conditions are separated and one of them depends linearly on the eigenvalue parameter λ. The genesis of this problem is the problem studied by Möller and Pivovarchik [1], where the boundary condition depending on the eigenvalue parameter has a specific physical meaning: the hinge connection at the right end is subjected to viscous friction \(\alpha>0\). In keeping with the pattern of the boundary conditions of the operator studied in [1], we restrict our boundary conditions so that the two terms of the eigenvalue dependent boundary condition are at the same end point of the interval and the order of the derivative in its eigenvalue dependent part is one less than the order of its eigenvalue independent part. We associate to the problem under consideration the operator polynomial
$$ L(\lambda)=\lambda^{2}M-i\alpha\lambda K-A, $$
(1.2)
where \(\alpha>0\), while M and K are self-adjoint operators. We give necessary and sufficient conditions for the operator A to be self-adjoint.

We give basic definitions and properties needed for this study in Section 2. In Section 3 we prove that a particular fourth order periodic eigenvalue problem is self-adjoint, using two different characterizations of self-adjoint operators: the Möller and Pivovarchik characterization for general boundary conditions [15] and the Möller and Zinsou characterization for separated boundary conditions [4, 9]. In Section 4 we present, for the fourth order eigenvalue problems investigated in this paper, these two characterizations as matrix equations. The Möller and Pivovarchik characterization is given by \(U_{3}(N(U_{1}))=R(U^{*})\), while the Möller and Zinsou characterization is \(W(N(U_{1}))=R(U_{1}^{*})\), where \(U_{3}\) is a \(10\times8\) matrix, \(U_{1}\) is a \(3\times8\) matrix, U is a \(5\times10\) matrix, and W is an \(8\times8\) matrix of rank 6. Finally, in Section 5 we consider a class of periodic eigenvalue problems consisting of two periodic boundary conditions and two separated boundary conditions, one of which depends on the eigenvalue parameter. We derive necessary and sufficient conditions for the coefficient operator A to be self-adjoint and describe the structure of the boundary conditions using the singular value decomposition.

2 Preliminaries

A Sobolev space is defined as
$$W_{2}^{m}(0,a):= \bigl\{ g\in L_{2}(0,a): \forall j \in\{1, \ldots,m\}\ g^{(j)}\in L_{2}(0,a) \bigr\} , $$
where \(a>0\) and \(m\in\mathbb {N}\).
Let \(n=2k\) where \(k\in\mathbb{N}\). We consider an nth order differential expression of the form
$$ \ell y=\sum_{m=0}^{k} \bigl(g_{m}y^{(m)} \bigr)^{(m)} $$
(2.1)
on an interval \([0,a]\), \(a>0\), where \(g_{m}\in W_{2}^{m}(0,a)\), \(m=0,\ldots,k\), are real-valued functions and \(\vert g_{k}(x)\vert >\varepsilon\) for some \(\varepsilon>0\) and all \(x\in[0,a]\). The differential expression ℓy is well defined for \(y\in W_{2}^{n}(0,a)\), in which case \(\ell y\in L_{2}(0,a)\). The operator \(L_{0}\) defined by
$$ D(L_{0})=W_{2}^{n}(0,a), \quad\quad L_{0}y=\ell y, \quad y\in W_{2}^{n}(0,a), $$
(2.2)
is called the maximal operator associated with the differential expression ℓ on \([0,a]\).

Definition 2.1

Let \(y\in W_{2}^{n}(0,a)\). For \(j=0,\ldots,n\) the jth quasi-derivative of y, denoted \(y^{[j]}\), is recursively defined by
$$\begin{aligned}& y^{[j]} =y^{(j)} \quad\text{for } j=0,\ldots,k-1, \\& y^{[k]} =g_{k}y^{(k)}, \\& y^{[j]} = \bigl(y^{[j-1]} \bigr)'+g_{n-j}y^{(n-j)} \quad\text{for } j=k+1,\ldots,n. \end{aligned}$$
The quasi-derivatives depend on the differential expression (2.1). They are convenient for the formulation of the Lagrange identity when dealing with differential operators which have fairly general coefficients. Let
$$\begin{aligned} Y= \begin{pmatrix} y \\ c \end{pmatrix} ,\quad \quad Z= \begin{pmatrix} z \\ d \end{pmatrix} , \quad \quad W= \begin{pmatrix} w \\ e \end{pmatrix} \end{aligned}$$
(2.3)
be elements of the Hilbert space \(L_{2}(0,a)\oplus\mathbb{C}\), \(y,z,w\in W_{2}^{n}(0,a)\).

A formulation of the Lagrange identity and Green’s formula is quoted below from [15], Theorem 10.2.3.

Theorem 2.2

For a differential expression ℓ of the form (2.1) and \(y,z \in W_{2}^{n}(0,a)\), the Lagrange identity
$$\begin{aligned} (\ell y)\overline{z}-y(\ell\overline{z})=\frac{d}{dx}[y,z] \end{aligned}$$
(2.4)
holds on \([0,a]\) almost everywhere, where
$$\begin{aligned}{} [y,z]=\sum_{j=1}^{k} (-1)^{j} \bigl(y^{[j-1]}\overline{z^{[n-j]}}-y^{[n-j]} \overline{z^{[j-1]}} \bigr) \end{aligned}$$
(2.5)
and Green’s formula
$$\begin{aligned} (\ell y,z)-(y, \ell z)=[y,z](a)-[y,z](0) \end{aligned}$$
(2.6)
is valid, where \((\cdot,\cdot)\) is the inner product in \(L_{2}(0,a)\).
Let \(r,q\in\mathbb{N}_{0}\), let \(U_{1}\) be an \(r\times2n\) matrix, and let \(U_{2}\) and V be \(q\times2n\) matrices. Then the operator A in the Hilbert space \(L_{2}(0,a)\oplus\mathbb{C}\) is defined by
$$\begin{aligned} & D(A)= \bigl\{ Y\in W_{2}^{n}(0,a)\oplus \mathbb{C}, U_{1}\hat{Y}=0, c=U_{2}\hat{Y} \bigr\} , \end{aligned}$$
(2.7)
$$\begin{aligned} &AY= \begin{pmatrix} \ell y \\ V\hat{Y} \end{pmatrix} , \end{aligned}$$
(2.8)
where
$$\begin{aligned} \hat{Y}= \bigl( y(0),\ldots,y^{[n-1]}(0),y(a), \ldots,y^{[n-1]}(a) \bigr)^{\intercal}. \end{aligned}$$
(2.9)
For \(m\in\mathbb{N}\) define
$$\begin{aligned} J_{m,0}&= \bigl((-1)^{s-1}\delta_{s,m+1-t} \bigr)^{m}_{s,t=1}, \quad\quad J_{m,1}= \begin{pmatrix} 0& J_{m,0}\\ -J^{*}_{m,0} & 0 \end{pmatrix} , \\ J_{m}&= \begin{pmatrix} -J_{m,1}& 0\\ 0 & J_{m,1} \end{pmatrix} . \end{aligned}$$
(2.10)
Finally define
$$\begin{aligned} &U_{3}= \begin{pmatrix} J_{2} \\ V\\ -U_{2} \end{pmatrix} , \end{aligned}$$
(2.11)
$$\begin{aligned} &U= \begin{pmatrix} U_{1}&0&0 \\ U_{2}&-I&0\\ V&0&-I \end{pmatrix} . \end{aligned}$$
(2.12)
Before stipulating a criterion of self-adjointness, we give a proposition which states conditions under which \(Z\in D(A^{*})\), quoted from [15], Proposition 10.3.3.

Proposition 2.3

Assume that \(\operatorname {rank}\bigl({\scriptsize\begin{matrix}{}U_{1} \cr U_{2}\end{matrix}} \bigr) =r+q\). Then \(Z\in D(A^{*})\) if and only if \(Z \in W^{n}_{2}(0,a) \oplus \mathbb{C}\) and there is \(e \in\mathbb{C}\) such that
$$ [y,z](a) - [y,z](0) + d^{*}V\hat{Y} -e^{*}U_{2}\hat{Y}=0 $$
(2.13)
for all \(\hat{Y} \in N(U_{1})\). For \(Z\in D(A^{*})\), e is unique and
$$A^{*}Z= \begin{pmatrix}\ell z\\ e \end{pmatrix} . $$

A criterion of self-adjointness as given by [15], Theorem 10.3.5, is quoted below.

Theorem 2.4

Assume that
$$\operatorname {rank} \begin{pmatrix}U_{1} \\ U_{2} \end{pmatrix} =r+q. $$
Then A is self-adjoint if and only if
$$U_{3} \bigl(N(U_{1}) \bigr)=R \bigl(U^{*} \bigr). $$

In addition to determining if A is self-adjoint, we use [15], Theorem 10.3.8, quoted below to conclude that A is bounded below.

Theorem 2.5

Assume that A is self-adjoint. Then A has a compact resolvent. Assume additionally that
  1. (i)

    \((-1)^{k}g_{k}>0\),

  2. (ii)

    each component of \(U_{1}\hat{Y}\) either contains only quasi-derivatives \(y^{[m]}\) with \(m< k\) or contains only quasi-derivatives \(y^{[m]}\) with \(m\geq k\),

  3. (iii)

    each component of \(U_{2}\hat{Y}\) either contains only quasi-derivatives \(y^{[m]}\) with \(m< k\) or contains only quasi-derivatives \(y^{[m]}\) with \(m\geq k\),

  4. (iv)

    for each component of \(U_{2}\hat{Y}\) which contains only quasi-derivatives \(y^{[m]}\) with \(m\geq k\), the corresponding component of \(V\hat{Y}\) contains only quasi-derivatives \(y^{[m]}\) with \(m< k\).

Then A is bounded below.

Any \(m\times n\) matrix can be decomposed into a diagonal matrix of its singular values and orthogonal matrices of orders m and n, as stated in [16], Theorem 6.1, quoted below.

Theorem 2.6

Any \(m\times n\) real matrix Γ, with \(m\geq n\), can be factorized as
$$\begin{aligned} \Gamma=\Delta \begin{pmatrix} \Sigma\\ 0 \end{pmatrix} \Theta^{\intercal}, \end{aligned}$$
(2.14)
where \(\Delta\in\mathbb{R}^{m\times m}\) and \(\Theta\in\mathbb {R}^{n\times n}\) are orthogonal, and \(\Sigma\in\mathbb{R}^{n\times n}\) is diagonal,
$$\Sigma=\operatorname {diag}(\sigma_{1}, \sigma_{2},\ldots, \sigma_{n}), $$
where \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{n}\geq0\).

3 A particular problem

The boundary value problem with a fourth order differential equation
$$ y^{(4)}(\lambda,x) - \bigl(gy' \bigr)'(\lambda,x)=\lambda^{2} y(\lambda,x), $$
(3.1)
together with the following boundary conditions:
$$\begin{aligned}& y(\lambda,0) - y(\lambda,a) = 0, \end{aligned}$$
(3.2)
$$\begin{aligned}& y^{[3]}(\lambda,0)- y^{[3]}(\lambda,a) = 0, \end{aligned}$$
(3.3)
$$\begin{aligned}& y'(\lambda,0) = 0, \end{aligned}$$
(3.4)
$$\begin{aligned}& y''(\lambda,a) + i\alpha\lambda y'(\lambda,a) = 0, \end{aligned}$$
(3.5)
defined on the interval \([0,a]\), where \(a>0\), \(\alpha>0\), and \(g\in C^{1}[0,a]\), is the starting point of our study. The boundary conditions (3.2) and (3.3) are periodic, while the boundary conditions (3.4) and (3.5) are separated; boundary condition (3.5) also depends on the eigenvalue parameter λ. We establish an operator approach to this problem by defining operators A, K, and M, which are the coefficients of an operator polynomial in the eigenvalue parameter; see (3.6) below. Then the eigenfunctions of the operator polynomial L given by
$$\begin{aligned} L(\lambda)=\lambda^{2}M-i\alpha\lambda K -A \end{aligned}$$
(3.6)
correspond to the non-trivial solutions of (3.1)-(3.5), where the operators A, K, and M are defined by
$$\begin{aligned} &D(A)= \left\{ Y= \begin{pmatrix} y \\ c \end{pmatrix} : y\in W_{2}^{4}(0,a), y(\lambda,0)-y(\lambda,a)=y'( \lambda,0)=0, \right. \\ &\hphantom{D(A)=}\left. \vphantom{ \begin{pmatrix} y \\ c \end{pmatrix} } y^{[3]}(\lambda,0)- y^{[3]}( \lambda,a)=0, c=y'(\lambda,a) \right\}, \\ &D(K)=D(M)=L_{2}(0,a)\oplus\mathbb{C}, \end{aligned}$$
and
$$A \begin{pmatrix} y \\ c \end{pmatrix} = \begin{pmatrix} y^{(4)} -(gy')' \\ y''(a) \end{pmatrix} ,\quad \quad K= \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad M= \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} . $$

Proposition 3.1

The operators A, K, and M are self-adjoint, M and K are bounded, K has rank 1, \(M\geq0\), \(K\geq0\), \(M+K\gg0\), \(N(M)\cap N(A)= \{0\}\) and A is bounded below and has a compact resolvent.

Proof

The statements about K and M are obvious. If \((y,c)^{\intercal}\in N(M)\cap N(A)\) then \((y,c)^{\intercal}\in N(M)\) gives \(y=0\), and \((y,c)^{\intercal}\in D(A)\) where \(c=y'(a)\) leads to \(c=y'(a)=0\). Hence \(N(M)\cap N(A)= \{0\}\). We are going to use Theorem 2.4 to verify that A is self-adjoint. By the differential expression (2.1) with \(n=4\), \(g_{0}=0\), \(g_{1}=-g \in C^{1}[0,a]\) and \(g_{2}=1\) one represents (3.1) as
$$ \ell y=g_{0}y+ \bigl(g_{1}y' \bigr)'+ \bigl(g_{2}y'' \bigr)'' =y^{(4)} - \bigl(gy' \bigr)'=L_{0}y. $$
(3.7)
The quasi-derivatives associated with (3.1) are
$$y^{[0]}=y, \quad\quad y^{[1]}=y',\quad\quad y^{[2]}=y'',\quad\quad y^{[3]}=y^{(3)} -gy', \quad\quad y^{[4]}=y^{(4)} - \bigl(gy' \bigr)'. $$
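As a sanity check (our illustration, not part of the original paper), the recursion of Definition 2.1 with the coefficients \(g_{0}=0\), \(g_{1}=-g\), \(g_{2}=1\) from (3.7) can be verified symbolically; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
g = sp.Function('g')

# Coefficients of (2.1) for equation (3.1): n = 4, k = 2.
g0, g1, g2 = sp.Integer(0), -g(x), sp.Integer(1)

# Quasi-derivatives per Definition 2.1.
q = [y(x), sp.diff(y(x), x)]                        # y[0], y[1]  (j < k)
q.append(g2 * sp.diff(y(x), x, 2))                  # y[2] = g2 * y''
q.append(sp.diff(q[2], x) + g1 * sp.diff(y(x), x))  # y[3] = (y[2])' + g1 * y'
q.append(sp.diff(q[3], x) + g0 * y(x))              # y[4] = (y[3])' + g0 * y

# y[4] should coincide with ell y = y'''' - (g y')'.
ell_y = sp.diff(y(x), x, 4) - sp.diff(g(x) * sp.diff(y(x), x), x)
assert sp.simplify(q[4] - ell_y) == 0
```

In particular, the sketch confirms \(y^{[3]}=y^{(3)}-gy'\) and \(y^{[4]}=y^{(4)}-(gy')'\) as listed above.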
The eigenvalue independent boundary conditions (3.2)-(3.4) give \(r=3\), and the single eigenvalue dependent boundary condition (3.5) gives \(q=1\). Thus \(U_{1}\) is a \(3 \times8\) matrix, and both \(U_{2}\) and V are \(1 \times8\) matrices, given by
$$\begin{aligned} &U_{1}= \begin{pmatrix} 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} , \end{aligned}$$
(3.8)
$$\begin{aligned} &U_{2}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} , \end{aligned}$$
(3.9)
$$\begin{aligned} &V= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} . \end{aligned}$$
(3.10)
Then the operator A can also be defined in terms of these matrices as
$$\begin{aligned} &AY= \begin{pmatrix} \ell y \\ V\hat{Y} \end{pmatrix} , \\ &D(A)= \bigl\{ Y\in W_{2}^{4}(0,a)\oplus\mathbb{C}, U_{1}\hat{Y}=0, c=U_{2}\hat{Y} \bigr\} , \end{aligned}$$
similar to (2.8) and (2.7). We now specify the matrices \(J_{2}\), \(U_{3}\), and U as
$$\begin{aligned}& J_{2}= \begin{pmatrix} 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \end{pmatrix} , \end{aligned}$$
(3.11)
$$\begin{aligned}& U_{3}= \begin{pmatrix} 0& 0 &0&-1&0&0&0&0\\0& 0 &1&0&0&0&0&0\\0& -1 &0&0&0&0&0&0\\1& 0 &0&0&0&0&0&0\\0& 0 &0&0&0&0&0&1\\0& 0 &0&0&0&0&-1&0\\ 0& 0 &0&0&0&1&0&0\\0& 0 &0&0&-1&0&0&0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \end{pmatrix} , \end{aligned}$$
(3.12)
and
$$\begin{aligned}& U= \begin{pmatrix} 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0&0&0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1&0&0 \\ 0 & 1& 0 & 0 & 0 & 0 & 0 & 0 & 0&0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0&-1&0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0&0&-1 \end{pmatrix} . \end{aligned}$$
(3.13)
\(J_{2}\) is an \(8\times8\) matrix, \(U_{3}\) is a \(10\times8\) matrix, and U is a \(5\times10\) matrix, where I in (2.12) is the \(1\times1\) identity matrix.
We find \(N(U_{1})\) and \(R(U^{*})\) as
$$\begin{aligned}& N(U_{1})=\operatorname {span}\{e_{1}+e_{5},e_{3},e_{4}+e_{8},e_{6},e_{7} \}\subset\mathbb{C}^{8} \end{aligned}$$
(3.14)
and
$$\begin{aligned}& R \bigl(U^{*} \bigr)=\operatorname {span}\{ e_{1}-e_{5},e_{4}-e_{8},e_{2},e_{6}-e_{9},e_{7}-e_{10} \}\subset\mathbb{C}^{10}. \end{aligned}$$
(3.15)
Then we compare \(U_{3}(N(U_{1}))\) with \(R(U^{*})\) and find that they are equal, so A is self-adjoint by Theorem 2.4. Moreover, A has a compact resolvent in view of Theorem 2.5. The coefficient of the highest derivative in the differential component of A is \(g_{2} = 1>0\), as required by Theorem 2.5(i). For problem (3.1)-(3.5), the quantities appearing in conditions (ii)-(iv) of Theorem 2.5 are
$$\begin{aligned} &U_{1}\hat{Y}= \begin{pmatrix}y(0) -y(a)\\y^{[3]}(0)-y^{[3]}(a)\\y^{[1]}(0) \end{pmatrix} , \end{aligned}$$
(3.16)
$$\begin{aligned} &U_{2}\hat{Y}= y^{[1]}(a), \end{aligned}$$
(3.17)
$$\begin{aligned} &V\hat{Y}= y^{[2]}(a). \end{aligned}$$
(3.18)
The first and third components of \(U_{1}\hat{Y}\) contain quasi-derivatives of orders zero and one, so their orders m are less than \(k=2\), half the order of the differential equation, while the second component has order \(m=3\geq k=2\). The single component of \(U_{2}\hat{Y}\) has order one, which is less than k. Since no component of \(U_{2}\hat{Y}\) contains quasi-derivatives of order at least k, the condition on \(V\hat{Y}\) in Theorem 2.5(iv) is vacuous. Thus all the conditions of Theorem 2.5 are fulfilled and A is bounded below. □
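The comparison of \(U_{3}(N(U_{1}))\) with \(R(U^{*})\) carried out in the proof can also be reproduced numerically. The following Python sketch (our illustration, not part of the paper) builds the matrices (3.8)-(3.13) and checks that the two subspaces of \(\mathbb{C}^{10}\) coincide by comparing ranks:

```python
import numpy as np

def null_space(M, tol=1e-10):
    # Orthonormal basis for N(M), computed from the SVD.
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

# U1, U2, V as in (3.8)-(3.10) (0-based column indices).
U1 = np.zeros((3, 8))
U1[0, 0], U1[0, 4] = 1, -1    # y(0) - y(a) = 0
U1[1, 3], U1[1, 7] = 1, -1    # y^[3](0) - y^[3](a) = 0
U1[2, 1] = 1                  # y'(0) = 0
U2 = np.zeros((1, 8)); U2[0, 5] = 1   # y^[1](a)
V = np.zeros((1, 8)); V[0, 6] = 1     # y^[2](a)

# J2 as in (3.11).
J2 = np.zeros((8, 8))
J2[0, 3], J2[1, 2], J2[2, 1], J2[3, 0] = -1, 1, -1, 1
J2[4, 7], J2[5, 6], J2[6, 5], J2[7, 4] = 1, -1, 1, -1

U3 = np.vstack([J2, V, -U2])                       # (2.11): 10 x 8
U = np.block([[U1, np.zeros((3, 2))],
              [U2, -np.eye(1), np.zeros((1, 1))],
              [V, np.zeros((1, 1)), -np.eye(1)]])  # (2.12): 5 x 10

S1 = U3 @ null_space(U1)   # spans U3(N(U1))
S2 = U.conj().T            # columns span R(U*)
rk = np.linalg.matrix_rank
# Two subspaces are equal iff they have equal dimension and
# stacking their spanning sets adds no new dimension.
assert rk(S1) == rk(S2) == rk(np.hstack([S1, S2])) == 5
```

The rank comparison is a standard numerical test for equality of subspaces given spanning matrices.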
An alternative criterion is used to show that (3.1)-(3.5) is self-adjoint. First, define
$$\begin{aligned} W=J_{2}+U_{2}^{*}V-V^{*}U_{2}. \end{aligned}$$
(3.19)
Then \(W(N(U_{1}))\) and \(R(U_{1}^{*})\) are given by
$$\begin{aligned} W \bigl(N(U_{1}) \bigr)=\operatorname {span}\{e_{4}-e_{8},e_{2},-e_{1}+e_{5} \}\subset\mathbb{C}^{8} \end{aligned}$$
(3.20)
and
$$\begin{aligned} R \bigl(U_{1}^{*} \bigr)=\operatorname {span}\{e_{1}-e_{5},e_{2},e_{4}-e_{8} \}\subset\mathbb{C}^{8}. \end{aligned}$$
(3.21)
A comparison of \(W(N(U_{1}))\) and \(R(U_{1}^{*})\) shows that \(W(N(U_{1}))=R(U_{1}^{*})\), which is a necessary and sufficient condition for self-adjointness; see Theorem 4.4 below.
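This alternative criterion can be checked numerically in the same way; the sketch below (our illustration, assuming the matrices (3.8)-(3.11)) verifies that \(\operatorname{rank} W=6\) and that \(W(N(U_{1}))=R(U_{1}^{*})\):

```python
import numpy as np

def null_space(M, tol=1e-10):
    # Orthonormal basis for N(M), computed from the SVD.
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

# U1, U2, V as in (3.8)-(3.10) and J2 as in (3.11).
U1 = np.zeros((3, 8))
U1[0, 0], U1[0, 4] = 1, -1
U1[1, 3], U1[1, 7] = 1, -1
U1[2, 1] = 1
U2 = np.zeros((1, 8)); U2[0, 5] = 1
V = np.zeros((1, 8)); V[0, 6] = 1
J2 = np.zeros((8, 8))
J2[0, 3], J2[1, 2], J2[2, 1], J2[3, 0] = -1, 1, -1, 1
J2[4, 7], J2[5, 6], J2[6, 5], J2[7, 4] = 1, -1, 1, -1

W = J2 + U2.conj().T @ V - V.conj().T @ U2   # (3.19)
assert np.linalg.matrix_rank(W) == 6

S1 = W @ null_space(U1)   # spans W(N(U1))
S2 = U1.conj().T          # columns span R(U1*)
rk = np.linalg.matrix_rank
assert rk(S1) == rk(S2) == rk(np.hstack([S1, S2])) == 3
```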

4 Periodic and a single eigenvalue dependent boundary condition

Consider on the interval \([0,a]\), where \(a>0\), the differential equation (3.1) with boundary conditions
$$\begin{aligned}& U_{1}\hat{Y} =0, \end{aligned}$$
(4.1)
$$\begin{aligned}& (V+i\alpha U_{2})\hat{Y} =0, \end{aligned}$$
(4.2)
where the matrices \(U_{1}\), \(U_{2}\), and V are of the following form:
$$\begin{aligned}& U_{1}= \bigl(u^{1}_{i,j} \bigr)_{i=1, j=1}^{3,8}, \end{aligned}$$
(4.3)
$$\begin{aligned}& U_{2}= \bigl(u^{2}_{i,j} \bigr)_{i=1, j=1}^{1,8}, \end{aligned}$$
(4.4)
$$\begin{aligned}& V=(v_{i,j})_{i=1, j=1}^{1,8}. \end{aligned}$$
(4.5)
Consider a particular case where \(U_{2}\) and V each contain exactly one non-zero element, such that the non-zero element of \(U_{2}\) is in a different column from the non-zero element of V, and the non-zero elements of \(U_{1}\) are positioned such that the first block column of (2.12) has linearly independent rows. The operator A in (2.8) is given by
$$\begin{aligned}& AY= \begin{pmatrix} \ell y \\ V\hat{Y} \end{pmatrix} , \\& D(A)= \bigl\{ Y\in W_{2}^{4}(0,a)\oplus\mathbb{C}, U_{1}\hat{Y}=0, c=U_{2}\hat{Y} \bigr\} . \end{aligned}$$
We recall that the dimension of the domain of a linear map is the sum of the dimension of its null space and its rank. In addition, two finite dimensional spaces coincide if one is contained in the other and their dimensions are equal. Since \(U_{1}\), \(U_{2}\), and V act on the vector space \(\mathbb{C}^{8}\), \(\operatorname {rank}U_{2}\) and rankV are given by
$$8-\dim\bigl(N(U_{2}) \bigr)=1 \quad\text{and} \quad 8-\dim\bigl(N(V) \bigr)=1, $$
respectively.

Proposition 4.1

Let \(U_{2}\) and V contain exactly one non-zero element such that the non-zero element in \(U_{2}\) is in a different column to the non-zero element in V. Let
$$ W=J_{2}+U_{2}^{*}V-V^{*}U_{2}. $$
(4.6)
Then \(U_{2}^{*}V\) and \(V^{*}U_{2}\) are \(8\times8\) matrices of rank 1, \(U_{2}^{*}V-V^{*}U_{2}\) is an \(8\times8\) matrix of rank 2 and W is an \(8\times8\) matrix of rank at least 6.

Proof

Let the non-zero element of V be at \(j=p\) and that of \(U_{2}\) be at \(j=s\), \(s\neq p\). Then
$$\begin{aligned}& U_{2}^{*}V= \bigl( \bigl(\overline{u^{2}_{ij}} \bigr)_{i=1, j=1}^{1,8} \bigr)^{\intercal} (v_{ij})_{i=1, j=1}^{1,8} = \bigl(\overline{u^{2}}_{1j} v_{1i} \bigr)_{j=1, i=1}^{8,8}, \end{aligned}$$
has exactly one non-zero element, \(\overline{u^{2}}_{1s} v_{1p}\), at \(j=s\), \(i=p\). The position of the only non-zero element of \(V^{*}U_{2}\) is in row p and column s, thus \(U_{2}^{*}V-V^{*}U_{2}\) has rank 2. \(J_{2}\) in (4.6) is invertible with rank 8 and \(U_{2}^{*}V-V^{*}U_{2}\) has rank 2. Hence, the rank of W is at least 6. □
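Proposition 4.1 can be confirmed numerically for every admissible placement of the non-zero entries. The sketch below (our illustration, with arbitrary non-zero values standing in for the entries of \(U_{2}\) and V) loops over all positions \(s\neq p\):

```python
import numpy as np

# J2 as in (3.11).
J2 = np.zeros((8, 8))
J2[0, 3], J2[1, 2], J2[2, 1], J2[3, 0] = -1, 1, -1, 1
J2[4, 7], J2[5, 6], J2[6, 5], J2[7, 4] = 1, -1, 1, -1

for s in range(8):
    for p in range(8):
        if s == p:
            continue
        U2 = np.zeros((1, 8)); U2[0, s] = 2.0   # arbitrary non-zero entry
        V = np.zeros((1, 8)); V[0, p] = -3.0    # arbitrary non-zero entry
        D = U2.T @ V - V.T @ U2
        assert np.linalg.matrix_rank(U2.T @ V) == 1
        assert np.linalg.matrix_rank(D) == 2
        assert np.linalg.matrix_rank(J2 + D) >= 6   # rank of W at least 6
```

Since \(J_{2}\) has rank 8 and the perturbation \(U_{2}^{*}V-V^{*}U_{2}\) has rank 2, the rank of the sum can drop by at most 2, which is exactly the argument in the proof.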

Remark 4.2

Whenever \(Y\in D(A)\) then \(\hat{Y}\in N(U_{1})\), and for every \(u \in N(U_{1})\) there is a \(Y\in D(A)\) such that \(\hat{Y}=u\).

Corollary 4.3

If A is self-adjoint then \(\operatorname {rank}W=6 \) and \(W(N(U_{1}))=R(U_{1}^{*})\).

Proof

Proposition 2.3 states that \(Z\in D(A^{*})\) if and only if \(Z \in W^{4}_{2}(0,a) \oplus \mathbb{C}\) and there is \(e \in\mathbb{C}\) such that
$$ [y,z](a) - [y,z](0) + d^{*}V\hat{Y} -e^{*}U_{2}\hat{Y}=0 $$
(4.7)
for all \(\hat{Y} \in N(U_{1})\). For \(Z\in D(A^{*})\), e is unique and
$$A^{*}Z= \begin{pmatrix}\ell z\\ e \end{pmatrix} . $$
We use (2.3) for \(Y, Z\in D(A)=D(A^{*})\) and
$$[y,z](a)-[y,z](0)=\hat{Z}^{*}J_{2}\hat{Y} $$
together with values of e and d as implied by (2.8) and (2.7), respectively, which we substitute into (4.7) to get
$$\begin{aligned} 0 &=[y,z](a) -[y,z](0) +d^{*}V\hat{Y} - e^{*}U_{2}\hat{Y} \\ &=[y,z](a) -[y,z](0) +(U_{2}\hat{Z})^{*}V\hat{Y} - (V \hat{Z})^{*}U_{2}\hat{Y} \\ &=\hat{Z}^{*}J_{2}\hat{Y} +\hat{Z}^{*}U_{2}^{*}V\hat{Y} -\hat{Z}^{*} V^{*}U_{2}\hat{Y} \\ &=\hat{Z}^{*} \bigl(J_{2} +U_{2}^{*}V - V^{*}U_{2} \bigr) \hat{Y} \\ &=\hat{Z}^{*}W\hat{Y}, \end{aligned}$$
where Ŷ and Ẑ are as defined in (2.9). This means that \(W\hat{Y}\perp\hat{Z}\), i.e. \(W(N(U_{1}))\subset( N(U_{1}))^{\perp}=R(U_{1}^{*})\). We use this containment of \(W(N(U_{1}))\) in \(R(U_{1}^{*})\) to compare their dimensions as
$$\begin{aligned} 3&=\operatorname {rank}U_{1}^{*}\geq\dim\bigl(W \bigl(N(U_{1}) \bigr) \bigr) \\ &\geq\dim\bigl(N(U_{1}) \bigr) - (8-\operatorname {rank}W) \\ &=-3+\operatorname {rank}W. \end{aligned}$$
(4.8)
Hence \(\operatorname {rank}W \leq6\). By Proposition 4.1, \(\operatorname {rank}W\geq6\), so \(\operatorname {rank}W=6\); hence all the inequalities in (4.8) are equalities and \(\dim(W(N(U_{1})))= \dim( R(U_{1}^{*}))\). Thus \(W(N(U_{1}))= R(U_{1}^{*})\). □

Theorem 4.4

The following statements are equivalent:
  1. (i)

    A is self-adjoint,

  2. (ii)

    \(U_{3}(N(U_{1}))=R(U^{*})\),

  3. (iii)

    \(W(N(U_{1}))=R(U_{1}^{*})\).

Proof

Suppose (i) holds. Then Corollary 4.3 implies (iii).

Suppose (iii) holds. Let \(u\in N(U_{1})\). Then there is \(v \in \mathbb{C}^{3} \) such that \(Wu = U_{1}^{*}v\), i.e.
$$\begin{aligned} U_{1}^{*}v=Wu= \bigl(J_{2}+U_{2}^{*} V - V^{*} U_{2} \bigr)u. \end{aligned}$$
(4.9)
Consider
$$\begin{aligned}& U_{3}u= \begin{pmatrix} J_{2} \\ V\\ -U_{2} \end{pmatrix} u= \begin{pmatrix} J_{2}u \\ Vu\\ -U_{2}u \end{pmatrix} . \end{aligned}$$
(4.10)
Let \(b=-Vu\) and \(c=U_{2}u\) i.e. \(0=Vu+b\) and \(0=U_{2}u-c\) and substitute (4.9) below. Then
$$\begin{aligned} \begin{pmatrix}J_{2}u \\ Vu\\ -U_{2}u \end{pmatrix} &= \begin{pmatrix} J_{2}u+U_{2}^{*}(Vu+b) - V^{*} (U_{2}u-c)\\ -b \\ -c \end{pmatrix} \\ &= \begin{pmatrix} (J_{2}+U_{2}^{*} V - V^{*} U_{2})u +U_{2}^{*}b+V^{*}c\\ -b \\ -c \end{pmatrix} \\ &= \begin{pmatrix}U_{1}^{*}v +U_{2}^{*}b+V^{*}c\\ -b \\ -c \end{pmatrix} \\ &= \begin{pmatrix} U_{1}^{*}&U_{2}^{*}&V^{*} \\ 0&-I&0\\ 0&0&-I \end{pmatrix} \begin{pmatrix} v \\ b\\ c \end{pmatrix} =U^{*} \begin{pmatrix} v \\ b\\ c \end{pmatrix} . \end{aligned}$$
Thus \(U_{3}(N(U_{1}))\subset R(U^{*})\) and \(\dim(U_{3}(N(U_{1})))\leq \operatorname {rank}U^{*}\). The map \(U_{1}: \mathbb{C}^{8} \rightarrow \mathbb{C}^{3}\) in (4.3) has \(\dim(N(U_{1}))= \dim(\mathbb{C}^{8}) - \operatorname {rank}U_{1}= 8 - 3=5\) by the rank-nullity theorem. Similarly, U, whose first block column is given by (4.3)-(4.5), has \(\operatorname {rank}U= 5\); thus \(\dim(N(U_{1})) =\operatorname {rank}U^{*}\). We then conclude that \(U_{3}(N(U_{1}))= R(U^{*})\) by showing that \(U_{3}\) is injective, i.e. that 0 is the only element of \(N(U_{3})\). Suppose \(U_{3}u=0\). Then
$$\begin{aligned}& 0=U_{3}u= \begin{pmatrix} J_{2}u \\ Vu\\ -U_{2}u \end{pmatrix} , \end{aligned}$$
(4.11)
and \(J_{2}u=0\) implies \(u=0\) since \(J_{2}\) is invertible. Hence (ii) follows.

Suppose that (ii) holds. Then by Theorem 2.4 we have (i). □

5 Further examples of self-adjoint operators with periodic and a single eigenvalue dependent boundary conditions

In keeping with the pattern of the boundary conditions of the operator studied in [1], and using the differential equation (3.1) and Theorem 2.4, we identify the boundary conditions of the self-adjoint operators under investigation as follows:
$$\begin{aligned}& y^{[\beta_{1}]}(\lambda,0) - \epsilon_{1}y^{[\beta _{1}]}( \lambda,a) = 0, \end{aligned}$$
(5.1)
$$\begin{aligned}& y^{[\beta_{2}]}(\lambda,0)- \epsilon_{2}y^{[\beta _{2}]}( \lambda,a) =0, \end{aligned}$$
(5.2)
$$\begin{aligned}& \delta y^{[\beta_{3}]}(\lambda,0)+(1-\delta)y^{[\beta _{3}]}( \lambda,a) =0, \end{aligned}$$
(5.3)
$$\begin{aligned}& (1-\delta) \bigl(y^{[\beta_{4}]}(\lambda,0) + \epsilon_{3} i\alpha\lambda y^{[\beta_{5}]}(\lambda,0) \bigr) =\delta \bigl(y^{[\beta _{4}]}(\lambda,a) + \epsilon_{3}i\alpha\lambda y^{[\beta_{5}]}(\lambda,a) \bigr), \end{aligned}$$
(5.4)
where \(\beta_{m}\in\{0,1,2,3\}\), \(m=1,2,\ldots,5\); the \(\beta_{m}\), \(m=1,2,3\), are distinct, i.e. \(\beta_{s}\neq \beta_{m}\) for \(s\neq m\) with \(s,m=1,2,3\); \(\beta_{1}\), \(\beta_{2}\), \(\beta _{4}\), \(\beta_{5}\) are pairwise different with \(\beta_{5}=\beta_{4}-1\) and \(\beta_{1}<\beta_{2}\); \(\epsilon_{j}=\pm1\) for \(j=1,2,3\); and \(\delta\in \{0,1\}\). We give necessary and sufficient conditions for the main operator A to be self-adjoint.

Theorem 5.1

The quadratic operator polynomial representing the fourth order differential equation (3.1) with the boundary conditions (5.1)-(5.4) is self-adjoint if and only if these boundary conditions have the following structure:
$$\begin{aligned}& \epsilon_{1}\epsilon_{2}=1, \end{aligned}$$
(5.5)
$$\begin{aligned}& \epsilon_{3} =-1 \quad \textit{for } \delta=0, \end{aligned}$$
(5.6)
$$\begin{aligned}& \epsilon_{3} =1 \quad \textit{for } \delta=1, \end{aligned}$$
(5.7)
$$\begin{aligned}& \beta_{1} =0, \end{aligned}$$
(5.8)
$$\begin{aligned}& \beta_{2} =3, \end{aligned}$$
(5.9)
$$\begin{aligned}& \beta_{3} =1,2. \end{aligned}$$
(5.10)

Proof

Consider the matrices \(U_{1}\), \(U_{2}\), and V of the form (4.3)-(4.5). Let the non-zero elements of \(U_{2}\) and V be at \(u^{2}_{1,2}\) and \(v_{1,3}\), respectively. Using the representation (5.4), these correspond to \(\beta_{4}=2\) and \(\beta_{5}=1\). Let \(\epsilon_{3}=-1\), \(\beta_{1}=0\), \(\epsilon_{1}=-1\), \(\beta_{2}=3\), \(\epsilon_{2}=-1\), and \(\beta_{3}=2\). Starting with this choice of \(U_{2}\) and V, which implies that \(\delta=0\), the matrix \(U_{1}\) given by these parameters is
$$\begin{aligned}& U_{1}= \begin{pmatrix} 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} . \end{aligned}$$
(5.11)
Then we consider \(U_{2}\) and V whose non-zero elements are at \(u^{2}_{1,6}\) and \(v_{1,7}\), respectively; these correspond to \(\beta _{4}=2\), \(\beta_{5}=1\), \(\delta=1\), and \(\epsilon_{3}=1\). A matrix \(U_{1}\) with such periodic boundary conditions is given by
$$\begin{aligned}& U_{1}= \begin{pmatrix} 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 &1 \\ 0 & 1 & 0 & 0 & 0 &0 & 0 & 0 \end{pmatrix} . \end{aligned}$$
(5.12)
The assumption of Theorem 2.4 is fulfilled, since \(\operatorname {rank}\bigl({\scriptsize\begin{matrix}{}U_{1} \cr U_{2}\end{matrix}} \bigr) =4\) for both (5.11) and (5.12) together with their corresponding \(U_{2}\). For each \(U_{1}\) we compute \(U_{3}(N(U_{1}))\) and the corresponding \(R(U^{*})\). The result is that \(U_{3}(N(U_{1}))=R(U^{*})\) in each of the two cases and for any combination of the parameters stated. Thus the operator A in each of the 12 cases is self-adjoint. A self-adjoint quadratic operator polynomial representing the fourth order differential equation (3.1) with boundary conditions satisfying (5.1)-(5.4) satisfies
$$U_{3} \bigl(N(U_{1}) \bigr)=R \bigl(U^{*} \bigr). $$
Conversely, represent the boundary conditions by
$$\begin{aligned}& U_{1}= \begin{pmatrix} a & 0 & 0 & 0 & \epsilon_{1}a& 0 & 0 & 0 \\ 0 & 0 & 0 & b & 0 & 0 & 0 &\epsilon_{2}b \\ 0 & c & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} , \end{aligned}$$
(5.13)
$$\begin{aligned}& U_{2}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & \epsilon_{3}d & 0 & 0 \end{pmatrix} , \end{aligned}$$
(5.14)
$$\begin{aligned}& V= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & e & 0 \end{pmatrix} , \end{aligned}$$
(5.15)
such that (5.1)-(5.4) are satisfied. Then, using Matlab, we verify that if \(U_{3}(N(U_{1}))=R(U^{*})\), then the values of the parameters β, δ, and ϵ are as given by (5.5)-(5.10). □
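The Matlab computation described above can be reproduced in Python. The sketch below (our illustration, with a hypothetical helper `is_self_adjoint`) uses the criterion \(W(N(U_{1}))=R(U_{1}^{*})\) of Theorem 4.4 to check one instance of condition (5.7): with \(U_{1}\) as in (3.8) and \(U_{2}\), V placed at the right end point, \(\epsilon_{3}=1\) passes the criterion while \(\epsilon_{3}=-1\) fails it:

```python
import numpy as np

def null_space(M, tol=1e-10):
    # Orthonormal basis for N(M), computed from the SVD.
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

# U1 as in (3.8) and J2 as in (3.11).
U1 = np.zeros((3, 8))
U1[0, 0], U1[0, 4] = 1, -1
U1[1, 3], U1[1, 7] = 1, -1
U1[2, 1] = 1
J2 = np.zeros((8, 8))
J2[0, 3], J2[1, 2], J2[2, 1], J2[3, 0] = -1, 1, -1, 1
J2[4, 7], J2[5, 6], J2[6, 5], J2[7, 4] = 1, -1, 1, -1

def is_self_adjoint(eps3):
    """Check W(N(U1)) = R(U1*) (Theorem 4.4(iii))."""
    U2 = np.zeros((1, 8)); U2[0, 5] = eps3   # epsilon3 * y^[1](a)
    V = np.zeros((1, 8)); V[0, 6] = 1.0      # y^[2](a)
    W = J2 + U2.T @ V - V.T @ U2             # (4.6)
    S1 = W @ null_space(U1)
    S2 = U1.T
    rk = np.linalg.matrix_rank
    return bool(rk(S1) == rk(S2) == rk(np.hstack([S1, S2])))

assert is_self_adjoint(+1)       # epsilon3 = 1 satisfies the criterion
assert not is_self_adjoint(-1)   # epsilon3 = -1 violates it
```

A full verification of Theorem 5.1 would loop over all admissible combinations of β, δ, and ϵ in the same manner.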

We find a unifying structure for boundary conditions that are periodic or anti-periodic at the end points of the interval and have an eigenvalue parameter dependence in one of them, as described by Theorem 2.6. The matrix \(U_{4}\) defined below is decomposed into its singular values and orthogonal matrices in order to find a relationship among all the cases.

Define a matrix
$$\begin{aligned}& U_{4}:= \begin{pmatrix} U_{1} \\ U_{2}\\ V \end{pmatrix} . \end{aligned}$$
(5.16)
All the matrices \(U_{4}\) that result from (5.1)-(5.4) and satisfy Theorem 5.1 are such that each column of \(U_{4}\) has at most one non-zero element and each of its rows has at least one non-zero element.

Theorem 5.2

The self-adjoint quadratic operator polynomial representing the fourth order differential equation (3.1) with boundary conditions (5.1)-(5.4) that satisfy Theorem  5.1 has
$$\begin{aligned} &U_{4}=\Theta \begin{pmatrix} \Sigma& 0 \end{pmatrix} \Delta^{\intercal}, \end{aligned}$$
where \(\Theta=I_{5}\), \(\Sigma=\operatorname {diag}(\sqrt{2},\sqrt {2},1,1,1)\) and \(\Delta^{\intercal}\in\mathbb{R}^{8\times8}\).

Proof

Consider (5.1)-(5.4) with \(\beta_{1}=0\), \(\beta _{2}=3\), \(\beta_{3}=1\), \(\beta_{4}=2\), \(\beta_{5}=1\), \(\delta=0\) and \(\epsilon_{1},\epsilon_{2}, \epsilon_{3}=1\). This choice of parameters results in \(U_{1}\), \(U_{2}\), and V given in (3.8)-(3.10). We then compute singular values of \(U_{4}\) with
$$\begin{aligned}& U_{4}^{\intercal}= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0& 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & -1 & 0 & 0 & 0 \end{pmatrix} . \end{aligned}$$
(5.17)
Then
$$\begin{aligned}& U_{4}U_{4}^{\intercal}= \begin{pmatrix} 2& 0 & 0 & 0 & 0 \\ 0 & 2 & 0& 0 & 0 \\ 0 & 0& 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} . \end{aligned}$$
(5.18)
The eigenvalues of \(U_{4}U_{4}^{\intercal}\) are 2, with eigenvectors \((1\ 0\ 0\ 0\ 0 ) ^{\intercal}\) and \((0\ 1\ 0\ 0\ 0 ) ^{\intercal}\), and 1, with eigenvectors \((0\ 0\ 1\ 0\ 0 ) ^{\intercal}\), \((0\ 0\ 0\ 1\ 0 ) ^{\intercal}\), and \((0\ 0\ 0\ 0\ 1 ) ^{\intercal}\). We construct a matrix C whose columns are the eigenvectors of \(U_{4}U_{4}^{\intercal}\), ordered by decreasing magnitude of their eigenvalues, i.e. \(C= ( e_{1}\ e_{2}\ e_{3}\ e_{4} \ e_{5} ) \). We then apply the Gram-Schmidt orthonormalization process, which in this case yields \(\Theta= I_{5}\). We repeat the process with \(U_{4}^{\intercal}U_{4}\) to find \(\Delta ^{\intercal}\). The eigenvalues of \(U_{4}^{\intercal}U_{4}\) are 2, 1, and 0 with multiplicities two, three, and three, respectively. We list the eigenvectors of \(U_{4}^{\intercal}U_{4}\) as columns of \(D= (d_{i})_{1}^{8}\), ordered below by decreasing magnitude of their eigenvalues, as
$$\begin{aligned}& d_{1} =-\frac{1}{\sqrt{2}}(e_{4}-e_{8}), \\& d_{2} =-\frac{1}{\sqrt{2}}(e_{1}-e_{5}), \\& d_{3} =e_{7}, \\& d_{4} =e_{2}, \\& d_{5} =e_{6}, \\& d_{6} =-\frac{1}{\sqrt{2}}(e_{1}+e_{5}), \\& d_{7} =e_{3}, \\& d_{8} =-\frac{1}{\sqrt{2}}(e_{4}+e_{8}). \end{aligned}$$
Then we project the \(d_{i}\) and normalize them as before, which gives
$$\begin{aligned}& \Delta^{\intercal}= \begin{pmatrix}0&\frac{1}{\sqrt{2}}&0&0&0&0&-\frac{1}{\sqrt{2}}&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&0&-\frac{1}{\sqrt{2}}&0&-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&0&0&0&0&-\frac{1}{2}&0&\frac{1}{2}\\ 0&-\frac{1}{\sqrt{2}}&0&0&0&0&-\frac{1}{\sqrt{2}}&0\\ 0&0&0&0&1&0&0&0\\ 0&0&1&0&0&0&0&0\\ -\frac{1}{\sqrt{2}}&0&0&0&0&-\frac{1}{2}&0&\frac{1}{2} \end{pmatrix} ^{\intercal}. \end{aligned}$$
(5.19)
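The singular value decomposition above can be checked numerically. The following sketch (using NumPy, not part of the original paper, which reports MATLAB verification) enters \(U_{4}^{\intercal}\) from (5.17) and confirms that \(U_{4}U_{4}^{\intercal}=\operatorname{diag}(2,2,1,1,1)\) as in (5.18), that the singular values of \(U_{4}\) are \(\sqrt{2},\sqrt{2},1,1,1\) as in Σ, and that \(U_{4}^{\intercal}U_{4}\) has eigenvalues 2, 1 and 0 with multiplicities two, three and three:

```python
import numpy as np

# U4^T as printed in (5.17); its rows are the columns of U4.
U4T = np.array([
    [ 1,  0, 0, 0, 0],
    [ 0,  0, 1, 0, 0],
    [ 0,  0, 0, 0, 0],
    [ 0,  1, 0, 0, 0],
    [-1,  0, 0, 0, 0],
    [ 0,  0, 0, 1, 0],
    [ 0,  0, 0, 0, 1],
    [ 0, -1, 0, 0, 0],
], dtype=float)
U4 = U4T.T

# U4 U4^T should be diag(2, 2, 1, 1, 1), cf. (5.18).
print(U4 @ U4T)

# The singular values of U4 are the square roots of those eigenvalues,
# matching Sigma = diag(sqrt(2), sqrt(2), 1, 1, 1).
print(np.linalg.svd(U4, compute_uv=False))

# Eigenvalues of U4^T U4 (ascending): 0, 0, 0, 1, 1, 1, 2, 2.
print(np.linalg.eigvalsh(U4T @ U4))
```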
All operators of this family have the same Θ and Σ; \(\Delta^{\intercal}\) is the only distinguishing matrix in the decomposition of their \(U_{4}\). Here \(W=J_{2}+U_{2}^{*}V-V^{*}U_{2}\) and \(U_{3}= \left({\scriptsize\begin{matrix}{}J_{2}\cr V\cr -U_{2}\end{matrix}} \right) \). □

Declarations

Acknowledgements

This research was supported by a grant from the NRF of South Africa, Grant number 80956. Several of the above calculations were verified with MATLAB.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics, University of the Witwatersrand

References

  1. Möller, M, Pivovarchik, V: Spectral properties of a fourth order differential equation. J. Anal. Appl. 25, 341-366 (2006)
  2. Wang, A, Sun, J, Zettl, A: The classification of self-adjoint boundary conditions: separated, coupled, and mixed. J. Funct. Anal. 255, 1554-1573 (2008)
  3. Wang, A, Sun, J, Zettl, A: Characterization of domains of self-adjoint ordinary differential operators. J. Differ. Equ. 246, 1600-1622 (2009)
  4. Möller, M, Zinsou, B: Self-adjoint fourth order differential operators with eigenvalue parameter dependent boundary conditions. Quaest. Math. 34, 393-406 (2011). doi:10.2989/16073606.2011.622913
  5. Möller, M, Zinsou, B: Spectral asymptotics of self-adjoint fourth order boundary value problem with eigenvalue parameter dependent boundary conditions. Bound. Value Probl. 2012, 106 (2012). doi:10.1186/1687-2770-2012-106
  6. Möller, M, Zinsou, B: Spectral asymptotics of self-adjoint fourth order differential operators with eigenvalue parameter dependent boundary conditions. Complex Anal. Oper. Theory 6, 799-818 (2012). doi:10.1007/s11785-011-0162-1
  7. Möller, M, Zinsou, B: Asymptotics of the eigenvalues of a self-adjoint fourth order boundary value problem with four eigenvalue parameter dependent boundary conditions. J. Funct. Spaces Appl. (2013). doi:10.1155/2013/280970
  8. Möller, M, Zinsou, B: Sixth order differential operators with eigenvalue dependent boundary conditions. Appl. Anal. Discrete Math. 7, 378-389 (2013). doi:10.2298/AADM130608010M
  9. Möller, M, Zinsou, B: Self-adjoint higher order differential operators with eigenvalue parameter dependent boundary conditions. Bound. Value Probl. 2015, 79 (2015). doi:10.1186/s13661-015-0341-5
  10. Zinsou, B: Fourth order Birkhoff regular problems with eigenvalue parameter dependent boundary conditions. Turk. J. Math. 40, 864-873 (2016). doi:10.3906/mat-1508-61
  11. Möller, M, Zinsou, B: Asymptotics of the eigenvalues of self-adjoint fourth order differential operators with separated eigenvalue parameter dependent boundary conditions. Rocky Mt. J. Math. (to appear)
  12. Demarque, R, Miyagaki, O: Radial solutions of inhomogeneous fourth order elliptic equations and weighted Sobolev embeddings. Adv. Nonlinear Anal. (2014). doi:10.1515/anona-2014-0041
  13. Baraket, S, Radulescu, V: Combined effects of concave-convex nonlinearities in a fourth order problem with variable exponent. Adv. Nonlinear Stud. 16(3), 409-419 (2016)
  14. Han, X, Gao, T: A priori bounds and existence of non-real eigenvalues of fourth-order boundary value problem with definite weight function. Electron. J. Differ. Equ. 2016, 82 (2016)
  15. Möller, M, Pivovarchik, V: Spectral Theory of Operator Pencils, Hermite-Biehler Functions, and Their Applications. Oper. Theory: Advances and Applications, vol. 246. Birkhäuser, Basel (2015)
  16. Eldén, L: Matrix Methods in Data Mining and Pattern Recognition. Society for Industrial and Applied Mathematics, Philadelphia (2007)

Copyright

© The Author(s) 2017