# Linear overdetermined boundary value problems in Hilbert space

## Abstract

The general linear boundary value problem for an abstract functional differential equation is considered in the case where the number of boundary conditions exceeds the dimension of the null-space of the corresponding homogeneous equation. Sufficient conditions for the solvability of the problem are obtained. The case of a functional differential system with aftereffect is considered separately.

## Introduction

Linear boundary value problems (BVPs) for differential equations with ordinary derivatives that are not everywhere and uniquely solvable arise in various applications, among them problems in oscillation theory (see, for example, []) and economic dynamics []. Results on the solvability and the representation of solutions for such BVPs are widely used as a tool for investigating weakly nonlinear BVPs []. General results concerning linear BVPs for an abstract functional differential equation (AFDE) are given in []. In this paper, we consider the case where the number of linearly independent boundary conditions is greater than the dimension of the null-space of the corresponding homogeneous equation, and we obtain sufficient conditions for solvability without recourse to the adjoint BVP or to an extension of the original BVP. Our approach rests essentially on the assumption that the derivative of the solution belongs to a Hilbert space. We then consider a system of functional differential equations that, formally speaking, is a concrete realization of the AFDE and, on the other hand, covers many kinds of dynamic models with aftereffect (integro-differential, delay differential, differential difference) [–]. For this case the sufficient conditions are derived in an explicit form.

## Preliminaries

In this section, we give some necessary facts from the theory of AFDE [, , ]. The linear abstract functional differential equation is the equation

$\mathcal{L}x=f,$
(1)

where $\mathcal{L}:D\to B$ is a linear bounded operator, D and B are Banach spaces such that D is isomorphic to the direct product $B×{R}^{n}$. Let us denote by $\mathcal{J}=\left\{\mathrm{\Lambda },Y\right\}:B×{R}^{n}\to D$ an isomorphism and let ${\mathcal{J}}^{-1}=\left[\delta ,r\right]$.

A linear operator acting from the direct product $B×{R}^{n}$ of the Banach spaces B and ${R}^{n}$ into a Banach space D is defined by a pair of linear operators $\mathrm{\Lambda }:B\to D$ and $Y:{R}^{n}\to D$ in such a way that

$\left\{\mathrm{\Lambda },Y\right\}\left\{z,\beta \right\}=\mathrm{\Lambda }z+Y\beta ,\phantom{\rule{1em}{0ex}}z\in B,\beta \in {R}^{n}.$
(2)

A linear operator acting from a space D into a direct product $B×{R}^{n}$ is defined by a pair of linear operators $\delta :D\to B$ and $r:D\to {R}^{n}$ so that

$\left[\delta ,r\right]x=\left\{\delta x,rx\right\},\phantom{\rule{1em}{0ex}}x\in D.$
(3)

Under the norm

${\parallel \left\{z,\beta \right\}\parallel }_{B×{R}^{n}}={\parallel z\parallel }_{B}+|\beta |,$
(4)

the space $B×{R}^{n}$ is a Banach space (here and in what follows, $|\cdot |$ denotes a norm in ${R}^{n}$). If the bounded operator $\left\{\mathrm{\Lambda },Y\right\}:B×{R}^{n}\to D$ is the inverse of the bounded operator $\left[\delta ,r\right]:D\to B×{R}^{n}$, then

$x=\mathrm{\Lambda }\delta x+Yrx,\phantom{\rule{1em}{0ex}}x\in D,$
(5)

$\delta \left(\mathrm{\Lambda }z+Y\beta \right)=z$, $r\left(\mathrm{\Lambda }z+Y\beta \right)=\beta$, $\left\{z,\beta \right\}\in B×{R}^{n}$.

Hence

$\mathrm{\Lambda }\delta +Yr=I,\phantom{\rule{2em}{0ex}}\delta \mathrm{\Lambda }=I,\phantom{\rule{2em}{0ex}}\delta Y=0,\phantom{\rule{2em}{0ex}}r\mathrm{\Lambda }=0,\phantom{\rule{2em}{0ex}}rY=I,$
(6)

where I is the identity operator. We will identify the finite-dimensional operator $Y:{R}^{n}\to D$ with a vector $\left({y}_{1},\dots ,{y}_{n}\right)$, ${y}_{i}\in D$, such that $Y\beta ={\sum }_{i=1}^{n}{y}_{i}{\beta }^{i}$, $\beta =col\left\{{\beta }^{1},\dots ,{\beta }^{n}\right\}$.

Denote the components of the vector functional r by ${r}^{1},\dots ,{r}^{n}$. If $\ell =\left[{\ell }^{1},\dots ,{\ell }^{m}\right]:D\to {R}^{m}$ is a linear vector functional, and $X=\left({x}_{1},\dots ,{x}_{n}\right)$ is a vector with components ${x}_{i}\in D$, then $\ell X$ denotes the $m×n$-matrix whose columns are the values of the components of $\ell$ on the components of X: $\ell X=\left({\ell }^{i}{x}_{j}\right)$, $i=1,\dots ,m$; $j=1,\dots ,n$.

Applying $\mathcal{L}$ to both parts of (2.4), we obtain the decomposition

$\mathcal{L}x=Q\delta x+Arx,$
(7)

where $Q=\mathcal{L}\mathrm{\Lambda }:B\to B$ is the principal part, and $A=\mathcal{L}Y:{R}^{n}\to B$ is the finite-dimensional part of $\mathcal{L}$. Similarly, applying $\ell$ to both parts of (2.4), we get

$\ell x=\mathrm{\Phi }\delta x+\mathrm{\Psi }rx,$
(8)

where $\mathrm{\Phi }=\ell \mathrm{\Lambda }:B\to {R}^{m}$ is a linear bounded vector functional and $\mathrm{\Psi }=\ell Y:{R}^{n}\to {R}^{m}$.

Let $\ell =\left[{\ell }^{1},\dots ,{\ell }^{m}\right]:D\to {R}^{m}$ be a linear bounded vector functional with linearly independent components, $\gamma =col\left({\gamma }^{1},\dots ,{\gamma }^{m}\right)\in {R}^{m}$. The system

$\mathcal{L}x=f,\phantom{\rule{2em}{0ex}}\ell x=\gamma$
(9)

is called a linear boundary value problem.

Taking into account (2.5) and (2.6), we can rewrite BVP (2.7) in the form

$\left(\begin{array}{cc}Q& A\\ \mathrm{\Phi }& \mathrm{\Psi }\end{array}\right)\left(\begin{array}{c}\delta x\\ rx\end{array}\right)=\left(\begin{array}{c}f\\ \gamma \end{array}\right).$
(10)

The operator

$\left(\begin{array}{cc}{Q}^{\ast }& {\mathrm{\Phi }}^{\ast }\\ {A}^{\ast }& {\mathrm{\Psi }}^{\ast }\end{array}\right):{B}^{\ast }×{\left({R}^{m}\right)}^{\ast }\to {B}^{\ast }×{\left({R}^{n}\right)}^{\ast }$
(11)

is the adjoint one to the operator

$\left(\begin{array}{cc}Q& A\\ \mathrm{\Phi }& \mathrm{\Psi }\end{array}\right):B×{R}^{n}\to B×{R}^{m}.$
(12)

Taking into account the isomorphism between the spaces ${B}^{\ast }×{\left({R}^{n}\right)}^{\ast }$ and ${D}^{\ast }$, we therefore call the equation

$\left(\begin{array}{cc}{Q}^{\ast }& {\mathrm{\Phi }}^{\ast }\\ {A}^{\ast }& {\mathrm{\Psi }}^{\ast }\end{array}\right)\left(\begin{array}{c}\omega \\ \beta \end{array}\right)=\left(\begin{array}{c}g\\ \eta \end{array}\right)$
(13)

the adjoint equation to the problem (2.8).

In the sequel it is assumed that the so-called principal BVP

$\mathcal{L}x=f,\phantom{\rule{2em}{0ex}}rx=\alpha$
(14)

is uniquely solvable for any $f\in B$, $\alpha \in {R}^{n}$. Recall that in such a case we have the representation [] (Theorem 1.16, p.11)

$x=X\alpha +Gf$
(15)

for the solution of (2.10), where X is called the fundamental vector and G the Green operator.

Problem (2.7) covers a wide class of BVPs for ordinary differential systems, differential delay systems, and some singular and impulsive systems []. This problem is well-posed if $m=n$. In such a situation, BVP (2.7) is uniquely solvable for any $f\in B$ and $\gamma \in {R}^{n}$ if and only if the matrix

$\ell X=\left(\ell {x}^{1},\dots ,\ell {x}^{n}\right),$
(16)

where ${x}^{j}$ is the j th element of X, is nonsingular, i.e. $det\ell X\ne 0$.

In the case $m>n$, BVP (2.7) lacks everywhere and unique solvability; namely, it is solvable if and only if the right-hand side $\left\{f,\gamma \right\}\in B×{R}^{m}$ is orthogonal to all solutions $\left\{\omega ,\beta \right\}$ of the homogeneous adjoint equation (2.9), i.e. $\omega f+\beta \gamma =0$ [] (Corollary 1.15, p.11).

In what follows, we derive solvability conditions for (2.7) in a more explicit form without recourse to the adjoint BVP. Our approach rests essentially on the assumption that the space B is a Hilbert space H with an inner product $〈\cdot ,\cdot 〉$.

## A case of AFDE

Consider BVP (2.7) under the assumption that $m=N>n$ and the system ${\ell }^{i}:D\to R$, $i=1,\dots ,N$ can be split into two subsystems ${\ell }_{1}:D\to {R}^{n}$ and ${\ell }_{2}:D\to {R}^{N-n}$ such that the BVP

$\mathcal{L}x=f,\phantom{\rule{2em}{0ex}}{\ell }_{1}x={\gamma }_{1}$
(17)

is uniquely solvable. Without loss of generality we assume that ${\ell }_{1}$ is formed by the first n components of $\ell$ and that the elements of ${\gamma }_{1}$ in (3.1) are the corresponding components of γ. Thus ${\ell }_{2}$ stands for the final $\left(N-n\right)$ components of $\ell$, and the elements of ${\gamma }_{2}\in {R}^{N-n}$ are defined as the final $\left(N-n\right)$ components of γ. For $\alpha \in {R}^{q}$, $\alpha =col\left({\alpha }^{1},\dots ,{\alpha }^{q}\right)$, we put ${⌊\alpha ⌋}^{j}={\alpha }^{j}$. Thus, in cases where a vector V is expressed by a complicated formula, we will write ${⌊V⌋}^{j}$ instead of ${V}^{j}$ to indicate the j th component of V.

Define the vector functional $\lambda :H\to {R}^{N-n}$, $\lambda =col\left({\lambda }^{1},\dots ,{\lambda }^{N-n}\right)$ by the equality

$\lambda ={\ell }_{2}G-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}G$
(18)

and preserve the symbol ${\lambda }^{j}$ for an element of H that generates the functional ${\lambda }^{j}$: for any $f\in H$

${\lambda }^{j}f=〈{\lambda }^{j},f〉.$
(19)

Let us define the $\left(N-n\right)×\left(N-n\right)$-matrix $W={\left\{{w}_{jk}\right\}}_{j,k=1,\dots ,N-n}$ by the equalities

${w}_{jk}=〈{\lambda }^{j},{\lambda }^{k}〉,\phantom{\rule{1em}{0ex}}j,k=1,\dots ,N-n.$
(20)
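Note that W is just the Gram matrix of the system ${\lambda }^{1},\dots ,{\lambda }^{N-n}$ in H; by a standard fact about Gram matrices, it is nonsingular exactly when these generating elements are linearly independent:

```latex
W=
\begin{pmatrix}
\langle\lambda^{1},\lambda^{1}\rangle & \cdots & \langle\lambda^{1},\lambda^{N-n}\rangle\\
\vdots & \ddots & \vdots\\
\langle\lambda^{N-n},\lambda^{1}\rangle & \cdots & \langle\lambda^{N-n},\lambda^{N-n}\rangle
\end{pmatrix},
\qquad
\det W\neq 0
\iff
\lambda^{1},\dots,\lambda^{N-n}\ \text{are linearly independent in } H.
```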

### Theorem 1

Let W be nonsingular. Then BVP (2.7) is solvable for any $f\in H$ of the form

$f={f}_{0}+\phi ,$
(21)

where

${f}_{0}=\sum _{k=1}^{N-n}{\lambda }^{k}{⌊{W}^{-1}{\gamma }_{2}-{W}^{-1}\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}⌋}^{k},$
(22)

and $\phi \in H$ is an arbitrary element that is orthogonal to each ${\lambda }^{k}$, $k=1,\dots ,N-n$.

### Proof

The general solution of the equation $\mathcal{L}x=f$ has the representation

$x=X\alpha +Gf$
(23)

with an arbitrary $\alpha \in {R}^{n}$. Apply ${\ell }_{1}$ to both parts of (3.2):

${\ell }_{1}x={\ell }_{1}X\alpha +{\ell }_{1}Gf.$
(24)

By the unique solvability of BVP (3.1), the condition $det{\ell }_{1}X\ne 0$ holds; therefore, the equation

${\ell }_{1}x\equiv {\ell }_{1}X\alpha +{\ell }_{1}Gf={\gamma }_{1}$
(25)

is uniquely solvable with respect to α:

$\alpha ={\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}-{\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Gf.$
(26)

Hence, for any $f\in H$,

$x=X{\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}-X{\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Gf+Gf$
(27)

is a solution to BVP (3.1). Now we shall search for $f\in H$ such that the corresponding x of the form (3.4) satisfies the equality ${\ell }_{2}x={\gamma }_{2}$. For this purpose, apply ${\ell }_{2}$ to both parts of (3.4):

${\ell }_{2}x=\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Gf+{\ell }_{2}Gf={\gamma }_{2}.$
(28)

Rewrite this as the equation with respect to $f\in H$:

$\left({\ell }_{2}G-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}G\right)f={\gamma }_{2}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}.$
(29)

The left-hand side of (3.5) defines a linear bounded vector functional λ over the space H:

$\lambda f=\left({\ell }_{2}G-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}G\right)f\phantom{\rule{1em}{0ex}}\mathrm{\forall }f\in H,$
(30)

$\lambda :H\to {R}^{N-n}$, $\lambda =col\left({\lambda }^{1},\dots ,{\lambda }^{N-n}\right)$ with components ${\lambda }^{j}:H\to R$ that are linear bounded functionals. Therefore,

${\lambda }^{j}f=〈{\lambda }^{j},f〉,\phantom{\rule{1em}{0ex}}j=1,\dots ,N-n.$
(31)

Thus, for any $f\in H$, the representation

$f=\sum _{k=1}^{N-n}{\lambda }^{k}{c}_{k}+\phi$
(32)

holds, where ${c}_{k}$, $k=1,\dots ,N-n$, are constants and φ is orthogonal to each ${\lambda }^{j}$: $〈{\lambda }^{j},\phi 〉=0$ for $j=1,\dots ,N-n$. Substituting (3.7) into (3.5) yields

$col\left\{\sum _{k=1}^{N-n}〈{\lambda }^{1},{\lambda }^{k}〉{c}_{k},\dots ,\sum _{k=1}^{N-n}〈{\lambda }^{N-n},{\lambda }^{k}〉{c}_{k}\right\}={\gamma }_{2}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}.$
(33)

Put $c=col\left({c}_{1},\dots ,{c}_{N-n}\right)$. Then (3.8) takes the form

$Wc={\gamma }_{2}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1},$
(34)

and hence

$c={W}^{-1}{\gamma }_{2}-{W}^{-1}\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}.$
(35)

To complete the proof, it remains now to substitute c into (3.7). □
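To make the construction concrete, consider the following minimal numerical sketch (our own illustrative toy example, not taken from the text): the simplest realization $D=A{C}_{2}^{1}\left[0,1\right]$, $H={L}_{2}^{1}\left[0,1\right]$, $\mathcal{L}x=\stackrel{˙}{x}$, with N = 2 boundary conditions ${\ell }_{1}x=x\left(0\right)$, ${\ell }_{2}x=x\left(1\right)$. Here X(t) ≡ 1, (Gf)(t) = ∫₀ᵗ f, so λf = ∫₀¹ f (generated by λ(s) ≡ 1), W = ⟨λ, λ⟩ = 1, and Theorem 1 recovers the classical condition ∫₀¹ f = γ₂ − γ₁:

```python
import numpy as np

# Illustrative sketch (assumptions: L x = x', l1 x = x(0), l2 x = x(1) on [0, 1]).
m = 2001
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
w = np.full(m, h)
w[0] = w[-1] = h / 2                      # trapezoid weights for the L2 inner product

gamma1, gamma2 = 0.3, 1.1                 # prescribed boundary values x(0), x(1)

lam = np.ones(m)                          # the element of H generating lambda
W = np.sum(w * lam * lam)                 # Gram "matrix" (here 1x1): W = 1

# f0 from Theorem 1 (with l2 X = l1 X = 1):
f0 = lam * (gamma2 - gamma1) / W

# add any phi orthogonal to lam, e.g. phi(s) = sin(2 pi s)
phi = np.sin(2 * np.pi * t)
f = f0 + phi

# solve L x = f with x(0) = gamma1 (cumulative trapezoid rule) and check x(1)
x = gamma1 + np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) * h / 2)])
print(abs(x[-1] - gamma2))                # small discretization error
```

Any φ with zero mean may be added to f₀ without destroying solvability, in agreement with the theorem.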

## A case of systems with aftereffect

In this section, we consider a system of functional differential equations with aftereffect that, formally speaking, is a concrete realization of the AFDE, and, on the other hand, it covers many kinds of dynamic models with aftereffect (integro-differential, delayed differential, differential difference) [, , ].

Although the case considered in Sections 2 and 3 is more general, we derive the solvability conditions here in detail, since the corresponding transformations rely on the properties of the operators and spaces specific to the case under consideration.

Let us introduce the functional spaces in which the operators and equations are considered. Fix a segment $\left[0,T\right]\subset R$. By ${L}_{2}^{n}={L}_{2}^{n}\left[0,T\right]$ we denote the Hilbert space of square summable functions $v:\left[0,T\right]\to {R}^{n}$ endowed with the inner product $\left(u,v\right)={\int }_{0}^{T}{u}^{\prime }\left(t\right)v\left(t\right)\phantom{\rule{0.2em}{0ex}}dt$ (${}^{\prime }$ is the symbol of transposition). The space $A{C}_{2}^{n}=A{C}_{2}^{n}\left[0,T\right]$ is the space of absolutely continuous functions $x:\left[0,T\right]\to {R}^{n}$ such that $\stackrel{˙}{x}\in {L}_{2}^{n}$, with the norm ${\parallel x\parallel }_{A{C}_{2}^{n}}=|x\left(0\right)|+\sqrt{\left(\stackrel{˙}{x},\stackrel{˙}{x}\right)}$, where $|\cdot |$ stands for the norm of ${R}^{n}$. Thus we have here $D=A{C}_{2}^{n}$, $H={L}_{2}^{n}$, $A{C}_{2}^{n}\cong {L}_{2}^{n}×{R}^{n}$, and $x\left(t\right)={\int }_{0}^{t}z\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+x\left(0\right)$, $\left(\mathrm{\Lambda }z\right)\left(t\right)={\int }_{0}^{t}z\left(s\right)\phantom{\rule{0.2em}{0ex}}ds$, $Y=I$, $\delta x=\stackrel{˙}{x}$, $rx=x\left(0\right)$ (see (2.2)-(2.4)).
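The decomposition above can be checked numerically: for a sample x, the pair $\left\{\delta x,rx\right\}=\left\{\stackrel{˙}{x},x\left(0\right)\right\}$ reconstructs x via $x=\mathrm{\Lambda }\delta x+Yrx$. A minimal sketch (our own illustration, with n = 1 and an assumed sample function):

```python
import numpy as np

# Sketch of the isomorphism AC_2^1 ~ L_2^1 x R^1 described above:
# delta x = x', r x = x(0), (Lambda z)(t) = int_0^t z(s) ds, Y = I.
T = 2.0
m = 4001
t = np.linspace(0.0, T, m)
h = t[1] - t[0]

x = np.cos(t)                       # a sample element of AC_2^1[0, T]
z = -np.sin(t)                      # delta x = x' (known analytically here)
beta = x[0]                         # r x = x(0)

# (Lambda z)(t) by the cumulative trapezoid rule
Lambda_z = np.concatenate([[0.0], np.cumsum((z[1:] + z[:-1]) * h / 2)])
x_rec = Lambda_z + beta             # reconstruction x = Lambda(delta x) + Y(r x)

err = np.max(np.abs(x_rec - x))
print(err)                          # O(h^2) discretization error
```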

Consider the functional differential equation

$\mathcal{L}x\equiv \stackrel{˙}{x}-\mathcal{K}\stackrel{˙}{x}-A\left(\cdot \right)x\left(0\right)=f,$
(36)

where the linear bounded operator $\mathcal{K}:{L}_{2}^{n}\to {L}_{2}^{n}$ is defined by

$\left(\mathcal{K}z\right)\left(t\right)={\int }_{0}^{t}K\left(t,s\right)z\left(s\right)\phantom{\rule{0.2em}{0ex}}ds,\phantom{\rule{1em}{0ex}}t\in \left[0,T\right],$
(37)

the elements ${k}_{ij}\left(t,s\right)$ of the kernel $K\left(t,s\right)$ are measurable on the set $0\le s\le t\le T$ and satisfy $|{k}_{ij}\left(t,s\right)|\le u\left(t\right)v\left(s\right)$, $i,j=1,\dots ,n$, with $u,v\in {L}_{2}^{1}\left[0,T\right]$, and the $\left(n×n\right)$-matrix A has elements that are square summable on $\left[0,T\right]$. Therefore, we have here $Q=I-\mathcal{K}$, $Arx=A\left(\cdot \right)x\left(0\right)$ (see (2.5)).

Recall that, under some natural assumptions, the following equations can be rewritten in the form (4.1):

• the differential equation with concentrated delay

$\stackrel{˙}{x}\left(t\right)-P\left(t\right)x\left[h\left(t\right)\right]=f\left(t\right)$
(38)

(here, for any measurable function $h:\left[0,T\right]\to {R}^{1}$ such that $h\left(t\right)\le t$, $t\in \left[0,T\right]$, $x\left[h\left(t\right)\right]$ stands for $x\left(h\left(t\right)\right)$ if $h\left(t\right)\ge 0$ and for a given function $g\left(t\right)$ if $h\left(t\right)<0$);

• the differential equation with distributed delay

$\stackrel{˙}{x}\left(t\right)-{\int }_{0}^{t}\phantom{\rule{0.2em}{0ex}}{d}_{s}H\left(t,s\right)x\left(s\right)=f\left(t\right)$
(39)

(with the Stieltjes integral);

• the integro-differential equation

$\stackrel{˙}{x}\left(t\right)-{\int }_{0}^{t}F\left(t,s\right)x\left(s\right)\phantom{\rule{0.2em}{0ex}}ds=f\left(t\right).$
(40)

In what follows we will use some results from [, , , ] concerning (4.1). The homogeneous equation (4.1) ($f\left(t\right)=0$, $t\in \left[0,T\right]$) has the fundamental $\left(n×n\right)$-matrix $X\left(t\right)$:

$X\left(t\right)={E}_{n}+V\left(t\right),$
(41)

where ${E}_{n}$ is the identity $\left(n×n\right)$-matrix, and each column ${v}_{i}\left(t\right)$ of the $\left(n×n\right)$-matrix $V\left(t\right)$ is the unique solution to the Cauchy problem

$\stackrel{˙}{v}\left(t\right)={\int }_{0}^{t}K\left(t,s\right)\stackrel{˙}{v}\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{a}_{i}\left(t\right),\phantom{\rule{2em}{0ex}}v\left(0\right)=0,\phantom{\rule{1em}{0ex}}t\in \left[0,T\right],$
(42)

where ${a}_{i}\left(t\right)$ is the i th column of A.

The solution of (4.1) with the initial condition $x\left(0\right)=0$ has the representation

$x\left(t\right)=\left(Cf\right)\left(t\right)={\int }_{0}^{t}C\left(t,s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds,$
(43)

where $C\left(t,s\right)$ is the Cauchy matrix of the operator $\mathcal{L}$. This matrix can be defined (and constructed) as the solution to

$\frac{\partial }{\partial t}C\left(t,s\right)={\int }_{s}^{t}K\left(t,\tau \right)\frac{\partial }{\partial \tau }C\left(\tau ,s\right)\phantom{\rule{0.2em}{0ex}}d\tau +K\left(t,s\right),\phantom{\rule{1em}{0ex}}0\le s\le t\le T,$
(44)

under the condition $C\left(s,s\right)={E}_{n}$.

The matrix $C\left(t,s\right)$ is expressed in terms of the resolvent kernel $R\left(t,s\right)$ of the kernel $K\left(t,s\right)$. Namely,

$C\left(t,s\right)={E}_{n}+{\int }_{s}^{t}R\left(\tau ,s\right)\phantom{\rule{0.2em}{0ex}}d\tau .$
(45)

Thus $\frac{\partial }{\partial t}C\left(t,s\right)=R\left(t,s\right)$, and the above equation for $\frac{\partial }{\partial t}C\left(t,s\right)$ is the well-known relationship between the kernel $K\left(t,s\right)$ and its resolvent kernel $R\left(t,s\right)$.
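As a quick numerical illustration (our own toy example with an assumed scalar kernel, not taken from the text), let n = 1 and K(t, s) ≡ 1 on [0, 1]. Then the resolvent kernel is R(t, s) = e^{t−s}, so C(t, s) = 1 + ∫ₛᵗ R(τ, s) dτ = e^{t−s}, and x = Cf should satisfy ẋ(t) − ∫₀ᵗ ẋ(s) ds = f(t), i.e. ẋ − x = f with x(0) = 0:

```python
import numpy as np

# Sketch for the assumed scalar kernel K(t, s) = 1:
# resolvent R(t, s) = exp(t - s), Cauchy matrix C(t, s) = exp(t - s),
# so x(t) = int_0^t exp(t - s) f(s) ds should solve x' - x = f, x(0) = 0.
T = 1.0
m = 2001
t = np.linspace(0.0, T, m)
h = t[1] - t[0]

f = np.cos(3 * t)                                  # any square-summable f

# x(t) = exp(t) * int_0^t exp(-s) f(s) ds via cumulative trapezoid rule
g = np.exp(-t) * f
x = np.exp(t) * np.concatenate([[0.0], np.cumsum((g[1:] + g[:-1]) * h / 2)])

# check the equivalent ODE form x' - x = f at interior points
xdot = np.gradient(x, h)
residual = np.max(np.abs((xdot - x - f)[2:-2]))
print(residual)                                    # small discretization error
```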

The general solution of (4.1) has the form

$x\left(t\right)=X\left(t\right)\alpha +{\int }_{0}^{t}C\left(t,s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds,$
(46)

with an arbitrary $\alpha \in {R}^{n}$.

The general linear BVP is the system (4.1) supplemented by linear boundary conditions

$\ell x=\gamma ,\phantom{\rule{1em}{0ex}}\gamma \in {R}^{N},$
(47)

where $\ell :A{C}_{2}^{n}\to {R}^{N}$ is a linear bounded vector functional. Let us recall the representation of $\ell$:

$\ell x={\int }_{0}^{T}\mathrm{\Phi }\left(s\right)\stackrel{˙}{x}\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+\mathrm{\Psi }x\left(0\right).$
(48)

Here Ψ is a constant $\left(N×n\right)$-matrix, and Φ is an $\left(N×n\right)$-matrix with elements that are square summable on $\left[0,T\right]$. We assume that the components ${\ell }^{i}:A{C}_{2}^{n}\to R$, $i=1,\dots ,N$, of $\ell$ are linearly independent.

BVP (4.1), (4.4) is well-posed if $N=n$. In such a situation, the BVP is uniquely solvable for any $f\in {L}_{2}^{n}\left[0,T\right]$ and $\gamma \in {R}^{n}$ if and only if the matrix

$\ell X=\left(\ell {X}^{1},\dots ,\ell {X}^{n}\right),$
(49)

where ${X}^{j}$ is the j th column of X, is nonsingular, i.e. $det\ell X\ne 0$. It should be noted that this condition cannot be verified directly because, as a rule, X cannot be evaluated explicitly. Moreover, even if X were known, the elements of ℓX, generally speaking, could not be evaluated explicitly. By the theorem on inverse operators, the matrix ℓX is invertible if one can find an invertible matrix Γ such that $\parallel \ell X-\mathrm{\Gamma }\parallel <1/\parallel {\mathrm{\Gamma }}^{-1}\parallel$. As shown in [], such a matrix Γ for an invertible matrix ℓX can always be found among the matrices $\mathrm{\Gamma }=\overline{\ell }\overline{X}$, where $\overline{\ell }:A{C}_{2}^{n}\to {R}^{n}$ is a vector functional near $\ell$, and $\overline{X}$ is an approximation of X. That is why the basis of the so-called constructive study of linear BVPs includes a special technique for the approximate construction of solutions to FDEs with guaranteed explicit error bounds, as well as the reliable computing experiment (RCE) [, , ], which opens a way to the computer-assisted study of BVPs.
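The invertibility test itself is easy to automate once Γ and an error bound for ℓX − Γ are available. A sketch with made-up 2×2 data (both matrices are illustrative assumptions; in practice only Γ and a bound on the defect are computable):

```python
import numpy as np

# Sketch of the certification ||lX - Gamma|| < 1 / ||Gamma^{-1}||:
# if Gamma is invertible and the bound holds, lX is invertible
# (Neumann-series argument). The matrices below are illustrative data only.
lX = np.array([[2.0, 0.3],
               [0.1, 1.5]])        # stand-in for the true, unknown lX
Gamma = np.array([[2.0, 0.25],
                  [0.12, 1.5]])    # an approximation built from l-bar, X-bar

bound = 1.0 / np.linalg.norm(np.linalg.inv(Gamma), 2)
defect = np.linalg.norm(lX - Gamma, 2)

invertible_certified = defect < bound
print(invertible_certified)
```

In the RCE setting the defect would be replaced by a guaranteed upper bound rather than the exact norm.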

We assume in the sequel that $N>n$ and the system ${\ell }^{i}:A{C}_{2}^{n}\to R$, $i=1,\dots ,N$ can be split into two subsystems ${\ell }_{1}:A{C}_{2}^{n}\to {R}^{n}$ and ${\ell }_{2}:A{C}_{2}^{n}\to {R}^{N-n}$ such that the BVP

$\mathcal{L}x=f,\phantom{\rule{2em}{0ex}}{\ell }_{1}x={\gamma }_{1}$
(50)

is uniquely solvable. Without loss of generality we assume that ${\ell }_{1}$ is formed by the first n components of $\ell$ and that the elements of ${\gamma }_{1}$ in (4.6) are the corresponding components of γ. Thus ${\ell }_{2}$ stands for the final $\left(N-n\right)$ components of $\ell$, and the elements of ${\gamma }_{2}\in {R}^{N-n}$ are defined as the final $\left(N-n\right)$ components of γ. Let us write ${\ell }_{1}$ in the form

${\ell }_{1}x={\int }_{0}^{T}{\mathrm{\Phi }}_{1}\left(s\right)\stackrel{˙}{x}\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\mathrm{\Psi }}_{1}x\left(0\right),$
(51)

where ${\mathrm{\Phi }}_{1}\left(s\right)$ and ${\mathrm{\Psi }}_{1}$ are the corresponding rows of $\mathrm{\Phi }\left(s\right)$ and Ψ, respectively. Similarly,

${\ell }_{2}x={\int }_{0}^{T}{\mathrm{\Phi }}_{2}\left(s\right)\stackrel{˙}{x}\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\mathrm{\Psi }}_{2}x\left(0\right).$
(52)

Put

${\mathrm{\Theta }}_{i}\left(s\right)={\mathrm{\Phi }}_{i}\left(s\right)+{\int }_{s}^{T}{\mathrm{\Phi }}_{i}\left(\tau \right){C}_{\tau }^{\prime }\left(\tau ,s\right)\phantom{\rule{0.2em}{0ex}}d\tau ,\phantom{\rule{1em}{0ex}}i=1,2,$
(53)

and

$F\left(s\right)={\mathrm{\Theta }}_{2}\left(s\right)-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\mathrm{\Theta }}_{1}\left(s\right).$
(54)

### Theorem 2

Let the matrix $W={\int }_{0}^{T}F\left(s\right){F}^{\prime }\left(s\right)\phantom{\rule{0.2em}{0ex}}ds$, where F is defined by (4.10), be nonsingular. Then BVP (4.1), (4.4) is solvable for all $f\in {L}_{2}^{n}\left[0,T\right]$ of the form

$f\left(t\right)={f}_{0}\left(t\right)+\phi \left(t\right),$
(55)

where

${f}_{0}\left(t\right)={F}^{\prime }\left(t\right)\left[{W}^{-1}{\gamma }_{2}-{W}^{-1}\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}\right],$
(56)

and $\phi \left(\cdot \right)\in {L}_{2}^{n}$ is an arbitrary function that is orthogonal to each column of ${F}^{\prime }\left(\cdot \right)$:

${\int }_{0}^{T}F\left(s\right)\phi \left(s\right)\phantom{\rule{0.2em}{0ex}}ds=0.$
(57)

### Proof

Let us apply ${\ell }_{1}$ to both parts of (4.3):

${\ell }_{1}x={\ell }_{1}X\alpha +{\ell }_{1}Cf.$
(58)

By virtue of the unique solvability of BVP (4.6), the condition $det{\ell }_{1}X\ne 0$ holds; therefore, the equation

${\ell }_{1}x\equiv {\ell }_{1}X\alpha +{\ell }_{1}Cf={\gamma }_{1}$
(59)

is uniquely solvable with respect to α:

$\alpha ={\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}-{\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Cf.$
(60)

Hence, for any $f\in {L}_{2}^{n}\left[0,T\right]$,

$x=X{\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}-X{\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Cf+Cf$
(61)

is a solution to BVP (4.6). Now we shall search for $f\in {L}_{2}^{n}\left[0,T\right]$ such that the corresponding x of the form (4.11) satisfies the equality ${\ell }_{2}x={\gamma }_{2}$. For this purpose, apply ${\ell }_{2}$ to both parts of (4.11):

${\ell }_{2}x=\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Cf+{\ell }_{2}Cf={\gamma }_{2},$
(62)

or

${\ell }_{2}Cf-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Cf={\gamma }_{2}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}.$
(63)

Now we show that the left-hand side of the latter equality can be written, for all $f\in {L}_{2}^{n}\left[0,T\right]$, in the form

${\ell }_{2}Cf-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\ell }_{1}Cf={\int }_{0}^{T}F\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds$
(64)

with a $\left(\left(N-n\right)×n\right)$-matrix F whose columns belong to ${L}_{2}^{N-n}\left[0,T\right]$.

An explicit form of F is simple to derive by elementary transformations taking into account (4.5) and the properties of the Cauchy matrix. To do this, first note that

$\frac{d}{dt}\left\{{\int }_{0}^{t}C\left(t,s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\right\}={\int }_{0}^{t}{C}_{t}^{\prime }\left(t,s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+f\left(t\right).$
(65)

This follows from (4.2) and the equality ${C}_{t}^{\prime }\left(t,s\right)=R\left(t,s\right)$. Next, we have

$\begin{array}{rcl}{\ell }_{1}Cf& =& {\int }_{0}^{T}{\mathrm{\Phi }}_{1}\left(s\right){\int }_{0}^{s}{C}_{s}^{\prime }\left(s,\tau \right)f\left(\tau \right)\phantom{\rule{0.2em}{0ex}}d\tau \phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{T}{\mathrm{\Phi }}_{1}\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\\ =& {\int }_{0}^{T}{\int }_{\tau }^{T}{\mathrm{\Phi }}_{1}\left(s\right){C}_{s}^{\prime }\left(s,\tau \right)\phantom{\rule{0.2em}{0ex}}ds\phantom{\rule{0.2em}{0ex}}f\left(\tau \right)\phantom{\rule{0.2em}{0ex}}d\tau +{\int }_{0}^{T}{\mathrm{\Phi }}_{1}\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\\ =& {\int }_{0}^{T}{\int }_{s}^{T}{\mathrm{\Phi }}_{1}\left(\tau \right){C}_{\tau }^{\prime }\left(\tau ,s\right)\phantom{\rule{0.2em}{0ex}}d\tau \phantom{\rule{0.2em}{0ex}}f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds+{\int }_{0}^{T}{\mathrm{\Phi }}_{1}\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds\\ =& {\int }_{0}^{T}{\mathrm{\Theta }}_{1}\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds.\end{array}$
(66)

Notice that the interchange of the order of integration in the iterated integrals above is justified in []. In a similar way,

${\ell }_{2}Cf={\int }_{0}^{T}{\mathrm{\Theta }}_{2}\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds.$
(67)

Thus

$F\left(s\right)={\mathrm{\Theta }}_{2}\left(s\right)-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\mathrm{\Theta }}_{1}\left(s\right).$
(68)

Now it remains to find $f\in {L}_{2}^{n}\left[0,T\right]$ such that

${\int }_{0}^{T}F\left(s\right)f\left(s\right)\phantom{\rule{0.2em}{0ex}}ds={\gamma }_{2}-\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}.$
(69)

As is well known, any $f\in {L}_{2}^{n}\left[0,T\right]$ can be represented in the form

$f\left(s\right)={F}^{\prime }\left(s\right)\cdot c+\phi \left(s\right)$
(70)

with $c\in {R}^{N-n}$ and $\phi \in {L}_{2}^{n}\left[0,T\right]$ such that ${\int }_{0}^{T}F\left(s\right)\phi \left(s\right)\phantom{\rule{0.2em}{0ex}}ds=0$. By virtue of the condition $detW\ne 0$, after substituting (4.14) into (4.13) we obtain that the vector $c={W}^{-1}{\gamma }_{2}-{W}^{-1}\left({\ell }_{2}X\right){\left({\ell }_{1}X\right)}^{-1}{\gamma }_{1}$ gives the corresponding f (see (4.14)) that solves (4.13). This completes the proof. □
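The recipe of Theorem 2 can be run end to end in a toy setting (our own simplifying assumptions, not from the text): K = 0, A = 0, n = 1, N = 2, T = 1, so that C(t, s) ≡ 1, X(t) ≡ 1, Θᵢ = Φᵢ. Taking ℓ₁x = x(1) (Φ₁ ≡ 1, Ψ₁ = 1) and ℓ₂x = ∫₀¹ x(s) ds (Φ₂(s) = 1 − s, Ψ₂ = 1) gives ℓ₁X = ℓ₂X = 1, hence F(s) = −s, W = 1/3, and f₀(t) = −3t(γ₂ − γ₁):

```python
import numpy as np

# Toy sketch of Theorem 2 under the assumptions K = 0, A = 0, n = 1, N = 2, T = 1:
# l1 x = x(1), l2 x = int_0^1 x(s) ds; then F(s) = -s and W = 1/3.
m = 2001
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
w = np.full(m, h)
w[0] = w[-1] = h / 2                             # trapezoid weights

gamma1, gamma2 = 0.5, 0.9

# (4.10) with Theta_i = Phi_i, l1 X = l2 X = 1:
F = (1.0 - t) - np.ones(m)                       # F(s) = -s
W = np.sum(w * F * F)                            # = int_0^1 s^2 ds = 1/3

# f0 from Theorem 2
f0 = F * (gamma2 - gamma1) / W                   # = -3 t (gamma2 - gamma1)

# check: solve x' = f0 with l1 x = x(1) = gamma1, then test l2 x = gamma2
intf0 = np.concatenate([[0.0], np.cumsum((f0[1:] + f0[:-1]) * h / 2)])
x = gamma1 - intf0[-1] + intf0                   # x(0) chosen so that x(1) = gamma1
l2x = np.sum(w * x)
print(abs(l2x - gamma2))                         # small discretization error
```

Both boundary conditions are met by construction, illustrating that f₀ indeed makes the overdetermined BVP solvable.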

In view of Theorem 2, the solvability of BVP (4.1), (4.4) can be investigated on the basis of the reliable computing experiment [, , ]. A somewhat different approach to the study of BVP (4.1), (4.4) with $N>n$ is proposed in [].

## References

1. Nayfeh AN, Mook DT: Nonlinear Oscillations. Wiley, New York; 1970.

2. Maksimov VP, Rumyantsev AN: Boundary value problems and problems of pulse control in economic dynamics: constructive study. Russ. Math. 1993, 37: 48-62.

3. Boichuk AA: Constructive Methods of Analysis of Boundary Value Problems. Naukova Dumka, Kyiv; 1990.

4. Azbelev NV, Rakhmatullina LF: Theory of linear abstract functional differential equations and applications. Mem. Differ. Equ. Math. Phys. 1996, 8: 1-102.

5. Azbelev NV, Maksimov VP, Rakhmatullina LF: Introduction to the Theory of Functional Differential Equations. Nauka, Moscow; 1991.

6. Maksimov VP: Theory of functional differential equations and some problems in economic dynamics. In Proceedings of the Conference on Differential and Difference Equations and Applications. Edited by: Agarwal R, Perera K. Hindawi Publishing Corporation, New York; 2006:757-765.

7. Azbelev NV, Maksimov VP, Simonov PM: Theory of functional differential equations and applications. Int. J. Pure Appl. Math. 2011, 69: 203-235.

8. Azbelev NV, Maksimov VP, Rakhmatullina LF: Introduction to the Theory of Functional Differential Equations: Methods and Applications. Hindawi Publishing Corporation, New York; 2007.

9. Azbelev NV, Maksimov VP, Rakhmatullina LF: Elements of the Contemporary Theory of Functional Differential Equations. Methods and Applications. Institute of Computer-Assisted Studies, Moscow; 2002.

10. Maksimov VP, Rumyantsev AN: Reliable computing experiment in the study of generalized controllability of linear functional differential systems. In Mathematical Modelling. Problems, Methods, Applications. Edited by: Uvarova L, Latyshev A. Kluwer Academic, New York; 2002:91-98.

11. Maksimov VP: Cauchy’s formula for a functional-differential equation. Differ. Equ. 1977, 13: 405-409.

12. Maksimov VP: Questions of the General Theory of Functional Differential Equations. Perm State University, Perm; 2003.

13. Rumyantsev AN: Reliable Computing Experiment in the Study of Boundary Value Problems. Perm State University, Perm; 1999.

14. Maksimov VP, Chadov AL: The constructive investigation of boundary-value problems with approximate satisfaction of boundary conditions. Russ. Math. 2010, 54: 71-74. 10.3103/S1066369X10100105

## Acknowledgements

The author thanks the referees for their careful reading of the manuscript and useful comments. The author acknowledges the support by the company Prognoz, Perm.

## Author information


### Competing interests

The author declares that he has no competing interests.
