In this section, we consider a system of functional differential equations with aftereffect that, formally speaking, is a concrete realization of the AFDE and, on the other hand, covers many kinds of dynamic models with aftereffect (integro-differential, delay differential, and differential difference equations) [[2], [6], [10]].

Although the case considered in Sections 2 and 3 is more general, we derive the solvability conditions here in detail, since the corresponding transformations are based on the properties of the operators and spaces as applied to the case under consideration.

Let us introduce the functional spaces in which the operators and equations are considered. Fix a segment $[0,T]\subset R$. By ${L}_{2}^{n}={L}_{2}^{n}[0,T]$ we denote the Hilbert space of square-summable functions $v:[0,T]\to {R}^{n}$ endowed with the inner product $(u,v)={\int}_{0}^{T}{u}^{\prime}(t)v(t)\,dt$ (${}^{\prime}$ denotes transposition). The space $A{C}_{2}^{n}=A{C}_{2}^{n}[0,T]$ consists of the absolutely continuous functions $x:[0,T]\to {R}^{n}$ such that $\dot{x}\in {L}_{2}^{n}$, with the norm ${\parallel x\parallel}_{A{C}_{2}^{n}}=|x(0)|+\sqrt{(\dot{x},\dot{x})}$, where $|\cdot |$ stands for the norm of ${R}^{n}$. Thus we have here $D=A{C}_{2}^{n}$, $H={L}_{2}^{n}$, $A{C}_{2}^{n}\cong {L}_{2}^{n}\times {R}^{n}$, and $x(t)={\int}_{0}^{t}z(s)\,ds+x(0)$, $(\mathrm{\Lambda}z)(t)={\int}_{0}^{t}z(s)\,ds$, $Y=I$, $\delta x=\dot{x}$, $rx=x(0)$ (see (2.2)-(2.4)).

Consider the functional differential equation

$\mathcal{L}x\equiv \dot{x}-\mathcal{K}\dot{x}-A(\cdot )x(0)=f,$

(36)

where the linear bounded operator $\mathcal{K}:{L}_{2}^{n}\to {L}_{2}^{n}$ is defined by

$(\mathcal{K}z)(t)={\int}_{0}^{t}K(t,s)z(s)\,ds,\phantom{\rule{1em}{0ex}}t\in [0,T],$

(37)

the elements ${k}_{ij}(t,s)$ of the kernel $K(t,s)$ are measurable on the set $0\le s\le t\le T$ and are such that $|{k}_{ij}(t,s)|\le u(t)v(s)$, $i,j=1,\dots ,n$, with $u,v\in {L}_{2}^{1}[0,T]$, and the $(n\times n)$-matrix *A* has elements that are square-summable on $[0,T]$. Therefore, we have here $Q=I-\mathcal{K}$, $Arx=A(\cdot )x(0)$ (see (2.5)).
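For numerical work with such equations, the operator $\mathcal{K}$ is typically replaced by a quadrature approximation. The following is a minimal sketch, not part of the theory above: the scalar case $n=1$, a uniform grid, and the left-rectangle rule are assumptions made for brevity. It evaluates $(\mathcal{K}z)(t)$ at the grid nodes, respecting the Volterra structure of the integral (the inner sum runs only over $s<t$):

```python
import numpy as np

def apply_K(kernel, z, T, m):
    """Approximate (Kz)(t) = int_0^t K(t,s) z(s) ds on a uniform grid
    of m+1 nodes over [0, T], using the left-rectangle rule."""
    t = np.linspace(0.0, T, m + 1)
    h = T / m
    Kz = np.zeros(m + 1)
    for i in range(m + 1):
        # integrate over s in [0, t_i] only (Volterra structure)
        Kz[i] = h * sum(kernel(t[i], t[j]) * z(t[j]) for j in range(i))
    return t, Kz
```

For the test kernel $K\equiv 1$ and $z\equiv 1$ the rule reproduces $(\mathcal{K}z)(t)=t$ exactly at the grid nodes, since $h\cdot i={t}_{i}$.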

Recall that, under some natural assumptions, the following equations can be rewritten in the form (4.1):

• the differential equation with concentrated delay

$\dot{x}(t)-P(t)x[h(t)]=f(t)$

(38)

(here, for any measurable function $h:[0,T]\to {R}^{1}$ such that $h(t)\le t$, $t\in [0,T]$, $x[h(t)]$ stands for a given function $g(t)$ if $h(t)<0$);

• the differential equation with distributed delay

$\dot{x}(t)-{\int}_{0}^{t}\phantom{\rule{0.2em}{0ex}}{d}_{s}H(t,s)x(s)=f(t)$

(39)

(with the Stieltjes integral);

• the integro-differential equation

$\dot{x}(t)-{\int}_{0}^{t}F(t,s)x(s)\phantom{\rule{0.2em}{0ex}}ds=f(t).$

(40)
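As an illustration of such a rewriting, consider the concentrated-delay equation above. The following sketch (under natural summability assumptions on $P$ and $g$; the characteristic-function notation $\chi$ is ours) expresses $x[h(t)]$ through $\dot{x}$ and $x(0)$ and so puts the equation into the form with operator kernel $K$ and matrix $A$:

```latex
% Let e_1 = {t : h(t) >= 0}.  For t in e_1:
%   x(h(t)) = x(0) + \int_0^t \chi_{\{s \le h(t)\}}\,\dot{x}(s)\,ds,
% while for h(t) < 0 the known term P(t)g(t) moves to the right-hand side:
\dot{x}(t)
  - \int_0^t \underbrace{\chi_{\{s \le h(t)\}}\,P(t)}_{=\,K(t,s)}\,\dot{x}(s)\,ds
  - \underbrace{\chi_{e_1}(t)\,P(t)}_{=\,A(t)}\,x(0)
  = f(t) + \chi_{\{h(t) < 0\}}\,P(t)\,g(t).
```

Note that for $h(t)<0$ the kernel term vanishes automatically, since no $s\in [0,t]$ satisfies $s\le h(t)$.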

In what follows we will use some results from [[5], [8], [11], [12]] concerning (4.1). The homogeneous equation (4.1) ($f(t)=0$, $t\in [0,T]$) has the fundamental $(n\times n)$-matrix $X(t)$:

$X(t)={E}_{n}+V(t),$

(41)

where ${E}_{n}$ is the identity $(n\times n)$-matrix, and each column ${v}_{i}(t)$ of the $(n\times n)$-matrix $V(t)$ is the unique solution to the Cauchy problem

$\dot{v}(t)={\int}_{0}^{t}K(t,s)\dot{v}(s)\phantom{\rule{0.2em}{0ex}}ds+{a}_{i}(t),\phantom{\rule{2em}{0ex}}v(0)=0,\phantom{\rule{1em}{0ex}}t\in [0,T],$

(42)

where ${a}_{i}(t)$ is the *i*th column of *A*.

The solution of (4.1) with the initial condition $x(0)=0$ has the representation

$x(t)=(Cf)(t)={\int}_{0}^{t}C(t,s)f(s)\phantom{\rule{0.2em}{0ex}}ds,$

(43)

where $C(t,s)$ is the Cauchy matrix of the operator ℒ. This matrix can be defined (and constructed) as the solution to

$\frac{\partial}{\partial t}C(t,s)={\int}_{s}^{t}K(t,\tau )\frac{\partial}{\partial \tau}C(\tau ,s)\phantom{\rule{0.2em}{0ex}}d\tau +K(t,s),\phantom{\rule{1em}{0ex}}0\le s\le t\le T,$

(44)

under the condition $C(s,s)={E}_{n}$.

The matrix $C(t,s)$ is expressed in terms of the resolvent kernel $R(t,s)$ of the kernel $K(t,s)$. Namely,

$C(t,s)={E}_{n}+{\int}_{s}^{t}R(\tau ,s)\phantom{\rule{0.2em}{0ex}}d\tau .$

(45)

Thus $\frac{\partial}{\partial t}C(t,s)=R(t,s)$, and the above equation for $\frac{\partial}{\partial t}C(t,s)$ is the well-known relationship between the kernel $K(t,s)$ and its resolvent kernel $R(t,s)$.

The general solution of (4.1) has the form

$x(t)=X(t)\alpha +{\int}_{0}^{t}C(t,s)f(s)\phantom{\rule{0.2em}{0ex}}ds,$

(46)

with an arbitrary $\alpha \in {R}^{n}$.

The general linear BVP is the system (4.1) supplemented by the linear boundary conditions

$\ell x=\gamma ,\phantom{\rule{1em}{0ex}}\gamma \in {R}^{N},$

(47)

where $\ell :A{C}_{2}^{n}\to {R}^{N}$ is a linear bounded vector functional. Let us recall the representation of *ℓ*:

$\ell x={\int}_{0}^{T}\mathrm{\Phi}(s)\dot{x}(s)\phantom{\rule{0.2em}{0ex}}ds+\mathrm{\Psi}x(0).$

(48)

Here Ψ is a constant $(N\times n)$-matrix, and Φ is an $(N\times n)$-matrix with elements that are square-summable on $[0,T]$. We assume that the components ${\ell}^{i}:A{C}_{2}^{n}\to R$, $i=1,\dots ,N$, of *ℓ* are linearly independent.

BVP (4.1), (4.4) is well posed if $N=n$. In such a situation, the BVP is uniquely solvable for any $f\in {L}_{2}^{n}[0,T]$ and $\gamma \in {R}^{n}$ if and only if the matrix

$\ell X=(\ell {X}^{1},\dots ,\ell {X}^{n}),$

(49)

where ${X}^{j}$ is the *j*th column of *X*, is nonsingular, *i.e.*, $det\ell X\ne 0$. It should be noted that this condition cannot be verified immediately because *X* cannot, as a rule, be evaluated explicitly. In addition, even if *X* were known, the elements of *ℓX*, generally speaking, could not be evaluated explicitly. By the theorem on inverse operators, the matrix *ℓX* is invertible if one can find an invertible matrix Γ such that $\parallel \ell X-\mathrm{\Gamma}\parallel <1/\parallel {\mathrm{\Gamma}}^{-1}\parallel $. As has been shown in [[13]], such a matrix Γ for an invertible matrix *ℓX* can always be found among the matrices $\mathrm{\Gamma}=\overline{\ell}\overline{X}$, where $\overline{\ell}:A{C}_{2}^{n}\to {R}^{n}$ is a vector functional near *ℓ* and $\overline{X}$ is an approximation of *X*. That is why the basis of the so-called constructive study of linear BVPs includes a special technique for the approximate construction of solutions to FDEs with guaranteed explicit error bounds, as well as the reliable computing experiment (RCE) [[2], [10], [13]], which opens a way to the computer-assisted study of BVPs.
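The perturbation test just described is easy to apply once a norm of $\ell X-\mathrm{\Gamma}$ is available. A floating-point sketch follows; note that in the RCE the norms are evaluated with guaranteed bounds, whereas plain floating point, as here, only illustrates the logic, and the matrix arguments are hypothetical stand-ins for computed approximations:

```python
import numpy as np

def certifies_invertibility(ellX, Gamma):
    """Sufficient test from the theorem on inverse operators:
    if ||ellX - Gamma|| < 1 / ||Gamma^{-1}|| with Gamma invertible,
    then ellX is invertible (spectral norm used here)."""
    Gamma_inv = np.linalg.inv(Gamma)
    return np.linalg.norm(ellX - Gamma, 2) < 1.0 / np.linalg.norm(Gamma_inv, 2)
```

The test is sufficient but not necessary: failure of the inequality for one particular Γ says nothing about $det\ell X$.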

We assume in the sequel that $N>n$ and that the system ${\ell}^{i}:A{C}_{2}^{n}\to R$, $i=1,\dots ,N$, can be split into two subsystems ${\ell}_{1}:A{C}_{2}^{n}\to {R}^{n}$ and ${\ell}_{2}:A{C}_{2}^{n}\to {R}^{N-n}$ such that the BVP

$\mathcal{L}x=f,\phantom{\rule{2em}{0ex}}{\ell}_{1}x={\gamma}_{1}$

(50)

is uniquely solvable. Without loss of generality, we assume that ${\ell}_{1}$ is formed by the first *n* components of *ℓ* and that the elements of ${\gamma}_{1}$ in (4.6) are the corresponding components of *γ*. Thus ${\ell}_{2}$ will stand for the final $(N-n)$ components of *ℓ*, and the elements of ${\gamma}_{2}\in {R}^{N-n}$ are defined as the final $(N-n)$ components of *γ*. Let us write ${\ell}_{1}$ in the form

${\ell}_{1}x={\int}_{0}^{T}{\mathrm{\Phi}}_{1}(s)\dot{x}(s)\phantom{\rule{0.2em}{0ex}}ds+{\mathrm{\Psi}}_{1}x(0),$

(51)

where ${\mathrm{\Phi}}_{1}(s)$ and ${\mathrm{\Psi}}_{1}$ are the corresponding rows of $\mathrm{\Phi}(s)$ and Ψ, respectively. Similarly,

${\ell}_{2}x={\int}_{0}^{T}{\mathrm{\Phi}}_{2}(s)\dot{x}(s)\phantom{\rule{0.2em}{0ex}}ds+{\mathrm{\Psi}}_{2}x(0).$

(52)

Put

${\mathrm{\Theta}}_{i}(s)={\mathrm{\Phi}}_{i}(s)+{\int}_{s}^{T}{\mathrm{\Phi}}_{i}(\tau ){C}_{\tau}^{\prime}(\tau ,s)\phantom{\rule{0.2em}{0ex}}d\tau ,\phantom{\rule{1em}{0ex}}i=1,2,$

(53)

and

$F(s)={\mathrm{\Theta}}_{2}(s)-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\mathrm{\Theta}}_{1}(s).$

(54)

### Theorem 2

*Let the matrix* $W={\int}_{0}^{T}F(s){F}^{\prime}(s)\,ds$, *where* *F* *is defined by* (4.10), *be nonsingular*. *Then BVP* (4.1), (4.4) *is solvable for all* $f\in {L}_{2}^{n}[0,T]$ *of the form*

$f(t)={f}_{0}(t)+\phi (t),$

(55)

*where*

${f}_{0}(t)={F}^{\prime}(t)[{W}^{-1}{\gamma}_{2}-{W}^{-1}({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\gamma}_{1}],$

(56)

*and* $\phi (\cdot )\in {L}_{2}^{n}$ *is an arbitrary function that is orthogonal to each column of* ${F}^{\prime}(\cdot )$:

${\int}_{0}^{T}F(s)\phi (s)\,ds=0.$

(57)

### Proof

Let us apply ${\ell}_{1}$ to both parts of (4.3):

${\ell}_{1}x={\ell}_{1}X\alpha +{\ell}_{1}Cf.$

(58)

By virtue of the unique solvability of BVP (4.6), the condition $det{\ell}_{1}X\ne 0$ holds; therefore, the equation

${\ell}_{1}x\equiv {\ell}_{1}X\alpha +{\ell}_{1}Cf={\gamma}_{1}$

(59)

is uniquely solvable with respect to *α*:

$\alpha ={({\ell}_{1}X)}^{-1}{\gamma}_{1}-{({\ell}_{1}X)}^{-1}{\ell}_{1}Cf.$

(60)

Hence, for any $f\in {L}_{2}^{n}[0,T]$,

$x=X{({\ell}_{1}X)}^{-1}{\gamma}_{1}-X{({\ell}_{1}X)}^{-1}{\ell}_{1}Cf+Cf$

(61)

is a solution to BVP (4.6). Now we shall search for $f\in {L}_{2}^{n}[0,T]$ such that the corresponding *x* of the form (4.11) satisfies the equality ${\ell}_{2}x={\gamma}_{2}$. For this purpose, apply ${\ell}_{2}$ to both parts of (4.11):

${\ell}_{2}x=({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\gamma}_{1}-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\ell}_{1}Cf+{\ell}_{2}Cf={\gamma}_{2},$

(62)

or

${\ell}_{2}Cf-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\ell}_{1}Cf={\gamma}_{2}-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\gamma}_{1}.$

(63)

Now we show that the left-hand side of the latter equality can be written, for all $f\in {L}_{2}^{n}[0,T]$, in the form

${\ell}_{2}Cf-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\ell}_{1}Cf={\int}_{0}^{T}F(s)f(s)\phantom{\rule{0.2em}{0ex}}ds$

(64)

with an $((N-n)\times n)$-matrix *F* whose columns belong to ${L}_{2}^{N-n}[0,T]$.

An explicit form of *F* is easy to derive by elementary transformations, taking into account (4.5) and the properties of the Cauchy matrix. To do this, first note that

$\frac{d}{dt}\{{\int}_{0}^{t}C(t,s)f(s)\phantom{\rule{0.2em}{0ex}}ds\}={\int}_{0}^{t}{C}_{t}^{\prime}(t,s)f(s)\phantom{\rule{0.2em}{0ex}}ds+f(t).$

(65)

This follows from (4.2) and the equality ${C}_{t}^{\prime}(t,s)=R(t,s)$. Next, we have

$\begin{array}{rcl}{\ell}_{1}Cf& =& {\int}_{0}^{T}{\mathrm{\Phi}}_{1}(s){\int}_{0}^{s}{C}_{s}^{\prime}(s,\tau )f(\tau )\phantom{\rule{0.2em}{0ex}}d\tau \phantom{\rule{0.2em}{0ex}}ds+{\int}_{0}^{T}{\mathrm{\Phi}}_{1}(s)f(s)\phantom{\rule{0.2em}{0ex}}ds\\ =& {\int}_{0}^{T}{\int}_{\tau}^{T}{\mathrm{\Phi}}_{1}(s){C}_{s}^{\prime}(s,\tau )\phantom{\rule{0.2em}{0ex}}ds\phantom{\rule{0.2em}{0ex}}f(\tau )\phantom{\rule{0.2em}{0ex}}d\tau +{\int}_{0}^{T}{\mathrm{\Phi}}_{1}(s)f(s)\phantom{\rule{0.2em}{0ex}}ds\\ =& {\int}_{0}^{T}{\int}_{s}^{T}{\mathrm{\Phi}}_{1}(\tau ){C}_{\tau}^{\prime}(\tau ,s)\phantom{\rule{0.2em}{0ex}}d\tau \phantom{\rule{0.2em}{0ex}}f(s)\phantom{\rule{0.2em}{0ex}}ds+{\int}_{0}^{T}{\mathrm{\Phi}}_{1}(s)f(s)\phantom{\rule{0.2em}{0ex}}ds\\ =& {\int}_{0}^{T}{\mathrm{\Theta}}_{1}(s)f(s)\phantom{\rule{0.2em}{0ex}}ds.\end{array}$

(66)

Notice that the interchange of the order of integration in the iterated integrals above is justified in [[11]]. In a similar way,

${\ell}_{2}Cf={\int}_{0}^{T}{\mathrm{\Theta}}_{2}(s)f(s)\phantom{\rule{0.2em}{0ex}}ds.$

(67)

Thus

$F(s)={\mathrm{\Theta}}_{2}(s)-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\mathrm{\Theta}}_{1}(s).$

(68)

Now it remains to find $f\in {L}_{2}^{n}[0,T]$ such that

${\int}_{0}^{T}F(s)f(s)\phantom{\rule{0.2em}{0ex}}ds={\gamma}_{2}-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\gamma}_{1}.$

(69)

As is well known, any $f\in {L}_{2}^{n}[0,T]$ can be represented in the form

$f(s)={F}^{\prime}(s)\cdot c+\phi (s)$

(70)

with $c\in {R}^{N-n}$ and $\phi \in {L}_{2}^{n}[0,T]$ such that ${\int}_{0}^{T}F(s)\phi (s)\,ds=0$. By virtue of the condition $detW\ne 0$, after substituting (4.14) into (4.13) we obtain that the vector $c={W}^{-1}{\gamma}_{2}-{W}^{-1}({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\gamma}_{1}$ gives the corresponding *f* (see (4.14)) that solves (4.13). This completes the proof. □
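On a grid, the construction in the proof reduces to finite-dimensional linear algebra. The following sketch takes sampled values of $F$ together with the matrices ${\ell}_{1}X$, ${\ell}_{2}X$; these inputs and the simple rectangle quadrature are illustrative assumptions, not the guaranteed-bound computations of the constructive theory:

```python
import numpy as np

def solvable_rhs(F_vals, h, ell1X, ell2X, gamma1, gamma2):
    """Build W = int_0^T F(s) F'(s) ds (rectangle rule over samples
    F_vals[k] = F(s_k), uniform step h), then
    c = W^{-1} (gamma2 - (l2 X)(l1 X)^{-1} gamma1)
    and the particular right-hand side f0(s_k) = F'(s_k) c."""
    W = h * sum(Fk @ Fk.T for Fk in F_vals)
    rhs = gamma2 - ell2X @ np.linalg.solve(ell1X, gamma1)
    c = np.linalg.solve(W, rhs)
    f0 = np.array([Fk.T @ c for Fk in F_vals])
    return W, c, f0
```

A quick consistency check: $h\sum_{k}F({s}_{k}){f}_{0}({s}_{k})$ should reproduce ${\gamma}_{2}-({\ell}_{2}X){({\ell}_{1}X)}^{-1}{\gamma}_{1}$, which is exactly (4.13) with $\phi =0$.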

In view of Theorem 2, the solvability of BVP (4.1), (4.4) can be investigated on the basis of the reliable computing experiment [[2], [10], [13]]. A somewhat different approach to the study of BVP (4.1), (4.4) with $N>n$ is proposed in [[14]].