
# Existence of solutions of abstract non-autonomous second order integro-differential equations

## Abstract

In this paper the existence of solutions of a non-autonomous abstract integro-differential equation of second order is considered. Assuming the existence of an evolution operator corresponding to the associate abstract non-autonomous Cauchy problem of second order, we establish the existence of a resolvent operator for the homogeneous integro-differential equation and the existence of mild solutions to the inhomogeneous integro-differential equation. Furthermore, we study the existence of classical solutions of the integro-differential equation. Finally, we apply our results to the study of the existence of solutions of a non-autonomous wave equation.

## Introduction

Abstract integro-differential equations have been used to model various physical phenomena. For this reason, this type of equation has attracted the attention of many authors in recent years. We refer the reader to the recent papers of Diagana, Vijayakumar et al., and the references therein.

This paper is devoted to the study of the existence of mild and classical solutions for initial value problems described as an abstract non-autonomous second order integro-differential equation in Banach spaces.

Throughout this paper, X denotes a Banach space endowed with a norm $\|\cdot\|$. Specifically, in this work we will be concerned with the existence of solutions to initial value problems that can be modeled as

\begin{aligned}& x^{\prime\prime}(t) = A(t) x(t) + \int_{0}^{t} P(t,s) x(s) \,ds + f(t), \quad 0 \leq t \leq a, \end{aligned}
(1.1)
\begin{aligned}& x(0) = y,\qquad x^{\prime}(0) = z \end{aligned}
(1.2)

for fixed $y, z \in X$, and where $A(t) : D(A(t)) \subseteq X \to X$ for $t \in I = [0,a]$ denotes a closed linear operator, $P(t, s) : D(P) \subseteq X \to X$ is a linear operator, and $f : [0, a] \to X$ is an integrable function. We always assume that $D(P)$ is independent of $(t,s)$.

On the other hand, in recent times there has been an increasing interest in studying the abstract non-autonomous second order initial value problem

\begin{aligned}& x^{\prime\prime}(t) = A(t) x(t) + f(t), \quad 0 \leq s, t \leq a, \end{aligned}
(1.3)
\begin{aligned}& x(s) = y, \qquad x^{\prime}(s) = z. \end{aligned}
(1.4)

The reader is referred to  and the references mentioned in these works. In most of these, the existence of solutions to problem (1.3)-(1.4) is related to the existence of an evolution operator S for the homogeneous equation

$$x^{\prime\prime}(t)= A(t) x(t), \quad 0 \leq t \leq a.$$
(1.5)

Throughout this work we assume that the domain of $A(t)$ is a subspace D dense in X and independent of t, and that for each $x \in D$ the function $t \mapsto A(t) x$ is continuous. Henceforth in this text, we will assume that A generates an evolution operator S in the sense introduced in Kozak, Definition 2.1 (see also Henríquez, Definition 1.1).

To abbreviate the text, we will denote

$$C(t,s) = - \frac{\partial S(t,s)}{\partial s}.$$

In addition, we denote by N a positive constant such that both $\|S(t,s)\| \leq N$ and $\|C(t,s)\| \leq N$ for all $0 \leq s, t \leq a$.

The existence of solutions of problem (1.3)-(1.4) has been studied by several authors [11, 14, 15]. At this point, we simply note that the function $x : [s, a] \to X$ given by

$$x(t) = C(t,s) y + S(t,s) z + \int_{s}^{t}S(t, \xi) f(\xi) \,d\xi$$
(1.6)

is a mild solution of problem (1.3)-(1.4).
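Formula (1.6) can be checked concretely in the scalar autonomous special case $A(t) \equiv -\omega^{2}$, where the evolution operator is explicit: $S(t,s) = \sin (\omega (t-s))/\omega$ and $C(t,s) = \cos (\omega (t-s))$. The short Python sketch below (the choices $\omega = 2$, $f \equiv 1$, $y = 1$, $z = 0$ are illustrative assumptions, not data from the text) evaluates (1.6) by quadrature and compares it with the closed-form solution of $x^{\prime\prime} = -4x + 1$, $x(0) = 1$, $x^{\prime}(0) = 0$, namely $x(t) = 1/4 + (3/4)\cos 2t$.

```python
import math

# Scalar special case A(t) = -omega**2, for which the evolution operator is
# explicit: S(t,s) = sin(omega*(t-s))/omega and C(t,s) = -dS/ds = cos(omega*(t-s)).
omega = 2.0

def S(t, s):
    return math.sin(omega * (t - s)) / omega

def C(t, s):
    return math.cos(omega * (t - s))

def mild_solution(t, s, y, z, f, n=2000):
    """Formula (1.6): C(t,s) y + S(t,s) z + int_s^t S(t,xi) f(xi) dxi (trapezoid rule)."""
    h = (t - s) / n
    integral = 0.5 * (S(t, s) * f(s) + S(t, t) * f(t))
    for k in range(1, n):
        xi = s + k * h
        integral += S(t, xi) * f(xi)
    return C(t, s) * y + S(t, s) * z + h * integral

# For f = 1, y = 1, z = 0 the exact solution of x'' = -4x + 1 is
# x(t) = 1/4 + (3/4) cos(2t); the quadrature reproduces it.
t = 1.3
approx = mild_solution(t, 0.0, 1.0, 0.0, lambda u: 1.0)
exact = 0.25 + 0.75 * math.cos(2 * t)
assert abs(approx - exact) < 1e-6
```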

This paper has five sections. In the next section we study the existence of solutions of (1.1)-(1.2) when $P(t,s)$ is a bounded linear map from X into X. In Section 3, we discuss the existence of a resolvent operator for the homogeneous integro-differential equation. In Section 4, we are concerned with the existence of mild and classical solutions of the inhomogeneous non-autonomous integro-differential equation (1.1)-(1.2). Finally, in Section 5 we apply our results to the study of the existence of solutions to non-autonomous wave equations.

The general terminology and notations used in this text are the following. When $(Y,\|\cdot\|_{Y})$ and $(Z,\|\cdot\|_{Z})$ are Banach spaces, we denote by $\mathcal{L}(Y,Z)$ the Banach space of the bounded linear operators from Y into Z endowed with the norm of operators and we abbreviate this notation to $\mathcal{L}(Y)$ whenever $Z=Y$. For a compact set K, we denote by $C(K,X)$ the space of continuous functions from K into X provided with the norm of uniform convergence. For a closed linear operator $A : D(A) \subseteq X \to X$, we denote by $\sigma (A)$ (resp. $\rho(A)$) its spectrum (resp. its resolvent set). Moreover, we represent by $[D(A)]$ the domain of A endowed with the graph norm $\| x \|_{A} = \| x \| + \| Ax \|$, $x \in D(A)$. Finally, $\operatorname{Im}(T)$ denotes the range space of a linear operator T.

## Existence of mild solutions

In this section we study the existence of mild solutions of problem (1.1)-(1.2). Throughout this section we assume that $P: \Delta= \{(t,s): 0 \leq s \leq t \leq a \}\to\mathcal{L}(X)$ is a strongly continuous map.

Initially we study the problem

\begin{aligned}& x^{\prime\prime}(t) = A(t) x(t) + \int_{\sigma }^{t} P(t,s) x(s) \,ds + f(t), \quad \sigma \leq t \leq a, \end{aligned}
(2.1)
\begin{aligned}& x(\sigma ) = y,\qquad x^{\prime}(\sigma ) = z \in X \end{aligned}
(2.2)

for $0 \leq\sigma \leq a$.

Motivated by (1.6), we consider the following concept of solution.

### Definition 2.1

A continuous function $x(\cdot, \sigma ) : [\sigma , a] \to X$ is said to be a mild solution of problem (2.1)-(2.2) if

\begin{aligned} x(t, \sigma ) =& C(t, \sigma ) y + S(t, \sigma ) z + \int_{\sigma }^{t} S(t, s) \int_{\sigma }^{s} P(s, \xi) x(\xi) \,d\xi \,ds \\ &{}+ \int_{\sigma }^{t} S(t, s) f(s) \,ds \end{aligned}
(2.3)

for all $\sigma \leq t \leq a$.

We abbreviate the notation by writing ${ \|P\| = \sup_{0 \leq s \leq t \leq a} \|P(t,s)\|}$. The following result establishes the basic existence and uniqueness of mild solutions.

### Theorem 2.1

Assume the function $f : [\sigma , a] \to X$ is integrable. Then for each $y, z \in X$, problem (2.1)-(2.2) has a unique mild solution.

### Proof

We define the map $\Gamma: C([\sigma , a], X) \to C([\sigma , a], X)$ by the expression

$$\Gamma x (t) = C(t,\sigma ) y + S(t, \sigma ) z + \int_{\sigma }^{t} \int _{\sigma }^{s} S(t,s) P(s,\xi) x(\xi) \,d\xi \,ds + \int_{\sigma }^{t} S(t, s) f(s) \,ds$$

for $\sigma \leq t \leq a$. For $u, v \in C([\sigma , a], X)$, we have

\begin{aligned} \bigl\Vert \Gamma u(t) - \Gamma v(t)\bigr\Vert \leq& \biggl\Vert \int_{\sigma }^{t} \int_{\sigma }^{s} S(t,s) P(s,\xi) \bigl[u(\xi) - v(\xi) \bigr] \,d\xi \,ds \biggr\Vert \\ \leq& N \int_{\sigma }^{t} \int_{\sigma }^{s} \bigl\Vert P(s,\xi) \bigr\Vert \bigl\Vert u(\xi) - v(\xi) \bigr\Vert \,d\xi \,ds \\ \leq& N \Vert P \Vert \int_{\sigma }^{t} (t - \xi) \bigl\Vert u(\xi) - v(\xi) \bigr\Vert \,d\xi \\ \leq& N \Vert P\Vert \frac{(t - \sigma )^{2}}{2} \sup_{\sigma \leq s \leq t} \bigl\Vert u(s) - v(s)\bigr\Vert . \end{aligned}

Repeating this argument, we can show that $\Gamma^{n}$ is a contraction for n large enough. If $x(\cdot)$ is the fixed point of Γ, then the integral equation (2.3) is verified for $\sigma \leq t \leq a$. □
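The contraction property of $\Gamma^{n}$ can be quantified. Iterating the estimate above gives, by induction on n,

$$\bigl\Vert \Gamma^{n} u(t) - \Gamma^{n} v(t)\bigr\Vert \leq \frac{(N \Vert P\Vert )^{n} (t - \sigma )^{2n}}{(2n)!} \sup_{\sigma \leq s \leq t} \bigl\Vert u(s) - v(s)\bigr\Vert ,$$

the induction step resting on the elementary identity $\int_{\sigma }^{t} (t - \xi ) \frac{(\xi - \sigma )^{2n-2}}{(2n-2)!} \,d\xi = \frac{(t - \sigma )^{2n}}{(2n)!}$. Since $(N \Vert P\Vert a^{2})^{n}/(2n)! \to 0$ as $n \to\infty$, the map $\Gamma^{n}$ is a contraction for n large enough.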

Now, we proceed to define the resolvent operator corresponding to problem (2.1)-(2.2). Let $x(t, \sigma , y, z)$ be the mild solution of problem (2.1)-(2.2) for $f = 0$. We define $R(t, \sigma ) z = x(t, \sigma , 0, z)$.

### Theorem 2.2

The map $R: \Delta\to\mathcal{L}(X)$ is strongly continuous, $R(t, \cdot) z$ is continuously differentiable for all $z \in X$, and

$$x(t, \sigma , y, 0) = - \frac{\partial R(t, \sigma ) y}{\partial \sigma }.$$

Moreover, if $f : [0, a] \to X$ is integrable, then problem (1.1)-(1.2) has a unique mild solution which is given by

$$x (t) = - \frac{\partial R(t, s) y}{\partial s} \bigg|_{s = 0} + R(t, 0) z + \int _{0}^{t} R(t,s) f(s) \,ds$$
(2.4)

for all $y, z \in X$.

### Proof

The first assertion is an immediate consequence of the definition of mild solution. Moreover, since $R(t, \sigma ) z$ satisfies the equation

$$R(t, \sigma ) z = S(t, \sigma ) z + \int_{\sigma }^{t} S(t, s) \int _{\sigma }^{s} P(s, \xi) R(\xi, \sigma ) z \,d\xi \,ds,$$
(2.5)

we are led to consider the integral equation

$$u(t, \sigma ) = \frac{\partial S(t, \sigma ) z}{\partial\sigma } + \int_{\sigma }^{t} S(t, s) \int_{\sigma }^{s} P(s, \xi) u(\xi, \sigma ) \,d\xi \,ds$$
(2.6)

for $0 \leq\sigma \leq t \leq a$. Proceeding as in the proof of Theorem 2.1, we can show that equation (2.6) has a unique continuous solution $u : \Delta\to X$. Integrating in (2.6), we obtain

\begin{aligned} \int_{s}^{t} u(t, \sigma ) \,d\sigma =& \int_{s}^{t} \frac{\partial S(t, \sigma ) z}{\partial\sigma } \,d\sigma + \int_{s}^{t} \biggl[ \int_{\sigma }^{t} S(t, \tau) \int_{\sigma }^{\tau} P(\tau, \xi) u(\xi, \sigma ) \,d\xi \,d \tau \biggr] \,d\sigma \\ =& - S(t, s) z + \int_{s}^{t} S(t, \tau) \biggl[ \int_{s}^{\tau} \int_{\sigma }^{\tau} P(\tau, \xi) u(\xi, \sigma ) \,d\xi \,d \sigma \biggr] \,d\tau \\ =& - S(t, s) z + \int_{s}^{t} S(t, \tau) \biggl[ \int_{s}^{\tau} \int_{s}^{\xi} P(\tau, \xi) u(\xi, \sigma ) \,d\sigma \,d\xi \biggr] \,d\tau \\ =& - S(t, s) z + \int_{s}^{t} S(t, \tau) \int_{s}^{\tau} P(\tau , \xi) \biggl[ \int_{s}^{\xi} u(\xi, \sigma ) \,d\sigma \biggr] \,d\xi \,d\tau. \end{aligned}

By comparing this expression with equation (2.5), and using the uniqueness of solutions, we infer that $R(t, s) z = - \int_{s}^{t} u(t, \sigma ) \,d\sigma$. This implies that $R(t, \cdot) z$ is continuously differentiable and ${ \frac{\partial R(t, s) z}{\partial s} = u(t, s)}$. Combining now (2.6) with (2.3), we infer that $- u(t, s) = x(t, s, z, 0)$.

Finally, since equation (1.1) is linear, in order to establish (2.4) we can assume that $y = z = 0$. We define ${ u(t) = \int_{0}^{t} R(t,s) f(s) \,ds}$. Using (2.5), we obtain

\begin{aligned}& \int_{0}^{t} S(t, s) \int_{0}^{s} P(s, \xi) u(\xi) \,d\xi \,ds + \int_{0}^{t} S(t, s) f(s) \,ds \\ & \quad = \int_{0}^{t} S(t, s) \int_{0}^{s} P(s, \xi) \biggl[ \int _{0}^{\xi} R(\xi, \tau) f(\tau) \,d\tau \biggr] \,d\xi \,ds + \int _{0}^{t} S(t, s) f(s) \,ds \\ & \quad = \int_{0}^{t} S(t, s) \int_{0}^{s} \int_{\tau}^{s} P(s, \xi) R(\xi, \tau) f(\tau) \,d\xi \,d\tau \,ds + \int_{0}^{t} S(t, s) f(s) \,ds \\ & \quad = \int_{0}^{t} \biggl[ \int_{\tau}^{t} S(t, s) \int_{\tau}^{s} P(s, \xi) R(\xi, \tau) f(\tau) \,d\xi \,ds \biggr] \,d\tau+ \int _{0}^{t} S(t, s) f(s) \,ds \\ & \quad = \int_{0}^{t} \bigl[ R(t, \tau) f(\tau) - S(t, \tau) f(\tau) \bigr] \,d\tau+ \int_{0}^{t} S(t, s) f(s) \,ds \\ & \quad = \int_{0}^{t} R(t, \tau) f(\tau) \,d\tau \\ & \quad = u(t), \end{aligned}

which implies that $u(\cdot)$ satisfies (2.3), and completes the proof. □
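The successive-approximation scheme underlying Theorems 2.1 and 2.2 can also be observed numerically. The following Python sketch treats the scalar toy case $A(t) \equiv -1$ (so that $S(t,s) = \sin (t-s)$), $P(t,s) \equiv 1/2$, $f = 0$, $\sigma = 0$, $z = 1$; all of these concrete choices are illustrative assumptions, not data from the text. It iterates the map Γ from the proof of Theorem 2.1 on a grid; the differences between consecutive iterates collapse at the factorial rate noted above, and the limit is the resolvent trajectory $R(t,0)z$.

```python
import math

# Scalar toy case (illustrative assumptions): A(t) = -1, so S(t,s) = sin(t-s);
# P(t,s) = p constant; f = 0; sigma = 0; z = 1; interval [0, a].
p, z, a, n = 0.5, 1.0, 1.0, 200
h = a / n
t = [i * h for i in range(n + 1)]

def S(ti, sj):
    return math.sin(ti - sj)

def gamma(x):
    """One application of the map Gamma from the proof of Theorem 2.1."""
    # inner[j] approximates int_0^{t_j} p * x(xi) dxi (cumulative trapezoid rule)
    inner = [0.0] * (n + 1)
    for j in range(1, n + 1):
        inner[j] = inner[j - 1] + 0.5 * h * (p * x[j - 1] + p * x[j])
    # Gamma x(t_i) = S(t_i, 0) z + int_0^{t_i} S(t_i, s) * inner(s) ds
    out = []
    for i in range(n + 1):
        acc = 0.0
        for j in range(1, i + 1):
            acc += 0.5 * h * (S(t[i], t[j - 1]) * inner[j - 1]
                              + S(t[i], t[j]) * inner[j])
        out.append(S(t[i], 0.0) * z + acc)
    return out

x = [0.0] * (n + 1)
diffs = []
for _ in range(10):
    x_new = gamma(x)
    diffs.append(max(abs(u - v) for u, v in zip(x, x_new)))
    x = x_new

# Consecutive differences decay factorially (Gamma^n is eventually a
# contraction); the limit x approximates R(t, 0) z on the grid.
assert diffs[-1] < 1e-12 and diffs[-1] < diffs[0]
```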

In most cases, even when $f = 0$, the mild solution constructed in Theorem 2.1 does not satisfy equation (1.1). In the next sections, through the introduction of the concept of a resolvent operator for the homogeneous equation, we analyze the existence of solutions of problem (1.1)-(1.2). We will establish our results for the general situation in which the operator $P(t,s)$ is unbounded.

## Existence of resolvent

The purpose of this section is to define a resolvent operator when the linear operators $P(t, s)$ are not bounded. To reach our aim, in this section we study the existence of solutions for the initial value problem

\begin{aligned}& x^{\prime\prime}(t) = A(t) x(t) + \int_{\sigma }^{t} P(t,s) x(s) \,ds , \quad \sigma \leq t \leq a, \end{aligned}
(3.1)
\begin{aligned}& x(\sigma ) = 0, \qquad x^{\prime}(\sigma ) = z \in X \end{aligned}
(3.2)

for $0 \leq\sigma \leq a$.

To establish our results we introduce the following terminology. Let $[D]$ be the space D endowed with the graph norm corresponding to the operator $A(0)$. Since $A(t)$ is a closed linear operator on D, by the closed graph theorem we see that $[D]$ is a Banach space, and $A(t) \in\mathcal{L}([D],X)$ for all $t \in I$. Let now $[D]_{t}$ be the Banach space D endowed with the graph norm corresponding to the operator $A(t)$. It is an immediate consequence of the previous assertion that $[D]$ and $[D]_{t}$ are equivalent Banach spaces. Moreover, since $A : I \to\mathcal{L}([D],X)$ is strongly continuous, it follows from the uniform boundedness principle that there is a constant Ñ such that $\|A(t)\|_{\mathcal{L}([D],X)} \leq\widetilde{N}$ for all $t \in I$.

We define the space Z of twice differentiability of S, consisting of vectors $z \in X$ such that the function $(t, s) \mapsto A(t) S(t, s) z$ is continuous on Δ. We consider Z endowed with the norm

$$\Vert z\Vert _{Z} = \Vert z\Vert + \sup_{0 \leq s \leq t \leq a} \bigl\Vert A(t) S(t, s) z\bigr\Vert ,\quad z \in Z.$$

It is easy to see that $(Z, \|\cdot\|_{Z})$ is a Banach space continuously included in X, and $S(t,s) \in\mathcal{L}(Z, [D])$. Moreover, from the properties of $S(\cdot, \cdot)$ it follows that $D \subseteq Z$. Hence, there exist constants $\widetilde{N}_{1}, \widetilde{N}_{2} > 0$ such that

\begin{aligned}& \bigl\Vert S(t, s) z\bigr\Vert _{[D]} \leq \widetilde{N}_{1} \Vert z\Vert _{Z}, \quad z \in Z, \\& \Vert z\Vert _{Z} \leq \widetilde{N}_{2} \Vert z \Vert _{[D]}, \quad z \in D. \end{aligned}

We consider the following conditions.

1. (H1)

For every $0 \leq s \leq t \leq a$, $P(t, s) : [D] \to Z$ is a bounded linear operator, and for each $x \in D$, the function $P(\cdot, \cdot) x$ is continuous with values in Z, and

$$\bigl\Vert P(t,s) x \bigr\Vert _{Z} \leq b \Vert x\Vert _{[D]}$$

for some constant $b > 0$ independent of $(t, s) \in\Delta$.

2. (H2)

There exists a constant $N_{2} \geq0$, such that

$$\biggl\Vert \int_{\xi}^{t} S(t,s) P(s, \xi) x \,ds\biggr\Vert \leq N_{2} \Vert x\Vert$$

for all $0 \leq \xi \leq t \leq a$ and all $x \in D$.

Motivated by the results in , we consider the following concept of solution (or classical solution).

### Definition 3.1

A function $x : [\sigma , a] \to[D]$ is said to be a solution of problem (3.1)-(3.2) if $x(\cdot) \in C([\sigma , a], [D]) \cap C^{2}([\sigma , a], X)$ and (3.1) and (3.2) are satisfied.

We are in a position to establish our first result on the existence of solutions.

### Theorem 3.1

Assume that (H1) holds. For each $z \in D$, problem (3.1)-(3.2) has a unique solution $x(\cdot , \sigma , z)$. Moreover, there exist constants $M_{1}, M_{2} > 0$ such that

$$\bigl\Vert x(t, \sigma , z)\bigr\Vert _{[D]} \leq M_{1} e^{M_{2}(t - \sigma )} \Vert z\Vert _{[D]},\quad t \in[\sigma , a].$$
(3.3)

If further (H2) holds, then there exist constants $M_{1}^{\prime}, M_{2}^{\prime} > 0$ such that

$$\bigl\Vert x(t, \sigma , z)\bigr\Vert \leq M_{1}^{\prime} e^{M_{2}^{\prime} (t - \sigma )} \Vert z\Vert , \quad t \in[\sigma , a].$$
(3.4)

### Proof

We define the map Γ on $C([\sigma , a], [D])$ by

$$\Gamma x (t) = S(t, \sigma ) z + \int_{\sigma }^{t} \int_{\sigma }^{s} S(t,s) P(s,\xi) x(\xi) \,d\xi \,ds, \quad \sigma \leq t \leq a.$$
(3.5)

It follows from assumption (H1) that for each $x \in C([\sigma , a], [D])$, the function $s \mapsto\int_{\sigma }^{s} P(s,\xi) x(\xi) \,d\xi$ is a continuous function with values in Z. Proceeding now as in , Theorem 3.6, we infer that Γx is a continuous function with values in $[D]$. Hence, $\Gamma: C([\sigma , a], [D]) \to C([\sigma , a], [D])$ is well defined. In addition, for $u, v \in C([\sigma , a], [D])$, we have

\begin{aligned} \bigl\Vert \Gamma u(t) - \Gamma v(t)\bigr\Vert _{[D]} \leq& \biggl\Vert \int_{\sigma }^{t} \int_{\sigma }^{s} S(t,s) P(s,\xi) \bigl[u(\xi) - v(\xi) \bigr] \,d\xi \,ds \biggr\Vert _{[D]} \\ \leq& \widetilde{N}_{1} \int_{\sigma }^{t} \int_{\sigma }^{s} \bigl\Vert P(s,\xi) \bigl[u(\xi) - v( \xi)\bigr] \bigr\Vert _{Z} \,d\xi \,ds \\ \leq& \widetilde{N}_{1} b \int_{\sigma }^{t} (t - \xi) \bigl\Vert u(\xi) - v(\xi) \bigr\Vert _{[D]} \,d\xi \\ \leq& \widetilde{N}_{1} b \frac{ (t - \sigma )^{2}}{2} \sup _{\sigma \leq s \leq t} \bigl\Vert u(s) - v(s)\bigr\Vert _{[D]}. \end{aligned}

Repeating this argument, we can show that $\Gamma^{n}$ is a contraction for n large enough. If $x(\cdot) = x(\cdot, \sigma , z)$ is the fixed point of Γ, then the integral equation

$$x(t)= S(t,\sigma ) z + \int_{\sigma }^{t} \int_{\sigma }^{s} S(t,s) P(s,\xi) x(\xi) \,d\xi \,ds,$$
(3.6)

is verified for $\sigma \leq t \leq a$. Using again , Theorem 3.6, we conclude that $x(\cdot)$ is a solution of problem (3.1)-(3.2).

On the other hand, applying (3.6) and proceeding as earlier, we can estimate

$$\bigl\Vert x(t)\bigr\Vert _{[D]} \leq\widetilde{N}_{1} \widetilde{N}_{2} \Vert z \Vert _{[D]} + \widetilde{N}_{1} b (a - \sigma ) \int_{\sigma }^{t} \bigl\Vert x(s) \bigr\Vert _{[D]} \,ds,$$

and using the Gronwall-Bellman lemma we obtain (3.3). Assume now that (H2) holds. Proceeding as above, we have

\begin{aligned} \bigl\Vert x(t)\bigr\Vert \leq& N \Vert z \Vert + \biggl\Vert \int_{\sigma }^{t} \int _{\sigma }^{s} S(t,s) P(s,\xi) x(\xi) \,d\xi \,ds \biggr\Vert \\ = & N \Vert z \Vert + \biggl\Vert \int_{\sigma }^{t} \int_{\xi}^{t} S(t,s) P(s,\xi) x(\xi) \,ds \,d\xi\biggr\Vert \\ \leq& N \Vert z \Vert + N_{2} \int_{\sigma }^{t} \bigl\Vert x(\xi)\bigr\Vert \,d\xi, \end{aligned}

and turning to an application of the Gronwall-Bellman lemma we get (3.4). □
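For convenience, we recall the form of the Gronwall-Bellman lemma invoked throughout: if $u : [\sigma , a] \to[0, \infty)$ is continuous and $u(t) \leq\alpha + \beta\int_{\sigma }^{t} u(\xi) \,d\xi$ for constants $\alpha, \beta\geq0$, then

$$u(t) \leq\alpha e^{\beta(t - \sigma )}, \quad \sigma \leq t \leq a.$$

Applied with $u(t) = \Vert x(t)\Vert$, $\alpha = N \Vert z\Vert$, and $\beta = N_{2}$, this yields (3.4) with $M_{1}^{\prime} = N$ and $M_{2}^{\prime} = N_{2}$.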

We can also avoid the condition that P be Z-valued. To this end, we consider the following conditions on P.

1. (H3)

For every $0 \leq s \leq t \leq a$, $P(t, s) : [D] \to X$ is a bounded linear operator, and for each $x \in D$, $P(\cdot, \cdot) x$ is continuous and

$$\bigl\Vert P(t,s) x \bigr\Vert _{X} \leq b \Vert x\Vert _{[D]}$$

for some $b > 0$ independent of $(t, s) \in\Delta$.

2. (H4)

There exists a constant $L_{P} > 0$ such that

$$\bigl\Vert P(t_{2}, s) x - P(t_{1}, s) x \bigr\Vert \leq L_{P} |t_{2} - t_{1} | \Vert x\Vert _{[D]}$$

for all $x \in D$, $0 \leq s \leq t_{1} \leq t_{2}$.

Using Kozak, Theorem 2.1, and Henríquez, Corollary 3.4, we can establish the following result. We abbreviate by RNP the Radon-Nikodym property, and we refer the reader to  for the properties of spaces with the RNP.

### Theorem 3.2

Assume that conditions (H3)-(H4) hold. Assume further that the following conditions are fulfilled:

1. (i)

The space X verifies the RNP.

2. (ii)

For each $x \in D$ the function $A(\cdot) x$ is continuously differentiable.

3. (iii)

There exists $\lambda \in\mathbb{C}$ such that $\lambda \in\rho (A(t))$ for all $t \in[0, a]$.

Then for each $z \in D$, problem (3.1)-(3.2) has a unique solution $x(\cdot, \sigma , z)$ and there exists a constant $M_{1} > 0$ such that

$$\sup_{\sigma \leq t \leq a} \bigl\Vert x(t, \sigma , z)\bigr\Vert _{[D]} \leq M_{1} \Vert z\Vert _{[D]}.$$
(3.7)

Moreover, if there exists a constant $k > 0$ such that

$$\bigl\Vert S(t, s) P(s, \theta ) x\bigr\Vert \leq k \Vert x\Vert ,\quad x \in D, 0 \leq\theta \leq s \leq t \leq a,$$
(3.8)

then inequality (3.4) holds for certain constants $M_{1}^{\prime}, M_{2}^{\prime} > 0$.

### Proof

We define the map Γ by (3.5). We can rewrite

$$\Gamma x(t) = S(t, \sigma ) z + \int_{\sigma }^{t} S(t,s) \int _{\sigma }^{s} P(s,\xi) x(\xi) \,d\xi \,ds.$$

Let ${ w(s) = \int_{\sigma }^{s} P(s,\xi) x(\xi) \,d\xi}$. It follows from (H3)-(H4) that w is a Lipschitz continuous function with values in X. Consequently, from , Corollary 3.4, we infer that $y = \Gamma x$ is a solution of problem

\begin{aligned}& y^{\prime\prime} (t) = A(t) y(t) + w(t), \quad \sigma \leq t \leq a, \\& y(\sigma ) = 0, \qquad y^{\prime}(\sigma ) = z. \end{aligned}

Hence, $\Gamma: C([\sigma , a], [D]) \to C([\sigma , a], [D])$ is well defined. In addition, it follows from the construction in , Corollary 3.4, that

$$\bigl\Vert \Gamma u(t) - \Gamma v(t)\bigr\Vert _{[D]} \leq M (t - \sigma ) \sup_{\sigma \leq s \leq t} \bigl\Vert u(s) - v(s)\bigr\Vert _{[D]}$$

for all $u, v \in C([\sigma , a], [D])$ and some constant $M > 0$, which is independent of t, u, v.

Repeating this argument, we can show that $\Gamma^{n}$ is a contraction for n large enough. If $x(\cdot) = x(\cdot, \sigma , z)$ is the fixed point of Γ, then $x(\cdot)$ is a solution of problem (3.1)-(3.2).

Let $\widetilde{\Gamma} : [D] \to C([\sigma , a], [D])$ be the map

$$\widetilde{\Gamma}(z) (t) = x(t, \sigma , z), \quad z \in D.$$

It follows from (H3) that Γ̃ is a closed linear map, and applying the closed graph theorem, we obtain that Γ̃ is a bounded linear map, which implies (3.7).

On the other hand, combining (3.6) with (3.8) and proceeding as earlier, we can estimate

\begin{aligned} \bigl\Vert x(t)\bigr\Vert \leq& N \Vert z \Vert + \biggl\Vert \int_{\sigma }^{t} \int _{\sigma }^{s} S(t,s) P(s,\xi) x(\xi) \,d\xi \,ds \biggr\Vert \\ \leq & N \Vert z \Vert + k \int_{\sigma }^{t} \int_{\sigma }^{s} \bigl\Vert x(\xi ) \bigr\Vert \,d \xi \,ds \\ \leq& N \Vert z \Vert + k (a - \sigma ) \int_{\sigma }^{t} \max_{\sigma \leq\xi \leq s } \bigl\Vert x(\xi)\bigr\Vert \,ds, \end{aligned}

and turning to apply the Gronwall-Bellman lemma we obtain (3.4). □

It is worth pointing out that condition (3.8) implies condition (H2). We can also obtain a similar result by modifying this condition.

### Corollary 3.1

Assume that $D \subseteq D(P(t, s))$ for $0 \leq s \leq t \leq a$ and conditions (H3)-(H4) hold. Assume further that the following conditions are fulfilled:

1. (i)

The space X verifies the RNP.

2. (ii)

For each $x \in D$ the function $A(\cdot) x$ is continuously differentiable.

3. (iii)

There exists $\lambda \in\mathbb{C}$ such that $\lambda \in\rho (A(t))$ for all $t \in[0, a]$.

Then for each $z \in D$, problem (3.1)-(3.2) has a unique solution $x(\cdot, \sigma , z)$ and inequality (3.7) holds. Moreover, if $\operatorname{Im}(S(\tau, s)) \subseteq D(P(t, \tau))$ and there exists a constant $k > 0$ such that

$$\bigl\Vert P(t, \tau ) S(\tau , s) x\bigr\Vert \leq k \Vert x\Vert , \quad x \in X,$$
(3.9)

then inequality (3.4) holds for certain constants $M_{1}^{\prime}, M_{2}^{\prime} > 0$.

### Proof

The existence of the solution $x(\cdot) = x(\cdot, \sigma , z)$ as well as the inequality (3.7) follows from Theorem 3.2. Hence, it only remains to establish (3.4). Let

$$q(t) = \int_{\sigma }^{t} P(t, \theta ) x(\theta ) \,d\theta .$$
(3.10)

Using (3.6) we can write

\begin{aligned} q(t) = & \int_{\sigma }^{t} P(t, \tau) S(\tau, \sigma ) z \,d\tau + \int _{\sigma }^{t} P(t, \tau) \int_{\sigma }^{\tau} S(\tau, s) q(s) \,ds \,d\tau \\ = & \int_{\sigma }^{t} P(t, \tau) S(\tau, \sigma ) z \,d\tau+ \int_{\sigma }^{t} \int_{\sigma }^{\tau} P(t, \tau) S(\tau, s) q(s) \,ds \,d \tau, \end{aligned}

which implies that

\begin{aligned} \bigl\Vert q(t) \bigr\Vert \leq & k (t - \sigma ) \Vert z\Vert + k \int_{\sigma }^{t} \int _{\sigma }^{\tau} \bigl\Vert q(s) \bigr\Vert \,ds \,d\tau \\ \leq & k (t - \sigma ) \Vert z\Vert + k (a - \sigma ) \int_{\sigma }^{t} \max_{\sigma \leq s \leq\tau} \bigl\Vert q(s) \bigr\Vert \,d\tau, \end{aligned}

and applying the Gronwall-Bellman lemma, we obtain

$$\max_{\sigma \leq s \leq t} \bigl\Vert q(s) \bigr\Vert \leq k (a - \sigma ) \Vert z\Vert e^{k (a - \sigma )(t -\sigma )}.$$

Using again (3.6), we infer that

$$\bigl\Vert x(t)\bigr\Vert \leq N \Vert z\Vert + N \int_{\sigma }^{t} \bigl\Vert q(s) \bigr\Vert \,ds \leq N \Vert z\Vert e^{k (a - \sigma )(t -\sigma )},$$

which establishes (3.4). □

The above theorems characterize broad classes of operators P for which problem (3.1)-(3.2) has a solution. There are also particular situations in which it is possible to establish the existence of solutions of problem (3.1)-(3.2). Below we mention one of them.

In the rest of this section we specialize our development to consider the operator $A(t)$ as an additive perturbation of an infinitesimal generator of a cosine function of operators. Specifically, we consider the following condition.

1. (A)

The operator $A(t) = A_{0} + B(t)$, where $A_{0}$ is the infinitesimal generator of a cosine function $(C_{0}(t))_{t \in\mathbb{R}}$ with associated sine function $(S_{0}(t))_{t \in\mathbb{R}}$, and $B : [0, a] \to\mathcal{L}(X)$ is a strongly differentiable map.

We refer to  for the theory of cosine functions of operators. We only include here the following property because it is essential to our development. We consider the second order abstract Cauchy problem

\begin{aligned}& y^{\prime\prime}(t) = A_{0} y(t) + f(t), \quad \sigma \leq t, \end{aligned}
(3.11)
\begin{aligned}& y(\sigma ) = 0, \qquad y^{\prime}(\sigma ) = 0. \end{aligned}
(3.12)

### Lemma 3.1

Assume that $f : [\sigma , a] \to X$ is continuously differentiable. Let

$$y(t) = \int_{\sigma }^{t} S_{0}(t - s) f(s) \,ds.$$

Then $y(\cdot)$ is a classical solution of (3.11)-(3.12). Moreover,

\begin{aligned} \bigl\Vert y(t)\bigr\Vert _{A_{0}} \leq&\bigl\Vert f(t) \bigr\Vert + \bigl\Vert C_{0}(t -\sigma ) f(\sigma ) \bigr\Vert \\ &{}+ K_{0}(t) \int_{\sigma }^{t} \bigl\Vert f(s)\bigr\Vert \,ds \\ &{}+ K_{1}(t) \int_{\sigma }^{t} \bigl\Vert f^{\prime}(s)\bigr\Vert \,ds, \quad t \geq \sigma \end{aligned}
(3.13)

for some constants $K_{0}(t), K_{1}(t) \geq0$.

### Proof

The first assertion is a consequence of , Proposition 5.5. To prove the estimate (3.13), we note that

\begin{aligned}& y^{\prime}(t) = \int_{\sigma }^{t} C_{0}(t - s) f(s) \,ds, \\& y^{\prime\prime}(t) = C_{0}(t - \sigma ) f(\sigma ) + \int _{\sigma }^{t} C_{0}(t - s) f^{\prime}(s) \,ds. \end{aligned}

This implies that

\begin{aligned} \bigl\Vert y(t)\bigr\Vert _{A_{0}} =& \bigl\Vert y(t)\bigr\Vert + \bigl\Vert A_{0} y(t)\bigr\Vert \\ =& \bigl\Vert y(t)\bigr\Vert + \bigl\Vert y^{\prime\prime}(t) - f(t)\bigr\Vert \\ \leq& \sup_{0 \leq\tau\leq t} \bigl\Vert S_{0}(\tau) \bigr\Vert \int_{\sigma }^{t} \bigl\Vert f(s) \bigr\Vert \,ds + \bigl\Vert f(t)\bigr\Vert \\ &{} + \bigl\Vert C_{0}(t - \sigma ) f(\sigma ) \bigr\Vert + \sup_{0 \leq\tau\leq t} \bigl\Vert C_{0}(\tau) \bigr\Vert \int_{\sigma }^{t} \bigl\Vert f^{\prime}(s) \bigr\Vert \,ds, \end{aligned}

which shows (3.13). □

Let E be the space consisting of vectors $x \in X$ for which the function $C_{0}(\cdot) x$ is of class $C^{1}$. It was proved by Kisińsky  that E endowed with the norm

$$\Vert x\Vert _{1} = \Vert x\Vert + \sup_{0 \leq t \leq1} \bigl\Vert A_{0} S_{0}(t) x \bigr\Vert ,\quad x \in E,$$

is a Banach space, and $S_{0} : \mathbb{R}\to\mathcal{L}(E, [D(A_{0})])$ is a strongly continuous operator valued map. Furthermore, if $x :[0, \infty) \to E$ is a continuous function, then $y(t) = \int_{0}^{t} S_{0}(t -s) x(s)\,ds$ defines a $[D(A_{0})]$-valued continuous function.

On the other hand, assuming that condition (A) holds, $A(\cdot)$ generates an evolution operator $S(t, s)$ which satisfies

$$S(t,s) z = S_{0}(t -s) z + \int_{s}^{t} S_{0}(t - \tau) B(\tau) S( \tau, s) z \,d\tau.$$
(3.14)

We refer the reader to  for details. Returning to problem (3.1)-(3.2), it is clear that in this case $D= D(A_{0})$.
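Relation (3.14) admits a direct numerical check in a scalar instance of condition (A). Taking $A_{0} = -1$ and $B(t) \equiv b$ with $0 < b < 1$ (illustrative assumptions, not from the text), one has $S_{0}(t) = \sin t$ and $S(t,s) = \sin (\omega (t-s))/\omega$ with $\omega = \sqrt{1-b}$, and (3.14) becomes a scalar Volterra identity that the following Python sketch verifies by quadrature.

```python
import math

# Scalar instance of condition (A) (illustrative assumptions): A_0 = -1,
# B(t) = b = 0.5, so S_0(t) = sin(t) and the perturbed evolution operator is
# S(t,s) = sin(omega*(t-s))/omega with omega = sqrt(1 - b).
b = 0.5
omega = math.sqrt(1.0 - b)

def S0(t):
    return math.sin(t)

def S(t, s):
    return math.sin(omega * (t - s)) / omega

def rhs(t, s, z, n=2000):
    """Right-hand side of (3.14): S0(t-s) z + int_s^t S0(t-tau) b S(tau,s) z dtau."""
    h = (t - s) / n
    integrand = lambda tau: S0(t - tau) * b * S(tau, s) * z
    total = 0.5 * (integrand(s) + integrand(t))
    for k in range(1, n):
        total += integrand(s + k * h)
    return S0(t - s) * z + h * total

# Both sides of (3.14) agree up to quadrature error.
t, s, z = 1.0, 0.0, 1.0
assert abs(S(t, s) * z - rhs(t, s, z)) < 1e-6
```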

### Theorem 3.3

Assume that condition (A) holds. Let $P : \Delta\to\mathcal{L}([D(A_{0})], X)$ be a strongly continuous map such that ${ \frac{\partial P(t,s)}{\partial t} : \Delta\to \mathcal{L} ([D(A_{0})], X)}$ is also strongly continuous. Then for each $z \in D(A_{0})$, problem (3.1)-(3.2) has a unique solution $x(\cdot, \sigma , z)$ that satisfies inequality (3.3). Moreover, if there exists a constant $k > 0$ such that

$$\bigl\Vert S_{0}(\tau) P(s, \theta ) x\bigr\Vert \leq k \Vert x \Vert , \quad x \in D(A_{0})$$
(3.15)

for all $0 \leq\tau\leq a$ and $0 \leq\theta \leq s \leq a$, then inequality (3.4) holds for certain constants $M_{1}^{\prime}, M_{2}^{\prime} > 0$.

### Proof

Let Γ be the map defined by (3.5). For $x(\cdot) \in C([\sigma , a], [D])$, we denote by $q(s)$ the function defined by (3.10). It follows from our hypotheses that q is continuously differentiable. Substituting $S(t,s)$ from (3.14) in (3.5), we have

\begin{aligned} \begin{aligned} \Gamma x(t) & = S(t, \sigma ) z + \int_{\sigma }^{t} S_{0}(t- s) q(s) \,ds + \int_{\sigma }^{t} \int_{s}^{t} S_{0}(t- \tau) B(\tau) S( \tau, s) q(s) \,d\tau \,ds \\ & = S(t, \sigma ) z + \int_{\sigma }^{t} S_{0}(t- s) q(s) \,ds + \int_{\sigma }^{t} S_{0}(t- \tau) B(\tau) \int_{\sigma }^{\tau} S(\tau, s) q(s) \,ds \,d\tau \\ & = S(t, \sigma ) z + f_{1}(t) + f_{2}(t). \end{aligned} \end{aligned}

Since the function q is continuously differentiable, it follows from Lemma 3.1 that $f_{1}(t)$ is a classical solution of problem (3.11)-(3.12) with $f(t) = q(t)$. In a similar way, since the function

$$f(\tau) = B(\tau) \int_{\sigma }^{\tau} S(\tau, s) q(s) \,ds$$

is continuously differentiable, and using again Lemma 3.1 we infer that $f_{2}(t)$ is a classical solution of problem (3.11)-(3.12). Combining these assertions, we conclude that $\Gamma x \in C([\sigma , a], [D])$. It follows from (3.10) that

\begin{aligned} \bigl\Vert q(s) \bigr\Vert \leq& \sup_{(s, \theta ) \in\Delta} \bigl\Vert P(s, \theta )\bigr\Vert _{\mathcal{L}([D], X)} \int_{\sigma }^{s} \bigl\Vert x(\theta )\bigr\Vert _{[D]} \,d\theta \\ \leq& \sup_{(s, \theta ) \in\Delta} \bigl\Vert P(s, \theta )\bigr\Vert _{\mathcal{L}([D], X)} (s - \sigma ) \max_{\sigma \leq\theta \leq s} \bigl\Vert x(\theta )\bigr\Vert _{[D]}. \end{aligned}

Furthermore,

$$q^{\prime}(s) = P(s, s) x(s) + \int_{\sigma }^{s} \frac{\partial P(s, \theta )}{\partial s} x(\theta ) \,d\theta$$

and

\begin{aligned} \bigl\Vert q^{\prime}(s) \bigr\Vert \leq& \sup_{(s, \theta ) \in\Delta} \bigl\Vert P(s, \theta )\bigr\Vert _{\mathcal{L}([D], X)} \bigl\Vert x(s)\bigr\Vert _{[D]} + \sup_{(s, \theta ) \in\Delta} \biggl\Vert \frac{\partial P(s, \theta )}{\partial s} \biggr\Vert _{\mathcal{L}([D], X)} \int_{\sigma }^{s} \bigl\Vert x(\theta )\bigr\Vert _{[D]} \,d\theta \\ \leq& \sup_{(s, \theta ) \in\Delta} \bigl\Vert P(s, \theta )\bigr\Vert _{\mathcal{L}([D], X)} \bigl\Vert x(s)\bigr\Vert _{[D]} \\ &{} + \sup_{(s, \theta ) \in\Delta} \biggl\Vert \frac{\partial P(s, \theta )}{\partial s} \biggr\Vert _{\mathcal{L}([D], X)} (s - \sigma ) \max_{\sigma \leq\theta \leq s} \bigl\Vert x( \theta )\bigr\Vert _{[D]}. \end{aligned}

Using that $q(\sigma ) = 0$ and substituting the previous estimates in (3.13), we can assert, for $\sigma \leq t \leq a$, that

$$\bigl\Vert f_{1}(t) \bigr\Vert = \biggl\Vert \int_{\sigma }^{t} S_{0}(t- s) q(s) \,ds \biggr\Vert \leq K \int_{\sigma }^{t} \max_{\sigma \leq\theta \leq s} \bigl\Vert x(\theta )\bigr\Vert _{[D]} \,ds \leq K (t -\sigma ) \max _{\sigma \leq \theta \leq t} \bigl\Vert x(\theta )\bigr\Vert _{[D]}.$$

A similar argument allows us to estimate

\begin{aligned} \bigl\Vert f_{2}(t) \bigr\Vert = & \biggl\Vert \int_{\sigma }^{t} S_{0}(t- \tau) B(\tau) \int_{\sigma }^{\tau} S(\tau, s) q(s) \,ds \,d\tau\biggr\Vert \leq K \int_{\sigma }^{t} \max_{\sigma \leq\theta \leq s} \bigl\Vert x(\theta )\bigr\Vert _{[D]} \,ds \\ \leq& K (t -\sigma ) \max_{\sigma \leq\theta \leq t} \bigl\Vert x(\theta )\bigr\Vert _{[D]}. \end{aligned}

In these estimates K denotes a generic constant. Consequently, combining the earlier estimates we obtain

$$\bigl\Vert \Gamma x(t) - \Gamma y(t)\bigr\Vert _{[D]} \leq2 K \int_{\sigma }^{t} \max_{\sigma \leq\theta \leq s} \bigl\Vert x(\theta ) - y(\theta )\bigr\Vert _{[D]} \,ds \leq 2 K (t - \sigma ) \sup _{\sigma \leq s \leq t} \bigl\Vert x(s) - y(s)\bigr\Vert _{[D]}$$

for all $x, y \in C([\sigma , a], [D])$ and some constant $K > 0$, which is independent of t, x, y.

Repeating this argument, we can show that $\Gamma^{n}$ is a contraction for n large enough. If $x(\cdot)$ denotes the fixed point of Γ, then $x(\cdot)$ is a solution of problem (3.1)-(3.2).

To obtain (3.3) we proceed as in the proof of Theorem 3.2. Besides, for $z \in D(A_{0})$, from (3.14) we get

$$S(t,s) P(s, \theta ) z = S_{0}(t -s) P(s, \theta ) z + \int_{s}^{t} S_{0}(t - \tau) B(\tau) S( \tau, s) P(s, \theta ) z \,d\tau,$$

which implies

$$\bigl\Vert S(t,s) P(s, \theta ) z\bigr\Vert \leq k \Vert z \Vert + k_{1} \int_{s}^{t} \bigl\Vert S(\tau , s) P(s, \theta ) z \bigr\Vert \,d\tau$$

for some constant $k_{1} > 0$. Hence, using the Gronwall-Bellman lemma we infer that there exists a $k_{2} > 0$ such that

$$\bigl\Vert S(t,s) P(s, \theta ) z\bigr\Vert \leq k_{2} \Vert z \Vert$$

for all $0 \leq\theta \leq s \leq t \leq a$. This shows that condition (3.8) holds, and arguing as in the proof of Theorem 3.2 we can conclude that inequality (3.4) is fulfilled. □
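The Gronwall-Bellman step used here admits a quick numerical illustration. The sketch below (with arbitrary illustrative constants playing the roles of k and $k_{1}$) iterates the right-hand side of the inequality $u(t) \leq k + k_{1} \int_{0}^{t} u(\tau)\,d\tau$; the iterates converge to the extremal function, which stays within grid error of the Gronwall bound $k e^{k_{1} t}$.

```python
import numpy as np

# Numerical illustration (not part of the proof) of the Gronwall-Bellman
# lemma: any continuous u with u(t) <= k + k1 * int_0^t u(tau) dtau
# satisfies u(t) <= k * exp(k1 * t).  Iterating the right-hand side
# converges to the extremal function; k, k1, a are illustrative.
k, k1, a = 2.0, 1.5, 1.0
t = np.linspace(0.0, a, 2001)
h = t[1] - t[0]

u = np.full_like(t, k)                # start from the constant function u = k
for _ in range(60):                   # u_{j+1}(t) = k + k1 * int_0^t u_j
    cum = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) * h / 2)))
    u = k + k1 * cum

bound = k * np.exp(k1 * t)            # the Gronwall bound
print(float(np.max(np.abs(u - bound))))   # small: the iterates hug the bound
```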

In the rest of this section we assume that the hypotheses of Theorem 3.1, Theorem 3.2 or Theorem 3.3, including condition (H2), are satisfied. Therefore, in all cases, inequality (3.4) holds. Let now $R(t,s) : [D] \to[D]$ be the map

$$R(t,s) z = x(t, s, z),$$

where $x(\cdot, s, z)$ is the solution of problem (3.1)-(3.2) constructed in the above development. It is clear that $R(t,s)$ is linear and bounded. Moreover, $R(t,s) : D \to X$ is also bounded for the norm of X and, therefore, has a bounded linear extension to X. We keep the symbol R to denote this extension, and we say that R is the resolvent operator for system (3.1)-(3.2). Next we study some properties of R.

(i) It is clear from our construction that $R : \Delta\to\mathcal {L}(X)$ is a strongly continuous map. We denote by $M > 0$ a constant such that

$$\bigl\Vert R(t, s) \bigr\Vert \leq M, \quad (t, s) \in\Delta.$$
(3.16)

(ii) Let $z \in D$. Since $R(\cdot, \sigma ) z$ is a solution of problem (3.1)-(3.2) we have

$$\frac{\partial^{2}}{\partial t^{2}} R(t, \sigma ) z = A(t) R(t,\sigma ) z + \int_{\sigma }^{t} P(t, \xi) R(\xi,\sigma ) z \,d\xi.$$

We will denote by $G(t, \xi) : X \to X$ the extension to X of the operator $x \mapsto \int_{\xi}^{t} S(t,s) P(s, \xi) x \,ds$. It is clear that $G(\cdot, \cdot)$ is a strongly continuous map from Δ into $\mathcal{L}(X)$, and

$$R(t,s) x = S(t,s) x + \int_{s}^{t} G(t, \xi) R(\xi, s) x \,d\xi,\quad x \in X.$$
(3.17)

We consider the equation

$$v(t,s) = \frac{\partial}{\partial s} S(t,s) x + \int_{s}^{t} G(t, \xi) v(\xi, s) \,d\xi.$$

It is not difficult to show that this integral equation has a unique solution $v \in C(\Delta, X)$. By integration with respect to the second variable, we obtain

\begin{aligned} \int_{s}^{t} v(t, \tau) \,d\tau = & \int_{s}^{t} \biggl[\frac{\partial }{\partial\tau} S(t, \tau) x + \int_{\tau}^{t} G(t, \xi) v(\xi, \tau) \,d\xi\biggr] \,d \tau \\ = & - S(t, s) x + \int_{s}^{t} G(t, \xi) \int_{s}^{\xi} v(\xi, \tau) \,d\tau \,d\xi. \end{aligned}

By comparing with equality (3.17) we infer that ${ \int_{s}^{t} v(t, \tau) \,d\tau= - R(t,s) x}$. Differentiating this identity with respect to s yields $\frac{\partial}{\partial s} R(t,s) x = v(t,s)$. Consequently, the function $s \mapsto\frac{\partial}{\partial s} R(t,s) x$ is continuous and satisfies the equation

$$\frac{\partial}{\partial s} R(t,s) x = \frac{\partial}{\partial s} S(t,s) x + \int_{s}^{t} G(t, \xi) \frac{\partial}{\partial s} R(\xi, s) x \,d\xi$$
(3.18)

for all $x \in X$. Moreover, since ${ \frac{\partial }{\partial s} S(t,s) x|_{t = s} = -x}$, it follows from (3.18) that ${ \frac{\partial}{\partial s} R(t,s) x|_{t = s} = -x}$.

Let now $z \in D$. We consider the equation

$$v(t,s) = \frac{\partial^{2}}{\partial s^{2}} S(t,s) z + G(t, s) z + \int_{s}^{t} G(t, \xi) v(\xi, s) \,d\xi.$$

Proceeding as above we can prove that the above equation has a unique solution in $C(\Delta, X)$ and that ${ v(t,s) = \frac{\partial ^{2}}{\partial s^{2}} R(t,s) z}$.

### Corollary 3.2

Under the above conditions, for each $y \in D$, problem (3.1) with initial condition

$$x(\sigma ) = y, \qquad x^{\prime}(\sigma ) = 0,$$
(3.19)

has a unique solution $x(\cdot)$ given by

$$x(t) = - \frac{\partial}{\partial s} R(t, \sigma ) y.$$

### Proof

It follows from (1.6) that the solution $x(\cdot)$ of (3.1)-(3.19) satisfies

\begin{aligned} x(t) = & - \frac{\partial}{\partial s} S(t,\sigma ) y + \int _{\sigma }^{t} \int_{\sigma }^{s} S(t,s) P(s,\xi) x(\xi) \,d\xi \,ds \\ = & - \frac{\partial}{\partial s} S(t,\sigma ) y + \int_{\sigma }^{t} \int_{\xi}^{t} S(t,s) P(s,\xi) x(\xi) \,ds \,d\xi \\ = & - \frac{\partial}{\partial s} S(t,\sigma ) y + \int_{\sigma }^{t} G(t, \xi) x(\xi) \,d\xi, \end{aligned}

and comparing with equation (3.18) we infer that $x(t) = - \frac{\partial}{\partial s} R(t,\sigma ) y$. □

We next apply the previous construction to the study of problem (1.1)-(1.2), where $f :[0, a] \to X$ is an integrable function. We generalize to inhomogeneous equations the notion introduced in Definition 3.1.

### Definition 3.2

A function $x : [0, a] \to X$ is said to be a solution (or classical solution) of problem (1.1)-(1.2) if $x \in C([0, a], [D]) \cap C^{2}([0, a], X)$ and (1.1)-(1.2) are satisfied.

### Corollary 3.3

Under the above conditions, let $y, z \in D$, and let $f :[0, a] \to D$ be a continuous function. Then the solution $x(\cdot)$ of problem (1.1)-(1.2) is given by

$$x(t) = - \frac{\partial}{\partial s} R(t, 0) y + R(t, 0) z + \int _{0}^{t} R(t, \xi) f(\xi) \,d\xi.$$
(3.20)

### Proof

As a consequence of the previous discussion, we can assume that $y = z = 0$. Furthermore, combining the properties of $R(\cdot)$ with the fact that the operators $A(t)$ are closed, we obtain

\begin{aligned}& \frac{\partial x(t)}{\partial t} = R(t, t) f(t) + \int_{0}^{t} \frac{\partial}{\partial t} R(t, \xi) f(\xi) \,d \xi \\& \hphantom{\frac{\partial x(t)}{\partial t}}= \int_{0}^{t} \frac{\partial}{\partial t} R(t, \xi) f(\xi) \,d \xi, \\& \frac{\partial^{2} x(t)}{\partial t^{2}} = \frac{\partial }{\partial t} R(t, t) f(t) + \int_{0}^{t} \biggl[ A(t) R(t, \xi) f(\xi) + \int_{\xi}^{t} P(t, s) R(s, \xi) f(\xi) \,ds \biggr] \,d\xi \\& \hphantom{\frac{\partial^{2} x(t)}{\partial t^{2}}}= f(t) + A(t) x(t) + \int_{0}^{t} \int_{\xi}^{t} P(t, s) R(s, \xi) f(\xi) \,ds \,d\xi \\& \hphantom{\frac{\partial^{2} x(t)}{\partial t^{2}}}= f(t) + A(t) x(t) + \int_{0}^{t} \int_{0}^{s} P(t, s) R(s, \xi) f(\xi) \,d\xi \,ds \\& \hphantom{\frac{\partial^{2} x(t)}{\partial t^{2}}}= A(t) x(t) + \int_{0}^{t} P(t, s) x(s) \,ds + f(t), \end{aligned}

which completes the proof. □

By comparing Corollary 3.3 with Definition 2.1 and Theorem 2.2, we are led to establish the following concept.

### Definition 3.3

The function $x : [0, a] \to X$ given by (3.20) is said to be the mild solution of problem (1.1)-(1.2).

## Existence of solutions

In this section we study the existence of solutions for problem (1.1)-(1.2). Throughout this section we assume that the hypotheses of Theorem 3.1, Theorem 3.2 or Theorem 3.3, including condition (H2), are satisfied. As a consequence, there exists a resolvent operator $R(\cdot)$ defined on Δ. In addition, we assume that $f: [0, a] \to X$ is an integrable function that satisfies additional properties which will be specified later. Under these general conditions, our aim is to study the differentiability of the mild solution $x(\cdot)$ defined by (3.20). Assuming that $y, z \in D$, this issue reduces to studying the differentiability of the function

$$u(t) = \int_{0}^{t} R(t, \xi) f(\xi) \,d\xi.$$

It follows from (3.16) that

$$\bigl\Vert u(t)\bigr\Vert \leq M \Vert f\Vert _{L^{1}([0, a], X)}.$$
(4.1)

We begin by establishing a few general properties of the function $u(\cdot)$.

### Lemma 4.1

Let $\varphi : [0, a] \to[D]$ be a continuous function. Let $\Phi: [0, a] \to X$ be the function given by

$$\Phi(t) = \int_{0}^{t} S(t, \xi) \varphi (\xi) \,d\xi, \quad 0 \leq t \leq a.$$
(4.2)

Then $\Phi: [0, a] \to[D]$ is a continuous function.

### Proof

It is a direct consequence of the properties of the evolution operator S. □

We consider the following condition for f.

1. (F)

Let $f : [0, a] \to X$ be a function such that the function $F : [0, a] \to X$ given by

$$F(t) = \int_{0}^{t} S(t, \xi) f(\xi) \,d\xi$$

is a solution of the abstract Cauchy problem (1.5) with initial conditions $F(0) = F^{\prime}(0) = 0$.

### Lemma 4.2

Assume condition (F) holds. Then $u(t) \in D$ and $u : [0, a] \to[D]$ is continuous.

### Proof

Let $v : [0, a] \to[D]$ be a continuous function. We consider the integral equation

$$x(t) = v(t) + \int_{0}^{t} \int_{0}^{s} S(t, s) P(s, \xi) x(\xi) \,d\xi \,ds, \quad 0 \leq t \leq a,$$
(4.3)

in the space $C([0, a], [D])$. Since we are assuming that the hypotheses of Theorem 3.1, Theorem 3.2 or Theorem 3.3 are fulfilled, proceeding as in the proof of Theorem 3.1, Theorem 3.2 or Theorem 3.3, respectively, we can see that (4.3) has a unique solution. Furthermore, the solutions of equation (4.3) have the following properties:

(i) If $f : [0, a] \to[D]$ is continuous, and $v(t) = F(t)$, then $x(t) = u(t)$. In fact, substituting $R(t,s) f(s)$ from (3.5), we have

\begin{aligned} u(t) = & \int_{0}^{t} \biggl[S(t, s) f(s) + \int_{s}^{t} \int _{s}^{\tau} S(t, \tau) P(\tau, \xi) R(\xi, s) f(s) \,d\xi \,d\tau \biggr] \,ds \\ = & \int_{0}^{t} S(t, s) f(s) \,ds + \int_{0}^{t} S(t, \tau) \int _{0}^{\tau} \int_{s}^{\tau} P(\tau, \xi) R(\xi, s) f(s) \,d\xi \,ds \,d\tau \\ = & \int_{0}^{t} S(t, s) f(s) \,ds + \int_{0}^{t} S(t, \tau) \int _{0}^{\tau} P(\tau, \xi) \biggl( \int_{0}^{\xi} R(\xi, s) f(s) \,ds \biggr) \,d\xi \,d \tau \\ = & \int_{0}^{t} S(t, s) f(s) \,ds + \int_{0}^{t} S(t, \tau) \int _{0}^{\tau} P(\tau, \xi) u(\xi) \,d\xi \,d\tau, \end{aligned}

which implies that $u(t) = x(t)$.

(ii) Since D is dense in X, for an integrable function $f : [0, a] \to X$ there exists a sequence $(\varphi _{n})_{n}$ in $C([0, a], D)$ such that $\varphi _{n} \to f$, as $n \to\infty$, in the norm of $L^{1}([0, a], X)$. In fact, we can take the $\varphi _{n}$ to be piecewise linear (trapezoidal) functions with values in D. Consequently, we can assume that $\varphi _{n} \in C([0, a], [D])$. Let $\Phi_{n}$ be the function defined by (4.2) with $\varphi _{n}$ instead of φ. Let $x_{n}(\cdot)$ be the solution of equation (4.3) with $v(t) = \Phi_{n}(t)$, and let ${ u_{n}(t) = \int_{0}^{t} R(t, \xi) \varphi _{n}(\xi) \,d\xi}$.

It follows from (i) that $x_{n}(t) = u_{n}(t)$. Moreover, it is immediate from (4.1) that

$$\bigl\Vert u_{n}(t) - u(t) \bigr\Vert \leq M \Vert \varphi _{n} - f\Vert _{L^{1}([0, a], X)}.$$

Furthermore, let $x(\cdot)$ denote the solution of equation (4.3) with $v(t) = F(t)$. Since in this statement we are assuming that condition (H2) is fulfilled, proceeding as in the proof of Theorem 3.1 to obtain estimate (3.4) we can assert that

\begin{aligned} \bigl\Vert x_{n}(t) - x(t) \bigr\Vert \leq& \biggl\Vert \int_{0}^{t} S(t,s) \bigl(\varphi _{n}(s) - f(s) \bigr)\,ds \biggr\Vert \\ & {}+ \biggl\Vert \int_{0}^{t} \int_{0}^{s} S(t, s) P(s, \xi) \bigl(x_{n}( \xi) - x(\xi)\bigr) \,d\xi \,ds \biggr\Vert \\ \leq& N \Vert \varphi _{n} - f\Vert _{L^{1}([0, a], X)} + N_{2} \int _{0}^{t} \bigl\Vert x_{n}(\xi) - x( \xi)\bigr\Vert \,d\xi, \end{aligned}

which, combined with the Gronwall-Bellman lemma, shows that the sequence $(x_{n}(t))_{n}$ converges to $x(t)$ uniformly on $[0, a]$. Clearly, by the previous remark, $x(t) = u(t)$ and $u \in C([0, a], [D])$. □

### Lemma 4.3

Assume that conditions (H4) and (F) hold. Let $g :[0, a] \to X$ be the function given by

$$g(t) = \int_{0}^{t} P(t, s) u(s) \,ds,\quad 0 \leq t \leq a.$$

Then g is a Lipschitz continuous function.

### Proof

For $0 \leq t_{1} \leq t_{2} \leq a$, we have

\begin{aligned} g(t_{2}) - g(t_{1}) = & \int_{0}^{t_{2}} P(t_{2}, s) u(s) \,ds - \int _{0}^{t_{1}} P(t_{1}, s) u(s) \,ds \\ = & \int_{0}^{t_{1}} \bigl[P(t_{2}, s) - P(t_{1}, s)\bigr] u(s) \,ds + \int _{t_{1}}^{t_{2}} P(t_{2}, s) u(s) \,ds \end{aligned}

and, applying (H1) and (H3),

\begin{aligned} \bigl\Vert g(t_{2}) - g(t_{1}) \bigr\Vert \leq& \biggl\Vert \int_{0}^{t_{1}} \bigl[P(t_{2}, s) - P(t_{1}, s)\bigr] u(s) \,ds \biggr\Vert + \biggl\Vert \int_{t_{1}}^{t_{2}} P(t_{2}, s) u(s) \,ds \biggr\Vert \\ \leq& L_{P} |t_{2} -t_{1}| \int_{0}^{t_{1}} \bigl\Vert u(s) \bigr\Vert _{[D]} \,ds + b \int_{t_{1}}^{t_{2}} \bigl\Vert u(s) \bigr\Vert _{[D]} \,ds, \end{aligned}

which shows the assertion. □
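A scalar analogue of this splitting can be checked numerically. In the sketch below (the kernel p, the function u and the interval are illustrative choices, not objects from the text) we take $g(t) = \int_{0}^{t} p(t-s) u(s) \,ds$ with p Lipschitz and u bounded, and verify that the difference quotients of g stay below the constant $L_{p}\, a\, \|u\|_{\infty} + \|p\|_{\infty} \|u\|_{\infty}$ produced by the two terms of the splitting.

```python
import numpy as np

# Scalar illustration of the splitting in the proof: for
# g(t) = int_0^t p(t - s) u(s) ds with p Lipschitz (constant L_p) and u
# bounded, |g(t2) - g(t1)| <= (L_p * a * ||u|| + ||p|| * ||u||) |t2 - t1|.
# All choices below are illustrative.
a, m = 1.0, 4001
t = np.linspace(0.0, a, m)
h = t[1] - t[0]
p = lambda r: np.cos(3.0 * r)          # Lipschitz with L_p = 3, ||p|| <= 1
u = np.sin(5.0 * t) + 2.0              # a bounded "solution", ||u|| <= 3

def g(i):                              # g(t_i) by the trapezoid rule
    if i == 0:
        return 0.0
    vals = p(t[i] - t[: i + 1]) * u[: i + 1]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

gs = np.array([g(i) for i in range(m)])
lip = float(np.max(np.abs(np.diff(gs))) / h)
print(lip, 3.0 * a * 3.0 + 1.0 * 3.0)  # observed slope vs. predicted constant
```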

We are now in a position to establish our first existence result for problem (1.1)-(1.2). This result is an immediate consequence of Kozak, Theorem 2.1, and Henríquez, Corollary 3.4.

### Theorem 4.1

Let $y, z \in D$. Assume that condition (H4) holds. Assume further that the following conditions are fulfilled:

1. (i)

The space X has the Radon-Nikodým property (RNP).

2. (ii)

For each $x \in D$ the function $A(\cdot) x$ is continuously differentiable.

3. (iii)

There exists $\lambda \in\mathbb{C}$ such that $\lambda \in\rho (A(t))$ for all $t \in[0, a]$.

4. (iv)

The function f is Lipschitz continuous.

Then the function $x(\cdot)$ given by (3.20) is the solution of problem (1.1)-(1.2).

### Proof

We can assume that $y = z = 0$. With the notation introduced in Lemma 4.3, we consider the abstract Cauchy problem

\begin{aligned}& w^{\prime\prime}(t) = A(t) w(t) + g(t) + f(t), \end{aligned}
(4.4)
\begin{aligned}& w(0) = w^{\prime}(0) = 0. \end{aligned}
(4.5)

Since the function $g + f$ is Lipschitz continuous, using Henríquez, Corollary 3.4, we see that the solution of (4.4)-(4.5) is given by

$$w(t) = \int_{0}^{t} S(t, s) \bigl(g(s) + f(s) \bigr) \,ds.$$

On the other hand, let $z(\cdot)$ be the solution of equation (4.3) with $v(t) = F(t)$. It follows from Lemma 4.2 that

\begin{aligned} u(t) = & z(t) = \int_{0}^{t} S(t, s) f(s) \,ds + \int_{0}^{t} S(t, s) \int_{0}^{s} P(s, \xi) z(\xi) \,d\xi \,ds \\ = & \int_{0}^{t} S(t, s) f(s) \,ds + \int_{0}^{t} S(t, s) \int _{0}^{s} P(s, \xi) u(\xi) \,d\xi \,ds \\ = & w(t), \end{aligned}

which completes the proof. □

We finish this section with an application of Theorem 3.3.

### Theorem 4.2

Let $y, z \in D$. Assume that the conditions of Theorem 3.3 are fulfilled. Assume further that $B: [0, a] \to\mathcal{L}([D], E)$ is a strongly continuous operator valued map. Let $f : [0, a] \to X$ be a continuously differentiable function. Then the function $x(\cdot)$ given by (3.20) is the solution of problem (1.1)-(1.2).

### Proof

We proceed as in the proof of Theorem 4.1. It follows from Lemma 4.2 that $u \in C([0, a], [D])$. Applying the properties of P, we infer that the function g is continuously differentiable. Consequently, the function $g + f$ is continuously differentiable. Using the properties of $B(\cdot)$, we can easily show that the conditions of Theorem 3.12 in the cited work are satisfied. As a consequence we see that the solution of (4.4)-(4.5) is given by

$$w(t) = \int_{0}^{t} S(t, s) \bigl(g(s) + f(s) \bigr) \,ds.$$

We complete the proof arguing as in the proof of Theorem 4.1. □

## Applications

The one dimensional wave equation modeled as an abstract Cauchy problem has been studied extensively. In this section, we apply the results established previously to study the existence of solutions to a non-autonomous wave equation modeled by an integro-differential equation. To avoid technical difficulties, we consider only two simple types of wave equations.

Initially, we will study the initial value problem

\begin{aligned}& \frac{\partial^{2} w(t, \xi)}{\partial t^{2}} = \frac{\partial ^{2} w(t, \xi)}{\partial\xi^{2}} + b(t) w(t, \xi) + \int_{0}^{t} \alpha (t -s) \frac{\partial w(s, \xi)}{ \partial\xi} \,ds + \tilde{f}(t, \xi), \end{aligned}
(5.1)
\begin{aligned}& w(t, 0) = w(t, \pi) = 0, \end{aligned}
(5.2)
\begin{aligned}& w(0, \xi) = \varphi (\xi),\qquad \frac{\partial w(0, \xi)}{\partial t} = z(\xi) \end{aligned}
(5.3)

for $t \geq0$ and $0 \leq\xi\leq\pi$. Here we assume that $\alpha , b : [0, \infty) \to\mathbb{R}$ are continuously differentiable functions, and that $\tilde{f}$, φ and z satisfy appropriate conditions which will be specified later.

We model this problem in the space $X = L^{2}([0, \pi])$. For this reason, we assume that $\varphi , z \in X$. Similarly, $H^{2}([0, \pi])$ denotes the Sobolev space of functions $x : [0, \pi] \to\mathbb{C}$ such that $x^{\prime\prime} \in L^{2}([0, \pi])$.

We consider the operator $A_{0} x(\xi) = x^{\prime\prime}(\xi)$ with domain

$$D(A_{0}) = \bigl\{ x \in H^{2}\bigl([0, \pi]\bigr) : x(0) = x(\pi) = 0 \bigr\} .$$

Furthermore, we assume that the function $\tilde{f} : [0, \infty) \times[0, \pi] \to\mathbb{R}$ satisfies the following conditions:

1. (i)

For each $\xi\in[0, \pi]$, $\tilde{f}(\cdot, \xi )$ is continuous.

2. (ii)

For $t \geq0$, $\tilde{f}(t, \cdot)$ is measurable, and there exists a positive function $\eta\in L^{2}([0, \pi])$ such that $|\tilde{f}(t, \xi)| \leq\eta(\xi)$ for $\xi\in[0, \pi]$.

Under these conditions, the function $f : [0, \infty) \to X$ given by $f(t)(\xi) = \tilde{f}(t, \xi)$, for $\xi\in[0, \pi]$, is continuous.

It is well known that $A_{0}$ is the infinitesimal generator of a strongly continuous cosine function $(C_{0}(t))_{t \in\mathbb{R}}$ on X. The spectrum of $A_{0}$ consists of eigenvalues $- n^{2}$ for $n \in \mathbb{N}$, with associated eigenvectors

$$z_{n}(\xi) = \sqrt{\frac{2}{\pi}} \sin{ n \xi},\quad n \in \mathbb{N}.$$

Furthermore, the set $\{z_{n} : n \in\mathbb{N}\}$ is an orthonormal basis of X. Using this orthonormal basis, we obtain that

$$A_{0} u = \sum_{n \in\mathbb{N}} - n^{2} \langle u , z_{n} \rangle z_{n}$$

for $u \in D(A_{0})$, and the cosine function $(C_{0}(t))_{t \in \mathbb{R} }$ is given by

$$C_{0}(t) u = \sum_{n = 1}^{\infty} \cos{(n t)} \langle u , z_{n} \rangle z_{n},\quad t \in \mathbb{R},$$

with associated sine function

$$S_{0}(t) u = \sum_{n = 1}^{\infty} \frac{\sin{(n t)}}{n} \langle u, z_{n} \rangle z_{n},\quad t \in \mathbb{R}.$$
(5.4)

It is clear that $\|C_{0}(t)\| \leq1$ for all $t \in\mathbb{R}$. Thus, $C_{0}(\cdot)$ is uniformly bounded on $\mathbb{R}$.
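The series representations above can be evaluated numerically by truncation. The following sketch (the truncation level N and the sample function u are illustrative choices, not taken from the text) computes $C_{0}(t)u$ and $S_{0}(t)u$ through the basis $(z_{n})$ and exhibits the bound $\|C_{0}(t) u\| \leq \|u\|$.

```python
import numpy as np

# Finite truncation of the series for the cosine and sine families on
# X = L^2([0, pi]) with basis z_n(xi) = sqrt(2/pi) sin(n xi).
# N and the sample function u are illustrative choices.
N = 200                                      # truncation level of the series
xi = np.linspace(0.0, np.pi, 4001)
h = xi[1] - xi[0]
z = np.sqrt(2 / np.pi) * np.sin(np.outer(np.arange(1, N + 1), xi))  # z_n rows

u = xi * (np.pi - xi)                        # a sample u in D(A_0)
coeff = z @ u * h                            # <u, z_n> by quadrature

def C0(t):                                   # C_0(t)u = sum cos(nt) <u,z_n> z_n
    return (np.cos(np.arange(1, N + 1) * t) * coeff) @ z

def S0(t):                                   # S_0(t)u, formula (5.4)
    n = np.arange(1, N + 1)
    return (np.sin(n * t) / n * coeff) @ z

norm = lambda f: np.sqrt(np.sum(f ** 2) * h)  # discrete L^2([0, pi]) norm
print(norm(C0(0.7)), norm(u))                 # ||C_0(t) u|| <= ||u|| in action
```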

We take $B(t) x (\xi) = b(t) x(\xi)$ defined on X. It is easy to see that $A(t) = A_{0} + B(t)$ is a closed linear operator. Moreover, it is clear that condition (A) is fulfilled. Consequently, $A(t)$ generates an evolution operator $S(t,s)$. In addition, since $B(t) : [D(A_{0})] \to[D(A_{0})]$, we see that $B : [0, \infty) \to\mathcal{L}([D(A_{0})], E)$ is a strongly continuous map.

To complete our construction, we specify the map P. Let $P(t, s) : [D(A_{0})] \to X$ be given by

$$P(t, s) x(\xi) = \alpha (t -s) \frac{\partial x(\xi)}{\partial\xi},\quad \xi\in[0, \pi].$$

It is clear that $P : \Delta\to\mathcal{L}([D(A_{0})], X)$ is a strongly continuous map such that the map $\frac{\partial P(t,s)}{\partial t} : \Delta\to\mathcal{L}([D(A_{0})], X)$ is also strongly continuous.

Using this construction, and defining $x(t) = w(t, \cdot) \in X$, problem (5.1)-(5.3) is modeled in the abstract form (1.1)-(1.2). The following result is a consequence of Theorem 3.3 and Theorem 4.2.

### Corollary 5.1

Under the above conditions, the following properties are fulfilled:

1. (a)

There exists a resolvent operator $R(t,s)$ for equation (5.1).

2. (b)

Problem (5.1)-(5.3) has a mild solution given by

$$w(t, \xi) = - \frac{\partial R(t, 0) \varphi }{\partial s}(\xi) + R(t, 0) z (\xi) + \biggl[ \int_{0}^{t} R(t, s) f(s) \,ds \biggr](\xi).$$
(5.5)
3. (c)

If $\varphi , z \in D(A_{0})$, and $\tilde{f}$ satisfies the local Lipschitz condition

$$\bigl\vert \tilde{f}(t_{2}, \xi) - \tilde{f}(t_{1}, \xi) \bigr\vert \leq L_{a} \vert t_{2} - t_{1} \vert$$
(5.6)

for all $t_{2}, t_{1} \in[0, a]$ and $\xi\in[0, \pi]$, where $L_{a} \geq0$, then problem (5.1)-(5.3) has a classical solution given by (5.5).

### Proof

Assertion (a) is a consequence of Theorem 3.3. We only need to show that (3.15) holds. We define ${ w_{n} = \sqrt{\frac{2}{\pi}} \cos{n \xi }}$ for $n \in\mathbb{N}$ and ${ w_{0} = \frac{1}{\sqrt{\pi}}}$. Let $x \in D(A_{0})$. Then

$$x^{\prime} = \sum_{n = 1}^{\infty} \bigl\langle x^{\prime}, z_{n} \bigr\rangle z_{n},$$

where

\begin{aligned} \bigl\langle x^{\prime}, z_{n} \bigr\rangle = & \sqrt{ \frac{2}{\pi}} \int _{0}^{\pi} x^{\prime}(\xi) \sin{n \xi} \,d \xi \\ = & - \sqrt{\frac{2}{\pi}} n \int_{0}^{\pi} x(\xi) \cos{n \xi } \,d\xi \\ = & - n \langle x, w_{n} \rangle. \end{aligned}

Therefore,

$$S_{0}(t) x^{\prime} = \sum_{n = 1}^{\infty} \frac{\sin{n t}}{n} \bigl\langle x^{\prime}, z_{n} \bigr\rangle z_{n} = \sum_{n = 1}^{\infty} - \sin{(n t)} \langle x, w_{n} \rangle z_{n}.$$

Since $\{w_{n} : n \in\mathbb{N}_{0} \}$ is an orthonormal basis of X, we obtain

\begin{aligned} \bigl\Vert S_{0}(t) x^{\prime}\bigr\Vert ^{2} = & \sum_{n = 1}^{\infty} \sin^{2}{(n t)} \bigl\vert \langle x, w_{n} \rangle\bigr\vert ^{2} \\ \leq& \sum_{n = 1}^{\infty} \bigl\vert \langle x, w_{n} \rangle\bigr\vert ^{2} \\ \leq& \Vert x\Vert ^{2}. \end{aligned}

Consequently

$$\bigl\Vert S_{0}(\tau) P(s, \theta ) x\bigr\Vert \leq\bigl\vert \alpha (s - \theta )\bigr\vert \Vert x\Vert \leq \Vert \alpha \Vert _{\infty} \Vert x\Vert ,$$

which establishes (3.15).

(b) This assertion is a consequence of (a) and Definition 3.3.

(c) Since X is a Hilbert space, it satisfies the RNP. We fix $a > 0$ and study problem (5.1)-(5.3) for $t \in[0, a]$. Since the function $b(\cdot)$ is continuous, it is bounded on $[0,a]$, which implies that there exists $\lambda\in\mathbb{C}$ such that $\lambda \in \rho(A(t)) = \rho(A_{0} + b(t) I)$ for all $t \in [0, a]$. In addition,

$$\bigl\Vert f(t_{2}) - f(t_{1}) \bigr\Vert = \biggl( \int_{0}^{\pi} \bigl\vert \tilde {f}(t_{2}, \xi) - \tilde{f}(t_{1}, \xi)\bigr\vert ^{2} \,d\xi \biggr)^{1/2} \leq L_{a} \pi^{1/2} |t_{2} - t_{1}|$$

for $t_{2}, t_{1} \in[0, a]$. This shows that f is Lipschitz continuous. Consequently, the hypotheses of Theorem 4.1 are satisfied, which implies that problem (5.1)-(5.3) has a classical solution given by (5.5). □

Finally we study a non-autonomous wave equation where the operator $A(t)$ is obtained as a multiplicative perturbation of $A_{0}$. Specifically, in what follows we are concerned with the initial value problem

\begin{aligned}& \frac{\partial^{2} w(t, \xi)}{\partial t^{2}} = b(t) \frac {\partial^{2} w(t, \xi)}{\partial\xi^{2}} + \int_{0}^{t} \alpha (t -s) \frac{\partial w(s, \xi)}{\partial\xi } \,ds + \tilde{f}(t, \xi),\quad t \geq0, 0 < \xi< \pi, \end{aligned}
(5.7)
\begin{aligned}& w(t, 0) = w(t, \pi) = 0, \quad t \geq0, \end{aligned}
(5.8)
\begin{aligned}& w(0, \xi) = \varphi (\xi), \qquad \frac{\partial w(0, \xi)}{\partial t} = z(\xi), \quad 0 \leq\xi \leq\pi. \end{aligned}
(5.9)

We proceed as in the previous development. We model this problem in the space $X = L^{2}([0, \pi])$. We assume that $\varphi , z \in X$ and that α and $\tilde{f}$ satisfy the general conditions mentioned above. Moreover, we assume that $b : [0, \infty) \to\mathbb{R}$ is a continuously differentiable function such that $b(t) \geq1$ for all $t \geq0$.

We consider the operator $A(t) = b(t) A_{0}$ with domain $D(A(t)) = D(A_{0})$. Initially we will show that $A(t)$ generates an evolution operator $S(t,s)$. It is known (Theorem 9.3.16 in the cited literature) that the solution of the scalar initial value problem

\begin{aligned}& r^{\prime\prime} (t) + n^{2} b(t) r(t) = 0, \quad t \geq s, \end{aligned}
(5.10)
\begin{aligned}& r(s) = 0, \qquad r^{\prime} (s) = 1, \end{aligned}
(5.11)

satisfies

\begin{aligned}& \bigl\vert r(t, s)\bigr\vert \leq \frac{1}{\sqrt{b(s)} n}, \end{aligned}
(5.12)
\begin{aligned}& \biggl\vert \frac{\partial r(t, s)}{\partial s} \biggr\vert \leq 1 \end{aligned}
(5.13)

for $t \geq s$.

We denote by $r_{n}(t,s)$ the solution of (5.10)-(5.11). We define

$$S(t,s) x = \sum_{n =1}^{\infty} r_{n}(t,s) \langle x, z_{n} \rangle z_{n}.$$

It follows from estimates (5.12)-(5.13) that $S(t,s) : X \to X$ is well defined. It is not difficult to show that S is an evolution operator.
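The construction of S can be tested numerically on the scalar problems (5.10)-(5.11). The sketch below integrates $r^{\prime\prime} + n^{2} b(t) r = 0$, $r(s) = 0$, $r^{\prime}(s) = 1$, with a classical RK4 scheme for the illustrative choice $b(t) = 1 + t$ (continuously differentiable, nondecreasing and $\geq 1$; for such b estimate (5.12) can also be verified directly by an energy argument) and checks that bound.

```python
import math

# Numerical check (illustration only) of estimate (5.12) for the solutions
# r_n(t, s) of r'' + n^2 b(t) r = 0, r(s) = 0, r'(s) = 1.  The coefficient
# b(t) = 1 + t is an illustrative choice: C^1, nondecreasing, b >= 1.
b = lambda t: 1.0 + t

def r_n(n, s=0.0, T=2.0, steps=20000):
    """Max of |r_n(t, s)| on [s, T], computed with a classical RK4 step."""
    h = (T - s) / steps
    r, v, t, rmax = 0.0, 1.0, s, 0.0           # (r, r') with data (0, 1)
    f = lambda t, r, v: (v, -n * n * b(t) * r)
    for _ in range(steps):
        k1r, k1v = f(t, r, v)
        k2r, k2v = f(t + h / 2, r + h / 2 * k1r, v + h / 2 * k1v)
        k3r, k3v = f(t + h / 2, r + h / 2 * k2r, v + h / 2 * k2v)
        k4r, k4v = f(t + h, r + h * k3r, v + h * k3v)
        r += h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
        rmax = max(rmax, abs(r))
    return rmax

# Estimate (5.12) predicts |r_n(t, s)| <= 1 / (sqrt(b(s)) n).
for n in (1, 5, 20):
    print(n, r_n(n), 1.0 / (math.sqrt(b(0.0)) * n))
```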

Using again that every Hilbert space satisfies the RNP we get the following consequence.

### Corollary 5.2

Under the above conditions, the following properties are fulfilled.

1. (a)

There exists a resolvent operator $R(t, s)$ for problem (5.7)-(5.8).

2. (b)

There exists a mild solution of (5.7)-(5.9) given by (5.5).

3. (c)

If $\varphi , z \in D(A_{0})$, and $\tilde{f}$ satisfies the local Lipschitz condition (5.6), then there exists a classical solution of (5.7)-(5.9) given by (5.5).

### Proof

Assertion (a) is a consequence of Theorem 3.2. Conditions (H3)-(H4) are immediate consequences of the definition of P and the properties of $\alpha (\cdot )$. It only remains to show that estimate (3.8) holds. We proceed as in the proof of Corollary 5.1.

Let $x \in D(A_{0})$. Then

\begin{aligned} S(t, s) x^{\prime} = & \sum_{n = 1}^{\infty} r_{n}(t,s) \bigl\langle x^{\prime}, z_{n} \bigr\rangle z_{n} \\ = & \sum_{n = 1}^{\infty} - r_{n}(t,s) n \langle x, w_{n} \rangle z_{n}. \end{aligned}

Using again that $\{w_{n} : n \in\mathbb{N}_{0} \}$ is an orthonormal basis of X, and applying (5.12), we obtain

\begin{aligned} \bigl\Vert S(t, s) x^{\prime}\bigr\Vert ^{2} = & \sum _{n = 1}^{\infty} r_{n}(t,s)^{2} n^{2} \bigl\vert \langle x, w_{n} \rangle\bigr\vert ^{2} \\ \leq& \frac{1}{b(s)} \sum_{n = 1}^{\infty} \bigl\vert \langle x, w_{n} \rangle\bigr\vert ^{2} \\ \leq& \frac{1}{b(s)} \Vert x\Vert ^{2}. \end{aligned}

Consequently

$$\bigl\Vert S(t, s) P(s, \theta ) x\bigr\Vert \leq\frac{|\alpha (s - \theta )|}{b(s)} \Vert x\Vert \leq \Vert \alpha \Vert _{\infty} \Vert x\Vert ,$$

which establishes (3.8). We complete the proof as in the proof of Corollary 5.1. □

## References

1. Diagana, T: Existence results for some damped second-order Volterra integro-differential equations. Appl. Math. Comput. 237, 304-317 (2014)
2. Vijayakumar, V, Sivasankaran, S, Arjunan, MM: Existence of global solutions for second order impulsive abstract functional integro-differential equations. Dyn. Contin. Discrete Impuls. Syst., Ser. A Math. Anal. 18(6), 747-766 (2011)
3. Arthi, G, Balachandran, K: Controllability of second-order impulsive evolution systems with infinite delay. Nonlinear Anal. Hybrid Syst. 11, 139-153 (2014)
4. Arthi, G, Park, JH, Jung, HY: Existence and controllability results for second-order impulsive stochastic evolution systems with state-dependent delay. Appl. Math. Comput. 248, 328-341 (2014)
5. Batty, CJK, Chill, R, Srivastava, S: Maximal regularity for second order non-autonomous Cauchy problems. Stud. Math. 189(3), 205-223 (2008)
6. Bochenek, J: Existence of the fundamental solution of a second order evolution equation. Ann. Pol. Math. LXVI, 15-35 (1997)
7. Faraci, F, Iannizzotto, A: A multiplicity theorem for a perturbed second-order non-autonomous system. Proc. Edinb. Math. Soc. 49, 267-275 (2006)
8. Ha, J, Nakagiri, S-I, Tanabe, H: Gateaux differentiability of solution mappings for semilinear second-order evolution equations. J. Math. Anal. Appl. 310, 518-532 (2005)
9. Ha, J, Nakagiri, S-I, Tanabe, H: Frechet differentiability of solution mappings for semilinear second order evolution equations. J. Math. Anal. Appl. 346, 374-383 (2008)
10. Henríquez, HR, Poblete, V, Pozo, JC: Mild solutions of non-autonomous second order problems with nonlocal initial conditions. J. Math. Anal. Appl. 412, 1064-1083 (2014)
11. Kozak, M: A fundamental solution of a second-order differential equation in a Banach space. Univ. Iagel. Acta Math. XXXII, 275-289 (1995)
12. Lin, Y: Time-dependent perturbation theory for abstract evolution equations of second order. Stud. Math. 130(3), 263-274 (1998)
13. Lutz, D: On bounded time-dependent perturbations of operator cosine functions. Aequ. Math. 23, 197-203 (1981)
14. Obrecht, E: Evolution operators for higher order abstract parabolic equations. Czechoslov. Math. J. 36(2), 210-222 (1986)
15. Obrecht, E: The Cauchy problem for time-dependent abstract parabolic equations of higher order. J. Math. Anal. Appl. 125, 508-530 (1987)
16. Peng, Y, Xiang, X: Second-order nonlinear impulsive time-variant systems with unbounded perturbation and optimal controls. J. Ind. Manag. Optim. 4(1), 17-32 (2008)
17. Peng, Y, Xiang, X, Wei, W: Second-order nonlinear impulsive integro-differential equations of mixed type with time-varying generating operators and optimal controls on Banach spaces. Comput. Math. Appl. 57, 42-53 (2009)
18. Serizawa, H, Watanabe, M: Time-dependent perturbation for cosine families in Banach spaces. Houst. J. Math. 12(4), 579-586 (1986)
19. Winiarska, T: Evolution equations of second order with operator dependent on t. Sel. Probl. Math. Cracow Univ. Tech. 6, 299-314 (1995)
20. Winiarska, T: Quasilinear evolution equations with operators dependent on t. Mat. Stud. 21(2), 170-178 (2004)
21. Henríquez, HR: Existence of solutions of non-autonomous second order functional differential equations with infinite delay. Nonlinear Anal. 74, 3333-3352 (2011)
22. Diestel, J, Uhl, JJ: Vector Measures. Am. Math. Soc., Providence (1972)
23. Fattorini, HO: Second Order Linear Differential Equations in Banach Spaces. North-Holland Mathematics Studies, vol. 108. North-Holland, Amsterdam (1985)
24. Haase, M: The Functional Calculus for Sectorial Operators. Birkhäuser, Basel (2006)
25. Henríquez, HR, Pierri, M, Rolnik, V: Pseudo S-asymptotically periodic solutions of second-order abstract Cauchy problems. Appl. Math. Comput. 274, 590-603 (2016)
26. Piskarev, SI: Evolution equations in Banach spaces. Theory of cosine operator functions. http://www.icmc.usp.br/~andcarva/minicurso.pdf (2004). Accessed 29 Dec 2011
27. Sivasankaran, S, Arjunan, MM, Vijayakumar, V: Existence of global solutions for second order impulsive abstract partial differential equations. Nonlinear Anal. TMA 74, 6747-6757 (2011)
28. Travis, CC, Webb, GF: Second order differential equations in Banach space. In: Proc. Internat. Sympos. on Nonlinear Equations in Abstract Spaces, pp. 331-361. Academic Press, New York (1987)
29. Vasilev, VV, Piskarev, SI: Differential equations in Banach spaces II. Theory of cosine operator functions. J. Math. Sci. 122, 3055-3174 (2004)
30. Kisyński, J: On cosine operator functions and one parameter group of operators. Stud. Math. 49, 93-105 (1972)
31. Xiao, T-J, Liang, J: The Cauchy Problem for Higher-Order Abstract Differential Equations. Lecture Notes in Math., vol. 1701. Springer, Berlin (1998)
32. Novo, S, Obaya, R, Rojo, J: Ecuaciones y Sistemas Diferenciales. McGraw-Hill, Madrid (1995)

## Acknowledgements

The research of Hernán R Henríquez was supported in part by CONICYT under Grant FONDECYT 1130144 and DICYT-USACH, and the research of Juan C Pozo was supported in part by CONICYT, under Grant FONDECYT 3140103.

## Author information

Correspondence to Hernán R Henríquez.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

This work was developed during a stay of Professor Juan C Pozo at the University of Santiago as a postdoc, sponsored by Professor Hernán R Henríquez. In this context, it is difficult to separate individual contributions, since the work emerged as the product of a joint seminar. The work was written and reviewed by both authors, who in the course of a one semester seminar were introducing numerous amendments to the original manuscript. In order to mention something specific, we can say that HR Henríquez worked especially on the properties of evolution operators for the non-autonomous second order abstract Cauchy problem, while JC Pozo worked on the existence of a resolvent for the integral equation. Both authors read and approved the final manuscript.
