
Optimal control problems for a von Kármán system with long memory

Abstract

In this paper, we study quadratic cost optimal control problems governed by a von Kármán system with long memory. We prove the existence of an optimal control for the cost. Then, by proving the strong Gâteaux differentiability of the nonlinear solution mapping, we establish a necessary optimality condition for the optimal control corresponding to the quadratic cost. Further, we study the time local uniqueness of optimal controls for distributive observation.

1 Introduction

We consider a von Kármán plate model with internal damping and long memory. In the context of control theory, early results for the von Kármán plate can be found in [1], which gives the derivation of the model and asymptotic energy estimates for the system.

In this paper, our system may be described as follows. Let Ω be an open bounded domain in \(R^{2}\) with a sufficiently smooth boundary ∂Ω. In \((0, T) \times\Omega\), we consider the following von Kármán system with long memory and clamped boundary conditions in the variables y, representing the position of the plate, and v, the Airy stress function:

$$ \textstyle\begin{cases} y_{tt} - \Delta y_{tt} + \Delta^{2} y + \int^{t}_{0} k(t-s) \Delta^{2} y(s)\,ds = [y, v] + f\quad\text{in }Q = (0, T) \times\Omega, \\ \Delta^{2} v = -[y, y ] \quad\text{in } Q = (0, T) \times\Omega, \\ y= \frac{\partial y}{\partial\nu} = v = \frac{\partial v}{\partial\nu } = 0 \quad \text{on } \Sigma= (0, T) \times\partial\Omega, \\ y(0,x) = y_{0}(x),\qquad y_{t} (0,x) = y_{1} (x) \quad \text{in } \Omega, \end{cases} $$
(1.1)

where ν denotes the outward unit normal to ∂Ω, \(k \in C^{1}([0,T])\) is a memory kernel, f is a forcing function, and the von Kármán bracket is given by

$$[\psi, \phi] = \frac{\partial^{2} \psi}{\partial x^{2}_{1}}\frac{\partial^{2} \phi}{\partial x^{2}_{2}} + \frac{\partial^{2} \phi}{\partial x^{2}_{1}} \frac{\partial^{2} \psi}{\partial x^{2}_{2}} -2 \frac{\partial^{2} \psi}{\partial x_{1} \,\partial x_{2}} \frac{\partial^{2} \phi}{\partial x_{1} \,\partial x_{2}}. $$
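
In particular, taking \(\phi= \psi\) gives twice the Hessian determinant of ψ,

$$[\psi, \psi] = 2 \biggl( \frac{\partial^{2} \psi}{\partial x^{2}_{1}} \frac{\partial^{2} \psi}{\partial x^{2}_{2}} - \biggl( \frac{\partial^{2} \psi}{\partial x_{1} \,\partial x_{2}} \biggr)^{2} \biggr) = 2 \det \bigl( D^{2} \psi \bigr), $$

which is the form in which the bracket enters the equation for the Airy stress function v in (1.1).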

The aim of this paper can be summarized as follows.

Firstly, we survey the well-posedness of Eq. (1.1) with respect to y in the Hadamard sense, relying on previous results; to name just a few, we can refer to [2-4] and the references therein. In particular, in order to prove the local Lipschitz continuity of the solution mapping, we employ the energy equality for Volterra-type integro-differential equations proved in [5].

Secondly, based on this result, we study the following optimal control problem:

$$ \text{Minimize}\quad J(u) $$
(1.2)

subject to

$$ \textstyle\begin{cases} y_{tt}(u) - \Delta y_{tt}(u) + \Delta^{2} y(u) + \int^{t}_{0} k(t-s) \Delta ^{2} y(u;s)\,ds = [y(u), v(u)] + Bu \quad\mbox{in } Q, \\ \Delta^{2} v(u) = -[y(u), y(u) ] \quad\mbox{in } Q, \\ y(u)= \frac{\partial y(u)}{\partial\nu} = v(u) = \frac{\partial v(u)}{\partial\nu} = 0 \quad \mbox{on } \Sigma, \\ y(u;0,x) = y_{0}(x),\qquad y_{t} (u;0,x) = y_{1} (x) \quad \mbox{in } \Omega, \end{cases} $$
(1.3)

where B is a controller, u is a control, J is a quadratic cost function, \(y(u)\) denotes the state for a given \(u \in{ \mathcal {U}}\), and \({\mathcal {U}}\) is a Hilbert space of control variables. In order to apply the variational approach due to Lions [6] to our problem, we propose the quadratic cost functional J as studied in Lions [6], which is to be minimized within \({\mathcal {U}}_{\mathrm{ad}}\), an admissible set of control variables in \({\mathcal {U}}\).

The quadratic cost optimal control problem consists of two parts: showing the existence of an optimal control and deriving a necessary condition for the optimal control.

We need to show the existence of \(u^{*} \in {\mathcal {U}}_{\mathrm{ad}} \) that minimizes the quadratic cost functional J. However, differently from the linear case, we are faced with the difficulty that the weak convergence of the controlled states \(y(u_{n} )\) is insufficient to pass to the limit in the nonlinear part of Eq. (1.3). Therefore, it is necessary to improve the convergence of the controlled states \(y(u_{n} )\). To this end, we adapt the idea of Dautray and Lions ([7], pp.578-581), namely the strong convergence result established for linear evolution equations. This method is also quite useful in proving the strong Gâteaux differentiability of the nonlinear solution mapping \(u \rightarrow y(u)\), which is used to define the associated adjoint system. Then, we establish a necessary condition of optimality for the optimal control \(u^{*}\) in a physically meaningful observation case by employing the associated adjoint system.

To the best of our knowledge, this is a newly developed method.

In fact, the extension of optimal control theory to quasilinear equations is not easy. Only a few works have been devoted to the study of optimal control or identification problems for specific quasilinear equations. For instance, we can refer to Hwang and Nakagiri [8, 9] and Hwang [10, 11].

Moreover, in this paper, we discuss the time local uniqueness of the optimal control for distributive observation. As is widely known, the uniqueness of optimal controls in nonlinear control problems is unclear and difficult to verify.

Following the idea in [12], we show the strict convexity of the quadratic cost J for distributive observation on a local time interval by making use of the second-order Gâteaux differentiability of the nonlinear solution mapping \(u \rightarrow y(u)\). As a consequence, we prove the time local uniqueness of the optimal control. This is another novelty of the paper.

2 Notation and preliminaries

If X is a Banach space, we denote by \(X'\) its topological dual and by \(\langle\cdot, \cdot \rangle_{X', X}\) the duality pairing between \(X'\) and X. We introduce the following abbreviations:

$$L^{p} = L^{p} (\Omega),\qquad W^{k, p} = W^{k,p} (\Omega),\qquad \Vert \cdot \Vert _{p} = \Vert \cdot \Vert _{L^{p} },\qquad \Vert \cdot \Vert = \Vert \cdot \Vert _{L^{2} }, $$

with \(p \ge1\), where \(W^{k, p}\) is the \(L^{p} \)-based Sobolev space. When \(p=2\), the space becomes a Hilbert space, and we use the special notation \(H^{k}\) to denote \(W^{k, 2}\) for \(k \ge1\); moreover, \(H^{k}_{0}\) denotes the completion of \(C^{\infty}_{0}(\Omega)\) in \(H^{k}\) for \(k \ge1\).

We denote the scalar product on \(L^{2}\) by \((\cdot,\cdot)_{2}\). Then the scalar products on \(H^{k}_{0}\) (\(k=1,2\)) are given as follows:

$$\begin{aligned}& \bigl(( \psi, \phi) \bigr)_{H^{1}_{0}} = (\nabla\psi, \nabla\phi )_{2};\quad \forall\psi, \phi\in H^{1}_{0}, \\& \bigl(( \psi, \phi) \bigr)_{H^{2}_{0}} = (\Delta\psi, \Delta\phi )_{2};\quad \forall\psi, \phi\in H^{2}_{0}. \end{aligned}$$

Then obviously,

$$\Vert \psi \Vert _{H^{1}_{0}} = \Vert \nabla\psi \Vert ,\quad \forall\psi \in H^{1}_{0},\qquad \Vert \phi \Vert _{H^{2}_{0}} = \Vert \Delta\phi \Vert ,\quad \forall\phi\in H^{2}_{0}, $$

and \(D(\Delta^{2} ) = H^{4} \cap H^{2}_{0}\).

In particular, the duality pairings between \(H^{k}_{0}\) and \(H^{-k}\) (\(k =1,2\)) are abbreviated as \(\langle\cdot, \cdot\rangle_{k, -k}\). It is clear that \(H^{2}_{0} \hookrightarrow H^{1}_{0} \hookrightarrow L^{2} \hookrightarrow H^{-1} \hookrightarrow H^{-2}\), each space is dense in the next one, and the injections are continuous.

It is well known that the biharmonic operator

$$\Delta^{2}: H^{4} \cap H^{2}_{0} \to L^{2} $$

is bijective and admits an isometric extension

$$\Delta^{2}: H^{2}_{0} \to H^{-2}. $$

Thus, we can define the operator \(G \in {\mathcal {L}}(L^{2}, H^{4} \cap H^{2}_{0})\) (or \({\mathcal {L}}(H^{-2}, H^{2}_{0})\)) by

$$ G f = g\quad\mbox{iff}\quad \Delta^{2} g = f\quad \mbox{in } \Omega,\qquad g = \frac{\partial g}{\partial\nu} = 0\quad\mbox{on } \partial\Omega. $$
(2.1)

Therefore, from Eq. (1.1) we can also note that

$$ v = -G [y, y]\quad \forall y \in H^{2}_{0}. $$
(2.2)
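
We also record the symmetry of G, which will be used in Section 4.2: for \(f, g \in L^{2}\), we have \(Gf, Gg \in H^{4} \cap H^{2}_{0}\), and two integrations by parts (the boundary terms vanish by the clamped conditions in (2.1)) give

$$(G f, g)_{2} = \bigl(G f, \Delta^{2} G g \bigr)_{2} = (\Delta G f, \Delta G g)_{2} = \bigl(\Delta^{2} G f, G g \bigr)_{2} = (f, G g)_{2}, $$

so that G is self-adjoint on \(L^{2}\).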

We further collect some results for the Airy stress function and von Kármán bracket.

Lemma 2.1

The trilinear form \(b: H^{2}_{0} \times H^{2}_{0} \times H^{2}_{0} \to R\) given by

$$b ( \psi, \phi, \varphi) \equiv \bigl([\psi, \phi], \varphi \bigr)_{2} $$

satisfies the property

$$b ( \psi, \phi, \varphi) = b (\psi, \varphi, \phi). $$

Proof

See [13]. □
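
Since the bracket itself is symmetric, \([\psi, \phi] = [\phi, \psi]\), Lemma 2.1 implies that b is in fact invariant under every permutation of its three arguments,

$$b ( \psi, \phi, \varphi) = b ( \phi, \psi, \varphi) = b ( \varphi, \phi, \psi), $$

a property used repeatedly in Section 4.2 (for instance, in (4.63)).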

Lemma 2.2

  1. (1)

    [3, 14] The bilinear forms \((\psi, \phi) \to G [\psi, \phi] \) from \(H^{2} \times H^{2}\) into \(W^{2, \infty}\) and \((\psi, \phi) \to [\psi, \phi] \) from \(H^{1} \times H^{2}\) into \(H^{-2}\) are continuous. We also have the following estimates:

    $$\begin{aligned}& \bigl\Vert G [ \psi, \phi] \bigr\Vert _{W^{2, \infty}} \le C \Vert \psi \Vert _{H^{2}} \Vert \phi \Vert _{H^{2}},\quad \psi, \phi\in H^{2}, \end{aligned}$$
    (2.3)
    $$\begin{aligned}& \bigl\Vert [\psi, \phi] \bigr\Vert _{H^{-2} } \le C \Vert \psi \Vert _{H^{1}} \Vert \phi \Vert _{H^{2}},\quad \psi \in H^{1}, \phi\in H^{2}. \end{aligned}$$
    (2.4)

    Consequently,

    $$ \bigl\Vert \bigl[\varphi, G[\psi, \phi] \bigr] \bigr\Vert \le C \Vert \varphi \Vert _{H^{2}}\Vert \psi \Vert _{H^{2}} \Vert \phi \Vert _{H^{2}},\quad \varphi, \psi, \phi\in H^{2}. $$
    (2.5)
  2. (2)

    [2], Lemma 3.2. The bilinear form \([\cdot, \cdot ]: H^{2}_{0} \times H^{2}_{0} \to H^{-1-\epsilon} \) given by

    $$(\psi, \phi) \to[\psi, \phi] $$

    is continuous for every \(\epsilon> 0\). Moreover,

    $$\bigl\Vert [\psi, \phi] \bigr\Vert _{H^{-1-\epsilon}} \le C \Vert \psi \Vert _{H^{2}_{0} } \Vert \phi \Vert _{H^{2}_{0}}. $$

3 Von Kármán equation with long memory

The solution Hilbert space \(W(0, T)\) of Eq. (1.1) is defined by

$$W(0, T) = \bigl\{ \psi | \psi\in L^{2} \bigl(0, T; H^{2}_{0} \bigr), \psi' \in L^{2} \bigl(0, T; H^{1}_{0} \bigr), \psi'' \in L^{2} \bigl(0, T; L^{2} \bigr) \bigr\} $$

endowed with the norm

$$\Vert \psi \Vert _{W(0, T)} = \bigl( \Vert \psi \Vert ^{2}_{L^{2} (0, T; H^{2}_{0})} + \bigl\Vert \psi ' \bigr\Vert ^{2}_{L^{2} (0, T; H^{1}_{0})} + \bigl\Vert \psi'' \bigr\Vert ^{2}_{L^{2}(0, T; L^{2})} \bigr)^{\frac{1}{2}}. $$

Definition 3.1

We say that a function y is a weak solution of Eq. (1.1) if \(y \in W(0, T)\) and satisfies

$$ \textstyle\begin{cases} \langle y''(\cdot) - \Delta y''(\cdot), \phi \rangle_{-2, 2} + ( \Delta y(\cdot) + k * \Delta y(\cdot), \Delta\phi )_{2} = ([y(\cdot), v(\cdot)]+ f(\cdot), \phi)_{2}, \\ (\Delta v(\cdot), \Delta\phi)_{2} = -([y(\cdot), y(\cdot)], \phi)_{2}\quad \mbox{for all } \phi\in H^{2}_{0} \mbox{ in the sense of } {\mathcal {D}}'(0,T),\\ y(0)=y_{0},\qquad y'(0)=y_{1}. \end{cases} $$
(3.1)
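
Here and in what follows, ∗ denotes the finite convolution in time,

$$(k * w) (t) = \int^{t}_{0} k(t-s) w(s)\,ds,\quad t \in[0, T], $$

applied in (3.1) with \(w = \Delta y\).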

In the sequel, we give an important energy equality for weak solutions of Eq. (1.1). However, we are faced with a difficulty concerning the regularity of weak solutions of Eq. (1.1), namely that \(y' \) does not in general belong to \(H^{2}_{0}\). In order to overcome this difficulty, we employ the idea of Lions and Magenes [15], pp.276-279, namely, the double regularization method used for linear hyperbolic equations. We also note that the method has been applied in [5], Proposition 2.1, to study a semilinear second-order integro-differential equation.

Lemma 3.1

Let X, Y be two Banach spaces with \(X \subset Y\) densely and X reflexive. Set

$$\begin{aligned} C_{s} \bigl([0, T]; Y \bigr) =& \bigl\{ \psi\in L^{\infty}(0, T; Y) |\forall \phi \in Y', t \to\langle\psi, \phi \rangle_{Y, Y'} \\ &{} \textit{is continuous from } [0, T] \textit{ to } R \bigr\} . \end{aligned}$$

Then

$$L^{\infty} (0, T; X) \cap C_{s} \bigl([0, T]; Y \bigr) = C_{s} \bigl([0, T]; X \bigr). $$

Proof

See [15], p.275. □

Lemma 3.2

Assume that y is a weak solution of Eq. (1.1). Then we can assert (after possibly a modification on a set of measure zero) that

$$ y \in C_{s} \bigl([0, T]; H^{2}_{0} \bigr),\qquad y' \in C_{s} \bigl([0, T]; H^{1}_{0} \bigr). $$
(3.2)

Proof

Assume that y is a weak solution of Eq. (1.1). Then by referring to the results as in [2] (cf. [4]) we have

$$ y \in L^{\infty} \bigl(0, T; H^{2}_{0} \bigr),\qquad y' \in L^{\infty} \bigl(0, T; H^{1}_{0} \bigr). $$
(3.3)

From the inclusion \(W(0, T) \subset C([0, T]; H^{1}_{0} ) \cap C^{1} ( [0, T]; L^{2})\) (see [7]) and also from \(C([0, T]; H^{k}_{0}) \subset C_{s}([0, T]; H^{k}_{0} )\) (\(k=1,2\)) we can obtain by (3.3) that

$$y \in L^{\infty} \bigl(0, T; H^{2}_{0} \bigr) \cap C_{s} \bigl([0, T]; H^{1}_{0} \bigr),\qquad y_{t} \in L^{\infty} \bigl(0, T; H^{1}_{0} \bigr) \cap C_{s} \bigl([0, T]; L^{2} \bigr). $$

Thus, by Lemma 3.1 we have (3.2). □

Proposition 3.1

Assume that y is a weak solution of Eq. (1.1). Then, for each \(t \in[0, T]\), we have the energy equality

$$\begin{aligned}& \bigl\Vert y' (t) \bigr\Vert ^{2} + \bigl\Vert \nabla y'(t) \bigr\Vert ^{2} + \bigl\Vert \Delta y (t) \bigr\Vert ^{2} + \frac {1}{2} \bigl\Vert \Delta v (t) \bigr\Vert ^{2} \\& \quad = - 2 \bigl( k * \Delta y(t), \Delta y(t) \bigr)_{2} \\& \qquad{}+ 2 \int^{t}_{0} \bigl(k' * \Delta y(s), \Delta y(s) \bigr)_{2} \,ds + 2 \int^{t}_{0} k(0) \bigl\Vert \Delta y(s) \bigr\Vert ^{2} \,ds \\& \qquad{} + 2 \int^{t}_{0} \bigl(f(s), y' (s) \bigr)_{2} \,ds + \Vert y_{1} \Vert ^{2} + \Vert \nabla y_{1} \Vert ^{2} + \Vert \Delta y_{0} \Vert ^{2} + \frac{1}{2} \Vert \Delta v_{0} \Vert ^{2}, \end{aligned}$$
(3.4)

where \(\Delta v_{0} = - \Delta^{-1}[y_{0}, y_{0}]\).

Proof

By Lemma 3.2 and the uniform boundedness theorem we have \(y(t) \in H^{2}_{0} \) and \(y'(t) \in H^{1}_{0} \) for all \(t \in[0, T]\). Thus, all terms in (3.4) are meaningful for all \(t \in[0, T]\). Then, we can proceed with the proof as in [5], Proposition 2.1. By regarding f in [5], Proposition 2.1, as \([y, v] + f\) in Eq. (1.1), we can infer from [5], Proposition 2.1, that the weak solution y of Eq. (1.1) satisfies

$$\begin{aligned}& \bigl\Vert y' (t) \bigr\Vert ^{2} + \bigl\Vert \nabla y'(t) \bigr\Vert ^{2} + \bigl\Vert \Delta y (t) \bigr\Vert ^{2} + 2 \bigl( k * \Delta y(t), \Delta y(t) \bigr)_{2} \\ & \quad = 2 \int^{t}_{0} \bigl(k' * \Delta y(s), \Delta y(s) \bigr)_{2} \,ds + 2 \int^{t}_{0} k(0) \bigl\Vert \Delta y(s) \bigr\Vert ^{2} \,ds \\ & \qquad{} + 2 \int^{t}_{0} \bigl( \bigl[y(s), v(s) \bigr] + f(s), y' (s) \bigr)_{2} \,ds + \Vert y_{1} \Vert ^{2} + \Vert \nabla y_{1} \Vert ^{2} + \Vert \Delta y_{0} \Vert ^{2}. \end{aligned}$$
(3.5)

By Lemma 2.1, (2.4), and (3.2) we can obtain for every fixed \(t \in[0, T]\) that

$$\begin{aligned}& 2 \int^{t}_{0} \bigl( \bigl[y(s), v(s) \bigr], y' (s) \bigr)_{2} \,ds \\ & \quad = 2 \lim_{h \to0} \int^{t}_{0} \biggl( \bigl[y(s), v(s) \bigr], \int^{1}_{0} {\hat{y}}'(s+ \theta h) \,d \theta \biggr)_{2} \,ds \\ & \quad = 2 \lim_{h \to0} \int^{t}_{0} \biggl( \biggl[y(s), \int^{1}_{0} {\hat{y}}'(s+ \theta h) \,d \theta \biggr], v(s) \biggr)_{2} \,ds \\ & \quad = 2 \int^{t}_{0} \bigl\langle \bigl[y(s), y'(s) \bigr], v(s) \bigr\rangle _{-2, 2} \,ds \\ & \quad = - \int^{t}_{0} \bigl\langle \Delta^{2} v'(s), v(s) \bigr\rangle _{-2, 2} \,ds \\ & \quad = - \int^{t}_{0} \frac{1}{2} \frac{d}{ds} \bigl\Vert \Delta v(s) \bigr\Vert ^{2} \,ds \\ & \quad = - \frac{1}{2} \bigl\Vert \Delta v(t) \bigr\Vert ^{2} + \frac{1}{2} \Vert \Delta v_{0} \Vert ^{2}, \end{aligned}$$
(3.6)

where \({\hat{y}}'(\cdot) = y'(\cdot) {\mathcal {X}}_{[0, t]} (\cdot)\).
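
In the fourth equality of (3.6) we used that differentiating the relation \(\Delta^{2} v = -[y, y]\) in time, together with the bilinearity and symmetry of the bracket, gives

$$\Delta^{2} v' = - \bigl[y', y \bigr] - \bigl[y, y' \bigr] = -2 \bigl[y, y' \bigr], $$

so that \(2 [y, y'] = - \Delta^{2} v'\).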

Thus, we have (3.4). □

It follows from the assumptions on f and k that the right-hand side of (3.4) is continuous in t. Hence, \(t \to \Vert \nabla y'(t) \Vert + \Vert \Delta y(t) \Vert \) is continuous on \([0, T]\). Therefore, as in the proof of Lions and Magenes [15], p.279, we have

$$y \in C \bigl([0, T]; H^{2}_{0} \bigr) \cap C^{1} \bigl([0, T]; H^{1}_{0} \bigr). $$

Theorem 3.1

Assume that \((y_{0}, y_{1} ) \in H^{2}_{0} \times H^{1}_{0}\), \(k \in C^{1}([0, T])\), and \(f \in L^{2}(0, T; L^{2})\). Then Eq. (1.1) has a unique weak solution y in \({ S}(0,T) \equiv W (0, T) \cap C([0, T]; H^{2}_{0}) \cap C^{1} ([0, T]; H^{1}_{0})\). Moreover, the solution mapping \(p=(y_{0}, y_{1}, f) \to( y(p), y_{t} (p), v(p))\) of \({\mathcal {P}} \equiv H^{2}_{0} \times H^{1}_{0} \times L^{2}(0, T; L^{2}) \) into \(C([0, T]; H^{2}_{0} ) \times C([0, T]; H^{1}_{0}) \times C([0, T]; W^{2, \infty}) \) is locally Lipschitz continuous.

Indeed, let \(p_{1}= (y_{0}^{1}, y^{1}_{1}, f_{1}) \in{\mathcal {P}}\) and \(p_{2}= (y_{0}^{2}, y^{2}_{1}, f_{2}) \in{\mathcal {P}}\). We prove Theorem 3.1 by showing the inequality

$$\begin{aligned}& \bigl\Vert \nabla \bigl(y' (p_{1};t)-y' (p_{2};t) \bigr) \bigr\Vert + \bigl\Vert \Delta \bigl(y(p_{1};t)-y(p_{2};t) \bigr) \bigr\Vert + \bigl\Vert v(p_{1};t) - v(p_{2};t) \bigr\Vert _{W^{2, \infty}} \\ & \quad \leq C \Vert p_{1} - p_{2} \Vert _{{\mathcal {P}}}, \end{aligned}$$
(3.7)

where \(C > 0\) is a constant depending on data, and

$$\Vert p_{1} - p_{2} \Vert _{{\mathcal {P}}} = \bigl( \bigl\Vert y_{0}^{1} - y_{0}^{2} \bigr\Vert ^{2}_{H^{2}_{0}} + \bigl\Vert y_{1}^{1} - y_{1}^{2} \bigr\Vert ^{2}_{H^{1}_{0}} + \Vert f_{1} - f_{2} \Vert ^{2}_{L^{2} (0, T; L^{2})} \bigr)^{\frac{1}{2}}. $$

When no confusion can arise, we omit the integration variables in definite integrals.

Proof of Theorem 3.1

For the well-posedness of weak solutions of Eq. (1.1), we can refer to [3, 4] (without the memory term in Eq. (1.1)) and [2] (with the memory term but without the viscosity damping term \(- \Delta y_{tt}\) in Eq. (1.1)). As explained in [2], the von Kármán nonlinearity is subcritical; thus, the issues of well-posedness and regularity of weak solutions are standard. Therefore, combining the results of [3, 4] and [2], we can deduce that Eq. (1.1) possesses a unique weak solution \(y \in{ S}(0,T) \) under the data condition \(p=(y_{0}, y_{1}, f) \in H^{2}_{0} \times H^{1}_{0} \times L^{2}(0, T; L^{2}) \) such that

$$ \Vert y \Vert _{S(0, T)} \le C \Vert p \Vert _{{\mathcal {P}}}. $$
(3.8)

Based on this result, we prove inequality (3.7). For this purpose, we denote \(y_{1} - y_{2} \equiv y(p_{1})- y(p_{2})\) by ψ and \(v_{1} - v_{2} \equiv v(p_{1})- v(p_{2})\) by V. Then, we can get from Eq. (1.1) that ψ and V satisfy the following equation in weak sense:

$$ \textstyle\begin{cases} \psi_{tt} - \Delta\psi_{tt} + \Delta^{2} \psi+ \int^{t}_{0} k(t-s) \Delta ^{2} \psi(s)\,ds = [\psi, v_{1}] + [y_{2}, V] + f_{1} - f_{2}\quad \mbox{in } Q, \\ \Delta^{2} V = -[\psi, y_{1} + y_{2} ] \quad\mbox{in } Q, \\ \psi= \frac{\partial\psi}{\partial\nu} = V = \frac{\partial V}{\partial\nu} = 0 \quad\mbox{on } \Sigma, \\ \psi(0) = y^{1}_{0} - y^{2}_{0},\qquad \psi_{t} (0) = y^{1}_{1} - y^{2}_{1} \quad\mbox{in } \Omega. \end{cases} $$
(3.9)

We note that

$$\begin{aligned}{} [y_{2}, V] = \bigl[y_{2}, -G[ \psi, y_{1} + y_{2}] \bigr]. \end{aligned}$$
(3.10)

In view of the energy equality (3.5) corresponding to Eq. (1.1), we see that the weak solution ψ of Eq. (3.9) satisfies

$$\begin{aligned}& \bigl\Vert \psi' (t) \bigr\Vert ^{2} + \bigl\Vert \nabla\psi'(t) \bigr\Vert ^{2} + \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2} \\& \quad = - 2 \bigl( k * \Delta\psi(t), \Delta\psi(t) \bigr)_{2} \\& \qquad{} +2 \int^{t}_{0} \bigl(k' * \Delta\psi, \Delta\psi \bigr)_{2} \,ds + 2 \int^{t}_{0} k(0) \Vert \Delta\psi \Vert ^{2} \,ds \\& \qquad{} + 2 \int^{t}_{0} \bigl([\psi, v_{1}] + [y_{2}, V] + f_{1} -f_{2}, \psi' \bigr)_{2} \,ds \\& \qquad{} + \bigl\Vert \psi' (0) \bigr\Vert ^{2} + \bigl\Vert \nabla\psi' (0) \bigr\Vert ^{2} + \bigl\Vert \Delta\psi(0) \bigr\Vert ^{2}. \end{aligned}$$
(3.11)

The right-hand side of (3.11) can be estimated as follows:

$$\begin{aligned}& \bigl\vert 2 \bigl( k * \Delta\psi(t), \Delta\psi(t) \bigr)_{2} \bigr\vert \\& \quad \le 2\Vert k \Vert _{C^{0} ([0, T])} \bigl\Vert \Delta\psi(t) \bigr\Vert \int^{t}_{0} \Vert \Delta\psi \Vert \,ds \\& \quad \le \Vert k \Vert _{C^{0} ([0, T])} \biggl( \frac{1}{2( \Vert k \Vert _{C^{0} ([0, T])}+1)} \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2} \\& \qquad{} + 2 \bigl( \Vert k \Vert _{C^{0} ([0, T])}+1 \bigr) \biggl( \int^{t}_{0} \Vert \Delta\psi \Vert \,ds \biggr)^{2} \biggr) \\& \quad \le 2 \bigl( \Vert k \Vert ^{2}_{C^{0} ([0, T])}+ \Vert k \Vert _{C^{0} ([0, T])} \bigr)T \int^{t}_{0} \Vert \Delta\psi \Vert ^{2} \,ds + \frac{1}{2} \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2}; \end{aligned}$$
(3.12)
$$\begin{aligned}& \biggl\vert 2 \int^{t}_{0} \bigl( k' * \Delta\psi, \Delta\psi \bigr)_{2} \,ds \biggr\vert \\& \quad \le 2\Vert k \Vert _{C^{1} ([0, T])} \int^{t}_{0} \int^{s}_{0} \Vert \Delta\psi \Vert \,d\sigma \Vert \Delta\psi \Vert \,ds \\& \quad \le 2\Vert k \Vert _{C^{1} ([0, T])} \biggl( \int^{t}_{0} \biggl( \int^{s}_{0} \Vert \Delta\psi \Vert \,d\sigma \biggr)^{2} \,ds \biggr)^{\frac{1}{2}} \biggl( \int^{t}_{0} \Vert \Delta\psi \Vert ^{2} \,ds \biggr)^{\frac{1}{2}} \\& \quad \le 2\Vert k \Vert _{C^{1} ([0, T])} \biggl( \int^{t}_{0} s \biggl( \int^{s}_{0} \Vert \Delta\psi \Vert ^{2} \,d\sigma \biggr) \,ds \biggr)^{\frac{1}{2}} \biggl( \int^{t}_{0} \Vert \Delta\psi \Vert ^{2} \,ds \biggr)^{\frac{1}{2}} \\& \quad \le 2T \Vert k \Vert _{C^{1} ([0, T])} \int^{t}_{0} \Vert \Delta\psi \Vert ^{2} \,ds; \end{aligned}$$
(3.13)
$$\begin{aligned}& \biggl\vert 2 \int^{t}_{0} k(0) \Vert \Delta\psi \Vert ^{2} \,ds \biggr\vert \le 2 \Vert k \Vert _{C([0, T])} \int^{t}_{0} \Vert \Delta\psi \Vert ^{2} \,ds; \end{aligned}$$
(3.14)
$$\begin{aligned}& \biggl\vert 2 \int^{t}_{0} \bigl( f_{1} - f_{2}, \psi' \bigr)_{2} \,ds \biggr\vert \le 2 \int^{t}_{0} \Vert f_{1} - f_{2} \Vert \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{ \biggl\vert 2 \int^{t}_{0} \bigl( f_{1} - f_{2}, \psi' \bigr)_{2} \,ds \biggr\vert } \le \int^{T}_{0} \Vert f_{1} - f_{2} \Vert ^{2} \,dt + \int^{t}_{0} \bigl\Vert \psi' \bigr\Vert ^{2} \,ds. \end{aligned}$$
(3.15)

By Lemma 2.2 we can obtain the following:

$$\begin{aligned}& \biggl\vert 2 \int^{t}_{0} \bigl( [\psi, v_{1}], \psi' \bigr)_{2} \,ds \biggr\vert \le 2 \int^{t}_{0} \bigl\Vert [\psi, v_{1}] \bigr\Vert \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [\psi, v_{1}], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \int^{t}_{0} \Vert \psi \Vert _{H^{2}_{0} } \Vert v_{1} \Vert _{W^{2, \infty}} \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [\psi, v_{1}], \psi' \bigr)_{2} \,ds \biggr\vert }\le C \int^{t}_{0} \Vert \psi \Vert _{H^{2}_{0} } \Vert y_{1} \Vert ^{2}_{H^{2}_{0}} \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [\psi, v_{1}], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \Vert y_{1} \Vert ^{2}_{L^{\infty}(0,T; H^{2}_{0})} \int^{t}_{0} \Vert \Delta\psi \Vert \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [\psi, v_{1}], \psi' \bigr)_{2} \,ds \biggr\vert }\le C \Vert p_{1} \Vert ^{2}_{{\mathcal {P}}} \int^{t}_{0} \bigl( \Vert \Delta\psi \Vert ^{2} + \bigl\Vert \psi' \bigr\Vert ^{2} \bigr) \,ds; \end{aligned}$$
(3.16)
$$\begin{aligned}& \biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert \le 2 \int^{t}_{0} \bigl\Vert \bigl[y_{2}, -G [ \psi, y_{1} + y_{2}] \bigr] \bigr\Vert \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \int^{t}_{0} \Vert y_{2} \Vert _{H^{2}_{0} } \bigl\Vert G [\psi, y_{1} + y_{2}] \bigr\Vert _{W^{2, \infty}} \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \int^{t}_{0} \Vert y_{2} \Vert _{H^{2}_{0} } \Vert \psi \Vert _{H^{2}_{0} } \bigl( \Vert y_{1} \Vert _{H^{2}_{0}} + \Vert y_{2} \Vert _{H^{2}_{0} } \bigr) \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \Vert y_{2} \Vert _{L^{\infty}(0,T; H^{2}_{0})} \bigl(\Vert y_{1} \Vert _{L^{\infty} (0, T; H^{2}_{0})} + \Vert y_{2} \Vert _{L^{\infty}(0, T; H^{2}_{0} )} \bigr) \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert \leq} {} \times \int^{t}_{0} \Vert \Delta\psi \Vert \bigl\Vert \psi' \bigr\Vert \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \Vert p_{2} \Vert _{{\mathcal {P}}} \bigl(\Vert p_{1} \Vert _{{\mathcal {P}}} + \Vert p_{2} \Vert _{{\mathcal {P}}} \bigr) \int^{t}_{0} \bigl( \Vert \Delta\psi \Vert ^{2} + \bigl\Vert \psi' \bigr\Vert ^{2} \bigr) \,ds \\& \hphantom{\biggl\vert 2 \int^{t}_{0} \bigl( [y_{2}, V], \psi' \bigr)_{2} \,ds \biggr\vert } \le C \bigl(\Vert p_{1} \Vert ^{2}_{{\mathcal {P}}} + \Vert p_{2} \Vert ^{2}_{{\mathcal {P}}} \bigr) \int ^{t}_{0} \bigl( \Vert \Delta\psi \Vert ^{2} + \bigl\Vert \psi' \bigr\Vert ^{2} \bigr) \,ds. \end{aligned}$$
(3.17)

Estimating the right-hand side of (3.11) by the right-hand sides of (3.12)-(3.17), we obtain

$$\begin{aligned}& \bigl\Vert \psi' (t) \bigr\Vert ^{2} + \bigl\Vert \nabla\psi' (t) \bigr\Vert ^{2} + \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2} \\& \quad \le C \bigl(1+ (T+1)\Vert k\Vert ^{2}_{C^{1}([0, T])} + \Vert p_{1}\Vert ^{2}_{{\mathcal {P}}} + \Vert p_{2} \Vert ^{2}_{{\mathcal {P}}} \bigr) \int^{t}_{0} \bigl( \Vert \Delta\psi \Vert ^{2} + \bigl\Vert \psi' \bigr\Vert ^{2} \bigr) \,ds \\& \qquad{} + \bigl\Vert \psi' (0) \bigr\Vert ^{2} + \bigl\Vert \nabla\psi' (0) \bigr\Vert ^{2} + \bigl\Vert \Delta\psi(0) \bigr\Vert ^{2} + \int^{T}_{0} \Vert f_{1} - f_{2} \Vert ^{2} \,dt. \end{aligned}$$
(3.18)

By applying Poincaré’s and Gronwall’s inequalities to (3.18) we have

$$\begin{aligned}& \bigl\Vert \nabla\psi' (t) \bigr\Vert ^{2} + \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2} \\& \quad \le C (T, k, p_{1}, p_{2} ) \bigl( \bigl\Vert \nabla \psi' (0) \bigr\Vert ^{2} + \bigl\Vert \Delta \psi(0) \bigr\Vert ^{2} + \Vert f_{1} - f_{2} \Vert ^{2}_{L^{2}(0, T; L^{2})} \bigr) \\& \quad = C (T, k, p_{1}, p_{2} ) \Vert p_{1} - p_{2} \Vert ^{2}_{{\mathcal {P}}}. \end{aligned}$$
(3.19)
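
Here, Gronwall’s inequality is applied in the standard integral form: if a nonnegative function η satisfies

$$\eta(t) \le a + b \int^{t}_{0} \eta(s)\,ds,\quad t \in[0, T], $$

with constants \(a, b \ge0\), then \(\eta(t) \le a e^{bt}\) on \([0, T]\). In (3.18) this is used with \(\eta(t) = \Vert \psi'(t) \Vert ^{2} + \Vert \nabla\psi'(t) \Vert ^{2} + \Vert \Delta\psi(t) \Vert ^{2}\), while Poincaré’s inequality bounds \(\Vert \psi'(0) \Vert ^{2}\) by a constant multiple of \(\Vert \nabla\psi'(0) \Vert ^{2}\).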

Also, for almost all \(t \in[0, T]\), we have

$$\begin{aligned} \bigl\Vert V (t) \bigr\Vert ^{2}_{W^{2, \infty}} =& \bigl\Vert -G \bigl[ \psi(t), y_{1} (t) + y_{2} (t) \bigr] \bigr\Vert ^{2}_{W^{2, \infty}} \\ \le& C \bigl\Vert \psi(t) \bigr\Vert ^{2}_{H^{2}_{0}} \bigl\Vert y_{1} (t) + y_{2} (t) \bigr\Vert ^{2}_{H^{2}_{0}} \\ \le& C \bigl(\Vert y_{1} \Vert ^{2}_{L^{\infty} (0, T; H^{2}_{0})} + \Vert y_{2} \Vert ^{2}_{L^{\infty } (0, T; H^{2}_{0})} \bigr) \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2} \\ \le& C \bigl(\Vert p_{1} \Vert ^{2}_{{\mathcal {P}}} + \Vert p_{2} \Vert ^{2}_{{\mathcal {P}}} \bigr) \bigl\Vert \Delta\psi(t) \bigr\Vert ^{2}. \end{aligned}$$
(3.20)

By (3.19) and (3.20) we can deduce

$$ \bigl\Vert V(t) \bigr\Vert ^{2}_{W^{2,\infty}} \le C_{1} (T, k, p_{1}, p_{2} ) \Vert p_{1} - p_{2} \Vert ^{2}_{{\mathcal {P}}}. $$
(3.21)

Finally, by combining (3.19) and (3.21) we obtain (3.7).

This completes the proof. □

4 Quadratic cost optimal control problems

Let \({\mathcal {U}}\) be a Hilbert space of control variables, and let B be an operator,

$$ B \in{\mathcal {L}} \bigl({ \mathcal {U}},L^{2} \bigl(0,T;L^{2} \bigr) \bigr), $$
(4.1)

called a controller.

We consider the following nonlinear control system:

$$ \textstyle\begin{cases} y_{tt}(u) - \Delta y_{tt}(u) + \Delta^{2} y(u) + \int^{t}_{0} k(t-s) \Delta ^{2} y(u;s)\,ds = [y(u), v(u)] + Bu \quad\mbox{in } Q, \\ \Delta^{2} v(u) = -[y(u), y(u) ] \quad\mbox{in } Q, \\ y(u)= \frac{\partial y(u)}{\partial\nu} = v(u) = \frac{\partial v(u)}{\partial\nu} = 0 \quad\mbox{on } \Sigma, \\ y(u;0,x) = y_{0}(x),\qquad y_{t} (u;0,x) = y_{1} (x) \quad\mbox{in } \Omega, \end{cases} $$
(4.2)

where \(y_{0} \in H^{2}_{0}\), \(y_{1} \in H^{1}_{0}\), and \(u \in{ \mathcal {U}}\) is a control. By Theorem 3.1 and (4.1) we can define uniquely the solution map \(u\to y(u)\) of \({ \mathcal {U}}\) into \(S(0, T)\). The observation of the state is assumed to be given by

$$ Y(u)=Cy(u),\qquad C \in{ \mathcal {L}} \bigl({S}(0,T), M \bigr), $$
(4.3)

where C is an operator called the observer, and M is a Hilbert space of observation variables. The quadratic cost function associated with the control system (4.2) is given by

$$ J(u)= \bigl\Vert Cy(u)-Y_{d} \bigr\Vert ^{2}_{M}+(Ru,u)_{\mathcal {U}} \qquad\mbox{for } u \in { \mathcal {U}}, $$
(4.4)

where \(Y_{d} \in M\) is a desired value of \(y(u)\), and \(R \in{ \mathcal {L}}({ \mathcal {U}},{\mathcal {U}}) \) is symmetric and positive, that is,

$$ (Ru,u)_{{\mathcal {U}}}=(u,Ru)_{\mathcal {U}} \geq d \Vert u \Vert ^{2}_{{\mathcal {U}}} $$
(4.5)

for some \(d > 0\). Let \({\mathcal {U}}_{\mathrm{ad}}\) be a closed convex subset of \({\mathcal {U}}\), which is called the admissible set. An element \(u^{*} \in {\mathcal {U}}_{\mathrm{ad}}\) that attains the minimum of J over \({\mathcal {U}}_{\mathrm{ad}}\) is called an optimal control for the cost (4.4).
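
We also record an elementary consequence of (4.5) that underlies the uniqueness argument in Section 4.3: since R is symmetric, for all \(u, v \in{\mathcal {U}}\) and \(\theta\in(0,1)\),

$$\begin{aligned} \bigl(R \bigl(\theta u + (1-\theta) v \bigr), \theta u + (1-\theta) v \bigr)_{\mathcal {U}} =& \theta(Ru, u)_{\mathcal {U}} + (1-\theta) (Rv, v)_{\mathcal {U}} - \theta(1-\theta) \bigl(R(u-v), u-v \bigr)_{\mathcal {U}} \\ \le& \theta(Ru, u)_{\mathcal {U}} + (1-\theta) (Rv, v)_{\mathcal {U}} - d \theta(1-\theta) \Vert u-v \Vert ^{2}_{\mathcal {U}}, \end{aligned}$$

so that the map \(u \to(Ru, u)_{\mathcal {U}}\) is strictly convex on \({\mathcal {U}}\).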

4.1 Existence of an optimal control

As indicated in the Introduction, we need to show the existence of an optimal control and to give its characterization. The existence of an optimal control \(u^{*}\) for the cost (4.4) is stated in the following theorem.

Theorem 4.1

Assume that the hypotheses of Theorem 3.1 are satisfied. Then there exists at least one optimal control \(u^{*}\) for the control problem (4.2) with the cost (4.4).

Proof

Set \(J_{0}= \inf_{u \in{ \mathcal {U}}_{\mathrm{ad}}}J(u)\). Since \({\mathcal {U}}_{\mathrm{ad}}\) is nonempty, there is a sequence \(\{ u_{n} \}\) in \({\mathcal {U}}\) such that

$$\inf_{u \in{\mathcal {U}}_{\mathrm{ad}}}J(u)=\lim _{n \rightarrow \infty}J(u_{n})=J_{0}. $$

Obviously, \(\{J(u_{n})\}\) is bounded in \({\mathbf {R}}^{+}\). Then by (4.5) there exists a constant \(K_{0}>0\) such that

$$ d\Vert u_{n}\Vert _{\mathcal {U}}^{2} \leq(Ru_{n},u_{n})_{\mathcal {U}} \leq J(u_{n}) \leq K_{0}. $$
(4.6)

This shows that \(\{u_{n}\}\) is bounded in \({\mathcal {U}}\). Since \({\mathcal {U}}_{\mathrm{ad}}\) is closed and convex, we can choose a subsequence (denoted again by \(\{ u_{n} \}\)) of \(\{u_{n}\}\) and find \(u^{*} \in{\mathcal {U}}_{\mathrm{ad}} \) such that

$$ u_{n} \rightarrow u^{*} \quad\mbox{weakly in } {\mathcal {U}} $$
(4.7)

as \(n \rightarrow\infty\). For each n, the state \(y_{n}=y(u_{n}) \in{ S}(0,T)\) corresponding to \(u_{n}\) is a solution of

$$ \textstyle\begin{cases} y_{n,tt} - \Delta y_{n,tt} + \Delta^{2} y_{n} + \int^{t}_{0} k(t-s) \Delta^{2} y_{n}(s)\,ds = [y_{n}, v_{n}] + Bu_{n} \quad\mbox{in } Q, \\ \Delta^{2} v_{n} = -[y_{n}, y_{n}] \quad\mbox{in } Q, \\ y_{n} = \frac{\partial y_{n} }{\partial\nu} = v_{n} = \frac{\partial v_{n}}{\partial\nu} = 0 \quad\mbox{on } \Sigma, \\ y_{n} (0) = y_{0},\qquad y_{n, t} (0) = y_{1} \quad\mbox{in } \Omega. \end{cases} $$
(4.8)

By (4.6) the term \(Bu_{n}\) is estimated as

$$\begin{aligned} \Vert Bu_{n}\Vert _{L^{2}(0,T;L^{2} )} \leq& \Vert B \Vert _{ { \mathcal {L}}({\mathcal {U}}, L^{2}(0,T;L^{2})) } \Vert u_{n}\Vert _{\mathcal {U}} \\ \leq& \Vert B\Vert _{ {\mathcal {L}}({ \mathcal {U}}, L^{2}(0,T;L^{2} )) } \sqrt{K_{0} d^{-1}} \equiv K_{1}. \end{aligned}$$
(4.9)

Hence, noting that \(y(0,0,0,t) =0\) and \(v(0,0,0,t)=0\), it follows from Theorem 3.1 that

$$\begin{aligned}& \Vert y_{n} \Vert _{{ W}(0, T)} + \bigl\Vert y_{n} (t) \bigr\Vert _{ H^{2}_{0}}+ \bigl\Vert y'_{n} (t) \bigr\Vert _{H^{1}_{0}}+ \bigl\Vert v_{n} (t) \bigr\Vert _{W^{2, \infty}} \\& \quad \le C \bigl( \Vert y_{0} \Vert _{H^{2}_{0} } + \Vert y_{1} \Vert _{H^{1}_{0}} + K_{1} \bigr). \end{aligned}$$
(4.10)

By (4.10) we easily verify that \([y_{n}, v_{n}]\) is bounded in \(L^{2}(0, T;L^{2})\). Therefore, by the extraction theorem of Rellich we can find a subsequence of \(\{ y_{n}\}\), say again \(\{ y_{n} \}\), and find \(y\in { W}(0,T) \cap L^{\infty}(0, T; H^{2}_{0}) \) with \(y' \in L^{\infty} (0, T; H^{1}_{0})\) and \(F \in L^{2}(0, T; L^{2})\) such that

$$\begin{aligned}& y_{n} \to y \quad\mbox{weakly in } W(0,T), \end{aligned}$$
(4.11)
$$\begin{aligned}& y_{n} \to y\quad\mbox{weakly * in } L^{\infty} \bigl(0, T; H^{2}_{0} \bigr), \end{aligned}$$
(4.12)
$$\begin{aligned}& y'_{n} \to y'\quad \mbox{weakly * in } L^{\infty } \bigl(0, T; H^{1}_{0} \bigr), \end{aligned}$$
(4.13)
$$\begin{aligned}& [y_{n}, v_{n} ] \to F\quad\mbox{weakly in } L^{2} \bigl(0,T; L^{2} \bigr). \end{aligned}$$
(4.14)

To prove \(F = [y, -G[y,y]]\), we employ the idea given in Dautray and Lions [7]. By similar manipulations given in Dautray and Lions [7], pp.564-566, we can deduce that the weak limit y in (4.11) is a weak solution of the linear problem

$$ \textstyle\begin{cases} y_{tt} - \Delta y_{tt} + \Delta^{2} y + \int^{t}_{0} k(t-s) \Delta^{2} y(s)\,ds = F + Bu^{*} \quad \mbox{in } Q, \\ y = \frac{\partial y }{\partial\nu} = 0 \quad \mbox{on } \Sigma, \\ y (0) = y_{0},\qquad y_{ t} (0) = y_{1} \quad \mbox{in } \Omega. \end{cases} $$
(4.15)

As in (3.5), the weak solution y of Eq. (4.15) satisfies the following energy equality:

$$\begin{aligned}& \bigl\Vert y'(t) \bigr\Vert ^{2} + \bigl\Vert \nabla y'(t) \bigr\Vert ^{2} + \bigl\Vert \Delta y (t) \bigr\Vert ^{2} \\& \qquad{} + 2 \bigl( k * \Delta y(t), \Delta y(t) \bigr)_{2} \\& \quad = 2 \int^{t}_{0} \bigl(k' * \Delta y, \Delta y \bigr)_{2} \,ds + 2 \int^{t}_{0} k(0) \Vert \Delta y \Vert ^{2} \,ds \\& \qquad{} + 2 \int^{t}_{0} \bigl(F + Bu^{*}, y' \bigr)_{2} \,ds + \Vert y_{1} \Vert ^{2} + \Vert \nabla y_{1} \Vert ^{2} + \Vert \Delta y_{0} \Vert ^{2}. \end{aligned}$$
(4.16)

We can also deduce, as in (3.5), that the weak solution \(y_{n}\) of Eq. (4.8) satisfies the following energy equality:

$$\begin{aligned}& \bigl\Vert y'_{n} (t) \bigr\Vert ^{2} + \bigl\Vert \nabla y'_{n}(t) \bigr\Vert ^{2} + \bigl\Vert \Delta y_{n} (t) \bigr\Vert ^{2} \\& \qquad{} + 2 \bigl( k * \Delta y_{n} (t), \Delta y_{n} (t) \bigr)_{2} \\& \quad = 2 \int^{t}_{0} \bigl(k' * \Delta y_{n}, \Delta y_{n} \bigr)_{2} \,ds + 2 \int^{t}_{0} k(0) \Vert \Delta y_{n} \Vert ^{2} \,ds \\& \qquad{} + 2 \int^{t}_{0} \bigl([y_{n}, v_{n}] + Bu_{n}, y'_{n} \bigr)_{2} \,ds + \Vert y_{1} \Vert ^{2} + \Vert \nabla y_{1} \Vert ^{2} + \Vert \Delta y_{0} \Vert ^{2}. \end{aligned}$$
(4.17)

We note the following simple equalities:

$$\begin{aligned}& \Vert a \Vert ^{2} + \Vert b \Vert ^{2} = \Vert a- b \Vert ^{2} + 2 (a,b)_{2},\quad \forall a, b \in L^{2}; \\& (a_{1}, a_{2} )_{2} + (b_{1}, b_{2} )_{2} = (a_{1} - b_{1}, a_{2} - b_{2} )_{2} + (b_{1}, a_{2} )_{2} + ( a_{1}, b_{2})_{2}, \quad \forall a_{i}, b_{i} ( i=1,2) \in L^{2}. \end{aligned}$$

Adding (4.16) to (4.17), denoting \(y_{n} - y\) by \(\phi_{n}\), and using the above equalities, we have

$$\begin{aligned}& \bigl\Vert \phi'_{n} (t) \bigr\Vert ^{2} + \bigl\Vert \nabla\phi'_{n}(t) \bigr\Vert ^{2} + \bigl\Vert \Delta\phi _{n} (t) \bigr\Vert ^{2} \\& \qquad{} + 2 \bigl( k * \Delta\phi_{n} (t), \Delta \phi_{n} (t) \bigr)_{2} \\& \quad = 2 \int^{t}_{0} \bigl(k' * \Delta \phi_{n}, \Delta\phi_{n} \bigr)_{2} \,ds + 2 \int^{t}_{0} k(0) \Vert \Delta\phi_{n} \Vert ^{2} \,ds + \Phi^{0} + \sum ^{5}_{i=1} \Phi^{i}_{n}, \end{aligned}$$
(4.18)

where

$$\begin{aligned}& \Phi^{0} = 2 \bigl(\Vert y_{1} \Vert ^{2} + \Vert \nabla y_{1} \Vert ^{2} + \Vert \Delta y_{0} \Vert ^{2} \bigr), \end{aligned}$$
(4.19)
$$\begin{aligned}& \Phi^{1}_{n} = -2 \bigl( \bigl(y'_{n} (t), y' (t) \bigr)_{2} + \bigl(\nabla y'_{n} (t), \nabla y'(t) \bigr)_{2} + \bigl(\Delta y_{n} (t), \Delta y (t) \bigr)_{2} \bigr), \end{aligned}$$
(4.20)
$$\begin{aligned}& \Phi^{2}_{n} = -2 \bigl( \bigl(k* \Delta y_{n} (t), \Delta y(t) \bigr)_{2} + \bigl(k* \Delta y(t), \Delta y_{n} (t) \bigr)_{2} \bigr), \end{aligned}$$
(4.21)
$$\begin{aligned}& \Phi^{3}_{n} = 2 \biggl( \int^{t}_{0} \bigl(k' * \Delta y_{n}, \Delta y \bigr)_{2} \,ds + \int ^{t}_{0} \bigl(k' * \Delta y, \Delta y_{n} \bigr)_{2} \,ds \biggr), \end{aligned}$$
(4.22)
$$\begin{aligned}& \Phi^{4}_{n} = 4 \int^{t}_{0} k(0) (\Delta y_{n}, \Delta y)_{2} \,ds, \end{aligned}$$
(4.23)
$$\begin{aligned}& \Phi^{5}_{n} = 2 \biggl( \int^{t}_{0} \bigl([y_{n}, v_{n}] + Bu_{n}, y'_{n} \bigr)_{2} \,ds + \int^{t}_{0} \bigl(F + Bu^{*}, y' \bigr)_{2} \,ds \biggr). \end{aligned}$$
(4.24)

Then by routine calculations in (4.18), as in the proof of Theorem 3.1, we derive the inequality

$$ \bigl\Vert \phi'_{n} (t) \bigr\Vert ^{2} + \bigl\Vert \nabla\phi'_{n}(t) \bigr\Vert ^{2} + \bigl\Vert \Delta\phi_{n} (t) \bigr\Vert ^{2} \le C(k, T) \Biggl\vert \Phi^{0} + \sum ^{5}_{i=1} \Phi^{i}_{n} \Biggr\vert . $$
(4.25)

By virtue of (4.11)-(4.13) together with [7], pp.518-520, we can extract a subsequence \(\{ y_{n_{k} } \}\) of \(\{ y_{n} \}\) such that, as \(k \to \infty\),

$$\begin{aligned}& \Phi^{1}_{n_{k}} \to -2 \bigl( \bigl\Vert y' (t) \bigr\Vert ^{2} + \bigl\Vert \nabla y'(t) \bigr\Vert ^{2} + \bigl\Vert \Delta y (t) \bigr\Vert ^{2} \bigr), \end{aligned}$$
(4.26)
$$\begin{aligned}& \Phi^{2}_{n_{k}} \to -4 \bigl(k* \Delta y(t), \Delta y(t) \bigr)_{2}, \end{aligned}$$
(4.27)
$$\begin{aligned}& \Phi^{3}_{n_{k}} \to 4 \int^{t}_{0} \bigl(k' * \Delta y, \Delta y \bigr)_{2} \,ds, \end{aligned}$$
(4.28)
$$\begin{aligned}& \Phi^{4}_{n_{k}} \to 4 \int^{t}_{0} k(0) \Vert \Delta y \Vert ^{2} \,ds. \end{aligned}$$
(4.29)

Since the imbedding \(H^{1}_{0} \hookrightarrow L^{2}\) is compact, by virtue of (4.11), we can use the Aubin-Lions-Temam compact imbedding theorem (see Temam [16], p.271) to verify that \(\{ y'_{n} \}\) is precompact in \(L^{2}(0,T; L^{2})\). Hence, there also exists a subsequence \(\{y'_{n_{k}} \} \subset\{ y'_{n} \}\) such that

$$ y'_{n_{k}} \longrightarrow y' \quad \mbox{strongly in } L^{2} \bigl(0,T;L^{2} \bigr) \mbox{ as } k \rightarrow\infty. $$
(4.30)

From (4.7), (4.14), and (4.30) we have

$$ \Phi^{5}_{n_{k}} \to 4 \int^{t}_{0} \bigl(F + Bu^{*}, y' \bigr)_{2} \,ds \quad\mbox{as } k \to\infty. $$
(4.31)

In view of (4.16), the sum of (4.19) and the limits (4.26)-(4.29) and (4.31) is 0, so that

$$ \Phi^{0} + \sum^{5}_{i=1} \Phi^{i}_{n_{k}} \to 0\quad\mbox{as } k \to\infty. $$
(4.32)

Therefore, from (4.25) and (4.32) we get that

$$ y_{n_{k}} (t) \to y(t)\quad\mbox{strongly in } H^{2}_{0} \mbox{ as } k \to \infty, \forall t \in[0, T]. $$
(4.33)

Thus, by Lemma 2.2, Theorem 3.1, and (4.33) it follows that

$$\begin{aligned}& \bigl\Vert [y_{n_{k}}, v_{n_{k}}] - [y, v] \bigr\Vert _{L^{2}(0, T; L^{2})} \\& \quad = \bigl\Vert [y_{n_{k}} - y, v_{n_{k}}] + [y, v_{n_{k}} - v ] \bigr\Vert _{L^{2}(0, T; L^{2})} \\& \quad \le \bigl\Vert [y_{n_{k}} - y, v_{n_{k}}] \bigr\Vert _{L^{2}(0, T; L^{2})} + \bigl\Vert \bigl[y, G[y,y] - G [y_{n_{k}}, y_{n_{k}}] \bigr] \bigr\Vert _{L^{2}(0, T; L^{2})} \\& \quad = \bigl\Vert [y_{n_{k}} - y, v_{n_{k}}] \bigr\Vert _{L^{2}(0, T; L^{2})} + \bigl\Vert \bigl[y, G[y- y_{n_{k}}, y_{n_{k}} + y] \bigr] \bigr\Vert _{L^{2}(0, T; L^{2})} \\& \quad \le C \bigl( \Vert v_{n_{k}} \Vert _{L^{\infty}(0, T;W^{2, \infty})} + \Vert y \Vert _{L^{\infty} (0, T; H^{2}_{0} )} \bigl( \Vert y_{n_{k}} \Vert _{L^{\infty} (0, T; H^{2}_{0} )} \\& \qquad{} + \Vert y \Vert _{L^{\infty} (0, T; H^{2}_{0} )} \bigr) \bigr) \Vert y_{n_{k}} - y \Vert _{L^{2}(0, T; H^{2}_{0})} \\& \quad \le C \bigl( \Vert y \Vert ^{2}_{L^{\infty} (0, T; H^{2}_{0} )} + \Vert y_{n_{k}} \Vert ^{2}_{L^{\infty} (0, T; H^{2}_{0} )} \bigr) \Vert y_{n_{k}} - y \Vert _{L^{2}(0, T; H^{2}_{0})} \\& \quad \le C \bigl( \bigl\Vert p^{*} \bigr\Vert ^{2}_{{\mathcal {P}}} + \Vert p_{n_{k}} \Vert ^{2}_{{\mathcal {P}}} \bigr) \Vert y_{n_{k}} - y \Vert _{L^{2}(0, T; H^{2}_{0})} \to 0 \end{aligned}$$
(4.34)

as \(k \to \infty\), where \(p^{*} = (y_{0}, y_{1}, Bu^{*})\) and \(p_{n_{k}} = (y_{0}, y_{1}, Bu_{n_{k}})\). Hence, by the uniqueness of the weak limits, from (4.14) and (4.34) it follows that

$$ F = [y, v] \equiv \bigl[y, -G[y,y] \bigr]. $$
(4.35)

We replace \(y_{n}\) by \(y_{n_{k}}\) and take \(k\to\infty\) in (4.8). Then by the standard argument in Dautray and Lions ([7], pp.561-565) we conclude that the limit y is a weak solution of

$$ \textstyle\begin{cases} y_{tt} - \Delta y_{tt} + \Delta^{2} y + \int^{t}_{0} k(t-s) \Delta^{2} y(s)\,ds = [y, v] + Bu^{*} \quad \mbox{in } Q, \\ \Delta^{2} v = - [y,y] \quad \mbox{in } Q, \\ y = \frac{\partial y }{\partial\nu} = v = \frac{\partial v }{\partial \nu} = 0 \quad \mbox{on } \Sigma, \\ y (0) = y_{0},\qquad y_{ t}(0) = y_{1} \quad \mbox{in } \Omega. \end{cases} $$
(4.36)

Also, since Eq. (4.36) has a unique weak solution \(y \in{S}(0,T)\) by Theorem 3.1, we conclude that \(y=y(u^{*})\) in \({ S}(0,T)\) by the uniqueness of solutions, which implies that \(y(u_{n})\to y(u^{*})\) weakly in \({ W}(0,T)\). Since C is continuous on \(S(0, T) \subset{ W}(0,T)\) and \(\Vert \cdot \Vert _{M}\) is lower semicontinuous, it follows that

$$\bigl\Vert Cy \bigl(u^{*} \bigr)-Y_{d} \bigr\Vert _{M} \leq \liminf_{n\rightarrow\infty} \bigl\Vert Cy(u_{n})-Y_{d} \bigr\Vert _{M}. $$

It is also clear from \(\liminf_{n\rightarrow\infty} \Vert R^{\frac{1}{2}}u_{n}\Vert _{ \mathcal {U}} \geq \Vert R^{\frac{1}{2}}u^{*} \Vert _{\mathcal {U}}\) that \(\liminf_{n\rightarrow\infty} (Ru_{n},u_{n})_{\mathcal {U}}\geq (Ru^{*},u^{*})_{\mathcal {U}}\). Hence,

$$J_{0}=\liminf_{n\rightarrow\infty}J(u_{n})\geq J \bigl(u^{*} \bigr). $$

But since \(J(u^{*})\geq J_{0}\) by definition, we conclude that \(J(u^{*})=\inf_{u\in{\mathcal {U}}_{\mathrm{ad}}}J(u)\). This completes the proof. □

In the rest of this section, we characterize the optimal controls by giving necessary conditions for optimality. For this, we write down the necessary optimality condition

$$ DJ \bigl(u^{*} \bigr) \bigl(u-u^{*} \bigr) \ge0\quad\mbox{for all } u \in{ \mathcal {U}}_{\mathrm{ad}} $$
(4.37)

and analyze (4.37) by means of a suitable adjoint state system, where \(DJ(u^{*})\) denotes the Gâteaux derivative of \(J(u)\) at \(u=u^{*}\). In particular, we have to prove that the mapping \(u \to y(u)\) of \({\mathcal {U}} \to{ S}(0,T)\) is Gâteaux differentiable at \(u=u^{*}\).
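
We note that (4.37) is indeed necessary: since \({\mathcal {U}}_{\mathrm{ad}}\) is convex, \(u^{*} + \theta(u - u^{*}) \in{\mathcal {U}}_{\mathrm{ad}}\) for every \(u \in{\mathcal {U}}_{\mathrm{ad}}\) and \(\theta\in(0, 1]\), so the optimality of \(u^{*}\) gives

$$\frac{J (u^{*} + \theta(u - u^{*}) ) - J (u^{*} )}{\theta} \ge0,\quad\theta\in(0, 1], $$

and letting \(\theta\downarrow0\) yields (4.37) once the Gâteaux differentiability of J at \(u^{*}\) is established. First, we can see the continuity of the mapping \(u \to y(u)\).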

Lemma 4.1

Let \(w\in{\mathcal {U}}\) be arbitrarily fixed. Then

$$ \lim_{\lambda\to0}y(u+\lambda w)=y(u)\quad \textit{strongly in } S(0,T). $$
(4.38)

Proof

The proof is an immediate consequence of Theorem 3.1. □

The solution map \(u \to y(u)\) of \({\mathcal {U}}\) into \({ S}(0,T)\) is said to be Gâteaux differentiable at \(u=u^{*}\) if for any \(w\in {\mathcal {U}}\), there exists a \(Dy(u^{*})\in{\mathcal {L}}({\mathcal {U}}, {S}(0,T))\) such that

$$\biggl\Vert \frac{1}{\lambda} \bigl(y \bigl(u^{*} +\lambda w \bigr)-y \bigl(u^{*} \bigr) \bigr)-Dy \bigl(u^{*} \bigr)w \biggr\Vert _{{ S}(0,T)} \to0 \quad \mbox{as } \lambda\to0. $$

The operator \(Dy(u^{*})\) denotes the Gâteaux derivative of \(y(u)\) at \(u=u^{*}\), and the function \(Dy(u^{*})w \in{ S}(0,T)\) is called the Gâteaux derivative in the direction \(w\in{\mathcal {U}}\), which plays an important part in the nonlinear optimal control problem.

Theorem 4.2

The map \(u\to y(u)\) of \({\mathcal {U}}\) into \({ S}(0,T)\) is Gâteaux differentiable at \(u=u^{*}\), and the Gâteaux derivative of \(y(u)\) at \(u=u^{*}\) in the direction \(u-u^{*}\in{\mathcal {U}}\), say \(z=Dy(u^{*})(u-u^{*})\), is the unique weak solution of the following problem:

$$ \textstyle\begin{cases} z_{tt} - \Delta z_{tt} + \Delta^{2} z + \int^{t}_{0} k(t-s) \Delta^{2} z(s)\,ds \\ \quad = [z, -G[y(u^{*}),y(u^{*})]] +2 [y(u^{*}), -G[ z,y(u^{*})]] + B(u-u^{*}) \quad \textit{in } Q, \\ z = \frac{\partial z }{\partial\nu} = 0 \quad \textit{on } \Sigma, \\ z (0) = 0,\qquad z_{ t} (0) = 0 \quad \textit{in } \Omega. \end{cases} $$
(4.39)

Proof

Let \(\lambda\in(-1,1)\), \(\lambda\ne0\). We set \(y_{\lambda}:= y(u^{*} +\lambda(u-u^{*}))\) and

$$z_{\lambda}:= \lambda^{-1} \bigl(y_{\lambda} -y \bigl(u^{*} \bigr) \bigr). $$

Then, in the weak sense, \(z_{\lambda}\) satisfies

$$ \textstyle\begin{cases} z_{\lambda, tt} - \Delta z_{\lambda,tt} + \Delta^{2} z_{\lambda} + \int ^{t}_{0} k(t-s) \Delta^{2} z_{\lambda}(s)\,ds = F_{\lambda} + B(u-u^{*}) \quad \mbox{in } Q, \\ z_{\lambda} = \frac{\partial z_{\lambda} }{\partial\nu} = 0 \quad \mbox{on }\Sigma, \\ z_{\lambda} (0) = 0,\qquad z_{\lambda, t} (0) = 0 \quad \mbox{in } \Omega, \end{cases} $$
(4.40)

where

$$F_{\lambda} = \frac{1}{\lambda} \bigl( \bigl[y_{\lambda}, -G[y_{\lambda },y_{\lambda}] \bigr] - \bigl[y \bigl(u^{*} \bigr), -G \bigl[y \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr] \bigr] \bigr). $$

Here we note that

$$\begin{aligned}& \frac{1}{\lambda} \bigl( \bigl[y_{\lambda}, -G[y_{\lambda},y_{\lambda}] \bigr] - \bigl[y \bigl(u^{*} \bigr), -G \bigl[y \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr] \bigr] \bigr) \\& \quad = \bigl[z_{\lambda}, -G[y_{\lambda},y_{\lambda}] \bigr] + \bigl[y \bigl(u^{*} \bigr), -G \bigl[ z_{\lambda },y \bigl(u^{*} \bigr)+y_{\lambda} \bigr] \bigr]. \end{aligned}$$
(4.41)

Thus, from (2.5), Theorem 3.1, and (4.41) we deduce

$$\begin{aligned} \Vert F_{\lambda} \Vert _{L^{2}(0, T; L^{2})} \le& C \bigl( \Vert y_{\lambda} \Vert ^{2}_{L^{\infty}(0, T; H^{2}_{0})} + \bigl\Vert y \bigl(u^{*} \bigr) \bigr\Vert _{L^{\infty}(0, T; H^{2}_{0})} \bigl( \Vert y_{\lambda} \Vert _{L^{\infty}(0, T; H^{2}_{0})} \\ & {} + \bigl\Vert y \bigl(u^{*} \bigr) \bigr\Vert _{L^{\infty}(0, T; H^{2}_{0})} \bigr) \bigr) \Vert \Delta z_{\lambda} \Vert _{L^{2}(0, T; L^{2})} \\ \le& C \bigl( \Vert y_{\lambda} \Vert ^{2}_{L^{\infty}(0, T; H^{2}_{0})} + \bigl\Vert y \bigl(u^{*} \bigr) \bigr\Vert ^{2}_{L^{\infty}(0, T; H^{2}_{0})} \bigr) \Vert \Delta z_{\lambda} \Vert _{L^{2}(0, T; L^{2})} \\ \le& C \bigl( \Vert p_{\lambda} \Vert ^{2}_{{\mathcal {P}}} + \bigl\Vert p^{*} \bigr\Vert ^{2}_{{\mathcal {P}}} \bigr) \Vert \Delta z_{\lambda} \Vert _{L^{2}(0, T; L^{2})}, \end{aligned}$$
(4.42)

where \(p_{\lambda} = (y_{0}, y_{1}, B(u^{*} + \lambda(u-u^{*})) ) \) and \(p^{*} = (y_{0}, y_{1}, B u^{*} )\). Hence, by considering the energy equality, analogous to (3.5), satisfied by \(z_{\lambda}\), we get from (4.42) and the proof of Theorem 3.1 that the weak solution \(z_{\lambda}\) of Eq. (4.40) satisfies

$$ \Vert z_{\lambda} \Vert _{{ S}(0, T)} \le C \bigl\Vert B \bigl(u-u^{*} \bigr) \bigr\Vert _{L^{2}(0, T; L^{2})}. $$
(4.43)

Therefore, from (4.42) and (4.43) we see that there exists \(z \in{ W}(0,T) \cap L^{\infty} (0, T; H^{2}_{0})\) with \(z' \in L^{\infty}(0, T; H^{1}_{0})\), \(F \in L^{2} (0, T; L^{2})\) and a sequence \(\{\lambda_{k}\} \subset(-1,1)\) tending to 0 such that, as \(k \to \infty\),

$$\begin{aligned}& z_{\lambda_{k}} \to z\quad \mbox{weakly in } { W}(0, T), \end{aligned}$$
(4.44)
$$\begin{aligned}& z_{\lambda_{k}} \to z \quad\mbox{weakly * in } L^{\infty} \bigl(0, T; H^{2}_{0} \bigr), \end{aligned}$$
(4.45)
$$\begin{aligned}& z'_{\lambda_{k}} \to z' \quad \mbox{weakly * in } L^{\infty} \bigl(0, T; H^{1}_{0} \bigr), \end{aligned}$$
(4.46)
$$\begin{aligned}& F_{\lambda_{k}} \to F \quad\mbox{weakly in } L^{2} \bigl(0, T; L^{2} \bigr). \end{aligned}$$
(4.47)

We replace \(z_{\lambda}\) by \(z_{\lambda_{k}}\) and take \(k\to\infty\) in Eq. (4.40). Then by the standard argument in Dautray and Lions ([7], pp.561-565) we conclude that the limit z is a weak solution of

$$ \textstyle\begin{cases} z_{tt} - \Delta z_{tt} + \Delta^{2} z + \int^{t}_{0} k(t-s) \Delta^{2} z(s)\,ds = F + B(u-u^{*}) \quad\mbox{in } Q, \\ z = \frac{\partial z }{\partial\nu} = 0 \quad \mbox{on } \Sigma, \\ z (0) = 0,\qquad z_{ t} (0) = 0 \quad \mbox{in } \Omega. \end{cases} $$
(4.48)

Using (4.44)-(4.47), the respective energy equalities of Eq. (4.40) with \(z_{\lambda}\) replaced by \(z_{\lambda_{k}}\), and Eq. (4.48), we can proceed as in the proof of Theorem 4.1 to obtain

$$ z_{\lambda_{k}} \to z\quad\mbox{strongly in } S(0, T) \mbox{ as } k \to \infty. $$
(4.49)

By Theorem 3.1 and Lemma 2.2 we can verify the following:

$$\begin{aligned}& \bigl\Vert G \bigl[z_{\lambda_{k}}, y \bigl(u^{*} \bigr)+ y_{\lambda_{k}} \bigr] - 2 G \bigl[z, y \bigl(u^{*} \bigr) \bigr] \bigr\Vert _{C([0, T]; W^{2, \infty} )} \\& \quad = \bigl\Vert G \bigl[z_{\lambda_{k}} -z, y \bigl(u^{*} \bigr)+ y_{\lambda_{k}} \bigr] + G \bigl[z, y_{\lambda_{k}} - y \bigl(u^{*} \bigr) \bigr] \bigr\Vert _{C ([0, T]; W^{2, \infty} )} \\& \quad \le C T \bigl( \bigl( \bigl\Vert y \bigl(u^{*} \bigr) \bigr\Vert _{C([0, T]; H^{2}_{0})} + \Vert y_{\lambda_{k}} \Vert _{C([0, T]; H^{2}_{0})} \bigr) \Vert z_{\lambda_{k}} - z \Vert _{C([0, T]; H^{2}_{0})} \\& \qquad {}+ \Vert z\Vert _{C([0, T]; H^{2}_{0})} \bigl\Vert y_{\lambda_{k}} - y \bigl(u^{*} \bigr) \bigr\Vert _{C([0, T]; H^{2}_{0})} \bigr) \\& \quad\le C T \bigl( \bigl( \bigl\Vert p^{*} \bigr\Vert _{{\mathcal {P}}} + \Vert p_{\lambda_{k}} \Vert _{{\mathcal {P}}} \bigr) \Vert z_{\lambda_{k}} - z \Vert _{C([0, T]; H^{2}_{0})} \\& \qquad{} + \bigl\Vert B \bigl(u-u^{*} \bigr) \bigr\Vert _{L^{2}(0, T; L^{2})} \bigl\Vert y_{\lambda_{k}} - y \bigl(u^{*} \bigr) \bigr\Vert _{C([0, T]; H^{2}_{0})} \bigr), \end{aligned}$$
(4.50)

where \(p_{\lambda_{k}} = (y_{0}, y_{1}, B(u^{*} + \lambda_{k} (u-u^{*})) ) \) and \(p^{*} = (y_{0}, y_{1}, B u^{*} )\). Thus, from Lemma 4.1, (4.49), and (4.50), we have

$$ G \bigl[z_{\lambda_{k}}, y \bigl(u^{*} \bigr)+ y_{\lambda_{k}} \bigr] \to 2 G \bigl[z, y \bigl(u^{*} \bigr) \bigr]\quad\mbox{strongly in } C \bigl([0, T]; W^{2, \infty} \bigr) $$
(4.51)

as \(k \to\infty\).

Similarly, we can also show that

$$ G[y_{\lambda_{k}}, y_{\lambda_{k}}] \to G \bigl[y \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr]\quad\mbox{strongly in } C \bigl([0, T]; W^{2, \infty} \bigr) $$
(4.52)

as \(k \to\infty\). Therefore, by (4.51) and (4.52) we can show that

$$ F_{\lambda_{k}} \to \bigl[z, -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr] +2 \bigl[y \bigl(u^{*} \bigr), -G \bigl[ z,y \bigl(u^{*} \bigr) \bigr] \bigr]\quad\mbox{strongly in } L^{2} \bigl(0, T; L^{2} \bigr) $$
(4.53)

as \(k \to\infty\).

Consequently, we can infer from (4.47) and (4.53) that

$$ F = \bigl[z, -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr] +2 \bigl[y \bigl(u^{*} \bigr), -G \bigl[ z,y \bigl(u^{*} \bigr) \bigr] \bigr]. $$
(4.54)

Since Eq. (4.39) has a unique weak solution, the limit z does not depend on the choice of the subsequence \(\{\lambda_{k}\}\). Hence, it readily follows from (4.49) and (4.54) that \(z_{\lambda}\to z=Dy(u^{*})(u-u^{*})\) strongly in \({S}(0,T)\) as \(\lambda \rightarrow0\), where z is the weak solution of Eq. (4.39).

This completes the proof. □

Theorem 4.2 implies that the cost \(J(u)\) is Gâteaux differentiable at \(u^{*}\) in the direction \(u-u^{*}\) and that the optimality condition (4.37) can be rewritten as

$$\begin{aligned}& \bigl(Cy \bigl(u^{*} \bigr)-Y_{d},C \bigl(Dy \bigl(u^{*} \bigr) \bigl(u-u^{*} \bigr) \bigr) \bigr)_{M}+ \bigl(Ru^{*},u-u^{*} \bigr)_{\mathcal {U}} \\& \quad = \bigl\langle C^{*}\Lambda_{M} \bigl(Cy \bigl(u^{*} \bigr)-Y_{d} \bigr),Dy \bigl(u^{*} \bigr) \bigl(u-u^{*} \bigr) \bigr\rangle _{{ W}(0,T)',{W}(0,T)} \\& \qquad{} + \bigl(Ru^{*},u-u^{*} \bigr)_{\mathcal {U}} \geq0,\quad \forall u\in{ \mathcal {U}}_{\mathrm{ad}}, \end{aligned}$$
(4.55)

where \(\Lambda_{M}\) is the canonical isomorphism of M onto \(M'\).

In this paper, we consider the following physically important observation. We take \(M=L^{2}(0, T; L^{2})\) and \(C \in{\mathcal {L}}({ W}(0,T), M)\) and observe that \(Cy(u)= y(u; \cdot) \in L^{2}(0, T; L^{2})\).

4.2 Necessary condition of an optimal control for distributive observation

In this subsection, we consider the cost functional expressed by

$$ J(u) = \int^{T}_{0} \bigl\Vert y(u)-Y_{d} \bigr\Vert ^{2} \,dt +(Ru,u)_{\mathcal {U}}\quad \forall u \in{ \mathcal {U}}_{\mathrm{ad}} \subset{ \mathcal {U}}, $$
(4.56)

where \(Y_{d} \in L^{2}(0, T;L^{2})\) is the desired value. Let \(u^{*}\) be the optimal control subject to (4.2) and (4.56). Then the optimality condition (4.55) is represented by

$$ \int^{T}_{0} \bigl(y \bigl(u^{*} \bigr)-Y_{d}, z \bigr)_{2} \,dt + \bigl(Ru^{*},u-u^{*} \bigr)_{\mathcal {U}} \geq0\quad\forall u \in{\mathcal {U}}_{\mathrm{ad}}, $$
(4.57)

where z is the weak solution of Eq. (4.39). Now we formulate the adjoint system to describe the optimality condition:

$$ \textstyle\begin{cases} p_{tt} (u^{*}) -\Delta p_{tt}(u^{*}) + \Delta^{2} p(u^{*}) + \int^{T}_{t} k(\sigma-t) \Delta^{2} p(u^{*}; \sigma)\,d \sigma\\ \quad = [p(u^{*}), -G[y(u^{*}),y(u^{*})]] +2 [y(u^{*}), -G[ p(u^{*}), y(u^{*})]] \\ \qquad{} + y(u^{*})-Y_{d} \quad\mbox{in } Q,\\ p(u^{*})= \frac{\partial p(u^{*})}{\partial\nu} =0 \quad\mbox{on } \Sigma,\\ p(u^{*};T)= p_{t}(u^{*};T) =0 \quad\mbox{in } \Omega. \end{cases} $$
(4.58)

Proposition 4.1

Equation (4.58) admits a unique solution \(p(u^{*}) \in S(0, T)\).

Proof

Since

$$\int^{T}_{T-t} k(\sigma-T+t) \Delta^{2} p \bigl(u^{*}; \sigma \bigr) \,d\sigma= \int ^{t}_{0} k(t-s) \Delta^{2} p \bigl(u^{*}; T- s \bigr) \,ds, $$

the time reversed equation of Eq. (4.58) (\(t \to T-t\) in Eq. (4.58)) is given by

$$ \textstyle\begin{cases} \psi_{tt} -\Delta \psi_{tt}+ \Delta^{2} \psi+ \int^{t}_{0} k(t-s) \Delta ^{2} \psi(s)\,d s \\ \quad = [ \psi, -G[y(u^{*}),y(u^{*})]] +2 [y(u^{*}), -G[ \psi, y(u^{*})]] + y(u^{*})-Y_{d} \quad\mbox{in } Q,\\ \psi= \frac{\partial \psi}{\partial\nu} =0 \quad\mbox{on } \Sigma,\\ \psi(0)= \psi_{t}(0) =0 \quad\mbox{in } \Omega, \end{cases} $$
(4.59)

where \(\psi(t)= p(u^{*};T-t)\).

Here we note that, as in (4.42),

$$\begin{aligned}& \bigl\Vert \bigl[ \psi, -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr] +2 \bigl[y \bigl(u^{*} \bigr), -G \bigl[ \psi, y \bigl(u^{*} \bigr) \bigr] \bigr] \bigr\Vert _{L^{2}(0, T; L^{2})} \\& \quad \le C \bigl\Vert p^{*} \bigr\Vert ^{2}_{{\mathcal {P}}} \Vert \Delta\psi \Vert _{L^{2}(0, T; L^{2})}, \end{aligned}$$
(4.60)

where \(p^{*} = (y_{0}, y_{1}, Bu^{*})\). Thus, by Theorem 3.1 and [5], the condition \(Y_{d} \in L^{2}(0, T;L^{2})\) and the estimate (4.60) enable us to deduce that Eq. (4.59) has a unique weak solution \(\psi\in S(0, T)\).

This completes the proof. □

Now we proceed with the calculations. We multiply both sides of the weak form of Eq. (4.58) by z and integrate over \([0,T]\). Then we have

$$\begin{aligned}& \int^{T}_{0} \bigl\langle p'' \bigl(u^{*} \bigr) -\Delta p'' \bigl(u^{*} \bigr), z \bigr\rangle _{-2, 2} \,dt \\& \qquad{} + \int^{T}_{0} \biggl( \Delta p \bigl(u^{*} \bigr) + \int^{T}_{t} k(\sigma-t) \Delta p \bigl(u^{*}; \sigma \bigr)\,d \sigma, \Delta z \biggr)_{2} \,dt \\& \qquad{} - \int^{T}_{0} \bigl( \bigl[p \bigl(u^{*} \bigr), -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr] +2 \bigl[y \bigl(u^{*} \bigr), -G \bigl[ p \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr] \bigr], z \bigr)_{2} \,dt \\& \quad = \int^{T}_{0} \bigl(y \bigl(u^{*} \bigr)-Y_{d}, z \bigr)_{2} \,dt. \end{aligned}$$
(4.61)

By Fubini’s theorem we have

$$\begin{aligned}& \int^{T}_{0} \biggl( \int^{T}_{t} k(\sigma-t) \Delta p \bigl(u^{*}; \sigma \bigr)\,d \sigma, \Delta z \biggr)_{2} \,dt \\& \quad = \int^{T}_{0} \biggl( \int^{t}_{0} k(t-s) \Delta z (s)\,ds, \Delta p \bigl(u^{*} \bigr) \biggr)_{2} \,dt \\& \quad = \int^{T}_{0} \biggl\langle \int^{t}_{0} k(t-s) \Delta^{2} z (s)\,ds, p \bigl(u^{*} \bigr) \biggr\rangle _{-2,2} \,dt. \end{aligned}$$
(4.62)
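
The first equality in (4.62) is obtained by interchanging the order of integration over the triangle \(\{ (t, \sigma) : 0 \le t \le\sigma\le T \}\),

$$\int^{T}_{0} \int^{T}_{t} k(\sigma- t) \bigl(\Delta p \bigl(u^{*}; \sigma \bigr), \Delta z(t) \bigr)_{2}\,d \sigma\,dt = \int^{T}_{0} \int^{\sigma}_{0} k(\sigma- t) \bigl(\Delta p \bigl(u^{*}; \sigma \bigr), \Delta z(t) \bigr)_{2}\,dt\,d \sigma, $$

and then relabeling the outer variable σ as t and the inner variable t as s.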

By Lemma 2.1 we deduce

$$\begin{aligned}& \int^{T}_{0} \bigl( \bigl[p \bigl(u^{*} \bigr), -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr], z \bigr)_{2} \,dt \\& \quad = \int^{T}_{0} \bigl( \bigl[z, -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr], p \bigl(u^{*} \bigr) \bigr)_{2} \,dt. \end{aligned}$$
(4.63)

We observe that for \(\phi, \psi\in H^{2}_{0}\) we have \([\phi, \psi] \in L^{1}\), each term of the bracket being a product of two functions in \(L^{2}\). Moreover, since \(n=2\), we have

$$\begin{aligned} H^{2}_{0} \hookrightarrow L^{\infty}, \end{aligned}$$
(4.64)

and, therefore,

$$\begin{aligned} L^{1} \hookrightarrow H^{-2}. \end{aligned}$$
(4.65)

Thus, since G is a self-adjoint operator, by Lemma 2.1 and (4.65) we have

$$\begin{aligned}& \int^{T}_{0} \bigl( 2 \bigl[y \bigl(u^{*} \bigr), -G \bigl[ p \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr] \bigr], z \bigr)_{2} \,dt \\& \quad = \int^{T}_{0} \bigl\langle 2 \bigl[z, y \bigl(u^{*} \bigr) \bigr], -G \bigl[ p \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr] \bigr\rangle _{-2,2} \,dt \\& \quad = \int^{T}_{0} \bigl\langle -2 G \bigl[z, y \bigl(u^{*} \bigr) \bigr], \bigl[ p \bigl(u^{*} \bigr), y \bigl(u^{*} \bigr) \bigr] \bigr\rangle _{2,-2} \,dt \\& \quad = \int^{T}_{0} \bigl( 2 \bigl[y \bigl(u^{*} \bigr), - G \bigl[z, y \bigl(u^{*} \bigr) \bigr] \bigr], p \bigl(u^{*} \bigr) \bigr)_{2} \,dt. \end{aligned}$$
(4.66)

Taking (4.62)-(4.66), the terminal value conditions on p in (4.58), and Eq. (4.39) into account, we can verify by integration by parts in time that the left-hand side of (4.61) becomes

$$\begin{aligned}& \int^{T}_{0} \biggl\langle p \bigl(u^{*} \bigr), z'' - \Delta z'' + \Delta^{2} z + \int^{t}_{0} k(t-s) \Delta^{2} z(s) \,ds \biggr\rangle _{2, -2} \,dt \\& \qquad{} - \int^{T}_{0} \bigl(p \bigl(u^{*} \bigr), \bigl[z, -G \bigl[y \bigl(u^{*} \bigr),y \bigl(u^{*} \bigr) \bigr] \bigr] +2 \bigl[y \bigl(u^{*} \bigr), -G \bigl[ z, y \bigl(u^{*} \bigr) \bigr] \bigr] \bigr)_{2} \,dt \\& \quad = \int^{T}_{0} \bigl(p \bigl(u^{*} \bigr), B \bigl(u-u^{*} \bigr) \bigr)_{2} \,dt. \end{aligned}$$
(4.67)

Therefore, combining (4.61) and (4.67), we deduce that the optimality condition (4.57) is equivalent to

$$\int^{T}_{0} \bigl(p \bigl(u^{*} \bigr), B \bigl(u-u^{*} \bigr) \bigr)_{2}\,dt+ \bigl(Ru^{*}, u-u^{*} \bigr)_{\mathcal {U}} \geq0\quad\forall u\in{ \mathcal {U}}_{\mathrm{ad}}. $$

Summarizing, we obtain the following theorem.

Theorem 4.3

The optimal control \(u^{*}\) for (4.56) is characterized by the following system of equations and inequality:

$$\begin{aligned}& \textstyle\begin{cases} y_{tt}(u^{*}) - \Delta y_{tt}(u^{*}) + \Delta^{2} y(u^{*}) + \int^{t}_{0} k(t-s) \Delta^{2} y(u^{*};s)\,ds \\ \quad = [y(u^{*}), v(u^{*})] + Bu^{*} \quad\textit{in } Q, \\ \Delta^{2} v(u^{*}) = -[y(u^{*}),y(u^{*})] \quad\textit{in } Q, \\ y(u^{*}) = \frac{\partial y(u^{*}) }{\partial\nu} = v(u^{*}) = \frac {\partial v(u^{*}) }{\partial\nu}=0 \quad\textit{on } \Sigma, \\ y (u^{*};0) = y_{0},\qquad y_{ t}(u^{*}; 0) = y_{1} \quad \textit{in } \Omega, \end{cases}\displaystyle \\& \textstyle\begin{cases} p_{tt} (u^{*}) -\Delta p_{tt}(u^{*}) + \Delta^{2} p(u^{*}) + \int^{T}_{t} k(\sigma-t) \Delta^{2} p(u^{*}; \sigma)\,d \sigma\\ \quad = [p(u^{*}), -G[y(u^{*}),y(u^{*})]] +2 [y(u^{*}), -G[ p(u^{*}), y(u^{*})]] + y(u^{*})-Y_{d} \quad\textit{in } Q,\\ p(u^{*})= \frac{\partial p(u^{*})}{\partial\nu} =0 \quad\textit{on } \Sigma,\\ p(u^{*};T)= p_{t}(u^{*};T) =0 \quad \textit{in } \Omega, \end{cases}\displaystyle \\& \int^{T}_{0} \bigl(p \bigl(u^{*} \bigr), B \bigl(u-u^{*} \bigr) \bigr)_{2}\,dt+ \bigl(Ru^{*}, u-u^{*} \bigr)_{\mathcal {U}} \geq0\quad \forall u \in{\mathcal {U}}_{\mathrm{ad}}. \end{aligned}$$

4.3 Local uniqueness of an optimal control

We note that for nonlinear equations the uniqueness of an optimal control is not ensured in general. However, some partial results are worth noting. For instance, we can follow the result in [12] to obtain the local uniqueness of an optimal control in the case of distributive observation. For that reason, in this subsection we take \(M = L^{2}((0, t) \times\Omega)\) and observe that \(y \in L^{2}((0, t) \times\Omega)\). Hence, we consider the following quadratic cost functional:

$$ J(u) = \int^{t}_{0} \bigl\Vert y(u)-Y_{d} \bigr\Vert ^{2} \,ds +(Ru,u)_{\mathcal {U}} \quad\forall u\in{ \mathcal {U}}_{\mathrm{ad}} \subset{\mathcal {U}}, $$
(4.68)

where \(Y_{d} \in L^{2}((0, t) \times\Omega)\).

In order to show the local uniqueness of an optimal control by making use of the strict convexity of the quadratic cost (see [17]), we first consider the following proposition.

Proposition 4.2

The map \(w\to y(w)\) of \({\mathcal {U}}\) into \({S}(0,T)\) is second-order Gâteaux differentiable at \(w=u\), and the second-order Gâteaux derivative of \(y(w)\) at \(w=u\) in the direction \(w-u\in{\mathcal {U}}\), say \(g =D^{2} y(u)(w-u, w-u)\), is the unique solution of the following problem:

$$ \textstyle\begin{cases} g_{tt} - \Delta g_{tt} + \Delta^{2} g + \int^{t}_{0} k(t-s) \Delta^{2} g(s)\,ds \\ \quad = [g, -G[y(u), y(u)]]+2[y(u), -G[g,y(u)]] + F(z, y(u)) \quad \textit{in } Q,\\ g = \frac{\partial g}{\partial\nu} =0 \quad \textit{on } \Sigma,\\ g(0)=g_{t}(0) = 0 \quad\textit{in } \Omega, \end{cases} $$
(4.69)

where

$$F \bigl(z,y(u) \bigr)= 4 \bigl[z, -G \bigl[z, y(u) \bigr] \bigr]+2 \bigl[y(u), -G[z,z] \bigr], $$

and z is the weak solution of Eq. (4.39) with \(B(u-u^{*})\) replaced by \(B(w-u)\).
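
Formally, \(F(z, y(u))\) is precisely the second derivative of the von Kármán nonlinearity in the direction z: writing \(N(y) = [y, -G[y,y]]\) and using the bilinearity and symmetry of the bracket together with the linearity of G, a direct computation gives

$$\begin{aligned} N'(y)z &= \bigl[z, -G[y,y] \bigr] + 2 \bigl[y, -G[z,y] \bigr], \\ N''(y) (z,z) &= 4 \bigl[z, -G[z,y] \bigr] + 2 \bigl[y, -G[z,z] \bigr] = F(z,y), \end{aligned}$$

which, with \(y = y(u)\), accounts for both the linearized terms and the source term \(F(z, y(u))\) on the right-hand side of (4.69).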

Proof

The proof is similar to that of Theorem 4.2. □

Lemma 4.2

Let g be the weak solution of Eq. (4.69). Then we can show that

$$ \Vert g \Vert _{{ S}(0, T)} \leq C \Vert w- u \Vert ^{2}_{\mathcal {U}}, $$
(4.70)

where \(C > 0 \) is a constant depending on T and on the data of the equation for \(y(u)\).

Proof

Let z be the solution of Eq. (4.39) with \(B(u-u^{*})\) replaced by \(B(w-u)\). Then, using the same arguments as for Eq. (3.1), we deduce that

$$\begin{aligned} \Vert z \Vert _{{S}(0, T)} \leq& C \bigl\Vert B(w-u) \bigr\Vert _{L^{2}(0, T; L^{2})} \\ \leq& C \Vert B \Vert _{{\mathcal {L}}({\mathcal {U}}; L^{2}(0, T; L^{2} ))} \Vert w-u \Vert _{\mathcal {U}} \\ \leq& C \Vert w-u \Vert _{\mathcal {U}}. \end{aligned}$$
(4.71)

Also, for the solution g of Eq. (4.69), we can show that

$$\begin{aligned} \Vert g \Vert _{{ S}(0, T)} \leq& C \bigl\Vert F \bigl(z,y(u) \bigr) \bigr\Vert _{L^{2}(0, T; L^{2})} \\ \leq& C \bigl( \bigl\Vert 4 \bigl[z, -G \bigl[z, y(u) \bigr] \bigr] \bigr\Vert _{L^{2}(0, T; L^{2})} + \bigl\Vert 2 \bigl[y(u), -G[z,z] \bigr] \bigr\Vert _{L^{2}(0, T; L^{2})} \bigr) \\ \leq& C \bigl\Vert y(u) \bigr\Vert _{L^{2}(0, T; H^{2}_{0})} \Vert z \Vert ^{2}_{L^{\infty} (0, T; H^{2}_{0})} \\ \leq& C \sqrt{T} \bigl\Vert y(u) \bigr\Vert _{L^{\infty}(0, T; H^{2}_{0})} \Vert z \Vert ^{2}_{L^{\infty} (0, T; H^{2}_{0})} \\ \leq& C \sqrt{T} \Vert p \Vert _{{\mathcal {P}}} \Vert z \Vert ^{2}_{{S}(0, T)}, \end{aligned}$$
(4.72)

where \(p = (y_{0}, y_{1}, Bu)\). Combining (4.71) with (4.72), we have (4.70). □

We prove the local uniqueness of the optimal control.

Theorem 4.4

When t is small enough, there is a unique optimal control for the cost (4.68).

Proof

We show the local uniqueness by proving the strict convexity of the map \(u \in{\mathcal {U}}_{\mathrm{ad}} \to J(u)\). To this end, as in [17], it suffices to show that, for all \(u, w \in{\mathcal {U}}_{\mathrm{ad}}\) with \(u \ne w\),

$$ D^{2}J \bigl(u + \xi(w-u) \bigr) (w-u, w-u) > 0 \quad ( 0 < \xi< 1 ). $$
(4.73)

For simplicity, we denote \(y(u + \xi(w-u))\), \(z(u + \xi(w-u))\), and \(g(u + \xi(w-u))\) by \(y(\xi)\), \(z(\xi)\), and \(g(\xi)\), respectively. We calculate

$$\begin{aligned}& DJ \bigl(u + \xi(w-u) \bigr) (w-u) \\& \quad = \lim_{l \to0} \frac{J(u + (\xi+ l)(w-u)) - J(u + \xi(w-u))}{l} \\& \quad = 2 \int^{t}_{0} \bigl(y(\xi) - Y_{d}, z(\xi) \bigr)_{2} \,ds + 2 \bigl(R \bigl(u+ \xi(w-u) \bigr), w-u \bigr)_{\mathcal {U}}. \end{aligned}$$
(4.74)
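
Indeed, (4.74) can be obtained from the first-order expansion \(y(u + (\xi+l)(w-u)) = y(\xi) + l z(\xi) + o(l)\) provided by the Gâteaux differentiability of the solution map: for the first term of the cost,

$$\bigl\Vert y(\xi) + l z(\xi) - Y_{d} \bigr\Vert ^{2} - \bigl\Vert y(\xi) - Y_{d} \bigr\Vert ^{2} = 2 l \bigl(y(\xi) - Y_{d}, z(\xi) \bigr)_{2} + l^{2} \bigl\Vert z(\xi) \bigr\Vert ^{2},$$

so dividing by l and letting \(l \to 0\) yields the first term of (4.74); the second term follows analogously from the R-term of the cost.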

From (4.74) we obtain the second Gâteaux derivative of J:

$$\begin{aligned}& D^{2}J \bigl(u + \xi(w-u) \bigr) (w-u, w-u ) \\& \quad = \lim_{k \to0} \frac{DJ(u + (\xi+ k)(w-u))(w-u) - DJ(u + \xi (w-u))(w-u)}{k} \\& \quad = 2 \int^{t}_{0} \bigl(y(\xi) - Y_{d}, g(\xi) \bigr)_{2} \,ds + 2 \int^{t}_{0} \bigl\Vert z(\xi) \bigr\Vert ^{2} \,ds \\& \qquad{} + 2 \bigl(R (w-u), w-u \bigr)_{\mathcal {U}}. \end{aligned}$$
(4.75)
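
Similarly, (4.75) follows from the expansions \(y(\xi + k) = y(\xi) + k z(\xi) + o(k)\) and \(z(\xi + k) = z(\xi) + k g(\xi) + o(k)\), the latter being the content of Proposition 4.2 along the direction \(w - u\):

$$\bigl(y(\xi+k) - Y_{d}, z(\xi+k) \bigr)_{2} - \bigl(y(\xi) - Y_{d}, z(\xi) \bigr)_{2} = k \bigl\Vert z(\xi) \bigr\Vert ^{2} + k \bigl(y(\xi) - Y_{d}, g(\xi) \bigr)_{2} + o(k),$$

which, after dividing by k, integrating over \((0, t)\), and letting \(k \to 0\), gives the first two terms of (4.75); the R-term, being affine in ξ, contributes the last term.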

By Lemma 4.2 and (4.75) we deduce that

$$\begin{aligned}& D^{2}J \bigl(u + \xi(w-u) \bigr) (w-u, w-u ) \\& \quad \ge - 2 \bigl\Vert g(\xi) \bigr\Vert _{L^{\infty}(0, t; L^{2})} \int^{t}_{0} \bigl\Vert y(\xi) - Y_{d} \bigr\Vert \,ds \\& \qquad{} + 2 \int^{t}_{0} \bigl\Vert z(\xi) \bigr\Vert ^{2} \,ds + 2 d \Vert w-u \Vert ^{2}_{\mathcal {U}} \\& \quad \ge - 2 C \sqrt{t} \bigl\Vert g(\xi) \bigr\Vert _{{ S}(0, t)} \bigl\Vert y(\xi) - Y_{d} \bigr\Vert _{L^{2}(0, t; L^{2})} \\& \qquad{} + 2 \int^{t}_{0} \bigl\Vert z(\xi) \bigr\Vert ^{2} \,ds + 2 d \Vert w-u \Vert ^{2}_{\mathcal {U}} \\& \quad \ge 2 \bigl( d - C \sqrt{t} \bigl\Vert y(\xi) - Y_{d} \bigr\Vert _{L^{2}(0, t; L^{2})} \bigr) \Vert w-u \Vert ^{2}_{\mathcal {U}} \\& \qquad{} + 2 \int^{t}_{0} \bigl\Vert z(\xi) \bigr\Vert ^{2} \,ds. \end{aligned}$$
(4.76)

Here, since the factor \(\sqrt{t}\) and the norm \(\Vert y(\xi) - Y_{d} \Vert _{L^{2}(0, t; L^{2})}\) do not increase as t decreases, we can take \(t > 0 \) small enough so that \(d - C \sqrt{t} \Vert y(\xi) - Y_{d} \Vert _{L^{2}(0, t; L^{2})} > 0\), and hence the right-hand side of (4.76) is strictly greater than 0 whenever \(w \ne u\). Therefore, we obtain the strict convexity of the quadratic cost \(J(u)\), \(u \in{\mathcal {U}}_{\mathrm{ad}}\), which proves the theorem. □

Remark 4.1

If we assume that d is large enough, then the quadratic cost (4.68) is strictly convex in the global sense, and therefore the conclusion of Theorem 4.4 holds globally for the cost (4.68).

References

  1. Lagnese, JE: Boundary Stabilization of Thin Plates. SIAM, Philadelphia (1989)

  2. Cavalcanti, MM, Cavalcanti, ADD, Lasiecka, I, Wang, X: Existence and sharp decay rate estimates for a von Karman system with long memory. Nonlinear Anal., Real World Appl. 22, 289-306 (2015)

  3. Chueshov, I, Lasiecka, I: Long time dynamics of von Karman evolutions with thermal effects. Bol. Soc. Parana. Mat. 25(1-2), 37-54 (2007)

  4. Ji, G: Uniform decay rates and asymptotic analysis of the von Kármán plate with nonlinear dissipation in the boundary moments. Nonlinear Anal., Theory Methods Appl. 42, 835-870 (2000)

  5. Hwang, J, Nakagiri, S: On semi-linear second order Volterra integro-differential equations in Hilbert space. Taiwan. J. Math. 12, 679-701 (2008)

  6. Lions, JL: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)

  7. Dautray, R, Lions, JL: Mathematical Analysis and Numerical Methods for Science and Technology, Vol. 5: Evolution Problems I. Springer, Berlin (1992)

  8. Hwang, J, Nakagiri, S: Optimal control problems for the equation of motion of membrane with strong viscosity. J. Math. Anal. Appl. 321, 327-342 (2006)

  9. Hwang, J, Nakagiri, S: Optimal control problems for Kirchhoff type equation with a damping term. Nonlinear Anal., Theory Methods Appl. 72, 1621-1631 (2010)

  10. Hwang, J: Optimal control problems for an extensible beam equation. J. Math. Anal. Appl. 353, 436-448 (2009)

  11. Hwang, J: Parameter identification problems for an extensible beam equation. J. Math. Anal. Appl. 359, 682-695 (2009)

  12. Ryu, S: Optimal control problems governed by some semilinear parabolic equations. Nonlinear Anal., Theory Methods Appl. 56, 241-252 (2004)

  13. Lions, JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris (1969)

  14. Favini, A, Horn, M, Lasiecka, I, Tataru, D: Global existence, uniqueness and regularity of solutions to a von Karman equation with nonlinear boundary dissipation. Differ. Integral Equ. 9, 267-294 (1996). Addendum to this paper. Differ. Integral Equ. 10, 197-200 (1997)

  15. Lions, JL, Magenes, E: Non-Homogeneous Boundary Value Problems and Applications I, II. Springer, Berlin (1972)

  16. Temam, R: Navier-Stokes Equations. North-Holland, Amsterdam (1984)

  17. Zeidler, E: Nonlinear Functional Analysis and Its Applications, II/B, Nonlinear Monotone Operators. Springer, Berlin (1990)

Acknowledgements

This research was supported by the Daegu University Research Grant 2013.

Author information

Correspondence to Jinsoo Hwang.

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Hwang, J. Optimal control problems for a von Kármán system with long memory. Bound Value Probl 2016, 87 (2016). https://doi.org/10.1186/s13661-016-0594-7

