# Optimal decay result for Kirchhoff plate equations with nonlinear damping and very general type of relaxation functions

## Abstract

In this paper, we consider plate equations with viscoelastic damping localized on a part of the boundary and nonlinear frictional damping in the domain. We establish general and optimal decay rate results for a wide class of relaxation functions. These results are obtained without imposing any restrictive growth assumption on the frictional damping term, and they generalize several earlier results in the literature.

## Introduction

In this paper, we consider the following Kirchhoff plate equations:

\begin{aligned}& u_{tt}+\Delta ^{2}u+\eta (t)h(u_{t})=0, \quad \text{in } \varOmega \times (0,\infty ), \end{aligned}
(1)
\begin{aligned}& u=\frac{\partial u}{\partial \nu }=0,\quad \text{on } \varGamma _{0}\times (0,\infty ), \end{aligned}
(2)
\begin{aligned}& -u+ \int _{0}^{t}k_{1}(t-s)\varPhi _{2}u(s)\,ds=0,\quad \text{on } \varGamma _{1}\times (0,\infty ), \end{aligned}
(3)
\begin{aligned}& \frac{\partial u}{\partial \nu }+ \int _{0}^{t}k_{2}(t-s)\varPhi _{1}u(s)\,ds=0,\quad \text{on } \varGamma _{1}\times (0,\infty ), \end{aligned}
(4)
\begin{aligned}& u(x,0)=u_{0}(x),\qquad u_{t}(x,0)=u_{1}(x), \quad \text{in } \varOmega . \end{aligned}
(5)

In system (1)–(5), $$u = u(x, t)$$ is the transversal displacement of a thin vibrating plate subjected to boundary viscoelastic damping and an internal time-dependent frictional damping. The integral terms in (3) and (4) describe the memory effects, which arise, for example, from the interaction with another viscoelastic element. In the above system, $$\eta \in C^{1} (0, \infty )$$ is a positive nonincreasing function, called the time-dependent coefficient of the frictional damping, and $$u_{0}$$ and $$u_{1}$$ are the initial data. The functions $$k_{1}, k _{2} \in C^{1} (0, \infty )$$ are positive and nonincreasing, called relaxation functions, and h is a function that satisfies some conditions stated below. We denote by $$\varPhi _{1}$$, $$\varPhi _{2}$$ the differential operators

$$\varPhi _{1}u=\Delta u+(1-\rho )D_{1}u,\qquad \varPhi _{2}u=\frac{\partial \Delta u}{\partial \nu }+(1- \rho )\frac{\partial D_{2}u}{\partial \delta },$$

where

$$D_{1}u=2\nu _{1}\nu _{2} u_{xy}-\nu _{1}^{2}u_{yy}-\nu _{2}^{2}u_{xx}, \qquad D_{2}u= \bigl(\nu _{1}^{2}-\nu _{2}^{2} \bigr)u_{xy}+\nu _{1}\nu _{2} (u_{yy}-u_{xx} ),$$

and $$\rho \in (0,\frac{1}{2})$$ represents the Poisson coefficient. The vector $$\nu =(\nu _{1}, \nu _{2})$$ denotes the unit outward normal and $$\delta =(-\nu _{2}, \nu _{1})$$ denotes the unit tangent vector along the boundary of the domain. The stability of Kirchhoff plate equations in which the boundary (internal) feedback is linear or nonlinear has been studied by several authors, such as Lagnese, Komornik, Lasiecka, Cavalcanti et al., Ammari and Tucsnak, Komornik, Guzman and Tucsnak, Vasconcellos and Teixeira, and Pazoto et al. For the existence, multiplicity and asymptotic behavior of nonnegative solutions of a fractional Schrödinger–Poisson–Kirchhoff type system, we refer to Xiang and Wang. A large number of papers discuss plate equations with memory effects in the domain or on the boundary. For internal viscoelastic damping, we refer to Lagnese and Rivera et al., who proved that the energy decays exponentially (polynomially) if the relaxation function k decays exponentially (polynomially). Alabau-Boussouira et al. obtained the same results for an abstract problem. Regarding the internal frictional damping, when the viscoelastic term is absent and $$\eta \equiv 1$$, problem (1) was studied by Enrike, who established exponential decay for the wave equation with a linear damping term. This result was extended by Komornik and Nakao, who used different methods and treated the case of a nonlinear damping term. For boundary damping, Santos and Junior showed that the energy decays exponentially if the resolvent kernel r decays exponentially, and polynomially if r decays polynomially. In the presence of $$\eta (t)$$, Benaissa et al. established energy decay results which depend on h and $$\eta (t)$$. In all the above works, the decay rates of the relaxation function were either of exponential or of polynomial type.
In 2008, Messaoudi gave general decay rates for an extended class of relaxation functions, for which the exponential and polynomial decay rates are merely special cases. However, the optimal decay rates in the polynomial case were not obtained. Specifically, he considered a relaxation function k that satisfies

$$k^{\prime }(t)\le -\xi (t) k^{p}(t), \quad t\ge 0,$$
(6)

where $$p=1$$ and ξ is a positive nonincreasing differentiable function. He showed that the decay rate of the energy coincides with the decay rate of the kernel k, which is not necessarily of exponential or polynomial type. After that, various papers appeared which used condition (6) with $$p=1$$; see, for instance, [21,22,23,24,25,26,27,28,29,30]. Lasiecka and Tataru took one step further and considered the following condition:

$$k^{\prime }(t)\le -G\bigl(k(t)\bigr),$$
(7)

where G is a positive, strictly increasing and strictly convex function on $$(0,R_{0}]$$ satisfying $$G(0)=G^{\prime }(0)=0$$. Using this condition and imposing additional conditions on G, several authors, following different approaches, obtained general decay results in terms of G; see, for example, [32,33,34,35,36]. Later, condition (6) was extended by Messaoudi and Al-Khulaifi to the range $$1\le p <\frac{3}{2}$$ only, and they obtained general and optimal decay results. Lasiecka et al. established optimal decay rates for all $$1 \le p < 2$$, but with $$\xi (t)=1$$. Very recently, Mustafa obtained optimal exponential and polynomial decay rates for all $$1 \le p < 2$$ with ξ a function of t. The works most closely related to our study are those of Kang, Mustafa and Abusharkh, and Mustafa. Kang investigated the system (1)–(5) with $$\eta (t) \equiv 1$$ and

$$G_{i} \bigl(-r_{i}'(t) \bigr)=-r_{i}'(t),\quad \forall i=1,2,$$
(8)

and established general decay results. Mustafa and Abusharkh considered the system (1)–(5), but with the condition

$$r_{i}''(t) \geq G\bigl(- r_{i}^{\prime }(t)\bigr),\quad \forall i=1,2,$$
(9)

and $$h(t) \equiv 0$$, and established explicit and general decay rate results. Very recently, Mustafa studied system (1)–(5) under the same condition (9) and obtained a general decay rate result. Our contribution in this paper is to investigate the system (1)–(5) under a very general assumption on the resolvent kernels $$r_{i}$$. This assumption is more general than those of the earlier results in [40, 41], as it allows for the presence of $$\xi _{i}(t)$$ together with a very general growth condition on the relaxation functions. Furthermore, we obtain our results without imposing any restrictive growth assumption on the damping term, and we take into account the effect of the time-dependent coefficient $$\eta (t)$$. The rest of the paper is organized as follows: in Sect. 2, we present some preliminaries and assumptions; in Sect. 3, we state our main results and provide some examples; in Sect. 4, some technical lemmas are presented and established; finally, we prove and discuss our decay results.

## Preliminaries

In this section, we present some material needed in the proofs of our results. Throughout this paper, $$L^{2}(\varOmega )$$ stands for the standard Lebesgue space and $$H_{0}^{1}(\varOmega )$$ for the usual Sobolev space, equipped with their usual scalar products and norms. Moreover, we denote by W the space $$W=\{w\in H^{2}(\varOmega ): w=\frac{\partial w}{\partial \nu }=0 \text{ on } \varGamma _{0}\}$$, and by $$r_{i}$$ the resolvent kernel of $$\frac{-k_{i}^{\prime }}{k_{i}(0)}$$, which satisfies

$$r_{i}(t)+\frac{1}{k_{i}(0)}\bigl(k_{i}^{\prime }*r_{i} \bigr) (t)=- \frac{1}{k_{i}(0)}k^{\prime }_{i}(t),\quad \forall i=1,2,$$

where $$*$$ denotes the convolution product

$$(f*g) (t)= \int _{0}^{t}f(t-s)g(s)\,ds.$$
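Although it is not part of the analysis, a short numerical sketch can illustrate the resolvent equation above. For the sample kernel $$k(t)=e^{-t}$$ (an assumption made here only for illustration), one has $$k(0)=1$$ and $$k'(t)=-e^{-t}$$, and the resolvent equation is solved exactly by the constant function $$r\equiv 1$$:

```python
import math

# Illustrative check (assumed sample kernel, not from the paper):
# k(t) = e^{-t}, so k(0) = 1 and k'(t) = -e^{-t}; the resolvent equation
#   r(t) + (1/k(0)) (k' * r)(t) = -(1/k(0)) k'(t)
# is solved exactly by r(t) = 1, which we verify on a discrete grid.

def conv(f, g, t, n=2000):
    """Trapezoidal approximation of (f*g)(t) = int_0^t f(t-s) g(s) ds."""
    h = t / n
    vals = [f(t - i * h) * g(i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

k0 = 1.0
kprime = lambda t: -math.exp(-t)
r = lambda t: 1.0  # candidate resolvent kernel

for t in (0.5, 1.0, 2.0):
    lhs = r(t) + (1.0 / k0) * conv(kprime, r, t)
    rhs = -(1.0 / k0) * kprime(t)
    assert abs(lhs - rhs) < 1e-6
print("resolvent identity verified")
```

The trapezoidal rule suffices here because both sides of the identity are smooth in t.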

From (3) and (4), we get the following Volterra equations:

\begin{aligned}& \varPhi _{2}u+\frac{1}{k_{1}(0)}k_{1}^{\prime }*\varPhi _{2}u= \frac{1}{k_{1}(0)}u_{t}, \\& \varPhi _{1}u+\frac{1}{k_{2}(0)}k^{\prime }_{2}*\varPhi _{1}u=- \frac{1}{k_{2}(0)}\frac{\partial u_{t}}{\partial \nu }. \end{aligned}

Taking $$\tau _{i}=\frac{1}{k_{i}(0)}$$, for $$i=1,2$$, and using the Volterra’s inverse operator, we get

\begin{aligned}& \varPhi _{2}u=\tau _{1}\{u_{t}+r_{1}*u_{t} \},\quad \text{on } \varGamma _{1}\times (0,\infty ), \\& \varPhi _{1}u=-\tau _{2}\biggl\{ \frac{\partial u_{t}}{\partial \nu }+r_{2}* \frac{ \partial u_{t}}{\partial \nu }\biggr\} ,\quad \text{on } \varGamma _{1}\times (0,\infty ). \end{aligned}

In our paper, we assume that $$u_{0} \equiv 0$$, so we have

\begin{aligned}& \varPhi _{2}u=\tau _{1}\bigl\{ u_{t}+r_{1}(0) u+r_{1}'*u\bigr\} , \quad \text{on } \varGamma _{1}\times (0,\infty ), \end{aligned}
(10)
\begin{aligned}& \varPhi _{1}u=-\tau _{2}\biggl\{ \frac{\partial u_{t}}{\partial \nu }+r_{2}(0)\frac{ \partial u}{\partial \nu }+r_{2}'* \frac{\partial u}{\partial \nu }\biggr\} ,\quad \text{on } \varGamma _{1}\times (0,\infty ). \end{aligned}
(11)

Throughout the paper, c is a generic positive constant and we use (10) and (11) instead of (3) and (4).
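The passage from the Volterra form to (10) and (11) rests on the integration-by-parts identity $$(r*u_{t})(t)=r(0)u(t)+(r'*u)(t)$$, valid when $$u(0)=0$$. A quick numerical check with assumed sample data (here $$r(t)=e^{-2t}$$ and $$u(t)=\sin t$$, our own illustrative choices) confirms it:

```python
import math

# Illustrative check of (r * u_t)(t) = r(0) u(t) + (r' * u)(t) for u(0) = 0,
# with assumed sample data r(t) = e^{-2t}, u(t) = sin t.

def conv(f, g, t, n=4000):
    """Trapezoidal approximation of (f*g)(t) = int_0^t f(t-s) g(s) ds."""
    h = t / n
    vals = [f(t - i * h) * g(i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

r  = lambda t: math.exp(-2 * t)
rp = lambda t: -2 * math.exp(-2 * t)   # r'
u  = lambda t: math.sin(t)             # note u(0) = 0
ut = lambda t: math.cos(t)             # u_t

for t in (0.7, 1.5, 3.0):
    lhs = conv(r, ut, t)
    rhs = r(0) * u(t) + conv(rp, u, t)
    assert abs(lhs - rhs) < 1e-4
print("integration-by-parts identity verified")
```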

### Assumptions

$$(A{1})$$ :

We assume that $$\varOmega \subset \mathbb{R}^{2}$$ is a bounded domain with a smooth boundary $$\varGamma =\varGamma _{0}\cup \varGamma _{1}$$. Here, the partitions $$\varGamma _{0}$$ and $$\varGamma _{1}$$ are closed and disjoint. We also assume that $$\operatorname{meas}(\varGamma _{0})> 0$$, and there exists a fixed point $$x_{0}\in \mathbb{R}^{2}$$ such that $$m\cdot \nu \le 0$$ on $$\varGamma _{0}$$ and $$m \cdot \nu > 0$$ on $$\varGamma _{1}$$ where $$m(x):=x-x _{0}$$ and ν is the unit outward normal vector. This assumption leads to positive constants $$\delta _{0}$$ and R such that

$$m \cdot \nu \ge \delta _{0} > 0\quad \text{on } \varGamma _{1} \quad \text{and}\quad \bigl\vert m(x) \bigr\vert \le R, \quad \forall x\in \varOmega .$$
$$(A{2})$$ :

We assume that $$h:\mathbb{R} \rightarrow \mathbb{R}$$ is a $$C^{0}$$ nondecreasing function and there exists a strictly increasing function $$h_{0}\in C^{1}(\mathbb{R}^{+})$$ with $$h_{0}(0)=0$$ such that

\begin{aligned} &h_{0}\bigl( \vert s \vert \bigr)\le \bigl\vert h(s) \bigr\vert \le h_{0}^{-1} \bigl( \vert s \vert \bigr)\quad \text{for all } \vert s \vert \le \epsilon , \\ & c_{1} \vert s \vert \le \bigl\vert h(s) \bigr\vert \le c_{2} \vert s \vert \quad \text{for all } \vert s \vert \ge \epsilon , \end{aligned}
(12)

where $$c_{1}$$, $$c_{2}$$, ϵ are positive constants. In the case where $$h_{0}$$ is nonlinear, we assume that the function H defined by $$H(s)=\sqrt{s}\,h_{0} (\sqrt{s})$$ is a strictly convex $$C^{2}$$ function on $$(0,r_{0}]$$, where $$r_{0} > 0$$.
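For the model choice $$h_{0}(s)=s^{q}$$ with $$q>1$$ (an assumption made here only for illustration), one gets $$H(s)=s^{\frac{q+1}{2}}$$, and the required strict convexity can be checked numerically:

```python
# Illustrative check of (A2) for the assumed model damping h_0(s) = s^q, q > 1:
# then H(s) = sqrt(s) h_0(sqrt(s)) = s^{(q+1)/2}, which is strictly convex
# on (0, r_0]. We confirm positivity of a central second difference.

q = 5.0
H = lambda s: s ** ((q + 1) / 2)   # here H(s) = s^3

eps = 1e-4
for s in (0.1, 0.3, 0.6, 0.9):
    second_diff = (H(s + eps) - 2 * H(s) + H(s - eps)) / eps ** 2
    assert second_diff > 0  # strict convexity at the sample points
print("H is strictly convex on the sample grid")
```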

$$(A{3})$$ :

We assume that $$r_{i} : \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}$$, for $$i=1,2$$, is a $$C^{2}$$ function satisfying

$$\lim_{t \rightarrow \infty } r_{i}(t)=0, \qquad r_{i}(0) > 0 , \qquad r_{i}'(t) \leq 0,$$
(13)

and that there exist a positive, differentiable and nonincreasing function $$\xi _{i} : \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}$$ and a function $$G_{i} \in C^{1}(\mathbb{R}^{+})$$, which is either linear or a strictly increasing and strictly convex $$C^{2}$$ function on $$(0,R_{i}]$$, $$R_{i} > 0$$, with $$G_{i}(0) = G_{i}'(0) = 0$$, such that

$$r_{i}''(t) \geq \xi _{i}(t) G_{i} \bigl(-r_{i}'(t) \bigr), \quad (i = 1, 2) \ \forall t > 0.$$
(14)

Furthermore, we assume that the system (1)–(5) has a unique solution

$$u\in L^{\infty }\bigl(\mathbb{R}^{+};H^{4}(\varOmega ) \cap W\bigr)\cap W^{1,\infty }\bigl(\mathbb{R}^{+};W\bigr)\cap W^{2,\infty }\bigl(\mathbb{R}^{+};L^{2}(\varOmega )\bigr).$$

This result can be obtained by using the Galerkin method as in Park and Kang and Santos et al.

### Remark 2.1

It is worth noting that condition (12) was first considered in the literature.

### Remark 2.2

Using Assumption $$(A2)$$, one may notice that $$s h(s)>0$$ for all $$s\ne 0$$.

### Remark 2.3

If G is a strictly increasing and strictly convex $$C^{2}$$ function on $$(0, r_{1}]$$, with $$G(0) = G'(0) = 0$$, then it has an extension $$\overline{G}$$, which is a strictly increasing and strictly convex $$C^{2}$$ function on $$(0,\infty )$$. For instance, if $$G(r_{1}) = a$$, $$G'(r _{1}) = b$$, $$G''(r_{1}) = c$$, we can define $$\overline{G}$$, for $$t > r_{1}$$, by

$$\overline{G}(t)=\frac{c}{2} t^{2}+ (b-cr_{1})t+ \biggl(a+\frac{c}{2} {r_{1}}^{2}-b r_{1} \biggr).$$
(15)

The same construction applies to H, giving the extension $$\overline{H}$$.
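The construction (15) can be verified numerically; the following sketch uses the assumed sample function $$G(t)=t^{3}$$ (our own choice for illustration) and checks the $$C^{2}$$ matching at $$t=r_{1}$$ by finite differences.

```python
# Numerical illustration of Remark 2.3 (sample data assumed): for G(t) = t^3
# on (0, r_1], the quadratic extension (15), built from a = G(r_1),
# b = G'(r_1), c = G''(r_1), matches G up to second order at t = r_1.

r1 = 0.5
a, b, c = r1 ** 3, 3 * r1 ** 2, 6 * r1   # G(r_1), G'(r_1), G''(r_1) for G(t) = t^3

def Gbar(t):
    # the extension (15), used for t > r_1
    return (c / 2) * t ** 2 + (b - c * r1) * t + (a + (c / 2) * r1 ** 2 - b * r1)

eps = 1e-5
assert abs(Gbar(r1) - a) < 1e-12                                        # value matches
assert abs((Gbar(r1 + eps) - Gbar(r1 - eps)) / (2 * eps) - b) < 1e-8    # slope matches
assert abs((Gbar(r1 + eps) - 2 * Gbar(r1) + Gbar(r1 - eps)) / eps ** 2 - c) < 1e-3
print("quadratic extension matches G, G', G'' at r_1")
```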

Now, we define the bilinear form $$a(\cdot , \cdot )$$ as follows:

$$a(u,v)= \int _{\varOmega }\bigl\{ u_{xx}v_{xx}+u_{yy}v_{yy}+ \rho (u_{xx}v_{yy}+u _{yy}v_{xx})+2(1-\rho )u_{xy}v_{xy}\bigr\} \,dx\,dy.$$
(16)

It is well known that $$\sqrt{a(u , u)}$$ is an equivalent norm on W, that is,

$$\beta _{1} \Vert u \Vert ^{2}_{H^{2}(\varOmega )} \leq a(u,u) \leq \beta _{2} \Vert u \Vert ^{2}_{H^{2}(\varOmega )},$$
(17)

for some positive constants $$\beta _{1}$$ and $$\beta _{2}$$. From (17) and the Sobolev embedding theorem, we have, for some positive constants $$c_{p}$$ and $$c_{s}$$,

$$\Vert u \Vert ^{2} \leq c_{p} a(u,u), \quad \text{and} \quad \Vert \nabla u \Vert ^{2} \leq c_{s} a(u,u), \quad \forall u \in H^{2}(\varOmega ).$$
(18)

The energy functional associated with (1)–(5) is

\begin{aligned}[b]E(t)&:=\frac{1}{2} \biggl[ \int _{\varOmega }{ \vert u_{t} \vert }^{2}+a(u,u)+ \tau _{1} \int _{\varGamma _{1}}\bigl(r_{1}(t){ \vert u \vert }^{2}-\bigl(r_{1}^{\prime }\circ u\bigr)\bigr)\,d\varGamma \biggr] \\ &\quad {}+\frac{1}{2} \biggl[\tau _{2} \int _{\varGamma _{1}} \biggl(r _{2}(t){ \biggl\vert \frac{\partial u}{\partial \nu } \biggr\vert }^{2}- \biggl(r_{2} ^{\prime } \circ \frac{\partial u}{\partial \nu } \biggr) \biggr) \,d \varGamma \biggr], \end{aligned}
(19)

where $$(f \circ g)(t)=\int _{0}^{t} f(t-s) |g(t)-g(s)|^{2} \,ds$$.

Our main stability results are in the following two theorems.

## The main results

### Theorem 3.1

Assume that $$(A1)$$–$$(A3)$$ are satisfied and $$h_{0}$$ is linear. Then the solution of (1)–(5) satisfies, for all $$t \geq t_{1}$$,

\begin{aligned}& E (t)\le c_{1} e^{-c_{2}\int _{t_{1}}^{t}\sigma (s)\,ds},\quad \textit{if }G\textit{ is linear}, \end{aligned}
(20)
\begin{aligned}& E (t)\le m_{2} G_{4}^{-1} \biggl(m_{1} \int _{t_{1}}^{t}\sigma (s)\,ds \biggr),\quad \textit{if }G\textit{ is nonlinear}, \end{aligned}
(21)

where $$c_{1}$$, $$c_{2}$$, $$m_{1}$$ and $$m_{2}$$ are strictly positive constants, $$G_{4}(t)=\int _{t}^{r}\frac{1}{sG^{\prime }(s)}\,ds$$, $$G= \min \{G_{1}, G_{2} \}$$, and $$\sigma (t)=\min \{\eta (t),\xi (t)\}$$ with $$\xi (t)=\min \{\xi _{1}(t), \xi _{2}(t) \}$$; here $$G_{1}$$, $$G_{2}$$ and $$\xi _{1}$$, $$\xi _{2}$$ are as in $$(A3)$$.

### Theorem 3.2

Assume that $$(A1)$$–$$(A3)$$ are satisfied and $$h_{0}$$ is nonlinear. Then there exist strictly positive constants $$c_{3}$$, $$c_{4}$$, $$m_{3}$$, $$m_{4}$$, $$\varepsilon _{1}$$ and $$\varepsilon _{2}$$ such that the solution of (1)–(5) satisfies, for all $$t \geq t_{1}$$,

$$E(t)\le H_{1}^{-1} \biggl(c_{3} \int _{t_{1}}^{t}\sigma (s)\,ds+c_{4} \biggr),\quad \textit{if }G\textit{ is linear},$$
(22)

where $$H_{1}(t)=\int _{t}^{1}\frac{1}{H_{2}(s)}\,ds$$ and $$H_{2}(t)=t H ^{\prime }(\varepsilon _{1}t)$$.

$$E(t) \leq m_{4} (t-t_{1}) {W_{2}}^{-1} \biggl( \frac{m_{3}}{(t-t_{1}) \int _{t_{1}}^{t}\sigma (s) \,ds } \biggr),\quad \textit{if }G \textit{ is nonlinear},$$
(23)

where $$W_{2}(t)=tW'(\varepsilon _{2} t)$$ and $$W= (\overline{G}^{-1}+ \overline{H}^{-1} )^{-1}$$, with $$\overline{G}$$, $$\overline{H}$$ introduced in Remark 2.3.

### Remark 3.1

In (21), one can see that the decay rate of $$E(t)$$ is consistent with the decay rate of $$(-r_{i}'(t))$$ given by (14). So, the decay rate of $$E(t)$$ is optimal.

In fact, using the general assumption (14), and taking into account the fact that $$G=\min \{G_{1}, G_{2}\}$$ and $$\sigma (t)= \min \{\eta (t), \xi (t)\}$$, we have

$$-r_{i}'(t)\le G_{5}^{-1} \biggl( \int _{(-r_{i}')^{-1}(r)}^{t}\sigma (s)\,ds \biggr),\quad \forall t \ge \bigl(-r_{i}'\bigr)^{-1}(r),$$

where $$G_{5}(t)=\int _{t}^{r}\frac{1}{G(s)} \,ds$$. Using the properties of G, we get

$$G_{4}(t)= \int _{t}^{r}\frac{1}{s G^{\prime }(s)}\,ds\le \int _{t}^{r} \frac{1}{G(s)}\,ds=G_{5}(t).$$

Also, since $$G_{4}\le G_{5}$$ and both functions are decreasing, we have

$$G_{4}^{-1}(t)\le G_{5}^{-1}(t).$$

This shows that (21) provides the best decay rates expected under the very general assumption (14).
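The comparison $$G_{4}\le G_{5}$$ used above relies on the convexity inequality $$sG'(s)\ge G(s)$$, valid for convex G with $$G(0)=0$$. A numerical check for the assumed sample $$G(s)=s^{3/2}$$ (our own illustrative choice):

```python
# Numeric illustration (assumed example G(s) = s^{3/2}) of the comparison
#   G_4(t) = int_t^r ds / (s G'(s))  <=  int_t^r ds / G(s) = G_5(t),
# which follows from s G'(s) >= G(s) for convex G with G(0) = 0.

def quad(f, lo, hi, n=5000):
    """Trapezoidal approximation of int_lo^hi f(s) ds."""
    h = (hi - lo) / n
    vals = [f(lo + i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

G  = lambda s: s ** 1.5
Gp = lambda s: 1.5 * s ** 0.5   # G'
r  = 1.0

for t in (0.01, 0.1, 0.5):
    G4 = quad(lambda s: 1 / (s * Gp(s)), t, r)
    G5 = quad(lambda s: 1 / G(s), t, r)
    assert G4 <= G5   # here G_4 = (2/3) G_5 exactly
print("G_4 <= G_5 on sample points")
```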

### Example 3.3

1. (1.A)

$$h_{0}$$ and G are linear and $$\eta (t) \equiv 1$$.

Let $$r_{i}'(t)= - a_{i} e^{-b_{i}(1+t)}$$, where $$b_{i} > 0$$ and $$a_{i} > 0$$, $$\forall i=1,2$$, so that Assumption $$(A3)$$ is satisfied with $$r_{i}^{\prime \prime }(t)=\xi _{i}(t) G_{i}(-r_{i}'(t))$$. We take $$a=\min \{ a_{1}, a_{2}\}$$, $$b=\min \{ b_{1}, b_{2}\}$$, $$G=\min \{ G _{1}, G_{2}\}$$, $$\xi (t)=\min \{ \xi _{1}(t), \xi _{2}(t)\}$$ and $$\sigma (t) = \min \{ \eta (t), \xi (t)\}$$. Hence, $$G(t)=t$$, $$\xi (t)=b$$ and we let $$\sigma (t):=b_{0}=\min \{1,b\}$$. For the frictional damping, let $$h_{0}(t)=ct$$, so that $$H(t)=\sqrt{t}\, h_{0}( \sqrt{t})=ct$$. Therefore, we can use (20) to deduce

$$E(t) \leq c_{1} e^{-c_{2}t} ,$$
(24)

which is the exponential decay.

2. (1.B)

$$h_{0}$$ and G are linear and $$\eta (t)= \frac{b}{t+1}$$.

Let $$r_{i}'(t)= - a_{i} e^{-b_{i}(1+t)}$$, where $$b_{i} > 0$$ and $$a_{i} > 0$$, $$\forall i=1,2$$, so that Assumption $$(A3)$$ is satisfied with $$r_{i}^{\prime \prime }(t)=\xi _{i}(t) G_{i}(-r_{i}'(t))$$. We take $$a=\min \{ a_{1}, a_{2}\}$$, $$b=\min \{ b_{1}, b_{2}\}$$, $$G=\min \{ G _{1}, G_{2}\}$$, $$\xi (t)=\min \{ \xi _{1}(t), \xi _{2}(t)\}$$ and $$\sigma (t) = \min \{ \eta (t), \xi (t)\}$$. Hence, $$G(t)=t$$, $$\xi (t)=b$$ and $$\sigma (t)=\frac{b}{t+1}$$. For the frictional damping, let $$h_{0}(t)=ct$$, so that $$H(t)=\sqrt{t}\, h_{0}(\sqrt{t})=ct$$. Therefore, we can use (20) to deduce

$$E(t) \leq \frac{c}{ (t+1 )^{c_{2} b}} ,$$
(25)
3. (2)

$$h_{0}$$ is linear, G is nonlinear and $$\eta (t) \equiv 1$$.

Let $$r_{i}'(t)= - a_{i} e^{-t^{q}}$$, where $$0< q<1$$ and $$a_{i}>0$$, $$\forall i=1,2$$, so that Assumption $$(A3)$$ is satisfied with $$r_{i}''(t)= \xi _{i}(t)G_{i} ( -r_{i}'(t) )$$. We take $$a=\min \{ a_{1}, a_{2}\}$$, $$G=\min \{ G_{1}, G_{2}\}$$, $$\xi (t)= \min \{ \xi _{1}(t), \xi _{2}(t)\}$$ and $$\sigma (t) = \min \{ \eta (t), \xi (t)\}$$. Hence, $$\xi (t)=1$$ and $$G(t)=\frac{qt}{ ( \ln (a/t) ) ^{\frac{1}{q}-1}}$$. In this case, $$\sigma (t) \equiv 1$$. For the frictional damping, let $$h_{0}(t)=ct$$, so that $$H(t)=\sqrt{t}\, h_{0}( \sqrt{t})=ct$$. Since

$$G'(t)=\frac{(1-q)+q \ln (a/t)}{ ( \ln (a/t) )^{1/q}}$$

and

$$G''(t)=\frac{(1-q) ( \ln (a/t)+1/q )}{t ( \ln (a/t) )^{\frac{1}{q}+1}},$$

the function G satisfies the condition $$(A3)$$ on $$(0,r]$$ for any $$r>0$$. We have

\begin{aligned}[b]G_{4}(t)&= \int _{t}^{r}\frac{1}{sG^{\prime }(s)}\,ds= \int _{t}^{r}\frac{ [\ln {\frac{a}{s}} ]^{\frac{1}{q}}}{s [1-q+q\ln {\frac{a}{s}} ]}\,ds \\ & = \int _{\ln {\frac{a}{r}}}^{\ln {\frac{a}{t}}}\frac{u ^{\frac{1}{q}}}{1-q+qu}\,du \\ & =\frac{1}{q} \int _{\ln {\frac{a}{r}}}^{\ln {\frac{a}{t}}} u^{\frac{1}{q}-1} \biggl[ \frac{u}{\frac{1-q}{q}+u} \biggr]\,du \\ & \le \frac{1}{q} \int _{\ln {\frac{a}{r}}}^{\ln {\frac{a}{t}}} u^{\frac{1}{q}-1}\,du \le \biggl(\ln { \frac{a}{t}} \biggr)^{\frac{1}{q}}. \end{aligned}

Then (21) gives

$$E(t)\leq k e^{-k t^{q}},$$
(26)

which is the optimal decay.

4. (3)

$$h_{0}$$ is nonlinear, G is linear and $$\eta (t)=\frac{b}{(t+e) \ln (t+e)}$$.

Let $$r_{i}'(t)= - a_{i} e^{-b_{i}(1+t)}$$, where $$b_{i} > 0$$ and $$a_{i} > 0$$, $$\forall i=1,2$$, so that Assumption $$(A3)$$ is satisfied with $$r_{i}^{\prime \prime }(t)=\xi _{i}(t) G_{i}(-r_{i}'(t))$$. We take $$a=\min \{ a_{1}, a_{2}\}$$, $$b=\min \{ b_{1}, b_{2}\}$$, $$G=\min \{ G _{1}, G_{2}\}$$, $$\xi (t)=\min \{ \xi _{1}(t), \xi _{2}(t)\}$$ and $$\sigma (t) = \min \{ \eta (t), \xi (t)\}$$. Hence, $$G(t)=t$$ and $$\xi (t)= b$$. In this case, $$\sigma (t)=\frac{b}{(t+e)\ln (t+e)}$$. Also, for the frictional damping, let $$h_{0}(t)=ct^{q}$$, where $$q>1$$, so that $$H(t)=\sqrt{t}\, h_{0}( \sqrt{t})=ct^{\frac{q+1}{2}}$$. Then

$${H_{1}}^{-1}(t)= (ct+1 )^{\frac{-2}{q-1}}.$$

Therefore, applying (22), we obtain

$$E(t)\leq \frac{c}{ [1+\ln ( \ln (t+e)) ]^{\frac{2}{q-1}}}.$$
(27)
5. (4)

$$h_{0}$$ is nonlinear, G is non-linear and $$\eta (t) \equiv 1$$.

Let $$r_{i}'(t)=\frac{- a_{i}}{(1+t)^{2}}$$, where $$a_{i} > 0$$, $$\forall i=1,2$$, is chosen so that Assumption $$(A3)$$ holds with $$r_{i}^{\prime \prime }(t)= b_{i} G _{i}(-r_{i}'(t))$$. We take $$a=\min \{ a_{1}, a_{2}\}$$, $$b=\min \{ b_{1}, b_{2}\}$$, $$G=\min \{ G _{1}, G_{2}\}$$, $$\xi (t)=\min \{ \xi _{1}(t), \xi _{2}(t)\}$$ and $$\sigma (t) = \min \{ \eta (t), \xi (t)\}$$. In this example, $$G(s)=s^{\frac{3}{2}}$$ and $$\xi (t)=b$$. For the frictional damping, let $$h_{0}(t)=c t^{5}$$, so that $$H(t)=c t^{3}$$. Then

$$W(s)=\bigl(G^{-1}+H^{-1}\bigr)^{-1}= \biggl( \frac{-1+\sqrt{1+4s}}{2} \biggr) ^{3}$$

and

\begin{aligned}[b]W_{2}(s)&=\frac{3s}{\sqrt{1+4s}} \biggl( \frac{-1+\sqrt{1+4s}}{2} \biggr) ^{2} \\ & =\frac{3s}{2\sqrt{1+4s}}+\frac{3s^{2}}{\sqrt{1+4s}}- \frac{3s}{2} \\ & \le \frac{3s}{2}+\frac{3 s^{2}}{2\sqrt{s}}- \frac{3s}{2}=c s^{\frac{3}{2}}. \end{aligned}

Therefore, applying (23), we obtain

$$E(t)\le \frac{c}{(t-t_{1})^{\frac{1}{3}}}.$$
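The closed-form computations in the examples above can be verified numerically. The following sketch (the sample parameters are our own illustrative choices, not values from the estimates) checks the identity $$r_{i}''=\xi _{i}G_{i}(-r_{i}')$$ for Examples (1.A) and (2), and the formula for W in Example (4):

```python
import math

# Numerical sanity checks for the examples above (sample parameters assumed).

# Example (1.A): r'(t) = -a e^{-b(1+t)} gives r''(t) = b (-r'(t)),
# i.e. (14) holds with xi(t) = b and G(s) = s.
a1, b1 = 2.0, 0.5
rp1  = lambda t: -a1 * math.exp(-b1 * (1 + t))
rpp1 = lambda t: a1 * b1 * math.exp(-b1 * (1 + t))
for t in (0.0, 1.0, 5.0):
    assert abs(rpp1(t) - b1 * (-rp1(t))) < 1e-12

# Example (2): r'(t) = -a e^{-t^q} gives r''(t) = G(-r'(t)) with
# G(s) = q s / (ln(a/s))^{1/q - 1} and xi(t) = 1.
a, q = 2.0, 0.5
rp2  = lambda t: -a * math.exp(-t ** q)
rpp2 = lambda t: a * q * t ** (q - 1) * math.exp(-t ** q)
G2   = lambda s: q * s / (math.log(a / s)) ** (1 / q - 1)
for t in (0.5, 1.0, 2.0, 4.0):
    assert abs(G2(-rp2(t)) - rpp2(t)) < 1e-10

# Example (4), taking c = 1: with G(s) = s^{3/2} and H(s) = s^3, the function
# W(s) = ((-1 + sqrt(1+4s))/2)^3 satisfies G^{-1}(W(s)) + H^{-1}(W(s)) = s.
W = lambda s: ((-1 + math.sqrt(1 + 4 * s)) / 2) ** 3
for s in (0.1, 1.0, 10.0):
    assert abs(W(s) ** (2 / 3) + W(s) ** (1 / 3) - s) < 1e-9
print("example computations verified")
```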

For the proofs of our main results, we state and establish several lemmas in the following section.

## Technical lemmas

In this section, we introduce some lemmas which are important in our proofs of our main results.

### Lemma 4.1

Let u and v be functions in $$H^{4}( \varOmega )$$ and $$\rho \in \mathbb{R}$$. Then we have

$$\int _{\varOmega }\bigl(\Delta ^{2}u\bigr)v\,dx=a(u,v)+ \int _{\varGamma _{1}}\biggl\{ (\varPhi _{2}u)v-( \varPhi _{1}u)\frac{\partial v}{\partial \nu }\biggr\} \,d\varGamma$$
(28)

and

\begin{aligned}[b] \int _{\varOmega }(m.\nabla v)\Delta ^{2}v \,dx&=a(v,v)+ \frac{1}{2} \int _{ \varGamma }m.\nu \bigl[v_{xx}^{2}+v_{yy}^{2}+2 \rho v_{xx}v_{yy}+2(1-\rho )v _{xy}^{2} \bigr]\,d\varGamma \\ &\quad {}+ \int _{\varGamma } \biggl[(\varPhi _{2}v)m.\nabla v-(\varPhi _{1}v)\frac{ \partial }{\partial \nu }(m.\nabla v) \biggr]\,d\varGamma . \end{aligned}
(29)

### Lemma 4.2

Under Assumptions $$(A1)$$–$$(A3)$$ and considering Remark 2.2, the energy functional E satisfies, along the solution of (1)–(5), the estimate

\begin{aligned}[b]E^{\prime }(t)&=- \frac{\tau _{1}}{2} \int _{\varGamma _{1}}\bigl(2{ \vert u_{t} \vert }^{2}-r_{1}^{\prime }(t){ \vert u \vert }^{2}+r_{1}^{\prime \prime } \circ u\bigr)\,d\varGamma \\ &\quad {}-\frac{\tau _{2}}{2} \int _{\varGamma _{1}} \biggl(2{ \biggl\vert \frac{\partial u_{t}}{\partial \nu } \biggr\vert }^{2}-r_{2}^{ \prime }(t){ \biggl\vert \frac{\partial u}{\partial \nu } \biggr\vert }^{2}+r_{2}^{\prime \prime } \circ \frac{\partial u}{\partial \nu } \biggr)\,d \varGamma -\eta (t) \int _{\varOmega } u_{t} h( u_{t} )\,dx \\ & \le 0. \end{aligned}
(30)

### Proof

The proof can be established by multiplying Eq. (1) by $$u_{t}$$, integrating by parts over Ω, and using (28) and the boundary conditions (10) and (11). □

With the help of ideas from the literature, one can establish the following two helpful lemmas.

### Lemma 4.3

For $$i=1,2$$, $$0 < \alpha _{i} < 1$$, and for

$$C_{\alpha _{i}}:= \int _{0}^{\infty }\frac{r_{i}^{\prime 2}(s)}{r_{i}''(s)-\alpha _{i} r_{i}'(s)}\,ds\quad \textit{and} \quad \theta _{i}(t):=r_{i}''(t)- \alpha _{i} r_{i}'(t),$$
(31)

we have

\begin{aligned}& \begin{aligned}[b] & \biggl( \int _{0}^{t} r_{1}'(t-s) \bigl\vert u(s)- u(t) \bigr\vert \,ds \biggr)^{2} \le C_{\alpha _{1}}( \theta _{1} \circ u) (t), \end{aligned} \end{aligned}
(32)
\begin{aligned}& \begin{aligned}[b] & \biggl( \int _{0}^{t} r_{2}'(t-s) \biggl\vert \frac{\partial u(s)}{ \partial \nu }- \frac{\partial u(t)}{\partial \nu } \biggr\vert \,ds \biggr) ^{2}\le C_{\alpha _{2}}\biggl( \theta _{2} \circ \frac{\partial u}{\partial \nu }\biggr) (t). \end{aligned} \end{aligned}
(33)
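Inequalities (32) and (33) are Cauchy–Schwarz estimates. A discretized check with an assumed kernel and test function, here $$r(t)=e^{-t}$$, $$\alpha =\frac{1}{2}$$ and $$u(t)=\sin t$$ (so that $$\theta (t)=\frac{3}{2}e^{-t}$$ and $$C_{\alpha }=\frac{2}{3}$$), illustrates (32):

```python
import math

# Numerical illustration of (32) with assumed sample data:
# r'(t) = -e^{-t}, r''(t) = e^{-t}, alpha = 1/2, u(t) = sin t. Then
# theta(t) = r''(t) - (1/2) r'(t) = (3/2) e^{-t} and
# C_alpha = int_0^infty r'^2 / theta ds = 2/3 exactly.

def integral(f, lo, hi, n=4000):
    """Trapezoidal approximation of int_lo^hi f(s) ds."""
    h = (hi - lo) / n
    vals = [f(lo + i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

rp    = lambda t: -math.exp(-t)
theta = lambda t: 1.5 * math.exp(-t)
C_alpha = 2.0 / 3.0
u = lambda t: math.sin(t)

for t in (1.0, 3.0):
    lhs = integral(lambda s: rp(t - s) * abs(u(s) - u(t)), 0, t) ** 2
    rhs = C_alpha * integral(lambda s: theta(t - s) * (u(s) - u(t)) ** 2, 0, t)
    assert lhs <= rhs + 1e-9
print("inequality (32) verified at sample times")
```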

### Lemma 4.4

There exist positive constants $$d_{1}$$, $$d_{2}$$ and $$t_{1}$$ such that

$$r_{i}''(t) \geq -d_{i} r_{i}'(t), \quad (i = 1, 2)\ \forall t \in [0, t_{1}].$$
(34)

### Lemma 4.5

Under Assumptions $$(A1)$$–$$(A3)$$, the functional

$$\psi _{1}(t):= \int _{\varOmega }(m.\nabla u)u_{t}\,dx$$
(35)

satisfies, along the solution of (1)–(5), the estimate

\begin{aligned} \psi _{1}^{\prime }(t) &\le \frac{1}{2} \int _{\varGamma _{1}} m.\nu { \vert u _{t} \vert }^{2}\,d\varGamma -\frac{1}{2} \int _{\varOmega }{ \vert u_{t} \vert } ^{2}\,dx- \biggl(1-\frac{c_{0}}{2}-\frac{\varepsilon c}{2} \biggr)a(u,u) \\ &\quad {}+\frac{ {\tau _{1}}^{2}}{2\varepsilon } \int _{\varGamma _{1}} \bigl[ \vert u_{t} \vert ^{2}+r_{1}^{2}(t) \vert u \vert ^{2} \bigr]\,d \varGamma +\frac{{\tau _{1}}^{2}C_{\alpha _{1}}}{2\varepsilon } \int _{\varGamma _{1}}(\theta _{1}\circ u)\,d\varGamma \\ &\quad {}+\frac{\tau _{2}^{2}}{2\varepsilon } \int _{\varGamma _{1}} \biggl[ \biggl\vert \frac{\partial u_{t}}{\partial \nu } \biggr\vert ^{2}+r_{2}^{2}(t) \biggl\vert \frac{\partial u}{\partial \nu } \biggr\vert ^{2} \biggr]\,d\varGamma +\frac{ {\tau _{2}}^{2}C_{\alpha _{2}}}{2\varepsilon } \int _{\varGamma _{1}} \biggl(\theta _{2}\circ \frac{\partial u}{\partial \nu } \biggr)\,d\varGamma + \frac{c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx \\ &\quad {}- \biggl[\frac{1}{2}-\frac{\varepsilon c}{2} \biggr] \int _{\varGamma _{1}}m.\nu \bigl[u_{xx}^{2}+u_{yy}^{2}+2 \rho u_{xx}u_{yy}+2(1- \rho )u_{xy}^{2} \bigr]\,d\varGamma . \end{aligned}
(36)

### Proof

By direct integrations, using (1), and using (29) with $$v=u$$, we obtain

\begin{aligned}[b]\psi _{1}^{\prime }(t)&= \int _{\varOmega }(m \cdot \nabla u_{t})u_{t}\,dx+ \int _{\varOmega }(m \cdot \nabla u)u_{tt}\,dx \\ & =\frac{1}{2} \int _{\varGamma _{1}}m \cdot \nu { \vert u_{t} \vert }^{2}\,d\varGamma -\frac{1}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx-a(u,u)- \eta (t) \int _{\varOmega } h(u_{t}) (m \cdot \nabla u) \,dx \\ &\quad {}- \int _{\varGamma } \biggl[(\varPhi _{2}u) (m \cdot \nabla u)-( \varPhi _{1}u)\frac{\partial }{\partial \nu }(m \cdot \nabla u) \biggr]\,d \varGamma \\ &\quad {}-\frac{1}{2} \int _{\varGamma }m.\nu \bigl[u^{2}_{xx}+u^{2} _{yy}+2\rho u_{xx}u_{yy}+2(1-\rho )u^{2}_{xy} \bigr]\,d\varGamma . \end{aligned}
(37)

Since $$u_{xx}u_{yy}-(u_{xy})^{2}=0$$ on $$\varGamma _{0}$$, we have

\begin{aligned}[b] u_{xx}^{2}+u_{yy}^{2}+2\rho u_{xx}u_{yy}+2(1- \rho )u_{xy}^{2}=(\Delta u)^{2}\quad \text{on } \varGamma _{0} \end{aligned} .
(38)

Now, as $$u=\frac{\partial u}{\partial \nu }=0$$ on $$\varGamma _{0}$$, we have $$D_{1}u=D_{2}u=0$$ on $$\varGamma _{0}$$ and

\begin{aligned}[b] \frac{\partial }{\partial \nu }(m.\nabla u)=(m. \nu )\Delta u. \end{aligned}
(39)

Inserting (38) and (39) into (37), we see that (37) becomes

\begin{aligned} \psi _{1}^{\prime }(t)&= \frac{1}{2} \int _{\varGamma _{1}}m.\nu { \vert u_{t} \vert }^{2}\,d\varGamma -\frac{1}{2} \int _{\varOmega } \vert u_{t} \vert ^{2}\,dx-a(u,u)- \eta (t) \int _{\varOmega } (m \cdot \nabla u) h(u_{t}) \,dx \\ &\quad {}+\frac{1}{2} \int _{\varGamma _{0}}m.\nu (\Delta u)^{2}\,d \varGamma - \frac{1}{2} \int _{\varGamma _{1}}m.\nu \bigl[u_{xx}^{2}+u_{yy} ^{2}+2\rho u_{xx}u_{yy}+2(1-\rho )u_{xy}^{2} \bigr]\,d\varGamma \\ &\quad {}- \int _{\varGamma _{1}}(\varPhi _{2}u) (m.\nabla u)\,d\varGamma + \int _{\varGamma _{1}}(\varPhi _{1}u)\frac{\partial }{\partial \nu }(m.\nabla u)\,d \varGamma . \end{aligned}
(40)

Using Young's inequality, we have

\begin{aligned}& \begin{aligned}[b] \biggl\vert \int _{\varGamma _{1}}(\varPhi _{2} u) (m. \nabla u)\,d\varGamma \biggr\vert \le \frac{1}{2\varepsilon } \int _{\varGamma _{1}} \vert \varPhi _{2} u \vert ^{2}\,d\varGamma +\frac{\varepsilon }{2} \int _{\varGamma _{1}} \vert m. \nabla u \vert ^{2}\,d\varGamma , \end{aligned} \end{aligned}
(41)
\begin{aligned}& \begin{aligned}[b] \biggl\vert \int _{\varGamma _{1}}(\varPhi _{1} u)\frac{\partial }{\partial \nu }(m. \nabla u)\,d\varGamma \biggr\vert \le \frac{1}{2\varepsilon } \int _{\varGamma _{1}} \vert \varPhi _{1} u \vert ^{2}\,d\varGamma +\frac{\varepsilon }{2} \int _{\varGamma _{1}} \biggl\vert \frac{\partial }{\partial \nu }(m. \nabla u) \biggr\vert ^{2}\,d\varGamma , \end{aligned} \end{aligned}
(42)

where ε is a positive constant. Using (17) and (18), the fact that $$\vert m(x)\vert \leq R$$, and the trace theorem, we obtain

\begin{aligned}[b] & \int _{\varGamma _{1}} \vert m.\nabla u \vert ^{2}\,d\varGamma + \int _{\varGamma _{1}} \biggl\vert \frac{\partial }{\partial \nu }(m.\nabla u) \biggr\vert ^{2}\,d\varGamma \\ &\quad\leq R^{2} c_{s} a(u,u)+R \int _{\varGamma _{1}}m.\nu \bigl[u_{xx}^{2}+u_{yy} ^{2}+2\rho u_{xx}u_{yy}+2(1-\rho )u_{xy}^{2} \bigr]\,d\varGamma . \end{aligned}
(43)

Furthermore, using (17) and (18) and the property of the function $$\eta (t)$$, we have

\begin{aligned}[b] \biggl\vert \eta (t) \int _{\varOmega } h(u_{t}) m.\nabla u \,dx \biggr\vert \leq \frac{c}{2} \int _{\varOmega } h^{2}(u_{t}) \,dx + \frac{R^{2} c _{s}}{2} a(u,u). \end{aligned}
(44)

Combining (40)–(44), we have

\begin{aligned}[b]\psi _{1}^{\prime }(t) &\le \frac{1}{2} \int _{\varGamma _{1}}m.\nu { \vert u _{t} \vert }^{2}\,d\varGamma -\frac{1}{2} \int _{\varOmega }{ \vert u_{t} \vert } ^{2}\,dx- \biggl(1- \frac{\lambda _{0}}{2}- \frac{\varepsilon \lambda _{0}}{2} \biggr)a(u,u) \\ &\quad {}+\frac{1}{2\varepsilon } \int _{\varGamma _{1}} \vert \varPhi _{1} u \vert ^{2}\,d\varGamma +\frac{1}{2\varepsilon } \int _{\varGamma _{1}} \vert \varPhi _{2} u \vert ^{2} \,d\varGamma + \frac{c}{2} \int _{\varOmega } h^{2}(u _{t})\,dx \\ &\quad {}- \biggl[\frac{1}{2}-\frac{\varepsilon R}{2} \biggr] \int _{\varGamma _{1}}m.\nu \bigl[u_{xx}^{2}+u_{yy}^{2}+2 \rho u_{xx}u_{yy}+2(1- \rho )u_{xy}^{2} \bigr]\,d\varGamma , \end{aligned}
(45)

where $$\lambda _{0}= R^{2} c_{s}$$. By direct computation and using Lemma 4.3, we arrive at

\begin{aligned}[b]\bigl(r_{1}' \ast u\bigr) (t)&= \int _{0}^{t} r_{1}'(t-s)u(s)\,ds= \int _{0}^{t} r_{1}'(t-s) \bigl[u(s)-u(t)+u(t)\bigr]\,ds \\ & = \int _{0}^{t} r_{1}'(t-s) \bigl[u(s)-u(t)\bigr]\,ds+ \int _{0}^{t} r _{1}'(t-s)u(t)\,ds \\ & = - \int _{0}^{t} r_{1}'(t-s) \bigl[u(t)-u(s)\bigr]\,ds+ \int _{0}^{t} r _{1}'(t-s)u(t)\,ds \\ & =- \int _{0}^{t} r_{1}'(t-s) \bigl[u(t)-u(s)\bigr]\,ds+ r_{1}(t)u(t)-r _{1}(0)u(t) \\ & \leq \bigl[ C_{\alpha _{1}} (\theta _{1} \circ u) (t) \bigr] ^{\frac{1}{2}}+r_{1}(t)u(t)-r_{1}(0)u(t), \end{aligned}
(46)

Similarly, we can show that

\begin{aligned}[b] & \biggl(r_{2}' \ast \frac{\partial u}{\partial \nu } \biggr) (t)\leq \biggl[ C_{\alpha _{2}} \biggl(\theta _{2} \circ \frac{\partial u}{ \partial \nu } \biggr) (t) \biggr]^{\frac{1}{2}}+r_{2}(t) \frac{\partial u(t)}{\partial \nu }-r_{2}(0)\frac{\partial u(t)}{\partial \nu }, \end{aligned}
(47)

Then, from the boundary conditions (10) and (11), and using (46) and (47), we have

\begin{aligned}[b] &\varPhi _{2}u \leq \tau _{1} \bigl\{ u_{t} + r_{1}(t) u+ \bigl[C_{\alpha _{1}} (\theta _{1} \circ u) (t) \bigr]^{\frac{1}{2}} \bigr\} , \\ &\varPhi _{1}u \leq -\tau _{2} \biggl\{ \frac{\partial u_{t}}{\partial \nu }+r _{2}(t)\frac{\partial u}{\partial \nu }+ \biggl[ C_{\alpha _{2}} \biggl(\theta _{2} \circ \frac{\partial u}{\partial \nu }\biggr) (t) \biggr]^{ \frac{1}{2}} \biggr\} . \end{aligned}
(48)

Substituting the inequalities (48) in (45) and using the fact $$m.\nu \le 0$$ on $$\varGamma _{0}$$, (36) is achieved. □

### Lemma 4.6

Under Assumptions $$(A1)$$–$$(A3)$$, the functionals

\begin{aligned} &\psi _{2}(t)= \int _{\varGamma _{1}} \int _{0}^{t}\mu _{1}(t-s) \bigl\vert u(s) \bigr\vert ^{2}\,ds \,dx, \\ & \psi _{3}(t)= \int _{\varGamma _{1}} \int _{0}^{t}\mu _{2}(t-s) \biggl\vert \frac{\partial u(s)}{\partial \nu } \biggr\vert ^{2} \,ds \,dx, \end{aligned}
(49)

satisfy, along the solution of (1)–(5), the estimates

\begin{aligned} &\psi _{2}^{\prime }(t) \le \frac{1}{2} \bigl(r_{1}' \circ u\bigr) (t)+ 3 r_{1}(0) \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2}\,dx, \\ &\psi _{3}^{\prime }(t) \le \frac{1}{2} \biggl(r_{2}' \circ \frac{\partial u}{\partial \nu } \biggr) (t)+ 3r_{2}(0) \int _{\varGamma _{1}} \biggl\vert \frac{\partial u(t)}{ \partial \nu } \biggr\vert ^{2}\,dx, \end{aligned}
(50)

where $$\mu _{i}(t)= \int _{t}^{+\infty }(-r_{i}'(s)) \,ds$$, $$i=1,2$$.

### Proof

Taking the derivative of the first equation in (49) and using the fact that $$\mu _{1}^{\prime }(t)=r_{1}'(t)$$, we have

\begin{aligned}[b]\psi _{2}^{\prime }(t)&=r_{1}(0) \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2}\,dx+ \int _{\varGamma _{1}} \int _{0}^{t}r_{1}'(t-s) \bigl\vert u(s) \bigr\vert ^{2} \,ds \,dx \\ & = \int _{\varGamma _{1}} \int _{0}^{t}r_{1}'(t-s) \bigl\vert u(s)- u(t) \bigr\vert ^{2} \,ds \,dx \\ &\quad {}+2 \int _{\varGamma _{1}}u(t) \int _{0}^{t}r_{1}'(t-s) \bigl(u(s)-u(t)\bigr)\,ds\,dx+r_{1}(t) \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2}\,dx. \end{aligned}
(51)

Using the fact that $$\lim_{t \rightarrow \infty } r_{1}(t) = 0$$, Young's and Cauchy–Schwarz inequalities, and then choosing $$\gamma =r_{1}(0)$$, we have the following:

\begin{aligned}[b] &2 \int _{\varGamma _{1}} u(t) \int _{0}^{t}r_{1}'(t-s) \bigl(u(s)-u(t)\bigr)\,ds\,dx \\ &\quad \le 2\gamma \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2} \,dx+ \frac{\int _{0}^{t}(-r_{1}'(s))\,ds}{2\gamma } \int _{\varGamma _{1}} \int _{0}^{t}\bigl(-r_{1}'(t-s)\bigr) \bigl\vert u(s)- u(t) \bigr\vert ^{2} \,ds \,dx \\ &\quad \le 2\gamma \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2} \,dx+ \frac{\int _{0}^{\infty }(-r_{1}'(s))\,ds}{2\gamma } \int _{\varGamma _{1}} \int _{0}^{t}\bigl(-r_{1}'(t-s)\bigr) \bigl\vert u(s)- u(t) \bigr\vert ^{2} \,ds \,dx \\ &\quad \le 2 \gamma \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2} \,dx- \frac{r_{1}(0)}{2\gamma } \int _{\varGamma _{1}} \int _{0}^{t}r_{1}'(t-s) \bigl\vert u(s)- u(t) \bigr\vert ^{2} \,ds\,dx \\ &\quad \le 2 r_{1}(0) \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2} \,dx- \frac{1}{2} \int _{\varGamma _{1}} \int _{0}^{t}r_{1}'(t-s) \bigl\vert u(s)- u(t) \bigr\vert ^{2} \,ds\,dx. \end{aligned}
(52)

Combining (51) and (52) and using the fact that $$\mu _{1}(t) \leq \mu _{1}(0)=r_{1}(0)$$, the first estimate in (50) is established. Similarly, we can establish the second estimate in (50). □
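The identity $$\mu _{1}^{\prime }(t)=r_{1}'(t)$$ used above follows from $$\mu _{1}(t)=\int _{t}^{+\infty }(-r_{1}'(s))\,ds=r_{1}(t)$$, since $$\lim_{t \rightarrow \infty } r_{1}(t)=0$$. This identity can be sanity-checked numerically; the following sketch uses the illustrative sample kernel $$r_{1}(t)=e^{-2t}$$ (an assumption for the example only, not taken from the paper):

```python
import math

# Illustrative check (sample kernel, not from the paper):
# mu(t) = \int_t^\infty (-r'(s)) ds should equal r(t) when r(t) -> 0,
# hence mu'(t) = r'(t).  Here r(t) = exp(-2t).

def r(t):
    return math.exp(-2.0 * t)

def r_prime(t):
    return -2.0 * math.exp(-2.0 * t)

def mu(t, upper=50.0, n=100000):
    # composite trapezoidal rule for \int_t^upper (-r'(s)) ds;
    # the tail beyond `upper` is negligible for this kernel
    h = (upper - t) / n
    total = 0.5 * (-r_prime(t) - r_prime(upper))
    for k in range(1, n):
        total += -r_prime(t + k * h)
    return total * h

for t in (0.0, 0.5, 1.0):
    assert abs(mu(t) - r(t)) < 1e-5
```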

### Lemma 4.7

Under Assumptions $$(A1)$$–$$(A3)$$, the functional $$L(t):=NE(t)+N_{1}\psi _{1}(t)+n_{0} E(t)$$, where $$N, N_{1}, n_{0} > 0$$, satisfies along the solution of (1)–(5) the following estimate:

\begin{aligned}[b]L^{\prime }(t)&\le -m E(t)- \frac{1}{4} \int _{t_{1}}^{t} r_{1}'(t-s) \int _{\varGamma _{1}} \bigl\vert u(t)-u(s) \bigr\vert ^{2} \,d\varGamma \,ds \\ &\quad {}-\frac{1}{4} \int _{t_{1}}^{t} r_{2}'(t-s) \int _{\varGamma _{1}} \biggl\vert \frac{\partial u(t)}{\partial \nu }-\frac{\partial u(s)}{\partial \nu } \biggr\vert ^{2} \,d\varGamma \,ds +c \int _{\varOmega } h^{2}(u_{t})\,dx,\quad \forall t \ge t_{1}. \end{aligned}
(53)

### Proof

Using $$L'(t)=NE'(t)+N_{1} \psi _{1}'(t)+n_{0} E'(t)$$, combining (30) and (36), using the properties of $$r_{i}$$ and $$r_{i}'$$ given in Assumption $$(A3)$$, and using $$\vert m \cdot \nu \vert \leq R$$, we obtain

\begin{aligned}[b]L'(t) &\leq - \biggl( \tau _{1} N-\frac{R N_{1}}{2}-\frac{N_{1} \tau _{1}^{2}}{2\varepsilon } \biggr) \int _{\varGamma _{1}}{ \vert u_{t} \vert }^{2}\,d\varGamma - \biggl(\tau _{2} N-\frac{N_{1} \tau _{1}^{2}}{2\varepsilon } \biggr) \int _{\varGamma _{1}}{ \biggl\vert \frac{\partial u_{t}}{\partial \nu } \biggr\vert }^{2}\,d\varGamma \\ &\quad {}-N_{1} \biggl(1-\frac{\lambda _{0}}{2}-\frac{\varepsilon \lambda _{0}}{2} \biggr) a(u,u)+\frac{N_{1} \tau _{1}^{2}}{2\varepsilon } \int _{\varGamma _{1}}r_{1}^{2}(t){ \vert u \vert }^{2}\,d\varGamma\\ &\quad {} +\frac{N_{1} \tau _{2}^{2}}{2\varepsilon } \int _{\varGamma _{1}}r_{2}^{2}(t){ \biggl\vert \frac{\partial u}{\partial \nu } \biggr\vert }^{2}\,d\varGamma -\frac{N_{1}}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx\\ &\quad {}+ \frac{N_{1} \tau _{1}^{2} C_{\alpha _{1}}}{2\varepsilon } \int _{\varGamma _{1}}(\theta _{1} \circ u ) \,d\varGamma + \frac{N_{1} \tau _{2}^{2} C_{\alpha _{2}}}{2\varepsilon } \int _{\varGamma _{1}} \biggl(\theta _{2} \circ \frac{\partial u}{\partial \nu } \biggr)\,d\varGamma \\ &\quad {}-N_{1} \biggl(\frac{1}{2}-\frac{\varepsilon R}{2} \biggr) \int _{\varGamma _{1}}m\cdot \nu \bigl[u_{xx}^{2}+u_{yy}^{2}+2\mu u_{xx}u_{yy}+2(1-\mu )u_{xy}^{2} \bigr]\,d\varGamma \\ &\quad {}-\frac{N_{1}\tau _{1}}{2} \int _{\varGamma _{1}}\bigl(r_{1}^{\prime \prime } \circ u\bigr)\,d\varGamma -\frac{N_{1}\tau _{2}}{2} \int _{\varGamma _{1}}\biggl(r_{2}^{\prime \prime } \circ \frac{\partial u}{\partial \nu }\biggr)\,d\varGamma\\ &\quad {} +n_{0} E'(t)+ \frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx. \end{aligned}
(54)

Then, choosing $$0 < \varepsilon < \min \{ \frac{1}{R}, \frac{2-\lambda _{0}}{\lambda _{0}}\}$$ so that $$\frac{1}{2}-\frac{\varepsilon R}{2} > 0$$ and $$c_{0}:=1-\frac{\lambda _{0}}{2}-\frac{\varepsilon \lambda _{0}}{2} > 0$$, and using $$\lim_{t \rightarrow \infty } r_{i}(t) = 0$$ for $$i = 1, 2$$, we obtain

\begin{aligned} L'(t) &\leq - \biggl( \tau _{1} N-\frac{R N_{1}}{2}-\frac{N_{1} \tau _{1}^{2}}{2\varepsilon } \biggr) \int _{\varGamma _{1}}{ \vert u_{t} \vert }^{2}\,d\varGamma - \biggl(\tau _{2} N-\frac{N_{1} \tau _{1}^{2}}{2\varepsilon } \biggr) \int _{\varGamma _{1}}{ \biggl\vert \frac{\partial u_{t}}{\partial \nu } \biggr\vert }^{2}\,d\varGamma \\ &\quad {}-\frac{N_{1}}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx- N_{1}c_{0} a(u,u)+\frac{N_{1} \tau _{1}^{2} C_{\alpha _{1}}}{2\varepsilon } \int _{\varGamma _{1}}(\theta _{1} \circ u ) \,d\varGamma \\ &\quad {} + \frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx +\frac{N_{1} \tau _{2}^{2} C_{\alpha _{2}}}{2\varepsilon } \int _{\varGamma _{1}} \biggl(\theta _{2} \circ \frac{\partial u}{\partial \nu } \biggr)\,d\varGamma -\frac{N_{1}\tau _{1}}{2} \int _{\varGamma _{1}}\bigl(r_{1}^{\prime \prime } \circ u\bigr)\,d\varGamma \\ &\quad {} -\frac{N_{1}\tau _{2}}{2} \int _{\varGamma _{1}}\biggl(r_{2}^{\prime \prime } \circ \frac{\partial u}{\partial \nu }\biggr)\,d\varGamma +n_{0} E'(t). \end{aligned}
(55)

Next, we choose $$N$$ large enough so that

\begin{aligned}[b] &\tau _{2} N- \frac{N_{1} \tau _{1}^{2}}{2\varepsilon }> 0, \\ &\tau _{1} N-\frac{R N_{1}}{2}-\frac{N_{1} \tau _{1}^{2}}{2\varepsilon }> 0. \end{aligned}
(56)

Then (55) reduces to

\begin{aligned}[b] L'(t) &\leq - \frac{N_{1}}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx-N_{1} c_{0} a(u,u)+\frac{N_{1} \tau _{1}^{2} C_{\alpha _{1}}}{2\varepsilon } \int _{\varGamma _{1}}(\theta _{1} \circ u )\,d\varGamma \\ &\quad {}+ \frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx+\frac{N_{1} \tau _{2}^{2} C_{\alpha _{2}}}{2\varepsilon } \int _{\varGamma _{1}} \biggl(\theta _{2} \circ \frac{\partial u}{\partial \nu } \biggr)\,d\varGamma -\frac{N_{1}\tau _{1}}{2} \int _{\varGamma _{1}}\bigl(r_{1}^{\prime \prime } \circ u\bigr)\,d\varGamma \\ &\quad {}-\frac{N_{1}\tau _{2}}{2} \int _{\varGamma _{1}}\biggl(r_{2}^{\prime \prime } \circ \frac{\partial u}{\partial \nu }\biggr)\,d\varGamma +n_{0} E'(t). \end{aligned}
(57)

Recalling that $$r_{i}''=\alpha r_{i}' + \theta _{i}$$, $$i=1,2$$, and using (19), we obtain

\begin{aligned}[b]L^{\prime }(t)&\le - \frac{N_{1}}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx-N_{1} c_{0} a(u,u)- \biggl( \frac{N_{1} \tau _{1}}{2}-\frac{N_{1} \tau _{1}^{2} C_{\alpha _{1}}}{2\varepsilon } \biggr) \int _{\varGamma _{1}}(\theta _{1} \circ u)\,d\varGamma \\ &\quad {}- \biggl(\frac{N_{1}\tau _{2}}{2}-\frac{N_{1} \tau _{2}^{2} C_{\alpha _{2}}}{2\varepsilon } \biggr) \int _{\varGamma _{1}} \biggl(\theta _{2} \circ \frac{\partial u}{\partial \nu } \biggr)\,d\varGamma +\frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx \\ &\quad {}-\frac{\tau _{1} N_{1}\alpha }{2} \int _{\varGamma _{1}}\bigl(r_{1}^{\prime } \circ u\bigr)\,d\varGamma -\frac{\tau _{2} N_{1} \alpha }{2} \int _{\varGamma _{1}}\biggl(r_{2}^{\prime } \circ \frac{\partial u}{\partial \nu }\biggr)\,d\varGamma +n_{0} E'(t),\quad \forall t\ge t_{1}. \end{aligned}
(58)

Now, our purpose is to have, for $$i=1,2$$,

$$\frac{N_{1} \tau _{i}}{2}-C_{\alpha _{i}} \biggl(\frac{N_{1} \tau _{i}^{2} }{2\varepsilon } \biggr) > \frac{N_{1} \tau _{i}}{4}.$$
(59)

One can deduce that $$\alpha _{i} C_{\alpha _{i}} \rightarrow 0$$ as $$\alpha _{i} \rightarrow 0$$. Then there exists $$0 < {\alpha _{0}}_{i} < 1$$ such that if $$\alpha _{i} < {\alpha _{0}}_{i}$$, then

$$C_{\alpha _{i}} < \frac{\varepsilon }{4\alpha _{i} \tau _{i}^{2} N_{1}}.$$

Now, we choose $$0 < \alpha _{i} = \frac{1}{2 N_{1} \tau _{i}} < 1$$, to obtain

$$C_{\alpha _{i}} \biggl(\frac{N_{1} \tau _{i}^{2} }{2\varepsilon } \biggr) < \frac{1}{8 \alpha _{i}}= \frac{N_{1} \tau _{i}}{4},$$
(60)

and hence, we have

\begin{aligned}[b] &N_{1} \biggl( \frac{\tau _{i}}{2}-\frac{\tau _{i}^{2} C_{\alpha _{i}}}{2\varepsilon } \biggr)> 0, \quad i=1,2, \end{aligned}
(61)

and then (58) becomes

\begin{aligned}[b] L^{\prime }(t)&\le - \frac{N_{1}}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx-N_{1} c_{0} a(u,u) + \frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx \\ &\quad {}-\frac{1}{4} \int _{\varGamma _{1}}\bigl(r_{1}^{\prime } \circ u\bigr)\,d\varGamma -\frac{1}{4} \int _{\varGamma _{1}}\biggl(r_{2}^{\prime } \circ \frac{\partial u}{\partial \nu }\biggr)\,d\varGamma + n_{0} E'(t). \end{aligned}
(62)

From (34) and (30), we notice that, for all $$t \geq t_{1}$$,

\begin{aligned} &{-} \int _{0}^{t_{1}}r_{1}'(s) \int _{\varGamma _{1}} \bigl\vert u(t)-u(t-s) \bigr\vert ^{2} \,d\varGamma \,ds \\ &\quad {}\leq \frac{1}{d_{1}} \int _{0}^{t_{1}}r_{1}''(s) \int _{\varGamma _{1}} \bigl\vert u(t)-u(t-s) \bigr\vert ^{2} \,d\varGamma \,ds \le -cE'(t), \\ &{-} \int _{0}^{t_{1}}r_{2}'(s) \int _{\varGamma _{1}} \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert ^{2} \,d\varGamma \,ds \\ &\quad \leq \frac{1}{d_{2}} \int _{0}^{t_{1}}r_{2}''(s) \int _{\varGamma _{1}} \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert ^{2} \,d\varGamma \,ds \le -cE'(t). \end{aligned}
(63)

Then, using (62) and (63), we have for all $$t \geq t_{1}$$

\begin{aligned}[b]L^{\prime }(t)&\le - \frac{N_{1}}{2} \int _{\varOmega }{ \vert u_{t} \vert }^{2}\,dx-N_{1} c_{0} a(u,u)-\frac{1}{4} \int _{t_{1}}^{t}r_{1}'(s) \int _{\varGamma _{1}} \bigl\vert u(t)-u(t-s) \bigr\vert ^{2} \,d\varGamma \,ds \\ &\quad {}-\frac{1}{4} \int _{t_{1}}^{t}r_{2}'(s) \int _{\varGamma _{1}} \biggl\vert \frac{\partial u(t)}{\partial \nu }-\frac{\partial u(t-s)}{\partial \nu } \biggr\vert ^{2} \,d\varGamma \,ds\\&\quad{} +\frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx +(n_{0}-c) E'(t). \end{aligned}
(64)

Now, we choose $$n_{0}$$ so that $$n_{0}-c >0$$, then (53) is established. Moreover, we can choose N even larger (if needed) so that

$$L(t) \sim E(t).$$
(65)

□

### Lemma 4.8

Under Assumptions $$(A1)$$–$$(A3)$$, the solution satisfies the estimates

\begin{aligned}& \begin{aligned}[b] \int _{\varOmega _{1}}h^{2}(u_{t})\,dx \le c \int _{\varOmega _{1}}u_{t} h(u_{t})\,dx,\quad \textit{if }h_{0}\textit{ is linear}, \end{aligned} \end{aligned}
(66)
\begin{aligned}& \begin{aligned}[b] \int _{\varOmega _{1}}h^{2}(u_{t})\,dx \le cH^{-1}\bigl(J(t)\bigr)-cE^{\prime }(t),\quad \textit{if }h_{0}\textit{ is nonlinear}, \end{aligned} \end{aligned}
(67)

where

$$J(t):= \int _{\varOmega _{1}}u_{t}(t)h\bigl(u_{t}(t)\bigr)\,dx \le -cE^{\prime }(t)$$
(68)

and

$$\varOmega _{1}=\bigl\{ x\in \varOmega : \bigl\vert u_{t}(t) \bigr\vert \le \varepsilon _{1} \bigr\} .$$

### Lemma 4.9

Assume that $$(A1)$$–$$(A3)$$ hold and $$h_{0}$$ is linear. Then the energy functional satisfies the following estimate:

$$\int _{0}^{+\infty }E(s)\,ds < \infty .$$
(69)

### Proof

Let $$F(t)=L(t)+\psi _{2}(t)+ \psi _{3}(t)$$. Using (50) and (64), we obtain, for all $$t \geq t_{1}$$,

\begin{aligned}[b]F^{\prime }(t)&\le - \frac{N_{1}}{2} \int _{\varOmega } \vert u_{t} \vert ^{2}\,dx-N_{1} c_{0} a (u,u)+\frac{1}{4}\bigl(r_{1}' \circ u\bigr) (t) +\frac{1}{4} \biggl(r_{2}' \circ \frac{\partial u}{\partial \nu }\biggr) (t) \\ &\quad {}+ \frac{N_{1} c}{2} \int _{\varOmega } h^{2}(u_{t})\,dx +3 r_{1}(0) \int _{\varGamma _{1}} \bigl\vert u(t) \bigr\vert ^{2}\,d\varGamma +3r_{2}(0) \int _{\varGamma _{1}} \biggl\vert \frac{\partial u(t)}{\partial \nu } \biggr\vert ^{2}\,d\varGamma . \end{aligned}
(70)

Using (17) and (18), we arrive at

\begin{aligned}[b]F^{\prime }(t)&\le - \frac{N_{1}}{2} \int _{\varOmega } \vert u_{t} \vert ^{2}\,dx-(N_{1} c_{0} - c_{r} ) a (u,u)+\frac{1}{4} \bigl(r_{1}' \circ u\bigr) (t) + \frac{1}{4} \biggl(r_{2}' \circ \frac{\partial u}{\partial \nu }\biggr) (t) \\ &\quad {}+ c \int _{\varOmega } h^{2}(u_{t})\,dx, \end{aligned}
(71)

where $$c_{r}= (3 c_{p} r_{1}(0) + 3 c_{s} r_{2}(0))$$ and $$c_{p}$$, $$c _{s}$$ are given in (18). Here, we choose $$N_{1}$$ large enough so that $$N_{1}c_{0} -c_{r} > 0$$. After that, we can choose N even larger (if needed) so that (56) holds. Now, we have

\begin{aligned}[b]F^{\prime }(t)&\le -b E(t)+c \int _{\varOmega }u_{t} h(u_{t})\,dx \\ & \le -bE(t)-cE^{\prime }(t), \end{aligned}

where b is a positive constant. Therefore,

$$b \int _{t_{1}}^{t}E(s)\,ds\le F_{1}(t_{1})-F_{1}(t) \le F_{1}(t_{1})< \infty ,$$
(72)

where $$F_{1}(t)=F(t)+cE(t)\sim E$$. □

Now, we define

\begin{aligned} &I_{1}(t):= \int _{t_{1}}^{t}r_{1}^{\prime \prime }(s) \int _{\varGamma _{1}} { \bigl\vert u(t)-u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds\le -cE^{\prime }(t), \\ &I _{2}(t):= \int _{t_{1}}^{t}r_{2}^{\prime \prime }(s) \int _{\varGamma _{1}} { \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{ \partial \nu } \biggr\vert }^{2}\,d \varGamma \,ds\le -cE^{\prime }(t). \end{aligned}
(73)

### Lemma 4.10

Under Assumptions $$(A1)$$–$$(A3)$$, and if $$h_{0}$$ is linear, we have the following estimates:

$$\int _{t_{1}}^{t}-r_{1}' (s) \int _{\varGamma _{1}}{ \bigl\vert u(t)-u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds\le \frac{1}{q} \overline{G_{1}}^{-1} \biggl(\frac{q I_{1}(t)}{\xi _{1}(t)} \biggr)$$
(74)

and

$$\int _{t_{1}}^{t}-r_{2}' (s) \int _{\varGamma _{1}} { \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2} \,d\varGamma \,ds\le \frac{1}{q} \overline{G_{2}}^{-1} \biggl( \frac{q I_{2}(t)}{\xi _{2}(t)} \biggr),$$
(75)

and if $$h_{0}$$ is nonlinear, we have the following estimates:

\begin{aligned}& \int _{t_{1}}^{t}-r_{1}' (s) \int _{\varGamma _{1}}{ \bigl\vert u(t)-u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds\le \frac{(t-t_{1})}{q} \overline{G_{1}}^{-1} \biggl(\frac{q I_{1}(t)}{(t-t_{1})\xi _{1}(t)} \biggr), \end{aligned}
(76)
\begin{aligned}& \int _{t_{1}}^{t}-r_{2}' (s) \int _{\varGamma _{1}}{ \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2} \,d\varGamma \,ds\le \frac{(t-t_{1})}{q} \overline{G_{2}}^{-1} \biggl( \frac{q I_{2}(t)}{(t-t_{1})\xi _{2}(t)} \biggr), \end{aligned}
(77)

where $$q \in (0,1)$$, $$\overline{G_{1}}$$ and $$\overline{G_{2}}$$ are the extensions of $$G_{1}$$ and $$G_{2}$$, respectively, such that $$\overline{G_{1}}$$ and $$\overline{G_{2}}$$ are strictly increasing and strictly convex $$C^{2}$$ functions on $$(0,\infty )$$.

### Proof

Case I: $$h_{0}$$ is linear. We define the following quantities:

\begin{aligned}[b] &\lambda _{1} (t):=q \int _{t_{1}}^{t} \int _{\varGamma _{1}} { \bigl\vert u(t)-u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds, \\ &\lambda _{2} (t):=q \int _{t_{1}}^{t} \int _{\varGamma _{1}} { \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{ \partial u(t-s)}{\partial \nu } \biggr\vert }^{2}\,d \varGamma \,ds, \end{aligned}
(78)

where, by (69), (19) and (16), we can choose $$q$$ small enough that, for all $$t \ge t_{1}$$,

$$\lambda _{i} (t)< 1,\quad i=1,2.$$
(79)

Since $$G_{i}$$ is strictly convex on $$(0,R_{i}]$$ and $$G_{i}(0)=0$$, we have

$$G_{i}(\theta z)\le \theta G_{i}(z),\quad 0\le \theta \le 1\text{ and }z\in (0,r],$$
(80)

where $$r=\min \{R_{1}, R_{2}\}$$. Without loss of generality, for all $$t \ge t_{1}$$, we assume that $$I_{i}(t)> 0$$, $$i=1,2$$, otherwise we get an exponential decay from (53). Using (14), (79), (80) and Jensen’s inequality, we have

\begin{aligned}[b]I_{1}(t)&=\frac{1}{q \lambda _{1}(t)} \int _{t_{1}}^{t}\lambda _{1} (t) r _{1}''(s) \int _{\varGamma _{1}}{q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds \\ & \ge \frac{1}{q \lambda _{1}(t)} \int _{t_{1}}^{t} \lambda _{1} (t) \xi _{1} (s) G_{1}\bigl(-r_{1}'(s)\bigr) \int _{\varGamma _{1}}{ q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds \\ & \ge \frac{1}{q \lambda _{1}(t)} \int _{t_{1}}^{t} \xi _{1} (s) G_{1} \bigl(-\lambda _{1} (t) r_{1}'(s)\bigr) \int _{\varGamma _{1}}{ q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds \\ & \ge \frac{ \xi _{1} (t)}{q \lambda _{1}(t)} \int _{t_{1}} ^{t} G_{1}\bigl(-\lambda _{1} (t) r_{1}'(s)\bigr) \int _{\varGamma _{1}}{ q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds \\ &\ge \frac{ \xi _{1} (t)}{q \lambda _{1}(t)} \lambda _{1}(t) G_{1}\biggl(q \int _{t_{1}}^{t} - r_{1}'(s) \int _{\varGamma _{1}}{ \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds\biggr) \\ & = \frac{ \xi _{1} (t)}{q } \overline{G_{1}} \biggl( q \int _{t _{1}}^{t} -r_{1}'(s) \int _{\varGamma _{1}}{ \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds \biggr). \end{aligned}

This gives

$$\int _{t_{1}}^{t} -r_{1}' (s) \int _{\varGamma _{1}}{ \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds \leq \frac{1}{q } \overline{G_{1}}^{-1} \biggl(\frac{qI _{1}(t)}{\xi _{1} (t)} \biggr).$$

Similarly, we can show that

$$\int _{t_{1}}^{t}-r_{2}' (s) \int _{\varGamma _{1}} { \biggl\vert \frac{ \partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2} \,d \varGamma \,ds\le \frac{1}{q} \overline{G_{2}}^{-1} \biggl( \frac{q I_{2}(t)}{\xi _{2}(t)} \biggr).$$
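The convexity property (80), which drives the Jensen step above, can be sanity-checked numerically. The following sketch uses the illustrative choice $$G(s)=s^{2}$$ (a strictly convex function with $$G(0)=0$$, chosen only for this example):

```python
# Numerical sanity check of G(theta*z) <= theta*G(z) for 0 <= theta <= 1,
# valid for any convex G with G(0) = 0; here the sample choice G(s) = s**2.

def G(s):
    return s * s

for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    for z in (0.1, 0.5, 1.0):
        assert G(theta * z) <= theta * G(z) + 1e-12
```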

Case II: $$h_{0}$$ is nonlinear. We introduce the following functionals:

\begin{aligned}[b] &\lambda _{3} (t):= \frac{q }{t-t_{1}} \int _{t_{1}}^{t} \int _{\varGamma _{1}} { \bigl\vert u(t)-u(t-s) \bigr\vert }^{2}\,d \varGamma \,ds, \\ & \lambda _{4} (t):= \frac{q }{t-t_{1}} \int _{t_{1}}^{t} \int _{\varGamma _{1}} { \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2}\,d \varGamma \,ds, \end{aligned}
(81)

then, using (16), (19) and (69), we can choose $$q$$ small enough that, for all $$t \ge t_{1}$$,

$$\lambda _{i} (t)< 1, \quad i=3,4.$$
(82)

Using (14), (80), (82) and Jensen’s inequality, we get

\begin{aligned}[b]I_{1}(t) &=\frac{1}{q \lambda _{3}(t)} \int _{t_{1}}^{t}\lambda _{3} (t) r_{1}''(s) \int _{\varGamma _{1}}{q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \\ & \ge \frac{1}{q \lambda _{3}(t)} \int _{t_{1}}^{t} \lambda _{3} (t) \xi _{1} (s) G_{1}\bigl(-r_{1}'(s)\bigr) \int _{\varGamma _{1}}{ q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \\ & \ge \frac{1}{q \lambda _{3}(t)} \int _{t_{1}}^{t} \xi _{1} (s) G_{1}\bigl(-\lambda _{3} (t) r_{1}'(s)\bigr) \int _{\varGamma _{1}}{ q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \\ & \ge \frac{ \xi _{1} (t)}{q \lambda _{3}(t)} \int _{t_{1}}^{t} G_{1}\bigl(-\lambda _{3} (t) r_{1}'(s)\bigr) \int _{\varGamma _{1}}{ q \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \\ & \ge \frac{(t-t_{1}) \xi _{1} (t)}{q \lambda _{3}(t)} \lambda _{3}(t) G_{1} \biggl(\frac{q}{(t-t_{1})} \int _{t_{1}}^{t} - r_{1}'(s) \int _{\varGamma _{1}}{ \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \biggr) \\ & = \frac{ (t-t_{1}) \xi _{1} (t)}{q } \overline{G_{1}} \biggl( \frac{q}{(t-t_{1})} \int _{t_{1}}^{t} -r_{1}'(s) \int _{\varGamma _{1}} { \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \biggr). \end{aligned}

This gives

$$\int _{t_{1}}^{t} -r_{1}'(s) \int _{\varGamma _{1}}{ \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \leq \frac{(t-t_{1})}{q} \overline{G_{1}}^{-1} \biggl(\frac{q I_{1}(t)}{(t-t_{1})\xi _{1}(t)} \biggr).$$

Similarly, we can have

$$\int _{t_{1}}^{t} -r_{2}' (s) \int _{\varGamma _{1}} { \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2} \,d\varGamma \,ds\le \frac{(t-t_{1})}{q} \overline{G_{2}}^{-1} \biggl( \frac{q I_{2}(t)}{(t-t_{1})\xi _{2}(t)} \biggr).$$

□

## Proofs of our main results

Here, we prove the main results of our work, given in Theorems 3.1 and 3.2.

### Proof of Theorem 3.1, case 1, G is linear

We multiply (53) by the nonincreasing function $$\sigma (t)$$ and use (14), (30) and (66) to obtain

\begin{aligned}[b] \sigma (t) L^{\prime }(t)&\le -m \sigma (t) E(t)-c\sigma (t) \int _{t_{1}}^{t}r_{1}^{\prime }(s) \int _{\varGamma _{1}}{ \bigl\vert u(t)- u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \\ &\quad {}-c\sigma (t) \int _{t_{1}}^{t}r_{2}^{\prime }(s) \int _{\varGamma _{1}}{ \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2}\,d\varGamma \,ds+c\sigma (t) \int _{\varOmega } h^{2}(u_{t})\,dx,\quad \forall t\ge t_{1} \\ & \le -m \sigma (t) E(t)+c \int _{\varGamma _{1}} \biggl[\bigl(r_{1}^{\prime \prime } \circ u \bigr) (t) + \biggl(r_{2}^{\prime \prime } \circ \frac{\partial u(t)}{\partial \nu } \biggr) (t) \biggr] \,d\varGamma +c\sigma (t) \int _{\varOmega } h^{2}(u_{t})\,dx \\ & \le -m \sigma (t) E(t)-2c E'(t). \end{aligned}

This gives

$$(\sigma L +2c E)' \leq -m \sigma (t) E(t), \quad \forall t \geq t_{1}.$$
(83)

Using the fact that $$\sigma '(t) \leq 0$$, we have $$\sigma L + 2cE \sim E$$, and we can obtain

$$E (t)\le c_{1} e^{- c_{2} \int _{t_{1}}^{t} \sigma (s) \,ds}.$$
(84)

□
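To illustrate the bound (84): integrating the differential inequality (83) with equality and, purely for the sake of example, taking $$m=1$$, $$t_{1}=0$$ and the sample coefficient $$\sigma (t)=\frac{1}{1+t}$$ (choices made only for this sketch, not fixed by the paper), the energy decays like $$E(0)e^{-\int _{0}^{t}\sigma (s)\,ds}=\frac{E(0)}{1+t}$$, which a forward-Euler integration reproduces:

```python
# Sketch: integrate E'(t) = -sigma(t) E(t), i.e. inequality (83) taken with
# equality and m = 1, t1 = 0, for the sample coefficient sigma(t) = 1/(1+t).
# The closed form is E(t) = E0 * exp(-\int_0^t sigma) = E0 / (1+t).

def sigma(t):
    return 1.0 / (1.0 + t)

def euler_energy(E0, T, n=200000):
    h = T / n
    E, t = E0, 0.0
    for _ in range(n):
        E += h * (-sigma(t) * E)   # explicit Euler step for E' = -sigma*E
        t += h
    return E

E0, T = 1.0, 9.0
closed_form = E0 / (1.0 + T)
assert abs(euler_energy(E0, T) - closed_form) < 1e-3
```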

### Proof of Theorem 3.1, case 2, G is nonlinear

Using (53), (66), (74) and (75), we get

\begin{aligned}[b]L^{\prime }(t)&\le -mE(t)-c \int _{t_{1}}^{t}r_{1}^{\prime }(s) \int _{\varGamma _{1}}{ \bigl\vert u(t)-u(t-s) \bigr\vert }^{2}\,d\varGamma \,ds \\ &\quad {}-c \int _{t_{1}}^{t}r_{2}^{\prime }(s) \int _{\varGamma _{1}} { \biggl\vert \frac{\partial u(t)}{\partial \nu }- \frac{\partial u(t-s)}{\partial \nu } \biggr\vert }^{2}\,d\varGamma \,ds+c \int _{\varOmega } h^{2}(u_{t})\,dx,\quad \forall t \ge t_{1} \\ & \le -m E(t)+\frac{1}{q } \overline{G}^{-1} \biggl( \frac{qI_{1}(t)}{\sigma (t)} \biggr)+\frac{1}{q } \overline{G}^{-1} \biggl( \frac{qI_{2}(t)}{\sigma (t)} \biggr)-cE'(t) \\ &\le -m E(t)+\frac{c}{q } \overline{G}^{-1} \biggl( \frac{qI(t)}{\sigma (t)} \biggr)-cE'(t), \end{aligned}
(85)

where $$I(t)=\max \{I_{1}(t),I_{2}(t)\}$$ $$\forall t \geq t_{1}$$. Let $$\mathcal{F}_{1}(t)=L(t)+cE(t)\sim E$$, then (85) becomes

$$\mathcal{F}_{1}^{\prime }(t)\le -m E(t)+c ( \overline{G} ) ^{-1} \biggl(\frac{qI(t)}{\sigma (t)} \biggr),$$
(86)

we notice that the functional $$\mathcal{F}_{2}$$, defined by

$$\mathcal{F}_{2}(t):=\overline{G}^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)\mathcal{F}_{1}(t)$$

satisfies

$$\alpha _{1}\mathcal{F}_{2}(t)\le E(t)\le \alpha _{2}\mathcal{F}_{2}(t)$$
(87)

where $$\alpha _{1},\alpha _{2}>0$$, and

\begin{aligned}[b]\mathcal{F}_{2}^{\prime }(t)&= \varepsilon _{0} \frac{E^{\prime }(t)}{E(0)}\overline{G}^{\prime \prime } \biggl( \varepsilon _{0} \frac{E(t)}{E(0)} \biggr)\mathcal{F}_{1}(t)+ \overline{G}^{\prime } \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr){\mathcal{F}_{1}}^{ \prime }(t) \\ & \le -m E(t)\overline{G}^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)+c \overline{G}^{\prime } \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)\overline{G}^{-1} \biggl( \frac{qI(t)}{ \sigma (t)} \biggr). \end{aligned}
(88)

Let $$\overline{G}^{*}$$ be the convex conjugate of $$\overline{G}$$ in the sense of Young; then

$$\overline{G}^{*}(a)=a\bigl(\overline{G}^{\prime } \bigr)^{-1}(a)-\overline{G} \bigl[\bigl(\overline{G}^{\prime } \bigr)^{-1}(a) \bigr],\quad \text{if } a\in \bigl(0,\overline{G}^{\prime }(r)\bigr]$$
(89)

and $$\overline{G}^{*}$$ satisfies the generalized Young inequality

$$A B\le \overline{G}^{*}(A)+\overline{G}(B),\quad \text{if } A\in \bigl(0,\overline{G}^{\prime }(r)\bigr], B\in (0,r].$$
(90)
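As a concrete instance of (89)–(90): for the sample choice $$G(s)=\frac{s^{p}}{p}$$ (illustrative only), the convex conjugate is $$G^{*}(a)=\frac{a^{p'}}{p'}$$ with $$\frac{1}{p}+\frac{1}{p'}=1$$, and the generalized Young inequality reduces to the classical one, as the following sketch verifies numerically (the name `q_exp` is ours, to avoid clashing with the parameter $$q$$ of Lemma 4.10):

```python
# Sample instance of the generalized Young inequality A*B <= G*(A) + G(B):
# for G(s) = s**p / p the conjugate is G*(a) = a**q_exp / q_exp,
# where 1/p + 1/q_exp = 1.

p = 3.0
q_exp = p / (p - 1.0)              # conjugate exponent

def G(s):
    return s ** p / p

def G_star(a):
    return a ** q_exp / q_exp

for A in (0.1, 0.5, 1.0, 2.0):
    for B in (0.1, 0.5, 1.0, 2.0):
        assert A * B <= G_star(A) + G(B) + 1e-12
```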

So, with $$A=\overline{G}^{\prime } (\varepsilon _{0}\frac{E(t)}{E(0)} )$$ and $$B=\overline{G}^{-1} (\frac{qI(t)}{\sigma (t)} )$$ and using (19) and (88)–(90), we arrive at

\begin{aligned}[b]\mathcal{F}_{2}^{\prime }(t) &\le -m E(t)\overline{G}^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)+c \overline{G}^{*} \biggl(\overline{G} ^{\prime } \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr) \biggr)+c \biggl(\frac{qI(t)}{\sigma (t)} \biggr) \\ & \le -m E(t)\overline{G}^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)+c\varepsilon _{0}\frac{E(t)}{E(0)} \overline{G}^{\prime } \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)+c \biggl(\frac{qI(t)}{\sigma (t)} \biggr). \end{aligned}
(91)

So, multiplying (91) by $$\sigma (t)$$ and using the facts that $$\varepsilon _{0}\frac{E(t)}{E(0)}< r$$ and $$\overline{G}^{\prime } (\varepsilon _{0}\frac{E(t)}{E(0)} )=G^{\prime } (\varepsilon _{0}\frac{E(t)}{E(0)} )$$, we obtain

\begin{aligned}[b]\sigma (t) \mathcal{F}_{2}^{\prime }(t) &\le -m \sigma (t) E(t)G^{ \prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)+c \sigma (t) \varepsilon _{0} \frac{E(t)}{E(0)}G^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)+c q I(t) \\ &\le -m \sigma (t) E(t)G^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)+c \sigma (t)\varepsilon _{0} \frac{E(t)}{E(0)}G^{\prime } \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)-c E^{\prime }(t). \end{aligned}

Now, for all $$t\ge t_{1}$$ and with a suitable choice of $$\varepsilon _{0}$$, we obtain

$$\mathcal{F}_{3}^{\prime }(t)\le -m_{0} \sigma (t) \biggl(\frac{E(t)}{E(0)} \biggr)G ^{\prime } \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)=-m_{0} \sigma (t) G_{3} \biggl(\frac{E(t)}{E(0)} \biggr),$$
(92)

where $$\mathcal{F}_{3}=\sigma \mathcal{F}_{2}+c E \sim E$$ satisfies, for some $$\beta _{3},\beta _{4}> 0$$,

$$\beta _{3}\mathcal{F}_{3}(t)\le E(t)\le \beta _{4}\mathcal{F}_{3}(t),$$
(93)

and $$G_{3}(t)=t G^{\prime }(\varepsilon _{0}t)$$. Note that $$G_{3}^{\prime }(t)=G^{\prime }(\varepsilon _{0}t)+\varepsilon _{0}t G^{\prime \prime }(\varepsilon _{0}t)$$. Since $$G$$ is strictly convex on $$(0,r]$$, we find that $$G_{3}(t), G_{3}^{\prime }(t)>0$$ on $$(0,1]$$. Then, with

$$R(t)= \frac{\beta _{3} \mathcal{F}_{3}(t)}{E(0)},$$

using (93) and (92), we obtain

$$R(t)\sim E(t)$$
(94)

and then

$$R^{\prime }(t)\le -m_{1}\sigma (t) G_{3}\bigl(R(t) \bigr),\quad \forall t\ge t_{1},$$

where $$m_{1} > 0$$. Integrating over $$(t_{1},t)$$, we get

\begin{aligned}[b] \int _{t_{1}}^{t}\frac{-R^{\prime }(s)}{G_{3}(R(s))}\,ds \ge m_{1} \int _{t_{1}}^{t}\sigma (s)\,ds. \end{aligned}

Hence, by an appropriate change of variable, we get

\begin{aligned}[b] \int _{\varepsilon _{0} R(t)}^{\varepsilon _{0} R(t_{1})}\frac{1}{\tau G ^{\prime }(\tau )}\,d\tau \ge m_{1} \int _{t_{1}}^{t} \sigma (s)\,ds. \end{aligned}

Thus, we have

$$R(t)\le \frac{1}{\varepsilon _{0}}G_{4}^{-1} \biggl(m_{1} \int _{t_{1}} ^{t}\sigma (s)\,ds \biggr),$$
(95)

where $$G_{4}(t)=\int _{t}^{r}\frac{1}{sG^{\prime }(s)}\,ds$$; here we used the fact that $$G_{4}$$ is strictly decreasing on $$(0,r]$$. Therefore, (21) is established by virtue of (94), and the proof of Theorem 3.1 is complete. □
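The change of variable $$\tau =\varepsilon _{0}R(s)$$ behind (95) can be checked numerically. The following sketch uses the illustrative choices $$G(s)=s^{2}$$, $$R(s)=e^{-s}$$ and $$\varepsilon _{0}=\frac{1}{2}$$ (assumptions for the example only), comparing $$\int _{t_{1}}^{t}\frac{-R'(s)}{G_{3}(R(s))}\,ds$$ with $$\int _{\varepsilon _{0}R(t)}^{\varepsilon _{0}R(t_{1})}\frac{d\tau }{\tau G^{\prime }(\tau )}$$:

```python
import math

# Sketch of the substitution tau = eps0 * R(s): with the sample G(s) = s**2
# we have G'(s) = 2*s and G_3(v) = v * G'(eps0 * v) = 2 * eps0 * v**2.

eps0, t1, t = 0.5, 0.0, 2.0

def trapz(f, a, b, n=20000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

R = lambda s: math.exp(-s)
R_prime = lambda s: -math.exp(-s)
G3 = lambda v: 2.0 * eps0 * v * v

lhs = trapz(lambda s: -R_prime(s) / G3(R(s)), t1, t)
rhs = trapz(lambda tau: 1.0 / (2.0 * tau * tau), eps0 * R(t), eps0 * R(t1))
assert abs(lhs - rhs) < 1e-4   # both equal exp(2) - 1 here
```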

### Proof of Theorem 3.2, case 1, G is linear

Multiplying (53) by $$\sigma (t)$$, using (67) and the fact that $$\sigma (t)$$ is nonincreasing, gives the following:

\begin{aligned}& \begin{aligned}[b]\sigma (t) L^{\prime }(t) &\le -m \sigma (t) E(t)+c \int _{\varGamma _{1}} \biggl[\bigl(r_{1}^{\prime \prime } \circ u \bigr) (t) + \biggl(r_{2}^{\prime \prime } \circ \frac{\partial u(t)}{\partial \nu } \biggr) \biggr] \,d \varGamma +c\sigma (t) \int _{\varOmega } h^{2}(u_{t})\,dx \\ & \le -m \sigma (t) E(t)-2c E'(t)+c\sigma (t) \int _{\varOmega } h^{2}(u_{t})\,dx \\ & \le -m \sigma (t) E(t)-2c E'(t)+c \sigma (t) \bigl(H^{-1}\bigl(J(t)\bigr)-cE'(t) \bigr) \\ & \le -m \sigma (t) E(t)-3c E'(t)+c \sigma (t) H^{-1} \bigl(J(t)\bigr), \end{aligned} \\& (\sigma L +3 c E)' \leq -m \sigma (t) E(t)+ c \sigma (t) H^{-1}\bigl(J(t)\bigr),\quad \forall t \geq t_{1}. \end{aligned}
(96)

Therefore, (96) becomes

$${\mathcal{L}}'(t) \leq -m \sigma (t) E(t)+ c \sigma (t) H^{-1}\bigl(J(t)\bigr),\quad \forall t \geq t_{1},$$
(97)

where $$\mathcal{L}:=\sigma L +3c E \sim E$$. Now, for $$\varepsilon _{1}< r _{0}$$ and $$c_{0}>0$$, using (97) and the fact that $$E^{\prime }\le 0$$, $$H^{\prime }>0$$, $$H^{\prime \prime }>0$$ on $$(0,r_{0}]$$, we notice that the functional $$\mathcal{L}_{1}$$, defined by

$$\mathcal{L}_{1}(t):=H' \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr) \mathcal{L}(t)+c_{0}E(t)$$

satisfies, for some $$\alpha _{3},\alpha _{4}>0$$,

$$\alpha _{3} \mathcal{L}_{1}(t)\le E(t)\le \alpha _{4}\mathcal{L}_{1}(t)$$
(98)

and

\begin{aligned}[b]\mathcal{L}_{1}^{\prime }(t)&= \varepsilon _{1} \frac{E^{\prime }(t)}{E(0)}H^{\prime \prime } \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr) \mathcal{L}(t)+ H^{\prime } \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr){\mathcal{L}}^{\prime }(t)+c_{0}E^{\prime }(t) \\ & \le -m E(t)H^{\prime } \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr)+c \sigma (t) H^{\prime } \biggl(\varepsilon _{1}\frac{E(t)}{E(0)} \biggr)H^{-1}\bigl(J(t) \bigr)+c_{0}E^{\prime }(t). \end{aligned}
(99)

Now, let $$H^{*}$$ be the convex conjugate of $$H$$; then, as in (89) and (90), with $$A=H^{\prime } (\varepsilon _{1} \frac{E(t)}{E(0)} )$$ and $$B=H^{-1}(J(t))$$, (99) gives

\begin{aligned}[b]\mathcal{L}_{1}^{\prime }(t)&\le -m E(t)H^{\prime } \biggl(\varepsilon _{1}\frac{E(t)}{E(0)} \biggr)+c \sigma (t) H^{*} \biggl(H^{\prime } \biggl(\varepsilon _{1}\frac{E(t)}{E(0)} \biggr) \biggr)+c \sigma (t) J(t)+c _{0}E^{\prime }(t) \\ & \le -m E(t)H^{\prime } \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr)+c\varepsilon _{1} \sigma (t) \frac{E(t)}{E(0)}H^{\prime } \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr)-c E^{\prime }(t)+c_{0} E^{\prime }(t). \end{aligned}

Choosing suitable $$\varepsilon _{1}$$ and $$c_{0}$$, we find, for all $$t\ge t_{1}$$,

$$\mathcal{L}_{1}^{\prime }(t)\le -c \sigma (t) \frac{E(t)}{E(0)}H^{\prime } \biggl(\varepsilon _{1} \frac{E(t)}{E(0)} \biggr)=-c \sigma (t) H_{2} \biggl(\frac{E(t)}{E(0)} \biggr),$$
(100)

where $$H_{2}(t)=t H^{\prime }(\varepsilon _{1}t)$$. Note that $$H^{\prime }_{2}(t)=H^{\prime }(\varepsilon _{1}t)+\varepsilon _{1}t H^{\prime \prime }(\varepsilon _{1}t)$$. Since $$H$$ is strictly convex on $$(0,r_{0}]$$, we find that $$H_{2}^{\prime }(t), H_{2}(t)>0$$ on $$(0,1]$$. Then, with

$$R_{1}(t)= \frac{\alpha _{3}{\mathcal{L}_{1}}(t)}{E(0)},$$

using (98) and (100), we have

\begin{aligned}& R_{1}(t)\sim E(t), \\& R_{1}^{\prime }(t)\le -c_{3} \sigma (t) H_{2} \bigl(R_{1}(t)\bigr),\quad \forall t\ge t_{1}, \end{aligned}
(101)

where $$c_{3}>0$$. Thus, we integrate over $$(t_{1},t)$$ to get

$$R_{1}(t)\le H_{1}^{-1} \biggl(c_{3} \int _{t_{1}}^{t} \sigma (s) \,ds +c_{4} \biggr),\quad \forall t\ge t_{1},$$
(102)

where $$c_{4}>0$$, and $$H_{1}(t)=\int _{t}^{1}\frac{1}{H_{2}(s)}\,ds$$. □

### Proof of Theorem 3.2, case 2, G is nonlinear

Using (53), (67) and (77), we obtain

$$L^{\prime }(t)\le -m E(t)+c(t-t_{1}) (\overline{G} )^{-1} \biggl(\frac{qI(t)}{(t-t_{1})\sigma (t)} \biggr)+c H^{-1}\bigl(J(t) \bigr)-cE ^{\prime }(t).$$
(103)

Since $$\lim_{t\to +\infty } \frac{1}{t-t_{1}}=0$$, there exists $$t_{2} > t_{1}$$ such that $$\frac{1}{t-t_{1}} < 1$$ whenever $$t > t_{2}$$. By setting $$\theta =\frac{1}{t-t_{1}} < 1$$ and using (80), we obtain

$$\overline{H}^{-1}\bigl(J(t)\bigr) \leq (t-t_{1}) \overline{H}^{-1} \biggl(\frac{J(t)}{(t-t _{1})} \biggr),\quad \forall t \ge t_{2},$$

and then (103) becomes

\begin{aligned}[b]L^{\prime }(t)&\le -m E(t)+c(t-t_{1}) (\overline{G} )^{-1} \biggl( \frac{qI(t)}{(t-t_{1})\sigma (t)} \biggr)+c(t-t_{1}) \overline{H}^{-1} \biggl(\frac{J(t)}{(t-t_{1})} \biggr) \\ &\quad {}-cE^{\prime }(t),\quad \forall t\ge t_{2}. \end{aligned}
(104)

Let $$L_{1}(t)=L(t)+cE(t)$$; then $$L_{1}\sim E$$ and (104) takes the form

\begin{aligned}& L_{1}^{\prime }(t)\le -mE(t)+c(t-t_{1}) (\overline{G} ) ^{-1} \biggl(\frac{qI(t)}{(t-t_{1})\sigma (t)} \biggr)+c(t-t_{1}) \overline{H}^{-1} \biggl(\frac{J(t)}{(t-t_{1})}\biggr),\quad \forall t\ge t_{2}. \end{aligned}
(105)

Let $$r_{3}=\min \{r,r_{0}\}$$, $$\chi (t)=\max \{\frac{qI(t)}{(t-t_{1})\sigma (t)},\frac{J(t)}{t-t_{1}} \}$$ and $$W= ( (\overline{G} )^{-1}+\overline{H}^{-1} )^{-1}$$.

So, (105) reduces to

$$L_{1}^{\prime }(t)\le -m E(t)+c(t-t_{1}) W^{-1}\bigl(\chi (t)\bigr),\quad\forall t \ge t_{2} .$$
(106)
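The reduction from (105) to (106) only uses that $$(\overline{G})^{-1}$$ and $$\overline{H}^{-1}$$ are increasing, so each term is bounded by its value at $$\chi (t)=\max \{\cdot ,\cdot \}$$, and the two values then sum to $$W^{-1}(\chi (t))$$ by the definition of W. The following sketch checks this with hypothetical power functions $$\overline{G}(s)=s^{2}$$, $$\overline{H}(s)=s^{3}$$ (not the paper's G, H):

```python
# Illustrative check of the reduction (105) -> (106): with increasing
# Gbar^{-1}, Hbar^{-1} and chi = max{a, b},
#   Gbar^{-1}(a) + Hbar^{-1}(b) <= Gbar^{-1}(chi) + Hbar^{-1}(chi) = W^{-1}(chi).
# Hypothetical choices: Gbar(s) = s^2, Hbar(s) = s^3.

def Gbar_inv(y):
    return y ** 0.5

def Hbar_inv(y):
    return y ** (1.0 / 3.0)

def W_inv(y):                 # W = ((Gbar)^{-1} + (Hbar)^{-1})^{-1}
    return Gbar_inv(y) + Hbar_inv(y)

pairs = [(0.3, 0.7), (0.9, 0.1), (0.5, 0.5), (1e-3, 0.2)]
for a, b in pairs:
    chi = max(a, b)
    assert Gbar_inv(a) + Hbar_inv(b) <= W_inv(chi) + 1e-12
```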

Now, for $$\varepsilon _{2}< r_{3}$$, using (106) and the facts that $$E^{\prime }\le 0$$, $$W^{\prime }>0$$, $$W^{\prime \prime }>0$$ on $$(0,r_{3}]$$, we find that the functional $$L_{2}$$, defined by

$$L_{2}(t):=W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr)L_{1}(t),\quad \forall t\ge t_{2},$$

satisfies, for some $$\alpha _{5},\alpha _{6}>0$$,

$$\alpha _{5} L_{2}(t)\le E(t)\le \alpha _{6}L_{2}(t)$$
(107)

and, for all $$t\ge t_{2}$$,

\begin{aligned}[b]L_{2}^{\prime }(t)&= \biggl(\frac{-\varepsilon _{2}}{(t-t_{1})^{2}} \frac{E(t)}{E(0)}+\frac{\varepsilon _{2}}{(t-t_{1})} \frac{E^{\prime }(t)}{E(0)} \biggr)W^{\prime \prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr){L_{1}}(t) \\ &\quad {}+W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr){L}^{\prime }_{1}(t) \\ & \le -m E(t)W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t _{1}} \cdot \frac{E(t)}{E(0)} \biggr)+c(t-t_{1})W^{\prime } \biggl( \frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr)W^{-1}\bigl( \chi (t)\bigr). \end{aligned}
(108)

Let $$W^{*}$$ be the convex conjugate of W; then, as in (89) and (90),

$$W^{*}(a)=a\bigl(W^{\prime }\bigr)^{-1}(a)-W \bigl[\bigl(W^{\prime }\bigr)^{-1}(a) \bigr],\quad \text{if } a \in \bigl(0,W^{\prime }(r_{3})\bigr]$$
(109)

and $$W^{*}$$ satisfies the Young inequality,

$$A B\le W^{*}(A)+W(B),\quad \text{if } A\in \bigl(0,W^{\prime }(r_{3})\bigr], B\in (0,r_{3}].$$
(110)
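Formulas (109) and (110) are the classical Legendre transform and Young inequality for convex functions. For a hedged illustration, take the hypothetical choice $$W(s)=s^{2}$$, so that $$(W^{\prime })^{-1}(a)=a/2$$ and (109) gives $$W^{*}(a)=a^{2}/4$$; (110) then reads $$AB\le A^{2}/4+B^{2}$$, which holds since $$(A/2-B)^{2}\ge 0$$:

```python
# Illustrative check of (109)-(110) for the hypothetical choice W(s) = s^2:
# W'(s) = 2s, (W')^{-1}(a) = a/2, and (109) gives W*(a) = a^2 / 4.

def W(s):
    return s * s

def W_star(a):                # convex conjugate computed from formula (109)
    s = a / 2.0               # s = (W')^{-1}(a)
    return a * s - W(s)

r3 = 1.0
grid = [0.05 * k for k in range(1, 21)]      # sample points in (0, r3]
for A in [2.0 * s for s in grid]:            # A ranges over (0, W'(r3)] = (0, 2]
    for B in grid:                           # B ranges over (0, r3]
        assert A * B <= W_star(A) + W(B) + 1e-12   # Young's inequality (110)
```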

Therefore, taking $$A=W^{\prime } (\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} )$$ and $$B=W^{-1}(\chi (t))$$, (108) gives

\begin{aligned}[b]L_{2}^{\prime }(t) &\le -m E(t)W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t _{1}} \cdot \frac{E(t)}{E(0)} \biggr)+c (t-t_{1})W^{*} \biggl(W^{ \prime } \biggl( \frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) \biggr) \\ &\quad {}+c (t-t_{1})\chi (t) \\ &\le -m E(t)W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t _{1}} \cdot \frac{E(t)}{E(0)} \biggr)+c(t-t_{1})\frac{\varepsilon _{2}}{t-t _{1}} \cdot \frac{E(t)}{E(0)}W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t _{1}} \cdot \frac{E(t)}{E(0)} \biggr) \\ &\quad {}+c(t-t_{1})\chi (t). \end{aligned}
(111)

Using (68) and (73), we observe that

\begin{aligned}[b] &(t-t_{1})\sigma (t) \chi (t)\le -cE^{\prime }(t). \end{aligned}

So, multiplying (111) by $$\sigma (t)$$ and using this estimate, together with the fact that $$\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)}< r_{3}$$, we get

\begin{aligned}[b]\sigma (t)L_{2}^{\prime }(t)&\le -m \sigma (t) E(t)W^{\prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr)+c \varepsilon _{2} \sigma (t) \cdot \frac{E(t)}{E(0)}W^{\prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) \\ &\quad {}-cE^{\prime }(t),\quad \forall t\ge t_{2}. \end{aligned}

Since σ is nonincreasing (so $$\sigma ^{\prime }(t)L_{2}(t)\le 0$$), we obtain, for all $$t \ge t_{2}$$,

\begin{aligned}[b]\bigl(\sigma (t)L_{2}+cE \bigr)^{\prime }(t)&\le -m \sigma (t) E(t)W^{\prime } \biggl( \frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) \\ &\quad {}+c \varepsilon _{2}\sigma (t) \frac{E(t)}{E(0)}W^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr). \end{aligned}

Therefore, by setting $$L_{3}(t):=\sigma (t)L_{2}(t)+cE(t) \sim E(t)$$, we get

\begin{aligned}[b] &L_{3}^{\prime }(t)\le -m \sigma (t) E(t)W^{\prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr)+c \varepsilon _{2} \sigma (t) \cdot \frac{E(t)}{E(0)}W^{\prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr). \end{aligned}

This gives, for a suitable choice of $$\varepsilon _{2}$$,

\begin{aligned}[b] &L_{3}^{\prime }(t)\le -m_{0} \sigma (t) \biggl(\frac{E(t)}{E(0)} \biggr)W ^{\prime } \biggl(\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr),\quad \forall t\ge t_{2}, \end{aligned}

or

$$m_{0} \biggl(\frac{E(t)}{E(0)} \biggr)W^{\prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr)\sigma (t) \leq - L_{3}^{\prime }(t),\quad \forall t\ge t_{2}.$$
(112)

An integration of (112) yields

$$\int _{t_{2}}^{t} m_{0} \biggl( \frac{E(s)}{E(0)} \biggr)W^{\prime } \biggl(\frac{\varepsilon _{2}}{s-t_{1}} \cdot \frac{E(s)}{E(0)} \biggr) \sigma (s) \,ds \leq - \int _{t_{2}}^{t} L_{3}^{\prime }(s)\,ds\le L_{3}(t _{2}).$$
(113)

Using the facts that $$W',W'' > 0$$ and the nonincreasing property of E, we deduce that the map $$t \mapsto E(t)W^{\prime } (\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} )$$ is nonincreasing and, consequently, we have

\begin{aligned}[b] & m_{0} \biggl( \frac{E(t)}{E(0)} \biggr)W^{\prime } \biggl(\frac{ \varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) \int _{t_{2}} ^{t} \sigma (s) \,ds \\ &\quad \leq \int _{t_{2}}^{t} m_{0} \biggl( \frac{E(s)}{E(0)} \biggr)W ^{\prime } \biggl(\frac{\varepsilon _{2}}{s-t_{1}} \cdot \frac{E(s)}{E(0)} \biggr)\sigma (s) \,ds\le L_{3}(t_{2}). \end{aligned}
(114)
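The step behind (114) is the elementary observation that a nonincreasing integrand can be pulled out of the integral: if f is nonincreasing on $$[t_{2},t]$$, then $$f(t)\int _{t_{2}}^{t}\sigma (s)\,ds\le \int _{t_{2}}^{t}f(s)\sigma (s)\,ds$$. A quick numerical illustration with the hypothetical choices $$f(s)=e^{-s}$$, $$\sigma (s)=1/(1+s)$$:

```python
# Illustrative check of the step behind (114): for nonincreasing f,
#   f(t) * int_{t2}^{t} sigma(s) ds <= int_{t2}^{t} f(s) * sigma(s) ds.
# Hypothetical choices: f(s) = e^{-s}, sigma(s) = 1/(1+s).

import math

def f(s):
    return math.exp(-s)

def sigma(s):
    return 1.0 / (1.0 + s)

t2, t, n = 0.0, 10.0, 20000
h = (t - t2) / n
mids = [t2 + (k + 0.5) * h for k in range(n)]          # midpoint rule
int_sigma = sum(sigma(s) for s in mids) * h
int_f_sigma = sum(f(s) * sigma(s) for s in mids) * h
assert f(t) * int_sigma <= int_f_sigma
```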

Multiplying each side of (114) by $$\frac{1}{t-t_{1}}$$, we find, for some constant $$m_{3}>0$$,

$$m_{0} \biggl(\frac{1}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) W^{ \prime } \biggl(\frac{\varepsilon _{2}}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) \int _{t_{2}}^{t} \sigma (s) \,ds \leq \frac{m _{3}}{t-t_{1}}.$$
(115)

Next, we set $$W_{2}(t)=t W^{\prime }(\varepsilon _{2}t)$$, which is strictly increasing; then we obtain

$$m_{0} W_{2} \biggl(\frac{1}{t-t_{1}} \cdot \frac{E(t)}{E(0)} \biggr) \int _{t_{2}}^{t} \sigma (s) \,ds \leq \frac{m_{3}}{t-t_{1}}.$$
(116)

Finally, since $$W_{2}^{-1}$$ is increasing, we obtain, for a positive constant $$m_{4}$$,

$$E(t) \leq m_{4} (t-t_{1}) {W_{2}}^{-1} \biggl( \frac{m_{3}}{(t-t_{1}) \int _{t_{2}}^{t}\sigma (s) \,ds } \biggr).$$
(117)

This finishes the proof of Theorem 3.2. □
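To illustrate how (117) produces an explicit rate, consider a hedged model case, assumed here purely for illustration and not a choice made in the paper: $$W(s)=s^{p}$$ with $$1< p<2$$ near the origin and $$\sigma \equiv 1$$, so that $$\int _{t_{2}}^{t}\sigma (s)\,ds=t-t_{2}$$. Then

```latex
% Assumed model case (illustrative only): W(s) = s^p with 1 < p < 2,
% sigma = 1, so that \int_{t_2}^{t}\sigma(s)\,ds = t - t_2.
W_{2}(s) = s\,W^{\prime}(\varepsilon_{2}s) = p\,\varepsilon_{2}^{\,p-1}s^{p},
\qquad
W_{2}^{-1}(y) = \Bigl(\frac{y}{p\,\varepsilon_{2}^{\,p-1}}\Bigr)^{1/p},
% hence (117) becomes
E(t) \le m_{4}(t-t_{1})
\Bigl(\frac{m_{3}}{p\,\varepsilon_{2}^{\,p-1}(t-t_{1})(t-t_{2})}\Bigr)^{1/p}
\le C\,t^{-(2-p)/p} \quad \text{for } t \text{ large},
```

that is, a polynomial decay whose exponent is dictated by the growth of W near the origin.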

## References

1. Lagnese, J.: Boundary Stabilization of Thin Plates. SIAM, Philadelphia (1989)

2. Komornik, V.: On the nonlinear boundary stabilization of Kirchhoff plates. Nonlinear Differ. Equ. Appl. 1(4), 323–337 (1994)

3. Lasiecka, I.: Exponential decay rates for the solutions of Euler–Bernoulli equations with boundary dissipation occurring in the moments only. J. Differ. Equ. 95(1), 169–182 (1992)

4. Cavalcanti, M., Domingos Cavalcanti, V., Soriano, J.: Global existence and asymptotic stability for the nonlinear and generalized damped extensible plate equation. Commun. Contemp. Math. 6(05), 705–731 (2004)

5. Ammari, K., Tucsnak, M.: Stabilization of Bernoulli–Euler beams by means of a pointwise feedback force. SIAM J. Control Optim. 39(4), 1160–1181 (2000)

6. Komornik, V.: Decay estimates for a Petrovski system with a nonlinear distributed feedback (1992)

7. Guzmán, R.B., Tucsnak, M.: Energy decay estimates for the damped plate equation with a local degenerated dissipation. Syst. Control Lett. 48(3–4), 191–197 (2003)

8. Vasconcellos, C.F., Teixeira, L.M.: Existence, uniqueness and stabilization for a nonlinear plate system with nonlinear damping. In: Annales de la Faculté des Sciences de Toulouse: Mathématiques, vol. 8, pp. 173–193 (1999). Université Paul Sabatier

9. Pazoto, A.F., Coelho, L., Coimbra Charao, R.: Uniform stabilization of a plate equation with nonlinear localized dissipation. Proyecciones 23(3), 205–234 (2004)

10. Xiang, M., Wang, F.: Fractional Schrödinger–Poisson–Kirchhoff type systems involving critical nonlinearities. Nonlinear Anal. 164, 1–26 (2017)

11. Lagnese, J.E.: Asymptotic energy estimates for Kirchhoff plates subject to weak viscoelastic damping. Int. Ser. Numer. Math. 91, 211–236 (1989)

12. Rivera, J.M., Lapa, E.C., Barreto, R.: Decay rates for viscoelastic plates with memory. J. Elast. 44(1), 61–87 (1996)

13. Alabau-Boussouira, F., Cannarsa, P., Sforza, D.: Decay estimates for second order evolution equations with memory. J. Funct. Anal. 254(5), 1342–1372 (2008)

14. Enrike, Z.: Exponential decay for the semilinear wave equation with locally distributed damping. Commun. Partial Differ. Equ. 15(2), 205–235 (1990)

15. Komornik, V.: Decay estimates for the wave equation with internal damping. In: Control and Estimation of Distributed Parameter Systems: Nonlinear Phenomena, pp. 253–266. Springer, Berlin (1994)

16. Nakao, M.: Decay of solutions of the wave equation with a local nonlinear dissipation. Math. Ann. 305(1), 403–417 (1996)

17. de Lima Santos, M., Junior, F.: A boundary condition with memory for Kirchhoff plates equations. Appl. Math. Comput. 148(2), 475–496 (2004)

18. Benaissa, A., Mimouni, S.: Energy decay of solutions of a wave equation of p-Laplacian type with a weakly nonlinear dissipation. J. Inequal. Pure Appl. Math. 7(1), Article 15 (2006)

19. Messaoudi, S.A.: General decay of the solution energy in a viscoelastic equation with a nonlinear source. Nonlinear Anal., Theory Methods Appl. 69(8), 2589–2598 (2008)

20. Messaoudi, S.A.: General decay of solutions of a viscoelastic equation. J. Math. Anal. Appl. 341(2), 1457–1467 (2008)

21. Han, X., Wang, M.: General decay of energy for a viscoelastic equation with nonlinear damping. Math. Methods Appl. Sci. 32(3), 346–358 (2009)

22. Liu, W.: General decay of solutions to a viscoelastic wave equation with nonlinear localized damping. In: Annales Academiæ Scientiarum Fennicæ. Mathematica, vol. 34, pp. 291–302 (2009)

23. Liu, W.: General decay rate estimate for a viscoelastic equation with weakly nonlinear time-dependent dissipation and source terms. J. Math. Phys. 50(11), 113506 (2009)

24. Messaoudi, S.A., Mustafa, M.I.: On the control of solutions of viscoelastic equations with boundary feedback. Nonlinear Anal., Real World Appl. 10(5), 3132–3140 (2009)

25. Mustafa, M.I.: Uniform decay rates for viscoelastic dissipative systems. J. Dyn. Control Syst. 22(1), 101–116 (2016)

26. Mustafa, M.I.: Well posedness and asymptotic behavior of a coupled system of nonlinear viscoelastic equations. Nonlinear Anal., Real World Appl. 13(1), 452–463 (2012)

27. Park, J.Y., Park, S.H.: General decay for quasilinear viscoelastic equations with nonlinear weak damping. J. Math. Phys. 50(8), 083505 (2009)

28. Wu, S.-T.: General decay for a wave equation of Kirchhoff type with a boundary control of memory type. Bound. Value Probl. 2011(1), 55 (2011)

29. Feng, B., Li, H.: Energy decay for a viscoelastic Kirchhoff plate equation with a delay term. Bound. Value Probl. 2016(1), 174 (2016)

30. Park, S.-H., Kang, J.-R.: General decay for weak viscoelastic Kirchhoff plate equations with delay boundary conditions. Bound. Value Probl. 2017(1), 96 (2017)

31. Lasiecka, I., Tataru, D., et al.: Uniform boundary stabilization of semilinear wave equations with nonlinear boundary damping. Differ. Integral Equ. 6(3), 507–533 (1993)

32. Cavalcanti, M.M., Cavalcanti, A.D., Lasiecka, I., Wang, X.: Existence and sharp decay rate estimates for a von Karman system with long memory. Nonlinear Anal., Real World Appl. 22, 289–306 (2015)

33. Guesmia, A.: Asymptotic stability of abstract dissipative systems with infinite memory. J. Math. Anal. Appl. 382(2), 748–760 (2011)

34. Lasiecka, I., Messaoudi, S.A., Mustafa, M.I.: Note on intrinsic decay rates for abstract wave equations with memory. J. Math. Phys. 54(3), 031504 (2013)

35. Lasiecka, I., Wang, X.: Intrinsic decay rate estimates for semilinear abstract second order equations with memory. In: New Prospects in Direct, Inverse and Control Problems for Evolution Equations, pp. 271–303. Springer, Berlin (2014)

36. Xiao, T.-J., Liang, J.: Coupled second order semilinear evolution equations indirectly damped via memory effects. J. Differ. Equ. 254(5), 2128–2157 (2013)

37. Mustafa, M.I.: On the control of the wave equation by memory-type boundary condition. Discrete Contin. Dyn. Syst., Ser. A 35(3), 1179–1192 (2015)

38. Messaoudi, S.A., Al-Khulaifi, W.: General and optimal decay for a quasilinear viscoelastic equation. Appl. Math. Lett. 66, 16–22 (2017)

39. Mustafa, M.I.: Asymptotic stability for the second order evolution equation with memory. J. Dyn. Control Syst. 25(2), 263–273 (2019)

40. Kang, J.-R.: General decay for Kirchhoff plates with a boundary condition of memory type. Bound. Value Probl. 2012(1), 129 (2012)

41. Mustafa, M.I., Abusharkh, G.A.: Plate equations with viscoelastic boundary damping. Indag. Math. 26(2), 307–323 (2015)

42. Mustafa, M.I.: Energy decay of dissipative plate equations with memory-type boundary conditions. Asymptot. Anal. 100(1–2), 41–62 (2016)

43. Park, J.-Y., Kang, J.-R.: A boundary condition with memory for the Kirchhoff plate equations with non-linear dissipation. Math. Methods Appl. Sci. 29(3), 267–280 (2006)

44. Mustafa, M.I.: Optimal decay rates for the viscoelastic wave equation. Math. Methods Appl. Sci. 41(1), 192–204 (2018)

45. Al-Gharabli, M.M., Al-Mahdi, A.M., Messaoudi, S.A.: General and optimal decay result for a viscoelastic problem with nonlinear boundary feedback. J. Dyn. Control Syst., 1–22 (2018). https://doi.org/10.1007/s10883-018-9422-y

46. Arnol’d, V.I.: Mathematical Methods of Classical Mechanics, vol. 60. Springer, Berlin (2013)

### Acknowledgements

The author thanks KFUPM for its continuous support. This work is funded by KFUPM under Project (SB181039).

## Funding

This work is funded by KFUPM under Project (SB181039).

## Author information

### Contributions

The author read and approved the final manuscript.

## Ethics declarations

### Competing interests

The author declares that he has no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
