On decreasing solutions of second order nearly linear differential equations
Boundary Value Problems volume 2014, Article number: 62 (2014)
Abstract
We consider the nonlinear equation ${(r(t)G({y}^{\prime}))}^{\prime}=p(t)F(y)$, where r, p are positive continuous functions and $F(\cdot )$, $G(\cdot )$ are continuous functions which are both regularly varying at zero of index one. The existence and asymptotic behavior of decreasing slowly varying solutions are studied. Our observations can be understood at least in two ways: as a nonlinear extension of results for linear equations; and as an analysis of the border case (‘between sublinearity and superlinearity’) for a certain generalization of Emden-Fowler type equations.
MSC:26A12, 34C11, 34E05.
1 Introduction
We consider the nonlinear equation
$$\bigl(G(y^{\prime})\bigr)^{\prime}=p(t)F(y),$$(1)
where p is a positive continuous function on $[a,\infty)$ and F, G are continuous functions on ℝ with $uF(u)>0$, $uG(u)>0$ for $u\ne 0$. To simplify our considerations we suppose that F and G are increasing and odd. The nonlinearities F and G are further assumed to have regularly varying behavior of index 1 at zero. More precisely, we require
$$F(\cdot),\,G(\cdot)\in\mathcal{RV}_{0}(1),$$(2)
the class $\mathcal{RV}_{0}$ being defined below. This condition justifies the terminology a nearly linear equation. Indeed, if we make the trivial choice $F=G=\mathrm{id}$, then (1) reduces to a linear equation. It is, however, clear that in contrast to linear equations, the solution space of (1) is generally neither additive nor homogeneous. Examples of $F(u)$ and $G(u)$ which lead to a nonlinear equation and can be treated within our theory are $u|\ln u|$, $u/|\ln u|$, or $u/\sqrt{1\pm u^{2}}$, and many others.
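The index-1 regular variation of the sample nonlinearities can be observed numerically. The following sketch (our own illustration, not part of the paper) evaluates the defining ratio $F(\lambda u)/F(u)$ for small $u$; the test value $\lambda = 2.5$ is arbitrary:

```python
# Numerical sanity check: F(u) = u*|ln u| and F(u) = u/|ln u| are
# regularly varying at zero of index 1, i.e. F(lam*u)/F(u) -> lam^1 = lam
# as u -> 0+.  Convergence is slow because of the logarithmic factor.
import math

def F1(u):
    return u * abs(math.log(u))

def F2(u):
    return u / abs(math.log(u))

lam = 2.5
for F in (F1, F2):
    ratios = [F(lam * u) / F(u) for u in (1e-4, 1e-8, 1e-12)]
    print(ratios)  # slowly approaches lam = 2.5
```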
The theory of regular variation has been shown to be very useful in studying asymptotic properties of linear and nonlinear differential equations; see e.g. [1–5]. As for nonlinear equations, typically Emden-Fowler type equations have been studied, e.g. of the form ${y}^{\prime\prime}=q(t)|y|^{\gamma}\operatorname{sgn}y$ or, more generally,
$$y^{\prime\prime}=q(t)\varphi(y),$$(3)
where $\varphi(\cdot)\in\mathcal{RV}(\gamma)$ or $\varphi(\cdot)\in\mathcal{RV}_{0}(\gamma)$, $\gamma>0$; see e.g. [1, 3–5]. Usually the sublinearity condition resp. the superlinearity condition is assumed, i.e., $\gamma<1$ resp. $\gamma>1$, and such conditions play an important role in the proofs. Notice that from this point of view, our equation (1) (which arises as a variant of (3) with specific nonlinearities) is neither superlinear nor sublinear, since the indices of regular variation of F and G are the same. Therefore, asymptotic analysis of (1) in the framework of regular variation requires an approach different from the usual ones for the above-mentioned Emden-Fowler equation with $\gamma\ne 1$. The crucial property is now the fact that the nonlinearities in (1) are in some sense close to each other (they can differ by a slowly varying function). It turns out that a modification of some methods known from the linear theory is a useful tool. However, as we will see, some phenomena may occur for (1) which cannot happen in the linear case.
In addition to the abovementioned works where the analysis of linear and nonlinear equations is made in the framework of regular variation, we recommend the monograph [6] where various asymptotic properties of linear and nonlinear differential equations are investigated.
We are interested in the asymptotic behavior of solutions y to (1) such that $y(t){y}^{\prime}(t)<0$ for large t. Without loss of generality, we restrict our study to eventually positive decreasing solutions of (1); this set is denoted by $\mathcal{DS}$. As we will see, for any $y\in \mathcal{DS}$, ${\lim}_{t\to \infty}{y}^{\prime}(t)=0$.
The paper is organized as follows. In the remaining part of this section we recall a few essentials from the theory of regular variation. In the next section we present the main results, proofs, examples, and comments. We give conditions guaranteeing that $\mathcal{DS}$ solutions are slowly varying and, in particular, establish asymptotic formulas and estimates for them. The results can be understood at least in two ways: first, as a nonlinear extension of the existing linear results; and second, as an analysis of the border case in the sense of the above-described sublinear and superlinear setting in (3), where, in addition, a nonlinearity can be present also in the differential term. In the last part of the paper, we discuss an extension to the equation
$$\bigl(r(t)G(y^{\prime})\bigr)^{\prime}=p(t)F(y),$$(4)
where r and p are positive continuous functions on $[a,\infty)$.
We use the usual convention: For eventually positive functions f, g we denote $f(t)\sim g(t)$ resp. $f(t)=o(g(t))$ as $t\to \mathrm{\infty}$ if ${lim}_{t\to \mathrm{\infty}}f(t)/g(t)=1$ resp. ${lim}_{t\to \mathrm{\infty}}f(t)/g(t)=0$.
We start with recalling the definition of regularly varying functions introduced by J Karamata in 1930.
Definition 1 A measurable function $f:[a,\mathrm{\infty})\to (0,\mathrm{\infty})$ is called regularly varying (at infinity) of index ϑ if ${lim}_{t\to \mathrm{\infty}}\frac{f(\lambda t)}{f(t)}={\lambda}^{\vartheta}$ for every $\lambda >0$; we write $f\in \mathcal{RV}(\vartheta )$. The class of slowly varying functions (at infinity), denoted by $\mathcal{SV}$, is defined as $\mathcal{SV}=\mathcal{RV}(0)$.
It is clear that $f\in \mathcal{RV}(\vartheta )$ iff $f(t)={t}^{\vartheta}L(t)$, where $L\in \mathcal{SV}$. The following proposition includes very important and useful properties. The proofs can be found in [[7], Chapter 1] or [[2], Chapter 1].
Proposition 1

(i)
(The uniform convergence theorem) The relation in the definition holds uniformly on each compact λ-set in $(0,\infty)$.

(ii)
(The representation theorem) $f\in \mathcal{RV}(\vartheta )$ iff
$$f(t)={t}^{\vartheta}\varphi(t)\exp\left\{\int_{a}^{t}\frac{\psi(s)}{s}\,\mathrm{d}s\right\},$$
where φ, ψ are measurable functions with ${\lim}_{t\to\infty}\varphi(t)=C\in(0,\infty)$ and ${\lim}_{t\to\infty}\psi(t)=0$.

(iii)
(Karamata theorem (direct half)) If $L\in \mathcal{SV}$, then $\int_{t}^{\infty}{s}^{\zeta}L(s)\,\mathrm{d}s\sim \frac{-1}{\zeta+1}{t}^{\zeta+1}L(t)$ provided $\zeta<-1$, and $\int_{a}^{t}{s}^{\zeta}L(s)\,\mathrm{d}s\sim \frac{1}{\zeta+1}{t}^{\zeta+1}L(t)$ provided $\zeta>-1$. The integral $\int_{a}^{\infty}L(s)/s\,\mathrm{d}s$ may or may not converge. The function $\tilde{L}(t)=\int_{t}^{\infty}\frac{L(s)}{s}\,\mathrm{d}s$ resp. $\tilde{L}(t)=\int_{a}^{t}\frac{L(s)}{s}\,\mathrm{d}s$ is a new $\mathcal{SV}$ function such that $L(t)/\tilde{L}(t)\to 0$ as $t\to\infty$.
If $\phi (t)\equiv C$ in the above representation, then we speak of normalized regularly varying functions of index ϑ; we write $f\in \mathcal{NRV}(\vartheta )$. We denote $\mathcal{NSV}=\mathcal{NRV}(0)$.
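The first Karamata relation can also be checked numerically. The following sketch (our own, with the arbitrary illustrative choices $L(s)=\ln s$ and $\zeta=-3$) compares the tail integral with its predicted asymptotic value:

```python
# Karamata's theorem predicts, for zeta < -1,
#   int_t^oo s^zeta L(s) ds ~ (-1/(zeta+1)) t^(zeta+1) L(t)  as t -> oo.
# Illustrative data: L(s) = ln s, zeta = -3.
import math

def karamata_ratio(t, zeta=-3.0, n=200000, span=1e6):
    # crude trapezoidal quadrature of int_t^{t*span} s^zeta ln(s) ds
    # in the variable x = ln s, where the integrand becomes e^{(zeta+1)x} * x;
    # the truncated tail beyond t*span is negligible here
    a, b = math.log(t), math.log(t * span)
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [math.exp((zeta + 1) * x) * x for x in xs]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    predicted = (-1.0 / (zeta + 1)) * t ** (zeta + 1) * math.log(t)
    return integral / predicted

# the ratio tends to 1 as t grows
print(karamata_ratio(10.0), karamata_ratio(1000.0))
```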
Definition 2 A measurable function $f:(0,a]\to (0,\mathrm{\infty})$ is said to be regularly varying at zero of index ϑ if ${lim}_{t\to 0+}\frac{f(\lambda t)}{f(t)}={\lambda}^{\vartheta}$ for every $\lambda >0$; we write $f\in {\mathcal{RV}}_{0}(\vartheta )$.
Since regular variation of $f(\cdot)$ at zero of index ϑ means in fact regular variation of $f(1/t)$ at infinity of index −ϑ, properties of $\mathcal{RV}_{0}$ functions can be deduced from the theory of $\mathcal{RV}$ functions.
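For instance (our illustration, not from the paper), for $f(u)=u|\ln u|$ this correspondence gives

```latex
f(1/t)=\frac{\ln t}{t}\in\mathcal{RV}(-1)
\qquad\Longrightarrow\qquad
f\in\mathcal{RV}_{0}(1),
```

so the index at zero is read off from the index at infinity of $f(1/t)$ with the sign reversed.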
Here are some further useful properties of regularly varying functions.
Proposition 2

(i)
If $f\in \mathcal{RV}(\vartheta )$, then ${f}^{\alpha}\in \mathcal{RV}(\alpha \vartheta )$ for every $\alpha \in \mathbb{R}$.

(ii)
If ${f}_{i}\in \mathcal{RV}({\vartheta}_{i})$, $i=1,2$, ${f}_{2}(t)\to \mathrm{\infty}$ as $t\to \mathrm{\infty}$, then ${f}_{1}\circ {f}_{2}\in \mathcal{RV}({\vartheta}_{1}{\vartheta}_{2})$.

(iii)
If ${f}_{i}\in \mathcal{RV}({\vartheta}_{i})$, $i=1,2$, then ${f}_{1}{f}_{2}\in \mathcal{RV}({\vartheta}_{1}+{\vartheta}_{2})$. If $L\in \mathcal{SV}$ and $\vartheta>0$, then ${t}^{\vartheta}L(t)\to\infty$ and ${t}^{-\vartheta}L(t)\to 0$ as $t\to\infty$.

(iv)
For a continuous δ in the representation of a normalized regularly varying function $f(t)=Cexp\{{\int}_{a}^{t}\frac{\delta (s)}{s}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\}$ of index ϑ, we have $t{f}^{\prime}(t)/f(t)=\delta (t)\to \vartheta $ as $t\to \mathrm{\infty}$.

(v)
If a positive function $f\in {C}^{1}$ satisfies $t{f}^{\prime}(t)/f(t)\to \vartheta$ as $t\to\infty$, then $f\in \mathcal{NRV}(\vartheta)$.

(vi)
If $f\in \mathcal{RV}(\vartheta)$, $\vartheta\in\mathbb{R}$, then $\ln f(t)/\ln t\to\vartheta$ as $t\to\infty$.

(vii)
If $f\in \mathcal{RV}(\vartheta )$ with $\vartheta \le 0$ and $f(t)={\int}_{t}^{\mathrm{\infty}}g(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s$ with g nonincreasing, then $t{f}^{\prime}(t)/f(t)\to \vartheta $ as $t\to \mathrm{\infty}$.

(viii)
Let $g\in\mathcal{RV}_{0}(\vartheta)$ with $\vartheta>0$ be increasing in a right neighborhood of zero. Then ${g}^{-1}\in\mathcal{RV}_{0}(1/\vartheta)$, where ${g}^{-1}$ stands for the inverse of g.
Proof The proofs of (i)-(vi) can be found in [[7], Chapter 1] or [[2], Chapter 1]; see also [[4], Appendix], which includes (vii).
(viii) Since $g\in\mathcal{RV}_{0}(\vartheta)$, we have $g(1/t)\in\mathcal{RV}(-\vartheta)$. Denote $\tilde{g}(t)=1/g(1/t)$. Then $\tilde{g}\in\mathcal{RV}(\vartheta)$ and hence the inverse of $\tilde{g}$ satisfies ${\tilde{g}}^{-1}\in\mathcal{RV}(1/\vartheta)$ by [[7], Theorem 1.5.12]. We have ${g}^{-1}(u)=1/{\tilde{g}}^{-1}(1/u)$. Hence, ${g}^{-1}(1/t)=1/{\tilde{g}}^{-1}(t)\in\mathcal{RV}(-1/\vartheta)$, which implies ${g}^{-1}(u)\in\mathcal{RV}_{0}(1/\vartheta)$. □
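Property (viii) can be observed numerically as well. In the sketch below (our illustrative choice, not from the paper) we take $g(u)=u^{2}|\ln u|\in\mathcal{RV}_{0}(2)$, which is increasing near zero, invert it by bisection, and check that $g^{-1}(\lambda v)/g^{-1}(v)$ approaches $\lambda^{1/2}$:

```python
# g(u) = u^2 * |ln u| lies in RV_0(2) and is increasing on (0, e^{-1/2}),
# so its inverse should lie in RV_0(1/2):
#   g^{-1}(lam * v) / g^{-1}(v) -> lam**0.5  as v -> 0+.
import math

def g(u):
    return u * u * abs(math.log(u))

def g_inv(v, lo=1e-300, hi=0.3):
    # bisection with a geometric midpoint, since v spans many magnitudes;
    # g is increasing on (0, 0.3], so this is well defined for small v
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if g(mid) < v:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

lam = 4.0
v = 1e-16
print(g_inv(lam * v) / g_inv(v))  # close to lam**0.5 = 2 for small v
```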
We use the convention that a slowly varying component of $f\in \mathcal{RV}(\vartheta )$ (or $f\in {\mathcal{RV}}_{0}(\vartheta )$) is denoted as ${L}_{f}$, i.e., ${L}_{f}(t)=f(t)/{t}^{\vartheta}$.
The de Haan theory (which includes the class Π defined next) can be seen as a refinement of the Karamata theory of regularly varying functions; see [[7], Chapter 3] and [[2], Chapter 1].
Definition 3 A measurable function $f:[a,\infty)\to\mathbb{R}$ is said to belong to the class Π if there exists a function $w:(0,\infty)\to(0,\infty)$ such that for $\lambda>0$, ${\lim}_{t\to\infty}\frac{f(\lambda t)-f(t)}{w(t)}=\ln\lambda$; we write $f\in\Pi$ or $f\in\Pi(w)$.
The function w is called an auxiliary function for f. The class Π is, after taking absolute values, a proper subclass of $\mathcal{SV}$. The auxiliary function is unique up to asymptotic equivalence.
For more information as regards the classes $\mathcal{RV}$ and Π see, e.g., the monographs [7, 8].
2 Results
We start with a simple result which gives conditions guaranteeing slow variation of any solution in $\mathcal{DS}$.
Theorem 1 Assume that
$$\lim_{t\to\infty}t\int_{t}^{\infty}p(s)\,\mathrm{d}s=0$$(5)
and
$$\limsup_{u\to 0+}L_F(u)<\infty,\qquad \liminf_{u\to 0+}L_G(u)>0.$$(6)
Then $\varnothing\ne\mathcal{DS}\subset\mathcal{NSV}$.
Proof Rewrite (1) as an equivalent system of the form
$$y^{\prime}=G^{-1}(v),\qquad v^{\prime}=p(t)F(y),$$
where ${G}^{1}$ is the inverse of G. Then we apply the existence result [[9], Theorem 1] to obtain $\mathcal{DS}\ne \mathrm{\varnothing}$.
Take $y\in\mathcal{DS}$, i.e., $y(t)>0$, ${y}^{\prime}(t)<0$, $t\ge t_0$. Then ${\lim}_{t\to\infty}{y}^{\prime}(t)=0$. Indeed, $G({y}^{\prime})$ is negative increasing and so is ${y}^{\prime}$. If ${\lim}_{t\to\infty}{y}^{\prime}(t)=c<0$, then $y(t)-y(t_0)\sim c(t-t_0)$ as $t\to\infty$, which contradicts the eventual positivity of y. Integration of (1) from t to ∞ yields
$$G(y^{\prime}(t))=-\int_{t}^{\infty}p(s)F(y(s))\,\mathrm{d}s.$$
Hence, since y is decreasing and G is odd,
$$G(-y^{\prime}(t))=\int_{t}^{\infty}p(s)F(y(s))\,\mathrm{d}s\le F(y(t))\int_{t}^{\infty}p(s)\,\mathrm{d}s.$$
Thus, using $G(-y^{\prime})\ge N(-y^{\prime})$ and $F(y)\le My$,
$$0<\frac{-ty^{\prime}(t)}{y(t)}\le\frac{M}{N}\,t\int_{t}^{\infty}p(s)\,\mathrm{d}s,$$(7)
where M, N are some positive constants which exist thanks to (6). Since the expression on the right-hand side of (7) tends to zero by (5), we get $ty^{\prime}(t)/y(t)\to 0$ as $t\to\infty$, and $y\in\mathcal{NSV}$ follows. □
Remark 1 Condition (5) is in a certain sense necessary. Indeed, take $y\in\mathcal{DS}\cap\mathcal{SV}$ and assume that ${\liminf}_{u\to 0+}L_F(u)>0$ and ${\limsup}_{u\to 0+}L_G(u)<\infty$. First note that, because of the monotonicity of ${y}^{\prime}$, we have $ty^{\prime}(t)/y(t)\to 0$, and so $y\in\mathcal{NSV}$. Set $w=G(y^{\prime})/y$. Then w satisfies
$$w^{\prime}=p(t)\frac{F(y)}{y}-\frac{y^{\prime}}{y}\,w$$(8)
for large t. There exists $N\in(0,\infty)$ such that
$$\left|\frac{y^{\prime}(t)}{y(t)}\,w(t)\right|\le N\left(\frac{y^{\prime}(t)}{y(t)}\right)^{2}=o(t^{-2})$$
as $t\to\infty$. Hence,
$$w^{\prime}(t)=p(t)L_F(y(t))+o(t^{-2}).$$
Integrating (8) from t to ∞ and multiplying by t, we get
$$-tw(t)=t\int_{t}^{\infty}p(s)L_F(y(s))\,\mathrm{d}s+o(1),$$
which implies, since $tw(t)\to 0$, ${\lim}_{t\to\infty}t\int_{t}^{\infty}p(s)L_F(y(s))\,\mathrm{d}s=0$. Since $M\in(0,\infty)$ exists such that $L_F(y(t))\ge M$ for large t, condition (5) follows.
Necessity is discussed, from a certain point of view, also in Remark 4.
Remark 2 Observe that in Theorem 1 we are dealing with all $\mathcal{SV}$ solutions of (1). This follows from the fact that $\mathcal{SV}$ solutions cannot increase. Indeed, for a positive increasing solution u of (1), due to convexity, we have ${u}^{\prime}(t)\ge K_1$ for some $K_1>0$ and large t. By integrating, $u(t)\ge K_1t+K_2$, which contradicts the fact that $u\in\mathcal{SV}$.
Remark 3 The statements of Theorem 1 and Remark 1 can be understood as a nonlinear extension of a part of [[4], Theorem 1.1].
In the next result, we derive asymptotic formulas for $\mathcal{SV}$ solutions provided p is regularly varying of index −2. Define
$$\hat{F}(x)=\int_{1}^{x}\frac{\mathrm{d}u}{F(u)}.$$
The function $\hat{F}(x)$ is increasing on $(0,\infty)$. The constant 1 in the integral is unimportant; it can be replaced by any positive constant. Denote the inverse of $\hat{F}$ by ${\hat{F}}^{-1}$. We have $-\hat{F}\in\mathcal{SV}_{0}$ and in general ${\lim}_{u\to 0+}\hat{F}(u)$ can be finite or equal to −∞. Denote
$$H(t)=\frac{tp(t)}{L_G(1/t)},$$
and note that $H\in\mathcal{RV}(-1)$ provided $p\in\mathcal{RV}(-2)$.
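As a quick numerical sanity check (our illustrative data, not from the paper): take $p(t)=t^{-2}/\ln t$ and $L_G(u)=|\ln u|$, so that $H(t)=1/(t\ln^{2}t)$; the ratio $\lambda H(\lambda t)/H(t)$ should then tend to 1, reflecting $H\in\mathcal{RV}(-1)$:

```python
# H(t) = t*p(t)/L_G(1/t) with the illustrative choices
# p(t) = t^-2 / ln t and L_G(u) = |ln u|, so H(t) = 1/(t ln^2 t).
import math

def H(t):
    p = t ** -2.0 / math.log(t)
    return t * p / abs(math.log(1.0 / t))

lam = 10.0
for t in (1e10, 1e30):
    print(lam * H(lam * t) / H(t))  # tends to 1 as t grows
```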
Theorem 2 Assume that $p\in\mathcal{RV}(-2)$, ${\lim}_{u\to 0+}\hat{F}(u)=-\infty$, and
$$\lim_{u\to 0+}\frac{L_G(u\,g(u))}{L_G(u)}=1$$(9)
for all $g\in\mathcal{SV}_{0}$. If $y\in\mathcal{DS}\cap\mathcal{SV}$, then $-y\in\Pi(-ty^{\prime}(t))$. Moreover:

(i)
If $\int_{a}^{\infty}H(s)\,\mathrm{d}s=\infty$, then
$$y(t)={\hat{F}}^{-1}\left\{-\int_{a}^{t}(1+o(1))H(s)\,\mathrm{d}s\right\}$$(10)
(and $y(t)\to 0$) as $t\to\infty$.

(ii)
If $\int_{a}^{\infty}H(s)\,\mathrm{d}s<\infty$, then
$$y(t)={\hat{F}}^{-1}\left\{\hat{F}(y(\infty))+\int_{t}^{\infty}(1+o(1))H(s)\,\mathrm{d}s\right\}$$(11)
(and $y(t)\to y(\infty)\in(0,\infty)$) as $t\to\infty$.
Proof Take $y\in\mathcal{DS}\cap\mathcal{SV}$ and let $t_0$ be such that $y(t)>0$, ${y}^{\prime}(t)<0$ for $t\ge t_0$. Then
$$p(t)F(y(t))=t^{-2}L_p(t)\,y(t)L_F(y(t))\in\mathcal{RV}(-2),$$
provided $y(t)\to 0$ as $t\to\infty$. If $y(t)\to C\in(0,\infty)$, then we get the same conclusion since $F(y(t))\to F(C)\in(0,\infty)$, and so $pF(y)\in\mathcal{RV}(-2)$. Thus
$$-G(y^{\prime}(t))=\int_{t}^{\infty}p(s)F(y(s))\,\mathrm{d}s\in\mathcal{RV}(-1).$$
In view of $y^{\prime}=G^{-1}(G(y^{\prime}))$, we get $-y^{\prime}\in\mathcal{RV}(-1)$. Hence,
$$\frac{y(t)-y(\lambda t)}{-ty^{\prime}(t)}=\frac{\int_{t}^{\lambda t}(-y^{\prime}(s))\,\mathrm{d}s}{-ty^{\prime}(t)}\to\int_{1}^{\lambda}\frac{\mathrm{d}\tau}{\tau}=\ln\lambda$$(12)
as $t\to\infty$, thanks to the uniformity. This implies $-y\in\Pi(-ty^{\prime}(t))$. Define
$$\Psi(t)=\int_{t_0}^{t}sp(s)F(y(s))\,\mathrm{d}s.$$
Then $\Psi^{\prime}(t)=tp(t)F(y(t))\in\mathcal{RV}(-1)$, which implies $\Psi\in\Pi(t\Psi^{\prime}(t))$, similarly as in (12). Further, we claim $\Psi\in\Pi(-tG(y^{\prime}(t)))$. Indeed, fix $\lambda>0$; then, integrating by parts,
$$\frac{\Psi(\lambda t)-\Psi(t)}{-tG(y^{\prime}(t))}=\frac{\lambda tG(y^{\prime}(\lambda t))-tG(y^{\prime}(t))-\int_{t}^{\lambda t}G(y^{\prime}(s))\,\mathrm{d}s}{-tG(y^{\prime}(t))}\to -1+1+\ln\lambda=\ln\lambda$$
as $t\to\infty$, thanks to $-G(y^{\prime})\in\mathcal{RV}(-1)$ and the uniformity. From the uniqueness of the auxiliary function up to asymptotic equivalence, we obtain
$$G(y^{\prime}(t))\sim -tp(t)F(y(t))$$(13)
as $t\to\infty$. Condition (9) is equivalent to $L_G(v(t)/t)\sim L_G(1/t)$ as $t\to\infty$, for all $v\in\mathcal{SV}$. Hence,
$$L_G(-y^{\prime}(t))\sim L_G(1/t)$$
as $t\to\infty$. Combining this relation with (13), we get
$$-y^{\prime}(t)\,L_G(1/t)\sim -y^{\prime}(t)\,L_G(-y^{\prime}(t))=-G(y^{\prime}(t))\sim tp(t)F(y(t))$$
as $t\to\infty$, that is,
$$\frac{y^{\prime}(t)}{F(y(t))}=-(1+o(1))H(t)$$(14)
as $t\to\infty$. By integrating this relation over $(t_0,t)$ we obtain
$$\hat{F}(y(t))=\hat{F}(y(t_0))-\int_{t_0}^{t}(1+o(1))H(s)\,\mathrm{d}s,$$(15)
which implies (10) provided $\int_{a}^{\infty}H(s)\,\mathrm{d}s=\infty$. Clearly then $y(t)\to 0$ as $t\to\infty$; otherwise we get a contradiction with the divergence of the integral in (15). If $\int_{a}^{\infty}H(s)\,\mathrm{d}s<\infty$ holds, then we integrate (14) over $(t,\infty)$, obtaining (11). In this case, $y(t)$ must tend to a positive constant as $t\to\infty$. Indeed, if $y(t)\to 0$ as $t\to\infty$, then the left-hand side of (15) becomes unbounded, which is a contradiction. □
Remark 4 A closer examination of the proof of Theorem 2 shows that the condition ${\lim}_{u\to 0+}\hat{F}(u)=-\infty$ is in a certain sense needed. Indeed, if we assume that this limit is finite and that $\int_{a}^{\infty}H(s)\,\mathrm{d}s=\infty$, then in view of (15) we get a contradiction. As a by-product we then obtain the nonexistence of $\mathcal{SV}$ solutions. If ${\lim}_{u\to 0+}\hat{F}(u)>-\infty$ holds when $\int_{a}^{\infty}H(s)\,\mathrm{d}s<\infty$, then no conclusion as to whether $y(\infty)=0$ or $y(\infty)>0$ can generally be drawn. Note that such phenomena cannot occur in the linear case.
Remark 5 There exists an alternative way to prove (13). Indeed, denote $\tilde{L}(t)=L_p(t)F(y(t))$ and observe that $\tilde{L}\in\mathcal{RV}(0+1\cdot 0)=\mathcal{SV}$. Therefore,
$$\int_{t}^{\infty}p(s)F(y(s))\,\mathrm{d}s=\int_{t}^{\infty}s^{-2}\tilde{L}(s)\,\mathrm{d}s\sim t^{-1}\tilde{L}(t)=tp(t)F(y(t))$$
as $t\to\infty$ by the Karamata theorem. Since $G(y^{\prime}(t))=-\int_{t}^{\infty}p(s)F(y(s))\,\mathrm{d}s$, we obtain (13).
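The Karamata step above can be tested numerically. In the following sketch (our own illustrative data: $p(t)=t^{-2}$, $F(u)=u/|\ln u|$, and the slowly varying function $y(t)=1/\ln t$, which is not claimed to solve (1)), the tail integral is compared with $tp(t)F(y(t))$:

```python
# Check: int_t^oo p(s)F(y(s)) ds ~ t p(t) F(y(t)) for p(t) = t^-2,
# F(u) = u/|ln u| and the slowly varying y(t) = 1/ln t.
import math

def F(u):
    return u / abs(math.log(u))

def y(t):
    return 1.0 / math.log(t)

def tail(t, n=100000, width=40.0):
    # trapezoidal rule for int_t^oo s^-2 F(y(s)) ds in x = ln s;
    # the integrand decays like e^-x, so truncating at x = ln t + width
    # (relative error about e^-width) is harmless
    a = math.log(t)
    h = width / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [math.exp(-x) * F(y(math.exp(x))) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

t = 1e6
ratio = tail(t) / (t ** -1.0 * F(y(t)))
print(ratio)  # tends to 1 as t -> oo (slowly, with an O(1/ln t) error)
```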
Remark 6 Observe that to prove asymptotic formulas for decreasing $\mathcal{SV}$ solutions of (1) we do not require (even one-sided) boundedness conditions on $L_F$ and $L_G$ such as (6). As for condition (9) from Theorem 2, it is not too restrictive. Many functions satisfy it, for example, $L_G(u)\to C\in(0,\infty)$ as $u\to 0+$, or $L_G(u)=|\ln u|^{\alpha_1}(\ln|\ln u|)^{\alpha_2}$, $\alpha_1,\alpha_2\in\mathbb{R}$.
Remark 7 Observe that the value −2 required for the index of regular variation of p is natural and consistent within our setting. Indeed, since we work with $\mathcal{SV}$ solutions, the expression on the left-hand side of (1), which is in a sense close to the second derivative, is expected to be in $\mathcal{RV}(-2)$.
Corollary 1 Assume that $p\in\mathcal{RV}(-2)$ and ${\lim}_{t\to\infty}L_p(t)=0$. Let (6) and (9) hold. Then any solution $y\in\mathcal{DS}$ belongs to $\mathcal{NSV}$. Moreover, $-y\in\Pi(-ty^{\prime}(t))$ and the asymptotic formula (10) or (11) holds.
Proof By the Karamata theorem,
$$t\int_{t}^{\infty}p(s)\,\mathrm{d}s=t\int_{t}^{\infty}s^{-2}L_p(s)\,\mathrm{d}s\sim L_p(t)\to 0$$
as $t\to\infty$, and so (5) follows. Further, in view of ${\limsup}_{u\to 0+}L_F(u)<\infty$, there exists $M>0$ such that $F(u)\le Mu$ and hence, for $x<1$,
$$\hat{F}(x)=\int_{1}^{x}\frac{\mathrm{d}u}{F(u)}\le\frac{1}{M}\int_{1}^{x}\frac{\mathrm{d}u}{u}=\frac{\ln x}{M},$$
which implies ${\lim}_{x\to 0+}\hat{F}(x)=-\infty$. The statement now follows from Theorem 1 and Theorem 2. □
Remark 8 Corollary 1 can be seen as a nonlinear extension of [[2], Theorem 0.1A].
Example 1 Consider the equation
$$\bigl(G(y^{\prime})\bigr)^{\prime}=\frac{L_p(t)}{t^{2}}\cdot\frac{y}{|\ln y|},$$(16)
where $L_G\in\mathcal{SV}_{0}$ and $L_p\in\mathcal{SV}$; here $F(u)=u/|\ln u|$. Then $\hat{F}(x)=-\frac{(\ln x)^{2}}{2}$, $x\in(0,1)$, $\hat{F}(x)\to-\infty$ as $x\to 0+$, and ${\hat{F}}^{-1}(u)=\exp\{-\sqrt{-2u}\}$, $u<0$. We restrict our considerations to positive (decreasing) solutions y of (1) such that $y(t)<1$ for $t\ge t_0$; we have this requirement because we need $F(u)$ to be increasing at least in a certain neighborhood of zero (here it is $(0,1)$). Note that a slight modification of F, namely $F(u)=u/|\ln(u/k)|$, $k\in(0,\infty)$, ensures the required monotonicity of F on the (possibly bigger) interval $(0,k)$.
(i) Let $G(u)=u|\ln u|$ and $L_p(t)=\frac{1}{\ln t+h(t)}$, where h is a continuous function on $[a,\infty)$ with $h(t)=o(\ln t)$ as $t\to\infty$, and such that $\ln t+h(t)>0$ for $t\in[a,\infty)$. Examples of h are $h(t)=\cos t$ or $h(t)=\ln\ln t$. Note that the required monotonicity of G is ensured in a small neighborhood of zero. But this is not a problem, since the argument of G is ${y}^{\prime}$, and ${y}^{\prime}(t)$ tends to zero as $t\to\infty$; nevertheless, we could modify G similarly to the above-mentioned modification of F. The function H reads
$$H(t)=\frac{tp(t)}{L_G(1/t)}=\frac{1}{t(\ln t+h(t))\ln t}\sim\frac{1}{t\ln^{2}t}$$
as $t\to\infty$. Thus, $\int^{\infty}H(s)\,\mathrm{d}s<\infty$ and we have $\int_{t}^{\infty}H(s)\,\mathrm{d}s\sim\frac{1}{\ln t}$ as $t\to\infty$. From Corollary 1, we see that for any eventually decreasing positive solution y of (1) (with $y(t)<1$ for large t), −y is in Π (y is in $\mathcal{NSV}$), y tends to $y(\infty)>0$ and satisfies the formula
$$y(t)={\hat{F}}^{-1}\left\{\hat{F}(y(\infty))+\frac{1+o(1)}{\ln t}\right\}$$
as $t\to \mathrm{\infty}$.
(ii) Let $L_p$ be the same as in (i) and $G(u)=\frac{u}{\sqrt{1\pm u^{2}}}$. Note that such a particular case of G arises when searching for radial solutions of PDEs with the mean curvature operator (the sign ‘+’) resp. the relativity operator (the sign ‘−’). Then $L_G(u)\to 1$ as $u\to 0+$, and
$$H(t)=\frac{tp(t)}{L_G(1/t)}\sim\frac{1}{t(\ln t+h(t))}\sim\frac{1}{t\ln t}$$
as $t\to\infty$. Note that $(\ln\ln t)^{\prime}=\frac{1}{t\ln t}$, and so $\int^{\infty}H(s)\,\mathrm{d}s=\infty$. From Corollary 1, we see that for any eventually decreasing positive solution y of (1), −y is in Π (y is in $\mathcal{NSV}$), y tends to zero and satisfies the formula
$$y(t)={\hat{F}}^{-1}\bigl\{-(1+o(1))\ln\ln t\bigr\}=\exp\left\{-\sqrt{(2+o(1))\ln\ln t}\right\}$$
as $t\to \mathrm{\infty}$.
(iii) Let $L_p(t)=\frac{1}{(\ln t+h(t))^{2}}$, where h is as in (i), and $G=\mathrm{id}$. Then
$$H(t)=tp(t)=\frac{1}{t(\ln t+h(t))^{2}}\sim\frac{1}{t\ln^{2}t}$$
as $t\to\infty$. Applying Corollary 1, we see that any eventually decreasing positive solution y of (1) (with $y(t)<1$ for large t) obeys the same asymptotic behavior as y in (i).
Among other things, the above examples show how the convergence/divergence of the integral $\int^{\infty}H(s)\,\mathrm{d}s$ can be affected by the behavior of both p and G.
Under the conditions of Theorem 2(i), it does not follow that $y(t)\sim{\hat{F}}^{-1}\{-\int_{a}^{t}H(s)\,\mathrm{d}s\}$ as $t\to\infty$; this fact was observed already in the linear case; see [[2], Remark 2]. However, we can give a lower estimate under quite mild assumptions. For technical reasons, we consider positive decreasing solutions of (1) on $[0,\infty)$ (provided $p\in C([0,\infty))$).
Theorem 3

(i)
Let ${\liminf}_{u\to 0+}L_G(u)>0$ and (5) hold. Then $y\in\mathcal{DS}\cap\mathcal{SV}$ satisfies the estimate
$$\liminf_{t\to\infty}\frac{y(t)}{{\hat{F}}^{-1}\{\hat{F}(y(0))-M\int_{0}^{t}sp(s)\,\mathrm{d}s\}}\ge 1,$$(17)
where M is some positive constant. The constant M can be taken such that $M=1/{\inf}_{u\in[0,-y^{\prime}(0)]}L_G(u)$.

(ii)
In addition to the conditions in (i), assume that ${\limsup}_{u\to 0+}L_F(u)<\infty$ holds. Then $y\in\mathcal{DS}$ implies $y\in\mathcal{NSV}$ and
$$\liminf_{t\to\infty}\,y(t)\exp\left\{N\int_{0}^{t}sp(s)\,\mathrm{d}s\right\}\ge y(0),$$
where N is some positive constant. The constant N can be taken such that $N={\sup}_{u\in[0,y(0)]}L_F(u)/{\inf}_{u\in[0,-y^{\prime}(0)]}L_G(u)$.
Proof (i) Take $y(t)\in\mathcal{DS}\cap\mathcal{SV}$, $t\ge 0$. For $\lambda\in(0,1)$, we have
$$-G(y^{\prime}(\lambda t))=-G(y^{\prime}(t))+\int_{\lambda t}^{t}p(s)F(y(s))\,\mathrm{d}s\le -G(y^{\prime}(t))+F(y(\lambda t))\int_{\lambda t}^{t}p(s)\,\mathrm{d}s,$$(18)
$t>0$. Thanks to ${\liminf}_{u\to 0+}L_G(u)>0$, there exists $M>0$ such that
$$\frac{-y^{\prime}(\lambda t)}{F(y(\lambda t))}\le M\,\frac{-G(y^{\prime}(\lambda t))}{F(y(\lambda t))}\le M\left[\frac{-G(y^{\prime}(t))}{F(y(\lambda t))}+\int_{\lambda t}^{t}p(s)\,\mathrm{d}s\right],$$(19)
$t>0$, where the last estimate follows from (18). Integration over $\lambda\in(0,1)$ yields
$$\frac{1}{t}\bigl(\hat{F}(y(0))-\hat{F}(y(t))\bigr)\le M\left[-G(y^{\prime}(t))\,\frac{1}{t}\int_{0}^{t}\frac{\mathrm{d}s}{F(y(s))}+\frac{1}{t}\int_{0}^{t}sp(s)\,\mathrm{d}s\right],$$(20)
where we substituted $s=\lambda t$ in $\int_{0}^{1}\frac{-y^{\prime}(\lambda t)}{F(y(\lambda t))}\,\mathrm{d}\lambda$ and in $\int_{0}^{1}\frac{\mathrm{d}\lambda}{F(y(\lambda t))}$, and we applied the Fubini theorem in
$$\int_{0}^{1}\int_{\lambda t}^{t}p(s)\,\mathrm{d}s\,\mathrm{d}\lambda=\frac{1}{t}\int_{0}^{t}sp(s)\,\mathrm{d}s.$$
From (20), we get
$$\hat{F}(y(t))\ge\hat{F}(y(0))-M\int_{0}^{t}sp(s)\,\mathrm{d}s+MG(y^{\prime}(t))\int_{0}^{t}\frac{\mathrm{d}s}{F(y(s))}.$$(21)
Since $F(y)\in\mathcal{SV}$, the Karamata theorem yields
$$\int_{0}^{t}\frac{\mathrm{d}s}{F(y(s))}\sim\frac{t}{F(y(t))},$$
where the asymptotic relation holds as $t\to\infty$. Hence, since $-G(y^{\prime}(t))\le F(y(t))\int_{t}^{\infty}p(s)\,\mathrm{d}s$ (see the proof of Theorem 1) and (5) holds, $G(y^{\prime}(t))\int_{0}^{t}\frac{\mathrm{d}s}{F(y(s))}=o(1)$ as $t\to\infty$. Inequality (17) now easily follows from (21).
(ii) Take $y(t)\in\mathcal{DS}$, $t>0$. Then $y\in\mathcal{NSV}$ follows from Theorem 1. Thanks to (6), which is in fact assumed, there exists $N>0$ such that $N\frac{-G(y^{\prime}(\lambda t))}{F(y(\lambda t))}\ge\frac{-y^{\prime}(\lambda t)}{y(\lambda t)}$, $t>0$. As in the proof of (i), we then get
$$\frac{-y^{\prime}(\lambda t)}{y(\lambda t)}\le N\left[\frac{-G(y^{\prime}(t))}{F(y(\lambda t))}+\int_{\lambda t}^{t}p(s)\,\mathrm{d}s\right].$$
Since this estimate is a special case of (19), the rest of the proof is now clear. □
Remark 9 It is reasonable to require the conditions ${\lim}_{u\to 0+}\hat{F}(u)=-\infty$ and $\int_{0}^{\infty}sp(s)\,\mathrm{d}s=\infty$ when applying Theorem 3. Further notice that the proof of Theorem 3 does not require $p\in\mathcal{RV}(-2)$, in contrast to the approach known from the linear case, cf. [[2], Remark 2]. From this point of view, the result is an improvement even in the linear case. Nevertheless, in order to see Theorem 3 as a partial refinement of the information regarding solutions treated in Theorem 2(i), it is reasonable to assume $p\in\mathcal{RV}(-2)$.
Remark 10 In view of almost monotonicity and the existence of an asymptotic inversion for $\mathcal{RV}$ functions with a nonzero index, many of our considerations could be extended to nonlinearities F and G which are not necessarily (eventually) monotone.
We now consider the more general equation (4). First note that in the case when $G=\mathrm{id}$ and $\int_{a}^{\infty}1/r(s)\,\mathrm{d}s=\infty$, (4) can be transformed into an equation of the form (1), and the type of the interval is preserved. Indeed, denote $R(t)=\int_{a}^{t}1/r(s)\,\mathrm{d}s$ and introduce the new independent variable $s=R(t)$ and the new function $z(s)=y(R^{-1}(s))$. Then (4) is transformed into
$$\frac{\mathrm{d}^{2}z}{\mathrm{d}s^{2}}=r\bigl(R^{-1}(s)\bigr)\,p\bigl(R^{-1}(s)\bigr)\,F(z),$$
$s\in[R(a),\infty)$. For a general G, however, such a transformation is not at our disposal, and we must proceed directly. Let $\mathcal{DS}_{r}$ denote the set of all eventually positive decreasing solutions of (4). An extension of Theorem 1 to (4) reads as follows.
Theorem 4 Assume that
$$\lim_{t\to\infty}\frac{t}{r(t)}\int_{t}^{\infty}p(s)\,\mathrm{d}s=0,$$(22)
${\limsup}_{u\to 0+}L_F(u)<\infty$,
$$\int_{a}^{\infty}G^{-1}\bigl(M/r(s)\bigr)\,\mathrm{d}s=\infty$$(23)
for all $M\in(0,\infty)$, and $L_G(u)\ge N$, $u\in(0,\infty)$, for some $N>0$. Then $\varnothing\ne\mathcal{DS}_{r}\subset\mathcal{NSV}$.
Proof We give only a concise proof. The existence of solutions in $\mathcal{DS}_{r}$ again follows from [9]. Take $y\in\mathcal{DS}_{r}$. Then $r(t)G(y^{\prime}(t))\to 0$ as $t\to\infty$; otherwise we get a contradiction with the eventual positivity of y, because of condition (23). Similarly as in the proof of Theorem 1 we see that there exists $K\in(0,\infty)$ such that
$$0<\frac{-ty^{\prime}(t)}{y(t)}\le K\,\frac{t}{r(t)}\int_{t}^{\infty}p(s)\,\mathrm{d}s$$
for large t. Hence, $y\in\mathcal{NSV}$. □
For $p\in\mathcal{RV}(\beta)$ and $r\in\mathcal{RV}(\beta+2)$ with $\beta<-1$, denote
$$H_r(t)=\frac{tp(t)}{(-\beta-1)\,r(t)L_G(1/t)},$$
and note that then $H_r\in\mathcal{RV}(-1)$. An extension of Theorem 2 to (4) reads as follows.
Theorem 5 Assume that $p\in\mathcal{RV}(\beta)$ and $r\in\mathcal{RV}(\beta+2)$, with $\beta<-1$, ${\lim}_{u\to 0+}\hat{F}(u)=-\infty$, and (9) holds. If $y\in\mathcal{DS}_{r}\cap\mathcal{SV}$, then $-y\in\Pi(-ty^{\prime}(t))$. Moreover:

(i)
If ${\int}_{a}^{\mathrm{\infty}}{H}_{r}(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s=\mathrm{\infty}$, then (10) with ${H}_{r}$ instead of H holds and $y(t)\to 0$ as $t\to \mathrm{\infty}$.

(ii)
If $\int_{a}^{\infty}H_r(s)\,\mathrm{d}s<\infty$, then (11) with $H_r$ instead of H holds and $y(t)\to y(\infty)\in(0,\infty)$ as $t\to\infty$.
Proof We again give only a concise proof. Take $y\in\mathcal{DS}_{r}\cap\mathcal{SV}$. Then $(rG(y^{\prime}))^{\prime}\in\mathcal{RV}(\beta)$. Hence, $-rG(y^{\prime})\in\mathcal{RV}(\beta+1)$, and so $-y^{\prime}\in\mathcal{RV}(-1)$, which implies $-y\in\Pi(-ty^{\prime})$ by (12). If $\tilde{L}=L_pF(y)$, then $\tilde{L}\in\mathcal{SV}$ and we have
$$-G(y^{\prime}(t))=\frac{1}{r(t)}\int_{t}^{\infty}s^{\beta}\tilde{L}(s)\,\mathrm{d}s\sim\frac{t^{\beta+1}\tilde{L}(t)}{(-\beta-1)r(t)}=\frac{tp(t)F(y(t))}{(-\beta-1)r(t)}$$
as $t\to\infty$, where we applied the Karamata theorem. The asymptotic formulas then follow similarly as (10) and (11) in the proof of Theorem 2. □
Remark 11 If $p\in\mathcal{RV}(\beta)$, $\beta<-1$, and $r\in\mathcal{RV}(\beta+2)$, then (22) holds provided $L_p(t)/L_r(t)\to 0$ as $t\to\infty$. Indeed, by the Karamata theorem we have
$$\frac{t}{r(t)}\int_{t}^{\infty}p(s)\,\mathrm{d}s\sim\frac{t\cdot t^{\beta+1}L_p(t)}{(-\beta-1)\,t^{\beta+2}L_r(t)}=\frac{L_p(t)}{(-\beta-1)L_r(t)}$$
as $t\to\infty$, and the claim follows.
Remark 12 Assume that $p\in\mathcal{RV}(\beta)$ and $r\in\mathcal{RV}(\beta+2)$ with $\beta>-1$. Recall that in the previous theorem we assumed $\beta<-1$. Take $y\in\mathcal{DS}_{r}\cap\mathcal{SV}$. Then we get $\int_{a}^{\infty}p(s)F(y(s))\,\mathrm{d}s=\infty$ since the index of regular variation of $pF(y)$ is bigger than −1. Integrating (4) from $t_0$ to t, where $t_0$ is such that $y(t)>0$, ${y}^{\prime}(t)<0$, $t\ge t_0$, we obtain $r(t)G(y^{\prime}(t))=r(t_0)G(y^{\prime}(t_0))+\int_{t_0}^{t}p(s)F(y(s))\,\mathrm{d}s$. Hence, if we let t tend to ∞, then $r(t)G(y^{\prime}(t))$ tends to ∞. Thus ${y}^{\prime}$ is eventually positive, which contradicts $y\in\mathcal{DS}_{r}$. In other words, this observation indicates that $\mathcal{SV}$ solutions should not be searched for among $\mathcal{DS}_{r}$ solutions in this setting. We conjecture that we should take an increasing solution in order to remain in the set $\mathcal{SV}$. Of course, some logical adjustments then have to be made, like taking $\mathcal{RV}$ instead of $\mathcal{RV}_{0}$ in (2). As for $\mathcal{DS}_{r}$, we conjecture that this class somehow corresponds to $\mathcal{RV}(-\beta-1)$ solutions. Note that the integral $\int_{a}^{\infty}1/r(s)\,\mathrm{d}s$ (which is ‘close’ to the integral $\int_{a}^{\infty}G^{-1}(M/r(s))\,\mathrm{d}s$) is divergent for $\beta<-1$ resp. convergent for $\beta>-1$ since $1/r\in\mathcal{RV}(-\beta-2)$.
We have not yet mentioned the remaining possibility, namely $\beta=-1$. This border case is probably the most difficult one, and it will surely require a quite different approach. The direct use of the Karamata theorem is problematic, in contrast to the corresponding situations in the other cases. If $p\in\mathcal{RV}(-1)$ and $r\in\mathcal{RV}(1)$, we cannot even say whether $\int_{a}^{\infty}p(s)\,\mathrm{d}s$, $\int_{a}^{\infty}1/r(s)\,\mathrm{d}s$ are convergent or divergent. In fact, the situation is even more tangled because of the presence of the nonlinearities F, G, whose $\mathcal{SV}$ components $L_F$, $L_G$ are expected to have a stronger effect than in the case $\beta\ne -1$.
Remark 13 In this last remark, we indicate some further directions for possible future research. The asymptotic theory of nearly linear equations offers many interesting questions; this paper contains some answers, but many issues could be pursued further, and there is also some room for improving the presented results. We conjecture that our results can be generalized in the sense of replacing condition (2) by $F(\cdot),G(\cdot)\in\mathcal{RV}_{0}(\gamma)$, $\gamma>0$, which would lead to a ‘nearly half-linear equation.’ We expect that, within our setting with $\mathcal{RV}$ taken instead of $\mathcal{RV}_{0}$ in (2), increasing solutions of (1) are in $\mathcal{RV}(1)$ and asymptotic formulas can be established. In contrast to the linear case, a reduction of order formula is not at our disposal. A topic which would also be of interest is to obtain more precise information as regards $\mathcal{SV}$ solutions of (1), for instance, by means of the class $\Pi\mathrm{R}_{2}$, cf. [[2], Theorem 0.1B].
References
 1.
Evtukhov VM, Kharkov VM: Asymptotic representations of solutions of second-order essentially nonlinear differential equations. Differ. Uravn. 2007, 43: 1311-1323. (Russian)
 2.
Geluk JL: On slowly varying solutions of the linear second order differential equation. Publ. Inst. Math. (Belgr.) 1990, 48: 52-60. Proceedings of the Third Annual Meeting of the International Workshop in Analysis and Its Applications.
 3.
Kusano T, Manojlović J, Marić V: Increasing solutions of Thomas-Fermi type differential equations - the sublinear case. Bull. Cl. Sci. Math. Nat. Sci. Math. 2011, 36: 21-36.
 4.
Marić V Lecture Notes in Mathematics 1726. In Regular Variation and Differential Equations. Springer, Berlin; 2000.
 5.
Matucci, S, Řehák, P: Extreme solutions of a system of n nonlinear differential equations and regularly varying functions (submitted)
 6.
Kiguradze IT, Chanturia TA Mathematics and Its Applications 89. In Asymptotic Properties of Solutions of Nonautonomous Ordinary Differential Equations. Kluwer Academic, Dordrecht; 1993.
 7.
Bingham NH, Goldie CM, Teugels JL Encyclopedia of Mathematics and Its Applications 27. In Regular Variation. Cambridge University Press, Cambridge; 1987.
 8.
Geluk JL, de Haan L: Regular Variation, Extensions and Tauberian Theorems. CWI Tract 40. CWI, Amsterdam; 1987.
 9.
Chanturiya TA: Monotone solutions of a system of nonlinear differential equations. Ann. Pol. Math. 1980, 37: 59-70. (Russian)
Acknowledgements
The author acknowledges the support by the grant GAP201/10/1032 of the Czech Science Foundation and by RVO 67985840. The author thanks both referees for their careful reading of the manuscript and helpful comments.
Competing interests
The author declares that he has no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Keywords
 nonlinear second order differential equation
 decreasing solution
 regularly varying function