
Boundary value conditions for linear differential equations with power degenerations

Abstract

On the interval \([0,1]\) we consider an nth order linear differential equation whose coefficient at the highest derivative is equivalent to the power function \(t^{\mu }\) as \(t\rightarrow 0\). The main aim of the paper is to pose “generalized” Cauchy conditions for this equation at the point of singularity \(t=0\) that are correct for any \(\mu >0\).

1 Introduction

Let us consider the following nth order linear differential equation:

$$ \sum_{i=0}^{n}a_{i}(t)y^{(i)}(t)=f(t), \quad t\in [0,1], $$
(1)

where the coefficients \(a_{i}(\cdot )\), \(i=0,1,\ldots,n\), and the right-hand side \(f(\cdot )\) are continuous functions on \([0,1]\). Moreover, \(a_{n}(t)>0\) for \(t\in (0,1]\) and \(a_{n}(t)\) is equivalent to \(t^{\mu }\) as \(t\rightarrow 0\), i.e., equation (1) has a singularity of order μ at the point \(t=0\). It is known that if \(0<\mu <1\), then all solutions of equation (1) belong to \(C^{n}[0,1]\). Hence, in this case we can pose the same boundary conditions for (1) as for a nonsingular equation. In particular, at the point of singularity the following Cauchy conditions can be posed:

$$ y^{(i)}(0)=0,\quad i=0,1,\ldots,n-1. $$

In the case \(\mu \geq 1\), the limits \(\lim_{t\rightarrow 0} y^{(i)}(t)\), \(i=0,1,\ldots,n-1\), in general do not exist as finite values, so the Cauchy conditions lose their meaning. The main aim of the paper is to pose “generalized” Cauchy conditions for (1) at \(t=0\) that are correct for any \(\mu >0\).
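
A simple illustration (not taken from the paper): for \(n=1\) and the equation \(ty'(t)=1\), where \(a_{1}(t)=t\) (so that \(\mu =1\)) and \(f(t)\equiv 1\), every solution has the form

$$ y(t)=\ln t+C, $$

and hence \(\lim_{t\rightarrow 0}y(t)\) does not exist as a finite value for any constant \(C\), so the condition \(y(0)=0\) cannot be imposed.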

The problem will be solved in the following way. Let \(\gamma _{i}<1\), \(i=0,1,\ldots,n-1\), be an arbitrary set of n real numbers and \(\gamma _{n}=1\). Suppose that the set of numbers \(\overline{\alpha }=(\alpha _{0},\alpha _{1},\ldots,\alpha _{n})\) is such that

$$ \alpha _{i}=\gamma _{i-1}-\gamma _{i}+1,\quad i=1,2,\ldots,n,$$
(2)

and \(\alpha _{0}\) is calculated from the equality

$$ \mu =\gamma _{0}+\alpha _{0}+n-1. $$

That gives

$$ \sum_{i=0}^{n} \alpha _{i}=\mu ,\qquad \gamma _{i}=\alpha _{n}+ \sum _{k=i+1}^{n-1} (\alpha _{k}-1)< 1,\quad i=0,1,\ldots,n-2, \text{ and } \gamma _{n-1}=\alpha _{n}< 1. $$
(3)
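
To fix ideas, here is a concrete choice of the parameters (an illustrative example, not from the paper): let \(n=2\), \(\mu =3\), and take \(\gamma _{0}=\gamma _{1}=\frac{1}{2}\) (so \(\gamma _{2}=1\)). Then (2) gives \(\alpha _{1}=\gamma _{0}-\gamma _{1}+1=1\) and \(\alpha _{2}=\gamma _{1}-\gamma _{2}+1=\frac{1}{2}\), while \(\alpha _{0}=\mu -\gamma _{0}-n+1=\frac{3}{2}\), and indeed

$$ \alpha _{0}+\alpha _{1}+\alpha _{2}=\frac{3}{2}+1+\frac{1}{2}=3=\mu ,\qquad \gamma _{0}=\alpha _{2}+(\alpha _{1}-1)=\frac{1}{2}< 1,\qquad \gamma _{1}=\alpha _{2}=\frac{1}{2}< 1, $$

in accordance with (3).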

Using this set of numbers α̅, for \(y(t)\in C^{n}(0,1]\) we construct the following operations:

$$\begin{aligned}& D_{\overline{\alpha }}^{0}y(t)=t^{\alpha _{0}}y(t), \\& D_{\overline{\alpha }}^{i}y(t)= t^{\alpha _{i}}\frac{d}{dt}t^{\alpha _{i-1}} \frac{d}{dt} \cdots t^{\alpha _{1}}\frac{d}{dt}t^{\alpha _{0}}y(t), \quad i=1,2,\ldots,n. \end{aligned}$$

We call this differential operation (or operator) \(D_{\overline{\alpha }}^{i}\) the multiweighted derivative of the function \(y\) of order \(i\), \(i=0,1,\ldots,n\). At the point of singularity \(t=0\) of equation (1) we pose the boundary conditions

$$ D_{\overline{\alpha }}^{i}y(0)=0,\quad i=0,1, \ldots,n-1,$$
(4)

where each \(D_{\overline{\alpha }}^{i}y(0)=0\), \(i=0,1,\ldots,n-1\), is understood in the sense of the existence of the finite limit \(\lim_{t\rightarrow 0}D_{\overline{\alpha }}^{i}y(t)=D_{ \overline{\alpha }}^{i}y(0)\). Conditions (4) are the required “generalized” Cauchy conditions, and in the paper we prove that problem (1) and (4) with (3) has a unique solution.
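
Continuing the illustrative example above (\(n=2\), \(\overline{\alpha }=(\frac{3}{2},1,\frac{1}{2})\)), the multiweighted derivatives read

$$ D_{\overline{\alpha }}^{0}y(t)=t^{\frac{3}{2}}y(t),\qquad D_{\overline{\alpha }}^{1}y(t)=t\frac{d}{dt}\bigl(t^{\frac{3}{2}}y(t)\bigr)=\frac{3}{2}t^{\frac{3}{2}}y(t)+t^{\frac{5}{2}}y'(t), $$

so conditions (4) become \(\lim_{t\rightarrow 0}t^{\frac{3}{2}}y(t)=0\) and \(\lim_{t\rightarrow 0} (\frac{3}{2}t^{\frac{3}{2}}y(t)+t^{\frac{5}{2}}y'(t) )=0\), which make sense even though \(\mu =3\geq 1\).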

Let us note that the operator \(D_{\overline{\alpha }}^{n}\) gives the basis for a space \(W_{p,\overline{\alpha }}^{n}=W_{p,\overline{\alpha }}^{n}(I)\) of functions \(y:I\rightarrow \mathbb{R}\) with the finite semi-norm

$$ \Vert y \Vert _{W_{p,\overline{\alpha }}^{n}(I)}= \bigl\Vert D_{\overline{\alpha }}^{n}y \bigr\Vert _{p}, $$

where \(1< p<\infty \), \(I=(0,1)\) or \(I=(1,\infty )\).

The idea of studying function spaces with a view to applying them to various problems for differential equations appeared in the works of Sobolev in the 1930s. Since then, the theory of Sobolev spaces has developed into a very powerful instrument for solving boundary value problems for differential equations. Moreover, the concept of a “weight function” was introduced to handle various problems connected with singularities. In this direction, Kudryavtsev presented a fairly complete theory of one-dimensional Sobolev spaces with power weights (see, e.g., [8–15] and the references given there). He considered the space \(L_{p,\gamma }^{n}=L_{p,\gamma }^{n}(I)\) of functions \(y:I\rightarrow \mathbb{R}\) that have an nth order derivative on the interval I with the finite semi-norm

$$ \Vert y \Vert _{L_{p,\gamma }^{n}}= \bigl\Vert t^{\gamma }y^{(n)} \bigr\Vert _{p}. $$

For the interval \(I=(0,1)\) it was shown that if \(\gamma >n-\frac{1}{p}\), then the function y, in general, does not have a finite limit as \(t\rightarrow 0\) (an explicit example is given after the list below). Thus, the introduction of an additional function as a “weight” does not always resolve the problems caused by a singularity. This observation motivated the construction of the operator \(D_{\overline{\alpha }}^{n}\) and the definition of the space \(W_{p,\overline{\alpha }}^{n}\) with \(n+1\) weights. The theory of the function space \(W_{p,\overline{\alpha }}^{n}\) generalizes the theory of the function space \(L_{p,\gamma }^{n}\). In a series of works (see, e.g., [1–6]), problems similar to those studied by Kudryavtsev for the space \(L_{p,\gamma }^{n}\) were considered for the space \(W_{p,\overline{\alpha }}^{n}\). The results for the space \(W_{p,\overline{\alpha }}^{n}\) cover singularities that cannot be handled by a single weight but can be handled by several weights, and they form the basis for the main result of the present paper. More precisely, in the paper [5] it was shown that the properties of the space \(W_{p,\overline{\alpha }}^{n}\) depend on the values \(\gamma _{i}\), \(i=0,1,\ldots,n-1\), in accordance with which the following three cases of the degeneration of the weight functions \(t^{\alpha _{i}}\), \(i=1,2,\ldots,n\), are distinguished:

  1. \(\gamma _{\max }=\max_{0\leq i\leq n-1}\gamma _{i}<1- \frac{1}{p}\) (weak degeneration);

  2. \(\gamma _{\min }=\min_{0\leq i\leq n-1}\gamma _{i}>1- \frac{1}{p}\) (strong degeneration);

  3. \(\gamma _{\min }<1-\frac{1}{p}< \gamma _{\max }\) (mixed degeneration).
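
As an explicit example of the phenomenon mentioned before this list (not from the paper): take \(I=(0,1)\), \(n=1\), \(p=2\), and \(\gamma =1>n-\frac{1}{p}=\frac{1}{2}\). The function \(y(t)=\ln \frac{1}{t}\) satisfies

$$ \bigl\Vert t^{\gamma }y' \bigr\Vert _{2}= \biggl( \int _{0}^{1} \bigl\vert t\cdot \bigl(-t^{-1}\bigr) \bigr\vert ^{2}\,dt \biggr)^{\frac{1}{2}}=1< \infty , $$

so \(y\in L_{2,1}^{1}(0,1)\), while \(y(t)\rightarrow \infty \) as \(t\rightarrow 0\); a single weight thus does not by itself restore finite boundary values.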

Then in the paper [6] it was proved that, for any function \(y\in W_{p,\overline{\alpha }}^{n}\), the condition \(\gamma _{\max }<1-\frac{1}{p}\) (weak degeneration) is necessary and sufficient for the existence of the limit values \(\lim_{t\rightarrow 0}D_{\overline{\alpha }}^{i}y(t)=D_{ \overline{\alpha }}^{i}y(0)\) for all \(0\leq i\leq n-1\). If we resolve the equalities in (3) with respect to \(\alpha _{i}\), \(i=1,2,\ldots,n\), we get the equalities in (2). Moreover, by assumption the values \(\gamma _{i}\), \(i=0,1,\ldots,n-1\), satisfy the condition \(\gamma _{i}<1\). Thus, we have a weak degeneration of the weight functions \(t^{\alpha _{i}}\), \(i=1,2,\ldots,n\), which guarantees the existence of the characteristics \(D_{\overline{\alpha }}^{i}y(0)\) at the singular point \(t=0\).

The paper is organized as follows: In Sect. 2 we collect all the required notations, definitions, and statements; in Sect. 3 we state and prove our main result concerning the existence of a unique solution of problem (1) and (4).

2 Preliminaries

Let us introduce the following family of functions \(K_{k}(x,t)\), \(k=0,1,\ldots,n-1\), assuming that \(K_{n-1}(x,t)\equiv 1\), \(K_{n-2}(x,t)=\int _{x}^{t} s^{-\alpha _{n-1}}\,ds\), \(K_{n-3}(x,t)=\int _{x}^{t} y^{-\alpha _{n-2}} \int _{x}^{y}s^{- \alpha _{n-1}}\,ds \,dy\) and, in general,

$$ K_{k}(x,t)= \int _{x}^{t}t_{k+1}^{-\alpha _{k+1}} \int _{x}^{t_{k+1}}t_{k+2}^{-\alpha _{k+2}}\cdots \int _{x}^{t_{n-2}}t_{n-1}^{- \alpha _{n-1}} \,dt_{n-1}\,dt_{n-2}\cdots dt_{k+1},\quad k=0,1,\ldots,n-2, $$

for \(t>x\). Moreover, we assume that \(K_{k}(x,t)=0\), \(k=0,1,\ldots,n-1\), for \(t\le x\).

Let \(n-1\geq k\geq 0\). If in the integrals of \(K_{k}(x,t)\) we successively change the variables \(t_{n-1}=x\tau _{n-1}\), \(t_{n-2}=x\tau _{n-2}\), …, \(t_{k+1}=x\tau _{k+1}\), we get

$$ x^{-\alpha _{n}}K_{k}(x,t)=x^{-\gamma _{k}}K_{k}\biggl(1, \frac{t}{x}\biggr) $$

and

$$ x^{-\alpha _{n}}K_{k}(x,t)\geq x^{-\gamma _{k}}K_{k}(1,2) \quad \text{for } 0< x \leq \frac{t}{2}, $$
(5)

i.e., the function \(x^{-\alpha _{n}}K_{k}(x,t)\) has a singularity at zero of order \(x^{-\gamma _{k}}\) for \(0<\gamma _{k}<1\).
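
For instance (a direct check, not from the paper), for \(k=n-2\) and \(\alpha _{n-1}\neq 1\) we have \(K_{n-2}(x,t)=\frac{t^{1-\alpha _{n-1}}-x^{1-\alpha _{n-1}}}{1-\alpha _{n-1}}\), and therefore

$$ x^{-\alpha _{n}}K_{n-2}(x,t)= x^{1-\alpha _{n-1}-\alpha _{n}}\, \frac{ (\frac{t}{x} )^{1-\alpha _{n-1}}-1}{1-\alpha _{n-1}} =x^{-\gamma _{n-2}}K_{n-2}\biggl(1,\frac{t}{x}\biggr), $$

since \(\gamma _{n-2}=\alpha _{n}+\alpha _{n-1}-1\) by (3).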

Let \(n-1\geq k\geq 0\) and \(\gamma _{i}<1\) for \(i=k,k+1,\ldots,n-1\). Changing the order of integration and using (3), for \(t>0\), we get

$$\begin{aligned} &\int _{0}^{t}s^{-\alpha _{n}}K_{k}(s,t) \,ds \\ &\quad = \int _{0}^{t}s^{- \alpha _{n}} \int _{s}^{t}t_{k+1}^{-\alpha _{k+1}} \int _{s}^{t_{k+1}}t_{k+2}^{-\alpha _{k+2}} \cdots \int _{s}^{t_{n-2}}t_{n-1}^{- \alpha _{n-1}} \,dt_{n-1}\,dt_{n-2} \cdots dt_{k+1}\,ds \\ &\quad = \int _{0}^{t}t_{k+1}^{-\alpha _{k+1}} \int _{0}^{t_{k+1}}t_{k+2}^{- \alpha _{k+2}} \cdots \int _{0}^{t_{n-2}}t_{n-1}^{-\alpha _{n-1}} \int _{0}^{t_{n-1}}s^{-\alpha _{n}}\,ds \,dt_{n-1} \cdots dt_{k+2}\,dt_{k+1} \\ &\quad =\frac{1}{1-\gamma _{n-1}} \int _{0}^{t}t_{k+1}^{-\alpha _{k+1}} \int _{0}^{t_{k+1}}t_{k+2}^{-\alpha _{k+2}} \cdots \int _{0}^{t_{n-2}}t_{n-1}^{- \gamma _{n-2}} \,dt_{n-1} \cdots dt_{k+2}\,dt_{k+1} \\ &\quad =\frac{1}{(1-\gamma _{n-1})(1-\gamma _{n-2})} \int _{0}^{t}t_{k+1}^{- \alpha _{k+1}} \int _{0}^{t_{k+1}}t_{k+2}^{-\alpha _{k+2}} \cdots \int _{0}^{t_{n-3}}t_{n-2}^{-\gamma _{n-3}} \,dt_{n-2} \cdots dt_{k+2}\,dt_{k+1} \\ &\quad =\cdots=\frac{1}{\prod_{i=k+1}^{n-1}(1-\gamma _{i})} \int _{0}^{t}t_{k+1}^{-\gamma _{k}} \,dt_{k+1}=d_{k}t^{1-\gamma _{k}}, \end{aligned}$$
(6)

where \(d_{k}=\frac{1}{\prod_{i=k}^{n-1}(1-\gamma _{i})}\).
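
Identity (6) is easy to test numerically. The following sketch (an illustration under assumed concrete values, not part of the paper) checks it with SciPy for \(n=3\), \(k=0\), \(\gamma _{0}=\gamma _{1}=\gamma _{2}=\frac{1}{2}\), i.e. \(\alpha _{1}=\alpha _{2}=1\), \(\alpha _{3}=\frac{1}{2}\), and \(d_{0}=(1-\frac{1}{2})^{-3}=8\).

```python
# Numerical check of identity (6) for n = 3, k = 0 (illustrative values, not from the paper):
# gamma_0 = gamma_1 = gamma_2 = 1/2  =>  alpha_1 = alpha_2 = 1, alpha_3 = 1/2, d_0 = 8,
# so the left-hand side of (6) should equal 8 * t**(1 - gamma_0) = 8 * sqrt(t).
import numpy as np
from scipy import integrate

alpha1, alpha2, alpha3 = 1.0, 1.0, 0.5
t = 0.7

def K0(s, t):
    # K_0(s, t) = int_s^t u^{-alpha1} int_s^u v^{-alpha2} dv du, evaluated by nested quadrature
    inner = lambda u: integrate.quad(lambda v: v**(-alpha2), s, u)[0]
    return integrate.quad(lambda u: u**(-alpha1) * inner(u), s, t)[0]

# left-hand side of (6); quad may warn about the integrable endpoint singularity s^{-1/2}
lhs, _ = integrate.quad(lambda s: s**(-alpha3) * K0(s, t), 0.0, t)
print(lhs, 8.0 * np.sqrt(t))  # both values should be close to 6.69
```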

Lemma 1

Let \(\gamma _{n-1}<1\). Then, for \(y\in C^{n}(0,1]\) satisfying the condition \(\sup_{0\leq t\leq 1} |D_{\overline{\alpha }}^{n}y(t)|< \infty \), there exists \(D_{\overline{\alpha }}^{n-1}y(0)\) and the estimate

$$ t^{\gamma _{n-1}-1} \bigl\vert D_{\overline{\alpha }}^{n-1}y(t)- D_{ \overline{\alpha }}^{n-1}y(0) \bigr\vert \leq d_{n-1}\sup _{0\leq t\leq 1} \bigl\vert D_{ \overline{\alpha }}^{n}y(t) \bigr\vert $$
(7)

holds.

Proof. Indeed, since \(\alpha _{n}=\gamma _{n-1}<1\), it follows from the assumption that the function \(t^{-\alpha _{n}}D_{\overline{\alpha }}^{n}y(t)\) is absolutely integrable on the interval \((0,1]\). Therefore, by the Newton–Leibniz formula, for \(0< t\leq 1\) we have

$$\begin{aligned} \int _{0}^{t}s^{-\alpha _{n}}D_{\overline{\alpha }}^{n}y(s) \,ds&= \lim_{\varepsilon \rightarrow 0} \int _{\varepsilon }^{t} \frac{d}{ds}D_{\overline{\alpha }}^{n-1}y(s) \,ds \\ &=D_{\overline{\alpha }}^{n-1}y(t)- \lim_{\varepsilon \rightarrow 0}D_{\overline{\alpha }}^{n-1}y( \varepsilon )= D_{ \overline{\alpha }}^{n-1}y(t)-D_{\overline{\alpha }}^{n-1}y(0). \end{aligned}$$
(8)

Since \(\alpha _{n}=\gamma _{n-1}\) and \(\gamma _{n-1}<1\), from (8) we get

$$ \bigl\vert D_{\overline{\alpha }}^{n-1}y(t)-D_{\overline{\alpha }}^{n-1}y(0) \bigr\vert \leq \sup_{0\leq t\leq 1} \bigl\vert D_{\overline{\alpha }}^{n}y(t) \bigr\vert \int _{0}^{t} s^{-\alpha _{n}}\,ds=\sup _{0\leq t\leq 1} \bigl\vert D_{ \overline{\alpha }}^{n}y(t) \bigr\vert \cdot \frac{t^{1-\gamma _{n-1}}}{1-\gamma _{n-1}}. $$

The last inequality gives (7). The proof of Lemma 1 is complete.

Lemma 2

Let \(n-1\geq k \geq 0\) and \(\gamma _{i}<1\) for \(i=k,k+1,\ldots,n-1\). Let a function \(y\in C^{n}(0,1]\) satisfy the conditions \(\sup_{0\leq t\leq 1}|D_{\overline{\alpha }}^{n}y(t)|< \infty \) and \(D_{\overline{\alpha }}^{i}y(0)=0\), \(i=k+1,\ldots,n-1\). Then there exists \(D_{\overline{\alpha }}^{k}y(0)\), and

$$\begin{aligned}& t^{\gamma _{k}-1} \bigl\vert D_{\overline{\alpha }}^{k}y(t)- D_{ \overline{\alpha }}^{k}y(0) \bigr\vert \leq d_{k}\sup _{0\leq t\leq 1} \bigl\vert D_{ \overline{\alpha }}^{n}y(t) \bigr\vert ,\\ \end{aligned}$$
(9)
$$\begin{aligned}& D_{\overline{\alpha }}^{k}y(t)= D_{\overline{\alpha }}^{k}y(0)+ \int _{0}^{t}K_{k}(s,t)s^{-\alpha _{n}} D_{\overline{\alpha }}^{n}y(s)\,ds. \end{aligned}$$
(10)

Proof. Since \(\sup_{0\leq t\leq 1}|D_{\overline{\alpha }}^{n}y(t)|< \infty \) and \(\gamma _{i}<1\), \(i=k,k+1,\ldots,n-1\), from (6) we have that the function \(K_{i}(s,t)s^{-\alpha _{n}} D_{\overline{\alpha }}^{n}y(s)\) is absolutely integrable on \((0,t)\). Therefore, there exists

$$\begin{aligned} & \int _{0}^{t}K_{i}(s,t)s^{-\alpha _{n}} D_{\overline{\alpha }}^{n}y(s)\,ds \\ &\quad = \int _{0}^{t}t_{i+1}^{-\alpha _{i+1}} \int _{0}^{t_{i+1}}t_{i+2}^{- \alpha _{i+2}} \cdots \int _{0}^{t_{n-2}}t_{n-1}^{-\alpha _{n-1}} \int _{0}^{t_{n-1}}t_{n}^{-\alpha _{n}} D_{\overline{\alpha }}^{n}y(t_{n})\,dt_{n} \,dt_{n-1} \cdots dt_{i+1} \end{aligned}$$
(11)

for all \(i=k,k+1,\ldots,n-1\).

Since \(D_{\overline{\alpha }}^{n-1}y(0)=0\), from (8) we have

$$ D_{\overline{\alpha }}^{n-1}y(t)= \int _{0}^{t}t_{n}^{-\alpha _{n}} D_{\overline{\alpha }}^{n}y(t_{n})\,dt_{n},\quad t\in (0,1]. $$

The last, together with (11) for \(i=n-2\), gives

$$ D_{\overline{\alpha }}^{n-2}y(t)-D_{\overline{\alpha }}^{n-2}y(0)= \int _{0}^{t}t_{n-1}^{-\alpha _{n-1}} \int _{0}^{t_{n-1}}t_{n}^{- \alpha _{n}} D_{\overline{\alpha }}^{n}y(t_{n})\,dt_{n} \,dt_{n-1}. $$

Since \(D_{\overline{\alpha }}^{n-2}y(0)=0\) by the assumption, it follows that

$$ D_{\overline{\alpha }}^{n-2}y(t)= \int _{0}^{t}t_{n-1}^{- \alpha _{n-1}} \int _{0}^{t_{n-1}}t_{n}^{-\alpha _{n}} D_{ \overline{\alpha }}^{n}y(t_{n})\,dt_{n} \,dt_{n-1}. $$

If we continue this process, using the fact that \(D_{\overline{\alpha }}^{i}y(0)=0\), \(i=k+1,\ldots,n-1\), and the finiteness of (11), we get

$$ D_{\overline{\alpha }}^{k+1}y(t)= \int _{0}^{t}t_{k+2}^{- \alpha _{k+2}} \int _{0}^{t_{k+2}}t_{k+3}^{-\alpha _{k+3}} \cdots \int _{0}^{t_{n-1}}t_{n}^{-\alpha _{n}} D_{\overline{\alpha }}^{n}y(t_{n})\,dt_{n} \,dt_{n-1} \cdots dt_{k+2}. $$

That gives

$$\begin{aligned} D_{\overline{\alpha }}^{k}y(t)-D_{\overline{\alpha }}^{k}y(0) &= \int _{0}^{t}t_{k+1}^{-\alpha _{k+1}} \int _{0}^{t_{k+1}}t_{k+2}^{- \alpha _{k+2}} \cdots \int _{0}^{t_{n-1}}t_{n}^{-\alpha _{n}} D_{ \overline{\alpha }}^{n}y(t_{n})\,dt_{n} \,dt_{n-1} \cdots dt_{k+1} \\ &= \int _{0}^{t}K_{k}(s,t)s^{-\alpha _{n}} D_{ \overline{\alpha }}^{n}y(s)\,ds, \end{aligned}$$
(12)

i.e., there exists \(D_{\overline{\alpha }}^{k}y(0)\) and (10) holds. From (6) and (12) we have

$$\begin{aligned} \bigl\vert D_{\overline{\alpha }}^{k}y(t)-D_{\overline{\alpha }}^{k}y(0) \bigr\vert &\leq \sup_{0\leq t\leq 1} \bigl\vert D_{\overline{\alpha }}^{n}y(t) \bigr\vert \int _{0}^{t}K_{k}(s,t)s^{-\alpha _{n}} \,ds \\ &=\frac{1}{\prod_{i=k}^{n-1}(1-\gamma _{i})} t^{1-\gamma _{k}} \sup_{0\leq t\leq 1} \bigl\vert D_{\overline{\alpha }}^{n}y(t) \bigr\vert . \end{aligned}$$

Thus, we get (9). The proof of Lemma 2 is complete.

Corollary 1

Let \(n-1\geq k \geq 0\). Suppose that the conditions of Lemma 2 hold and \(D_{\overline{\alpha }}^{k}y(0)=0\). Then

$$ D_{\overline{\alpha }}^{i}y(t)= \int _{0}^{t}K_{i}(s,t)s^{- \alpha _{n}}D_{\overline{\alpha }}^{n}y(s) \,ds$$
(13)

and

$$ \sup_{0\leq t\leq 1}t^{\gamma _{i}-1} \bigl\vert D_{\overline{\alpha }}^{i}y(t) \bigr\vert \leq d_{i}\sup _{0\leq t\leq 1} \bigl\vert D_{\overline{\alpha }}^{n}y(t) \bigr\vert $$
(14)

for \(i=k,k+1,\ldots,n-1\).

Lemma 3

Let \(y:I\rightarrow \mathbb{R}\) be such that \(y\in C^{n}(0,1]\). Then

$$ y^{(k)}(t)=\sum_{i=0}^{k}b_{k,i}t^{\gamma _{i}- \gamma _{0}- \alpha _{0}-k}D_{\overline{\alpha }}^{i}y(t),\quad k=0,1,\ldots,n, $$
(15)

where \(b_{k,k}=1\), \(k=0,1,\ldots,n\), and the coefficients \(b_{k,i}\), \(i=1,2,\ldots,k-1\), \(k=1,2,\ldots,n\), and \(b_{k,0}\), \(k=1,2,\ldots,n\), are defined by the recurrence formulas:

$$ b_{k,i}=b_{k-1,i-1}+b_{k-1,i}\bigl(\gamma _{i}- \gamma _{0}-\alpha _{0}-(k-1)\bigr) \quad \textit{and}\quad b_{k,0}=-b_{k-1,0}(\alpha _{0}+k-1). $$

Proof. For \(k=0\), we have

$$ y(t)=t^{-\alpha _{0}}D_{\overline{\alpha }}^{0}y(t)=b_{0,0}t^{\gamma _{0}- \gamma _{0}-\alpha _{0}-0}D_{\overline{\alpha }}^{0}y(t), $$

where \(b_{0,0}=1\).

For \(k=1\), we have

$$ D_{\overline{\alpha }}^{1}y(t)=t^{\alpha _{1}} \bigl(\alpha _{0}t^{-1}D_{ \overline{\alpha }}^{0}y(t)+t^{\alpha _{0}}y'(t) \bigr). $$

That, using \(\alpha _{1}=\gamma _{0}-\gamma _{1}+1\), gives

$$\begin{aligned} y'(t)&=-\alpha _{0}t^{-\alpha _{0}-1}D_{\overline{\alpha }}^{0}y(t)+t^{- \alpha _{0}-\alpha _{1}}D_{\overline{\alpha }}^{1}y(t) \\ &=b_{1,0}t^{\gamma _{0}- \gamma _{0}-\alpha _{0}-1}D_{ \overline{\alpha }}^{0}y(t)+b_{1,1}t^{\gamma _{1}-\gamma _{0}-\alpha _{0}-1}D_{ \overline{\alpha }}^{1}y(t), \end{aligned}$$

where \(b_{1,0}=-b_{0,0}(\alpha _{0}+1-1)\) and \(b_{1,1}=1\).

Now, we assume that

$$ y^{(k-1)}(t)=\sum_{i=0}^{k-1}b_{k-1,i}t^{\gamma _{i}- \gamma _{0}-\alpha _{0}-(k-1)}D_{\overline{\alpha }}^{i}y(t) $$

is true. Then, using (2), we get

$$\begin{aligned} y^{(k)}(t)&=\sum_{i=0}^{k-1}b_{k-1,i}t^{\gamma _{i}- \gamma _{0}- \alpha _{0}-(k-1)-\alpha _{i+1}}D_{\overline{\alpha }}^{i+1}y(t) \\ &\quad {}+\sum_{i=0}^{k-1}b_{k-1,i} \bigl(\gamma _{i}- \gamma _{0}-\alpha _{0}-(k-1) \bigr)t^{ \gamma _{i}- \gamma _{0}-\alpha _{0}-k}D_{\overline{\alpha }}^{i}y(t) \\ & =b_{k-1,k-1}t^{\gamma _{k-1}- \gamma _{0}-\alpha _{0}-(k-1)-\alpha _{k}}D_{ \overline{\alpha }}^{k}y(t) \\ &\quad {}+\sum_{i=1}^{k-1}b_{k-1,i-1}t^{\gamma _{i-1}- \gamma _{0}- \alpha _{0}-(k-1)-\alpha _{i}}D_{\overline{\alpha }}^{i}y(t) \\ &\quad {}+\sum_{i=1}^{k-1}b_{k-1,i} \bigl(\gamma _{i}- \gamma _{0}-\alpha _{0}-(k-1) \bigr)t^{ \gamma _{i}- \gamma _{0}-\alpha _{0}-k}D_{\overline{\alpha }}^{i}y(t) \\ &\quad {}+ b_{k-1,0}\bigl(\gamma _{0}- \gamma _{0}- \alpha _{0}-(k-1)\bigr)t^{\gamma _{0}- \gamma _{0}-\alpha _{0}-k}D_{\overline{\alpha }}^{0}y(t) \\ &=-b_{k-1,0}(\alpha _{0}+k-1)t^{\gamma _{0}-\gamma _{0}-\alpha _{0}-k}D_{ \overline{\alpha }}^{0}y(t) \\ &\quad {}+\sum_{i=1}^{k-1} \bigl(b_{k-1,i-1}+b_{k-1,i}\bigl(\gamma _{i}- \gamma _{0}-\alpha _{0}-(k-1) \bigr)\bigr)t^{\gamma _{i}- \gamma _{0}- \alpha _{0}-k}D_{\overline{\alpha }}^{i}y(t) \\ &\quad {}+b_{k-1,k-1}t^{\gamma _{k}- \gamma _{0}-\alpha _{0}-k}D_{ \overline{\alpha }}^{k}y(t). \end{aligned}$$

The last implies (15). The proof of Lemma 3 is complete.
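
For orientation, here is the case \(k=2\) spelled out (an illustrative computation, not in the paper): starting from \(b_{0,0}=1\), the recurrence gives \(b_{1,0}=-\alpha _{0}\), \(b_{1,1}=1\), and then \(b_{2,0}=\alpha _{0}(\alpha _{0}+1)\), \(b_{2,1}=\gamma _{1}-\gamma _{0}-2\alpha _{0}-1\), \(b_{2,2}=1\), so that (15) reads

$$ y''(t)=\alpha _{0}(\alpha _{0}+1)t^{-\alpha _{0}-2}D_{\overline{\alpha }}^{0}y(t)+(\gamma _{1}-\gamma _{0}-2\alpha _{0}-1)t^{\gamma _{1}-\gamma _{0}-\alpha _{0}-2}D_{\overline{\alpha }}^{1}y(t)+t^{\gamma _{2}-\gamma _{0}-\alpha _{0}-2}D_{\overline{\alpha }}^{2}y(t), $$

which agrees with differentiating the expression for \(y'(t)\) obtained in the proof.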

3 Main result

Theorem 1

Let (3) hold. Let the coefficients \(a_{i}(t)\), \(i=0,1,\ldots,n\), of equation (1) be continuous functions on \((0,1]\) and satisfy the conditions

$$\begin{aligned}& c_{1}t^{\mu }\leq a_{n}(t)\leq c_{2}t^{\mu },\quad \mu >0, t\in (0,1], \end{aligned}$$
(16)
$$\begin{aligned}& a_{i}(t)=o\bigl(t^{\mu -n+i}\bigr),\quad i=0,1, \ldots,n-1, \textit{ for } t\rightarrow 0, \end{aligned}$$
(17)

where the constants \(c_{1}>0\) and \(c_{2}>0\) do not depend on \(t\in (0,1]\). Then, for any \(\mu >0\) and for any function \(f(\cdot )\) continuous on \([0,1]\), there exists a unique solution of problem (1) and (4), and the following estimate

$$ \sum_{k=0}^{n}\sup _{0\leq t\leq 1} t^{\mu -n+k} \bigl\vert y^{(k)}(t) \bigr\vert \leq c \max_{0\leq t\leq 1} \bigl\vert f(t) \bigr\vert $$
(18)

holds, where \(c>0\) does not depend on \(f\).

Proof. If we substitute (15) in (1), then we have

$$ \sum_{k=0}^{n} \Biggl(\sum _{i=k}^{n}a_{i}(t)b_{i,k}t^{ \gamma _{k}- \gamma _{0}-\alpha _{0}-i} \Biggr)D_{\overline{\alpha }}^{k}y(t)=f(t). $$

By introducing the notations \(\widetilde{a}_{k}(t)=\sum_{i=k}^{n}a_{i}(t) b_{i,k} t^{ \gamma _{k}- \gamma _{0}-\alpha _{0}-i}\), \(k=0,1,\ldots,n\), we get

$$ \sum_{k=0}^{n} \widetilde{a}_{k}(t) D_{\overline{\alpha }}^{k}y(t)=f(t).$$
(19)

Since \(b_{n,n}=1\) and \(\gamma _{n}-\gamma _{0}-\alpha _{0}-n=-\mu \), we have \(\widetilde{a}_{n}(t)=a_{n}(t)t^{-\mu }\). Hence, from conditions (16) and (17) we have

$$\begin{aligned}& c_{1}\leq \widetilde{a}_{n}(t)\leq c_{2},\quad t\in (0,1], \end{aligned}$$
(20)
$$\begin{aligned}& \widetilde{a}_{k}(t)=o\bigl(t^{\gamma _{k}-1}\bigr), \quad k=0,1,\ldots,n-1, \text{ for } t\rightarrow 0. \end{aligned}$$
(21)

By condition (3) we have that \(\gamma _{i}<1\), \(i=0,1,\ldots,n-1\), therefore (14) and (13) are valid for \(k=0\).

Let \(z(t)=\widetilde{a}_{n}(t) D_{\overline{\alpha }}^{n}y(t)\). Then from (4), (13), and (19) we obtain

$$ z(t)+ \int _{0}^{t}\sum_{k=0}^{n-1} \widetilde{a}_{k}(t)K_{k}(s,t)s^{- \alpha _{n}} \widetilde{a}_{n}^{-1}(s)z(s)\,ds=f(t). $$
(22)

Now, we prove that integral equation (22) has a unique solution continuous on \([0,1]\), for which the estimate

$$ \max_{0\leq t\leq 1} \bigl\vert z(t) \bigr\vert \leq \bar{c}\max_{0\leq t \leq 1} \bigl\vert f(t) \bigr\vert $$
(23)

holds, where \(\overline{c}>0\) does not depend on f.

Despite the fact that integral equation (22) has the form of a Volterra equation, (5) implies that the kernel of the integral operator in (22) is unbounded if \(0<\gamma _{k}<1\) for some \(0\le k\le n-1\). Therefore, we cannot apply the standard theory of Volterra integral equations with continuous kernels in the space of continuous functions (see, e.g., [16]). Hence, let us first solve equation (22) on some interval \([0,\delta ]\), \(0<\delta <1\), using the contraction mapping principle (see, e.g., [7]).

By (21) there exists \(1>\delta >0\) such that

$$ c_{1}^{-1}\sup_{0\leq t\leq \delta } \sum _{k=0}^{n-1} \frac{1}{\prod_{i=k}^{n-1}(1-\gamma _{i})} \bigl\vert \widetilde{a}_{k}(t) \bigr\vert t^{1- \gamma _{k}}=q< 1.$$
(24)

In \(C[0,\delta ]\) we consider the integral operator

$$ Kz(t)=\sum_{k=0}^{n-1}\widetilde{a}_{k}(t) \int _{0}^{t}K_{k}(s,t)s^{- \alpha _{n}} \widetilde{a}_{n}^{-1}(s)z(s)\,ds,\quad t\in [0,\delta ]. $$

Due to (6), (21), and (24) we have

$$\begin{aligned} \sup_{0\leq t\leq \delta } \bigl\vert Kz(t) \bigr\vert &\leq \sup _{0\leq t \leq \delta }\sum_{k=0}^{n-1} \bigl\vert \widetilde{a}_{k}(t) \bigr\vert \int _{0}^{t}K_{k}(s,t)s^{-\alpha _{n}} \bigl\vert \widetilde{a}_{n}(s) \bigr\vert ^{-1} \bigl\vert z(s) \bigr\vert \,ds \\ &\leq \sup_{0\leq t\leq \delta }c_{1}^{-1}\sum _{k=0}^{n-1} \bigl\vert \widetilde{a}_{k}(t) \bigr\vert \sup_{0\leq t\leq \delta } \bigl\vert z(t) \bigr\vert \int _{0}^{t}K_{k}(s,t)s^{-\alpha _{n}} \,ds \\ &=c_{1}^{-1}\sup_{0\leq t\leq \delta }\sum _{k=0}^{n-1} \frac{1}{\prod_{i=k}^{n-1}(1-\gamma _{i})} \bigl\vert \widetilde{a}_{k}(t) \bigr\vert t^{1-\gamma _{k}}\sup _{0\leq t\leq \delta } \bigl\vert z(t) \bigr\vert =q\sup _{0\leq t\leq \delta } \bigl\vert z(t) \bigr\vert . \end{aligned}$$

Therefore, K is a contraction operator in \(C[0,\delta ]\). Applying the contraction mapping principle to integral equation (22) ([7], pp. 88–89), we have that equation (22) has a unique solution \(\bar{z}_{0}\in C[0,\delta ]\); in addition, \(\max_{0\leq t\leq \delta }|\bar{z}_{0}(t)|\leq \bar{c}_{0} \max_{0\leq t\leq 1}|f(t)|\). The successive approximations \(z_{1}\), \(z_{2}\), … , \(z_{m}\), … to this solution are of the form

$$ z_{m}(t)+ \int _{0}^{t}\sum_{k=0}^{n-1} \widetilde{a}_{k}(t)K_{k}(s,t)s^{- \alpha _{n}} \widetilde{a}_{n}^{-1}(s)z_{m-1}(s)\,ds=f(t),\quad m=1,2,\ldots, $$

where any function from \(C[0,\delta ]\) can be chosen as the initial approximation \(z_{0}(t)\).
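
The successive approximation scheme can be illustrated numerically. The following sketch (a schematic illustration under simplified assumptions, not the paper's construction) applies the same Picard iteration to a model Volterra equation \(z(t)+\int _{0}^{t}G(t,s)z(s)\,ds=f(t)\) with a continuous kernel; the kernel \(G\), right-hand side \(f\), grid size, and iteration count are all illustrative choices.

```python
# Schematic successive approximations for z(t) + \int_0^t G(t,s) z(s) ds = f(t) on [0, delta].
# Illustrative model data: G(t,s) = t - s, f(t) = 1, for which the exact solution is z(t) = cos(t)
# and the contraction constant is sup_t \int_0^t |t - s| ds = delta**2 / 2 < 1.
import numpy as np

def successive_approximations(G, f, delta=0.5, m=200, iterations=40):
    t = np.linspace(0.0, delta, m)
    z = f(t).astype(float)                      # initial approximation z_0 = f
    for _ in range(iterations):
        z_new = np.empty_like(z)
        for j, tj in enumerate(t):
            s = t[: j + 1]
            integrand = G(tj, s) * z[: j + 1]
            # composite trapezoidal rule for \int_0^{t_j} G(t_j, s) z_prev(s) ds
            integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s)) if j > 0 else 0.0
            z_new[j] = f(tj) - integral         # z_m(t) = f(t) - \int_0^t G(t,s) z_{m-1}(s) ds
        z = z_new
    return t, z

t, z = successive_approximations(lambda t, s: t - s,
                                 lambda x: np.ones_like(np.asarray(x, dtype=float)))
print(z[-1], np.cos(0.5))                       # the two values should nearly coincide
```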

Thus, we have found the solution \(\bar{z}_{0}\in C[0,\delta ]\) of equation (22) on the interval \([0,\delta ]\). From equation (22) we have

$$ \bar{z}_{0}(\delta )=f(\delta )-\sum_{k=0}^{n-1} \widetilde{a}_{k}(\delta ) \int _{0}^{\delta }K_{k}(s,\delta )s^{- \alpha _{n}}\widetilde{a}_{n}^{-1}(s) \bar{z}_{0}(s)\,ds. $$

Let us now solve equation (22) on the interval \([\delta ,1]\) with the condition

$$ z(\delta )=\bar{z}_{0}(\delta ). $$
(25)

On \([\delta ,1]\) we present equation (22) in the form

$$\begin{aligned} &z(t)+\sum_{k=0}^{n-1}\widetilde{a}_{k}(t) \biggl( \int _{0}^{\delta }K_{k}(s,t)s^{-\alpha _{n}} \widetilde{a}_{n}^{-1}(s)z(s)\,ds+ \int _{\delta }^{t}K_{k}(s,t)s^{-\alpha _{n}} \widetilde{a}_{n}^{-1}(s)z(s)\,ds \biggr) \\ &\quad =f(t). \end{aligned}$$

Since \(z=\bar{z}_{0}\) on the interval \([0,\delta ]\), we have

$$\begin{aligned}& z(t)+\sum_{k=0}^{n-1}\widetilde{a}_{k}(t) \int _{ \delta }^{t}K_{k}(s,t)s^{-\alpha _{n}} \widetilde{a}_{n}^{-1}(s)z(s)\,ds \\& \quad =f(t)-\sum_{k=0}^{n-1} \widetilde{a}_{k}(t) \int _{0}^{ \delta }K_{k}(s,t)s^{-\alpha _{n}} \widetilde{a}_{n}^{-1}(s)\bar{z}_{0}(s) \,ds. \end{aligned}$$
(26)

It means that the kernel of the integral operator K is continuous on the bounded set \(\delta \le s\le t\le 1\). Hence, equation (22) is a regular Volterra integral equation on the interval \([\delta ,1]\). Therefore, it has a unique continuous solution \(\bar{z}_{1}\) on the interval \([\delta ,1]\) (see, e.g., [16]), for which the estimate \(\max_{\delta \leq t\leq 1}|\bar{z}_{1}(t)|\leq \bar{c}_{1} \max_{0\leq t\leq 1}|f(t)|\) holds.

From (26) we have

$$ \bar{z}_{1}(\delta )=f(\delta )-\sum_{k=0}^{n-1} \widetilde{a}_{k}(\delta ) \int _{0}^{\delta }K_{k}(s,\delta )s^{- \alpha _{n}}\widetilde{a}_{n}^{-1}(s) \bar{z}_{0}(s)\,ds. $$

The last gives that \(\bar{z}_{0}(\delta )=\bar{z}_{1}(\delta )\), i.e., (25) holds. Hence, the function

$$ z(t)= \textstyle\begin{cases} \bar{z}_{0}(t),& 0\leq t\leq \delta ,\\ \bar{z}_{1}(t),& \delta \leq t\leq 1, \end{cases} $$

belongs to \(C[0,1]\) and is the unique solution of (22). From the estimates for \(\bar{z}_{0}\) and \(\bar{z}_{1}\), we get (23) with \(\bar{c}=\max \{\bar{c}_{0},\bar{c}_{1}\}\).

Thus, problem (1) and (4) is reduced to the problem

$$ \textstyle\begin{cases} D_{\overline{\alpha }}^{n}y(t)= \widetilde{a}_{n}^{-1}(t)z(t), \\ D_{\overline{\alpha }}^{i}y(0)=0,\quad i=0,1,\ldots,n-1, \end{cases} $$
(27)

with condition (3), where \(z\in C[0,1]\) is a unique solution of (22).

Since \(z\in C[0,1]\), the function \(\widetilde{a}_{n}^{-1}(t)= (a_{n}(t)t^{-\mu })^{-1}\) is continuous on \((0,1]\) and by (20) we have that \(\frac{1}{c_{2}}\leq \widetilde{a}_{n}^{-1}(t)\leq \frac{1}{c_{1}}\), \(t\in (0,1]\). Then from (27) we get \(D_{\overline{\alpha }}^{n}y(t)\in C(0,1]\) and \(\sup_{0\leq t \leq 1}|D_{\overline{\alpha }}^{n}y(t)|< \infty \). Moreover, by condition (3) we have \(\gamma _{i}<1\), \(i=0,1,\ldots,n-1\). Then, on the basis of Lemmas 1 and 2 and from (13) and (14), we get

$$\begin{aligned} D_{\overline{\alpha }}^{i}y(t)&= \int _{0}^{t}K_{i}(s,t)s^{- \alpha _{n}} D_{\overline{\alpha }}^{n}y(s)\,ds \\ &= \int _{0}^{t}K_{i}(s,t)s^{-\alpha _{n}} \widetilde{a}_{n}^{-1}(s)z(s)\,ds,\quad i=0,1, \ldots,n-1, \end{aligned}$$
(28)

and the estimate

$$ \sup_{0\leq t \leq 1}t^{\gamma _{i}-1} \bigl\vert D_{\overline{\alpha }}^{i}y(t) \bigr\vert \leq d_{i}\sup _{0\leq t \leq 1} \bigl\vert D_{\overline{\alpha }}^{n}y(t) \bigr\vert \leq \frac{d_{i}}{c_{1}}\max_{0\leq t \leq 1} \bigl\vert z(t) \bigr\vert ,\quad i=0,1,\ldots,n. $$
(29)

From (29) the uniqueness of the solution of problem (27) follows. Indeed, if \(y_{1}\) and \(y_{2}\) are two solutions of (27), then their difference \(w=y_{1}-y_{2}\) satisfies \(D_{\overline{\alpha }}^{n}w\equiv 0\) and \(D_{\overline{\alpha }}^{i}w(0)=0\), \(i=0,1,\ldots,n-1\), so the first inequality in (29) gives \(D_{\overline{\alpha }}^{i}w\equiv 0\), and hence \(w=t^{-\alpha _{0}}D_{\overline{\alpha }}^{0}w\equiv 0\).

From (23) and (29) we obtain

$$ \sup_{0\leq t \leq 1}t^{\gamma _{i}-1} \bigl\vert D_{\overline{\alpha }}^{i}y(t) \bigr\vert \leq \frac{\bar{c} d_{i}}{c_{1}} \max _{0\leq t \leq 1} \bigl\vert f(t) \bigr\vert ,\quad i=0,1, \ldots,n.$$
(30)

Since \(\mu =\gamma _{0}+\alpha _{0}+n-1\), from (15) we have

$$ t^{\mu -n+k}y^{(k)}(t)=\sum_{i=0}^{k}b_{k,i} t^{\gamma _{i}-1} D_{\overline{\alpha }}^{i}y(t),\quad k=0,1,\ldots,n. $$

The last, together with (30), gives the estimate

$$ \sup_{0\leq t \leq 1} \bigl\vert t^{\mu -n+k}y^{(k)}(t) \bigr\vert \leq \tilde{c} \max_{0\leq t \leq 1} \bigl\vert f(t) \bigr\vert ,\quad k=0,1,\ldots,n, $$

where \(\tilde{c}>0\) depends on \(\bar{c}\), \(c_{1}\), \(d_{i}\), and \(b_{k,i}\), \(k,i=0,1,\ldots,n\), and does not depend on \(f\). The last implies the validity of (18). The proof of Theorem 1 is complete.

Remark 1

Under the conditions of Theorem 1 problem (1) and (4) is solvable for any set \(\alpha _{0}\), \(\alpha _{1}\), …, \(\alpha _{n}\) satisfying condition (3).

References

  1. Abdikalikova, Z., Baiarystanov, A., Oinarov, R.: Compactness of embedding between spaces with multiweighted derivatives—the case \(p\leq q\). Math. Inequal. Appl. 14(4), 793–810 (2011)


  2. Abdikalikova, Z., Kalybay, A.: Summability of a Tchebysheff system of functions. J. Funct. Spaces Appl. 8(1), 87–102 (2010). https://doi.org/10.1155/2010/405313


  3. Abdikalikova, Z., Oinarov, R., Persson, L.-E.: Boundedness and compactness of the embedding between spaces with multiweighted derivatives when \(1\leq q < p <\infty \). Czechoslov. Math. J. 61 7 (2011). https://doi.org/10.1007/s10587-011-0014-1


  4. Baideldinov, B.L.: On stabilization to the generalized polynomial of functions with an integrable weight derivative. Dokl. Math. 354(3), 295–297 (1997)


  5. Baideldinov, B.L.: On equivalent norms in multiweighted spaces. Izv. Nats. Acad. Nauk Resp. Kaz. Ser. Fiz.-Mat. 3, 8–14 (1998) (in Russian)


  6. Kalybay, A.: A generalized multiparameter weighted Nikol’skii–Lizorkin inequality. Dokl. Math. 68(1), 121–127 (2003)


  7. Kolmogorov, A.N., Fomin, S.V.: Elements of the Theory of Functions and Functional Analysis. Nauka, Moskow (1989) (in Russian)


  8. Kudryavtsev, L.D.: Direct and inverse embedding theorems. Applications to the solution of elliptic equations by variational method. Trudy Mat. Inst. Akad. Nauk SSSR 55, 3–182 (1959) (in Russian)


  9. Kudryavtsev, L.D.: Imbedding theorems for functions defined on unbounded regions. Sov. Math. Dokl. 4, 530–532 (1963)


  10. Kudryavtsev, L.D.: An imbedding theorem for a class of functions defined in the whole space or in the half-space. I. Mat. Sb. 69(111)(4), 616–639 (1966) (in Russian)


  11. Kudryavtsev, L.D.: Imbedding theorems for classes of functions defined on the entire space or on a half space. II. Mat. Sb. 70(112)(1), 3–35 (1966) (in Russian)


  12. Kudryavtsev, L.D.: Equivalent norms in weighted spaces. Proc. Steklov Inst. Math. 170, 185–218 (1987)


  13. Kudryavtsev, L.D.: On variational problems for quadratic weight functionals on infinite intervals. Proc. Steklov Inst. Math. 172, 223–235 (1987)


  14. Kudryavtsev, L.D.: An analogue for an infinite interval of the Lizorkin–Nikol’skii inequality. Proc. Steklov Inst. Math. 173, 151–160 (1987)


  15. Kudryavtsev, L.D.: On some inequalities for functions and their derivatives on an infinite interval. Proc. Steklov Inst. Math. 204, 143–147 (1994)


  16. Smirnov, V.I.: A Course of Higher Mathematics. Nauka, Moskow (1974) (in Russian)



Acknowledgements

The author would like to thank Professor Ryskul Oinarov and the unknown referees for their generous suggestions and remarks, which have improved this paper.

Availability of data and materials

Not applicable.

Funding

The paper was written with financial support of the Ministry of Education and Science of the Republic of Kazakhstan, grant no. AP05130975 in the area “Scientific research in the field of natural sciences”.

Author information

Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Aigerim Kalybay.

Ethics declarations

Competing interests

The author declares that she has no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Kalybay, A. Boundary value conditions for linear differential equations with power degenerations. Bound Value Probl 2020, 110 (2020). https://doi.org/10.1186/s13661-020-01412-6
