
A new method for high-order boundary value problems

Abstract

This paper presents a numerical algorithm for solving high-order boundary value problems (BVPs). We first construct a multiscale orthonormal basis of \(W^{m}_{2}[0,1]\). Based on this basis, the numerical solution of the boundary value problem is obtained by computing an ε-approximate solution. In addition, the convergence order, stability, and time complexity of the method are analyzed theoretically. Finally, several numerical experiments demonstrate the feasibility of the proposed method.

1 Introduction

High-order BVPs are important mathematical models in electromagnetics, fluid mechanics, and materials science. Many problems in the theory of elastic stability can be handled by BVPs [1]. Because the complexity of these systems makes analytic solutions of high-order BVPs difficult to find, many numerical algorithms for high-order BVPs have been proposed in recent years. The multistage integration method is an important approach that computes numerical solutions of high-order models by reducing the order gradually [2–5]. Refs. [6–8] discuss the existence of solutions to higher-order differential equations. Cao et al. [9] solved a class of high-order fractional ordinary differential equations by a quadratic interpolation method. The collocation method proposed in [10] and the orthonormal Bernstein polynomial method proposed by Mirzaee et al. [11, 12] can solve high-order linear complex differential equations effectively. Mirzaee et al. [13–22] proposed a variety of numerical algorithms for solving high-order integro-differential equations. Many methods have also been proposed for the numerical solution of high-order partial differential equations [23–25]. The reproducing kernel space is an important Hilbert space that has been widely used in numerical analysis; reproducing kernel methods have been applied to high-order models, singular BVPs, and interface problems [26–31].

In this paper, we construct a set of multiscale orthonormal bases in the reproducing kernel space, following the idea of wavelets. Because the basis is orthonormal, the computational efficiency is improved. For the numerical solution of differential equations, several works, such as [32, 33], use the idea of an ε-approximate solution, which provides stability of the algorithm and a good order of convergence. In this article, we construct the multiscale method for the following high-order boundary value problems (BVPs):

$$ \textstyle\begin{cases} u^{(m)}+p_{1}(x)u^{(m-1)}+ \cdots +p_{m-1}(x)u^{\prime }+p_{m}(x)u=f(x), \quad x \in (0,1) \\ B_{i}u=\alpha _{i}, \quad i=1,\ldots ,m, \end{cases} $$
(1.1)

where \(B_{i}\) (\(i=1,2,\ldots , m\)) are bounded linear functionals on \(W_{2}^{m}[0,1]\) and \(p_{i}(x)\) (\(i=1,2,\ldots , m\)) are sufficiently smooth.

The paper is organized as follows. In Sect. 2, we construct a set of multiscale orthonormal bases in the reproducing kernel space \(W^{m}_{2}[0,1]\). In Sect. 3, we introduce a method to obtain the numerical solution of BVPs by finding an ε-approximate solution and verify that such a solution exists. In Sect. 4, the convergence, stability, and complexity of the method are discussed. In Sect. 5, we report numerical results obtained by the present method and compare them with previous methods.

2 Multiscale orthonormal basis

In this section, the reproducing kernel space is defined and a set of multiscale orthonormal bases is constructed; these tools are used throughout the rest of the paper.

Definition 2.1

The reproducing kernel space is \(W^{m}_{2}[0,1]=\{u| u^{(m-1)}\in C[0,1], u^{(m)} \in L^{2}[0,1] \}\), and the inner product and norm on \(W^{m}_{2}\) are

$$ \langle u,v \rangle _{W_{2}^{m}}=\sum_{i=0}^{m-1} u^{(i)}(0)v^{(i)}(0)+ \int ^{1}_{0}{u^{(m)} v^{(m)}\,dx}, \qquad \Vert u \Vert _{W_{2}^{m}}=\sqrt{ \langle u,u \rangle _{W_{2}^{m}}}. $$
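To make the definition concrete, here is a small symbolic sketch of this inner product (an illustration added for this presentation, not part of the paper's algorithm; the helper name `inner_W2m` and the use of sympy are our own choices):

```python
import sympy as sp

x = sp.symbols('x')

def inner_W2m(u, v, m):
    """Inner product of Definition 2.1 on W_2^m[0,1], evaluated symbolically."""
    boundary = sum(sp.diff(u, x, i).subs(x, 0) * sp.diff(v, x, i).subs(x, 0)
                   for i in range(m))
    integral = sp.integrate(sp.diff(u, x, m) * sp.diff(v, x, m), (x, 0, 1))
    return sp.simplify(boundary + integral)

# For instance, the monomials x^k/k! used later are orthonormal in this inner product:
# inner_W2m(x**2/2, x**2/2, 3) == 1   and   inner_W2m(x, x**2/2, 3) == 0
```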

Definition 2.2

The reproducing kernel space

$$ W^{m}_{2,0}[0,1]=\bigl\{ u| u \in W_{2}^{m}, u(0)=u^{\prime }(0)=\cdots =u^{(m-1)}(0)=0,u^{(m-1)}(1)=0 \bigr\} . $$

Clearly, \(W^{m}_{2,0}[0,1]\) is a closed subspace of \(W^{m}_{2}[0,1]\). In Ref. [32], we set up a multiscale orthonormal basis of \(W^{1}_{2,0}\):

$$\begin{aligned} \phi _{i,k}(x)=2^{\frac{i-1}{2}} \textstyle\begin{cases} (x-\frac{k}{2^{i-1}}), &x\in [\frac{k}{2^{i-1}},\frac{k+1/2}{2^{i-1}}], \\ (\frac{k+1}{2^{i-1}}-x),&x\in [\frac{k+1/2}{2^{i-1}}, \frac{k+1}{2^{i-1}}], \\ 0,& \mbox{else}, \end{cases}\displaystyle \end{aligned}$$
(2.1)

where \(i=1,2,\ldots \) , \(k=0,1,2,\ldots ,2^{i-1}-1\). The graph of \(\phi _{i,k}\) is shown in Fig. 1.

Figure 1. \(\phi _{i,k}(x)\)
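For reference, a minimal numerical sketch of Eq. (2.1) (illustrative only; the function name `phi` and the use of numpy are assumptions made here, not code from the paper):

```python
import numpy as np

def phi(i, k, x):
    """Hat function phi_{i,k} of Eq. (2.1), vectorized over x; i >= 1, 0 <= k <= 2^(i-1)-1."""
    h = 1.0 / 2 ** (i - 1)                       # length of the support [k*h, (k+1)*h]
    left, mid, right = k * h, (k + 0.5) * h, (k + 1) * h
    x = np.asarray(x, dtype=float)
    rising = (x - left) * ((x >= left) & (x <= mid))
    falling = (right - x) * ((x > mid) & (x <= right))
    return 2 ** ((i - 1) / 2) * (rising + falling)

# e.g. phi(2, 1, np.linspace(0, 1, 9)) gives samples of the hat supported on [1/2, 1]
```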

In order to solve Eq. (1.1), this paper constructs a set of orthonormal bases of \(W_{2}^{m}[0,1]\) from \(\{\phi _{i,k}\}_{i=1,k=0}^{\infty ,2^{i-1}-1}\). First, we construct the basis functions in \(W_{2,0}^{m}[0,1]\). Denote

$$ J^{m-1}_{0}u=\frac{1}{(m-2)!} \int ^{x}_{0} (x-s)^{m-2}u(s)\,ds \quad (m\in N,m \geq 2 ). $$

Theorem 2.1

\(\{J^{m-1}_{0}\phi _{1,0}(x),J^{m-1}_{0}\phi _{2,0}(x),J^{m-1}_{0} \phi _{2,1}(x),\ldots ,J^{m-1}_{0}\phi _{i,k}(x),\ldots \}\) is the multiscale orthonormal basis of \(W^{m}_{2,0}[0,1]\).

Proof

We need only prove orthonormality and completeness.

First, orthonormality. Obviously,

$$\begin{aligned} \bigl\langle J^{m-1}_{0}\phi _{i,k},J^{m-1}_{0} \phi _{j,l} \bigr\rangle _{W^{m}_{2,0}}= \langle \phi _{i,k},\phi _{j,l} \rangle _{W^{1}_{2,0}}= \textstyle\begin{cases} 1,&i=j,k=l, \\ 0,&\mbox{else}, \end{cases}\displaystyle \end{aligned}$$

so orthonormality holds.

Second, completeness. We must show that if \(\langle u, J^{m-1}_{0}\phi _{i,k} \rangle _{W^{m}_{2,0}}=0\) for all i, k, then \(u\equiv 0\). In fact,

$$ \bigl\langle J^{m-1}_{0}\phi _{i,k},u \bigr\rangle _{W^{m}_{2,0}}=\bigl\langle \phi _{i,k},u^{(m-1)} \bigr\rangle _{W^{1}_{2,0}}=0 \quad \mbox{for all } i,k \quad \mbox{implies}\quad u^{(m-1)}=0, $$

since \(\{\phi _{i,k}\}\) is complete in \(W^{1}_{2,0}\).

According to Def. 2.2, \(u\equiv 0\). □

Next, we construct the orthonormal basis of \(W^{m}_{2}[0,1]\). The space \(W^{m}_{2,0}[0,1]\) carries \(m+1\) more conditions than \(W^{m}_{2}[0,1]\). Hence, to extend the basis of \(W^{m}_{2,0}[0,1]\) to a basis of \(W^{m}_{2}[0,1]\), we need to find \(m+1\) functions \(g_{k}(x)\in W^{m}_{2}[0,1]\), \(k=0,1,2,\ldots ,m\), such that

$$\begin{aligned}& \bigl\langle g_{i}(x), g_{j}(x)\bigr\rangle _{W^{m}_{2}}=0, \quad i\neq j, \end{aligned}$$
(2.2)
$$\begin{aligned}& \bigl\langle g_{k}(x), g_{k}(x)\bigr\rangle _{W^{m}_{2}}=1, \end{aligned}$$
(2.3)
$$\begin{aligned}& \bigl\langle g_{i}(x), J^{m-1}_{0}\phi _{j,k}(x)\bigr\rangle _{W^{m}_{2}}=0. \end{aligned}$$
(2.4)

It is clear that \(g_{0}(x)=1\) and \(g_{1}(x)=x\) in \(W^{m}_{2}[0,1]\) satisfy Eq. (2.2)–Eq. (2.4). Let \(g_{k}(x)=ax^{k} \in W^{m}_{2}[0,1]\) (\(k=2,\ldots ,m\)). By the definition of the inner product and Eq. (2.2)–Eq. (2.4), we obtain \(a=\frac{1}{k!}\).

Theorem 2.2

$$ \bigl\{ \rho _{j}(x)\bigr\} _{j=1}^{\infty }=\biggl\{ 1,x,\frac{x^{2}}{2},\ldots , \frac{x^{m}}{m!},J^{m-1}_{0} \phi _{1,0}(x),J^{m-1}_{0}\phi _{2,0}(x),J^{m-1}_{0} \phi _{2,1}(x), \ldots ,J^{m-1}_{0}\phi _{i,k}(x),\ldots \biggr\} $$

is the multiscale orthonormal basis of \(W^{m}_{2}[0,1]\).

Proof

According to Th. 2.1 and Eq. (2.2)–Eq. (2.4), it is clear that

$$\begin{aligned} \bigl\langle \rho _{i}(x),\rho _{j}(x)\bigr\rangle _{W^{m}_{2}}= \textstyle\begin{cases} 1,&i=j, \\ 0,&i\neq j. \end{cases}\displaystyle \end{aligned}$$

So \(\{\rho _{j}(x)\}_{j=1}^{\infty }\) is orthogonal.

Next, we only need to prove completeness, that is, if \(\langle u, \rho _{j} \rangle _{W^{m}_{2}}=0\) for all j, then \(u\equiv 0\). In fact,

$$\begin{aligned}& \biggl\langle u, \frac{x^{k}}{k!}\biggr\rangle _{W^{m}_{2}}=0 \quad \mbox{implies} \quad u^{(k)}(0)=0, \quad k=0,1,2,\ldots ,m-1, \end{aligned}$$
(2.5)
$$\begin{aligned}& \biggl\langle u, \frac{x^{m}}{m!}\biggr\rangle _{W^{m}_{2}}=0 \quad \mbox{implies}\quad u^{(m-1)}(1)=0, \end{aligned}$$
(2.6)
$$\begin{aligned}& \bigl\langle u,J^{m-1}_{0}\phi _{i,k}(x)\bigr\rangle _{W^{m}_{2}}=\bigl\langle u^{(m-1)}, \phi _{i,k}(x)\bigr\rangle _{W^{1}_{2,0}}=0 \quad \mbox{implies}\quad u^{(m-1)} \equiv 0. \end{aligned}$$
(2.7)

From Eq. (2.5)–Eq. (2.7), \(u\equiv 0\). □
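Assuming the construction above, the basis of Theorem 2.2 can also be realized numerically. The following sketch is added for illustration only (the helpers `phi`, `J`, and `rho_basis` are not from the paper; scipy's `quad` is used to evaluate the repeated integral \(J^{m-1}_{0}\), and \(m\geq 2\) is assumed):

```python
from math import factorial
from scipy.integrate import quad

def phi(i, k, x):
    """Hat function phi_{i,k} of Eq. (2.1), evaluated at a scalar x."""
    h = 1.0 / 2 ** (i - 1)
    left, mid, right = k * h, (k + 0.5) * h, (k + 1) * h
    if left <= x <= mid:
        val = x - left
    elif mid < x <= right:
        val = right - x
    else:
        val = 0.0
    return 2 ** ((i - 1) / 2) * val

def J(g, m):
    """The (m-1)-fold integral operator J_0^{m-1} (m >= 2), via the single-integral formula."""
    def Jg(x):
        value, _ = quad(lambda s: (x - s) ** (m - 2) * g(s), 0.0, x)
        return value / factorial(m - 2)
    return Jg

def rho_basis(n, m):
    """First n elements of the basis of Theorem 2.2 for W_2^m[0,1], as callables on [0, 1]."""
    basis = [lambda x, j=j: x ** j / factorial(j) for j in range(m + 1)]   # 1, x, ..., x^m/m!
    i, k = 1, 0
    while len(basis) < n:
        basis.append(J(lambda x, i=i, k=k: phi(i, k, x), m))               # J_0^{m-1} phi_{i,k}
        k += 1
        if k == 2 ** (i - 1):   # move on to the next (finer) scale level
            i, k = i + 1, 0
    return basis[:n]
```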

3 ε-approximate solution of high-order BVPs

In this section, we define the ε-approximate solution of Eq. (1.1) and obtain the numerical solution of the BVP by computing such an ε-approximate solution.

Put

$$ Lu=u^{(m)}+p_{1}(x)u^{(m-1)}+ \cdots +p_{m-1}(x)u^{\prime }+p_{m}(x)u, $$

where \(L:W^{m}_{2}[0,1]\to L^{2}[0,1]\).

Theorem 3.1

\(L:W^{m}_{2}[0,1]\to L^{2}[0,1]\) is a bounded linear operator.

Proof

Because \(W^{m}_{2}[0,1]\) is a reproducing kernel space,

$$\begin{aligned} \bigl\vert u^{(k)}(x) \bigr\vert =& \biggl\vert \biggl\langle u(\cdot ), \frac{\partial ^{k} K(x,\cdot )}{\partial x^{k}}\biggr\rangle _{W^{m}_{2}} \biggr\vert \\ \leq& \bigl\Vert u(\cdot ) \bigr\Vert _{W^{m}_{2}} \biggl\Vert \frac{\partial ^{k} K(x,\cdot )}{\partial x^{k}} \biggr\Vert _{W^{m}_{2}}, \quad k=0,1,2, \ldots ,m-1. \end{aligned}$$
(3.1)

By Eq. (3.1), there exist positive constants \(M_{k}\) such that

$$\begin{aligned} \bigl\Vert p_{m-k}(x)u^{(k)}(x) \bigr\Vert _{L^{2}} =& \biggl( \int ^{1}_{0} \bigl(p_{m-k}(x)u^{(k)}(x) \bigr)^{2}\,dx \biggr)^{\frac{1}{2}} \\ \leq & \max_{x\in [0,1]} \bigl\vert p_{m-k}(x) \bigr\vert \biggl( \int ^{1}_{0} \bigl(u^{(k)}(x) \bigr)^{2}\,dx \biggr)^{\frac{1}{2}} \\ \leq& M_{k} \Vert u \Vert _{W^{m}_{2}}, \quad k=0,1,2, \ldots ,m-1. \end{aligned}$$
(3.2)

Therefore

$$\begin{aligned} \bigl\Vert u^{(m)} \bigr\Vert _{L^{2}}^{2}= \int ^{1}_{0} \bigl(u^{(m)} \bigr)^{2}\,dx \leq \sum_{i=0}^{m-1} \bigl(u^{(i)}(0)\bigr)^{2}+ \int ^{1}_{0} \bigl(u^{(m)} \bigr)^{2}\,dx= \Vert u \Vert ^{2}_{W^{m}_{2}}. \end{aligned}$$
(3.3)

From Eq. (3.2) and Eq. (3.3), it follows that

$$ \Vert Lu \Vert _{L^{2}} \leq M \Vert u \Vert _{W^{m}_{2}}, $$

where M is a positive constant. □

Then Eq. (1.1) is equivalent to the following equation:

$$ \textstyle\begin{cases} Lu=f(x), &x \in (0,1) \\ B_{i}u=\alpha _{i}, & i=1,2,\ldots ,m. \end{cases} $$
(3.4)

Zhang et al. [32] proposed the ε-approximate theory for second-order differential equations; based on that idea, we now define the ε-approximate solution of Eq. (3.4).

Definition 3.1

Given \(\varepsilon >0\), if \(\|Lu_{n}-f\|_{L^{2}}^{2}+\sum_{i=1}^{m}(B_{i}u_{n}-\alpha _{i})^{2}< \varepsilon ^{2}\), then \(u_{n}\) is called an ε-approximate solution of Eq. (3.4).

Lemma 3.1

For every \(\varepsilon >0\), there exists \(N>0\) such that, for all \(n>N\),

$$ u_{n}=\sum_{k=1}^{n}c_{k}^{*} \rho _{k} $$
(3.5)

is an ε-approximate solution of Eq. (3.4), where \(c_{k}^{*}\) satisfies

$$ \begin{aligned} & \Biggl\Vert \sum _{k=1}^{n}c_{k}^{*}L \rho _{k} -f \Biggr\Vert _{L^{2}}^{2}+\sum _{l=1}^{m}\Biggl( \sum _{k=1}^{n}c_{k}^{*}B_{l} \rho _{k} -\alpha _{l}\Biggr)^{2} \\ &\quad =\min_{c_{k}}\Biggl[ \Biggl\Vert \sum _{k=1}^{n}c_{k}L\rho _{k} -f \Biggr\Vert _{L^{2}}^{2}+ \sum _{l=1}^{m}\Biggl(\sum _{k=1}^{n}c_{k}B_{l} \rho _{k} -\alpha _{l}\Biggr)^{2}\Biggr]. \end{aligned} $$
(3.6)

Here J is the quadratic form in \(c=(c_{1},\ldots ,c_{n})\) given by

$$ J(c_{1},\ldots ,c_{n})= \Biggl\Vert \sum _{k=1}^{n}c_{k}L\rho _{k} -f \Biggr\Vert _{L^{2}}^{2}+ \sum _{l=1}^{m}\Biggl(\sum _{k=1}^{n}c_{k}B_{l} \rho _{k} -\alpha _{l}\Biggr)^{2}, $$

and \(c^{*}=(c_{1}^{*},\ldots ,c_{n}^{*})\) is a minimum point of J. To find the minimum of J, we set

$$ \frac{\partial }{\partial c_{j}}J(c_{1},\ldots ,c_{n})=0. $$

Since

$$\begin{aligned}& \frac{\partial }{\partial c_{j}}J(c_{1},\ldots ,c_{n}) \\& \quad =2 \sum_{k=1}^{n}c_{k} \langle L\rho _{k},L\rho _{j} \rangle _{L^{2}}-2\langle L\rho _{j},f \rangle _{L^{2}}+2 \sum_{l=1}^{m}\sum _{k=1}^{n}c_{k}B_{l} \rho _{k}B_{l} \rho _{j}-2\sum _{l=1}^{m}B_{l}\rho _{j} \alpha _{l}, \end{aligned}$$

we obtain

$$\begin{aligned} \sum_{k=1}^{n}c_{k} \langle L\rho _{k},L\rho _{j} \rangle _{L^{2}}+ \sum_{l=1}^{m}\sum _{k=1}^{n}c_{k}B_{l} \rho _{k} B_{l}\rho _{j}= \langle L\rho _{j},f \rangle _{L^{2}}+\sum_{l=1}^{m}B_{l} \rho _{j} \alpha _{l}. \end{aligned}$$
(3.7)

Let

$$\begin{aligned}& \mathbf{A_{n}} =\Biggl( \langle L\rho _{k},L\rho _{j} \rangle _{L^{2}}+\sum _{l=1}^{m}B_{l} \rho _{k} B_{l}\rho _{j}\Biggr) _{n\times n}, \\& \mathbf{b_{n}} = \Biggl( \langle L\rho _{j},f \rangle _{L^{2}}+\sum_{l=1}^{m}B_{l} \rho _{j}\alpha _{l} \Biggr)_{n}. \end{aligned}$$

Then Eq. (3.7) changes to

$$\begin{aligned} \mathbf{A_{n}}c=\mathbf{b_{n}}. \end{aligned}$$
(3.8)

According to [32], the unique solution of Eq. (3.8) is the minimum point of J.
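The computation described by Lemma 3.1 amounts to assembling \(\mathbf{A_{n}}\) and \(\mathbf{b_{n}}\) and solving Eq. (3.8). The following sketch (illustrative only; the interface, the use of Gauss–Legendre quadrature, and the function name are assumptions made here) shows one way to do this once \(L\rho _{k}\), \(B_{l}\rho _{k}\), f, and \(\alpha _{l}\) are available:

```python
import numpy as np

def solve_normal_equations(Lrho, Brho, f, alpha, nquad=64):
    """Minimal sketch of Lemma 3.1 / Eq. (3.8); the interface is an assumption made here.

    Lrho  : list of n callables, Lrho[k](x) = (L rho_k)(x) for x in [0, 1]
    Brho  : (m, n) array-like with Brho[l, k] = B_l rho_k
    f     : callable right-hand side f(x)
    alpha : length-m array-like of boundary data alpha_l
    Returns the coefficient vector c* that minimizes the quadratic functional J.
    """
    Brho = np.asarray(Brho, dtype=float)
    alpha = np.asarray(alpha, dtype=float)

    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1] for the L^2 inner products
    t, w = np.polynomial.legendre.leggauss(nquad)
    t, w = 0.5 * (t + 1.0), 0.5 * w

    Lvals = np.array([[Lr(xq) for xq in t] for Lr in Lrho])   # (n, nquad) values of L rho_k
    fvals = np.array([f(xq) for xq in t])                     # (nquad,)  values of f

    A = (Lvals * w) @ Lvals.T + Brho.T @ Brho   # A_n: <L rho_k, L rho_j> + sum_l B_l rho_k B_l rho_j
    b = (Lvals * w) @ fvals + Brho.T @ alpha    # b_n: <L rho_j, f>       + sum_l B_l rho_j alpha_l
    return np.linalg.solve(A, b)                # Eq. (3.8): A_n c = b_n
```

The ε-approximate solution is then assembled as \(u_{n}=\sum_{k=1}^{n}c_{k}^{*}\rho _{k}\), cf. Eq. (3.5).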

4 Theoretical analysis

In this section, the properties of the algorithm, such as uniform convergence, stability, and complexity, are introduced.

4.1 Convergence analysis

Theorem 4.1

Assume that u is the exact solution of Eq. (1.1) and \(u_{n}\) is the ε-approximate solution given by Eq. (3.5). If \(u^{(m+1)}\) is bounded on [0,1], then \(|u-u_{n}|^{2}\leq 2^{-2n}C\), where C is a constant.

Proof

Assume that

$$\begin{aligned} v_{n}(x)=\sum_{j=0}^{m} c_{j} \frac{x^{j}}{j!}+\sum_{i=1}^{n} \sum_{k=0}^{2^{i-1}-1}c_{i,k} J^{m-1}_{0}\phi _{i,k}(x) \end{aligned}$$
(4.1)

satisfies Eq. (3.4), where \(c_{j}=\langle u,\frac{x^{j}}{j!} \rangle _{W^{m}_{2}}\), \(c_{i,k}=\langle u,J^{m-1}_{0}\phi _{i,k} \rangle _{W^{m}_{2}}\). Clearly, \({\lim_{n \to +\infty }}v_{n}=u\).

By Lemma 3.1, we get

$$\begin{aligned} \Vert u-u_{n} \Vert _{W^{m}_{2}}^{2} \leq& \bigl\Vert L^{-1} \bigr\Vert ^{2} \Vert Lu-Lu_{n} \Vert _{L^{2}}^{2} \\ \leq& \bigl\Vert L^{-1} \bigr\Vert ^{2}\Biggl( \Vert Lu-Lu_{n} \Vert _{L^{2}}^{2}+\sum _{i=1}^{m} \vert B_{i}u_{n}-\alpha _{i} \vert ^{2}\Biggr) \\ \leq& \bigl\Vert L^{-1} \bigr\Vert ^{2}\Biggl( \Vert Lu-Lv_{n} \Vert _{L^{2}}^{2}+\sum _{i=1}^{m} \vert B_{i}v_{n}-\alpha _{i} \vert ^{2}\Biggr) \\ \leq& \bigl\Vert L^{-1} \bigr\Vert ^{2}\Biggl( \Vert Lu-Lv_{n} \Vert _{L^{2}}^{2}+\sum _{i=1}^{m} \vert B_{i}v_{n}-B_{i}u \vert ^{2}\Biggr) \\ \leq& \bigl\Vert L^{-1} \bigr\Vert ^{2}\Biggl( \Vert L \Vert ^{2} \Vert u-v_{n} \Vert ^{2}_{W^{m}_{2}}+ \sum_{i=1}^{m} \Vert B_{i} \Vert ^{2} \Vert u-v_{n} \Vert ^{2}_{W^{m}_{2}}\Biggr) \\ \leq& M \Vert u-v_{n} \Vert ^{2}_{W^{m}_{2}}. \end{aligned}$$

That is,

$$\begin{aligned} \Vert u-u_{n} \Vert _{W^{m}_{2}}^{2} \leq& M \Vert u-v_{n} \Vert _{W^{m}_{2}}^{2} \\ =&M \Biggl\Vert u- \sum^{m}_{j=0} c_{j}\frac{x^{j}}{j!}-\sum_{i=1}^{n} \sum_{k=0}^{2^{i-1}-1} c_{i,k} J^{m-1}_{0}\phi _{i,k} \Biggr\Vert _{W^{m}_{2}}^{2} \\ =&M\sum_{i=n+1}^{\infty }\sum _{k=0}^{2^{i-1}-1} (c_{i,k})^{2} . \end{aligned}$$

We can obtain \((c_{i,k})^{2} \leq (\frac{1}{2} )^{3i}C_{1}\). In fact,

$$ \vert c_{i,k} \vert ^{2}= \bigl\vert \bigl\langle u,J^{m-1}_{0}\phi _{i,k}\bigr\rangle _{W^{m}_{2,0}} \bigr\vert ^{2}= \biggl\vert \int _{0}^{1} u^{(m)}(\phi _{i,k})^{\prime }\,dx \biggr\vert ^{2}= \biggl\vert \int _{0}^{1} u^{(m+1)} \phi _{i,k}\,dx \biggr\vert ^{2}. $$

According to Hölder’s inequality,

$$ \biggl\vert \int _{0}^{1} u^{(m+1)}(\phi _{i,k})\,dx \biggr\vert ^{2} = \biggl\vert \int _{ \frac{k}{2^{i-1}}}^{\frac{k+1}{2^{i-1}}} u^{(m+1)}(\phi _{i,k})\,dx \biggr\vert ^{2} \leq \int _{\frac{k}{2^{i-1}}}^{\frac{k+1}{2^{i-1}}} \bigl(u^{(m+1)} \bigr)^{2}\,dx \int _{\frac{k}{2^{i-1}}}^{\frac{k+1}{2^{i-1}}} (\phi _{i,k})^{2} \,dx. $$

Because \(u^{(m+1)}\) is bounded, \(|\int _{\frac{k}{2^{i-1}}}^{\frac{k+1}{2^{i-1}}} (u^{(m+1)})^{2}\,dx| \leq \frac{1}{2^{i-1}}M\). Then

$$\begin{aligned} \vert c_{i,k} \vert ^{2} \leq& \frac{1}{2^{i-1}}M \int _{\frac{k}{2^{i-1}}}^{ \frac{k+1}{2^{i-1}}} (\phi _{i,k})^{2} \,dx \\ =&M\biggl( \int _{\frac{k}{2^{i-1}}}^{ \frac{k+1/2}{2^{i-1}}} \biggl(x-\frac{k}{2^{i-1}} \biggr)^{2}\,dx+ \int _{ \frac{k+1/2}{2^{i-1}}}^{\frac{k+1}{2^{i-1}}} \biggl(\frac{k+1}{2^{i-1}}-x \biggr)^{2}\,dx \biggr)\leq \biggl(\frac{1}{2} \biggr)^{3i} C_{1}, \end{aligned}$$

since the factor \(2^{i-1}\) contained in \(\phi _{i,k}^{2}\) cancels \(\frac{1}{2^{i-1}}\), and each of the two integrals equals \(\frac{1}{3} (\frac{1}{2^{i}} )^{3}\).

Therefore

$$ \Vert u-u_{n} \Vert _{W^{m}_{2}}^{2}\leq \sum _{i=n+1}^{\infty }\sum _{k=0}^{2^{i-1}-1} \biggl(\frac{1}{2} \biggr)^{3i}C_{1} =\frac{C_{1}}{2}\sum _{i=n+1}^{\infty } \biggl(\frac{1}{4} \biggr)^{i} \leq \biggl(\frac{1}{2} \biggr)^{2n}M, $$

where M is a constant. Consequently,

$$ \bigl\vert u(x)-u_{n}(x) \bigr\vert ^{2}= \bigl\vert \bigl\langle u-u_{n},K(x,y) \bigr\rangle _{W^{m}_{2}} \bigr\vert ^{2} \leq \bigl( \Vert u-u_{n} \Vert _{W^{m}_{2}} \bigl\Vert K(x,y) \bigr\Vert _{W^{m}_{2}} \bigr)^{2} \leq 2^{-2n}C, $$

where C is a constant, \(K(x,y)\) is the reproducing kernel of \(W_{2}^{m}\). □

From Theorem 4.1, \(u_{n}\) uniformly converges to u.

4.2 Stability analysis

It is well known that if A is an invertible symmetric matrix, then the spectral condition number of A is

$$ \operatorname{cond}(\mathbf{A})_{2}= \biggl\vert \frac{\lambda _{1}}{\lambda _{n}} \biggr\vert , $$

where \(\lambda _{1}\) and \(\lambda _{n}\) are the maximum and minimum eigenvalues of A respectively.

Obviously, the matrix A of Eq. (3.8) is an invertible symmetric matrix. Therefore, to prove the stability of the algorithm, it suffices to bound its eigenvalues.

Lemma 4.1

Suppose \(\lambda \mathbf{x=Ax}\), \(\|\mathbf{x}\|=1\), where \(\mathbf{x}=(x_{1},\ldots ,x_{n})^{T}\) is an eigenvector associated with λ. Then

$$ C^{2} \leq \lambda \leq \Vert L \Vert ^{2}+\sum _{l=1}^{m} \Vert B_{l} \Vert ^{2}. $$

Proof

According to \(\lambda \mathbf{x=Ax}\),

$$\begin{aligned} \lambda x_{i}=\sum_{j=1}^{n}a_{ij}x_{j}&= \sum_{j=1}^{n} \Biggl( \langle L\rho _{i}, L\rho _{j} \rangle _{L^{2}}+\sum _{l=1}^{m}B_{l} \rho _{i}B_{l}\rho _{j} \Biggr)x_{j} \\ &=\Biggl\langle L \rho _{i},\sum_{j=1}^{n}x_{j}L \rho _{j} \Biggr\rangle _{L^{2}}+\sum _{l=1}^{m}B_{l} \rho _{i}\sum_{j=1}^{n}x_{j}B_{l} \rho _{j}. \end{aligned}$$
(4.2)

Multiplying both sides of (4.2) by \(x_{i}\) and summing over \(i = 1,2,\ldots ,n\), we get

$$\begin{aligned} \lambda =&\lambda \sum_{j=1}^{n}x_{j}^{2} \\ =&\Biggl\langle \sum_{i=1}^{n}x_{i}L \rho _{i},\sum_{j=1}^{n}x_{j}L \rho _{j} \Biggr\rangle _{L^{2}}+\sum _{l=1}^{m} \Biggl(\sum _{i=1}^{n}x_{i}B_{l} \rho _{i}\sum_{j=1}^{n}x_{j}B_{l} \rho _{j} \Biggr) \\ =& \Biggl\Vert \sum_{i=1}^{n}x_{i}L \rho _{i} \Biggr\Vert _{L^{2}}^{2}+\sum _{l=1}^{m} \Biggl(B_{l}\sum _{j=1}^{n}x_{j}\rho _{j} \Biggr)^{2}\leq \Vert L \Vert ^{2} \sum_{i=1}^{n}x_{i}^{2}+ \sum_{l=1}^{m} \Vert B_{l} \Vert ^{2}\sum_{i=1}^{n}x_{i}^{2} \\ =& \Vert L \Vert ^{2}+\sum_{l=1}^{m} \Vert B_{l} \Vert ^{2}, \end{aligned}$$

where the inequality uses \(\Vert \sum_{i=1}^{n}x_{i}\rho _{i} \Vert _{W^{m}_{2}}^{2}=\sum_{i=1}^{n}x_{i}^{2}\), which follows from the orthonormality of \(\{\rho _{i}\}\).

So

$$ \lambda \leq \Vert L \Vert ^{2}+\sum _{l=1}^{m} \Vert B_{l} \Vert ^{2}. $$

In addition, \(\lambda \ge \|\sum_{i=1}^{n}x_{i}L\rho _{i}\|_{L^{2}}^{2}\).

Let \(u=\sum_{i=1}^{n}x_{i}\rho _{i}\); then \(\lambda \ge \|Lu\|_{L^{2}}^{2}\). Note that \(\|u\|_{W^{m}_{2}}=\|\mathbf{x}\|=1\) by the orthonormality of \(\{\rho _{i}\}\). According to the inverse operator theorem [21], \(\|Lu\|^{2}_{L^{2}}\ge C^{2}\|u\|^{2}_{W^{m}_{2}}\).

So \(\lambda \ge \|Lu\|^{2}_{L^{2}} \ge C^{2}\|u\|^{2}_{W^{m}_{2}}=C^{2}\).

To sum up

$$ C^{2} \leq \lambda \leq \Vert L \Vert ^{2}+\sum _{l=1}^{m} \Vert B_{l} \Vert ^{2}. $$

 □

From Lemma 4.1, we get

$$ \operatorname{cond}(\mathbf{A})_{2}= \biggl\vert \frac{\lambda _{1}}{\lambda _{n}} \biggr\vert \leq \frac{ \Vert L \Vert ^{2}+\sum_{l=1}^{m} \Vert B_{l} \Vert ^{2}}{C^{2}}. $$

That is, the condition number of \(\mathbf{A_{n}}\) is bounded independently of n, and hence the presented method is stable.
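For a given n, the stability estimate can also be checked numerically. A minimal sketch (assuming numpy; here A stands for a symmetric positive definite matrix such as \(\mathbf{A_{n}}\) from Eq. (3.8)):

```python
import numpy as np

def spectral_condition_number(A):
    """cond_2(A) = lambda_max / lambda_min for a symmetric positive definite matrix A."""
    eigvals = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix, in ascending order
    return eigvals[-1] / eigvals[0]
```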

4.3 Complexity analysis

Complexity analysis includes both time complexity and space complexity. Since the storage required by the algorithm is modest, we focus on the time complexity, which ultimately determines the efficiency of the algorithm.

Theorem 4.2

The time complexity of the algorithm is \(O(n^{3})\).

Proof

There are four steps in the calculation of the ε-approximate solution \(u_{n}(x)\) of Eq. (3.5).

First, the calculation of the matrix \(\mathbf{A_{n}}\) in Eq. (3.8). The matrix \(\mathbf{A_{n}}\) is

$$ \mathbf{A_{n}} = \Biggl( \langle L\rho _{k},L\rho _{j} \rangle +\sum _{l=1}^{m}B_{l} \rho _{k} B_{l}\rho _{j} \Biggr)_{n\times n}. $$

Let the numbers of multiplications required to compute \(\langle L\rho _{k},L\rho _{j} \rangle \) and \(B_{l}\rho _{k} B_{l}\rho _{j}\) be the constants \(C_{1}\) and \(C_{2}\), respectively. Then each entry of \(\mathbf{A_{n}}\) requires \(C_{1}+mC_{2}\) multiplications. Since \(\mathbf{A_{n}}\) is symmetric, only the entries on and above the main diagonal need to be computed: the first row contributes n entries, the second row \(n-1\), and so on, so the total number of multiplications required to compute \(\mathbf{A_{n}}\) is

$$ \frac{n(n+1)}{2}(C_{1}+m C_{2}). $$

Second, the calculation of the vector \(\mathbf{b_{n}}\) in Eq. (3.8). The vector \(\mathbf{b_{n}}\) is

$$ \mathbf{b_{n}} = \Biggl( \langle L\rho _{j},f \rangle +\sum_{l=1}^{m}B_{l} \rho _{j} \alpha _{l} \Biggr)_{n}. $$

Let the numbers of multiplications required to compute \(\langle L\rho _{j},f \rangle \) and \(B_{l}\rho _{j}\alpha _{l}\) be the constants \(C_{3}\) and \(C_{4}\), respectively. Then each entry of \(\mathbf{b_{n}}\) requires \(C_{3}+mC_{4}\) multiplications, so the total number of multiplications required to compute \(\mathbf{b_{n}}\) is

$$ (C_{3}+mC_{4})n. $$

Third, solve Eq. (3.8). We solve the system by Gaussian elimination, which requires

$$ \frac{n(n+1)(2n+1)}{6}. $$

Fourth, the calculation of \(u_{n}\). Evaluating \(u_{n}\) requires n further multiplications.

To sum up, the total number of multiplications is

$$ \frac{n(n+1)}{2}(C_{1}+m C_{2})+(C_{3}+mC_{4})n+ \frac{n(n+1)(2n+1)}{6}+n=O\bigl(n^{3}\bigr). $$

 □

5 Numerical experiments

In this section, we give several numerical experiments to verify the effectiveness of the proposed algorithm. We denote by \(u_{n}(x)\) the approximation to the exact solution \(u(x)\) obtained by the numerical schemes in the present work, and we measure the errors in the following sense:

$$ e_{n}(x)= \bigl\vert u_{n}(x)-u(x) \bigr\vert , $$

where n is the number of basis functions, and C.R. denotes the convergence order. All numerical experiments were carried out in Mathematica 9.0.
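The paper does not state explicitly how C.R. is computed; a common choice, consistent with the error bound of Theorem 4.1, would be

$$ \mathrm{C.R.}\approx \log _{2} \frac{\max_{x} e_{n}(x)}{\max_{x} e_{2n}(x)}, $$

i.e., the observed rate at which the maximum error decreases when the number of basis functions is doubled.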

Example 5.1

To model the shear deformation of sandwich beams, Ref. [34] considers the following third-order BVP:

$$\begin{aligned} \textstyle\begin{cases} u^{\prime \prime \prime }-k^{2} u^{\prime }+r=0, \quad x\in (0,1) \\ u^{\prime }(0)=u(\frac{1}{2})=u^{\prime }(1)=0, \end{cases}\displaystyle \end{aligned}$$

where the physical constants are \(k=5\) and \(r=1\), and \(u(x)\) describes the shear deformation of the sandwich beam. The analytic solution of this problem is

$$ u(x)= \frac{r (k(2x-1)-2\sinh (kx)+2\cosh (kx)\tanh (\frac{k}{2}) )}{2k^{3}}. $$

The numerical results are given in Table 1.

Table 1 Absolute errors of Example 5.1
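As a sanity check on the closed form above (a sketch added for this presentation, assuming sympy; it is not part of the paper's algorithm), one can verify symbolically that it satisfies the ODE and all three boundary conditions:

```python
import sympy as sp

x, k, r = sp.symbols('x k r', positive=True)

# Closed-form solution of Example 5.1, as written above
u = r * (k * (2 * x - 1) - 2 * sp.sinh(k * x)
         + 2 * sp.cosh(k * x) * sp.tanh(k / 2)) / (2 * k ** 3)

def is_zero(expr):
    # rewrite hyperbolic functions via exp so that sympy can cancel them exactly
    return sp.simplify(expr.rewrite(sp.exp)) == 0

print(is_zero(sp.diff(u, x, 3) - k**2 * sp.diff(u, x) + r))   # ODE residual  -> True
print(is_zero(sp.diff(u, x).subs(x, 0)))                      # u'(0) = 0     -> True
print(is_zero(u.subs(x, sp.Rational(1, 2))))                  # u(1/2) = 0    -> True
print(is_zero(sp.diff(u, x).subs(x, 1)))                      # u'(1) = 0     -> True
```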

Example 5.2

Our second example is the fourth-order BVP [31]

$$\begin{aligned} \textstyle\begin{cases} u^{(4)}-2u=-1-(8\pi ^{4}-1)\cos (2\pi x),\quad x\in (0,1) \\ u(0)=u(1)=u^{\prime }(0)=u^{\prime }(1)=0. \end{cases}\displaystyle \end{aligned}$$

Table 2 and Fig. 2 compare the errors of the multilevel augmentation method [31] and of the present method for this BVP.

Figure 2. The error for Ex. 5.2 (\(n=515\))

Table 2 Absolute errors of Example 5.2

Example 5.3

Our third example is a fifth-order BVP

$$\begin{aligned} \textstyle\begin{cases} u^{(5)}+u^{(4)}-3xu^{\prime \prime \prime }-u^{\prime \prime }+u^{ \prime }=(2-3x)e^{x}-1, \quad x\in (0,1) \\ u(0)=u^{\prime }(0)=0,\qquad u^{\prime \prime }(0)=1, \\ u^{\prime }(1)=e{-}1,\qquad u^{\prime \prime }(1)=e. \end{cases}\displaystyle \end{aligned}$$

The exact solution of this example is \(u(x)=e^{x}-x-1\). Figure 3 shows the error of Example 5.3.

Figure 3. The error for Ex. 5.3 (\(n=67\))

6 Conclusion

In summary, this study used a set of multiscale orthonormal bases to compute ε-approximate solutions of higher-order BVPs. The convergence and stability of the method are established theoretically, and its feasibility is demonstrated through numerical experiments. The theoretical analysis and numerical experiments suggest that the method can be extended to other linear models, such as linear integral equations, differential equations, and fractional differential equations.

Availability of data and materials

The authors declare that all data and materials in this paper are available and veritable.

References

  1. Timoshenko, S.P., Gere, J.M.: Theory of Elastic Stability. McGraw-Hill Kogakusha, Tokyo (1961)

  2. Jackiewicz, Z., Mittelmann, H.: Construction of IMEX DIMSIMs of high order and stage order. Appl. Numer. Math. 121, 234–248 (2017)

  3. Ling, X., Gao, N., Hu, A.: Dynamic analysis of a planetary gear system with multiple nonlinear parameters. J. Comput. Appl. Math. 327, 325–340 (2017)

  4. Moradi, A., Sharififi, M., Abdi, A.: Transformed implicit-explicit second derivative diagonally implicit multistage integration methods with strong stability preserving explicit part. Appl. Numer. Math. 156, 14–31 (2020)

  5. Lei, X., Li, J.: Transversal effects of high order numerical schemes for compressible fluid flows. Appl. Math. Mech. 3, 343–354 (2019)

  6. Lü, X., Cui, M.: Existence and numerical method for nonlinear third-order boundary value problem in the reproducing kernel space. Bound. Value Probl. 2010, 459754 (2010)

  7. Ji, Y., Guo, Y., Yao, Y.: Positive solutions for higher order differential equations with integral boundary conditions. Bound. Value Probl. 2015, 214 (2015)

  8. Xu, L., Chen, H.: Existence and multiplicity of solutions for fourth-order elliptic equations of Kirchhoff type via genus theory. Bound. Value Probl. 2014, 212 (2014)

  9. Cao, J., Wang, Z., Xu, C.: A high-order scheme for fractional ordinary differential equations with the Caputo–Fabrizio derivative. Commun. Appl. Math. Comput. 2(2), 179–199 (2020)

  10. Toutounian, F., Tohidi, E., Shateyi, S.: A collocation method based on the Bernoulli operational matrix for solving high-order linear complex differential equations in a rectangular domain. Abstr. Appl. Anal. 4, 215–222 (2013)

  11. Mirzaee, F., Samadyar, N., Alipour, S.: Numerical solution of high order linear complex differential equations via complex operational matrix method. SeMA J. 76, 1–13 (2019)

  12. Mirzaee, F., et al.: Parameters estimation of HIV infection model of CD4+ T-cells by applying orthonormal Bernstein collocation method. Int. J. Biomath. 11(2), 1850020 (2018)

  13. Raslan, K., et al.: Numerical solution of high-order linear integro differential equations with variable coefficients using two proposed schemes for rational Chebyshev functions. New Trends Math. Sci. 4(3), 22–35 (2016)

  14. Yüzbaşı, Ş., Yildirim, A., et al.: A collocation approach for solving high-order linear Fredholm–Volterra integro-differential equations. Math. Comput. Model. 55, 547–563 (2012)

  15. Mirzaee, F., Rafei, Z.: The block by block method for the numerical solution of the nonlinear two-dimensional Volterra integral equations. J. King Saud Univ., Sci. 23, 191–195 (2011)

  16. Mirzaee, F., Hoseini, S.F.: Solving systems of linear Fredholm integro-differential equations with Fibonacci polynomials. Ain Shams Eng. J. 5, 271–283 (2014)

  17. Mirzaee, F., Hoseini, S.F.: A new collocation approach for solving systems of high-order linear Volterra integro-differential equations with variable coefficients. Appl. Math. Comput. 311, 272–282 (2017)

  18. Mirzaee, F., Bimesl, S.: An efficient numerical approach for solving systems of high-order linear Volterra integral equations. Sci. Iran. 21(6), 2250–2263 (2014)

  19. Mirzaee, F., Bimesl, S., Tohidi, E.: A numerical framework for solving high-order pantograph delay Volterra integro-differential equations. Kuwait J. Sci. 43(1), 69–83 (2016)

  20. Mirzaee, F., Bimesl, S.: Numerical solutions of systems of high-order Fredholm integro-differential equations using Euler polynomials. Appl. Math. Model. 39, 6767–6779 (2015)

  21. Mirzaee, F., Bimesl, S.: Application of Euler matrix method for solving linear and a class of nonlinear Fredholm integro-differential equations. Mediterr. J. Math. 11, 999–1018 (2014)

  22. Samadyar, N., Mirzaee, F.: Orthonormal Bernoulli polynomials collocation approach for solving stochastic Itô–Volterra integral equations of Abel type. Int. J. Numer. Model. 2019, e2688 (2019)

  23. Ren, J., Shi, D., Vong, S.: High accuracy error estimates of a Galerkin finite element method for nonlinear time fractional diffusion equation. Numer. Methods Partial Differ. Equ. 36, 284–301 (2020)

  24. Li, Z., Liang, Z., Yan, Y.: High-order numerical methods for solving time fractional partial differential equations. J. Sci. Comput. 71, 785–803 (2017)

  25. Mirzaee, F., Bimesl, S.: A new approach to numerical solution of second-order linear hyperbolic partial differential equations arising from physics and engineering. Results Phys. 3, 241–247 (2013)

  26. Mei, L.: A novel method for nonlinear impulsive differential equations in broken reproducing kernel space. Acta Math. Sci. 40, 723–733 (2020)

  27. Zhao, Z., Lin, Y., Niu, J.: Convergence order of the reproducing kernel method for solving boundary value problems. Math. Model. Anal. 21(4), 466–477 (2016)

  28. Tirmizi, I.A., Twizell, E.H., Islam, S.U.: A numerical method for third-order non-linear boundary-value problems in engineering. Int. J. Comput. Math. 82, 103–109 (2005)

  29. Li, X., Wu, B.: Reproducing kernel method for singular multipoint boundary value problems. Math. Sci. 6(1), 1–5 (2016)

  30. Li, X., Wu, B.: A new kernel functions based approach for solving 1-D interface problems. Appl. Math. Comput. 380, 125276 (2020)

  31. Chen, Z., Wu, B., Xu, Y.: Multilevel augmentation methods for differential equations. Adv. Comput. Math. 24, 213–238 (2006)

  32. Zhang, Y., Sun, H., Jia, Y., Lin, Y.: An algorithm of the boundary value problem based on multiscale orthogonal compact base. Appl. Math. Lett. 101, 106044 (2020)

  33. Zheng, Y., Lin, Y., Shen, Y.: A new multiscale algorithm for solving second order boundary value problems. Appl. Numer. Math. 156, 528–541 (2020)

  34. Haque, M., Baluch, M.H., Mohsen, M.F.N.: Solution of multiple point, nonlinear boundary value problems by method of weighted residuals. Int. J. Comput. Math. 19, 69–84 (1986)


Acknowledgements

Not applicable.

Authors’ information

Zhuhai Campus, Beijing Institute of Technology, Zhuhai, Guangdong, 519085, China

Funding

This work has been supported by the Basic and Applied Basic Research Project of Zhuhai City (ZH22017003200026PWC) and the Characteristic Innovative Scientific Research Project of Guangdong Province (2019KTSCX217).


Contributions

This is to declare that all authors have contributed equally and significantly to the contents of the paper. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Yingchao Zhang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zhang, Y., Mei, L. & Lin, Y. A new method for high-order boundary value problems. Bound Value Probl 2021, 48 (2021). https://doi.org/10.1186/s13661-021-01527-4

