
Enhanced shifted Jacobi operational matrices of integrals: spectral algorithm for solving some types of ordinary and fractional differential equations

Abstract

We provide here a novel approach for solving IVPs in ODEs and MTFDEs numerically by means of a class of MSJPs. Using the SCM, we build OMs for RIs and the RLFI of MSJPs as part of our process. These constructions guarantee accurate and efficient numerical computations. We provide theoretical assurances of the algorithm's efficacy by establishing its convergence and carrying out an error analysis. We offer five numerical examples to prove that our method is accurate and applicable. Through these examples, we demonstrate the greater accuracy and efficiency of our approach by comparing our results with previously published findings. Tables and graphs show that the approximate solutions agree very well with the exact ones.

1 Introduction

A subfield of mathematics known as fractional calculus has recently attracted a lot of interest because it involves integrals and derivatives of noninteger order. Complex systems displaying long-term memory effects and anomalous diffusion phenomena, such as heat transport, can be effectively modeled and analyzed using this mathematical technique [1–3]. Finance, biology, engineering, physics, and many other branches of applied science are all included [4–8].

There has been a lot of research on numerical methods for solving IVPs and BVPs in ordinary differential equations and partial differential equations (e.g., [9–24]). To numerically solve various types of DEs, OMs constructed from orthogonal and nonorthogonal polynomials have been extensively used [25–32]. As far as accuracy and computational efficiency are concerned, many algorithms have shown promise. Nonetheless, new avenues should be investigated to improve the numerical solutions in terms of accuracy and efficiency. Our new method numerically solves ODEs of the form

$$ y^{(n)}(\mathfrak{z})+ \sum_{q=0}^{n-1} \eta _{q} y^{(q)}( \mathfrak{z})=f_{1}( \mathfrak{z}),\quad \mathfrak{z}\in [0, \mathfrak{L}], n=1,2,3,\dots , $$
(1.1)

and, for solving MTFDEs,

$$ D^{\nu}y(\mathfrak{z})+ \sum_{i=0}^{k} \gamma _{i} D^{\beta _{i}}y( \mathfrak{z})+\gamma _{k+1}y( \mathfrak{z})=f_{2}(\mathfrak{z}),\quad \mathfrak{z}\in [0,\mathfrak{L}], $$
(1.2)

subject to the ICs

$$ y^{(j)}(0)=\alpha _{j},\quad j=0,1,\dots ,n-1, $$
(1.3)

where \(\eta _{i}\), \(\alpha _{i}\) (\(i=0,1,\dots ,n-1\)), \(\beta _{i}\), \(\gamma _{i}\) (\(i=0,1,\dots ,k\)), \(\gamma _{k+1}\), and ν are constants such that \(n-1\le \nu < n\), \(0<\beta _{1}<\beta _{2}<\cdots <\beta _{k}<\nu \). The functions \(f_{1}\) and \(f_{2}\) are supposed to be continuous.

First, we build OMs for RIs and the RLFI of MSJPs using our technique. Then we apply the SCM. These constructions guarantee efficient and accurate numerical computations and allow us to obtain very close approximations of the solutions. Following these basic stages, the suggested method solves ODEs (1.1) and MTFDEs (1.2) subject to ICs (1.3):

  1. (i)

    We transform equations (1.1) and (1.2) together with ICs (1.3) into an equivalent form with homogeneous conditions.

  2. (ii)

    Fully integrated forms of (1.1) and (1.2) are obtained by applying RIs and RLFI, respectively. This conversion allows for a more comprehensive representation of the problem.

  3. (iii)

    In the integrated forms of (1.1) and (1.2), the solution and all of its RIs and RLFI are approximated by using the constructed OMs to write them as linear combinations of MSJPs, followed by the application of the SCM.

  4. (iv)

    The systems of algebraic equations obtained in (iii) are solved by a suitable numerical solver, which provides the required numerical solutions.

We examine the convergence properties and perform a comprehensive error analysis to prove that our suggested algorithm works. To prove that our method is accurate and efficient, we offer theoretical guarantees. We also include five numerical examples that cover a variety of IVPs of the forms (1.1)–(1.3). By comparing our findings with those of other researchers, we demonstrate that our method is more precise and efficient. The provided graphs and tables show that the exact and approximate solutions correspond very well.

One notable aspect of our suggested approach is using MSJPs. An innovative method for solving the aforementioned IVPs is presented by making use of these polynomials and the OMs that are linked with them. Advantages of MSJPs over current approaches include better accuracy, faster convergence, and lower computing cost.

The paper is outlined as follows. The definitions and properties of the RLFI and RIs are provided in Sect. 2. Section 3 details several features of SJPs and MSJPs. In Sect. 4 the main emphasis is placed on developing new OMs for the RLFI and RIs of MSJPs; this is done to construct the algorithms used to resolve the IVPs (1.1)–(1.3). In Sect. 5, we lay out our approach to solving IVPs in ODEs and MTFDEs with the help of the built OMs and the SCM. The suggested method is the subject of the theoretical examination in Sect. 6: as part of our investigation of the convergence characteristics of the method, we run an error analysis, and theoretical assurances regarding the precision and performance of the algorithm are deduced and examined. We provide five numerical examples in Sect. 7 to verify that our method is accurate and applicable. These examples span a variety of IVPs of the forms (1.1)–(1.3). We show that our method is more accurate and efficient in comparison with those published earlier. There is a very close match between the exact and approximate solutions, as seen in the tables and graphs. We highlight the merits, limits, and prospective enhancements of our algorithm in Sect. 8, where we also summarize the key findings and offer conclusions based on our study.

2 Preliminaries and notation

This section introduces the key ideas and tools needed to construct the suggested approach; they underpin our method and help us solve the problems under consideration. In this context, the Riemann–Liouville fractional integral of order \(\nu >0\) is defined as follows [8].

Definition 2.1

$$ I^{\nu}f(x)=\frac {1}{\Gamma (\nu )} \int ^{x}_{0} (x-\tau )^{\nu -1}f( \tau )\,d\tau , \quad \nu >0, x >0, $$
(2.1)

and \(I^{0}f(x)=f(x)\), where \(m-1\le \nu < m\), and \(m\in N\) is the smallest integer greater than ν.

For \(\mu ,\nu \geq 0\) and \(\gamma >-1\), the following properties of the RLFI operator are satisfied:

$$\begin{aligned}& I^{\mu }I^{\nu } f(x) = I^{\mu +\nu }f(x), \end{aligned}$$
(2.2)
$$\begin{aligned}& I^{\mu }I^{\nu }f(x) = I^{\nu }I^{\mu }f(x), \end{aligned}$$
(2.3)
$$\begin{aligned}& I^{\mu }x^{\gamma } = \frac {\Gamma (\gamma +1)}{\Gamma (\gamma +\mu +1)} x^{\gamma +\mu }, \end{aligned}$$
(2.4)
$$\begin{aligned}& I^{\nu}\bigl(\lambda _{1} h_{1}(x)+\lambda _{2} h_{2}(x)\bigr) = \lambda _{1} I^{\nu}h_{1}(x)+\lambda _{2} I^{\nu}h_{2}(x). \end{aligned}$$
(2.5)
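For illustration, property (2.4) can be checked numerically against Definition 2.1. The following Python sketch is illustrative only (the parameter values and the use of scipy are our assumptions, not part of the paper's Mathematica implementation); the order is taken greater than 1 so that plain quadrature applies without an endpoint singularity.

```python
# Sketch: numerically verify the RLFI power rule (2.4) against Definition 2.1.
# Illustrative only; the parameter values below are arbitrary test choices.
from math import gamma
from scipy.integrate import quad

def rlfi_quad(f, mu, x):
    """Riemann-Liouville fractional integral (2.1) of f, of order mu, at x, via quadrature."""
    val, _ = quad(lambda tau: (x - tau) ** (mu - 1) * f(tau), 0.0, x)
    return val / gamma(mu)

mu, gam, x = 1.5, 2.0, 1.3               # assumed values (mu > 1 avoids an endpoint singularity)
numerical = rlfi_quad(lambda t: t ** gam, mu, x)
closed_form = gamma(gam + 1) / gamma(gam + mu + 1) * x ** (gam + mu)
print(numerical, closed_form)            # the two values agree to quadrature accuracy
```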

The RLFD of order \(\nu > 0\), denoted by \({}^{R}{}D^{\nu}\), is defined as follows:

$$ {}^{R}{}D^{\nu}f(x)=\frac{d^{m}}{dx^{m}}\bigl(I^{m-\nu}f(x) \bigr), \quad x > 0. $$
(2.6)

On the other hand, the CFD of order ν, denoted by \({}^{C}{}D^{\nu}\), is defined as follows:

$$ {}^{C}{}D^{\nu}f(x)=\frac {1}{\Gamma (m-\nu )} \int _{0}^{x}(x- \tau )^{m-\nu -1}f^{(m)}( \tau )\,d\tau , \quad x > 0, $$
(2.7)

which can be written in the form

$$ {}^{C}{}D^{\nu}f(x)=\frac{d^{m}}{dx^{m}}\bigl(I^{m-\nu}f^{(m)}(x) \bigr),\quad x > 0. $$
(2.8)

The CFD satisfies the following properties:

$$\begin{aligned}& {}^{C}{}D^{\nu} C = 0\quad (C \text{ is constant}), \end{aligned}$$
(2.9)
$$\begin{aligned}& I^{\nu }{}^{C}{}D^{\nu}f(x) = f(x)-\sum _{j=0}^{m-1} \frac{f^{(j)}(0^{+})}{j!} x^{j}, \end{aligned}$$
(2.10)
$$\begin{aligned}& {}^{C}{}D^{\nu}x^{\gamma } = \frac {\Gamma (\gamma +1)}{\Gamma (\gamma +1-\nu )} x^{\gamma -\nu }, \end{aligned}$$
(2.11)
$$\begin{aligned}& {}^{C}{}D^{\nu}\bigl(\lambda _{1} h_{1}(x)+\lambda _{2} h_{2}(x)\bigr) = \lambda _{1} {}^{C}{}D^{\nu}h_{1}(x)+\lambda _{2} {}^{C}{}D^{\nu}h_{2}(x). \end{aligned}$$
(2.12)

The relation between the RLFD and CFD is given by [8, Eq. (2.4.6)]

$$ {}^{C}{}D^{\nu}f(x)={}^{R}{}D^{\nu}f(x)- \sum_{j=0}^{m-1} \frac{f^{(j)}(0)}{\Gamma (j-\nu +1)} x^{j-\nu},\quad m-1\le \nu < m, $$
(2.13)

so that if \(f^{(j)}(0)=0\), \(j=0,1,\dots ,m-1\), then

$$ {}^{R}{}D^{\nu}f(x)={}^{C}{}D^{\nu}f(x). $$
(2.14)

To implement the proposed algorithm, we must define the q-times repeated integral of \(f(x)\) as follows.

Definition 2.2

Let f be a continuous function on the real line. Then the \(q\)th repeated integral of f, \(J^{q} f\), is defined as follows:

$$ J^{q} f(x)= \overset{q-\text{times}}{\overbrace{ \int ^{x}_{0} \int ^{x}_{0}\dots \int ^{x}_{0}}}f( \tau )\, \overset{q-\text{times}}{d \tau \,d\tau \dots d\tau}, $$
(2.15)

which is known as the q-fold integral and has the form [33, Eq. (2.16)]

$$ J^{q} f(x)=\frac {1}{(q-1)!} \int ^{x}_{0} (x-\tau )^{q-1}f(\tau )\,d\tau , \quad x >0. $$
(2.16)

Comparing the integral expressions (2.1) and (2.16), we see that

$$ J^{q} f(x)=I^{q} f(x). $$
(2.17)
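As a quick check of the equivalence (2.17), the following sympy sketch (illustrative; the sample function and the choice q = 2 are arbitrary assumptions) computes a twofold repeated integral both by iterated integration and by Cauchy's formula (2.16).

```python
# Sketch: check that the q-fold repeated integral (2.15) equals
# Cauchy's single-integral formula (2.16) for a sample f and q = 2.
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)
f = sp.exp(2 * tau)                      # sample integrand f(tau); any continuous f works
q = 2

# Repeated integration: integrate q times from 0 to x.
repeated = f
for _ in range(q):
    repeated = sp.integrate(repeated, (tau, 0, x)).subs(x, tau)
repeated = repeated.subs(tau, x)

# Cauchy's formula (2.16).
cauchy = sp.integrate((x - tau) ** (q - 1) * f, (tau, 0, x)) / sp.factorial(q - 1)

print(sp.simplify(repeated - cauchy))    # 0, confirming (2.17) for this example
```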

Lemma 2.1

$$ J^{q} f^{(r)}(x)=J^{q-r} f(x)-\sum _{j=0}^{r-1} \frac{f^{(j)}(0^{+})}{(q-r+j)!} x^{q-r+j},\quad r \le q, $$
(2.18)

and for the case \(r=q\), we have

$$ J^{q} f^{(q)}(x)= f(x)-\sum _{j=0}^{q-1}\frac{f^{(j)}(0^{+})}{j!} x^{j}. $$
(2.19)

Accordingly, if \(f^{(j)}(0)=0\), \(j=0,1,\dots ,r-1\), then \(J^{q} f^{(r)}(x)=J^{q-r} f(x)\), \(r\le q\).

Proof

It is easy to prove this lemma using induction on r. □
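A minimal symbolic check of (2.18), with assumed values \(q=3\), \(r=2\) and a sample f, is sketched below (illustrative only).

```python
# Sketch: verify Lemma 2.1, Eq. (2.18), for q = 3, r = 2 and a sample f.
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)
q, r = 3, 2
f = sp.exp(tau) + tau ** 2               # sample function with nonzero f(0) and f'(0)

def J(expr, m):
    """m-fold repeated integral of expr(tau), returned as a function of x."""
    out = expr
    for _ in range(m):
        out = sp.integrate(out, (tau, 0, x)).subs(x, tau)
    return out.subs(tau, x)

lhs = J(sp.diff(f, tau, r), q)                        # J^q f^(r)(x)
rhs = J(f, q - r) - sum(sp.diff(f, tau, j).subs(tau, 0) / sp.factorial(q - r + j)
                        * x ** (q - r + j) for j in range(r))
print(sp.simplify(lhs - rhs))                         # 0
```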

Remark 2.1

Riemann’s modified form of Liouville’s fractional integral operator is a direct generalization of Cauchy’s formula for a q-fold integral. Moreover, in view of formula (2.16), we can see that formula (2.19) coincides with Taylor’s formula with remainder.

3 An overview on SJPs and MSJPs

The main objective of this section is to present the fundamental characteristics of JPs and their shifted form. Furthermore, we will introduce a set of MSJPs.

3.1 An overview on SJPs

The orthogonal JPs, \(\mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{n}(x)\), \(\mathfrak{a}, \mathfrak{b} >-1\), satisfy the following relationship [34]:

$$ \int _{-1}^{1}w^{\mathfrak{a},\mathfrak{b}}(x) \mathfrak{J}^{( \mathfrak{a},\mathfrak{b})}_{n}(x) \mathfrak{J}^{(\mathfrak{a}, \mathfrak{b})}_{m}(x)\,dx = \textstyle\begin{cases} 0,& m\neq n, \\ h^{(\mathfrak{a},\mathfrak{b})}_{n},& m=n,\end{cases} $$

where \(w^{\mathfrak{a},\mathfrak{b}}(x)=(1-x)^{\mathfrak{a}}(1+x)^{ \mathfrak{b}}\) and \(h^{(\mathfrak{a},\mathfrak{b})}_{n}= \frac {2^{\lambda}\Gamma (n+\mathfrak{a}+1)\Gamma (n+\mathfrak{b}+1)}{n!(2n+\lambda )\Gamma (n+\lambda )}\), \(\lambda =\mathfrak{a}+\mathfrak{b}+1\).

The SJPs, denoted as \(\mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{\mathfrak{L},n}( \mathfrak{z})=\mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{n}(2 \mathfrak{z}/\mathfrak{L}-1)\), are in accordance with

$$ \int _{0}^{\mathfrak{L}}w^{\mathfrak{a},\mathfrak{b}}_{\mathfrak{L}}( \mathfrak{z}) \mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{ \mathfrak{L},n}(\mathfrak{z}) \mathfrak{J}^{(\mathfrak{a}, \mathfrak{b})}_{\mathfrak{L},m}(\mathfrak{z})\,d\mathfrak{z} = \textstyle\begin{cases} 0,& m\neq n, \\ (\frac{\mathfrak{L}}{2} )^{\lambda}h^{(\mathfrak{a}, \mathfrak{b})}_{n},& m=n,\end{cases} $$

where \(w^{\mathfrak{a},\mathfrak{b}}_{\mathfrak{L}}(\mathfrak{z})=( \mathfrak{L}-\mathfrak{z})^{\mathfrak{a}} \mathfrak{z}^{\mathfrak{b}}\).

The expansions that will serve as the foundation in this paper are the following fundamental ones [35, Sect. 11.3.4]:

  1. 1.

    The power form representation of \(\mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{\mathfrak{L},n}( \mathfrak{z})\) is as follows:

    $$ \mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{\mathfrak{L},i}( \mathfrak{z})=\sum _{k=0}^{i} c^{(i)}_{k} \mathfrak{z}^{k}, $$
    (3.1)

    where

    $$ c^{(i)}_{k}= \frac{(-1)^{i-k} \Gamma (i+\mathfrak{b} +1) \Gamma (i+k+\lambda )}{\mathfrak{L}^{k} k! (i-k)! \Gamma (k+\mathfrak{b} +1) \Gamma (i+\lambda )}. $$
    (3.2)
  2. 2.

    Alternatively, the expression for \(\mathfrak{z}^{k}\) in relation to \(\mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{\mathfrak{L},r}( \mathfrak{z})\) has the form

    $$ \mathfrak{z}^{k}=\sum_{r=0}^{k}b^{(k)}_{r} \mathfrak{J}^{( \mathfrak{a},\mathfrak{b})}_{\mathfrak{L},r}(\mathfrak{z}), $$
    (3.3)

    where

    $$ b^{(k)}_{r}= \frac{\mathfrak{L}^{k} k! (\lambda +2 r) \Gamma (k+\mathfrak{b} +1) \Gamma (r+\lambda )}{(k-r)! \Gamma (r+\mathfrak{b} +1) \Gamma (k+r+\lambda +1)}. $$
    (3.4)

3.2 Presenting MSJP

In this section, we define the polynomials \(\{\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,j}(\mathfrak{z})\}_{j \ge 0}\) as follows:

$$ \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,j}(\mathfrak{z})= \mathfrak{z}^{n} \mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{ \mathfrak{L},j}( \mathfrak{z}),\quad n=1,2,\dots . $$
(3.5)

They are needed to satisfy the homogeneous form of the given ICs (1.3) for a suitable choice of n. Moreover, these polynomials satisfy the orthogonality relation:

$$ \int _{0}^{\mathcal{\mathfrak{L}}} \frac{w^{\mathfrak{a},\mathfrak{b}}_{\mathfrak{L}}(\mathfrak{z})}{\mathfrak{z}^{2n}} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z}) \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,j}( \mathfrak{z})\,d\mathfrak{z} = \textstyle\begin{cases} 0,& i\neq j, \\ (\frac{\mathfrak{L}}{2} )^{\lambda}h^{(\mathfrak{a}, \mathfrak{b})}_{i},& i=j.\end{cases} $$
(3.6)
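To make the construction concrete, the following Python sketch evaluates \(\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,j}(\mathfrak{z})\) through scipy's Jacobi polynomials and checks the orthogonality relation (3.6) numerically; the nonnegative values of \(\mathfrak{a}\), \(\mathfrak{b}\) and the other parameters are assumptions made so that plain quadrature suffices.

```python
# Sketch: numerical check of the MSJP orthogonality relation (3.6).
# a, b are taken nonnegative so the weight has no endpoint singularity.
from math import gamma
from scipy.integrate import quad
from scipy.special import eval_jacobi

a, b, L, n = 1.0, 2.0, 1.0, 2            # assumed test parameters
lam = a + b + 1

def K(j, z):
    """Modified shifted Jacobi polynomial K_{n,j}^{(a,b)}(z) = z^n * J_j^{(a,b)}(2z/L - 1)."""
    return z ** n * eval_jacobi(j, a, b, 2 * z / L - 1)

def inner(i, j):
    """Weighted inner product on the left-hand side of (3.6)."""
    w = lambda z: (L - z) ** a * z ** b / z ** (2 * n)
    val, _ = quad(lambda z: w(z) * K(i, z) * K(j, z), 0.0, L)
    return val

h = lambda i: (2 ** lam * gamma(i + a + 1) * gamma(i + b + 1)
               / (gamma(i + 1) * (2 * i + lam) * gamma(i + lam)))
print(inner(2, 3))                        # ~0 (orthogonality)
print(inner(3, 3), (L / 2) ** lam * h(3)) # the two values agree
```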

4 OM for RIs and RLFI for \(\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})\)

In this section, we prove Theorems 4.1 and 4.2, which give the qth integrals for all \(q \ge 1\) and fractional integrals of \(\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})\) in terms of the same polynomials.

Theorem 4.1

\(J^{q} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})\), \(i\ge 0\), can be written in the form

$$ J^{q}\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}( \mathfrak{z})= \sum_{j=0}^{i+q} \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{i,j}(n,q) \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,j}( \mathfrak{z}) $$
(4.1)

with

$$ \begin{aligned} \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{i,j}(n,q)= \tilde{C}^{\mathfrak{a},\mathfrak{b}}_{i,j} \sum_{r=\max (0,j-q)}^{i} \frac{(-1)^{r} (n+r)! (q+r)! (r+\mathfrak{b} +1)_{q} (i+\lambda )_{r}}{r! (i-r)! (q+r-j)! (n+q+r)! \Gamma (j+q+r+\lambda +1)}, \end{aligned} $$
(4.2)

where

$$ \tilde{C}^{\mathfrak{a},\mathfrak{b}}_{i,j}= \frac{(-1)^{i} \mathfrak{L}^{q} \Gamma (i+\mathfrak{b} +1) (\lambda +2 j) \Gamma (j+\lambda )}{\Gamma (j+\mathfrak{b} +1)}. $$

Consequently, \(J^{q} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})\), \(q=1,2,\dots ,n\), have the form

$$ J^{q} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})={\mathbf{J}}_{n}^{(q)}\boldsymbol{\mathfrak{K}}^{(\mathfrak{a}, \mathfrak{b})}_{n,N+q}( \mathfrak{z}), $$
(4.3)

where \({\mathbf{J}}_{n}^{(q)}=(\mathfrak{g}_{i,j}^{(q)}(n))\) is a matrix of order \((N+1)\times (N+q+1)\), expressed explicitly as

$$ \begin{pmatrix} \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{0,0}(n,q) & \cdots & \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{0,q}(n,q) & 0 & \cdots & \cdots & 0 \\ \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{1,0}(n,q) & \cdots & \cdots & \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{1,q+1}(n,q) & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & & \vdots \\ \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{N,0}(n,q) & \cdots & \cdots & \cdots & \cdots & \cdots & \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{N,N+q}(n,q) \end{pmatrix} $$
(4.4)

with

$$ \mathfrak{g}_{i,j}^{(q)}(n)= \textstyle\begin{cases} \mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{i,j}(n,q),& j=0,1,\dots ,i+q, i=0,1,\dots ,N, \\ 0& \textit{otherwise}, \end{cases} $$
(4.5)

and

$$ {\boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}}(\mathfrak{z})=\bigl[ \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,0}(\mathfrak{z}), \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,1}( \mathfrak{z}),\dots , \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,N}(\mathfrak{z}) \bigr]^{T}. $$
(4.6)

Proof

Applying the q-fold integral \(J^{q}\) term by term to the power form of \(\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})\) obtained from relation (3.1), we get:

$$ J^{q} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})= \sum _{j=0}^{i}c^{(i)}_{j} \frac{(j+n)!}{(j+n+q)!}\mathfrak{z}^{j+n+q}. $$
(4.7)

Now using formula (3.3), we obtain

$$ J^{q} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})= \sum _{j=0}^{i}c^{(i)}_{j} \frac{(j+n)!}{(j+n+q)!}\sum_{k=0}^{j+q}b^{(j+q)}_{k} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,k}(\mathfrak{z}). $$
(4.8)

Expanding and collecting similar terms, after some algebra, we get

$$ J^{q} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}( \mathfrak{z})= \sum_{j=0}^{i+q} \Biggl(\sum _{k=\max (0,j-q)}^{i}c^{(i)}_{k} \frac{(k+n)!}{(k+n+q)!}b^{(k+q)}_{j} \Biggr) \mathfrak{K}^{( \mathfrak{a},\mathfrak{b})}_{n,j}( \mathfrak{z}). $$
(4.9)

Then substituting formulae (3.2) and (3.4) into (4.9), after some manipulation, yields (4.1), which can be expressed as follows:

$$ J^{q} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})= \bigl[\mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{i,0}(n,q),\mathfrak{P}^{( \mathfrak{a},\mathfrak{b})}_{i,1}(n,q), \dots ,\mathfrak{P}^{( \mathfrak{a},\mathfrak{b})}_{i,i+q}(n,q),0,\dots ,0\bigr]\boldsymbol{ \mathfrak{K}}^{( \mathfrak{a},\mathfrak{b})}_{n,N+q}(\mathfrak{z}), $$
(4.10)

and this expression leads to the proof of (4.3). □
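The construction in the proof can be verified numerically. The sketch below (illustrative, with small assumed values of n, q, i, \(\mathfrak{a}\), \(\mathfrak{b}\), \(\mathfrak{L}\)) builds the coefficients of (3.2) and (3.4), assembles \(\mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{i,j}(n,q)\) through the inner sum in (4.9), and compares the right-hand side of (4.1) with the exact term-by-term repeated integral (4.7) at a test point.

```python
# Sketch: numerical check of Theorem 4.1 via the proof's relation (4.9).
from math import gamma, factorial
from scipy.special import eval_jacobi

a, b, L = 0.5, 1.5, 2.0                 # assumed parameters (a, b > -1)
lam = a + b + 1
n, q, i = 2, 2, 3                       # basis shift n, integration order q, polynomial index i

def c(i, k):                            # power-form coefficients, Eq. (3.2)
    return ((-1) ** (i - k) * gamma(i + b + 1) * gamma(i + k + lam)
            / (L ** k * factorial(k) * factorial(i - k) * gamma(k + b + 1) * gamma(i + lam)))

def bc(k, r):                           # inversion coefficients, Eq. (3.4)
    return (L ** k * factorial(k) * (lam + 2 * r) * gamma(k + b + 1) * gamma(r + lam)
            / (factorial(k - r) * gamma(r + b + 1) * gamma(k + r + lam + 1)))

def K(j, z):                            # K_{n,j}^{(a,b)}(z), Eq. (3.5)
    return z ** n * eval_jacobi(j, a, b, 2 * z / L - 1)

def P(i, j):                            # Eq. (4.9): coefficient of K_{n,j} in J^q K_{n,i}
    return sum(c(i, k) * factorial(k + n) / factorial(k + n + q) * bc(k + q, j)
               for k in range(max(0, j - q), i + 1))

z = 0.7                                 # arbitrary test point in (0, L)
lhs = sum(c(i, k) * factorial(k + n) / factorial(k + n + q) * z ** (k + n + q)
          for k in range(i + 1))        # exact J^q K_{n,i}(z), Eq. (4.7)
rhs = sum(P(i, j) * K(j, z) for j in range(i + q + 1))
print(lhs, rhs)                         # the two values agree
```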

Theorem 4.2

\(I^{\mu}\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})\), \(i\ge 0\), can be written in the form

$$ I^{\mu}\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}( \mathfrak{z})= \mathfrak{z}^{\mu}\sum_{j=0}^{i} \mathfrak{F}_{i,j}^{(\mu )}(n) \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,j}( \mathfrak{z}), $$
(4.11)

and, consequently, \(I^{\mu}\boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})\) has the form

$$ I^{\mu}\boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})= \mathfrak{z}^{\mu}{\mathbf{I}}_{n}^{(\mu )}\boldsymbol{ \mathfrak{K}}^{( \mathfrak{a},\mathfrak{b})}_{n,N}(\mathfrak{z}), $$
(4.12)

where \({\mathbf{I}}_{n}^{(\mu )}=(\mathfrak{h}_{i,j}^{(\mu )}(n))\) is a matrix of order \((N+1)\times (N+1)\), which can be expressed explicitly as

$$ \begin{pmatrix} \mathfrak{F} _{0,0}^{(\mu )}(n) & 0 & \cdots & \cdots & \cdots & 0 \\ \mathfrak{F} _{1,0}^{(\mu )}(n) & \mathfrak{F} _{1,1}^{(\mu )}(n) & 0 & \cdots & \cdots & 0 \\ \vdots & & \ddots & & & \vdots \\ \mathfrak{F} _{i,0}^{(\mu )}(n) & \cdots & \mathfrak{F} _{i,i}^{(\mu )}(n) &0 &\cdots &0 \\ \vdots & & & \ddots & & \vdots \\ \vdots & & & & \ddots & 0 \\ \mathfrak{F} _{N,0}^{(\mu )}(n) &\cdots & \cdots & \cdots &\cdots & \mathfrak{F} _{N,N}^{(\mu )}(n) \end{pmatrix}, $$
(4.13)

where

$$ \mathfrak{h}_{i,j}^{(\mu )}(n)= \textstyle\begin{cases} \mathfrak{F} _{i,j}^{(\mu )}(n),& i\ge j, \\ 0&\textit{otherwise,} \end{cases} $$
(4.14)

and

$$\begin{aligned} \mathfrak{F} _{i,j}^{(\mu )}(n) =& \frac{(-1)^{i-j} (n+j)! \Gamma (i+\mathfrak{b} +1) \Gamma (j+\lambda ) \Gamma (i+j+\lambda )}{(i-j)! \Gamma (j+\mathfrak{b} +1) \Gamma (2 j+\lambda ) \Gamma (i+\lambda ) \Gamma (n+j+\mu +1)} \\ &{}\times _{3}F_{2}\left ( \textstyle\begin{array}{l} j-i,n+j+1,i+j+\lambda \\ 2 j+\lambda +1,n+j+\mu +1 \end{array}\displaystyle ;1 \right ) . \end{aligned}$$
(4.15)

Proof

Considering (3.1) and utilizing (2.4), we obtain

$$ I^{\mu}\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}( \mathfrak{z})= \mathfrak{z}^{n+\mu}\sum_{k=0}^{i} c^{(i)}_{k} \frac{\Gamma (k+n+1)}{\Gamma (k+n+\mu +1)}\mathfrak{z}^{k}. $$
(4.16)

By utilizing (3.3), (4.16) may be reformulated as (4.11), which can be represented as

$$ I^{\mu}\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})= \mathfrak{z}^{\mu}\bigl[\mathfrak{F} _{i,0}^{(\mu )}(n), \mathfrak{F} _{i,1}^{( \mu )}(n),\dots ,\mathfrak{F} _{i,i}^{(\mu )}(n),0,\dots ,0\bigr] \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z}), $$
(4.17)

and this expression leads to the proof of (4.12). □
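Likewise, the intermediate expansion (4.16) can be checked directly against Definition 2.1. The short sketch below (assumed parameter values; quadrature applied to the definition) does so for one basis polynomial.

```python
# Sketch: check Eq. (4.16), the term-by-term RLFI of K_{n,i}, against Definition 2.1.
from math import gamma, factorial
from scipy.integrate import quad
from scipy.special import eval_jacobi

a, b, L = 1.0, 0.5, 1.0                 # assumed parameters
lam = a + b + 1
n, i, mu = 2, 3, 1.5                    # assumed basis shift, index, and fractional order

def c(i, k):                            # power-form coefficients, Eq. (3.2)
    return ((-1) ** (i - k) * gamma(i + b + 1) * gamma(i + k + lam)
            / (L ** k * factorial(k) * factorial(i - k) * gamma(k + b + 1) * gamma(i + lam)))

def K(z):                               # K_{n,i}^{(a,b)}(z)
    return z ** n * eval_jacobi(i, a, b, 2 * z / L - 1)

z = 0.8
# Right-hand side of (4.16): apply the power rule (2.4) term by term.
series = z ** (n + mu) * sum(c(i, k) * gamma(k + n + 1) / gamma(k + n + mu + 1) * z ** k
                             for k in range(i + 1))
# Left-hand side: the RLFI definition (2.1) evaluated by quadrature (mu > 1, no singularity).
direct, _ = quad(lambda t: (z - t) ** (mu - 1) * K(t), 0.0, z)
direct /= gamma(mu)
print(series, direct)                   # the two values agree
```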

Note 4.1

It is easy to see that \({\mathbf{J}}_{n}^{(0)}={\mathbf{I}}_{n}^{(0)}=I_{N+1}\), where \(I_{N+1}\) is the identity matrix of size \(N+1\), and hence (4.3) and (4.12) are satisfied for \(q=0\) and \(\mu =0\), respectively.

Remark 4.1

It is worth stating that the forms of \(\mathfrak{P}^{(\mathfrak{a},\mathfrak{b})}_{i,j}(n,q)\) and \(\mathfrak{F} _{i,j}^{(\mu )}(n)\) in Theorems 4.1 and 4.2 have a closed form for certain \(\mathfrak{a}\), \(\mathfrak{b}\). These include particular cases of Jacobi polynomials: the Chebyshev polynomials of the first and second kinds, Legendre polynomials, and ultraspherical polynomials.

Remark 4.2

The utilization of formula (3.3) in relation (4.16) leads to the lower triangular structure of matrix (4.13). This structure reduces the complexity of the algorithm, allowing it to handle larger problem sizes without excessive computational demands.

5 Numerical algorithm for solving ODE (1.1) and MTFDE (1.2) subject to ICs (1.3)

In this section, we propose a numerical solution for Eqs. (1.1) and (1.2) when the ICs are homogeneous, that is, when \(\alpha _{j}=0\) for all \(j=0,1,2,\dots ,n-1\). In this respect, the basis \(\mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})\) is chosen so that it satisfies this homogeneous form of the ICs. For the nonhomogeneous conditions (1.3), the suggested procedure requires first converting (1.1) and (1.2) into equivalent forms with homogeneous conditions.

5.1 Homogeneous ICs

Consider the homogeneous case of ICs (1.3). The first step of our algorithm is applying the integral operators \(J^{n}\) and \(I^{ \nu}\) to Eqs. (1.1) and (1.2), respectively. Using Lemma 2.1 and properties (2.2), (2.10), (2.12), (2.14), and (2.17), we get the following integrated forms of (1.1) and (1.2):

$$ y(\mathfrak{z})+ \sum_{q=0}^{n-1} \eta _{q} J^{n-q}y(\mathfrak{z})=g_{1}( \mathfrak{z}), \quad \mathfrak{z}\in [0,\mathfrak{L}], n=1,2,3,\dots , $$
(5.1)

and

$$ y(\mathfrak{z})+ \sum_{i=0}^{k} \gamma _{i} I^{\nu -\beta _{i}}y( \mathfrak{z})+\gamma _{k+1}I^{\nu}y(\mathfrak{z})=g_{2}(\mathfrak{z}),\quad \mathfrak{z}\in [0,\mathfrak{L}], $$
(5.2)

where \(g_{1}(\mathfrak{z})=J^{n}f_{1}(\mathfrak{z})\) is defined by (2.16), and \(g_{2}(\mathfrak{z})=I^{\nu}f_{2}(\mathfrak{z})\). Now consider the approximate solution of \(y(\mathfrak{z})\) in the form

$$ y(\mathfrak{z})\simeq y_{N}(\mathfrak{z})=\sum _{i=0}^{N}c_{i} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,i}(\mathfrak{z})={ \boldsymbol{A}}^{T} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z}), $$
(5.3)

where \({\boldsymbol{A}}= [c_{0}, c_{1},\dots ,c_{N} ]^{T}\). Finally, Theorems 4.1 and 4.2 enable us to approximate the \(J^{n-q}y(\mathfrak{z})\), \(q=0,1,\dots ,n-1\), and \(I^{\nu -\beta _{i}}y(\mathfrak{z})\), \(i=0,1,\dots ,k\), in the matrix forms

$$ J^{n-q}y(\mathfrak{z})\simeq J^{n-q}y_{N}( \mathfrak{z})={\boldsymbol{A}}^{T} {\mathbf{J}}^{(n-q)}_{n} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N+n-q}( \mathfrak{z}) $$
(5.4)

and

$$ I^{\nu -\beta _{i}}y(\mathfrak{z})\simeq I^{\nu -\beta _{i}}y_{N}( \mathfrak{z})=\mathfrak{z}^{\nu -\beta _{i}}{\boldsymbol{A}}^{T} { \mathbf{I}}^{( \nu -\beta _{i})}_{n} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z}). $$
(5.5)

In this method, approximations (5.3), (5.4), and (5.5) allow us to write the residuals of equations (5.1) and (5.2) as

$$\begin{aligned}& \mathfrak{R}_{n,N}(\mathfrak{z}) = {\boldsymbol{A}}^{T} \boldsymbol{ \mathfrak{K}}^{( \mathfrak{a},\mathfrak{b})}_{n,N}(\mathfrak{z})+ \sum _{q=0}^{n-1} \eta _{q}\, { \boldsymbol{A}}^{T} {\mathbf{J}}^{(n-q)}_{n} \boldsymbol{ \mathfrak{K}}^{( \mathfrak{a},\mathfrak{b})}_{n,N+n-q}(\mathfrak{z})-g_{1}( \mathfrak{z}), \end{aligned}$$
(5.6)
$$\begin{aligned}& \mathcal{R}_{n,N}(\mathfrak{z}) = {\boldsymbol{A}}^{T} \boldsymbol{ \mathfrak{K}}^{( \mathfrak{a},\mathfrak{b})}_{n,N}(\mathfrak{z})+ \sum _{i=0}^{k} \gamma _{i}\, \mathfrak{z}^{\nu -\beta _{i}}{\boldsymbol{A}}^{T} {\mathbf{I}}^{( \nu -\beta _{i})}_{n} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z}) \\& \hphantom{\mathcal{R}_{n,N}(\mathfrak{z}) =}{}+\gamma _{k+1}\, \mathfrak{z}^{\nu}{\boldsymbol{A}}^{T} { \mathbf{I}}^{( \nu )}_{n} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})-g_{2}(\mathfrak{z}). \end{aligned}$$
(5.7)

In view of Note 4.1, \(\mathfrak{R}_{n,N}(\mathfrak{z})\) and \(\mathcal{R}_{n,N}(\mathfrak{z})\) can be written in the forms

$$\begin{aligned} \mathfrak{R}_{n,N}(\mathfrak{z}) =& \sum_{q=0}^{n} \eta _{n-q}\, { \boldsymbol{A}}^{T} {\mathbf{J}}^{(q)}_{n} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a}, \mathfrak{b})}_{n,N+q}(\mathfrak{z})-g_{1}( \mathfrak{z}), \quad \eta _{n}=1, \end{aligned}$$
(5.8)
$$\begin{aligned} \mathcal{R}_{n,N}(\mathfrak{z}) =& \sum_{i=0}^{k+2} \gamma _{i} \,\mathfrak{z}^{\nu -\beta _{i}}{\boldsymbol{A}}^{T} { \mathbf{I}}^{(\nu -\beta _{i})}_{n} \boldsymbol{\mathfrak{K}}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})-g_{2}( \mathfrak{z}),\quad \gamma _{k+2}=1, \beta _{k+1}=0, \beta _{k+2}=\nu . \end{aligned}$$
(5.9)

In this part, we suggest a spectral method called MSJCOMIM to numerically solve Eqs. (1.1) and (1.2) under the ICs (1.3) (where \(\alpha _{j}=0\), \(j=0,1,\dots ,n-1\)). The collocation points of this method are selected as the \(N+1\) zeros of \(\mathfrak{J}^{(\mathfrak{a},\mathfrak{b})}_{\mathfrak{L},N+1}( \mathfrak{z})\) or, alternatively, as the points \(\mathfrak{z}_{i}=\frac{\mathfrak{L}(i+1)}{N+2}\), \(i=0,1,\dots ,N\). Then we get

$$ \mathfrak{R}_{n,N}(\mathfrak{z}_{i})=0,\quad i=0,1,\ldots,N, $$
(5.10)

in the case of ODE (1.1), whereas in the case of MTFDE (1.2), we have

$$ \mathcal{R}_{n,N}(\mathfrak{z}_{i})=0,\quad i=0,1,\ldots,N. $$
(5.11)

Solving (5.10) or (5.11) by an appropriate solver, the coefficients \(c_{i}\) (\(i = 0, 1, \ldots,N\)) can be determined to obtain numerical solutions for the DEs (1.1) or (1.2), respectively.
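The whole procedure of this subsection can be illustrated on a small manufactured problem. The following Python sketch is illustrative only: it solves \(y''(\mathfrak{z})+y(\mathfrak{z})=f_{1}(\mathfrak{z})\) with \(y(0)=y'(0)=0\) and manufactured exact solution \(y(\mathfrak{z})=\mathfrak{z}^{2}(\mathfrak{z}-1)\) by collocating the integrated form (5.1) at the points \(\mathfrak{z}_{i}=\mathfrak{L}(i+1)/(N+2)\); for clarity, the repeated integrals of the basis functions are applied term by term to the power form (3.1) instead of through the OM of Theorem 4.1, which gives the same values.

```python
# Sketch: the collocation procedure of Sect. 5.1 for a toy problem
#   y''(z) + y(z) = f1(z),  y(0) = y'(0) = 0,  z in [0, L],
# whose manufactured exact solution is y(z) = z^2 (z - 1).
import numpy as np
from math import gamma, factorial

a, b, L = 0.5, 0.5, 1.0                 # assumed Jacobi parameters and interval length
lam = a + b + 1
n, N = 2, 3                             # ODE order n = 2, expansion degree N

def c(i, k):                            # power-form coefficients, Eq. (3.2)
    return ((-1) ** (i - k) * gamma(i + b + 1) * gamma(i + k + lam)
            / (L ** k * factorial(k) * factorial(i - k) * gamma(k + b + 1) * gamma(i + lam)))

def K(j, z):                            # basis K_{n,j}(z) = sum_k c(j,k) z^(k+n)
    return sum(c(j, k) * z ** (k + n) for k in range(j + 1))

def J2K(j, z):                          # J^2 K_{n,j}(z), exact term-by-term integration
    return sum(c(j, k) * factorial(k + n) / factorial(k + n + 2) * z ** (k + n + 2)
               for k in range(j + 1))

def g1(z):                              # g1 = J^2 f1 with f1(z) = y'' + y = 6z - 2 + z^3 - z^2
    return z ** 3 - z ** 2 + z ** 5 / 20 - z ** 4 / 12

# Collocation points z_i = L (i + 1) / (N + 2).
pts = np.array([L * (i + 1) / (N + 2) for i in range(N + 1)])

# Assemble and solve the (N+1) x (N+1) linear system from the residual (5.6).
M = np.array([[K(j, z) + J2K(j, z) for j in range(N + 1)] for z in pts])
rhs = np.array([g1(z) for z in pts])
coef = np.linalg.solve(M, rhs)

yN = lambda z: sum(coef[j] * K(j, z) for j in range(N + 1))
zz = np.linspace(0, L, 5)
print([yN(z) - z ** 2 * (z - 1) for z in zz])   # errors should be at machine-precision level
```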

Note 5.1

There are a number of aspects to consider when deciding which collocation points to use, such as the nature of the problem and the desired properties of the numerical solution. A comparison of the computed numerical solutions is needed to determine which of these choices is better. Applying both sets of collocation points to the investigated problems and evaluating them according to accuracy, convergence, and computational efficiency sheds light on the relative merits of the two options for the problem class under consideration.

5.2 Nonhomogeneous ICs

An essential part of the suggested algorithm is transforming equations (5.1) and (5.2) with the nonhomogeneous conditions (1.3) into equivalent problems with homogeneous conditions. The following transformation makes this possible:

$$ \bar{y}(\mathfrak{z})=y(\mathfrak{z})-q_{n}( \mathfrak{z}), \quad q_{n}( \mathfrak{z})=\sum_{i=0}^{n-1} \frac{\alpha _{i}}{i!} \mathfrak{z}^{i}. $$
(5.12)

As a result, the current problems can be simplified by solving the following modified equations:

$$\begin{aligned} \sum_{q=0}^{n}\eta _{q} J^{n-q}\bar{y}(\mathfrak{z}) =&\tilde{g}_{1}( \mathfrak{z}),\quad \mathfrak{z}\in [0,\mathfrak{L}], \end{aligned}$$
(5.13)
$$\begin{aligned} \sum_{i=0}^{k+2}\gamma _{i} I^{\nu -\beta _{i}}\bar{y}(\mathfrak{z}) =& \tilde{g}_{2}(\mathfrak{z}),\quad \mathfrak{z}\in [0,\mathfrak{L}], \end{aligned}$$
(5.14)

subject to

$$ \bar{y}^{(j)}(0)=0,\quad j=0,1,\dots ,n-1, $$
(5.15)

where

$$\begin{aligned}& \tilde{g}_{1}(\mathfrak{z}) = J^{n}f_{1}( \mathfrak{z})-\sum_{q=0}^{n} \eta _{q} J^{n-q} q_{n}(\mathfrak{z}), \end{aligned}$$
(5.16)
$$\begin{aligned}& \tilde{g}_{2}(\mathfrak{z}) = I^{\nu}f_{2}( \mathfrak{z})-\sum_{i=0}^{k+2} \gamma _{i} I^{\nu -\beta _{i}} q_{n}(\mathfrak{z}), \end{aligned}$$
(5.17)

and then

$$ y_{N}(\mathfrak{z})=\bar{y}_{N}( \mathfrak{z})+q_{n}(\mathfrak{z}). $$
(5.18)
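A small illustrative snippet (with hypothetical helper names and example ICs, not taken from the paper) for building the shift \(q_{n}(\mathfrak{z})\) of (5.12) and recovering \(y_{N}\) via (5.18):

```python
# Sketch: the shift q_n(z) of Eq. (5.12) and the recovery of y_N in Eq. (5.18).
from math import factorial

def q_poly(alphas):
    """Return q_n(z) = sum_i alpha_i z^i / i! built from the ICs y^(i)(0) = alpha_i."""
    return lambda z: sum(a * z ** i / factorial(i) for i, a in enumerate(alphas))

# Example: n = 2 with y(0) = 3, y'(0) = -1  ->  q_2(z) = 3 - z.
qn = q_poly([3.0, -1.0])

# After solving the homogeneous problem for ybar_N (e.g., with the collocation
# sketch above), the approximation of the original unknown is recovered as
# y_N(z) = ybar_N(z) + q_n(z); here ybar_N is only a placeholder.
ybar_N = lambda z: 0.0 * z               # placeholder for the computed homogeneous solution
y_N = lambda z: ybar_N(z) + qn(z)
print(y_N(0.5))                          # 2.5 for the placeholder ybar_N
```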

Remark 5.1

In Sect. 7, we use MSJCOMIM to solve numerous numerical problems. A computer system with 3.60 GHz Intel(R) Core(TM) i9-10850 CPU, 10 cores, and 20 logical processors ran the calculations using Mathematica 13.3. The algorithmic steps for solving the ODE and MTFDE using MSJCOMIM are expressed in Algorithms 1 and 2, respectively:

Algorithm 1: MSJCOMIM algorithm to solve ODE (1.1)

Algorithm 2: MSJCOMIM algorithm to solve MTFDE (1.2)

6 Convergence and error analysis

In this section, we examine the convergence and error estimates of the suggested method. The space \(\mathfrak{S}_{n,N}\) is defined as follows:

$$ \mathfrak{S}_{n,N}=\operatorname{Span}\bigl\{ \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,0}( \mathfrak{z}), \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,1}( \mathfrak{z}),\ldots, \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{n,N}( \mathfrak{z})\bigr\} . $$

Additionally, we define the error between \(y(\mathfrak{z})\) and its approximation \(y_{N}(\mathfrak{z})\) as

$$ \mathfrak{E}_{N}(\mathfrak{z})= \bigl\vert y(\mathfrak{z})-y_{N}( \mathfrak{z}) \bigr\vert . $$
(6.1)

In the paper, the error of the numerical scheme is analyzed by using the estimate of the \(L_{2}\) norm error,

$$ \Vert \mathfrak{E}_{N} \Vert _{2}= \Vert y-y_{N} \Vert _{2}= \biggl( \int _{0}^{ \mathfrak{L}} \bigl\vert y(\mathfrak{z})-y_{N}( \mathfrak{z}) \bigr\vert ^{2}\,d\mathfrak{z} \biggr)^{1/2}, $$
(6.2)

and the estimate of the \(L_{\infty}\) norm error,

$$ \Vert \mathfrak{E}_{N} \Vert _{\infty}= \Vert y-y_{N} \Vert _{\infty}= \max_{0\le \mathfrak{z} \le \mathfrak{L}} \bigl\vert y(\mathfrak{z})-y_{N}( \mathfrak{z}) \bigr\vert . $$
(6.3)

Theorem 6.1

([36]) Suppose that \(y(\mathfrak{z})=\mathfrak{z}^{n} u(\mathfrak{z})\) and that \(y_{N}(\mathfrak{z})\), given by (5.3), is the best possible approximation of \(y(\mathfrak{z})\) from \(\mathfrak{S}_{n,N}\). Then there exists a constant K such that

$$ \Vert \mathfrak{E}_{N} \Vert _{\infty}\leq \frac{K \mathfrak{L}^{n+1}}{2^{\lambda}} \biggl( \frac{e \mathfrak{L}}{4} \biggr)^{N}(N+1)^{s-N-1} $$
(6.4)

and

$$ \Vert \mathfrak{E}_{N} \Vert _{2}\leq \frac{K \mathfrak{L}^{n+3/2}}{2^{\lambda}} \biggl( \frac{e \mathfrak{L}}{4} \biggr)^{N}(N+1)^{s-N-1}, $$
(6.5)

where \(s=\max \{\mathfrak{a},\mathfrak{b},-1/2\}\) and \(K=\max_{\eta \in [0,\mathfrak{L}]} \vert u^{(N+1)}(\eta ) \vert \).

The following conclusion demonstrates that the obtained error converges at a fairly rapid rate.

Corollary 6.1

For all \(N > s-1\), we have the estimates

$$ \Vert \mathfrak{E}_{N} \Vert _{\infty}= \mathcal{O}\bigl((e \mathfrak{L}/4)^{N} N^{s-N-1}\bigr) $$
(6.6)

and

$$ \Vert \mathfrak{E}_{N} \Vert _{2}= \mathcal{O}\bigl((e \mathfrak{L}/4)^{N} N^{s-N-1}\bigr). $$
(6.7)

The next theorem provides an estimate for error propagation and stresses the stability of the error.

Theorem 6.2

For two successive approximations \(y_{N}\) and \(y_{N+1}\) of \(y(\mathfrak{z})\), we have

$$ \vert y_{N+1}-y_{N} \vert \lesssim \mathcal{O}\bigl((e \mathfrak{L}/4)^{N} N^{s-N-1}\bigr),\quad N > s-1, $$
(6.8)

where \(\lesssim \) indicates that there exists a generic constant d such that \(|y_{N+1}-y_{N}|\le d (e \mathfrak{L}/4)^{N} N^{s-N-1}\).

Note 6.1

The error estimates in this section show that, as \(e \mathfrak{L}/4\) decreases, the rate of convergence improves from an inverse polynomial rate to an exponential one.
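To illustrate this decay, one can tabulate the bound factor \((e\mathfrak{L}/4)^{N} N^{s-N-1}\) of Corollary 6.1; a minimal sketch with assumed values of \(\mathfrak{L}\) and s:

```python
# Sketch: decay of the bound factor (e*L/4)^N * N^(s-N-1) appearing in Corollary 6.1.
from math import e

L, s = 1.0, 0.5                          # assumed interval length and s = max{a, b, -1/2}
for N in (2, 4, 8, 16):
    print(N, (e * L / 4) ** N * N ** (s - N - 1))
```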

7 Numerical simulations

To demonstrate that the method given in Sect. 5 is effective and efficient, we give several examples. To measure precision, we display the MAE between the exact and approximate solutions. In particular, we demonstrate in Examples 7.1 and 7.4 that the suggested method \(MSJCOMIM\) produces the exact solution for problems with a polynomial solution of degree N. We also show the calculated errors for numerical solutions \(y_{N}(\mathfrak{z})\) obtained with \(MSJCOMIM\) for \(N = 1,\dots ,20\). We can see the excellent computational accuracy in the findings summarized in Tables 1, 2, 4, 6, and 7. In addition, Tables 3, 5, and 7 compare \(MSJCOMIM\) with other techniques provided in [3741]. The results show that \(MSJCOMIM\) is the best method, giving more accurate predictions than the others. Figures 1a, 2a, 3a, 3b, and 4a show that the approximate and exact solutions for Examples 7.2, 7.3, and 7.5 are highly congruent with each other. Furthermore, the log-errors for different \(\mathfrak{a}\) and \(\mathfrak{b}\) values are shown in Figs. 1b, 2b, and 4b. This demonstrates that the solutions for Problems 7.2 and 7.5 when employing \(MSJCOMIM\) are stable and converge.

Figure 1: Obtained errors \(E_{\mathcal{N}}\) using various \(\mathcal{N}\) and \(\mathfrak{a}=3/2\), \(\mathfrak{b}=1/2\), \(\mathfrak{L}=1\) for Example 7.2

Figure 2: Obtained errors \(E_{\mathcal{N}}\) using various \(\mathcal{N}\) and \(\mathfrak{a}=3/2\), \(\mathfrak{b}=1/2\), \(\mathfrak{L}=4\) for Example 7.2

Figure 3: Obtained errors and approximate solutions for Example 7.3 using various \(\mathcal{N}\) and \(\mathfrak{a}=0\), \(\mathfrak{b}=0\)

Figure 4: Obtained errors for Example 7.5 at \(\alpha =1, 4\pi \) using various \(\mathcal{N}\) and \(\mathfrak{a}=3/2\), \(\mathfrak{b}=5/2\)

Table 1 Errors obtained for Example 7.2 using \(\mathfrak{L}=1\)
Table 2 Errors obtained for Example 7.2 using \(\mathfrak{L}=4\)
Table 3 A comparison of approaches [38] and \(MSJCOMIM\) for Example 7.2
Table 4 Errors obtained for Example 7.3
Table 5 A comparison of approaches [39] and \(MSJCOMIM\) for Example 7.3 using \(\mathfrak{a}=3\), \(\mathfrak{b}=1\)
Table 6 Errors obtained for Example 7.5 (\(\alpha =1\))
Table 7 Errors obtained for Example 7.5 (\(\alpha =4\pi \))

7.1 Numerical simulations for handling ODE (1.1) with ICs (1.3)

Problem 7.1

Consider the differential equation

(7.1)

where \(g(\mathfrak{z})\) is chosen such that \(y(\mathfrak{z})=\mathfrak{z}^{4}+2\). The application of the proposed method \(MSJCOMIM\) gives

$$ y(\mathfrak{z})=y_{1}(\mathfrak{z})= \frac{(\mathfrak{b} +1) \mathfrak{L}}{\lambda +1} \mathfrak{K}^{( \mathfrak{a},\mathfrak{b})}_{3,0}(\mathfrak{z})+ \frac{\mathfrak{L}}{\lambda +1} \mathfrak{K}^{(\mathfrak{a}, \mathfrak{b})}_{3,1}(\mathfrak{z})+2. $$

Problem 7.2

Consider the differential equation [38]

(7.2)

where \(y(\mathfrak{z})=\mathfrak{z}^{2} e^{-2\mathfrak{z}}-\mathfrak{z}^{2}+3\). This solution agrees perfectly with the numerical solutions obtained with an accuracy of \(10^{-16}\) when \(\mathfrak{L}=1,4\) and \(\mathcal{N}=11,20\), respectively, as shown in Tables 1 and 2.

Problem 7.3

Consider the seventh-order linear IVP [39]

(7.3)

where \(y(\mathfrak{z})=\mathfrak{z}(1-\mathfrak{z}) e^{\mathfrak{z}}\). According to Table 4, this solution is in perfect agreement with the numerical solutions produced with an accuracy of \(10^{-16}\) for \(\mathcal{N}=10\).

7.2 Numerical simulations for handling MTFDE (1.2) with ICs (1.3)

Problem 7.4

Consider the Bagley–Torvik equations [37]

(7.4)

where \(g_{1}(\mathfrak{z})\) and \(g_{2}(\mathfrak{z})\) are chosen such that the exact solutions are \(y(\mathfrak{z})=\mathfrak{z}^{3}\) and \(y(\mathfrak{z})=\mathfrak{z}^{4} (\mathfrak{z}-1)\), respectively.

The application of \(MSJCOMIM\) gives the exact solution \(y(\mathfrak{z})=y_{1}(\mathfrak{z})=\mathfrak{z}^{3}\) in the form

$$ y_{1}(\mathfrak{z})=\frac{(\mathfrak{b} +1) \mathfrak{L}}{\lambda +1} \mathfrak{K}^{(\mathfrak{a},\mathfrak{b})}_{2,0}(\mathfrak{z})+ \frac{\mathfrak{L}}{\lambda +1} \mathfrak{K}^{(\mathfrak{a}, \mathfrak{b})}_{2,1}(\mathfrak{z}) $$
(7.5)

and the exact solution \(y(\mathfrak{z})=y_{3}(\mathfrak{z})=\mathfrak{z}^{4} (\mathfrak{z}-1)\) in the form

$$ y_{3}(\mathfrak{z})=\sum _{i=0}^{3}c_{i} \mathfrak{K}^{( \mathfrak{a},\mathfrak{b})}_{2,i}( \mathfrak{z}), $$
(7.6)

where

$$\begin{aligned}& c_{0} = \frac{(\mathfrak{b} +1) (\mathfrak{b} +2) (-\lambda +(\mathfrak{b} +3) \mathfrak{L}-3) \mathfrak{L}^{2}}{(\lambda +1) (\lambda +2) (\lambda +3)},\qquad c_{1}= \frac{(\mathfrak{b} +2) (3 (\mathfrak{b} +3) \mathfrak{L}-2 (\lambda +4)) \mathfrak{L}^{2}}{(\lambda +1) (\lambda +3) (\lambda +4)}, \\& c_{2} = - \frac{2 (\lambda -3 (\mathfrak{b} +3) \mathfrak{L}+5) \mathfrak{L}^{2}}{(\lambda +2) (\lambda +3) (\lambda +5)},\qquad c_{3}= \frac{6 \mathfrak{L}^{3}}{(\lambda +3) (\lambda +4) (\lambda +5)}. \end{aligned}$$

Remark 7.1

It is worth noting that the exact solutions (7.5) and (7.6) are obtained using \(\mathcal{N}=1\) and \(\mathcal{N}=3\), respectively, for any \(\mathfrak{a}, \mathfrak{b} >-1\), whereas these exact solutions are obtained in [37, Example 3] using \(\mathfrak{L}=2\) and \(\mathcal{N}=4,6\), respectively.

Problem 7.5

Consider the Bagley–Torvik equation [37, 41]

(7.7)

where \(g(\mathfrak{z})\) is chosen such that \(y(\mathfrak{z})=\sin (\alpha \mathfrak{z})\).

The numerical solutions at \(\mathcal{N}=16\) agree precisely with this solution; they are presented in Tables 6 and 7 with an accuracy of \(10^{-16}\). The MAE comparison results in Table 8 show that \(MSJCOMIM\) significantly outperforms the methods of [37, 41]. Additionally, the CPU time (in seconds) of \(MSJCOMIM\) was found to be smaller than that of the corresponding method [37, Table 1].

Table 8 Comparison of MAE between the methods [37, 41] and \(MSJCOMIM (\mathfrak{a}=2, \mathfrak{b}=3)\) for Example 7.5

8 Conclusions

In this research, we have introduced a class of MSJPs that satisfy homogeneous ICs. We have also developed a new method for approximating the solutions of the ODE (1.1) and the MTFDE (1.2) using the SCM in conjunction with the OMs derived in Sect. 4. Across five different test cases, \(MSJCOMIM\) has proven to be highly accurate and efficient in solving these problems. Based on the promising results obtained in this research, we envision several potential directions for future work. Firstly, an interesting avenue would be to investigate the extension of \(MSJCOMIM\) to handle higher-dimensional problems, such as systems of ODEs and MTFDEs. This expansion would require the development of suitable multidimensional OMs and the adaptation of the spectral collocation framework. Furthermore, it could be valuable to explore the applicability of \(MSJCOMIM\) to other classes of FDEs beyond those considered in this study. Various types of FDEs exist in different scientific and engineering fields, and investigating their solutions using \(MSJCOMIM\) could provide valuable insights and contribute to advancing the field. Additionally, the theoretical findings presented in this paper open up possibilities for further research in the area of numerical methods for DEs. Exploring alternative modifications of shifted JPs or investigating the use of different OPs could lead to the development of even more accurate and efficient approximation techniques. In conclusion, the introduced \(MSJCOMIM\) has shown great potential in solving ODEs and FDEs with high accuracy. We believe that the knowledge and techniques presented in this work can serve as a foundation for addressing a broader range of DEs and inspire further advancements in the field of numerical methods for DEs.

Data Availability

No datasets were generated or analysed during the current study.

Abbreviations

DEs:

Differential equations

ODEs:

Ordinary differential equations

RIs:

Repeated integrals

RLFI:

Riemann–Liouville fractional integral

FDEs:

Fractional differential equations

CFD:

Caputo fractional derivative

MTFDEs:

Multiterm fractional differential equations

OMs:

Operational matrices

SCM:

Spectral collocation method

JPs:

Jacobi polynomials

SJPs:

Shifted Jacobi polynomials

MSJPs:

Modified shifted Jacobi polynomials

IVPs:

Initial value problems

BVPs:

Boundary value problems

MAE:

Maximum absolute error

References

  1. Ionescu, C., Lopes, A., Copot, D., Machado, J.A.T., Bates, J.H.T.: The role of fractional calculus in modeling biological phenomena: a review. Commun. Nonlinear Sci. Numer. Simul. 51, 141–159 (2017)


  2. Battaglia, J.-L., Cois, O., Puigsegur, L., Oustaloup, A.: Solving an inverse heat conduction problem using a non-integer identified model. Int. J. Heat Mass Transf. 44(14), 2671–2680 (2001)


  3. Losa, G.A., Nonnenmacher, T.F., Merlini, D., Weibel, E.R.: Fractals in Biology and Medicine: III, vol. 3. Springer, Berlin (1994)


  4. Coimbra, C.F.M., Soon, C.M., Kobayashi, M.H.: The variable viscoelasticity operator. Ann. Phys. 14, 378–389 (2005)


  5. Odzijewicz, T., Malinowska, A.B., Torres, D.F.M.: Fractional variational calculus of variable order. In: Almeida, A., Castro, L., Speck, F.O. (eds.) Advances in Harmonic Analysis and Operator Theory: Advances and Applications, vol. 229, pp. 291–301. Birkhäuser, Basel (2013)


  6. Ostalczyk, P.W., Duch, P., Brzeziński, D.W., Sankowski, D.: Order functions selection in the variable-fractional-order pid controller. In: Advances in Modelling and Control of Non-integer-Order Systems. Lect. Notes Electr. Eng., vol. 320, pp. 159–170 (2015)


  7. Rapaić, M.R., Pisano, A.: Variable-order fractional operators for adaptive order and parameter estimation. IEEE Trans. Autom. Control 59(3), 798–803 (2013)


  8. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations, vol. 204. Elsevier, Amsterdam (2006)


  9. Youssri, Y.H., Abd-Elhameed, W.M., Ahmed, H.M.: New fractional derivative expression of the shifted third-kind chebyshev polynomials: application to a type of nonlinear fractional pantograph differential equations. J. Funct. Spaces, 2022 (2022)

  10. Hwang, C., Shih, Y.P.: Parameter identification via Laguerre polynomials. Int. J. Syst. Sci. 13(2), 209–217 (1982)


  11. Paraskevopoulos, P.N.: Chebyshev series approach to system identification, analysis and optimal control. J. Franklin Inst. 316(2), 135–157 (1983)


  12. Paraskevopoulos, P.N.: Legendre series approach to identification and analysis of linear systems. IEEE Trans. Autom. Control 30(6), 585–589 (1985)


  13. Paraskevopoulos, P.N., Sklavounos, P.G., Georgiou, G.C.: The operational matrix of integration for Bessel functions. J. Franklin Inst. 327(2), 329–341 (1990)


  14. Singh, A.K., Singh, V.K., Singh, O.P.: The Bernstein operational matrix of integration. Appl. Math. Sci. 3(49), 2427–2436 (2009)


  15. Ahmed, H.M.: A new first finite class of classical orthogonal polynomials operational matrices: an application for solving fractional differential equations. Contemp. Math. 4(4), 974–994 (2023)


  16. Ahmed, H.M.: Numerical solutions for singular Lane-Emden equations using shifted Chebyshev polynomials of the first kind. Contemp. Math. 4(1), 132–149 (2023)


  17. Youssri, Y.H., Zaky, M.A., Hafez, R.M.: Romanovski–Jacobi spectral schemes for high-order differential equations. Appl. Numer. Math. 198, 148–159 (2024)


  18. Hafez, R.M., Youssri, Y.H.: Fully Jacobi–Galerkin algorithm for two-dimensional time-dependent PDEs arising in physics. Int. J. Mod. Phys. C 35(3), 1–24 (2024)


  19. Hammad, M., Hafez, R.M., Youssri, Y.H., Doha, E.H.: Exponential Jacobi–Galerkin method and its applications to multidimensional problems in unbounded domains. Appl. Numer. Math. 157, 88–109 (2020)


  20. Hafez, R.M., Youssri, Y.H.: Shifted Jacobi collocation scheme for multidimensional time-fractional order telegraph equation. Iran. J. Numer. Anal. Optim. 10(1), 195–223 (2020)


  21. Hafez, R.M., Youssri, Y.H.: Jacobi spectral discretization for nonlinear fractional generalized seventh-order kdv equations with convergence analysis. Tbil. Math. J. 13(2), 129–148 (2020)


  22. Doha, E.H., Hafez, R.M., Youssri, Y.H.: Shifted Jacobi spectral-Galerkin method for solving hyperbolic partial differential equations. Comput. Math. Appl. 78(3), 889–904 (2019)


  23. Abd-Elhameed, W.M., Ahmed, H.M.: Spectral solutions for the time-fractional heat differential equation through a novel unified sequence of Chebyshev polynomials. AIMS Math. 9, 2137–2166 (2024)


  24. Ahmed, H.M., Hafez, R.M., Abd-Elhameed, W.M.: A computational strategy for nonlinear time-fractional generalized Kawahara equation using new eighth-kind Chebyshev operational matrices. Phys. Scr. 99(4), 045250 (2024)


  25. Abd-Elhameed, W.M., Ahmed, H.M., Youssri, Y.H.: A new generalized Jacobi Galerkin operational matrix of derivatives: two algorithms for solving fourth-order boundary value problems. Adv. Differ. Equ. 2016(1), 22 (2016)


  26. Abd-Elhameed, M.S., Al-Harbi, W.M., Amin, A.K., Ahmed, H.M.: Spectral treatment of high-order Emden–Fowler equations based on modified Chebyshev polynomials. Axioms 12(2), 1–17 (2023)


  27. Abd-Elhameed, W.M., Youssri, Y.H.: Spectral solutions for fractional differential equations via a novel Lucas operational matrix of fractional derivatives. Rom. J. Phys. 61(5–6), 795–813 (2016)


  28. Ahmed, H.M.: Highly accurate method for a singularly perturbed coupled system of convection–diffusion equations with Robin boundary conditions. J. Nonlinear Math. Phys. 31(17), 1–19 (2024)


  29. Ahmed, H.M.: Highly accurate method for boundary value problems with Robin boundary conditions. J. Nonlinear Math. Phys. 30, 1239–1263 (2023)


  30. Hafez, R.M., Youssri, Y.H., Atta, A.G.: Jacobi rational operational approach for time-fractional sub-diffusion equation on a semi-infinite domain. Contemp. Math. 4(4), 853–876 (2023)


  31. Youssri, Y.H., Hafez, R.M.: Exponential Jacobi spectral method for hyperbolic partial differential equations. Math. Sci. 13(4), 347–354 (2019)


  32. Abd-Elhameed, W.M., Ahmed, H.M.: Tau and Galerkin operational matrices of derivatives for treating singular and Emden–Fowler third-order-type equations. Int. J. Mod. Phys. C 33(05), 2250061 (2022)


  33. Kilbas, A.A., Marichev, O.I., Samko, S.G.: Fractional Integrals and Derivatives: Theory and Applications (1993)


  34. Szeg, G.: Orthogonal Polynomials, Volume XXIII, 4th edn. Am. Math. Soc., Providence (1975)


  35. Luke, Y.L.: Mathematical Functions and Their Approximations. Academic Press, London (1975)


  36. Ahmed, H.M.: Enhanced shifted Jacobi operational matrices of derivatives: spectral algorithm for solving multiterm variable-order fractional differential equations. Bound. Value Probl. 2023(108), 108 (2023)


  37. Youssri, Y.H.: A new operational matrix of Caputo fractional derivatives of Fermat polynomials: an application for solving the Bagley–Torvik equation. Adv. Differ. Equ. 2017, 73 (2017)


  38. Bhrawy, A.H., Abd-Elhameed, W.M.: New algorithm for the numerical solutions of nonlinear third-order differential equations using Jacobi–Gauss collocation method. Math. Probl. Eng. 2011, Article ID 837218 (2011)


  39. Akram, G., Beck, C.: Hierarchical cascade model leading to 7-th order initial value problem. Appl. Numer. Math. 91, 89–97 (2015)


  40. Napoli, A., Abd-Elhameed, W.M.: An innovative harmonic numbers operational matrix method for solving initial value problems. Calcolo 54, 57–76 (2017)


  41. Doha, E.H., Bhrawy, A.H., Ezz-Eldien, S.S.: Efficient Chebyshev spectral methods for solving multi-term fractional orders differential equations. Appl. Math. Model. 35(12), 5662–5672 (2011)



Funding

Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). No funding was received to assist with the preparation of this manuscript.

Author information


Contributions

H.M. Ahmed wrote the main manuscript text and prepared all figures. He reviewed the manuscript.

Corresponding author

Correspondence to H. M. Ahmed.

Ethics declarations

Ethics approval and consent to participate

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ahmed, H.M. Enhanced shifted Jacobi operational matrices of integrals: spectral algorithm for solving some types of ordinary and fractional differential equations. Bound Value Probl 2024, 75 (2024). https://doi.org/10.1186/s13661-024-01880-0
