An application of artificial neural networks for solving fractional higher-order linear integro-differential equations
Boundary Value Problems volume 2023, Article number: 74 (2023)
Abstract
This work is dedicated to the numerical investigation of a class of ordinary linear Volterra-type integro-differential equations of fractional order. By replacing the unknown function with an appropriate multilayered feedforward-type neural structure, the fractional initial value problem is transformed into a nonlinear minimization problem. Put differently, interest was sparked in structuring an optimized iterative first-order algorithm to estimate solutions of the original fractional problem. In addition, several computer simulations exemplify the precision and well-functioning of the indicated iterative technique. The accomplished numerical outcomes reflect the productivity and competency of artificial neural network methods compared to customary approaches.
1 Introduction
To the best of our knowledge, the extension of the notions of differentiation and integration to arbitrary non-integer (real or complex) order forms the well-established foundation of fractional calculus. Over the course of the last two decades, many researchers have studied fractional calculus in the domain of modern mathematics. Fractional-order integro-differential equations (FOIDEs) are widely utilized in applied mathematics as well as in other linked domains of science and engineering. This is one of the foremost reasons why the issue tackled in the current study has attracted a large circle of researchers and scientists. It is well known that a majority of initial or boundary value problems involving fractional derivatives cannot be solved explicitly. A class of nonlinear fractional Fredholm–Volterra–Hammerstein integro-differential delay equations with a functional bound was studied by Kurkcu [14]. Wang [23] developed a hybrid method based on the combination of Bernoulli polynomial approximation, the Caputo fractional derivative, and a numerical integral transformation to approximate solutions of two-dimensional nonlinear Volterra–Fredholm integral equations and fractional integro-differential equations (of Hammerstein and mixed types). Nemati and Lima used a modification of hat functions to solve a class of nonlinear singular fractional integro-differential equations [17]. Elbeleze et al. [5] applied the homotopy perturbation and variational iteration methods to Fredholm–Volterra-type integro-differential equations with initial/boundary conditions. Wang and Zhu [24] employed Euler wavelet approximation to solve nonlinear fractional-order Volterra-type integro-differential equations.
Bazgir and Ghazanfari [2] combined a fourth-order self-adjoint operator, fractional-order eigenfunctions, and shifted Legendre polynomials to obtain a numerical solution for a fourth-order fractional partial integro-differential equation. Rahimkhani et al. [19] used the Bernoulli pseudospectral method to solve nonlinear fractional Volterra integro-differential equations. A large number of numerical methods deal with solutions of several types of fractional integro-differential equations; for more details, the reader is referred to references [1, 7, 25, 26]. It is noteworthy that Bentrcia and Mennouni [3] investigated the asymptotic stability of a viscoelastic Bresse system in a one-dimensional bounded domain; they introduced two internal damping terms expressed using the generalized Caputo fractional derivative. It also seems necessary to mention that Mennouni [15] established an improved convergence analysis via the Kulkarni method to approximate the solution of an integro-differential equation in \(L^{2}([-1,1])\).
In order to model and solve newly emerging and complex mathematical problems, it is highly recommended to employ artificial neural networks (ANNs), which simulate the neural structure of the human brain. It is worth recalling that multiple structures of these networks have previously been used to estimate solutions of different mathematical problems in applied settings (for instance, see [9, 10, 22]). Also, remarkable fractional-order mathematical problems have been numerically examined through the ANN approach in the recent past [11–13, 21]. In this paper, an appropriate structure of ANNs is introduced and then applied to numerically solve a fractional higher-order linear Volterra-type integro-differential equation with initial conditions. To model the fractional problem in question, an appropriate three-layered feedforward neural network is designed and applied. The neural architecture, based on a first-order gradient descent optimization algorithm, transforms the original fractional problem into a minimization problem after incorporating the initial conditions. A backpropagation (BP) algorithm is then used to train the designed network until the network error reaches an acceptable value. At that point, the proposed ANN architecture is able to estimate the unknown function on the solution domain to any desired accuracy. The present paper is organized as follows: In Sect. 2, the notations and definitions used in fractional calculus and ANNs are briefly elaborated. In Sect. 3, a suitable neural network architecture is structured for estimating solutions of the mentioned fractional problem. In Sect. 4, numerical results are illustrated to indicate the accuracy of the proposed iterative technique. Finally, Sect. 5 discusses the major outcomes of the recommended method.
2 Preliminaries
As declared before, the preeminent goal of the current research is to apply the ANN approach to approximate the solution of a FOIDE problem. The current section provides a clear explanation of several required mathematical interpretations, features of fractional calculus theory, and the ANN approach.
2.1 Fractional calculus
A short summary of the literature on fractional calculus reveals that the field began with the question "how can a function's derivative and integral be generalized to a non-integer order?" Following this question, the mathematical meaning of a non-integer-order integral or derivative was placed under the spotlight by a number of scholars. Lacroix presented the first research concerning fractional-order derivatives [20]. Over the following years, numerous researchers studied fractional calculus and proffered various applicable definitions of non-integer-order derivatives and integrals. Among these, the Caputo and Riemann–Liouville definitions seem to be the two most used ones; each has its own appropriate operational range. The Caputo definition more appropriately describes fractional-order initial value problems [27]. Due to its compatibility with classical initial conditions, we use Caputo's fractional definition in this research. The Caputo fractional differential operator, proposed by the Italian mathematician Caputo [4], is defined below.
Definition 1
Let \(u(x)\) be a continuously differentiable function on a finite interval \([a,b]\) up to order k. The Caputo derivative \(D^{\alpha}_{x}\) and fractional integral operator \(I^{\alpha}_{a,x}\) of order \(\alpha > 0\) are defined as follows:

$$ D^{\alpha}_{x}u(x)=\frac{1}{\Gamma (k-\alpha )} \int _{a}^{x} \frac{u^{(k)}(t)}{(x-t)^{\alpha -k+1}}\,dt, \quad k-1< \alpha \leq k, $$(1)

$$ I^{\alpha}_{a,x}u(x)=\frac{1}{\Gamma (\alpha )} \int _{a}^{x}(x-t)^{\alpha -1}u(t)\,dt, $$(2)
respectively. Many studies have been conducted on the properties and performance of the Caputo fractional operator; here we focus on the properties used below. It should be noted that the Caputo derivative of any order of a constant function is zero and that the following attributes hold:

$$ D^{\alpha}_{x}x^{\beta}= \textstyle\begin{cases} \frac{\Gamma (\beta +1)}{\Gamma (\beta -\alpha +1)}x^{\beta -\alpha}, & \beta \in \mathbf{N},\ \beta \geq \lceil \alpha \rceil , \\ 0, & \beta \in \mathbf{N},\ \beta < \lceil \alpha \rceil , \end{cases} $$(3)

$$ D^{\alpha}_{x}I^{\alpha}_{a,x}u(x)=u(x). $$(4)
In the above relations, the notation \(\lceil \alpha \rceil \) indicates the smallest integer greater than or equal to constant α.
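The behavior just described, with the Caputo derivative annihilating constants (and integer powers below \(\lceil \alpha \rceil \)) while acting on higher integer powers through the Gamma function, can be checked numerically. The following Python snippet is an illustrative helper of ours, not part of the paper's method:

```python
from math import gamma, ceil

def caputo_power(n: int, alpha: float, x: float) -> float:
    """Caputo derivative of order alpha of u(x) = x**n, evaluated at x."""
    if n < ceil(alpha):
        return 0.0  # constants and low-order integer powers are annihilated
    return gamma(n + 1) / gamma(n + 1 - alpha) * x ** (n - alpha)

# alpha = 1 recovers the classical derivative: d/dx x^2 at x = 0.5 gives 1.0
print(caputo_power(2, 1.0, 0.5))  # -> 1.0
print(caputo_power(0, 0.5, 0.7))  # constant function -> 0.0
```

For instance, \(D^{1/2}_{x}x = \Gamma (2)/\Gamma (3/2)\,x^{1/2} = (2/\sqrt{\pi})\sqrt{x}\), which the helper reproduces.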
2.2 Basic structure of ANNs
As time passes, new computational methods are developed and others fall into disfavor. As is well known, an ANN is a brain-inspired computing system intended to enable computer systems to learn from experience. The logic of the ANN procedure is to gather training data and then automatically establish a system that is capable of learning from it. Based on this perspective, we consider a neural architecture called the "perceptron," suggested by Frank Rosenblatt, who founded an important research program. In such a network, input signals are introduced to the input neurons. The first layer (the input layer) does not change the input signals' values. Neurons in the second layer (the hidden layer) combine their inputs using a set of network weights and biases; the combined inputs then pass through a proper activation function at each hidden node. Here, the sigmoidal activation function is employed to control the output range of the hidden neurons. The output of each node in the hidden layer is then passed to the last layer of neurons to generate the output of the network. One should bear in mind that this model, like many others, uses the identity function in the input and output layers. For more details on the proposed approach, see [6, 8]. The neural architecture illustrated in Fig. 1 can be formulated as follows:

input layer unit:
$$ o^{1}_{1}=x; $$(5) 
hidden layer units:
$$\begin{aligned}& o^{2}_{i}=f(\mathit{net}_{i}), \quad i=1,\ldots,I, \\& \mathit{net}_{i}=x.w^{1}_{i}+b_{i}, \end{aligned}$$(6)where the symbol f denotes the sigmoid activation function;

output layer unit:
$$ \mathit{Net}(x)=\sum_{i=1}^{I} \bigl(w^{2}_{i}.o^{2}_{i}\bigr)= \sum_{i=1}^{I}\bigl(w^{2}_{i}.f \bigl(w^{1}_{i}.x+b_{i}\bigr)\bigr). $$(7)It should be mentioned at this point that, after some slight alterations, this prototype network model becomes an efficient tool for modeling the main problem.
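Equations (5)–(7) amount to a one-input, one-output forward pass. The short Python sketch below is our illustration (the helper name is ours) of \(\mathit{Net}(x)\) for given weight and bias vectors:

```python
import math

def net(x, w1, w2, b):
    """Forward pass of Eqs. (5)-(7): identity input unit, sigmoid hidden
    units o2_i = f(w1_i * x + b_i), and a linear output combination."""
    f = lambda z: 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
    return sum(w2_i * f(w1_i * x + b_i) for w1_i, w2_i, b_i in zip(w1, w2, b))

# One hidden unit with zero weight and bias: f(0) = 1/2, so Net = 2 * 1/2
print(net(0.0, [0.0], [2.0], [0.0]))  # -> 1.0
```

Training (Sect. 3) only adjusts the vectors `w1`, `w2`, and `b`; the forward pass itself stays fixed.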
3 Description of the proposed method
FOIDEs have gained a lot of attention due to their applicability in various scientific disciplines. Generally, there is no straightforward way to find exact solutions of fractional problems, so researchers must resort to numerical methods. The primary objective of this section is to use the ANN approach (see Fig. 1) to approximate the solution of a given class of fractional integro-differential equations. Hence, we consider an initial value problem for an ordinary linear Volterra-type integro-differential equation of fractional order, involving the Caputo derivative, of the form
under the influence of initial conditions
Here P, Q, and H are specified real-valued analytic functions on the region \((a,b)\). The proposed optimization technique essentially combines a finite Maclaurin series approximation of the solution function with a suitable optimization strategy. The power series (PS) method is applicable for estimating the solution of a minimization (or maximization) problem on a given region. We use the PS method to approximate the unknown function \(u(x)\) after rewriting it in an applicable trial solution form. To employ this strategy, the initial conditions must first be built into the original problem. For the equation above, the trial solution is written as follows:
In the course of this research, the network parameter vectors \(w^{1}\), \(w^{2}\), and b are approximated using the BP machine learning algorithm.
3.1 Formulating a minimization problem
As shown in the previous part, the intended ANN architecture can completely model and imitate the fractional problem (8) with the assistance of the trial solution (9). Here it is important to be aware that the neural network needs to be fully trained before it can be treated as a substitute for the unknown function \(u(x)\). In other words, the learning objective is to find proper quantitative values for the parameters of the network, i.e., \(w^{1}_{i}\), \(w^{2}_{i}\), and \(b_{i}\) (for \(i=1,\ldots ,n\)), in such a way as to approximate the solution function with high precision. Therefore, the original problem (8) is reduced to a corresponding minimization problem by discretizing the domain \(\Omega =(a,b)\). In this discretization procedure, \(\Omega _{r}\) is a partition of the domain Ω with the nodal points \(x_{r}=a+\frac{r(b-a)}{R}\) (for \(r=0,\ldots,R\), \(R\in \mathbf{N}\)). For simplicity, the research is continued under the supposition that \((a,b)=(0,1)\); the general case can always be reduced to this one by the affine map \(x\mapsto \frac{x-a}{b-a}\), \(x \in (a,b)\). Substituting the above trial solution into equation (8) leads to the following applicable form:
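As a small illustration of the discretization just described, the following Python sketch (the helper names are ours) builds the nodal points \(x_{r}=a+r(b-a)/R\) and the affine map that carries a general interval \((a,b)\) onto \((0,1)\):

```python
def nodes(a: float, b: float, R: int):
    """Equally spaced collocation points x_r = a + r(b-a)/R, r = 0..R."""
    return [a + r * (b - a) / R for r in range(R + 1)]

def to_unit(x: float, a: float, b: float) -> float:
    """Affine map (x-a)/(b-a), carrying the interval (a,b) onto (0,1)."""
    return (x - a) / (b - a)

xs = nodes(2.0, 4.0, 4)                    # [2.0, 2.5, 3.0, 3.5, 4.0]
print([to_unit(x, 2.0, 4.0) for x in xs])  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each node \(x_{r}\) later supplies one residual term \(E_{r}\) of the minimization problem.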
Here one must expand the operators \(D^{\alpha _{1}}_{x}\) and \(I^{\alpha _{2}}_{0,x}\) applied to the series involving the nonlinear activation function f. From a mathematical point of view, computing the fractional-order derivative and integral of the nonlinear function f is very complicated, so an alternative scheme is needed. For calculating higher-order derivatives of the sigmoid, the following recurrence relation is used [16]:
The constant coefficients \(\xi ^{n}_{k}\) for the initial values of n and k are shown in Table 1. After replacing equation (11) in equation (10) and simplifying the result, the following result is obtained:
Now, we have
To complete this strategy, the identified points \(x_{r}\) (for \(r=0,\ldots,R\)) are put into equation (13). In the end, the differentiable least mean square (LMS) algorithm is used to improve the optimization strategy as follows:
Improving this system using a suitable error rectification strategy is our objective in the following part. For more details, see reference [8].
3.1.1 Proposed machine learning approach
As explained above, the integro-differential fractional initial value problem (8) was converted into an optimization model by applying the LMS rule. To solve the resulting system, the network error needs to be minimized over the network parameter space (weights and biases). To do so, the quadratic error function, consisting of the sum of the squared network errors \(E_{r}\) (for \(r=0,\ldots ,R\)), is minimized using the standard BP (gradient descent-based) algorithm. BP is a widely used iterative learning procedure for training a neural network, based on the adjustment of weights and biases. At the beginning of training, the network parameters, i.e., \(w^{1}_{i}\), \(w^{2}_{i}\), and \(b_{i}\), are set to real-valued random constants. Then the differentiable error function \(E=\sum_{r=0}^{R}E_{r}\) is minimized via the supervised BP learning rule. The algorithm is established for parameter \(w^{2}_{i}\) as follows:
where τ, η, and γ are the iteration number, learning rate, and momentum term, respectively. The quantities \(w^{2}_{i}(\tau +1)\) and \(w^{2}_{i}(\tau )\) denote the updated and current weight parameters for each training index i, respectively. To complete the learning process, the partial derivative \(\frac{\partial E_{r}}{\partial w^{2}_{i}}\) is given as follows:
In a process similar to that for the weight parameter \(w^{2}_{i}\), this updating routine is repeated for parameter \(w^{1}_{i}\) as follows:
where
In this case, the bias parameter adjustment relations are analogous to those given above. Hence, we obtain
where
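Taken together, the updating relations of this subsection all follow the same gradient descent with momentum pattern: each parameter p receives the increment \(\Delta p(\tau +1)=-\eta \,\partial E/\partial p+\gamma \,\Delta p(\tau )\) (standard descent sign convention assumed here). The toy Python sketch below is our illustration rather than the paper's code; the objective \(E=w^{2}\) with gradient \(2w\) stands in for the network error:

```python
def momentum_step(w, grad, prev_delta, eta=0.05, gamma=0.01):
    """One BP update: delta = -eta * dE/dw + gamma * previous delta."""
    delta = -eta * grad + gamma * prev_delta
    return w + delta, delta

w, d = 1.0, 0.0
for _ in range(3):                       # three illustrative iterations
    w, d = momentum_step(w, 2.0 * w, d)  # dE/dw = 2w for E = w**2
print(w)  # -> 0.72719, steadily descending toward the minimizer w = 0
```

The momentum term γ reuses a fraction of the previous step, which damps oscillations when successive gradients disagree in sign.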
More generally, consider an initial value problem for a fractional higher-order linear Volterra-type integro-differential equation of the form
with initial conditions
where \(m<\alpha _{1}\leq m+1\), \(m'<\alpha _{2}\leq m'+1\), and \(m, m'\in \mathbf{N}^{>1}\). The trial solution for this problem is chosen as follows:
To continue the procedure, the trial solution (19) is substituted into equation (18) and the result is simplified. The corresponding optimization system is then obtained for \(x=x_{r}\). As previously indicated, the resulting problem can be minimized with the help of the BP learning rule; to avoid repetition, the related updating relations are not rewritten here.
4 Illustrative examples
Two test problems are treated in this section to show the productivity and suitability of the suggested method. A comparison is made with the method described in [18] to contribute to a better understanding and to show the accuracy of the proposed method. All numerical computations were carried out using Matlab R2013b. The parameters were set as follows:
1. learning rate \(\eta =0.05\),
2. momentum constant \(\gamma =0.01\),
3. PS truncation order \(N=6\),
4. number of nodal points \(R=11\).
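To show how these settings fit together, the following Python sketch is entirely our illustration, not the authors' code: it runs a BP loop with \(\eta =0.05\), \(\gamma =0.01\), and \(R=11\), but a plain least-squares fit of the exact solution of Example 4.1 stands in for the fractional residual \(E_{r}\), and finite differences replace the paper's analytic gradients:

```python
import math, random

def train(target, I=5, R=11, eta=0.05, gamma=0.01, epochs=1000, h=1e-6, seed=0):
    """Fit Net(x) to target on the R+1 nodal points by gradient descent
    with momentum; finite differences stand in for the analytic BP gradients."""
    rng = random.Random(seed)
    p = [rng.uniform(0.0, 0.5) for _ in range(3 * I)]   # packed [w1 | w2 | b]
    v = [0.0] * len(p)                                   # momentum terms
    xs = [r / R for r in range(R + 1)]                   # nodal points on (0,1)
    f = lambda z: 1.0 / (1.0 + math.exp(-z))
    net = lambda q, x: sum(q[I + i] * f(q[i] * x + q[2 * I + i]) for i in range(I))
    E = lambda q: sum((net(q, x) - target(x)) ** 2 for x in xs)
    for _ in range(epochs):
        for j in range(len(p)):
            q = p[:]
            q[j] += h
            g = (E(q) - E(p)) / h                        # finite-difference dE/dp_j
            v[j] = -eta * g + gamma * v[j]               # momentum update
            p[j] += v[j]
    return p, E(p)

params, err = train(lambda x: x * x + 1.0)  # exact solution u(x) = x^2 + 1
print(err)  # residual sum-of-squares error after training
```

In the full method the target values are not known; instead each \(E_{r}\) measures how well the trial solution satisfies the fractional equation at \(x_{r}\), but the training mechanics are the same.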
Example 4.1
First, consider the following higher-order linear fractional Volterra-type integro-differential equation:
with initial conditions \(u(0)=1\), \(u'(0)=0\) and the exact solution \(u(x)=x^{2}+1\). To proceed, the network parameters \(w^{1}_{i}\), \(w^{2}_{i}\), and \(b_{i}\) (for \(i=1,\ldots,5\)) are initialized with real-valued random constants. The network parameters are then adjusted over \(\tau =1000\) successive iterations. The obtained results are given in Table 2, confirming the accuracy of the technique introduced in this study. The total network error E is plotted in Fig. 2. In addition, the exact and approximate solutions are plotted in Fig. 3 for several numbers of iterations. The absolute errors between the exact and the approximate solutions are shown in Fig. 4. The proficiency of the designed ANN structure for various configurations is illustrated using the \(E_{\mathit{mid}}\) function in Fig. 5.
Note that each time the training procedure was performed, the adjustable parameters were randomly selected as small positive real numbers.
Example 4.2
Consider the following fractional initial value problem:
with initial conditions \(u(0)=1\), \(u'(0)=1\) and the exact solution \(u(x)=x^{2}\) on a finite domain \(0\leq x\leq 1\). The main purpose of this example is to compare the numerical results (\(E_{\mathit{mid}}\) errors) obtained from the proposed model with the samples acquired by the Bessel polynomials method presented in [18], which are given in Table 3. These results allow us to claim that the proposed hybrid algorithm is able to approximate the unknown function with desired accuracy.
5 Conclusion
In this study, a combination of ANNs and the PS approach has been effectually employed to approximate the solution of a Caputo-type ordinary higher-order linear fractional Volterra integro-differential problem. To transmute the mentioned fractional problem into a minimization one, some felicitous features of the PS method together with the LMS rule were implemented. ANNs' copious structures have long been ingrained in the modeling and simulation of numerous realistic intricate phenomena. Because of the exceedingly complex structure of the observed problem, the error BP algorithm was used with slight adjustments to the learning procedure. The designed multilayer neural architecture was then utilized to solve the optimization problem on the given subdomains. Two fractional problems were tested to assess the dependability of the present numerical method. Comparison of the obtained numerical solutions with the corresponding exact ones for different partitionings of the solution domain revealed that the proposed technique is very effective and reliable. Providing fractional derivatives of different orders of the employed type of activation function in the hidden neurons is by far the most important result of this research; to achieve this, an effectual formulation for calculating fractional derivatives of the sigmoidal function was a definite must.
This article is expected to underline the significance of the proposed method not only for the class of problems solved here but also for other studies in related fields. By expanding the recommended strategy to a broad class of nonlinear situations, the shortcomings of prior research can be overcome and new ideas can be found for solving new problems.
Availability of data and materials
Not applicable.
References
Alkan, S., Hatipoglu, V.: Approximate solutions of Volterra–Fredholm integro-differential equations of fractional order. Tbil. Math. J. 10(2), 1–13 (2017)
Bazgir, H., Ghazanfari, B.: Spectral solution of fractional fourth order partial integro-differential equations. Comput. Methods Differ. Equ. 7(2), 289–301 (2019)
Bentrcia, T., Mennouni, A.: On the asymptotic stability of a Bresse system with two fractional damping terms: theoretical and numerical analysis. Am. Inst. Math. Sci. 28(1), 580–622 (2023)
Caputo, M.: Linear models of dissipation whose Q is almost frequency independent—II. Geophys. J. R. Astron. Soc. 13, 529–539 (1967)
Elbeleze, A.A., Kilicman, A., Taib, B.M.: Approximate solution of integro-differential equation of fractional (arbitrary) order. J. King Saud Univ., Sci. 28(1), 61–68 (2016)
Graupe, D.: Principles of Artificial Neural Networks, 2nd edn. World Scientific, Singapore (2007)
Hamoud, A.A., Ghadle, K.P., Atshan, S.H.: The approximate solutions of fractional integro-differential equations by using modified Adomian decomposition method. Khayyam J. Math. 5(1), 21–39 (2019)
Hassoun, M.H.: Fundamentals of Artificial Neural Networks. MIT Press, Cambridge (1995)
Jafarian, A., Measoomy Nia, S., Abbasbandy, S.: Artificial neural networks based modeling for solving Volterra integral equations system. Appl. Soft Comput. 27, 391–398 (2015)
Jafarian, A., Measoomy Nia, S., Jafari, R.: Solving fuzzy equations using neural nets with a new learning algorithm. J. Adv. Comput. Res. 3(4), 33–45 (2012)
Jafarian, A., Mokhtarpour, M., Baleanu, D.: Artificial neural network approach for a class of fractional ordinary differential equation. Neural Comput. Appl. 28(4), 765–773 (2017)
Jafarian, A., Measoomy Nia, S.: An application of ANNs on power series method for solving fractional Fredholm type integro-differential equations. Neural Parallel Sci. Comput. 24 (2016)
Jafarian, A., Measoomy Nia, S., Golmankhaneh, A.K., Baleanu, D.: On artificial neural networks approach with new cost functions. Appl. Math. Comput. 339(15), 546–555 (2018)
Kurkcu, O.K.: An evolutionary numerical method for solving nonlinear fractional Fredholm–Volterra–Hammerstein integro-differential delay equations with a functional bound. Int. J. Comput. Math. 99(11), 2159–2174 (2022)
Mennouni, A.: Improvement by projection for integro-differential equations. Math. Methods Appl. Sci. (2020). https://doi.org/10.1002/mma.6318
Minai, A.A., Williams, R.D.: On the derivatives of the sigmoid. Neural Netw. 6, 845–853 (1993)
Nemati, S., Lima, P.M.: Numerical solution of nonlinear fractional integro-differential equations with weakly singular kernels via a modification of hat functions. Appl. Math. Comput. 327(15), 79–92 (2018)
Ordokhani, Y., Dehestani, H.: Numerical solution of linear Fredholm–Volterra integro-differential equations of fractional order. World J. Model. Simul. 12(3), 204–216 (2016)
Rahimkhani, P., Ordokhani, Y., Babolian, E.: A numerical scheme for solving nonlinear fractional Volterra integro-differential equations. Iran. J. Math. Sci. Inform. 13(2), 111–132 (2018)
Ross, B.: Fractional Calculus and Its Applications. Proceedings of the International Conference Held at the University of New Haven. Springer, Berlin (1974)
Rostami, F., Jafarian, A.: A new artificial neural network structure for solving high-order linear fractional differential equations. Int. J. Comput. Math. 95(3), 528–539 (2018)
Soltani, Z., Jafarian, A.: A new artificial neural networks approach for diagnosing diabetes disease type II. Int. J. Adv. Comput. Sci. Appl. 7(6), 89–94 (2016)
Wang, J.: Numerical algorithm for two-dimensional nonlinear Volterra–Fredholm integral equations and fractional integro-differential equations (of Hammerstein and mixed types). Eng. Comput. 38(9), 3548–3563 (2021)
Wang, Y., Zhu, L.: Solving nonlinear Volterra integro-differential equations of fractional order by using Euler wavelet method. Adv. Differ. Equ. (2017). https://doi.org/10.1186/s13662-017-1085-6
Wei, J., Tian, T.: Numerical solution of nonlinear Volterra integro-differential equations of fractional order by the reproducing kernel method. Appl. Math. Model. 39, 4871–4876 (2015)
Yang, A.M., Han, Y., Mang, Y.Z.: On local fractional Volterra integro-differential equations in fractal steady heat transfer. Therm. Sci. 20, 789–793 (2016)
Zhou, D., Zhang, K., Ravey, A., Gao, F., Miraoui, A.: Parameter sensitivity analysis for fractional-order modeling of lithium-ion batteries. Energies 9(3), 1–26 (2016)
Acknowledgements
The authors would like to thank the editor and the reviewers for the detailed and valuable suggestions that helped to improve the original manuscript to its present form.
Funding
The work of UFG was supported by the government of the Basque Country through the ELKARTEK21/10 KK-2021/00014 and ELKARTEK20/78 KK-2020/00114 research programs, respectively.
Author information
Authors and Affiliations
Contributions
TA is the supervisor of this study and was a major contributor in methodology, investigation and validation. AJ and RS worked on resources, investigation and formal analysis of this study. SMN, FK, UFG and SN worked on software, writing—review and editing and validation of the results. All authors have main contributions in writing the original draft preparation and also writing—review and editing the paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Allahviranloo, T., Jafarian, A., Saneifard, R. et al. An application of artificial neural networks for solving fractional higher-order linear integro-differential equations. Bound Value Probl 2023, 74 (2023). https://doi.org/10.1186/s13661-023-01762-x
DOI: https://doi.org/10.1186/s13661-023-01762-x