
Fitted numerical method for singularly perturbed Burger–Huxley equation

Abstract

This paper deals with the numerical treatment of a singularly perturbed unsteady Burger–Huxley equation. We linearize the problem using the Newton–Raphson–Kantorovich approximation method. We discretize the resulting linear problem using the implicit Euler method in time and a specially fitted finite difference method in space. We provide the stability and convergence analysis of the method, which is first-order parameter uniformly convergent. We present several model examples to illustrate the efficiency of the proposed method. The numerical results show that the present method is more accurate than some methods available in the literature.

1 Introduction

We consider the following unsteady nonlinear singularly perturbed Burger–Huxley equation:

$$ \textstyle\begin{cases} \text{\pounds}_{\varepsilon}y(s,t)=\frac{\partial y}{\partial t}- \varepsilon \frac{\partial ^{2}y}{\partial s^{2}}+\alpha y \frac{\partial y}{\partial s} -\lambda (1-y ) (y- \theta ) =0, \\ (s,t)\in \Im =\Im _{s}\times \Im _{t}=(0,1)\times (0,T] , \\ y(s,0)=y_{0}(s),\quad s\in \overline{\Im}_{s}, \\ y(0,t)=\wp _{0}(t),\qquad y(1,t)=\wp _{1}(t),\quad t \in (0,T], \end{cases} $$
(1)

with left boundary \(\Im _{l}= \lbrace (s,t ):s=0,t\in \Im _{t} \rbrace \), initial boundary \(\Im _{i}= \lbrace (s,t ):t=0,s\in \Im _{s} \rbrace \), and right boundary \(\Im _{r}= \lbrace (s,t ):s=1,t\in \Im _{t} \rbrace \), where ε is a small singular perturbation parameter such that \(0 < \varepsilon \ll 1\), \(\alpha \ge 1\), \(\lambda \ge 0\), \(\theta \in (0,1)\), and \(\partial \Im =\Im _{l}\cup \Im _{i}\cup \Im _{r}\). The functions \(\wp _{0}(t)\), \(\wp _{1}(t)\), and \(y_{0}(s) \) are assumed to be sufficiently smooth, bounded, and independent of ε. Equation (1) is a prototype model for describing the interaction between nonlinear convection effects, reaction mechanisms, and diffusion transport. It arises in many intriguing phenomena such as bursting oscillations [1], population genetics [2], interspike dynamics [3], bifurcation, and chaos [4]. Several membrane models based on the dynamics of potassium and sodium ion fluxes can be found in [5].

In [6–16] and references therein, the authors constructed various analytical and numerical methods for the Burger equations. The Burger–Huxley equation, in which the highest-order derivative term is multiplied by a small parameter ε \((0<\varepsilon \ll 1)\), is classified as the singularly perturbed Burger–Huxley equation (SPBHE). The presence of ε and the nonlinearity of the problem lead to severe difficulties in approximating its solution. For instance, due to the presence of ε, the solution exhibits boundary/sharp interior layer(s), and it is difficult to find a stable numerical approximation. While solving the SPBHE, unless specially designed meshes are used, the methods presented in [6–16] and other standard numerical methods fail to give acceptable results. This limitation of conventional numerical methods has encouraged researchers to develop robust numerical techniques that perform well independently of ε. Kaushik and Sharma [17] investigated problem (1) using the finite difference method (FDM) on a piecewise uniform Shishkin mesh. In [18] a monotone hybrid finite difference operator on a piecewise uniform Shishkin mesh is employed to find the approximate solution of problem (1). An upwind FDM on an adaptive nonuniform grid for problem (1) was suggested by Liu et al. [19]. In [20–23] the authors proposed parameter uniform numerical methods based on fitted operator techniques for problem (1).

However, the development of solution methodologies for problem (1) is still at an early stage. This motivated us to construct a parameter uniform numerical scheme for solving problem (1) based on the fitted operator approach. The proposed method is an ε-uniformly convergent numerical algorithm that does not require a priori knowledge of the location and width of the boundary layer(s); acquiring such knowledge would otherwise add to the difficulty of solving the problem under consideration. Moreover, the proposed method requires less computational effort to solve the families of the problem under consideration.

2 A priori estimates for the solution of the continuous problem

Lemma 2.1

(maximum principle)

If \(y\arrowvert _{\partial \Im}\ge 0\) and \((\textit{\pounds}_{\varepsilon} ) y\arrowvert _{\Im}\ge 0\), then \(y\arrowvert _{\overline{\Im}}\ge 0\).

Proof

See [18]. □

Lemma 2.2

(stability estimate)

The solution \(y(s,t)\) of Eq. (1) is bounded, that is,

$$ \Vert y \Vert _{\overline{\Im}} \le T \Vert y_{0} \Vert _{ \Im _{i}}+ \Vert y \Vert _{\partial{\Im}}.$$

Proof

See [18]. □

3 Formulation of the numerical scheme

3.1 Quasi-linearization technique

Equation (1) can be rewritten as

$$ \textstyle\begin{cases} \text{\pounds}_{\varepsilon}y(s,t)= ( \frac{\partial y}{\partial t}- \varepsilon \frac{\partial ^{2}y}{\partial s^{2}} ) ( s,t )=g(s, t, y(s, t), \frac{\partial y}{\partial s}(s,t)),\quad (s,t) \in \Im , \\ y(s,0)=y_{0}(s),\quad s\in \overline{\Im}_{s}, \\ y(0,t)=\wp _{0}(t),\qquad y(1,t)=\wp _{1}(t),\quad t \in (0,T], \end{cases} $$
(2)

where \(g(s, t, y(s, t), \frac{\partial y}{\partial s}(s,t))=-\alpha y \frac{\partial y}{\partial s}+\lambda (1-y ) (y- \theta ) \) is a nonlinear function of s, t, \(y(s,t)\), \(\frac{\partial y}{\partial s}(s,t)\).

To linearize the semilinear term of Eq. (1), we choose a reasonable initial approximation \(y^{0}(s,t) \) for the function \(y(s, t)\) in the term \(g(s, t, y(s, t), \frac{\partial y}{\partial s}(s,t))\). This approximation satisfies both the initial and boundary conditions and is obtained by applying the method of separation of variables to the homogeneous part of the problem under consideration; it is given by

$$ y^{0}(s,t)=y_{0}(s)\exp \bigl(-\pi ^{2}t\bigr). $$
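This initial guess is cheap to evaluate on a mesh. As a minimal sketch (the function name is ours), using NumPy and the initial condition of Example 5.1:

```python
import numpy as np

def initial_guess(y0, s, t):
    """Initial approximation y^0(s,t) = y0(s) * exp(-pi^2 * t), obtained by
    separation of variables on the homogeneous part of the problem."""
    return y0(s) * np.exp(-np.pi**2 * t)

y0 = lambda s: s * (1.0 - s**2)   # initial condition of Example 5.1
s = np.linspace(0.0, 1.0, 101)
print(initial_guess(y0, s, 0.0)[0])  # the homogeneous boundary value at s = 0 is preserved
```

Since the exponential factor equals 1 at \(t=0\), the guess reproduces \(y_{0}(s)\) there, and it vanishes on both lateral boundaries whenever \(y_{0}\) does.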

Now we apply the Newton–Raphson–Kantorovich approximation technique to the nonlinear term \(g(s, t, y(s, t), \frac{\partial y}{\partial s}(s,t))\) of Eq. (2), which can be linearized as

$$ \begin{aligned} &g\biggl(s, t, y^{(m+1)}(s, t), \frac{\partial y^{(m+1)}}{\partial s}(s,t)\biggr) \\ &\quad \cong g\biggl(s, t, y^{(m)}(s, t),\frac{\partial y^{(m)}}{\partial s}(s,t)\biggr) + \bigl(y^{(m+1)}(s, t)-y^{(m)}(s, t)\bigr) \frac{\partial g}{\partial y^{(m)}}\bigg|_{(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))} \\ &\qquad {}+\biggl(\frac{\partial y^{(m+1)}}{\partial s}(s,t)- \frac{\partial y^{(m)}}{\partial s}(s,t)\biggr) \frac{\partial g}{\partial ( \frac{\partial y^{(m)}}{\partial s} ) }\bigg|_{(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))}+ \cdots , \end{aligned} $$
(3)

where \(\lbrace y^{(m)} \rbrace ^{\infty}_{m=0}\) is a sequence of approximate solutions of \(g(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))\).

For simplicity, we denote \({y^{(m+1)}}=\hat{y}\) and substitute Eq. (3) into Eq. (2), which yields

$$ \textstyle\begin{cases} \text{\pounds}_{\varepsilon}\hat{y}(s,t)= ( \frac{\partial \hat{y}}{\partial t}-\varepsilon \frac{\partial ^{2} \hat{y}}{\partial s^{2}}+\gamma \frac{\partial \hat{y}}{\partial s}+\delta \hat{y} ) (s,t)=v(s,t), \\ \hat{y}(s,0)=y_{0}(s),\quad s\in \overline{\Im}_{s}, \\ \hat{y}(0,t)=\wp _{0}(t),\quad t\in \overline{\Im}_{t}, \\ \hat{y}(1,t)=\wp _{1}(t),\quad t\in \overline{\Im}_{t}, \end{cases} $$
(4)

where

$$\begin{aligned}& \gamma (s,t)=- \frac{\partial g}{\partial ( \frac{\partial y^{(m)}}{\partial s} ) }\bigg|_{(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))}, \\& \delta (s,t)=-\frac{\partial g}{\partial y^{(m)}}\bigg|_{(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))}, \\& \begin{aligned} v(s,t)&=g\biggl(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t)\biggr)-y^{(m)} \frac{\partial g}{\partial y^{(m)}}\bigg|_{(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))} \\ &\quad {} -\frac{\partial y^{(m)}}{\partial s}(s,t) \frac{\partial g}{\partial ( \frac{\partial y^{(m)}}{\partial s} ) }\bigg|_{(s, t, y^{(m)}(s, t), \frac{\partial y^{(m)}}{\partial s}(s,t))}. \end{aligned} \end{aligned}$$
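For the Burger–Huxley nonlinearity \(g=-\alpha y y_{s}+\lambda (1-y)(y-\theta )\), these partial derivatives are available in closed form. The sketch below (function names and parameter values are ours) evaluates γ, δ, and v at a previous iterate and can be cross-checked against finite differences of g:

```python
import numpy as np

alpha, lam, theta = 1.0, 1.0, 0.5   # illustrative parameter values

def g(y, p):
    """Nonlinear term of Eq. (2), with p standing for y_s."""
    return -alpha * y * p + lam * (1.0 - y) * (y - theta)

def linearized_coeffs(y, p):
    """gamma, delta, v of Eq. (4), evaluated at the iterate (y^(m), y_s^(m)).
    Closed-form partials: dg/dy = -alpha*p + lam*(1 + theta - 2y),
                          dg/d(y_s) = -alpha*y."""
    dg_dy = -alpha * p + lam * (1.0 + theta - 2.0 * y)
    dg_dp = -alpha * y
    gamma = -dg_dp
    delta = -dg_dy
    v = g(y, p) - y * dg_dy - p * dg_dp
    return gamma, delta, v
```

By construction, \(v = g + y^{(m)}\delta + y^{(m)}_{s}\gamma \), which is a convenient identity for testing an implementation.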

3.2 Temporal semidiscretization

Now we apply the implicit Euler method with uniform mesh \(\Im ^{M}_{\tau }= \lbrace j\tau , 0< j\le M, \tau =T/M \rbrace \) to Eq. (4) in the temporal variable:

$$ \textstyle\begin{cases} ( I+\tau \text{\pounds}_{\varepsilon}^{M} ) \hat{Y}(s,t_{j+1})=\hat{Y}(s,t_{j})+\tau v(s,t_{j+1}), \quad \text{where } \text{\pounds}_{\varepsilon}^{M}\hat{Y}= (-\varepsilon \frac{\partial ^{2} \hat{Y}}{\partial s^{2}}+ \gamma \frac{\partial \hat{Y}}{\partial s}+\delta \hat{Y} ), \\ \hat{Y}(s,0)=Y_{0}(s),\quad s\in \overline{\Im}_{s}, \\ \hat{Y}(0,t_{j+1})=\wp _{0}(t_{j+1}),\quad 0\le j\le M-1, \\ \hat{Y}(1,t_{j+1})=\wp _{1}(t_{j+1}),\quad 0\le j\le M-1. \end{cases} $$
(5)

Clearly, the operator \((I+\tau \text{\pounds}^{M}_{\varepsilon} ) \) satisfies the maximum principle, which confirms the stability of the semidiscrete equation (5).

Lemma 3.1

(local error estimate)

The local truncation error estimate \(e_{j+1}=\hat{y}(s,t_{j+1})-\hat{Y}(s,t_{j+1})\) of the solution of Eq. (5) is bounded by

$$ \Vert e_{j+1} \Vert _{\infty}\le C\tau ^{2}, $$

and the global error estimate in the temporal direction is given by

$$ \Vert E_{j} \Vert _{\infty} \le C\tau ,\quad j \le T/\tau , $$

where C is a positive constant independent of ε and τ.

Proof

See [23]. □

Lemma 3.2

The derivative of the solution \(Y^{j+1}(s)\) of Eq. (5) is bounded by

$$ \biggl\Vert \frac{\partial ^{i}Y^{j+1}(s)}{\partial s ^{i}} \biggr\Vert _{ \overline{\Im}_{s}} \le C \biggl( 1+ \varepsilon ^{-i}\exp \biggl( \frac{-(\gamma ^{*} (1-s))}{\varepsilon} \biggr) \biggr),\quad 0 \le i\le 4.$$

Proof

See [18]. □

Rewrite Eq. (5) as

$$ \textstyle\begin{cases} -\varepsilon \frac{d^{2}Y^{j+1}(s)}{ds^{2}}+ \gamma (s) \frac{dY^{j+1}(s)}{ds}+ Q(s)Y^{j+1}(s)=\vartheta ^{j+1}(s), \quad 0\le s \le 1, \\ Y^{j+1}(0)=\wp _{0}^{j+1}, \qquad Y^{j+1}(1)=\wp _{1}^{j+1}, \quad 0\le j\le M-1, \end{cases} $$
(6)

where

$$ Y^{j+1}(s)=\hat{Y}^{j+1}(s),\qquad Q(s)= \biggl( \delta (s)+ \frac{1}{\tau} \biggr), \qquad \vartheta ^{j+1}(s)= \biggl(v^{j+1}(s)+ \frac{Y^{j}(s)}{\tau} \biggr).$$
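In code, the passage from the semidiscrete problem (5) to the stationary form (6) is one line per coefficient. A minimal sketch (the function name is ours):

```python
import numpy as np

def per_level_data(delta, v_next, Y_prev, tau):
    """Data of the stationary problem (6) at one time level:
    Q = delta + 1/tau and the right-hand side theta^{j+1} = v^{j+1} + Y^j/tau."""
    Q = delta + 1.0 / tau
    rhs = v_next + Y_prev / tau
    return Q, rhs

Q, rhs = per_level_data(np.array([0.5, 1.0]), np.array([2.0, 3.0]),
                        np.array([1.0, 2.0]), tau=0.1)
print(Q, rhs)
```

Note that \(Q\) inherits a strictly positive lower bound from the \(1/\tau \) term, which is what the stability estimate of Lemma 4.2 exploits.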

3.3 Spatial semidiscretization

In this section, we use the finite difference method for the spatial discretization of problem (6) with a uniform step size. For the right boundary layer problem, by the theory of singular perturbations [24] the asymptotic solution of the zeroth-order approximation for problem (6) is given as

$$ Y^{j+1}(s)\approx Y^{j+1}_{0}(s)+ \frac{\gamma (1)}{\gamma (s)} \bigl( \wp _{1}^{j+1}-Y^{j+1}_{0}(s) \bigr) \exp \biggl(-\gamma (s) \frac{1-s}{\varepsilon} \biggr)+O(\varepsilon ), $$
(7)

where \(Y^{j+1}_{0}(s)\) is the solution of the reduced problem

$$ \gamma (s) \frac{dY^{j+1}_{0}(s)}{ds}+ Q(s)Y^{j+1}_{0}(s)=\vartheta ^{j+1}(s) \quad \text{with } Y^{j+1}_{0}(1)=\wp _{1}^{j+1}.$$

Taking the first term of the Taylor series expansion of \(\gamma (s)\) about the point \(s=1\), Eq. (7) becomes

$$ Y^{j+1}(s)\approx Y^{j+1}_{0}(s)+ \bigl(\wp _{1}^{j+1}-Y^{j+1}_{0}(s) \bigr) \exp \biggl(-\gamma (1)\frac{1-s}{\varepsilon} \biggr)+O( \varepsilon ). $$
(8)

Now we divide the interval \([0, 1]\) into N equal parts with \(\ell =1/N\) yielding a space mesh \(\Im ^{N}_{s}= \lbrace 0 = s_{0}, s_{1}, s_{2},\dots , s_{N} = 1 \rbrace \). Then we have \(s_{i}=i\ell \), \(i=0,1,2,\dots ,N\). By considering Eq. (8) at \(s_{i} = i\ell \) as \(\ell \rightarrow 0\) we obtain

$$ \lim_{\ell \rightarrow 0}Y^{j+1}(i\ell )\approx Y^{j+1}_{0}(0)+ \bigl(\wp _{1}^{j+1}-Y^{j+1}_{0}(1) \bigr) \exp \biggl(-\gamma (1) \biggl( \frac{1}{\varepsilon} -i\rho \biggr) \biggr) +O( \varepsilon ), $$
(9)

where \(\rho =\frac{\ell}{\varepsilon}\).

Let \(Y^{j+1}(s)\) be a smooth function in the interval \([0, 1]\). Then by applying Taylor’s series we have

$$ \begin{aligned} Y^{j+1}(s_{i+1})& \approx Y^{j+1}_{i+1} \\ &\approx Y^{j+1}_{i}+\ell \frac{dY^{j+1}_{i}}{ds}+ \frac{\ell ^{2}}{2!} \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}+\frac{\ell ^{3}}{3!} \frac{d^{3}Y^{j+1}_{i}}{ds^{3}}+ \frac{\ell ^{4}}{4!} \frac{d^{4}Y^{j+1}_{i}}{ds^{4}} \\ &\quad {}+\frac{\ell ^{5}}{5!}\frac{d^{5}Y^{j+1}_{i}}{ds^{5}}+ \frac{\ell ^{6}}{6!} \frac{d^{6}Y^{j+1}_{i}}{ds^{6}}+ \frac{\ell ^{7}}{7!}\frac{d^{7}Y^{j+1}_{i}}{ds^{7}}+ \frac{\ell ^{8}}{8!} \frac{d^{8}Y^{j+1}_{i}}{ds^{8}}+O\bigl(\ell ^{9}\bigr) \end{aligned} $$
(10)

and

$$ \begin{aligned} Y^{j+1}(s_{i-1})& \approx Y^{j+1}_{i-1} \\ &\approx Y^{j+1}_{i}-\ell \frac{dY^{j+1}_{i}}{ds}+ \frac{\ell ^{2}}{2!} \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}-\frac{\ell ^{3}}{3!} \frac{d^{3}Y^{j+1}_{i}}{ds^{3}}+ \frac{\ell ^{4}}{4!} \frac{d^{4}Y^{j+1}_{i}}{ds^{4}} \\ &\quad {}-\frac{\ell ^{5}}{5!}\frac{d^{5}Y^{j+1}_{i}}{ds^{5}}+ \frac{\ell ^{6}}{6!} \frac{d^{6}Y^{j+1}_{i}}{ds^{6}}- \frac{\ell ^{7}}{7!}\frac{d^{7}Y^{j+1}_{i}}{ds^{7}}+ \frac{\ell ^{8}}{8!} \frac{d^{8}Y^{j+1}_{i}}{ds^{8}} +O\bigl(\ell ^{9}\bigr). \end{aligned} $$
(11)

Adding Eq. (10) and Eq. (11), we get

$$ Y^{j+1}_{i-1}-2Y^{j+1}_{i}+Y^{j+1}_{i+1}= \frac{2\ell ^{2}}{2!} \frac{d^{2}Y^{j+1}_{i}}{ds^{2}} +\frac{2\ell ^{4}}{4!} \frac{d^{4}Y^{j+1}_{i}}{ds^{4}}+ \frac{2\ell ^{6}}{6!} \frac{d^{6}Y^{j+1}_{i}}{ds^{6}}+\frac{2\ell ^{8}}{8!} \frac{d^{8}Y^{j+1}_{i}}{ds^{8}}+O \bigl(\ell ^{10}\bigr) $$
(12)

and

$$ \begin{aligned} &\frac{d^{2}Y^{j+1}_{i-1}}{ds^{2}}-2\frac{d^{2}Y^{j+1}_{i}}{ds^{2}}+ \frac{d^{2}Y^{j+1}_{i+1}}{ds^{2}}\\ &\quad = \frac{2\ell ^{2}}{2!} \frac{d^{4}Y^{j+1}_{i}}{ds^{4}}+\frac{2\ell ^{4}}{4!} \frac{d^{6}Y^{j+1}_{i}}{ds^{6}}+ \frac{2\ell ^{6}}{6!} \frac{d^{8}Y^{j+1}_{i}}{ds^{8}}+\frac{2\ell ^{8}}{8!} \frac{d^{10}Y^{j+1}_{i}}{ds^{10}}+O \bigl(\ell ^{10}\bigr). \end{aligned} $$
(13)

Using Eq. (13) to eliminate the higher-order derivative terms in Eq. (12), we obtain

$$ Y^{j+1}_{i-1}-2Y^{j+1}_{i}+Y^{j+1}_{i+1}= \frac{\ell ^{2}}{30} \biggl( \frac{d^{2}Y^{j+1}_{i-1}}{ds^{2}}+28\frac{d^{2}Y^{j+1}_{i}}{ds^{2}}+ \frac{d^{2}Y^{j+1}_{i+1}}{ds^{2}} \biggr) +R, $$
(14)

where \(R=\frac{\ell ^{4}}{20}\frac{d^{4}Y^{j+1}_{i}}{ds^{4}}- \frac{13\ell ^{8}}{302400}\frac{d^{8}Y^{j+1}_{i}}{ds^{8}}+O(\ell ^{10})\).

Now from Eq. (6) we have

$$ \textstyle\begin{cases} \varepsilon \frac{d^{2}Y^{j+1}_{i-1}}{ds^{2}}=\gamma _{i-1} \frac{dY^{j+1}_{i-1}}{ds}+Q_{i-1}Y^{j+1}_{i-1}-\vartheta ^{j+1}_{i-1}, \\ \varepsilon \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}=\gamma _{i} \frac{dY^{j+1}_{i}}{ds}+Q_{i}Y^{j+1}_{i}-\vartheta ^{j+1}_{i}, \\ \varepsilon \frac{d^{2}Y^{j+1}_{i+1}}{ds^{2}}=\gamma _{i+1} \frac{dY^{j+1}_{i+1}}{ds}+Q_{i+1}Y^{j+1}_{i+1}-\vartheta ^{j+1}_{i+1}, \end{cases} $$
(15)

where we approximate \(\frac{dY^{j+1}_{i-1}}{ds}\), \(\frac{dY^{j+1}_{i}}{ds} \), and \(\frac{dY^{j+1}_{i+1}}{ds}\) using nonsymmetric finite differences [25]:

$$ \textstyle\begin{cases} \frac{dY^{j+1}_{i-1}}{ds}\approx \frac{-Y^{j+1}_{i+1}+4Y^{j+1}_{i}-3Y^{j+1}_{i-1}}{2\ell}+\ell \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}+O (\ell ^{2} ), \\ \frac{dY^{j+1}_{i}}{ds}\approx \frac{Y^{j+1}_{i+1}-Y^{j+1}_{i-1}}{2\ell}+O (\ell ^{2} ), \\ \frac{dY^{j+1}_{i+1}}{ds}\approx \frac{3Y^{j+1}_{i+1}-4Y^{j+1}_{i}+Y^{j+1}_{i-1}}{2\ell}-\ell \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}+O (\ell ^{2} ). \end{cases} $$
(16)
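The second-order accuracy of the three-point quotients underlying Eq. (16) is easy to verify numerically. The check below uses \(f=\sin \) as an arbitrary smooth test function of ours and omits the \(O(\ell )\) correction terms of Eq. (16), which the scheme later absorbs into the left-hand side of Eq. (18):

```python
import numpy as np

f, df = np.sin, np.cos
s, ell = 0.4, 1e-3
fm, fc, fp = f(s - ell), f(s), f(s + ell)

# Plain three-point quotients (one-sided and central):
left   = (-fp + 4.0 * fc - 3.0 * fm) / (2.0 * ell)   # ~ f'(s - ell)
center = (fp - fm) / (2.0 * ell)                     # ~ f'(s)
right  = (3.0 * fp - 4.0 * fc + fm) / (2.0 * ell)    # ~ f'(s + ell)

print(abs(left - df(s - ell)), abs(center - df(s)), abs(right - df(s + ell)))
```

All three errors scale like \(\ell ^{2}\): halving ℓ should reduce each printed value by roughly a factor of four.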

Substituting Eq. (16) into Eq. (15), we obtain

$$ \textstyle\begin{cases} \varepsilon \frac{d^{2}Y^{j+1}_{i-1}}{ds^{2}}=\gamma _{i-1} ( \frac{-Y^{j+1}_{i+1}+4Y^{j+1}_{i}-3Y^{j+1}_{i-1}}{2\ell} ) +Q_{i-1}Y^{j+1}_{i-1}- \vartheta ^{j+1}_{i-1}, \\ \varepsilon \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}=\gamma _{i} ( \frac{Y^{j+1}_{i+1}-Y^{j+1}_{i-1}}{2\ell} ) +Q_{i}Y^{j+1}_{i}- \vartheta ^{j+1}_{i}, \\ \varepsilon \frac{d^{2}Y^{j+1}_{i+1}}{ds^{2}}=\gamma _{i+1} ( \frac{3Y^{j+1}_{i+1}-4Y^{j+1}_{i}+Y^{j+1}_{i-1}}{2\ell} ) +Q_{i+1}Y^{j+1}_{i+1}- \vartheta ^{j+1}_{i+1}. \end{cases} $$
(17)

Inserting Eq. (17) into Eq. (14) and rearranging, we get

$$ \begin{aligned} & \biggl( \varepsilon + \frac{\gamma _{i+1}\ell}{30}- \frac{\gamma _{i-1}\ell}{30} \biggr) \biggl( \frac{Y^{j+1}_{i-1}-2Y^{j+1}_{i}+Y^{j+1}_{i+1}}{\ell ^{2}} \biggr) \\ &\quad = \biggl( \frac{-\gamma _{i-1}}{20\ell}+\frac{Q_{i-1}}{30}- \frac{7\gamma _{i}}{15\ell}+ \frac{\gamma _{i+1}}{60\ell} \biggr) Y^{j+1}_{i-1} + \biggl( \frac{\gamma _{i-1}}{15\ell}+\frac{14Q_{i}}{15}- \frac{\gamma _{i+1}}{15\ell} \biggr)Y^{j+1}_{i} \\ &\qquad {}+ \biggl( \frac{-\gamma _{i-1}}{60\ell}+\frac{7\gamma _{i}}{15\ell}+ \frac{\gamma _{i+1}}{20\ell}+ \frac{Q_{i+1}}{30} \biggr)Y^{j+1}_{i+1} - \frac{1}{30} \bigl( \vartheta ^{j+1}_{i-1}+28\vartheta ^{j+1}_{i}+ \vartheta ^{j+1}_{i+1} \bigr). \end{aligned} $$
(18)

Introducing a constant fitting factor \(\sigma (\rho )\) in Eq. (18), we obtain

$$ \begin{aligned} & \biggl( \sigma (\rho )\varepsilon + \frac{\gamma _{i+1}\ell}{30}- \frac{\gamma _{i-1}\ell}{30} \biggr) \biggl( \frac{Y^{j+1}_{i-1}-2Y^{j+1}_{i}+Y^{j+1}_{i+1}}{\ell ^{2}} \biggr) \\ &\quad = \biggl( \frac{-\gamma _{i-1}}{20\ell}+\frac{Q_{i-1}}{30}- \frac{7\gamma _{i}}{15\ell}+ \frac{\gamma _{i+1}}{60\ell} \biggr) Y^{j+1}_{i-1} + \biggl( \frac{\gamma _{i-1}}{15\ell}+\frac{14Q_{i}}{15}- \frac{\gamma _{i+1}}{15\ell} \biggr)Y^{j+1}_{i} \\ &\qquad {}+ \biggl( \frac{-\gamma _{i-1}}{60\ell}+\frac{7\gamma _{i}}{15\ell}+ \frac{\gamma _{i+1}}{20\ell}+ \frac{Q_{i+1}}{30} \biggr)Y^{j+1}_{i+1} - \frac{1}{30} \bigl( \vartheta ^{j+1}_{i-1}+28\vartheta ^{j+1}_{i}+ \vartheta ^{j+1}_{i+1} \bigr). \end{aligned} $$
(19)

Multiplying (19) by \(\ell \) and taking the limit as \(\ell \rightarrow 0\), we get

$$ \lim_{\ell \rightarrow 0}\sigma (\rho ) \biggl( \frac{Y^{j+1}_{i-1}-2Y^{j+1}_{i}+Y^{j+1}_{i+1}}{\rho} \biggr)= \frac{\gamma (0)}{2} \bigl( Y^{j+1}_{i+1}-Y^{j+1}_{i-1} \bigr). $$
(20)

Using Eq. (9), we have

$$ \textstyle\begin{cases} \frac{\sigma (\rho )}{\rho}\lim_{\ell \rightarrow 0} ( Y^{j+1}(i \ell -\ell )-2Y^{j+1}(i\ell )+Y^{j+1}(i\ell +\ell ) ) \\ \quad \approx (\wp _{1}^{j+1}-Y^{j+1}_{0}(1) ) \exp (- \gamma (1) ( \frac{1}{\varepsilon} -i\rho ) ) ( \exp (\gamma (1)\rho )-2+\exp (-\gamma (1)\rho ) ) , \\ \frac{\sigma (\rho )}{\rho}\lim_{\ell \rightarrow 0} ( Y^{j+1}(i \ell +\ell )-Y^{j+1}(i\ell -\ell ) ) \\ \quad \approx (\wp _{1}^{j+1}-Y^{j+1}_{0}(1) ) \exp (- \gamma (1) ( \frac{1}{\varepsilon} -i\rho ) ) ( \exp (-\gamma (1)\rho )-\exp (\gamma (1)\rho ) ). \end{cases} $$

Using the above expressions in Eq. (20), we get

$$ \frac{\sigma (\rho )}{\rho} \bigl(e^{\gamma (1)\rho}-2+ e^{-\gamma (1) \rho} \bigr) = \frac{\gamma (0)}{2} \bigl(e^{\gamma (1)\rho}-e^{- \gamma (1)\rho} \bigr). $$

On simplifying, we get

$$ \sigma (\rho ) =\frac{\gamma (0)\rho}{2}\coth \biggl( \frac{\gamma (1)\rho}{2} \biggr), $$
(21)

which is the required value of the constant fitting factor \(\sigma (\rho )\). Finally, using Eq. (19) and the value of \(\sigma (\rho )\) given by Eq. (21), we get

$$ \begin{aligned} &\text{\pounds}^{N,M}Y^{j+1}_{i}=\chi ^{-}_{i}Y^{j+1}_{i-1}+\chi ^{c}_{i}Y^{j+1}_{i}+ \chi ^{+}_{i}Y^{j+1}_{i+1}=\mu ^{j+1}_{i}, \\ &\quad i=1,2,\dots , N-1,j=0,1, \dots ,M-1, \end{aligned} $$
(22)

where

$$ \textstyle\begin{cases} \chi ^{-}_{i}=-\frac{\sigma (\rho )\varepsilon}{\ell ^{2}}- \frac{\gamma _{i-1}}{60\ell}-\frac{28\gamma _{i}}{60\ell}- \frac{\gamma _{i+1}}{60\ell}+\frac{Q_{i-1}}{30} , \\ \chi ^{c}_{i}=\frac{2\sigma (\rho )\varepsilon}{\ell ^{2}}+ \frac{28Q_{i}}{30}, \\ \chi ^{+}_{i}=-\frac{\sigma (\rho )\varepsilon}{\ell ^{2}}+ \frac{\gamma _{i-1}}{60\ell}+\frac{\gamma _{i+1}}{60\ell} + \frac{28\gamma _{i}}{60\ell}+\frac{Q_{i+1}}{30}, \\ \mu _{i}^{j+1}=\frac{1}{30} (\vartheta ^{j+1}_{i-1}+28\vartheta ^{j+1}_{i}+ \vartheta ^{j+1}_{i+1} ). \end{cases} $$

For sufficiently small mesh sizes, the coefficient matrix of Eq. (22) is nonsingular, and \(\vert \chi ^{c}_{i} \vert \ge \vert \chi ^{-}_{i} \vert + \vert \chi ^{+}_{i} \vert \). Hence by [26] the matrix χ is an M-matrix and has an inverse. Therefore Eq. (22) can be solved with the given boundary conditions.
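The two computational ingredients of the scheme, the fitting factor (21) and the tridiagonal solve of Eq. (22), can be sketched as follows. Function names are ours, the boundary contributions are assumed already folded into the right-hand side, and the Thomas algorithm stands in for any stable tridiagonal solver:

```python
import numpy as np

def fitting_factor(rho, gamma0, gamma1):
    """Constant fitting factor of Eq. (21):
    sigma(rho) = gamma(0)*rho/2 * coth(gamma(1)*rho/2)."""
    return 0.5 * gamma0 * rho / np.tanh(0.5 * gamma1 * rho)

def thomas_solve(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal system such as Eq. (22);
    lower[0] and upper[-1] are unused."""
    n = len(diag)
    c, d = np.empty(n), np.empty(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):           # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    y = np.empty(n)
    y[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        y[i] = d[i] - c[i] * y[i + 1]
    return y

# sigma(rho) satisfies the exponential balance that precedes Eq. (21):
g0, g1, rho = 1.2, 1.5, 0.7
sigma = fitting_factor(rho, g0, g1)
lhs = sigma / rho * (np.exp(g1 * rho) - 2.0 + np.exp(-g1 * rho))
rhs_bal = 0.5 * g0 * (np.exp(g1 * rho) - np.exp(-g1 * rho))
print(abs(lhs - rhs_bal))  # agrees up to rounding
```

The balance check works because \(e^{x}-2+e^{-x}=4\sinh ^{2}(x/2)\) and \(e^{x}-e^{-x}=2\sinh (x)\), so both sides reduce to \(\gamma (0)\sinh (\gamma (1)\rho )\).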

4 Convergence analysis

Lemma 4.1

If \(Y_{i}^{j+1} \ge 0\) on \(i=0,N\) and \(\textit{\pounds}^{N,M}Y_{i}^{j+1}\ge 0\) on \(\overline{\Im}^{N,M}\), then \(Y_{i}^{j+1}\ge 0\) at each point of \(\overline{\Im}^{N,M}\).

Lemma 4.2

The solution \(Y^{j+1}_{i}\) of the discrete scheme in (22) on \(\overline{\Im}^{N,M}\) satisfies the following bound:

$$ \bigl\Vert Y^{j+1}_{i} \bigr\Vert \le \max \bigl\lbrace \bigl\vert Y^{j+1}_{0} \bigr\vert , \bigl\vert Y^{j+1}_{N} \bigr\vert \bigr\rbrace + \frac{ \Vert \textit{\pounds}^{N,M}Y^{j+1}_{i} \Vert }{Q^{*}}, $$

where \(Q(s_{i})\ge Q^{*}>0\).

Hence Lemma 4.2 confirms that the discrete scheme (22) is uniformly stable in the supremum norm.

Lemma 4.3

If \(Y\in C^{3}(I)\), then the local truncation error in space discretization is given as

$$ \vert \top _{i} \vert \le \max_{s_{i-1}\le s\le s_{i+1}} \biggl\lbrace \frac{28\gamma \ell ^{2}}{180} \biggl\vert \frac{d^{3}Y^{j+1}(s)}{ds^{3}} \biggr\vert \biggr\rbrace +O \bigl( \ell ^{3} \bigr) ,\quad i=1,2,\dots ,N-1.$$

Proof

By definition

$$\begin{aligned}& \begin{aligned} \top _{i}&=-\sigma \varepsilon \biggl\lbrace \frac{Y^{j+1}_{i-1}-2Y^{j+1}_{i}+Y^{j+1}_{i+1}}{\ell ^{2}}- \frac{d^{2}Y^{j+1}_{i}}{ds^{2}} \biggr\rbrace \\ &\quad {}+ \frac{\gamma _{i-1}}{30} \biggl\lbrace \biggl( \frac{-Y^{j+1}_{i+1}+4Y^{j+1}_{i}-3Y^{j+1}_{i-1}}{2\ell} +\ell \frac{d^{2}Y^{j+1}_{i}}{ds^{2}} \biggr)- \frac{dY^{j+1}_{i-1}}{ds} \biggr\rbrace \\ &\quad {}+ \frac{28\gamma _{i}}{30} \biggl\lbrace \frac{Y^{j+1}_{i+1}-Y^{j+1}_{i-1}}{2\ell}- \frac{dY^{j+1}_{i}}{ds} \biggr\rbrace \\ &\quad {} +\frac{\gamma _{i+1}}{30} \biggl\lbrace \biggl( \frac{3Y^{j+1}_{i+1}-4Y^{j+1}_{i}+Y^{j+1}_{i-1}}{2\ell} -\ell \frac{d^{2}Y^{j+1}_{i}}{ds^{2}} \biggr) - \frac{dY^{j+1}_{i+1}}{ds} \biggr\rbrace , \\ &\quad i=1(1)N-1. \end{aligned} \\& \begin{aligned} \quad \Rightarrow\quad \top _{i}&=-\sigma \varepsilon \biggl\lbrace \frac{\ell ^{2}}{12}\frac{d^{4}Y^{j+1}_{i}}{ds^{4}}+ \frac{\ell ^{4}}{360} \frac{d^{6}Y^{j+1}_{i}}{ds^{6}}+\cdots \biggr\rbrace +\frac{\gamma _{i-1}}{30} \biggl\lbrace \ell \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}-\frac{2\ell ^{2}}{3} \frac{d^{3}Y^{j+1}_{i}}{ds^{3}}+\cdots \biggr\rbrace \\ &\quad {}+\frac{28\gamma _{i}}{30} \biggl\lbrace \frac{\ell ^{2}}{6} \frac{d^{3}Y^{j+1}_{i}}{ds^{3}}+\frac{\ell ^{4}}{120} \frac{d^{5}Y^{j+1}_{i}}{ds^{5}}+\cdots \biggr\rbrace \\ &\quad {} + \frac{\gamma _{i+1}}{30} \biggl\lbrace -\ell \frac{d^{2}Y^{j+1}_{i}}{ds^{2}}-\frac{2\ell ^{2}}{3} \frac{d^{3}Y^{j+1}_{i}}{ds^{3}}+\cdots \biggr\rbrace \end{aligned} \\& \quad \Rightarrow\quad \vert \top _{i} \vert \le \max _{s_{i-1}\le s \le s_{i+1}} \biggl\lbrace \frac{\sigma \ell ^{2} \varepsilon}{12} \biggl\vert \frac{d^{4}Y^{j+1}(s)}{ds^{4}} \biggr\vert \biggr\rbrace +\max_{s_{i-1} \le s \le s_{i+1}} \biggl\lbrace \frac{28}{180}\gamma \ell ^{2} \biggl\vert \frac{d^{3}Y^{j+1}(s)}{ds^{3}} \biggr\vert \biggr\rbrace . \end{aligned}$$

Using relation (21) with \(W=\frac{\gamma (0)}{2}\coth (\frac{\gamma (1)\rho}{2} ) \), we get

$$\begin{aligned}& \Rightarrow \quad \vert \top _{i} \vert \le \max _{s_{i-1}\le s \le s_{i+1}} \biggl\lbrace \frac{W\ell ^{3}}{12} \biggl\vert \frac{d^{4}Y^{j+1}(s)}{ds^{4}} \biggr\vert \biggr\rbrace +\max_{s_{i-1} \le s \le s_{i+1}} \biggl\lbrace \frac{28}{180}\gamma \ell ^{2} \biggl\vert \frac{d^{3}Y^{j+1}(s)}{ds^{3}} \biggr\vert \biggr\rbrace \\& \Rightarrow \quad \vert \top _{i} \vert \le \max _{s_{i-1}\le s \le s_{i+1}} \biggl\lbrace \frac{28}{180}\gamma \ell ^{2} \biggl\vert \frac{d^{3}Y^{j+1}(s)}{ds^{3}} \biggr\vert \biggr\rbrace +O \bigl( \ell ^{3} \bigr)\\& \Rightarrow\quad \vert \top _{i} \vert \le O \bigl( \ell ^{2} \bigr) ,\quad i=1,2, \dots ,N-1. \end{aligned}$$

Thus we obtain the desired result. □

Lemma 4.4

Let \(Y(s_{i},t_{j+1})\) be the solution of problem (6), and let \(Y^{j+1}_{i}\) be the solution of the discrete problem (22). Then we have the following estimate:

$$ \bigl\vert Y(s_{i},t_{j+1})-Y^{j+1}_{i} \bigr\vert \le O \bigl( \ell ^{2} \bigr). $$

Proof

Rewrite Eq. (22) in matrix vector form as

$$ ZY=H, $$
(23)

where \(Z = ( \chi _{i,j} ) \), \(0\le j\le M-1\), \(1 \le i \le N-1 \), is the tridiagonal matrix with

$$\begin{aligned}& \chi _{i-1,j+1}=-\frac{\sigma (\rho )\varepsilon}{\ell ^{2}}- \frac{\gamma _{i-1}}{60\ell}- \frac{28\gamma _{i}}{60\ell}- \frac{\gamma _{i+1}}{60\ell}+\frac{Q_{i-1}}{30}, \\& \chi _{i,j+1}=\frac{2\sigma (\rho )\varepsilon}{\ell ^{2}}+ \frac{28Q_{i}}{30}, \\& \chi _{i+1,j+1}=-\frac{\sigma (\rho )\varepsilon}{\ell ^{2}}+ \frac{\gamma _{i-1}}{60\ell}+ \frac{\gamma _{i+1}}{60\ell}+ \frac{28\gamma _{i}}{60\ell}+\frac{Q_{i+1}}{30}, \end{aligned}$$

and \(H = (\mu ^{j+1}_{i})\) is the column vector with \((\mu ^{j+1}_{i})=\frac{1}{30} ( \vartheta ^{j+1}_{i-1}+28 \vartheta ^{j+1}_{i}+\vartheta ^{j+1}_{i+1} ) \), \(i =1,2,\dots ,N-1\), with local truncation error

$$ \vert \top _{i} \vert \le C \bigl( \ell ^{2} \bigr). $$

We also have

$$ Z\overline{Y}-\top (\ell )=H, $$
(24)

where \(\overline{Y}= ( \overline{Y}_{0},\overline{Y}_{1},\dots , \overline{Y}_{N} ) ^{t}\) and \(\top (\ell )= (\top _{1}(\ell ),\top _{2}(\ell ),\dots ,\top _{N}( \ell ) )^{t} \) are the actual solution and the local truncation error, respectively.

From Eqs. (23) and (24) we get

$$ Z(\overline{Y}-Y)=\top (\ell ). $$
(25)

Then Eq. (25) can be written as

$$ ZE=\top (\ell ), $$
(26)

where \(E=\overline{Y}-Y= ( E_{0},E_{1},E_{2},\dots ,E_{N} )^{t} \). Let \(\Gamma _{i}\) be the sum of the elements of the ith row of Z. Then we have

$$\begin{aligned}& \Gamma _{1}=\sum^{N-1}_{j=1}\chi _{1,j}= \frac{\sigma \varepsilon}{\ell ^{2}}+\frac{\gamma _{i+1}}{60\ell}+ \frac{\gamma _{i-1}}{60\ell}+ \frac{28Q_{i}}{30}+\frac{Q_{i+1}}{30} + \frac{28\gamma _{i}}{60\ell},\\& \Gamma _{N-1}=\sum^{N-1}_{j=1}\chi _{N-1,j}= \frac{\sigma \varepsilon}{\ell ^{2}}-\frac{\gamma _{i+1}}{60\ell}- \frac{\gamma _{i-1}}{60\ell}+ \frac{28Q_{i}}{30}+\frac{Q_{i-1}}{30}- \frac{28\gamma _{i}}{60\ell},\\& \Gamma _{i}=\sum^{N-1}_{j=1} \chi _{i,j}=\frac{1}{30} \bigl( Q_{i-1}+28Q_{i}+Q_{i+1} \bigr) =Q_{i}+O \bigl(\ell ^{2} \bigr) =B_{i0},\quad i=2 ( 1 ) N-2, \end{aligned}$$

where \(B_{i0}=\Gamma _{i}=\frac{1}{30} ( Q_{i-1}+28Q_{i}+Q_{i+1} )\).

Since \(0<\varepsilon \ll 1\), for sufficiently small \(\ell \), the matrix Z is irreducible and monotone. Then it follows that \(Z^{-1}\) exists and its elements are nonnegative [27]. Hence from Eq. (26) we obtain

$$ E=Z^{-1}\top (\ell )$$

and

$$ \Vert E \Vert \le \bigl\Vert Z^{-1} \bigr\Vert \bigl\Vert \top (\ell ) \bigr\Vert . $$
(27)

Let \(\overline{\chi}_{ki}\) be the \(( ki )\)th element of \(Z^{-1}\). Since \(\overline{\chi}_{ki}\ge 0\), by the definition of multiplication of matrices with its inverses we have

$$ \sum_{i=1}^{N-1}\overline{ \chi}_{ki}\Gamma _{i}=1, \quad k=1,2,\dots , N-1. $$

Therefore it follows that

$$ \sum_{i=1}^{N-1}\overline{ \chi}_{ki}\le \frac{1}{\min_{1\le i \le N-1} \Gamma _{i}}=\frac{1}{B_{i_{0}}}\le \frac{1}{ \vert B_{i_{0}} \vert } $$
(28)

for some \(i_{0}\) between 1 and \(N-1\), where \(B_{i_{0}}=\Gamma _{i_{0}}\). From equations (23), (27), and (28) we obtain

$$ E_{i}=\sum^{N-1}_{k=1}\overline{ \chi}_{ik} \top _{k}(\ell ), \quad i=1 (1 ) N-1,$$

which implies

$$ E_{i}\le \frac{C (\ell ^{2} ) }{ \vert \Gamma _{i} \vert }, \quad i=1 (1 ) N-1.$$

Therefore

$$ \Vert E \Vert \le C \bigl(\ell ^{2} \bigr). $$

This implies that the spatial semidiscretization process is convergent of second order. □

Theorem 4.5

Let \(y(s,t)\) be the solution of problem (1), and let \(Y^{j}_{i}\) be the numerical solution obtained by the proposed scheme (22). Then we have the following error estimate for the totally discrete scheme:

$$ \sup_{0< \varepsilon \ll 1}\max_{s_{i},t_{j}} \bigl\vert y (s_{i},t_{j} ) -Y^{j}_{i} \bigr\vert \le C \bigl( \tau + \ell ^{2} \bigr). $$

Proof

By combining the result of Lemmas 3.1 and 4.4 we obtain the required bound. □

5 Numerical examples, results, and discussion

In this section, we consider three model problems to verify the theoretical findings for the proposed method. As the exact solutions of the considered examples are not known, we calculate the maximum absolute error for each ε using the double-mesh principle [28]:

$$ E^{N,\tau}_{\varepsilon}=\max_{(s_{i},t_{j+1})\in \Im ^{N,M}}\bigl| Y^{N, \tau }({s_{i}}, t_{j+1})-Y^{2N,\tau /2}(s_{i},t_{j+1})\bigr|$$

and the corresponding order of convergence for each ε by

$$ r^{N,\tau}_{\varepsilon}=\log _{2} \bigl( E^{N,\tau}_{\varepsilon}/E^{2N, \tau /2}_{\varepsilon} \bigr).$$

For all N and τ, the ε-uniform maximum error and the corresponding ε-uniform order of convergence are calculated using

$$ E^{N,\tau }=\max_{\varepsilon}E^{N,\tau}_{\varepsilon} \quad \text{and}\quad r^{N, \tau}=\log _{2} \bigl( E^{N,\tau}/E^{2N,\tau /2} \bigr),\quad \text{respectively}. $$
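The observed orders are computed mechanically from the error sequence. A minimal sketch (the function name and the sample error values are ours):

```python
import numpy as np

def observed_orders(errors):
    """r^{N,tau} = log2(E^{N,tau} / E^{2N,tau/2}) for a sequence of maximum
    absolute errors measured on successively refined meshes (N, tau) ->
    (2N, tau/2)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Errors that halve under each refinement indicate first-order convergence:
print(observed_orders([8.0e-3, 4.0e-3, 2.0e-3]))  # -> [1. 1.]
```

For the scheme at hand, with \(M=N\) the temporal \(O(\tau )\) term dominates the \(O(\ell ^{2})\) spatial term, so the observed orders should approach 1.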

Example 5.1

Consider the following SPBHE:

$$ \textstyle\begin{cases} \frac{\partial y}{\partial t}-\varepsilon \frac{\partial ^{2}y}{\partial s^{2}}+ y\frac{\partial y}{\partial s}- (1-y ) (y-0.5 ) =0,\quad (s,t)\in \Im , \\ y(s,0)=s(1-s^{2}),\quad 0\le s\le 1, \\ y(0,t)=y(1,t)=0,\quad t \in (0,T]. \end{cases} $$

Example 5.2

Consider the following SPBHE:

$$ \textstyle\begin{cases} \frac{\partial y}{\partial t}-\varepsilon \frac{\partial ^{2}y}{\partial s^{2}}+ y\frac{\partial y}{\partial s}=0,\quad (s,t)\in \Im , \\ y(s,0)=s(1-s^{2}),\quad 0\le s\le 1, \\ y(0,t)=y(1,t)=0,\quad t \in (0,T]. \end{cases} $$

Example 5.3

Consider the following SPBHE:

$$ \textstyle\begin{cases} \frac{\partial y}{\partial t}-\varepsilon \frac{\partial ^{2}y}{\partial s^{2}}+ y\frac{\partial y}{\partial s}= (1-y ) (y-0.5 ) ,\quad (s,t)\in \Im , \\ y(s,0)=\sin (\pi s),\quad 0\le s\le 1, \\ y(0,t)=y(1,t)=0,\quad t \in (0,T]. \end{cases} $$

The \(E^{N,\tau}_{\varepsilon} \), \(E^{N,\tau} \), and \(r^{N,\tau} \) for Examples 5.1, 5.2, and 5.3 are tabulated for various values of ε, M, and N in Tables 1–4. These results show that the proposed scheme attains parameter uniform convergence of first order. Besides, the numerical results show that the proposed method gives better results than those in [17–19, 23]. The 3D views of the numerical solutions of Examples 5.1 and 5.2 at \(N=64\), \(M=40\), and \(\varepsilon =2^{-18}\) are plotted in Fig. 1. The effects of ε and the time step on the solution profile for the considered problems are displayed in Figs. 2 and 3, respectively. The log-log plots of the maximum absolute errors for Examples 5.1–5.3 are given in Fig. 4. This figure shows that the theoretical rate of convergence of the proposed method agrees with the numerical experiments.

Figure 1

3D view of numerical solution for \(N=64\), \(M=40\), \(\varepsilon =2^{-18}\): (a) Example 5.1, (b) Example 5.2

Figure 2

Effect of ε on the behavior of the solution with layer formation: (a) Example 5.1, (b) Example 5.2

Figure 3

Numerical solution for \(N=64\), \(M=40\), \(\varepsilon =2^{-16}\) at different time levels: (a) Example 5.1, (b) Example 5.2

Figure 4

Example 5.1 (a), Example 5.2 (b), and Example 5.3 (c): log-log scale plots of the maximum absolute errors for different values of ε

Table 1 \(E^{N,\tau}_{\varepsilon}\), \(E^{N,\tau}\), and \(r^{N,\tau}\) for Example 5.1 with \(M=N \)
Table 2 \(E^{N,\tau}_{\varepsilon}\), \(E^{N,\tau}\), and \(r^{N,\tau}\) for Example 5.1
Table 3 \(E^{N,\tau}_{\varepsilon}\), \(E^{N,\tau}\), and \(r^{N,\tau}\) for Example 5.2
Table 4 \(E^{N,\tau}_{\varepsilon}\), \(E^{N,\tau}\), and \(r^{N,\tau}\) for Example 5.3 with \(\tau =0.0001\)

6 Conclusion

We have presented a parameter uniform numerical scheme for the singularly perturbed unsteady Burger–Huxley equation. The developed scheme combines the implicit Euler method in the time direction with a specially fitted finite difference method in the space direction. The stability and parameter uniform convergence analysis of the developed scheme is presented, both theoretically and numerically. The method is shown to be ε-uniformly convergent with convergence order \(O(\tau + \ell ^{2})\). Several model examples are presented to illustrate the efficiency of the proposed method. The proposed scheme gives more accurate numerical results than those in [17–19, 23].

Availability of data and materials

There are no associated data arising from this work.

Abbreviations

FDM:

Finite difference method

References

  1. Duan, L., Lu, Q.: Bursting oscillations near codimension-two bifurcations in the Chay neuron model. Int. J. Nonlinear Sci. Numer. Simul. 7(1), 59–64 (2006)

  2. Aronson, D.G., Weinberger, H.F.: Multidimensional nonlinear diffusion arising in population genetics. Adv. Math. 30(1), 33–76 (1978)

  3. Liu, S., Fan, T., Lu, Q.: The spike order of the winnerless competition (WLC) model and its application to the inhibition neural system. Int. J. Nonlinear Sci. Numer. Simul. 6(2), 133–138 (2005)

  4. Zhang, G.-J., Xu, J.-X., Yao, H., Wei, R.-X.: Mechanism of bifurcation-dependent coherence resonance of an excitable neuron model. Int. J. Nonlinear Sci. Numer. Simul. 7(4), 447–450 (2006)

  5. Lewis, E., Reiss, R., Hamilton, H., Harmon, L., Hoyle, G., Kennedy, D.: An electronic model of the neuron based on the dynamics of potassium and sodium ion fluxes. In: Neural Theory and Modeling, pp. 154–189 (1964)

  6. Ismail, H.N., Raslan, K., Abd Rabboh, A.A.: Adomian decomposition method for Burger's–Huxley and Burger's–Fisher equations. Appl. Math. Comput. 159(1), 291–301 (2004)

  7. Javidi, M., Golbabai, A.: A new domain decomposition algorithm for generalized Burger's–Huxley equation based on Chebyshev polynomials and preconditioning. Chaos Solitons Fractals 39(2), 849–857 (2009)

  8. Estevez, P.: Non-classical symmetries and the singular manifold method: the Burgers and the Burgers–Huxley equations. J. Phys. A, Math. Gen. 27(6), 2113 (1994)

  9. Krisnangkura, M., Chinviriyasit, S., Chinviriyasit, W.: Analytic study of the generalized Burger's–Huxley equation by hyperbolic tangent method. Appl. Math. Comput. 218(22), 10843–10847 (2012)

  10. Satsuma, J., Ablowitz, M., Fuchssteiner, B., Kruskal, M.: Topics in soliton theory and exactly solvable nonlinear equations. Phys. Rev. Lett. (1987)

  11. Wang, X., Zhu, Z., Lu, Y.: Solitary wave solutions of the generalised Burgers–Huxley equation. J. Phys. A, Math. Gen. 23(3), 271 (1990)

  12. Hashim, I., Noorani, M.S.M., Al-Hadidi, M.S.: Solving the generalized Burgers–Huxley equation using the Adomian decomposition method. Math. Comput. Model. 43(11–12), 1404–1411 (2006)

  13. Hashim, I., Noorani, M., Batiha, B.: A note on the Adomian decomposition method for the generalized Huxley equation. Appl. Math. Comput. 181(2), 1439–1445 (2006)

  14. Khattak, A.J.: A computational meshless method for the generalized Burger's–Huxley equation. Appl. Math. Model. 33(9), 3718–3729 (2009)

  15. Mohammadi, R.: B-spline collocation algorithm for numerical solution of the generalized Burger's–Huxley equation. Numer. Methods Partial Differ. Equ. 29(4), 1173–1191 (2013)

  16. Sari, M., Gürarslan, G.: Numerical solutions of the generalized Burgers–Huxley equation by a differential quadrature method. Math. Probl. Eng. 2009, Article ID 370765 (2009)

  17. Kaushik, A., Sharma, M.: A uniformly convergent numerical method on non-uniform mesh for singularly perturbed unsteady Burger–Huxley equation. Appl. Math. Comput. 195(2), 688–706 (2008)

  18. Gupta, V., Kadalbajoo, M.K.: A singular perturbation approach to solve Burgers–Huxley equation via monotone finite difference scheme on layer-adaptive mesh. Commun. Nonlinear Sci. Numer. Simul. 16(4), 1825–1844 (2011)

  19. Liu, L.-B., Liang, Y., Zhang, J., Bao, X.: A robust adaptive grid method for singularly perturbed Burger–Huxley equations. Electron. Res. Arch. 28(4), 1439 (2020)

  20. Kabeto, M.J., Duressa, G.F.: Accelerated nonstandard finite difference method for singularly perturbed Burger–Huxley equations. BMC Res. Notes 14(1), 446, 1–7 (2021)

  21. Jima, K.M., File, D.G.: Implicit finite difference scheme for singularly perturbed Burger–Huxley equations. J. Partial Differ. Equ. 35, 87–100 (2022)

  22. Derzie, E.B., Munyakazi, J.B., Dinka, T.G.: Parameter-uniform fitted operator method for singularly perturbed Burgers–Huxley equation. J. Math. Model., 1–20 (2022). https://doi.org/10.22124/jmm.2022.21484.1883

  23. Daba, I.T., Duressa, G.F.: Uniformly convergent computational method for singularly perturbed unsteady Burger–Huxley equation. MethodsX 9, 101886 (2022)

  24. O'Malley, R.E.: Singular Perturbation Methods for Ordinary Differential Equations. Applied Mathematical Sciences, vol. 89. Springer, Berlin (1991)

  25. Ranjan, R., Prasad, H.S.: A novel approach for the numerical approximation to the solution of singularly perturbed differential-difference equations with small shifts. J. Appl. Math. Comput. 65(1), 403–427 (2018)

  26. Nichols, N.K.: On the numerical integration of a class of singular perturbation problems. J. Optim. Theory Appl. 60(3), 2050034 (1989)

  27. File, G., Reddy, Y.N.: Terminal boundary-value technique for solving singularly perturbed delay differential equations. J. Taibah Univ. Sci. 8(3), 289–300 (2014)

  28. Daba, I.T., Duressa, G.F.: Collocation method using artificial viscosity for time dependent singularly perturbed differential–difference equations. Math. Comput. Simul. 192, 201–220 (2022)

Acknowledgements

The authors would like to thank the anonymous referees for their helpful comments, which improved the quality of this paper.

Funding

Not applicable.

Author information

Contributions

Conceptualization: I.T. Daba & G.F. Duressa; Investigation and formal analysis: I.T. Daba & G.F. Duressa; Software programming: I.T. Daba; Visualization: I.T. Daba & G.F. Duressa; Writing – original draft: I.T. Daba; Writing – review & editing: G.F. Duressa.

Corresponding author

Correspondence to Imiru T. Daba.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Daba, I.T., Duressa, G.F. Fitted numerical method for singularly perturbed Burger–Huxley equation. Bound Value Probl 2022, 102 (2022). https://doi.org/10.1186/s13661-022-01681-3

