
Extinction behavior and recurrence of n-type Markov branching–immigration processes


In this paper, we consider n-type Markov branching–immigration processes. A uniqueness criterion is first established. Then, a related system of differential equations is constructed from the branching property. Using the unique solution of this system together with the Kolmogorov forward equations, we obtain explicit expressions for the extinction probability and the mean extinction time in the absorbing case. Finally, recurrence and ergodicity criteria are given when the zero state 0 is not absorbing.

1 Introduction

Markov branching processes occupy a major niche in the theory and applications of probability. Good general references are Asmussen and Hering [2], Athreya and Jagers [3], Athreya and Ney [4], and Harris [7]. Within the branching structure, both state-independent and state-dependent immigration have been studied. For the former, Sevast’yanov [13] and Vatutin [14, 15] considered branching processes with state-independent immigration, and Aksland [1] considered a modified birth–death process on which state-independent immigration is imposed. For the latter, Kulkarni and Pakes [8] discussed the total progeny of a branching process with state-dependent immigration, Foster [6] and Pakes [11] considered a discrete-time branching process with immigration at state 0, and Yamazato [16] and Pakes and Tavaré [12] investigated the continuous-time version.

Let \((Z_{t}:t\geq 0)\) denote an n-type Markov branching process (nTMBP) in which a type k particle has per capita birth rate \(\theta _{k}>0\) and offspring distribution \(\{p^{(k)}_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\}\) \((k=1,\ldots ,n)\), where \(\mathbf{Z}_{+}^{n}=\{\boldsymbol{j}=(j_{1},\ldots ,j_{n}): j_{1}, \ldots ,j_{n}\in \mathbf{Z}_{+}\}\) with \(\mathbf{Z}_{+}=\{0,1,\ldots \}\). In this paper, we mainly consider a modification \((X_{t}:t\geq 0)\) of the nTMBP that allows it to be resurrected whenever it hits the zero state and allows immigration whenever it is away from the zero state. \((X_{t}:t\geq 0)\) is called an n-type Markov branching–immigration process (nTMBPI). In order to clearly describe the evolution of the nTMBPI, we adopt the following conventions throughout this paper.

(C-1) For any \(\boldsymbol{i}=(i_{1},\ldots ,i_{n})\in \mathbf{Z}_{+}^{n}\), denote \(|\boldsymbol{i}|=\sum_{k=1}^{n} i_{k}\).

(C-2) \([0,1]^{n}=\{(u_{1},\ldots ,u_{n}):0\leq u_{1},\ldots ,u_{n} \leq 1\}\). For \(\boldsymbol{u},\boldsymbol{v}\in [0,1]^{n}\), \(\boldsymbol{u}\leq \boldsymbol{v}\) means \(u_{k}\leq v_{k}\) (\(k=1,\ldots ,n\)), while \(\boldsymbol{u}< \boldsymbol{v}\) means \(u_{k}\leq v_{k}\) (\(k=1,\ldots ,n\)) and \(u_{k}< v_{k}\) for at least one k.

(C-3) For \(\boldsymbol{u}\in [0,1]^{n}\) and \(\boldsymbol{i}\in \mathbf{Z}_{+}^{n}\), \(\boldsymbol{u}^{\boldsymbol{i}}=\prod_{k=1}^{n}u_{k}^{i_{k}}\).

(C-4) \(\chi _{{\mathbf{Z}_{+}^{n}}}(\cdot )\) is the indicator of \(\mathbf{Z}_{+}^{n}\).

(C-5) \(\boldsymbol{0}=(0,\ldots ,0)\), \(\boldsymbol{1}=(1,\ldots ,1)\), \(e_{i}=(0,\ldots ,1_{i},\ldots ,0)\) are vectors in \([0,1]^{n}\). \(\mathbf{Z}_{+}^{n}\setminus \{\boldsymbol{0}\}\) is simply written as \(\mathbf{Z}_{++}^{n}\).

The evolution of nTMBPI can be described as follows.

(i) There are n types of particles in the system. The life length of a type k particle is exponentially distributed with parameter \(\theta _{k}\). Upon its death, it produces offspring of the n-types according to the distribution \(\{p^{(k)}_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\}\), \(k=1,\ldots ,n\). Particles live and produce independently of each other, and of the past. Without loss of generality, we assume \(p^{(k)}_{e_{k}}=0\) (\(k=1,\ldots ,n\)).

(ii) Let \(\alpha >0\) and let \(\{a_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{++}^{n}\}\) be a discrete law. When the system is nonempty, immigration events occur according to a Poisson process with rate α, and the number of immigrants arriving at each event is distributed according to the law \(\{a_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{++}^{n}\}\). Immigration is independent of the particles in the system.

(iii) Let \(\beta \geq 0\) and let \(\{h_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{++}^{n}\}\) be a discrete law. When the system is empty, resurrection events occur according to a Poisson process with rate β, and the number of immigrants arriving at each event is distributed according to the law \(\{h_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{++}^{n}\}\). Resurrection, immigration, and the particles in the system are independent of one another.
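To make dynamics (i)–(iii) concrete, the evolution can be simulated directly. The following Gillespie-style sketch (Python) generates one sample path of an nTMBPI; the two-type rates \(\theta _{k}\), α, β and the laws \(p^{(k)}\), a, h are hypothetical illustrative choices, not taken from this paper.

```python
import random

# Hypothetical two-type example (illustrative choices only).
theta = (1.0, 1.0)                       # per-capita branching rates
p = ({(0, 0): 0.5, (1, 1): 0.5},         # offspring law of a type-1 particle
     {(0, 0): 0.4, (2, 0): 0.6})         # offspring law of a type-2 particle
alpha, a = 0.3, {(1, 0): 1.0}            # immigration rate and law
beta, h = 0.2, {(0, 1): 1.0}             # resurrection rate and law

def draw(law, rng):
    """Sample a vector j from a discrete law {j: probability}."""
    r, acc = rng.random(), 0.0
    for j, pj in law.items():
        acc += pj
        if r <= acc:
            return j
    return j  # guard against floating-point rounding

def simulate(x0, t_end, seed=0):
    """Return the jump-chain states of the nTMBPI up to time t_end."""
    rng = random.Random(seed)
    x, t, path = list(x0), 0.0, [tuple(x0)]
    while t < t_end:
        branch = [x[k] * theta[k] for k in range(2)]
        total = sum(branch) + (beta if sum(x) == 0 else alpha)
        t += rng.expovariate(total)      # exponential holding time
        r = rng.random() * total
        if sum(x) == 0:                  # resurrection from the empty state
            j = draw(h, rng)
        elif r < sum(branch):            # a particle splits into offspring
            k = 0 if r < branch[0] else 1
            j = draw(p[k], rng)
            x[k] -= 1                    # the splitting particle dies
        else:                            # immigration into a nonempty system
            j = draw(a, rng)
        x = [x[i] + j[i] for i in range(2)]
        path.append(tuple(x))
    return path

path = simulate((1, 0), t_end=20.0)
```

Note that the only event available from the empty state is resurrection, in accordance with (iii).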

By the above description, \((X_{t}:t\geq 0)\) is a Markov process satisfying the following conditions:

(a) the state space is \(\mathbf{Z}_{+}^{n}\);

(b) its generator \(Q=(q_{\boldsymbol{ij}}:\boldsymbol{i},\boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) satisfies

$$\begin{aligned} q_{\boldsymbol{ij}} = \textstyle\begin{cases} \beta h_{\boldsymbol{j}}, & \text{if } \vert \boldsymbol{i}\vert =0, \boldsymbol{j}\neq \boldsymbol{0}, \\ \sum_{k=1}^{n}i_{k}\theta _{k}p^{(k)}_{\boldsymbol{j}- \boldsymbol{i}+e_{k}} +\alpha a_{\boldsymbol{j}-\boldsymbol{i}}, & \text{if } \vert \boldsymbol{i}\vert >0, \boldsymbol{j}\neq \boldsymbol{i}, \\ -(\sum_{k=1}^{n}i_{k}\theta _{k}+\alpha (1-\delta _{ \boldsymbol{i}\boldsymbol{0}})+\beta \delta _{\boldsymbol{i}\boldsymbol{0}}), & \text{if } \boldsymbol{j}=\boldsymbol{i}, \\ 0, & \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$

Remark 1.1

\(\theta _{k}\), α, and β are viewed as “branching rate”, “immigration rate”, and “resurrection rate”, respectively. The matrix Q given in (1.1) is called an n-type branching–immigration Q-matrix (nTBI Q-matrix).
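For a small truncation of the state space, the matrix (1.1) can be assembled entry by entry and sanity-checked: all off-diagonal entries are nonnegative, and each row sums to zero up to the probability mass lost to truncation. A minimal sketch (Python), with hypothetical two-type rates and laws:

```python
from itertools import product

# Hypothetical two-type example (illustrative choices only).
theta = (1.0, 1.0)
p = ({(0, 0): 0.5, (1, 1): 0.5}, {(0, 0): 0.4, (2, 0): 0.6})
alpha, a = 0.3, {(1, 0): 1.0}
beta, h = 0.2, {(0, 1): 1.0}
N = 6                                    # truncation level per coordinate

states = list(product(range(N + 1), repeat=2))
e = [(1, 0), (0, 1)]                     # unit vectors e_1, e_2

def q_entry(i, j):
    """Entry q_ij of the nTBI Q-matrix (1.1)."""
    if i == j:                           # diagonal entry
        return -(sum(i[k] * theta[k] for k in range(2))
                 + (beta if i == (0, 0) else alpha))
    if i == (0, 0):                      # resurrection from the empty state
        return beta * h.get(j, 0.0)
    rate = alpha * a.get((j[0] - i[0], j[1] - i[1]), 0.0)
    for k in range(2):                   # a type-(k+1) particle splits
        off = (j[0] - i[0] + e[k][0], j[1] - i[1] + e[k][1])
        rate += i[k] * theta[k] * p[k].get(off, 0.0)
    return rate

Q = {i: {j: q_entry(i, j) for j in states} for i in states}
```

Interior rows sum to zero exactly, while rows near the truncation boundary sum to a negative number, since some offspring configurations fall outside the truncated grid.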

Li and Chen [9] considered the one-type case. The aim of this paper is to study the extinction behavior and recurrence property of n-type Markov branching–immigration processes. In contrast to the one-type case, when a particle in the system splits, the numbers of particles of the different types may all change. Therefore, the method used in the one-type case fails, and new approaches are needed. In this paper, we develop such a method to investigate the extinction behavior and recurrence property of n-type Markov branching–immigration processes (see Theorems 3.1 and 3.2).

The structure of this paper is as follows. Regularity and uniqueness criteria, together with some preliminary results, are first established in Sect. 2. In Sect. 3, we concentrate on the extinction behavior of the absorbing nTMBPI (i.e., \(\beta =0\)) and obtain the explicit extinction probability. In Sect. 4, the recurrence criterion is presented for the case \(\beta >0\).

2 Preliminaries and uniqueness

Since Q is determined by the sequences \(\{p^{(i)}_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\}\) (\(i=1,\ldots ,n\)), \(\{a_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{++}^{n}\}\), and \(\{h_{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{++}^{n}\}\), we define their generating functions as

$$\begin{aligned}& B_{i}(\boldsymbol{u})=\theta _{i}\biggl(\sum _{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}p^{(i)}_{\boldsymbol{j}} \boldsymbol{u}^{ \boldsymbol{j}}-u_{i}\biggr),\quad i=1,\ldots ,n, \\& I(\boldsymbol{u})=\alpha \biggl(\sum_{\boldsymbol{j}\in \mathbf{Z}_{++}^{n}} a_{\boldsymbol{j}}\boldsymbol{u}^{ \boldsymbol{j}}-1\biggr), \\& R(\boldsymbol{u})=\beta \biggl(\sum_{\boldsymbol{j}\in \mathbf{Z}_{++}^{n}} h_{\boldsymbol{j}}\boldsymbol{u}^{ \boldsymbol{j}}-1\biggr). \end{aligned}$$

It is obvious that all the generating functions are well defined at least on \([0,1]^{n}\). We now investigate the properties of the generating functions \(\{B_{i}(\boldsymbol{u});i=1,\ldots ,n\}\), \(I(\boldsymbol{u})\), and \(R(\boldsymbol{u})\). Let

$$\begin{aligned}& B_{ij}(\boldsymbol{u})= \frac{\partial B_{i}(\boldsymbol{u})}{\partial u_{j}},\quad i,j=1, \ldots ,n, \\& I_{j}(\boldsymbol{u})= \frac{\partial I(\boldsymbol{u})}{\partial u_{j}},\quad j=1,\ldots ,n, \\& R_{j}(\boldsymbol{u})= \frac{\partial R(\boldsymbol{u})}{\partial u_{j}},\quad j=1,\ldots ,n, \\& g_{ij}(\boldsymbol{u})=\delta _{ij}+ \frac{B_{ij}(\boldsymbol{u})}{\theta _{i}},\quad i,j=1,\ldots ,n , \end{aligned}$$

where \(\boldsymbol{u}\in [0,1]^{n}\) and \(\delta _{ij}\) is the Kronecker delta. The matrices \((B_{ij}(\boldsymbol{u}))\) and \((g_{ij}(\boldsymbol{u}))\) are denoted by \(B(\boldsymbol{u})\) and \(G(\boldsymbol{u})\), respectively.

Definition 2.1

The system \(\{B_{i}(\boldsymbol{u}):1\leq i\leq n\}\) is called singular if there exists an \(n\times n\) matrix M such that

$$ \bigl(B_{1}(\boldsymbol{u}),\ldots ,B_{n}(\boldsymbol{u}) \bigr)^{\prime }=M\cdot \boldsymbol{u}', $$

where \(\boldsymbol{u}'\) denotes the transpose of the vector u.

Definition 2.2

A nonnegative \(n\times n\) matrix \(A=(a_{ij})\) is called positively regular if there exists an integer \(N>0\), such that \(A^{N}>0\).

If \(\{B_{i}(\boldsymbol{u}):1\leq i\leq n\}\) is singular, then each particle has exactly one offspring, and hence the branching process will be equivalent to an ordinary finite Markov chain. In order to avoid discussing such trivial cases, we shall assume throughout this paper that the following conditions are satisfied:

(A-1). \(\{B_{i}(\boldsymbol{u}):1\leq i\leq n\}\) is nonsingular;

(A-2). \(B_{ij}(\boldsymbol{1})<+\infty \), \(i,j=1,\ldots ,n\);

(A-3). \(G(\boldsymbol{1})\) is positively regular.

The above conditions guarantee that \(\mathbf{Z}_{++}^{n}\) is irreducible. The following two lemmas are well known and the proofs are omitted.
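Assumptions (A-2) and (A-3) are easy to verify in examples: since \(B_{i}(\boldsymbol{u})=\theta _{i}(\sum_{\boldsymbol{j}}p^{(i)}_{\boldsymbol{j}}\boldsymbol{u}^{\boldsymbol{j}}-u_{i})\), the entry \(g_{ij}(\boldsymbol{1})=\delta _{ij}+B_{ij}(\boldsymbol{1})/\theta _{i}\) is the mean number of type-j offspring of a splitting type-i particle, so positive regularity amounts to some power of the mean offspring matrix being strictly positive. A minimal sketch (Python) for a hypothetical two-type example:

```python
# Hypothetical two-type example (illustrative choices only).  The matrix
# G(1), with g_ij = delta_ij + B_ij(1)/theta_i, equals the mean offspring
# matrix: entry (i, j) is the mean number of type-(j+1) offspring produced
# when a type-(i+1) particle splits.
p = ({(0, 0): 0.5, (1, 1): 0.5}, {(0, 0): 0.4, (2, 0): 0.6})

G1 = [[sum(prob * j[col] for j, prob in p[row].items()) for col in range(2)]
      for row in range(2)]               # G(1) for this example

def matmul(A, B):
    """Product of two 2 x 2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# (A-3): some power of G(1) is strictly positive; for a primitive 2 x 2
# matrix the square already suffices.
P2 = matmul(G1, G1)
positively_regular = all(P2[i][j] > 0 for i in range(2) for j in range(2))
```

Here \(G(\boldsymbol{1})\) itself has a zero entry, but its square is strictly positive, so (A-3) holds with \(N=2\).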

Lemma 2.1

\(I(\boldsymbol{u})<0\) for all \(\boldsymbol{u}\in [0,1)^{n}\) and \(\lim_{\boldsymbol{u}\uparrow \boldsymbol{1}}I(\boldsymbol{u})=I(\boldsymbol{1})= 0\). A similar property holds for \(R(\boldsymbol{u})\).

Lemma 2.2

Suppose \(G(\boldsymbol{1})\) is positively regular and \(\{B_{i}(\boldsymbol{u}):1\leq i\leq n\}\) is nonsingular. Then, the equation

$$\begin{aligned} \bigl(B_{1}(\boldsymbol{u}),B_{2}(\boldsymbol{u}), \ldots, B_{n}(\boldsymbol{u})\bigr)= \boldsymbol{0} \end{aligned}$$

has at most two solutions in \([0,1]^{n}\). Let \(\boldsymbol{q}=(q_{1},\ldots ,q_{n})\) and \(\rho (\boldsymbol{u})\) denote the smallest nonnegative solution to (2.1) and the maximal eigenvalue of \(B(\boldsymbol{u})\), respectively. Then,

(i) \(q_{i}\) is the extinction probability when the Feller minimal process starts at state \(e_{i}\) (\(i=1,\ldots ,n\)). Moreover, if \(\rho (\boldsymbol{1})\leq 0\), then \(\boldsymbol{q}=\boldsymbol{1}\); while if \(\rho (\boldsymbol{1})>0\), then \(\boldsymbol{q}<\boldsymbol{1}\), i.e., \(q_{1},\ldots ,q_{n}<1\).

(ii) \(\rho (\boldsymbol{q})\leq 0\).
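The smallest nonnegative solution q of (2.1) in Lemma 2.2 can be computed by monotone fixed-point iteration: since \(B_{i}(\boldsymbol{u})=\theta _{i}(f_{i}(\boldsymbol{u})-u_{i})\), where \(f_{i}\) is the offspring probability generating function of type i, equation (2.1) reads \(\boldsymbol{u}=f(\boldsymbol{u})\), and iterating from \(\boldsymbol{0}\) increases to the smallest root. A sketch (Python) for a hypothetical two-type example:

```python
# Hypothetical two-type example (illustrative choices only).  Equation
# (2.1), B(u) = 0, is equivalent to the fixed-point equation u = f(u),
# where f_i is the offspring p.g.f. of type i; iterating u <- f(u) from
# u = 0 converges monotonically upward to the smallest root q.
p = ({(0, 0): 0.5, (1, 1): 0.5}, {(0, 0): 0.4, (2, 0): 0.6})

def f(u):
    """Vector of offspring probability generating functions."""
    return tuple(sum(prob * u[0] ** j[0] * u[1] ** j[1]
                     for j, prob in p[k].items()) for k in range(2))

u = (0.0, 0.0)
for _ in range(2000):
    u = f(u)
q1, q2 = u   # q < 1 here: this example is supercritical (rho(1) > 0)
```

For this example the limit is available in closed form (q_1 is a root of \(3x^{3}-8x+5=0\)), which gives a convenient check of the iteration.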

For nTBI Q-matrix Q given in (1.1), let \(P(t)=(p_{\boldsymbol{ij}}(t):\boldsymbol{i},\boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) and \(\Phi (\lambda )=(\phi _{\boldsymbol{ij}}(\lambda ): \boldsymbol{i}, \boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) be the Feller minimal Q-function and Q-resolvent, respectively.

Lemma 2.3

For any \(\boldsymbol{i}\in \mathbf{Z}_{+}^{n}\) and \(\boldsymbol{u}\in [0,1)^{n}\), we have

$$\begin{aligned} \frac{\partial F_{\boldsymbol{i}}(t,\boldsymbol{u})}{\partial t}= R( \boldsymbol{u})p_{\boldsymbol{i0}}(t)+I(\boldsymbol{u}) \sum_{\boldsymbol{j}\in \mathbf{Z}_{++}^{n}}p_{\boldsymbol{ij}}(t)\boldsymbol{u}^{\boldsymbol{j}} +\sum_{k=1}^{n}B_{k}( \boldsymbol{u}) \frac{\partial F_{\boldsymbol{i}}(t,\boldsymbol{u})}{\partial u_{k}}, \end{aligned}$$

where \(F_{\boldsymbol{i}}(t,\boldsymbol{u})=\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}} p_{\boldsymbol{ij}}(t)\boldsymbol{u}^{\boldsymbol{j}}\), or in the resolvent version

$$\begin{aligned} { } \lambda \Phi _{\boldsymbol{i}}(\lambda ,\boldsymbol{u})-\boldsymbol{u}^{\boldsymbol{i}} =R( \boldsymbol{u})\phi _{\boldsymbol{i0}}(\lambda ) +I(\boldsymbol{u})\sum _{\boldsymbol{j}\in \mathbf{Z}_{++}^{n}}\phi _{\boldsymbol{ij}}(\lambda )\boldsymbol{u}^{ \boldsymbol{j}} +\sum_{k=1}^{n}B_{k}( \boldsymbol{u}) \frac{\partial \Phi _{\boldsymbol{i}}(\lambda ,\boldsymbol{u})}{\partial u_{k}}, \end{aligned}$$

where \(\Phi _{\boldsymbol{i}}(\lambda ,\boldsymbol{u})=\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}\phi _{\boldsymbol{ij}}(\lambda )\boldsymbol{u}^{\boldsymbol{j}}\).


Proof

By the Kolmogorov forward equations, we have that for any \(\boldsymbol{i}, \boldsymbol{j}\in \mathbf{Z}_{+}^{n}\),

$$\begin{aligned} &p'_{\boldsymbol{ij}}(t) \\ &\quad = \sum_{\boldsymbol{k}\neq \boldsymbol{j}}p_{\boldsymbol{ik}} (t)\Biggl[\sum_{l=1}^{n}k_{l} \theta _{l} p^{(l)}_{\boldsymbol{j}-\boldsymbol{k}+e_{l}}\cdot \chi _{{ \mathbf{Z}_{+}^{n}}}(\boldsymbol{j}-\boldsymbol{k}+e_{l}) +\alpha a_{ \boldsymbol{j}-\boldsymbol{k}}\cdot \chi _{{\mathbf{Z}_{+}^{n}}} ( \boldsymbol{j}-\boldsymbol{k}) (1-\delta _{\boldsymbol{0k}} ) + \beta h_{\boldsymbol{j}}\cdot \delta _{\boldsymbol{0k}} \Biggr] \\ &\quad \quad{} - p_{\boldsymbol{ij}}(t)\Biggl[\sum_{l=1}^{n}j_{l} \theta _{l} +\alpha (1- \delta _{\boldsymbol{0j}}) +\beta \delta _{\boldsymbol{0j}}\Biggr]. \end{aligned}$$

Multiplying both sides of the above equality by \(\boldsymbol{u}^{\boldsymbol{j}}\) and summing over \(\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\), we immediately obtain (2.2). Taking the Laplace transform of (2.2) immediately yields (2.3). □

Lemma 2.4

Suppose that \(G(\boldsymbol{1})\) is positively regular and \(\{B_{i}(\boldsymbol{u}):1\leq i\leq n\}\) is nonsingular. If \(\rho (\boldsymbol{1})\leq 0\), then the Q-function is honest.


Proof

By Lemma 2.5 of Li and Wang [10], we know that if \(\rho (\boldsymbol{1})\leq 0\), then \(\boldsymbol{q}=\boldsymbol{1}\). Define


$$ r^{*}=\sup \bigl\{ r\geq 0:B_{k}(\boldsymbol{u})=r, k=1,\ldots ,n \text{ has a solution in } [0,1]^{n}\bigr\} . $$

By Lemma 2.9 of Li and Wang [10], we know that \(r^{*}>0\) and that for any \(r\in (0,r^{*}]\), there exists \(\boldsymbol{u}(r)=(u_{1}(r),\ldots , u_{n}(r))\in [0,1)^{n}\) such that

$$ B_{k}\bigl(\boldsymbol{u}(r)\bigr)=r, \quad k=1,\ldots ,n $$

and, moreover,

$$ \lim_{r\downarrow 0}\boldsymbol{u}(r)=\boldsymbol{1}. $$

Letting \(\boldsymbol{u}=\boldsymbol{u}(r)\) in (2.2) and letting \(r\downarrow 0\) yield

$$ \sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}} p_{\boldsymbol{ij}}(t) \geq 1, $$

i.e., \(\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}} p_{ \boldsymbol{ij}}(t)= 1\). Hence, \(P(t)\) is honest. □

With these preparations complete, we now prove the uniqueness of the nTMBPI.

Theorem 2.1

Let Q be given in (1.1). Then, there exists exactly one nTMBPI, i.e., the Feller minimal process.


Proof

By Lemma 2.4, we only need to consider the case \(\rho (\boldsymbol{1})>0\). For this purpose, we will show that the equations

$$\begin{aligned} \textstyle\begin{cases} \eta (\lambda I-Q)=0,\quad \eta _{\boldsymbol{j}}\geq 0, \boldsymbol{j}\in \mathbf{Z}_{+}^{n}, \\ \sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}\eta _{ \boldsymbol{j}}< +\infty \end{cases}\displaystyle \end{aligned}$$

have only the trivial solution. Suppose that the contrary is true and let \(\eta =(\eta _{\boldsymbol{j}}: \boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) be a nontrivial solution of (2.4) corresponding to \(\lambda =1\). Then, by (2.4) we have

$$\begin{aligned} \eta _{\boldsymbol{j}}& = \sum_{\boldsymbol{k}\neq \boldsymbol{j}} \eta _{\boldsymbol{k}}\Biggl[\sum_{i=1}^{n}k_{i} \theta _{i}p^{(i)}_{ \boldsymbol{j}-\boldsymbol{k}+e_{i}}\cdot \chi _{{\mathbf{Z}_{+}^{n}}}( \boldsymbol{j}-\boldsymbol{k}+e_{i}) +\alpha a_{\boldsymbol{j}- \boldsymbol{k}} \cdot \chi _{{\mathbf{Z}_{+}^{n}}}(\boldsymbol{j}- \boldsymbol{k}) (1-\delta _{\boldsymbol{0k}} )+\beta h_{ \boldsymbol{j}}\cdot \delta _{\boldsymbol{0k}}\Biggr] \\ &\quad{} - \eta _{\boldsymbol{j}}\Biggl[\sum_{i=1}^{n}j_{i} \theta _{i} +\alpha (1- \delta _{\boldsymbol{0j}})+\beta \delta _{\boldsymbol{0j}}\Biggr],\quad \boldsymbol{j}\in \mathbf{Z}_{+}^{n}. \end{aligned}$$

Multiplying both sides of (2.5) by \(\boldsymbol{u}^{\boldsymbol{j}}\), summing over \(\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\), and rearranging yields

$$\begin{aligned} \eta (\boldsymbol{u}) =\sum_{i=1}^{n}B_{i}( \boldsymbol{u})\cdot \frac{\partial \eta (\boldsymbol{u})}{\partial u_{i}}+I( \boldsymbol{u}) \bigl(\eta (\boldsymbol{u}) -\eta _{\boldsymbol{0}}\bigr)+R( \boldsymbol{u})\eta _{\boldsymbol{0}}, \end{aligned}$$

where \(\eta (\boldsymbol{u})=\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}\eta _{\boldsymbol{j}}\boldsymbol{u}^{\boldsymbol{j}}\).


That is,

$$\begin{aligned} \bigl(1-I(\boldsymbol{u})\bigr)\bigl[\eta (\boldsymbol{u}) -\eta _{ \boldsymbol{0}}\bigr]+\bigl(1-R(\boldsymbol{u})\bigr) \eta _{\boldsymbol{0}}= \sum _{i=1}^{n}B_{i}(\boldsymbol{u})\cdot \frac{\partial \eta (\boldsymbol{u})}{\partial u_{i}}. \end{aligned}$$

If \(\rho (\boldsymbol{1})>0\), then by Lemma 2.2 and the irreducibility of \(\mathbf{Z}_{++}^{n}\) we know that (2.1) has a solution \((q_{1},\ldots , q_{n})\in (0,1)^{n}\). Letting \(\boldsymbol{u}=(q_{1},\ldots ,q_{n})\) in (2.6) makes the right-hand side of (2.6) vanish, so the left-hand side must vanish as well. Since \(1-I(\boldsymbol{q})\geq 1\), \(1-R(\boldsymbol{q})\geq 1\), \(\eta (\boldsymbol{q})-\eta _{\boldsymbol{0}}\geq 0\), and \(\eta _{\boldsymbol{0}}\geq 0\), this forces \(\eta (\boldsymbol{q})=0\) and hence \(\eta _{\boldsymbol{j}}=0\) (\(\forall \boldsymbol{j}\in \mathbf{Z}_{+}^{n}\)). The proof is complete. □

3 Extinction

In this section, we shall discuss the extinction property of the absorbing nTMBPI (i.e., \(\beta =0\)). Let \(\tilde{Q}\) denote the absorbing nTBI Q-matrix and \(\tilde{P}(t)=(\tilde{p}_{\boldsymbol{ij}}(t):\boldsymbol{i}, \boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) denote the Feller minimal \(\tilde{Q}\)-function. Also, let \(a_{\boldsymbol{i0}}=\lim_{t\rightarrow \infty}\tilde{p}_{ \boldsymbol{i0}}(t)\) be the extinction probability of \(\tilde{P}(t)\) starting at state \(\boldsymbol{i}\). In order to discuss the extinction property, we need the following important result, which plays a key role in our discussion.

Theorem 3.1

Suppose that \(G(\boldsymbol{1})\) is positively regular and \(\{B_{i}(\boldsymbol{u});1\leq i\leq n\}\) is nonsingular. If \(B_{1}(\boldsymbol{0})>0\), then the system of equations

$$\begin{aligned} \textstyle\begin{cases} u'_{k}(u)= \frac{B_{k}(u,u_{2},\ldots ,u_{n})}{B_{1}(u,u_{2},\ldots ,u_{n})}, & 2\leq k \leq n, \\ u_{k}|_{u=0}=0, &2\leq k \leq n \end{cases}\displaystyle \end{aligned}$$

has a unique solution \((u_{k}(u);2\leq k \leq n)\). Furthermore, this solution satisfies

(i) \((u_{k}(u);2\leq k \leq n)\) is well defined on \([0,q_{1}]\);

(ii) \(u'_{k}(0)\geq 0\) and \(u'_{k}(u)>0\) for all \(u\in (0,q_{1})\) and \(2\leq k \leq n\);

(iii) \(u_{k}(q_{1})=q_{k}\), \(2\leq k \leq n\).


Proof

Since \(B_{1}(\boldsymbol{0})>0\), we know that \(B_{1}(u,0,\ldots ,0)=0\) has a positive root \(u^{*}\in (0,1]\). For any \(\varepsilon >0\), the functions \(\{\frac{B_{k}(u,u_{2},\ldots ,u_{n})}{B_{1}(u,u_{2},\ldots ,u_{n})}; 2\leq k \leq n\}\) satisfy the Lipschitz condition on \([0,u^{*}-\varepsilon ]\times [0,1]^{n-1}\); therefore, by the theory of differential equations, (3.1) has a unique solution \((u_{k}(u);2\leq k \leq n)\) defined on \([0,u^{*}-\varepsilon ]\). Since \(\varepsilon >0\) is arbitrary, (3.1) has a unique solution \((u_{k}(u);2\leq k \leq n)\) defined on \([0,u^{*})\).

We claim that \(u'_{k}(u)\geq 0\) (\(2\leq k \leq n\)) for all \(u\in [0,u^{*})\). In fact, if there exist \(u\in [0,u^{*})\) and \(2\leq k\leq n\) such that \(u'_{k}(u)<0\), denote

$$ \tilde{u}=\inf \bigl\{ u\in [0,u^{*}): u'_{k}(u)< 0 \text{ for some } k \in \{2,\ldots , n\}\bigr\} $$

and


$$\begin{aligned} H=\bigl\{ k\in \{2,\ldots ,n\}: \exists \varepsilon >0 \text{ s.t. } u'_{k}(u)< 0 \text{ for } u\in (\tilde{u}, \tilde{u}+\varepsilon )\bigr\} . \end{aligned}$$

It is obvious that \(H\neq \emptyset \). Since \((u_{k}(u);2\leq k \leq n)\) is the solution of (3.1), we have

$$\begin{aligned} B_{k}\bigl(\tilde{u},u_{2}(\tilde{u}),\ldots ,u_{n}(\tilde{u})\bigr)=0,\quad k \in H \end{aligned}$$

and there exists \(\bar{u}\in (\tilde{u},u^{*})\) such that \(u_{k}(\bar{u})\geq u_{k}(\tilde{u})\) (\(k\in H^{c}=:\{2,\ldots ,n\} \setminus H\)), \(u_{k}(\bar{u})< u_{k}(\tilde{u})\) (\(k\in H\)) and

$$\begin{aligned} B_{k}\bigl(\bar{u},u_{2}(\bar{u}), \ldots ,u_{n}(\bar{u})\bigr)< 0,\quad k\in H. \end{aligned}$$


Denote

$$ I=\bigl\{ B_{k}\bigl(\bar{u},\boldsymbol{u}_{H^{c}}(\bar{u}),\boldsymbol{u}_{H}\bigr): k\in H\bigr\} , $$

where \(\boldsymbol{u}_{H}=(u_{k}:k\in H)\) and \(\boldsymbol{u}_{H^{c}}(\bar{u})=(u_{k}(\bar{u}):k\in H^{c})\). Obviously,

$$ B_{k}\bigl(\bar{u},\boldsymbol{u}_{H^{c}}(\bar{u}), \boldsymbol{u}_{H}( \tilde{u})\bigr) \geq 0,\quad k\in H, $$

where \(\boldsymbol{u}_{H}(\tilde{u})=(u_{k}(\tilde{u}):k\in H)\). Therefore, the smallest nonnegative zero of I is in \(\prod_{k\in H}[u_{k}(\tilde{u}),1]\). Combining with (3.2) we know that \(u_{k}(\bar{u})\geq u_{k}(\tilde{u})\) (\(k\in H\)), which contradicts \(u_{k}(\bar{u})< u_{k}(\tilde{u})\) (\(k\in H\)).

We now further claim that \(u'_{k}(u)> 0\) (\(2\leq k \leq n\)) for all \(u\in (0,u^{*}]\). In fact, suppose that there exists \(\hat{u}\in (0,u^{*}]\) such that

$$ B_{k}\bigl(\hat{u},u_{2}(\hat{u}),\ldots , u_{n}(\hat{u})\bigr)=0 $$

for some \(k\geq 2\). Denote

$$ \hat{H}=\bigl\{ k; B_{k}\bigl(\hat{u},u_{2}(\hat{u}), \ldots , u_{n}(\hat{u})\bigr)=0 \bigr\} $$

and


$$ \hat{H}^{c}=\{1,2,\ldots ,n\}\setminus \hat{H}. $$

It is easy to see that \(\hat{H}^{c}\neq \emptyset \). By the irreducibility of the set of nonzero states we know that there exist \(k\in \hat{H}\), \(j\in \hat{H}^{c}\) such that

$$ B_{kj}\bigl(\hat{u},u_{2}(\hat{u}),\ldots , u_{n}(\hat{u})\bigr)>0. $$

On the other hand,

$$ \lim_{u\uparrow \hat{u}} \frac{B_{k}(u,u_{2}(u),\ldots ,u_{n}(u))}{u-\hat{u}}=\sum _{i\in \hat{H}^{c}}B_{ki}\bigl(\hat{u},u_{2}( \hat{u}),\ldots , u_{n}(\hat{u})\bigr) \cdot u'_{i}( \hat{u})>0, $$

which contradicts \(B_{k}(u,u_{2}(u),\ldots ,u_{n}(u))\geq 0\) for all \(u\in [0,u^{*}]\), where \(u'_{1}(\hat{u})=1\).

Since \(B_{1}(u^{*},u_{2}(u^{*}),\ldots ,u_{n}(u^{*}))>B_{1}(u^{*},0,\ldots ,0)=0\), we can apply mathematical induction to prove that the solution of (3.1) can be uniquely extended to \([0,q_{1})\). Now, we claim that

$$ u_{k}(q_{1})=\lim_{u\uparrow q_{1}}u_{k}(u)=q_{k}, \quad k\geq 2. $$

Indeed, since \(B_{k}(u,u_{2}(u),\ldots ,u_{n}(u))>0\) (\(k\geq 1\)) for all \(u\in (0,q_{1})\), it can be easily seen that \(u_{k}(u)\in (0,q_{k})\) (\(k\geq 2\)) for all \(u\in (0,q_{1})\) and therefore, \(u_{k}(q_{1})\in (0,q_{k}]\) for all \(k\geq 2\). If \(u_{k}(q_{1})< q_{k}\) for some \(k\geq 2\), denote

$$ M=\bigl\{ k\geq 2; u_{k}(q_{1})< q_{k} \bigr\} ,\qquad M^{c}=\{1,2,\ldots ,n \}\setminus M. $$

By the irreducibility of the set of nonzero states, there exists \(j\in M^{c}\) such that

$$ \lim_{u\uparrow q_{1}}B_{j}\bigl(u,u_{2}(u), \ldots ,u_{n}(u)\bigr)=B_{j}\bigl(q_{1},u_{2}(q_{1}), \ldots ,u_{n}(q_{1})\bigr)< 0, $$

which contradicts \(B_{j}(u,u_{2}(u),\ldots ,u_{n}(u))>0\) for all \(u\in (0,q_{1})\). The proof is complete. □
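Theorem 3.1 also suggests a practical way to obtain \((u_{k}(u);2\leq k \leq n)\): integrate (3.1) numerically and check property (iii) at the right endpoint. The sketch below (Python) uses the classical Runge–Kutta scheme on a hypothetical two-type example with \(B_{1}(\boldsymbol{0})=0.5>0\); the integration stops just below \(q_{1}\) because \(B_{1}\) vanishes at \(\boldsymbol{q}\).

```python
# Hypothetical two-type example (illustrative choices only); theta_1 = theta_2 = 1.
def B1(u1, u2):
    return 0.5 + 0.5 * u1 * u2 - u1      # B_1(0) = 0.5 > 0

def B2(u1, u2):
    return 0.4 + 0.6 * u1 * u1 - u2

def rhs(u1, u2):
    return B2(u1, u2) / B1(u1, u2)       # right-hand side of (3.1)

# For this example the smallest root q of (2.1) is known in closed form:
# q_1 solves 3x^3 - 8x + 5 = 0 and q_2 = 0.4 + 0.6 q_1^2.
q1 = (69 ** 0.5 - 3) / 6
q2 = 0.4 + 0.6 * q1 * q1

# Classical RK4 from u = 0 up to just below q_1.
steps = 20000
u_end = q1 - 1e-3
hstep = u_end / steps
u1v, u2v = 0.0, 0.0
for _ in range(steps):
    k1 = rhs(u1v, u2v)
    k2 = rhs(u1v + hstep / 2, u2v + hstep * k1 / 2)
    k3 = rhs(u1v + hstep / 2, u2v + hstep * k2 / 2)
    k4 = rhs(u1v + hstep, u2v + hstep * k3)
    u2v += hstep * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    u1v += hstep
# Property (iii): u_2(q_1) = q_2, so u2v should sit just below q2.
```

The computed \(u_{2}\) approaches \(q_{2}\) from below as the endpoint approaches \(q_{1}\), in line with property (ii) of the theorem (the solution is strictly increasing on \((0,q_{1})\)).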

Corollary 3.1

Suppose that \(G(\boldsymbol{1})\) is positively regular and \(\{B_{i}(\boldsymbol{u});1\leq i\leq n\}\) is nonsingular. If \(B_{1}(\boldsymbol{0})>0\) and \(B_{2}(\boldsymbol{0})>0\), then the system of equations

$$\begin{aligned} \textstyle\begin{cases} u'_{k}(u)= \frac{B_{k}(u_{1},u,\ldots ,u_{n})}{B_{2}(u_{1},u,\ldots ,u_{n})}, &k\neq 2, \\ u_{k}|_{u=0}=0, &k\neq 2 \end{cases}\displaystyle \end{aligned}$$

has the same solution as (3.1).


Proof

By Theorem 3.1 (applied with the roles of the first and second types interchanged), we know that (3.3) has a unique solution. For convenience, we denote the solution to (3.3) by \((u_{1}(u_{2}),u_{3}(u_{2}),\ldots ,u_{n}(u_{2}))\). Since \(u'_{1}(u_{2})>0\) for all \(u_{2}\in [0,q_{2})\), we know that the function \(u_{1}(u_{2})\) (\(u_{2}\in [0,q_{2})\)) has an inverse function \(u_{2}=f_{2}(u_{1})\) (\(u_{1}\in [0,q_{1})\)) satisfying \(\frac{df_{2}}{du_{1}}=1/u'_{1}\). Let \(u_{k}=f_{k}(u_{1})=u_{k}(f_{2}(u_{1}))\) (\(u_{1}\in [0,q_{1}]\)) for \(k\geq 3\). It can easily be seen that \(u_{k}=f_{k}(u_{1})\) (\(k\geq 2\)) is the solution to (3.1). □

By the irreducibility of \(\mathbf{Z}_{++}^{n}\), Theorem 3.1, and Corollary 3.1, we can assume that \(B_{1}(\boldsymbol{0})>0\) without loss of generality and let \((u_{2}(u),\ldots ,u_{n}(u)) (u\in [0,q_{1}])\) denote the unique solution to (3.1).

Before stating our main result in this section, we first provide two useful lemmas.

Lemma 3.1

Let \((\tilde{p}_{\boldsymbol{ij}}(t):\boldsymbol{i},\boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) be the Feller minimal \(\tilde{Q}\)-function, where \(\tilde{Q}\) is an absorbing nTBI Q-matrix. Then, for any \(\boldsymbol{i}\in \mathbf{Z}_{+}^{n}\),

$$\begin{aligned} \int _{0}^{\infty }\tilde{p}_{\boldsymbol{ik}}(t)\,dt < \infty , \quad \boldsymbol{k}\neq \boldsymbol{0} \end{aligned}$$

and thus

$$\begin{aligned} \lim_{t\rightarrow \infty}\tilde{p}_{\boldsymbol{ik}}(t)=0, \quad \boldsymbol{i}\in \mathbf{Z}_{+}^{n}, \boldsymbol{k}\neq \boldsymbol{0}. \end{aligned}$$

Moreover, for any \(\boldsymbol{i}\in \mathbf{Z}_{++}^{n}\) and \(\boldsymbol{u}\in [0,1)^{n}\), we have

$$\begin{aligned} \sum_{\boldsymbol{k}\neq \boldsymbol{0}}\biggl( \int _{0}^{\infty }\tilde{p}_{ \boldsymbol{ik}}(t)\,dt \biggr)\cdot \boldsymbol{u}^{\boldsymbol{k}}< \infty . \end{aligned}$$


Proof

By the construction of \(\tilde{Q}\), all the states in \(\mathbf{Z}_{++}^{n}\) are transient. Hence, (3.4) and (3.5) hold.

We now prove (3.6). For this purpose, we shall consider two different cases separately.

First, consider the case \(\rho (\boldsymbol{1})>0\). By Lemma 2.2, (2.1) has a root \(\boldsymbol{q}\in (0,1)^{n}\). Let \(\tilde{\boldsymbol{u}}\in \prod_{i=1}^{n}(q_{i},1)\). We claim that there exists \(\bar{\boldsymbol{u}}\in \prod_{i=1}^{n}[\tilde{u}_{i},1)\) such that

$$\begin{aligned} B_{i}(\bar{\boldsymbol{u}})< 0, \quad \forall i=1,2,\ldots ,n. \end{aligned}$$

Indeed, let \(H_{1}=\{i:B_{i}(\tilde{\boldsymbol{u}})> 0\}\). By Li and Wang [10] we know that \(H_{1}\neq \{1,2,\ldots ,n\}\) since \(\rho (\boldsymbol{1})>0\). If \(H_{1}=\emptyset \), then \(B_{i}(\tilde{u}_{1},\ldots ,\tilde{u}_{n})\leq 0\) (\(\forall i=1, \ldots ,n\)). If \(H_{1}\neq \emptyset \), then by Lemma 2.2, we know that there exists \(\boldsymbol{u}^{(1)}\in \prod_{i=1}^{n}[\tilde{u}_{i},1)\) such that \(B_{i}(u^{(1)}_{1},\ldots ,u^{(1)}_{n})=0\) for all \(i\in H_{1}\). Let

$$ H_{2}=\bigl\{ i: B_{i}\bigl(\boldsymbol{u}^{(1)} \bigr)>0\bigr\} , $$

then \(H_{2}\subset \{1,2,\ldots ,n\}\setminus H_{1}\). It is obvious that \(H_{1}\cup H_{2}\neq \{1,2,\ldots ,n\}\). If \(H_{2}=\emptyset \), then \(B_{i}(\boldsymbol{u}^{(1)})\leq 0\) (\(\forall i=1,\ldots ,n\)). If \(H_{2}\neq \emptyset \), then by Lemma 2.2, we know that there exists \(\boldsymbol{u}^{(2)}\in \prod_{i=1}^{n}[u^{(1)}_{i},1)\) such that \(B_{i}(\boldsymbol{u}^{(2)})=0\) for all \(i\in H_{1}\cup H_{2}\). By repeatedly using the same argument and noting that \(\{1,2,\ldots ,n\}\) is a finite set, we can obtain \(H_{1}, H_{2}, \ldots , H_{m}\) such that \(H_{m+1}=\emptyset \) and hence \(B_{i}(\boldsymbol{u}^{(m)})\leq 0\) (\(\forall i=1,\ldots ,n\)). It is obvious that \(H_{1}\cup \cdots \cup H_{m}\neq \{1,2,\ldots ,n\}\), i.e., \(B_{i}(\boldsymbol{u}^{(m)})<0\) for all \(i\in \{1,\ldots ,n\}\setminus (H_{1}\cup \cdots \cup H_{m})\). By the irreducibility of \(\mathbf{Z}_{++}^{n}\), we can see that (3.7) holds for \(\bar{\boldsymbol{u}}\) smaller than (if necessary) but close to \(\boldsymbol{u}^{(m)}\).

By (2.2) we know that

$$\begin{aligned} \frac{\partial \tilde{F}_{\boldsymbol{i}}(t,\bar{\boldsymbol{u}})}{\partial t}=I( \bar{\boldsymbol{u}})\sum_{\boldsymbol{j}\in \mathbf{Z}_{++}^{n}} \tilde{p}_{\boldsymbol{ij}}(t) \bar{\boldsymbol{u}}^{ \boldsymbol{j}} +\sum _{k=1}^{n}B_{k}(\bar{\boldsymbol{u}}) \frac{\partial \tilde{F}_{\boldsymbol{i}}(t,\bar{\boldsymbol{u}})}{\partial u_{k}}, \end{aligned}$$

which implies (3.6), where \(\tilde{F}_{\boldsymbol{i}}(t,\bar{\boldsymbol{u}}) =\sum_{ \boldsymbol{j}\in \mathbf{Z}_{+}^{n}} \tilde{p}_{\boldsymbol{ij}}(t) \bar{\boldsymbol{u}} ^{\boldsymbol{j}}\).

Next, consider the case that \(\rho (\boldsymbol{1})\leq 0\). Let \(\tilde{\boldsymbol{u}}\in (0,1)^{n}\). By Theorem 3.1, there exists \(v\in (\tilde{u}_{1},1)\) such that \((v,u_{2}(v),\ldots ,u_{n}(v))\in \prod_{i=1}^{n}(\tilde{u}_{i},1)\) and hence by (2.2) and Theorem 3.1 we have

$$\begin{aligned} 1\geq I\bigl(v,u_{2}(v),\ldots ,u_{n}(v) \bigr)G_{\boldsymbol{i}}(T,v)+B_{1}\bigl(v,u_{2}(v), \ldots ,u_{n}(v)\bigr)\cdot \frac{\partial G_{\boldsymbol{i}}(T,v)}{\partial v}, \end{aligned}$$

where \(G_{\boldsymbol{i}}(T,v)=\sum_{\boldsymbol{j}\in \mathbf{Z}_{++}^{n}}(\int _{0}^{T}\tilde{p}_{\boldsymbol{ij}}(t)\,dt )v^{j_{1}}u^{j_{2}}_{2}(v) \cdots u^{j_{n}}_{n}(v)\). Equation (3.6) can be obtained immediately from the above inequality. The proof is complete. □

For any \(\boldsymbol{i}\neq \boldsymbol{0}\), denote \(G_{\boldsymbol{i}}(v)=G_{\boldsymbol{i}}(\infty ,v)\). From Lemma 3.1, \(G_{\boldsymbol{i}}(v)\) is well defined at least for \(v\in [0,1)\).

Theorem 3.2

For any \(\boldsymbol{i}\neq \boldsymbol{0}\), \(a_{\boldsymbol{i0}} = 1\) if and only if \(\rho (\boldsymbol{1})\leq 0\) and \(J = +\infty \), where

$$\begin{aligned} J:= \int _{0}^{1}\frac{1}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{ \int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy . \end{aligned}$$

More specifically,

(i) If \(\rho (\boldsymbol{1})\leq 0\) and \(J = +\infty \), then \(a_{\boldsymbol{i0}} = 1\) (\(\boldsymbol{i}\neq \boldsymbol{0}\)).

(ii) If \(\rho (\boldsymbol{1})\leq 0\) and \(J < +\infty \), then

$$\begin{aligned} a_{\boldsymbol{i0}}= \frac{\int _{0}^{1}\frac{y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{\int _{0}^{y}\frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy }{ \int _{0}^{1}\frac{1}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{\int _{0}^{y}\frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy }< 1. \end{aligned}$$

(iii) If \(0<\rho (\boldsymbol{1})\leq +\infty \), so that equation (2.1) possesses a smallest nonnegative root \(\boldsymbol{q}=(q_{1}, u_{2}(q_{1}),\ldots ,u_{n}(q_{1}))\in (0,1)^{n}\), then

$$\begin{aligned} a_{\boldsymbol{i0}}= \frac{\int _{0}^{q_{1}} \frac{y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{\int _{0}^{y}\frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy }{\int _{0}^{q_{1}}\frac{1}{B_{1}(y,u_{2}(y), \ldots ,u_{n}(y))}\cdot e^{\int _{0}^{y}\frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy }< \prod_{k=1}^{n}q_{k}^{i_{k}}< 1, \quad \boldsymbol{i}\neq \boldsymbol{0}. \end{aligned}$$
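The formulas of Theorem 3.2 are amenable to direct numerical evaluation once the solution of (3.1) is available. The sketch below (Python) treats case (iii) for a hypothetical two-type example with \(I(\boldsymbol{u})=0.3(u_{1}-1)\) (i.e., \(\alpha =0.3\) and \(a_{(1,0)}=1\)); it integrates (3.1) by Runge–Kutta while accumulating both integrals by the midpoint rule, stopping just below \(q_{1}\), where the integrand has an integrable singularity. The bound \(a_{\boldsymbol{i0}}< q_{1}^{i_{1}}q_{2}^{i_{2}}\) for \(\boldsymbol{i}=(1,0)\) is then visible numerically.

```python
import math

# Hypothetical two-type example (illustrative choices only).
def B1(u1, u2): return 0.5 + 0.5 * u1 * u2 - u1
def B2(u1, u2): return 0.4 + 0.6 * u1 * u1 - u2
def I(u1, u2):  return 0.3 * (u1 - 1.0)      # alpha = 0.3, a_{(1,0)} = 1

def rhs(u1, u2): return B2(u1, u2) / B1(u1, u2)   # equation (3.1)

q1 = (69 ** 0.5 - 3) / 6                     # root of 3x^3 - 8x + 5 = 0
q2 = 0.4 + 0.6 * q1 * q1                     # smallest root q, case (iii)

# RK4 for (3.1), with midpoint-rule accumulation of the two integrals in
# case (iii); stop just below q1 since B_1(q) = 0 there.
M = 40000
y_end = q1 * (1 - 1e-4)
hq = y_end / M
y, u2v, expo, num, den = 0.0, 0.0, 0.0, 0.0, 0.0
for _ in range(M):
    k1 = rhs(y, u2v)
    ym, um = y + hq / 2, u2v + hq * k1 / 2   # midpoint of the current cell
    g = I(ym, um) / B1(ym, um)
    w = math.exp(expo + g * hq / 2) / B1(ym, um)
    num += ym * w * hq                       # i = (1,0): y^1 * u_2(y)^0
    den += w * hq
    expo += g * hq                           # running integral of I/B_1
    k2 = rhs(ym, um)
    k3 = rhs(ym, u2v + hq * k2 / 2)
    k4 = rhs(y + hq, u2v + hq * k3)
    u2v += hq * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    y += hq
a_10 = num / den    # approximate extinction probability from state (1,0)
```

Since every midpoint satisfies \(y< q_{1}\), the computed ratio is strictly below \(q_{1}\), in agreement with the bound \(a_{\boldsymbol{i0}}< q_{1}\) in case (iii).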


Proof

Integrating equality (2.2) (with \(\beta =0\), so that the \(R\)-term vanishes) with respect to \(t\in [0,\infty )\) and using Theorem 3.1, we have that for any \(v\in [0,1)\) and \(\boldsymbol{i}\neq \boldsymbol{0}\),

$$\begin{aligned} &a_{\boldsymbol{i0}}-v^{i_{1}}u^{i_{2}}_{2}(v) \cdots u^{i_{n}}_{n}(v) \\ &\quad = B_{1}\bigl(v,u_{2}(v),\ldots ,u_{n}(v) \bigr)\cdot G'_{\boldsymbol{i}}(v)+I\bigl(v,u_{2}(v), \ldots ,u_{n}(v)\bigr)\cdot G_{\boldsymbol{i}}(v), \end{aligned}$$

where \(G_{\boldsymbol{i}}(v)<+\infty \). First, consider the case \(\rho (\boldsymbol{1})\leq 0\). Solving the ordinary differential equation (3.10) for \(v\in [0,1)\) immediately yields

$$\begin{aligned} &G_{\boldsymbol{i}}(v)\cdot e^{\int _{0}^{v} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx } \\ &\quad = \int _{0}^{v} \frac{a_{\boldsymbol{i0}}-y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy , \end{aligned}$$

which implies that if \(J=+\infty \), then \(a_{\boldsymbol{i0}}=1\). Indeed, if \(a_{\boldsymbol{i0}} < 1\), then by letting \(v\uparrow 1\) in (3.11) we see that the right-hand side of (3.11) tends to −∞, while the left-hand side is always nonnegative, which is a contradiction. Hence, (i) is proven.

Now, we turn to (ii). First, note that \(J<+\infty \) implies \(\int _{0}^{1} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx =- \infty \). Since the left-hand side of (3.11) is always nonnegative, so is the right-hand side. It follows that \(a_{\boldsymbol{i0}}\geq J^{-1}\cdot \int _{0}^{1} \frac{y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \). Therefore, in order to prove (ii), we only need to show that

$$\begin{aligned} a_{\boldsymbol{i0}}\leq J^{-1}\cdot \int _{0}^{1} \frac{y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy . \end{aligned}$$

Set \(x^{*}_{\boldsymbol{j}}=J^{-1}\cdot \int _{0}^{1} \frac{y^{j_{1}}u^{j_{2}}_{2}(y)\cdots u^{j_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \) (\(\boldsymbol{j}\neq \boldsymbol{0}\)). Then, for any \(\boldsymbol{i}\neq \boldsymbol{0}\),

$$\begin{aligned} &\sum_{\boldsymbol{j}\neq \boldsymbol{0}}q_{\boldsymbol{ij}}x_{ \boldsymbol{j}}^{*} +q_{\boldsymbol{i0}} \\ &\quad = J^{-1}\cdot \int _{0}^{1} \frac{\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}q_{\boldsymbol{ij}}\cdot y^{j_{1}}u^{j_{2}}_{2}(y)\cdots u^{j_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \\ &\quad = J^{-1}\cdot \int _{0}^{1}\sum _{k=1}^{n}i_{k}y^{i_{1}}u^{i_{2}}_{2}(y) \cdots u^{{i_{k}}-1}_{k}(y)u'_{k}(y) \cdots u^{i_{n}}_{n}(y)\cdot e^{ \int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \\ &\quad \quad{} + J^{-1}\cdot \int _{0}^{1} \frac{y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)I(y,u_{2}(y),\ldots ,u_{n}(y))}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \\ &\quad = 0. \end{aligned}$$

Here, the last equality follows from integration by parts. Hence, \((x_{\boldsymbol{j}}^{*}:\boldsymbol{j}\neq \boldsymbol{0})\) is a solution of the equation

$$\begin{aligned} \sum_{\boldsymbol{j}\neq \boldsymbol{0}}q_{\boldsymbol{ij}}x_{ \boldsymbol{j}}^{*} +q_{\boldsymbol{i0}}=0, \quad 0\leq x_{ \boldsymbol{j}}^{*}\leq 1, \boldsymbol{i}\neq \boldsymbol{0}. \end{aligned}$$

By Lemma 3.2 in Li and Chen [9], we then have \(a_{\boldsymbol{i0}}\leq x_{\boldsymbol{i}}^{*}\) (\(\boldsymbol{i}\neq \boldsymbol{0}\)) since \(a_{\boldsymbol{i0}}\) is the minimal solution of the above equation. (ii) is proved.

Finally, we consider (iii). Suppose that \(\rho (\boldsymbol{1})>0\). By Lemma 2.1, we know that equation (2.1) has a root \((q_{1},u_{2}(q_{1}),\ldots ,u_{n}(q_{1}))\in (0,1)^{n}\) and \(G_{\boldsymbol{i}}(v)<\infty \) for all \(v\in [0,q_{1}]\). Arguing as above, we only need to show that

$$\begin{aligned} a_{\boldsymbol{i0}}& \leq \lim_{v\uparrow q_{1}}\biggl[ \int _{0}^{v} \frac{1}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \biggr]^{-1} \\ &\quad {}\cdot\int _{0}^{v} \frac{y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy . \end{aligned}$$

By Lemma 2.1 we know that \(\int _{0}^{q_{1}} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x), \ldots ,u_{n}(x))}\,dx =-\infty \) and

$$ \int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx \leq \int _{0}^{y} \frac{I(q_{1},q_{2},\ldots ,q_{n})}{B_{1}(x,q_{2},\ldots ,q_{n})}\,dx \leq C \ln \frac{q_{1}-y}{q_{1}} $$

for \(y\in [0,q_{1})\), where C is a positive constant. Hence, the integral \(\int _{0}^{q_{1}}\frac{1}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{ \int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \), denoted by D, is convergent. Now, by letting

$$ y_{\boldsymbol{j}}^{*}=D^{-1}\cdot \int _{0}^{q_{1}} \frac{y^{j_{1}}u^{j_{2}}_{2}(y)\cdots u^{j_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\cdot e^{\int _{0}^{y} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy , \quad \boldsymbol{j}\neq \boldsymbol{0}, $$

we may prove similarly as above that \((y_{\boldsymbol{j}}^{*}:\boldsymbol{j}\neq \boldsymbol{0})\) is a solution of the equation

$$\begin{aligned} \sum_{\boldsymbol{j}\neq \boldsymbol{0}}q_{\boldsymbol{ij}}x_{ \boldsymbol{j}} +q_{\boldsymbol{i0}}=0, \quad 0\leq x_{ \boldsymbol{j}}\leq 1, \boldsymbol{i}\neq \boldsymbol{0}. \end{aligned}$$

Again, by Lemma 3.2 in Li and Chen [9], we have \(a_{\boldsymbol{i0}}\leq y_{\boldsymbol{i}}^{*}\) (\(\boldsymbol{i}\neq \boldsymbol{0}\)), which proves the first equality in (3.5). The last two assertions in (3.5) are obvious. The proof is complete. □

Theorem 3.2 shows that, when immigration is present, the condition \(\rho (\boldsymbol{1})\leq 0\) (i.e., the death rate is not less than the mean birth rate) is no longer sufficient for eventual extinction; the further condition \(J=\infty \), which reflects the effect of immigration, is also necessary to guarantee extinction.

Having obtained the extinction probability, we are now in a position to consider the extinction time. We shall use \(E_{\boldsymbol{i}}[\tau _{0}]\) to denote the mean extinction time when the process starts at state \(\boldsymbol{i}\neq \boldsymbol{0}\).

Theorem 3.3

Suppose that \(\rho (\boldsymbol{1})\leq 0\) and \(J=\infty \), where J is given in (3.8), so that the extinction probability \(a_{\boldsymbol{i0}}=1\) (\(\boldsymbol{i}\neq \boldsymbol{0}\)). Then, for any \(\boldsymbol{i}\neq \boldsymbol{0}\), \(E_{\boldsymbol{i}}[\tau _{0}]<\infty \) if and only if

$$\begin{aligned} \int _{0}^{1} \frac{1-yu_{2}(y)\cdots u_{n}(y)-I(y,u_{2}(y),\ldots ,u_{n}(y))}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\,dy < \infty \end{aligned}$$

in which case \(E_{\boldsymbol{i}}[\tau _{0}]\) is given by

$$\begin{aligned} E_{\boldsymbol{i}}[\tau _{0}]= \int _{0}^{1} \frac{1-y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{-\int _{y}^{1} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy . \end{aligned}$$


Proof

It follows from (3.11) that

$$\begin{aligned} &\sum_{\boldsymbol{j}\neq \boldsymbol{0}}\biggl( \int _{0}^{\infty }p_{ \boldsymbol{ij}}(t)\,dt \biggr) \cdot u^{j_{1}}u^{j_{2}}_{2}(u)\cdots u^{j_{n}}_{n}(u) \\ &\quad = \int _{0}^{u} \frac{1-y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{-\int _{y}^{u} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy . \end{aligned}$$

Letting \(u\uparrow 1\), using the honesty condition and applying the Monotone Convergence Theorem then yields

$$\begin{aligned} E_{\boldsymbol{i}}[\tau _{0}]& = \int _{0}^{\infty}\bigl(1-p_{ \boldsymbol{i0}}(t) \bigr)\,dt \\ & = \sum_{\boldsymbol{j}\neq \boldsymbol{0}} \int _{0}^{\infty }p_{ \boldsymbol{ij}}(t)\,dt \\ & = \int _{0}^{1} \frac{1-y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{-\int _{y}^{1} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy . \end{aligned}$$

Thus, (3.13) is proved. Finally, it is fairly easy to show that the expression in (3.13) is finite if and only if (3.12) holds. □

4 Recurrence Property

In this section, we consider the recurrence property of the nTMBPI in the case \(\beta \neq 0\), so that 0 is no longer an absorbing state. We shall assume that the nTBI Q-matrix Q is regular.

It is well known that the nTMBPI is recurrent if and only if the extinction probability of the related absorbing nTMBPI (i.e., \(\beta =0\)) equals 1. Therefore, by Theorem 3.2 we have the following result.

Theorem 4.1

The nTMBPI is recurrent if and only if \(\rho (\boldsymbol{1})\leq 0\) and \(J=+\infty \), where J is given in (3.8).

Now, we consider the positive recurrence of nTMBPI.

Theorem 4.2

The nTMBPI is positive recurrent (i.e., ergodic) if and only if \(\rho (\boldsymbol{1})\leq 0\) and

$$\begin{aligned} \int _{0}^{1} \frac{-I(y,u_{2}(y),\ldots ,u_{n}(y))-R(y,u_{2}(y),\ldots , u_{n}(y))}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))}\,dy < \infty . \end{aligned}$$

Moreover, if \(\rho (\boldsymbol{1})<0\) and \(\sum_{j=1}^{n}(I_{j}(\boldsymbol{1})+R_{j}(\boldsymbol{1}))<\infty \), then the process is exponentially ergodic.


Proof

Denote \(\tilde{R}(x):=R(x,u_{2}(x),\ldots ,u_{n}(x))\), \(\tilde{I}(x):=I(x,u_{2}(x),\ldots ,u_{n}(x))\), and \(\tilde{B}_{k}(x):= B_{k}(x,u_{2}(x),\ldots ,u_{n}(x))\) (\(k=1,\ldots ,n\)).

Suppose that \(\rho (\boldsymbol{1})\leq 0\) and (4.1) holds. By Chen [5], in order to prove positive recurrence, we only need to show that the system

$$\begin{aligned} \textstyle\begin{cases} \sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}q_{ \boldsymbol{ij}}y_{\boldsymbol{j}}\leq -1,\quad \boldsymbol{i}\neq \boldsymbol{0}, \\ \sum_{\boldsymbol{j}\neq \boldsymbol{0}}q_{ \boldsymbol{0j}}y_{\boldsymbol{j}}< \infty \end{cases}\displaystyle \end{aligned}$$

has a finite nonnegative solution. By the irreducibility property and the fact that \(\rho (\boldsymbol{1})\leq 0\), we may obtain from (4.1) that

$$\begin{aligned} \int _{0}^{1} \frac{1-y^{i_{1}}u^{i_{2}}_{2}(y)\cdots u^{i_{n}}_{n}(y)}{\tilde{B}_{1}(y)} \cdot e^{\int _{0}^{y}\frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\,dy < \infty ,\quad \boldsymbol{i}\in \mathbf{Z}_{+}^{n}. \end{aligned}$$

Indeed, since \(\beta >0\), it is easy to see that there exists a positive constant L such that \(1-yu_{2}(y)\cdots u_{n}(y)\leq -L\cdot \tilde{R}(y)\). Hence,

$$\begin{aligned} \int _{0}^{1} \frac{1-y^{j_{1}}u^{j_{2}}_{2}(y)\cdots u^{j_{n}}_{n}(y)}{\tilde{B}_{1}(y)}\,dy < \infty \end{aligned}$$

for any \(\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\). Now, let

$$\begin{aligned} y_{\boldsymbol{j}}=e^{-\int _{0}^{1} \frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\cdot \int _{0}^{1} \frac{1-y^{j_{1}} u^{j_{2}}_{2}(y)\cdots u^{j_{n}}_{n}(y)}{\tilde{B}_{1}(y)}\cdot e^{ \int _{0}^{y}\frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\,dy ,\quad \boldsymbol{j}\in \mathbf{Z}_{+}^{n}, \end{aligned}$$

then \(0\leq y_{\boldsymbol{j}}<\infty\) (\(\boldsymbol{j}\in \mathbf{Z}_{+}^{n}\)) and it can be checked that \(\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}q_{ \boldsymbol{ij}}y_{\boldsymbol{j}}=-1\) (\(\boldsymbol{i}\neq \boldsymbol{0}\)) and

$$\begin{aligned} \sum_{\boldsymbol{j}\neq \boldsymbol{0}}q_{\boldsymbol{0j}}y_{ \boldsymbol{j}}\leq e^{-\int _{0}^{1} \frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx } \cdot \int _{0}^{1} \frac{-\tilde{R}(y)}{\tilde{B}_{1}(y)}\,dy < \infty . \end{aligned}$$

Therefore, the nTMBPI is positive recurrent.

Conversely, suppose that the process is positive recurrent and thus possesses an equilibrium distribution \((\pi _{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{+}^{n})\). Letting \(t\rightarrow \infty \) in (2.2) and using the dominated convergence theorem yields

$$\begin{aligned} &\tilde{R}(s)\pi _{\boldsymbol{0}} +\tilde{I}(s)\sum _{ \boldsymbol{j}\neq \boldsymbol{0}}\pi _{\boldsymbol{j}}s^{j_{1}}u^{j_{2}}_{2}(s) \cdots u^{j_{n}}_{n}(s) \\ &\quad{} + \sum_{k=1}^{n} \tilde{B}_{k}(s) \sum_{\boldsymbol{j}\neq \boldsymbol{0}}\pi _{\boldsymbol{j}}j_{k}s^{j_{1}}u^{j_{2}}_{2}(s) \cdots u^{j_{k}-1}_{k}(s) \cdots u^{j_{n}}_{n}(s)=0 \end{aligned}$$

for \(s\in [0,1)\).

Since \(\tilde{R}(s)< 0\) and \(\tilde{I}(s)< 0\) for all \(s\in [0,1)\), by (4.2) and the proof of Theorem 3.1, we know that \(\rho (\boldsymbol{1})\leq 0\). Denote

$$ \pi (s)=\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}\pi _{ \boldsymbol{j}}s^{j_{1}}u^{j_{2}}_{2}(s) \cdots u^{j_{n}}_{n}(s). $$

It follows from (4.2) that

$$\begin{aligned} \pi (s)=\pi _{\boldsymbol{0}}\biggl[1+ \int _{0}^{s} \frac{-\tilde{R} (y)}{\tilde{B}_{1}(y)}\cdot e^{-\int _{y}^{s} \frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\,dy \biggr],\quad s\in [0,1). \end{aligned}$$

Since \(\int _{0}^{s}\frac{-\tilde{R}(y)}{\tilde{B}_{1}(y)}\cdot e^{\int _{0}^{y} \frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\,dy \geq \int _{0}^{\frac{1}{2}} \frac{-\tilde{R}(y)}{\tilde{B}_{1}(y)}\cdot e^{\int _{0}^{y} \frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\,dy >0\) for \(s\geq \frac{1}{2}\), while \(\pi (s)\leq 1\) is bounded, we must have \(\int _{0}^{1}\frac{-\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx <\infty \). Hence,

$$\begin{aligned} \lim_{s\uparrow 1} \int _{0}^{s} \frac{-\tilde{R}(y)}{\tilde{B}_{1}(y)}\,dy \leq \lim _{s\uparrow 1} \frac{\int _{0}^{s}\frac{-\tilde{R}(y)}{\tilde{B}_{1}(y)}\cdot e^{\int _{0}^{y}\frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }\,dy }{ e^{\int _{0}^{s}\frac{\tilde{I}(x)}{\tilde{B}_{1}(x)}\,dx }}< \infty . \end{aligned}$$

Hence, (4.1) holds. The first part is proved.

Now, suppose that \(\rho (\boldsymbol{1})<0\) and \(\sum_{j=1}^{n}(I_{j}(\boldsymbol{1}) +R_{j}( \boldsymbol{1}))<\infty \). We prove that the nTMBPI is exponentially ergodic. Since the mean matrix \((B_{kl}(1,\ldots ,1))\) has a positive right eigenvector \((x_{1},\ldots ,x_{n})\) corresponding to \(\rho (\boldsymbol{1})\), let

$$\begin{aligned} C_{1}:=\biggl[\Biggl(\sum_{j=1}^{n}I_{j}( \boldsymbol{1})\Biggr)\vee \Biggl(\sum_{j=1}^{n}R_{j}( \boldsymbol{1})\Biggr)\biggr]\cdot \max \{x_{1},\ldots ,x_{n}\}>0, \qquad C_{2}:=- \rho (\boldsymbol{1})>0 \end{aligned}$$

and \(f_{\boldsymbol{i}}=\sum_{k=1}^{n}i_{k}x_{k}\) (\(\boldsymbol{i}\in \mathbf{Z}_{+}^{n}\)). We can see that for any \(\boldsymbol{i}\in \mathbf{Z}_{+}^{n}\),

$$\begin{aligned} &\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}q_{\boldsymbol{ij}}(f_{ \boldsymbol{j}} -f_{\boldsymbol{i}}) \\ &\quad = \sum_{k=1}^{n}i_{k} \sum_{l=1}^{n}B_{kl}(1, \ldots ,1)x_{l}+\sum_{l=1}^{n} \bigl[\delta _{\boldsymbol{0}\boldsymbol{i}}R_{l}(\boldsymbol{1}) +(1- \delta _{\boldsymbol{0}\boldsymbol{i}})I_{l}(\boldsymbol{1})\bigr]x_{l} \\ &\quad \leq C_{1}-C_{2}f_{\boldsymbol{i}}. \end{aligned}$$
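Here, the first term reduces by the eigenvector property: since \((x_{1},\ldots ,x_{n})\) is a right eigenvector of the mean matrix \((B_{kl}(1,\ldots ,1))\) associated with the eigenvalue \(\rho (\boldsymbol{1})\),

$$\begin{aligned} \sum_{k=1}^{n}i_{k}\sum_{l=1}^{n}B_{kl}(1,\ldots ,1)x_{l} = \rho (\boldsymbol{1})\sum_{k=1}^{n}i_{k}x_{k} = -C_{2}f_{\boldsymbol{i}}, \end{aligned}$$

while the remaining immigration and resurrection term is bounded above by \(C_{1}\).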

By Corollary 4.49 of Chen [5], the process is exponentially ergodic. The proof is complete. □

Theorem 4.3

Suppose that the nTMBPI is positive recurrent. Then, its equilibrium distribution \((\pi _{\boldsymbol{j}}:\boldsymbol{j}\in \mathbf{Z}_{+}^{n})\) is given by

$$\begin{aligned} \pi (s)=\pi _{\boldsymbol{0}}\biggl[1+ \int _{0}^{s} \frac{-R(y,u_{2}(y),\ldots ,u_{n}(y))}{B_{1}(y,u_{2}(y),\ldots ,u_{n}(y))} \cdot e^{-\int _{y}^{s} \frac{I(x,u_{2}(x),\ldots ,u_{n}(x))}{B_{1}(x,u_{2}(x),\ldots ,u_{n}(x))}\,dx }\,dy \biggr], \quad s\in [0,1), \end{aligned}$$

where \(\pi (s)=\sum_{\boldsymbol{j}\in \mathbf{Z}_{+}^{n}}\pi _{\boldsymbol{j}}s^{j_{1}}u^{j_{2}}_{2}(s) \cdots u^{j_{n}}_{n}(s)\).


Proof

(4.4) follows directly from the proof of Theorem 4.2 (see (4.3)). □

The following conclusion follows immediately from Theorem 3.3.

Theorem 4.4

The nTMBPI is never strongly ergodic.

Finally, we give an example to illustrate our results.

Example 4.1

Consider a two-type Markov branching–immigration process with \(B_{1}(u,v)=p-u+(1-p)v^{2}\), \(B_{2}(u,v)=p-v+(1-p)u^{2}\), \(I(u,v)=\alpha (uv-1)\), and \(R(u,v)=\beta (uv-1)\), where \(\alpha >0\), \(\beta \geq 0\) and \(p\in (0,1)\).

It is easy to see that \(\rho (1,1)=1-2p\). Moreover, the solution of (3.1) is \(v(u)=u\) and the smallest nonnegative solution of (2.1) is \(q_{1}=q_{2}=\min (1,\frac{p}{1-p})\).
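Indeed, along the diagonal \(v(u)=u\) the branching generating function factors as

$$\begin{aligned} B_{1}(q,q)=p-q+(1-p)q^{2}=(1-q)\bigl(p-(1-p)q\bigr), \end{aligned}$$

so the roots of \(B_{1}(q,q)=0\) are \(q=1\) and \(q=\frac{p}{1-p}\), and the smallest nonnegative one in \([0,1]\) is \(\min (1,\frac{p}{1-p})\).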

(i) For the case \(\beta =0\), by Theorem 3.2,

$$\begin{aligned} a_{\boldsymbol{i0}}= \frac{\int _{0}^{q_{1}}\frac{y^{i_{1}+i_{2}}}{p-y+(1-p)y^{2}}\cdot e^{\int _{0}^{y}\frac{\alpha (x^{2}-1)}{p-x+(1-p)x^{2}}\,dx }\,dy }{ \int _{0}^{q_{1}}\frac{1}{p-y+(1-p)y^{2}}\cdot e^{\int _{0}^{y}\frac{\alpha (x^{2}-1)}{p-x+(1-p)x^{2}}\,dx }\,dy }, \end{aligned}$$

which is equal to 1 if and only if \(p>\frac{1}{2}\), or \(p=\frac{1}{2}\) and \(\alpha \leq \frac{1}{4}\). Furthermore, if \(p=\frac{1}{2}\) and \(\alpha \leq \frac{1}{4}\), then \(E_{e_{1}}[\tau _{0}]=+\infty \), while if \(p>\frac{1}{2}\), then

$$\begin{aligned} E_{e_{1}}[\tau _{0}]& = \int _{0}^{1}\frac{1}{ p-(1-p)y}\cdot e^{\int _{y}^{1}\frac{\alpha (1+x)}{p-(1-p)x}\,dx }\,dy \\ & = (2p-1)^{-\frac{\alpha}{(1-p)^{2}}} \int _{0}^{1}\bigl[p-(1-p)y \bigr]^{ \frac{\alpha}{(1-p)^{2}}-1} e^{-\frac{\alpha (1-y)}{1-p}}\,dy . \end{aligned}$$
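The agreement of the two expressions in the display above can be checked numerically. The following is a sketch (not part of the paper) using a simple composite Simpson rule; `p` and `alpha` are arbitrary test values with \(p>\frac{1}{2}\), and the helper `simpson` is our own device:

```python
import math

# Sanity check that the direct and closed-form expressions for
# E_{e_1}[tau_0] in Example 4.1(i) agree when p > 1/2.

def simpson(f, a, b, n=500):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

p, alpha = 0.7, 0.5  # arbitrary test values, p > 1/2

# Direct form: int_0^1 exp(int_y^1 alpha(1+x)/(p-(1-p)x) dx) / (p-(1-p)y) dy.
def direct_integrand(y):
    inner = simpson(lambda x: alpha * (1 + x) / (p - (1 - p) * x), y, 1.0)
    return math.exp(inner) / (p - (1 - p) * y)

direct = simpson(direct_integrand, 0.0, 1.0)

# Closed form: (2p-1)^{-c} int_0^1 (p-(1-p)y)^{c-1} e^{-alpha(1-y)/(1-p)} dy,
# with c = alpha/(1-p)^2.
c = alpha / (1 - p) ** 2
closed = (2 * p - 1) ** (-c) * simpson(
    lambda y: (p - (1 - p) * y) ** (c - 1) * math.exp(-alpha * (1 - y) / (1 - p)),
    0.0, 1.0)

print(direct, closed)
```

Both quadratures integrate the same smooth function in disguise, so the two printed values should agree up to the quadrature error.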

(ii) For the case \(\beta >0\), by Theorem 4.2, the process is positive recurrent if and only if \(p>\frac{1}{2}\).
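In a supercritical case \(p<\frac{1}{2}\) of (i), the extinction probability can be evaluated numerically from the formula above. A sketch, not part of the paper: the inner exponent is taken in closed form, \(\int _{0}^{y}\frac{\alpha (x^{2}-1)}{p-x+(1-p)x^{2}}\,dx =\frac{\alpha }{1-p}y+\frac{\alpha }{(1-p)^{2}}\ln \frac{p-(1-p)y}{p}\), which follows from the factorization \(p-x+(1-p)x^{2}=(1-x)(p-(1-p)x)\); the helper `simpson` and the cutoff `eps` are our own devices:

```python
import math

# Numerical evaluation of a_{i0} from Example 4.1(i) for p < 1/2 (beta = 0).

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

p, alpha = 0.4, 1.0  # arbitrary supercritical test values, p < 1/2
a = 1 - p
q1 = p / a           # q1 = q2 = min(1, p/(1-p)) < 1 here

def weight(y):
    """e^{int_0^y alpha(x^2-1)/B1(x) dx} / B1(y), with the exponent in closed form."""
    expo = (alpha / a) * y + (alpha / a ** 2) * math.log((p - a * y) / p)
    return math.exp(expo) / (p - y + a * y * y)

eps = 1e-8           # stay off the integrable endpoint y = q1

def a_i0(i1, i2):
    num = simpson(lambda y: y ** (i1 + i2) * weight(y), 0.0, q1 - eps)
    den = simpson(weight, 0.0, q1 - eps)
    return num / den

print(a_i0(1, 0), q1)
```

Consistent with the bound \(a_{\boldsymbol{i0}}<\prod_{k=1}^{n}q_{k}^{i_{k}}\), the computed value lies strictly between 0 and \(q_{1}\), and decreases as the initial population grows.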

Data Availability

No datasets were generated or analysed during the current study.


  1. Aksland, M.: A birth, death and migration process with immigration. Adv. Appl. Probab. 7, 44–60 (1977)

  2. Asmussen, S., Hering, H.: Branching Processes. Birkhäuser, Boston (1983)

  3. Athreya, K.B., Jagers, P.: Classical and Modern Branching Processes. Springer, Berlin (1996)

  4. Athreya, K.B., Ney, P.E.: Branching Processes. Springer, Berlin (1972)

  5. Chen, M.F.: From Markov Chains to Non-equilibrium Particle Systems. World Scientific, Singapore (1992)

  6. Foster, J.H.: A limit theorem for a branching process with state-dependent immigration. Ann. Math. Stat. 42, 1773–1776 (1971)

  7. Harris, T.E.: The Theory of Branching Processes. Springer, Berlin (1963)

  8. Kulkarni, M.V., Pakes, A.G.: The total progeny of a simple branching process with state-dependent immigration. J. Appl. Probab. 20, 472–481 (1983)

  9. Li, J.P., Chen, A.Y.: Markov branching processes with immigration and resurrection. Markov Process. Relat. Fields 12, 139–168 (2006)

  10. Li, J.P., Wang, J.: Decay parameter and related properties of n-type branching processes. Sci. China Ser. A, Math. 55, 2535–2556 (2012)

  11. Pakes, A.G.: A branching process with a state-dependent immigration component. Adv. Appl. Probab. 3, 301–314 (1971)

  12. Pakes, A.G., Tavaré, S.: Comments on the age distribution of Markov processes. Adv. Appl. Probab. 13, 681–703 (1981)

  13. Sevast’yanov, B.A.: Limit theorems for branching stochastic processes of special form. Theory Probab. Appl. 2(3), 321–331 (1957)

  14. Vatutin, V.A.: The asymptotic probability of the first degeneration for branching processes with immigration. Theory Probab. Appl. 19(1), 25–34 (1974)

  15. Vatutin, V.A.: A conditional limit theorem for a critical branching process with immigration. Math. Notes 21, 405–411 (1977)

  16. Yamazato, M.: Some results on continuous time branching processes with state-dependent immigration. J. Math. Soc. Jpn. 17, 479–497 (1975)


This work is substantially supported by the National Natural Science Foundation of China (No. 11771452) and the Science Foundation of Hunan, China (No. 2020JJ4674).


Author information

Junping Li proved Theorems 3.1–3.3 and wrote the main manuscript; Juan Wang proved Theorems 4.2–4.3. Both authors reviewed the manuscript.

Corresponding author

Correspondence to Junping Li.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License.

About this article

Cite this article

Li, J., Wang, J. Extinction behavior and recurrence of n-type Markov branching–immigration processes. Bound Value Probl 2024, 1 (2024).

