
The multiple birth properties of multi-type Markov branching processes

Abstract

The main purpose of this paper is to investigate the multiple birth properties of multi-type Markov branching processes. We first construct a new multi-dimensional Markov process, based on the multi-type Markov branching process, which reveals the multiple birth characteristics. Using this new process, we then obtain the joint probability distribution of the multiple births of a multi-type Markov branching process up to any time t. Furthermore, the probability distribution of the multiple births up to the extinction of the process is also given.

1 Introduction

Markov branching processes play an important role in the research and application of stochastic processes. Standard references are Anderson [1], Harris [2], Athreya & Ney [3], Asmussen & Hering [4], Athreya & Jagers [5] and others.

The basic property governing the evolution of a Markov branching process is the branching property, i.e., different individuals act independently when producing offspring. The classical Markov branching processes are well studied; see Harris [2], Athreya & Ney [3], Asmussen & Hering [4], and Athreya & Jagers [5]. Based on the branching structure, many works generalize the ordinary Markov branching process. For example, Vatutin [6] and Li, Chen & Pakes [7] considered branching processes with state-independent immigration; Chen, Li & Ramesh [8] and Chen, Pollet, Zhang & Li [9] considered weighted Markov branching processes; Li & Chen [10] considered generalized Markov interacting branching processes; Li & Wang [11, 12] and Meng & Li [13] considered n-type branching processes with or without immigration. Recently, Li & Li [14, 15] considered down/up crossing properties of weighted Markov collision processes and one-dimensional Markov branching processes.

In this paper, we mainly discuss the multiple birth properties of multi-type Markov branching processes. In contrast to the one-type case, the numbers of individuals of the other types may also change when an individual splits.

For convenience, we fix the following notation, which will be used throughout this paper. Let \(\mathbf{Z}_{+}\) be the set of non-negative integers.

(C-1) \(\mathbf{Z}_{+} ^{d} :=\{\boldsymbol{i}=({i_{1}},\ldots ,{i_{d}}):{i_{1}}, \ldots ,{i_{d}} \in {\mathbf{Z}_{+}}\}\), and for any \(\boldsymbol{i}=(i_{1},\ldots ,i_{d}) \in \mathbf{Z}_{+}^{d}\), denote \(\mid \boldsymbol{i}\mid =\sum \limits _{k=1}^{d} i_{k}\).

(C-2) \({[0,1]^{d}} = \{\boldsymbol{x}=({x_{1}},\ldots ,{x_{d}}):0 \leq {x_{1}},\ldots ,{x_{d}} \leq 1\} \).

(C-3) \({\chi _{{\mathbf{Z}_{+}^{d}}}}(\cdot )\) is the indicator of \(\mathbf{Z}_{+}^{d}\).

(C-4) \(\boldsymbol{0}=(0,\ldots ,0)\), \(\boldsymbol{1}= (1,\ldots ,1)\), \({\boldsymbol{e}_{k}} = (0,\ldots ,{1_{k}},\ldots ,0)\) are vectors in \({[0,1]^{d}}\).

(C-5) For any \(\boldsymbol{x},\boldsymbol{y}\in [0,1]^{d}\), \(\boldsymbol{x}\leq \boldsymbol{y}\) means \(x_{k} \leq y_{k}\) for all \(k= 1,\ldots ,d\). \(\boldsymbol{x}<\boldsymbol{y}\) means \(x_{k} \leq y_{k}\) for all \(k= 1,\ldots ,d\), and \(x_{k}< y_{k}\) for at least one k.

(C-6) For any \(\boldsymbol{x}\in [0,1]^{d}\), denote \(\| \boldsymbol{x}\|_{1} = \sum \limits _{k=1}^{d} |x_{k}|\).

A d-type Markov branching process can be intuitively described as follows:

(1) Consider a system involving d types of individuals. The life length of a type-k individual is exponentially distributed with parameter \(\theta _{k}\ (k= 1,\ldots ,d)\).

(2) Individuals in the system split independently. When a type-k individual dies after a random time, it is replaced by \({j_{1}}\) individuals of type-1, ⋯ , and \(j_{d}\) individuals of type-d with probability \(p^{(k)}_{\boldsymbol{j}}\), where \(\boldsymbol{j}=(j_{1},\ldots ,j_{d})\). Without loss of generality, we can assume \(p^{(k)}_{\boldsymbol{e}_{k}}=0\ (k=1,\ldots ,d)\), since such a split does not change the state of the system.

(3) When this system is empty, it stops, i.e., 0 is an absorbing state.
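The dynamics (1)–(3) can be simulated directly with a Gillespie-style scheme: the next event time is exponential with the total rate \(\sum _{k}i_{k}\theta _{k}\), the splitting type is chosen proportionally to \(i_{k}\theta _{k}\), and the splitting individual is replaced by a draw from its offspring law. The following sketch assumes \(d=2\); the function name `simulate` and all rates and offspring laws are illustrative assumptions of ours, not taken from the model above.

```python
import random

def simulate(init, theta, offspring, t_max, seed=0):
    """Simulate the population vector of a 2-type Markov branching process.

    init: (i1, i2) initial counts; theta[k]: split rate of type-(k+1);
    offspring[k]: list of ((j1, j2), prob) replacement laws for type-(k+1).
    """
    rng = random.Random(seed)
    state = list(init)
    t = 0.0
    while state[0] + state[1] > 0:           # 0 is absorbing: stop when empty
        total_rate = state[0] * theta[0] + state[1] * theta[1]
        t += rng.expovariate(total_rate)     # exponential holding time
        if t >= t_max:
            break
        # choose the splitting type with probability i_k * theta_k / total_rate
        k = 0 if rng.random() * total_rate < state[0] * theta[0] else 1
        u, acc = rng.random(), 0.0
        for j, p in offspring[k]:            # draw the replacement vector j
            acc += p
            if u <= acc:
                state[k] -= 1                # the splitting individual dies
                state[0] += j[0]
                state[1] += j[1]
                break
    return tuple(state)

result = simulate((3, 2), (1.0, 1.0),
                  [[((0, 0), 0.5), ((2, 0), 0.5)],   # type-1: death or (2,0)-birth
                   [((0, 0), 0.5), ((0, 2), 0.5)]],  # type-2: death or (0,2)-birth
                  t_max=1.0)
```

The run stops either at the absorbing state \(\boldsymbol{0}\) or at the time horizon t_max.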

We now define the infinitesimal generator of d-type Markov branching processes, i.e., the Q-matrix.

Definition 1.1

A Q-matrix \(Q = (q_{\boldsymbol{ij}}:\boldsymbol{i},\boldsymbol{j} \in \mathbf{Z}_{+}^{d})\) is called a d-type Markov branching Q-matrix (henceforth referred to as a dTMB Q-matrix), if

$$\begin{aligned} q_{\boldsymbol{ij}}= \textstyle\begin{cases} \sum \limits _{k=1}^{d} i_{k}b_{\boldsymbol{j}-\boldsymbol{i}+ \boldsymbol{e}_{k}}^{(k)},\ & if \mid \boldsymbol{i}\mid > 0, \\ 0,\ &\ otherwise, \end{cases}\displaystyle \end{aligned}$$
(1.1)

where \(b^{(k)}_{\boldsymbol{j}}=0\) for \(\boldsymbol{j}\notin \mathbf{Z}_{+}^{d}\) and

$$\begin{aligned} b_{\boldsymbol{j}}^{(k)}=\theta _{k}p_{\boldsymbol{j}}^{(k)} \geq 0 \ (\ \boldsymbol{j}\neq \boldsymbol{e}_{k}), \quad b^{(k)}_{ \boldsymbol{e}_{k}}=-\sum \limits _{\boldsymbol{j}\neq \boldsymbol{e}_{k}} b_{\boldsymbol{j}}^{(k)}\ (k=1,\ldots ,d). \end{aligned}$$
(1.2)
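Concretely, an entry \(q_{\boldsymbol{ij}}\) can be evaluated directly from the rates \(b^{(k)}_{\boldsymbol{j}}\) as \(\sum _{k}i_{k}b^{(k)}_{\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{k}}\) (cf. (2.5)), with \(b^{(k)}_{\boldsymbol{j}}=0\) outside \(\mathbf{Z}_{+}^{d}\). A minimal sketch with \(d=2\); the function name `q_entry` and the rates below are illustrative assumptions (type-1 splits at rate \(\theta _{1}=1\) into nothing or \((2,0)\); type-2 at rate \(\theta _{2}=2\) into nothing or \((0,2)\)):

```python
# q_ij = sum_k i_k * b^{(k)}_{j - i + e_k}, with b^{(k)}_m = 0 for m outside Z_+^d.
def q_entry(i, j, b):
    """i, j: population vectors; b[k]: dict mapping vectors m to b^{(k)}_m."""
    if sum(i) == 0:
        return 0.0                                   # 0 is an absorbing state
    d = len(i)
    total = 0.0
    for k in range(d):
        m = tuple(j[a] - i[a] + (1 if a == k else 0) for a in range(d))
        total += i[k] * b[k].get(m, 0.0)             # missing keys: b^{(k)}_m = 0
    return total

theta = (1.0, 2.0)
# b^{(k)}_j = theta_k * p^{(k)}_j off the diagonal, b^{(k)}_{e_k} = -theta_k:
b = [{(0, 0): 0.5, (2, 0): 0.5, (1, 0): -theta[0]},  # type-1: death or (2,0)
     {(0, 0): 1.0, (0, 2): 1.0, (0, 1): -theta[1]}]  # type-2: death or (0,2)
```

For instance, \(q_{\boldsymbol{e}_{1},\boldsymbol{0}}=b^{(1)}_{(0,0)}\) and \(q_{\boldsymbol{e}_{1},\boldsymbol{e}_{1}}=b^{(1)}_{\boldsymbol{e}_{1}}=-\theta _{1}\).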

Definition 1.2

A d-type Markov branching process (henceforth referred to as dTMBP) is a continuous-time Markov chain with state space \(\mathbf{Z}_{+} ^{d}\) whose transition probability function \(P(t)=(p_{\boldsymbol{ij}}(t):\boldsymbol{i},\boldsymbol{j}\in \mathbf{Z}_{+}^{d})\) satisfies the Kolmogorov forward equation

$$\begin{aligned} P'(t)=P(t)Q, \end{aligned}$$

where Q is given in (1.1)–(1.2).

2 Preliminaries

In this section, we collect some preliminaries related to the problem considered in this paper. For \(k=1,\ldots , d\), let \(R_{k}\subset \mathbf{Z}_{+}^{d}\) be finite subsets. If \(b^{(k)}_{\boldsymbol{j}_{0}}=0\) for some \(\boldsymbol{j}_{0}\in R_{k}\), then no individual can give a \(\boldsymbol{j}_{0}\)-birth; we therefore assume \(b^{(k)}_{\boldsymbol{j}}>0\) for all \(\boldsymbol{j}\in R_{k}\). Also, let \(r_{k}\) denote the number of elements in \(R_{k}\) and \(r=r_{1}+\cdots +r_{d}\). This paper is devoted to the probability distribution of the number of type-k individuals giving \(R_{k}\)-births up to time t.

For convenience, we only discuss the case of the 2-type Markov branching process; the general d-type case \((d \geq 3)\) can be treated analogously.

Define

$$\begin{aligned} B_{k}(\boldsymbol{x})=\sum \limits _{\boldsymbol{j}\in {\mathbf{Z}}_{+}^{2}}b^{(k)}_{ \boldsymbol{j}}\boldsymbol{x} ^{\boldsymbol{j}},\quad \boldsymbol{x}\in [0,1]^{2},\ \ k=1,2, \end{aligned}$$
(2.1)

and

$$\begin{aligned} B_{ij}(\boldsymbol{x})= \frac{\partial B_{i}(\boldsymbol{x})}{\partial x_{j}},\quad \boldsymbol{x}\in [0,1]^{2},\ \ i,j=1,2. \end{aligned}$$

In order to avoid some trivial cases, we assume the following conditions hold.

(A-1) \((B_{1}(\boldsymbol{x}),B_{2}(\boldsymbol{x}))\) is nonsingular, i.e., there is no \(2\times 2\)-matrix M such that \((B_{1}(\boldsymbol{x}), B_{2}(\boldsymbol{x}))=\boldsymbol{x}M\);

(A-2) \(B_{ij}(1,1)<\infty \), \(i,j=1,2\);

(A-3) The matrix \((B_{ij}(1,1):i,j=1,2)\) is positively regular, i.e., there exists an integer m such that \((B_{ij}(1,1):i,j=1,2)^{m}>0\) in the sense that all its entries are positive.

(A-1) guarantees that the model under consideration is not trivial. (A-2) guarantees the regularity of the process. (A-3) guarantees that the different types of individuals can communicate.

For any \(\boldsymbol{x}\in [0,1]^{2}\), the maximal eigenvalue of \((B_{ij}(\boldsymbol{x}):i,j=1,2)\) is denoted by \(\rho (\boldsymbol{x})\). The following lemma is due to Li & Wang [12]; we state it without proof.

Lemma 2.1

The system of equations

$$\begin{aligned} \textstyle\begin{cases} B_{1}(\boldsymbol{x})=0, \\ B_{2}(\boldsymbol{x})=0, \end{cases}\displaystyle \end{aligned}$$
(2.2)

has at most two solutions in \([0,1]^{2}\). Let \(\boldsymbol{q}=(q_{1},q_{2})\) denote the smallest nonnegative solution to (2.2). Then,

(i) \(q_{i}\) is the extinction probability when the Feller minimal process starts at state \(\boldsymbol{e}_{i}\ (i=1,2)\). Moreover, if \(\rho (\boldsymbol{1})\leq 0\), then \(\boldsymbol{q}=\boldsymbol{1}\); while if \(\rho (\boldsymbol{1})>0\), then \(\boldsymbol{q}<\boldsymbol{1}\), i.e., \(q_{1}, q_{2}<1\).

(ii) \(\rho (\boldsymbol{q})\leq 0\).
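Lemma 2.1 lends itself to a numerical check. Writing \(B_{k}(\boldsymbol{x})=\theta _{k}(F_{k}(\boldsymbol{x})-x_{k})\), where \(F_{k}\) is the offspring probability generating function of type k, the minimal nonnegative root \(\boldsymbol{q}\) of (2.2) is the limit of the iteration \(\boldsymbol{x}_{n+1}=(F_{1}(\boldsymbol{x}_{n}),F_{2}(\boldsymbol{x}_{n}))\) started from \(\boldsymbol{x}_{0}=\boldsymbol{0}\). A sketch under the illustrative assumption that each type dies with probability 1/4 and gives a \((1,1)\)-birth with probability 3/4, so that \(F_{1}(\boldsymbol{x})=F_{2}(\boldsymbol{x})=\frac{1}{4}+\frac{3}{4}x_{1}x_{2}\) and \(q_{1}=q_{2}=\frac{1}{3}\):

```python
# Fixed-point iteration for the minimal nonnegative root of B_1(x) = B_2(x) = 0,
# i.e. the extinction probabilities of Lemma 2.1(i).
def extinction_prob(F, d=2, n_iter=200):
    x = [0.0] * d
    for _ in range(n_iter):
        x = [F[k](x) for k in range(d)]   # simultaneous update x -> F(x)
    return x

# Illustrative supercritical law: death w.p. 1/4, (1,1)-birth w.p. 3/4.
F = [lambda x: 0.25 + 0.75 * x[0] * x[1]] * 2
q = extinction_prob(F)                    # converges to (1/3, 1/3)
```

Since the mean matrix here has entries \(3/4\) (spectral radius \(3/2>1\), i.e., \(\rho (\boldsymbol{1})>0\)), the lemma predicts \(\boldsymbol{q}<\boldsymbol{1}\), consistent with the computed value.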

The following result is well known and reveals the basic property of 2-type Markov branching processes.

Lemma 2.2

Let \(P(t)=(p_{\boldsymbol{i}\boldsymbol{j}}(t):\boldsymbol{i},\boldsymbol{j}\in \mathbf{Z}_{+}^{2})\) be the transition function with Q-matrix Q given in (1.1)–(1.2). Then,

$$\begin{aligned} \frac{\partial F_{\boldsymbol{i}}(t,\boldsymbol{x})}{\partial t} =B_{1}( \boldsymbol{x}) \frac{\partial F_{\boldsymbol{i}}(t,\boldsymbol{x})}{\partial x_{1}}+B_{2}( \boldsymbol{x}) \frac{\partial F_{\boldsymbol{i}}(t,\boldsymbol{x})}{\partial x_{2}}, \end{aligned}$$

where \(F_{\boldsymbol{i}}(t,\boldsymbol{x})=\sum \limits _{\boldsymbol{j}\in \mathbf{Z}_{+}^{2}}p_{ \boldsymbol{i}\boldsymbol{j}}(t)\boldsymbol{x} ^{\boldsymbol{j}}\) with \(\boldsymbol{x}^{\boldsymbol{j}}=x_{1}^{j_{1}}x_{2}^{j_{2}}\).

Li & Meng [16] derived the regularity criteria for 2-type Markov branching processes. Assumption (A-2) guarantees the regularity of the process.

Let \(\boldsymbol{Y}(t)=(Y_{\boldsymbol{k}}(t):\boldsymbol{k}\in R_{1})\) be the number of type-1 individuals giving \(R_{1}\)-births up to time t and \(\boldsymbol{Z}(t)=(Z_{\boldsymbol{k}}(t):\boldsymbol{k}\in R_{2})\) be the number of type-2 individuals giving \(R_{2}\)-births up to time t. We will discuss the probability distribution of \((\boldsymbol{Y}(t),\boldsymbol{Z}(t))\). To this end, we define

$$\begin{aligned}& B_{1}(\boldsymbol{x},\boldsymbol{y}) = \sum \limits _{ \boldsymbol{j}\in R_{1}} b_{\boldsymbol{j}}^{(1)}\boldsymbol{x}^{ \boldsymbol{j}} y_{\boldsymbol{j}} ,\quad \bar{B}_{1}( \boldsymbol{x})= \sum \limits _{\boldsymbol{j}\in R_{1}^{c}} b_{ \boldsymbol{j}}^{(1)}\boldsymbol{x}^{\boldsymbol{j}}, \end{aligned}$$
(2.3)
$$\begin{aligned}& B_{2}(\boldsymbol{x},\boldsymbol{z}) = \sum \limits _{ \boldsymbol{j}\in R_{2}} b_{\boldsymbol{j}}^{(2)}\boldsymbol{x}^{ \boldsymbol{j}} z_{\boldsymbol{j}} ,\quad \bar{B}_{2}( \boldsymbol{x}) =\sum \limits _{\boldsymbol{j}\in R_{2}^{c}} b_{ \boldsymbol{j}}^{(2)}\boldsymbol{x}^{\boldsymbol{j}}, \end{aligned}$$
(2.4)

where \(\boldsymbol{x}=(x_{1},x_{2})\in [0,1]^{2}\); \(\boldsymbol{y}=(y_{\boldsymbol{j}}:\boldsymbol{j}\in R_{1})\), \(\boldsymbol{z}=(z_{\boldsymbol{j}}:\boldsymbol{j}\in R_{2})\). It is obvious that \(\bar{B}_{1}(\boldsymbol{x})\) and \(\bar{B}_{2}(\boldsymbol{x})\) are well defined at least on \({[0,1]^{2}}\), while \(B_{1}(\boldsymbol{x},\boldsymbol{y})\) and \(B_{2}(\boldsymbol{x},\boldsymbol{z})\) are well defined at least on \([0,1]^{2+r_{1}}\) and \([0,1]^{2+r_{2}}\), respectively.
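By construction, (2.3)–(2.4) split the generating function of (2.1): \(B_{k}(\boldsymbol{x},\boldsymbol{1})+\bar{B}_{k}(\boldsymbol{x})=B_{k}(\boldsymbol{x})\). A quick sketch of this identity for \(k=1\), under the illustrative rates \(\theta _{1}=1\), \(p^{(1)}_{(0,0)}=1/4\), \(p^{(1)}_{(1,1)}=3/4\) and \(R_{1}=\{(0,0)\}\) (so that \(b^{(1)}_{(0,0)}=0.25\), \(b^{(1)}_{(1,1)}=0.75\), \(b^{(1)}_{\boldsymbol{e}_{1}}=-1\)); the function names are ours:

```python
# Illustrative rates b^{(1)}_j, with b^{(1)}_{e_1} = -theta_1 = -1.
b1 = {(0, 0): 0.25, (1, 1): 0.75, (1, 0): -1.0}
R1 = {(0, 0)}

def B1(x):        # full generating function B_1(x) of (2.1)
    return sum(v * x[0] ** j[0] * x[1] ** j[1] for j, v in b1.items())

def B1_split(x, y):   # B_1(x, y) of (2.3); y is indexed by j in R_1
    return sum(b1[j] * x[0] ** j[0] * x[1] ** j[1] * y[j] for j in R1)

def B1_bar(x):    # B_bar_1(x) of (2.3): the terms with j outside R_1
    return sum(v * x[0] ** j[0] * x[1] ** j[1]
               for j, v in b1.items() if j not in R1)
```

Setting every \(y_{\boldsymbol{j}}=1\) recovers \(B_{1}(\boldsymbol{x})\), and \(B_{1}(\boldsymbol{1})=0\) as required by (1.2).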

Since the 2-type branching process itself cannot directly reveal the detailed multiple births, we define a new Q-matrix \(\tilde{Q} =(q_{{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}: (\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}})\) as follows:

$$\begin{aligned} q_{{(\boldsymbol{i},\boldsymbol{k},\tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})}}= \textstyle\begin{cases} \sum \limits _{a=1}^{2} i_{a}b^{(a)}_{\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{a}},& if\ |\boldsymbol{i}|>0,\ \boldsymbol{l}=\boldsymbol{k}+I_{{R_{1}}}(\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{1})\varepsilon _{\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{1}},\\ &\hphantom{if\ }\tilde{\boldsymbol{l}}=\tilde{\boldsymbol{k}}+I_{{R_{2}}}(\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{2})\tilde{\varepsilon}_{\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{2}}, \\ 0,& otherwise, \end{cases}\displaystyle \end{aligned}$$
(2.5)

where \(\varepsilon _{\boldsymbol{k}}\ (\boldsymbol{k}\in R_{1})\) denotes the vector in \(\mathbf{Z}_{+}^{r_{1}}\) whose \(\boldsymbol{k}\)’th coordinate is 1 and whose other coordinates are 0, and \(\tilde{\varepsilon}_{\tilde{\boldsymbol{k}}}\ (\tilde{\boldsymbol{k}}\in R_{2})\) denotes the vector in \(\mathbf{Z}_{+}^{r_{2}}\) whose \(\tilde{\boldsymbol{k}}\)’th coordinate is 1 and whose other coordinates are 0. \(I_{{R_{1}}}\) and \(I_{{R_{2}}}\) are the indicators of \(R_{1}\) and \(R_{2}\), respectively. From the definition of Q̃, we see that \(\boldsymbol{l}=\boldsymbol{k}+\varepsilon _{\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{1}}\) if and only if \(\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{1}\in R_{1}\), and \(\tilde{\boldsymbol{l}}=\tilde{\boldsymbol{k}}+\tilde{\varepsilon}_{\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{2}}\) if and only if \(\boldsymbol{j}-\boldsymbol{i}+\boldsymbol{e}_{2}\in R_{2}\). Hence, Q̃ counts the multiple births.

It is obvious that the Q-matrix Q̃ defined in (2.5) determines a \((2+r_{1}+r_{2})\)-dimensional continuous-time Markov chain \((\boldsymbol{X}(t),\boldsymbol{Y}(t),\boldsymbol{Z}(t))\), where \(\boldsymbol{X}(t)\) is the 2-type Markov branching process, \(\boldsymbol{Y}(t) =(Y_{\boldsymbol{k}}(t):\boldsymbol{k} \in R_{1})\) (or \(\boldsymbol{Z}(t) =(Z_{\boldsymbol{k}}(t):\boldsymbol{k} \in R_{2})\)) counts the number of type-1 (or type-2) individuals giving \(R_{1}\)-birth (or \(R_{2}\)-birth) until time t. We assume that \(Y_{\boldsymbol{k}}(0)=0\) and \(Z_{\boldsymbol{k}}(0)=0\) for all \(\boldsymbol{k}\in R_{1}\) and \(\boldsymbol{k}\in R_{2}\). In particular,

(1) if \(R_{1}=\{\boldsymbol{0}\}\) (or \(R_{2}=\{\boldsymbol{0}\}\)), then \(Y_{\boldsymbol{0}}(t)\) (or \(Z_{\boldsymbol{0}}(t)\)) counts the pure death number of type-1 (or type-2) individuals until time t.

(2) If \(R_{1}=\{(n_{1},n_{2})\}\), then \(Y_{(n_{1},n_{2})}(t)\) counts the \((n_{1},n_{2})\)-birth number of type-1 individuals until time t.

(3) If \(R_{2}=\{(n_{1},n_{2})\}\), then \(Z_{(n_{1},n_{2})}(t)\) counts the \((n_{1},n_{2})\)-birth number of type-2 individuals until time t.

Let \(\tilde{P}(t):=(\tilde{p}_{{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t): (\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}})\) be the transition probability of \((\boldsymbol{X}(t), \boldsymbol{Y}(t),\boldsymbol{Z}(t))\). Define

$$ F_{{\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}}}(t, \boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}) =\sum \limits _{( \boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}}\tilde{p}_{{(\boldsymbol{i}, \boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j}, \boldsymbol{l},\tilde{\boldsymbol{l}})}}(t) \boldsymbol{x}^{ \boldsymbol{j}}\boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{ \tilde{\boldsymbol{l}}},\quad (\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})\in [0,1]^{2+r_{1}+r_{2}}, $$

where \(\boldsymbol{x}^{\boldsymbol{j}}=x_{1}^{j_{1}}x_{2}^{j_{2}}\), \(\boldsymbol{y}^{\boldsymbol{l}}=\prod \limits _{\boldsymbol{m} \in R_{1}}y_{\boldsymbol{m}}^{l_{\boldsymbol{m}}}\) and \(\boldsymbol{z}^{\tilde{\boldsymbol{l}}}=\prod \limits _{ \boldsymbol{m}\in R_{2}}z_{\boldsymbol{m}}^{\tilde{l}_{ \boldsymbol{m}}}\).

Lemma 2.3

Let \(\tilde{P}(t)=(\tilde{p}_{{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})}}(t): (\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}})\) be the transition probability of \((\boldsymbol{X}(t),\boldsymbol{Y}(t),\boldsymbol{Z}(t))\). Then,

(1) for any \((\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) \in [0,1]^{2+r_{1}+r_{2}}\),

$$\begin{aligned} & \frac{\partial F_{\boldsymbol{i},\boldsymbol{0},\tilde{\boldsymbol{0}}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})}{\partial t} \\ =&[B_{1}(\boldsymbol{x},\boldsymbol{y})+\bar{B} _{1}(\boldsymbol{x})] \frac{\partial F_{\boldsymbol{i},\boldsymbol{0},\tilde{\boldsymbol{0}}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})}{{\partial {x_{1}}}}+[B_{2}(\boldsymbol{x},\boldsymbol{z})+\bar{B} _{2}( \boldsymbol{x})] \frac{\partial F_{\boldsymbol{i},\boldsymbol{0},\tilde{\boldsymbol{0}}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})}{{\partial x_{2}}}, \end{aligned}$$
(2.6)

where \(B_{1}(\boldsymbol{x},\boldsymbol{y})\), \(B_{2}(\boldsymbol{x},\boldsymbol{z})\), \(\bar{B} _{1}(\boldsymbol{x})\) and \(\bar{B} _{2}(\boldsymbol{x})\) are defined in (2.1), (2.3)–(2.4).

(2) For any \((\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) \in [0,1]^{2+r_{1}+r_{2}}\) and \((\boldsymbol{i},\boldsymbol{k},\tilde{\boldsymbol{k}}) \in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}\),

$$\begin{aligned} F_{{\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}) =\boldsymbol{y}^{\boldsymbol{k}}\boldsymbol{z}^{\tilde{\boldsymbol{k}}} [ \boldsymbol{F}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})]^{\boldsymbol{i}}, \end{aligned}$$
(2.7)

where \(\boldsymbol{F}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})=(F_{1}(t,\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}),F_{2}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}))\) with \(F_{k}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})=F_{\boldsymbol{e}_{k},\boldsymbol{0}, \boldsymbol{0}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})\ (k=1,2)\).

Proof

(1) By the Kolmogorov forward equation, for any \((\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}),( \boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}\),

$$ \tilde{p}'_{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}(t) =\sum \limits _{(\boldsymbol{a}, \boldsymbol{m},\tilde{\boldsymbol{m}}) \in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}} \tilde{p}_{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{a},\boldsymbol{m}, \tilde{\boldsymbol{m}})}(t) q_{(\boldsymbol{a},\boldsymbol{m}, \tilde{\boldsymbol{m}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}. $$

Multiplying both sides of the above equation by \(\boldsymbol{x}^{\boldsymbol{j}}\boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}}\) and summing over \((\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}\) yields (2.6).

(2) Let \(\boldsymbol{X}_{a,k}(t)\) denote the offspring at time t of the k’th initial individual of type a, and let \(\boldsymbol{Y}_{a,k}(t)\) (resp. \(\boldsymbol{Z}_{a,k}(t)\)) denote the number of \(R_{1}\)-births (resp. \(R_{2}\)-births) occurring in \(\boldsymbol{X}_{a,k}(t)\ (a=1,2)\). Then, \(\{(\boldsymbol{X}_{a,k}(t),\boldsymbol{Y}_{a,k}(t), \boldsymbol{Z}_{a,k}(t)): k=1,\ldots , i_{a}; a=1,2\}\) are independent. Moreover, for \(a=1,2\), \((\boldsymbol{X}_{a,k}(t),\boldsymbol{Y}_{a,k}(t), \boldsymbol{Z}_{a,k}(t))\) has the same distribution as \((\boldsymbol{X}(t),\boldsymbol{Y}(t), \boldsymbol{Z}(t))\) starting at \((\boldsymbol{e}_{a},\boldsymbol{0},\boldsymbol{0})\). Thus,

$$\begin{aligned} &E[\boldsymbol{x}^{\boldsymbol{X}(t)}\boldsymbol{y} ^{\boldsymbol{Y}(t)}\boldsymbol{z}^{\boldsymbol{Z}(t)}\mid (\boldsymbol{X}(0),\boldsymbol{Y}(0),\boldsymbol{Z}(0)) =(\boldsymbol{i},\boldsymbol{k},\tilde{\boldsymbol{k}})] \\ &\quad =E[\boldsymbol{x}^{\sum \limits _{a=1}^{2}\sum \limits _{k=1}^{i_{a}} \boldsymbol{X}_{a,k}(t)}\boldsymbol{y} ^{\boldsymbol{k}+\sum \limits _{a=1}^{2}\sum \limits _{k=1}^{i_{a}} \boldsymbol{Y}_{a,k}(t)} \boldsymbol{z}^{\tilde{\boldsymbol{k}} +\sum \limits _{a=1}^{2} \sum \limits _{k=1}^{i_{a}} \boldsymbol{Z}_{a,k}(t)}] \\ &\quad =\boldsymbol{y}^{\boldsymbol{k}}\boldsymbol{z} ^{\tilde{\boldsymbol{k}}}E\Bigl[\prod \limits _{k=1}^{i_{1}} \boldsymbol{x} ^{\boldsymbol{X}_{1,k}(t)}\boldsymbol{y} ^{\boldsymbol{Y}_{1,k}(t)}\boldsymbol{z}^{\boldsymbol{Z}_{1,k}(t)}\cdot \prod \limits _{k=1}^{i_{2}} \boldsymbol{x} ^{\boldsymbol{X}_{2,k}(t)}\boldsymbol{y} ^{\boldsymbol{Y}_{2,k}(t)}\boldsymbol{z}^{\boldsymbol{Z}_{2,k}(t)}\Bigr] \\ &\quad =\boldsymbol{y}^{\boldsymbol{k}}\boldsymbol{z} ^{\tilde{\boldsymbol{k}}}\bigl(E[\boldsymbol{x} ^{\boldsymbol{X}_{1,1}(t)}\boldsymbol{y} ^{\boldsymbol{Y}_{1,1}(t)}\boldsymbol{z}^{\boldsymbol{Z}_{1,1}(t)}]\bigr)^{i_{1}}\cdot \bigl(E[\boldsymbol{x} ^{\boldsymbol{X}_{2,1}(t)}\boldsymbol{y} ^{\boldsymbol{Y}_{2,1}(t)}\boldsymbol{z}^{\boldsymbol{Z}_{2,1}(t)}]\bigr)^{i_{2}} \\ &\quad =\boldsymbol{y}^{\boldsymbol{k}}\boldsymbol{z} ^{\tilde{\boldsymbol{k}}}[\boldsymbol{F}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})]^{\boldsymbol{i}}. \end{aligned}$$

The proof is complete. □

The functions \(B_{1}(\boldsymbol{x},\boldsymbol{y}) +\bar{B}_{1}( \boldsymbol{x})\) and \(B_{2}(\boldsymbol{x},\boldsymbol{z}) +\bar{B}_{2}( \boldsymbol{x})\) will play a significant role in the later discussion. The following theorem reveals their properties.

Theorem 2.1

(1) For any \(\boldsymbol{y}\in [0,1)^{r_{1}}\), \(\boldsymbol{z}\in [0,1)^{r_{2}}\),

$$\begin{aligned} \textstyle\begin{cases} B_{1}(\boldsymbol{x},\boldsymbol{y})+\bar{B}_{1}(\boldsymbol{x})=0, \\ B_{2}(\boldsymbol{x},\boldsymbol{z})+\bar{B}_{2}(\boldsymbol{x})=0 \end{cases}\displaystyle \end{aligned}$$
(2.8)

possesses exactly one root in \({[0,1]^{2}}\), denoted by \(\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z}):=(q_{1}(\boldsymbol{y},\boldsymbol{z}), q_{2}(\boldsymbol{y},\boldsymbol{z}))\). Moreover, \(\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\leq \boldsymbol{q}\), where \(\boldsymbol{q}=(q_{1},q_{2})\) is the minimal nonnegative solution of (2.2) given in Lemma 2.1.

(2) \(q_{k}(\boldsymbol{y},\boldsymbol{z})\in {C^{\infty }}([0,1)^{r_{1}+r_{2}}) \ (k=1,2)\), and \(q_{k}(\boldsymbol{y},\boldsymbol{z})\) can be expanded as a multivariate Taylor series with nonnegative coefficients

$$\begin{aligned} q_{k}(\boldsymbol{y},\boldsymbol{z})=\sum \limits _{(\boldsymbol{m},\boldsymbol{l})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}} \beta ^{(k)}_{\boldsymbol{m},\boldsymbol{l}}\boldsymbol{y}^{\boldsymbol{m}} \boldsymbol{z}^{\boldsymbol{l}}, \quad (\boldsymbol{y},\boldsymbol{z})\in [0,1)^{r_{1}+r_{2}},\ \ k=1,2. \end{aligned}$$

Proof

Note that \(B_{1}(\boldsymbol{1},\boldsymbol{y})+\bar{B}_{1}(\boldsymbol{1})<0\) and \(B_{2}(\boldsymbol{1},\boldsymbol{z})+\bar{B}_{2}(\boldsymbol{1})<0\); by an argument similar to Lemma 2.8 in Li & Wang [12], we can prove that (2.8) possesses exactly one root in \({[0,1]^{2}}\). Note that

$$\begin{aligned} \textstyle\begin{cases} B_{1}(\boldsymbol{x},\boldsymbol{y}) +\bar{B}_{1}(\boldsymbol{x}) \leq B_{1}(\boldsymbol{x}), \\ B_{2}(\boldsymbol{x},\boldsymbol{z}) +\bar{B}_{2}(\boldsymbol{x}) \leq B_{2}(\boldsymbol{x}), \end{cases}\displaystyle \end{aligned}$$

we further know that \(\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\leq \boldsymbol{q}\).

We now prove (2). Integrating (2.6) yields that for \(k=1,2\),

$$\begin{aligned} &\sum \limits _{(\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}} \tilde{p}_{{(\boldsymbol{e}_{k},\boldsymbol{0},\tilde{\boldsymbol{0}}), (\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})}}(t) \boldsymbol{x}^{\boldsymbol{j}}\boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}}-\boldsymbol{x} ^{\boldsymbol{e}_{k}} \\ &\quad =[B_{1}(\boldsymbol{x},\boldsymbol{y})+\bar{B} _{1}(\boldsymbol{x})]\int _{0}^{t} \frac{\partial F_{\boldsymbol{e}_{k},\boldsymbol{0},\boldsymbol{0}} (u,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})}{{\partial {x_{1}}}}du +[B_{2}(\boldsymbol{x},\boldsymbol{z})+\bar{B} _{2}(\boldsymbol{x})]\int _{0}^{t} \frac{\partial F_{\boldsymbol{e}_{k},\boldsymbol{0},\boldsymbol{0}}(u,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})}{{\partial {x_{2}}}}du. \end{aligned}$$

Since all the states \((\boldsymbol{i},\boldsymbol{l},\tilde{\boldsymbol{l}})\) with \(|\boldsymbol{i}|>0\) are transient and all the states \((\boldsymbol{0},\boldsymbol{l},\tilde{\boldsymbol{l}})\) are absorbing, letting \(\boldsymbol{x}=\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\) in the above equality and then letting \(t\rightarrow \infty \) yields

$$\begin{aligned} q_{k}(\boldsymbol{y}, \boldsymbol{z})=\sum \limits _{(\boldsymbol{l},\tilde{\boldsymbol{l}}) \in \mathbf{Z}_{+}^{r_{1}+r_{2}}} \tilde{p}_{{(\boldsymbol{e}_{k},\boldsymbol{0},\tilde{\boldsymbol{0}}), (\boldsymbol{0},\boldsymbol{l},\tilde{\boldsymbol{l}})}}(+\infty ) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}}, \quad k=1,2. \end{aligned}$$

The proof is complete. □
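The representation of \(q_{k}(\boldsymbol{y},\boldsymbol{z})\) just derived can be checked by simulation. The sketch below uses an illustrative law (\(\theta _{1}=\theta _{2}=1\); each type dies with probability 1/4 or gives a \((1,1)\)-birth with probability 3/4) and \(R_{1}=R_{2}=\{(0,0)\}\), so that (2.8) becomes \(0.25y+0.75q_{1}q_{2}-q_{1}=0\), \(0.25z+0.75q_{1}q_{2}-q_{2}=0\), with minimal root \(q_{1}=q_{2}=(1-\sqrt{1-0.75y})/1.5\) when \(y=z\); the function name `mc_estimate` is ours:

```python
import random

# Monte Carlo estimate of E[y^{Y(inf)} z^{Z(inf)}; extinction | X(0) = e_1].
# Only the embedded jump chain matters for counting births until extinction,
# so holding times are not simulated: each event picks a uniformly random
# individual, which dies (w.p. 1/4, counted) or is replaced by one individual
# of each type (w.p. 3/4).  Surviving paths are truncated at a population cap;
# their dropped weight is negligible since y, z < 1.
def mc_estimate(y, z, n_paths=10000, cap=150, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        i1, i2, d1, d2 = 1, 0, 0, 0            # one type-1 individual; no births yet
        while 0 < i1 + i2 <= cap:
            k = 1 if rng.random() * (i1 + i2) < i1 else 2
            if rng.random() < 0.25:            # pure death: count it
                if k == 1: i1, d1 = i1 - 1, d1 + 1
                else:      i2, d2 = i2 - 1, d2 + 1
            else:                              # (1,1)-birth: i -> i - e_k + (1,1)
                if k == 1: i2 += 1
                else:      i1 += 1
        if i1 + i2 == 0:                       # extinction reached
            total += y ** d1 * z ** d2
    return total / n_paths

est = mc_estimate(0.5, 0.5)
```

For \(y=z=0.5\), the estimate should be close to \((1-\sqrt{0.625})/1.5\approx 0.1396\).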

3 Multiple birth property

Having prepared some preliminaries in the previous section, we now consider the multiple birth property of 2-type Markov branching processes.

We first give the following theorem, which will play a key role in discussing the multiple birth property of 2-type Markov branching processes.

Theorem 3.1

Suppose that \(\boldsymbol{x}\in [0,1]^{2}\), \(\boldsymbol{y}\in [0,1)^{r_{1}}\) and \(\boldsymbol{z}\in [0,1)^{r_{2}}\).

(1) The differential equation

$$ \textstyle\begin{cases} \frac{\partial u_{1}}{\partial t}=B_{1}(\boldsymbol{u},\boldsymbol{y}) +\bar{B}_{1}( \boldsymbol{u}), \\ \frac{\partial u_{2}}{\partial t}=B_{2}(\boldsymbol{u},\boldsymbol{z}) +\bar{B}_{2}( \boldsymbol{u}), \\ \boldsymbol{u}(0)=\boldsymbol{x} \end{cases} $$
(3.1)

has a unique solution \(\boldsymbol{u}(t)=\boldsymbol{G}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\), where

$$ \boldsymbol{u}(t)=(u_{1}(t),u_{2}(t)), \quad \boldsymbol{G}(t,\boldsymbol{x}, \boldsymbol{y},\boldsymbol{z}) =(g_{1}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}), g_{2}(t, \boldsymbol{x},\boldsymbol{y},\boldsymbol{z})). $$

(2) \(\lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t,\boldsymbol{x}, \boldsymbol{y},\boldsymbol{z})=\boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z})\), where \(\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\) is given in Theorem 2.1.
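Before the proof, the theorem can be illustrated numerically. Under the illustrative assumptions \(\theta _{1}=\theta _{2}=1\), \(p^{(k)}_{(0,0)}=1/4\), \(p^{(k)}_{(1,1)}=3/4\) and \(R_{1}=R_{2}=\{(0,0)\}\), system (3.1) reads \(u_{1}'=0.25y+0.75u_{1}u_{2}-u_{1}\), \(u_{2}'=0.25z+0.75u_{1}u_{2}-u_{2}\) (writing y, z for the single coordinates \(y_{(0,0)}\), \(z_{(0,0)}\)), and for \(y=z\) the root of (2.8) is \(q_{1}=q_{2}=(1-\sqrt{1-0.75y})/1.5\). The explicit Euler sketch below (the function name `G` mirrors the notation of the theorem) checks that \(\boldsymbol{G}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\to \boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\):

```python
import math

# Explicit Euler integration of (3.1) under the illustrative law above;
# G(t, x, y, z) approximates the unique solution of the ODE system.
def G(t, x, y, z, dt=1e-3):
    u1, u2 = x
    for _ in range(int(t / dt)):
        du1 = 0.25 * y + 0.75 * u1 * u2 - u1   # B_1(u, y) + B_bar_1(u)
        du2 = 0.25 * z + 0.75 * u1 * u2 - u2   # B_2(u, z) + B_bar_2(u)
        u1, u2 = u1 + dt * du1, u2 + dt * du2
    return u1, u2

y = z = 0.5
u = G(50.0, (0.0, 0.0), y, z)                  # long horizon: u approaches q(y, z)
q_exact = (1.0 - math.sqrt(1.0 - 0.75 * y)) / 1.5
```

Starting from \(\boldsymbol{x}=\boldsymbol{0}\), the trajectory increases monotonically towards \(\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\approx (0.1396,0.1396)\) for \(y=z=0.5\), in line with part (2) and with case (a) of the proof below.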

Proof

We first prove (1). For fixed \((\boldsymbol{y},\boldsymbol{z})\in [0,1)^{r_{1}+r_{2}}\), denote

$$ \textstyle\begin{cases} H_{1}(\boldsymbol{u})=B_{1}(\boldsymbol{u},\boldsymbol{y}) + \bar{B}_{1}(\boldsymbol{u})-b_{\boldsymbol{e}_{1}}^{(1)}u_{1}, \\ H_{2}(\boldsymbol{u})=B_{2}(\boldsymbol{u},\boldsymbol{z})+ \bar{B}_{2}(\boldsymbol{u})-b_{\boldsymbol{e}_{2}}^{(2)}u_{2}. \end{cases} $$

By assumption (A-2), \(H_{k}(\boldsymbol{u})\) satisfies a Lipschitz condition, i.e., there exists a constant L such that for any \(\boldsymbol{u}=(u_{1},u_{2})\), \(\tilde{\boldsymbol{u}}=(\tilde{u}_{1}, \tilde{u}_{2})\in [0,1]^{2}\),

$$\begin{aligned} |H_{k}(\boldsymbol{u})-H_{k}(\tilde{\boldsymbol{u}})|\leq L\| \boldsymbol{u}-\tilde{\boldsymbol{u}}\|_{1},\quad k=1,2. \end{aligned}$$

For \(\boldsymbol{x}\in [0,1]^{2}\), define \(u_{k}^{(0)}(t)=x_{k}e^{b_{\boldsymbol{e}_{k}}^{(k)}t}\ (k=1,2)\) and

$$ u_{k}^{(n)}(t)=e^{b_{\boldsymbol{e}_{k}}^{(k)}t}[x_{k}+\int _{0}^{t} e^{-b_{ \boldsymbol{e}_{k}}^{(k)}s}H_{k}(\boldsymbol{u}^{(n-1)}(s))ds], \quad n\geq 1,\ \ k=1,2. $$

We can prove that

$$ 0\leq u^{(n)}_{k}(t)\leq 1,\quad t\geq 0, n\geq 1,\ k=1,2 $$
(3.2)

and

$$\begin{aligned} \|\boldsymbol{u}^{(n+1)}(t)-\boldsymbol{u}^{(n)}(t)\|_{1}\leq \frac{M(2L)^{n}}{(n+1)!}t^{n+1},\quad t\geq 0,\ n\geq 1, \end{aligned}$$
(3.3)

where \(M:=|b^{(1)}_{\boldsymbol{e}_{1}}|+|b^{(2)}_{\boldsymbol{e}_{2}}|\). Indeed, it is obvious that \(0\leq u^{(0)}_{k}(t)=x_{k}e^{b_{\boldsymbol{e}_{k}}^{(k)}t}\leq 1\ (k=1,2)\). Assume that

$$ 0\leq u^{(n)}_{k}(t)\leq 1,\ \quad t\geq 0,\ k=1,2. $$

Then it is obvious that \(u^{(n+1)}_{k}(t)\geq 0\), since \(H_{k}(\boldsymbol{u})\geq 0\) for all \(\boldsymbol{u}\in [0,1]^{2}\). On the other hand, for \(k=1,2\),

$$\begin{aligned} u_{k}^{(n+1)}(t) =& e^{b_{\boldsymbol{e}_{k}}^{(k)}t}[x_{k}+\int _{0}^{t} e^{-b_{\boldsymbol{e}_{k}}^{(k)}s}H_{k}(\boldsymbol{u}^{(n)}(s))ds] \\ \leq &e^{b_{\boldsymbol{e}_{k}}^{(k)}t}[x_{k}+\int _{0}^{t} e^{-b_{ \boldsymbol{e}_{k}}^{(k)}s}H_{k}(\boldsymbol{1})ds] \\ \leq &e^{b_{\boldsymbol{e}_{k}}^{(k)}t}[x_{k}-b_{\boldsymbol{e}_{k}}^{(k)} \int _{0}^{t} e^{-b_{\boldsymbol{e}_{k}}^{(k)}s}ds] \\ =&e^{b_{\boldsymbol{e}_{k}}^{(k)}t}[x_{k}+ e^{-b_{\boldsymbol{e}_{k}}^{(k)}t}-1] \\ \leq &1. \end{aligned}$$

(3.2) is proved. As for (3.3), by the definition of \(\boldsymbol{u}^{(n)}(t)\),

$$\begin{aligned} |u^{(n+1)}_{k}(t)-u^{(n)}_{k}(t)| \leq & e^{b_{\boldsymbol{e}_{k}}^{(k)}t} \int _{0}^{t} e^{-b_{\boldsymbol{e}_{k}}^{(k)}s}|H_{k}( \boldsymbol{u}^{(n)}(s)) -H_{k}(\boldsymbol{u}^{(n-1)}(s))|\ ds \\ \leq & L\int _{0}^{t}\|\boldsymbol{u}^{(n)}(s)- \boldsymbol{u}^{(n-1)}(s)\|_{1}\ ds,\quad n\geq 1,\ k=1,2. \end{aligned}$$

Hence,

$$\begin{aligned} \|\boldsymbol{u}^{(n+1)}(t)-\boldsymbol{u}^{(n)}(t)\|_{1} \leq & 2L \int _{0}^{t}\|\boldsymbol{u}^{(n)}(s)-\boldsymbol{u}^{(n-1)}(s) \|_{1}\ ds,\quad n\geq 1. \end{aligned}$$
(3.4)

Note that

$$\begin{aligned} |u^{(1)}_{k}(t)-u^{(0)}_{k}(t)|=e^{b_{\boldsymbol{e}_{k}}^{(k)}t} \int _{0}^{t} e^{-b_{\boldsymbol{e}_{k}}^{(k)}s}H_{k}( \boldsymbol{u}^{(0)}(s))ds \leq |b^{(k)}_{\boldsymbol{e}_{k}}|t, \quad k=1,2, \end{aligned}$$

we know that

$$\begin{aligned} \|\boldsymbol{u}^{(1)}(t)-\boldsymbol{u}^{(0)}(t)\|_{1}\leq Mt. \end{aligned}$$
(3.5)

It follows from (3.4), (3.5) and mathematical induction that (3.3) holds.

Since

$$ u^{(n)}_{k}(t)=u^{(0)}_{k}(t)+\sum \limits _{j=1}^{n}(u^{(j)}_{k}(t)-u^{(j-1)}_{k}(t)), \quad k=1,2, $$

by (3.3), we know that \(u^{(n)}_{k}(t)\ (k=1,2)\) converges uniformly on any finite interval \([0,T]\). Therefore, \(u_{k}(t):=\lim \limits _{n\rightarrow \infty}u^{(n)}_{k}(t)\) exists, and it can easily be checked that \(\boldsymbol{u}(t)=(u_{1}(t),u_{2}(t))\) is a solution of (3.1). On the other hand, since \(B_{1}(\boldsymbol{u},\boldsymbol{y})\), \(\bar{B}_{1}( \boldsymbol{u})\), \(B_{2}(\boldsymbol{u},\boldsymbol{z})\) and \(\bar{B}_{2}(\boldsymbol{u})\) satisfy a Lipschitz condition, the theory of ordinary differential equations implies that (3.1) has a unique solution, which we denote by \(\boldsymbol{G}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\).

We now prove (2). For fixed \((\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\in [0,1]^{2} \times [0,1)^{r_{1}+r_{2}}\), denote

$$\begin{aligned} &f_{1}(\boldsymbol{u}):=B_{1}(\boldsymbol{u},\boldsymbol{y})+ \bar{B}_{1} (\boldsymbol{u}), \\ &f_{2}(\boldsymbol{u}):=B_{2}(\boldsymbol{u},\boldsymbol{z})+ \bar{B}_{2} (\boldsymbol{u}), \\ & \boldsymbol{G}(t)=(g_{1}(t),g_{2}(t)):=\boldsymbol{G}(t, \boldsymbol{x}, \boldsymbol{y},\boldsymbol{z}) \end{aligned}$$

for a moment.

(a) Suppose that \(f_{1}(\boldsymbol{x})\geq 0\), \(f_{2}(\boldsymbol{x})\geq 0\). We prove that

$$\begin{aligned} \omega :=\inf \limits _{t\geq 0}\{\min (f_{1}(\boldsymbol{G}(t)),f_{2}( \boldsymbol{G}(t)))\}\geq 0. \end{aligned}$$

Indeed, suppose that \(\omega <0\). Then by the continuity of \(f_{1}\), \(f_{2}\) and \(\boldsymbol{G}(t)\), there exist \(\tilde{t}<+\infty \) and \(\delta >0\) such that

$$\begin{aligned} \min (f_{1}(\boldsymbol{G}(\tilde{t})),f_{2}(\boldsymbol{G} ( \tilde{t})))=0,\quad \min (f_{1}(\boldsymbol{G}(\tilde{t}+s)),f_{2}( \boldsymbol{G} (\tilde{t}+s)))< 0,\ \ \forall s\in (0,\delta ). \end{aligned}$$
(3.6)

We can assume \(f_{1}(\boldsymbol{G}(\tilde{t}))=0\) without loss of generality. If \(f_{2}(\boldsymbol{G}(\tilde{t}))>0\), then there exists \(\tilde{\delta}\in (0,\delta )\) such that

$$\begin{aligned} f_{1}(\boldsymbol{G}(\tilde{t}+s))< 0, \quad f_{2}(\boldsymbol{G}( \tilde{t}+s))>0, \quad s\in (0,\tilde{\delta}), \end{aligned}$$

which, by (3.1), implies that

$$\begin{aligned} g_{1}(\tilde{t}+s)< g_{1}(\tilde{t}), \quad g_{2}(\tilde{t}+s)>g_{2}(\tilde{t}),\quad s\in (0,\tilde{\delta}). \end{aligned}$$

Therefore,

$$\begin{aligned} f_{1}(g_{1}(\tilde{t}+s),g_{2}( \tilde{t}))\leq f_{1}(\boldsymbol{G}(\tilde{t}+s))< 0, \quad s\in (0, \tilde{\delta}). \end{aligned}$$
(3.7)

However, it is well known that \(u=g_{1}(\tilde{t})\) is the unique root of \(f_{1}(u,g_{2}(\tilde{t}))=0\) in \([0,1]\) with \(f_{1}(u,g_{2}(\tilde{t}))>0\) for \(u\in [0,g_{1}(\tilde{t}))\), which contradicts (3.7). Therefore,

$$\begin{aligned} f_{1}(\boldsymbol{G}(\tilde{t}))=0,\quad f_{2}(\boldsymbol{G}( \tilde{t}))=0. \end{aligned}$$

By Theorem 2.1, \(\boldsymbol{G}(\tilde{t})=\boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z})\). Hence, by (1), we know that \(\boldsymbol{G}(t)=\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z})\) for \(t\geq \tilde{t}\). Thus,

$$\begin{aligned} f_{1}(\boldsymbol{G}(\tilde{t}+s))=f_{2}(\boldsymbol{G}(\tilde{t}+s))=0, \quad s\geq 0, \end{aligned}$$

which contradicts (3.6). Therefore, we have \(\omega \geq 0\). Hence, \(\boldsymbol{G}(t)\) is increasing in \(t\geq 0\). By (3.1),

$$\begin{aligned} g_{k}(t)=e^{b_{\boldsymbol{e}_{k}}^{(k)}t}[x_{k}+\int _{0}^{t} e^{-b_{ \boldsymbol{e}_{k}}^{(k)}s}H_{k}(\boldsymbol{G}(s))ds],\quad k=1,2. \end{aligned}$$
(3.8)
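For clarity, (3.8) is the integrating-factor (variation-of-constants) form of (3.1): writing the right-hand side of (3.1) as the linear term \(b^{(k)}_{\boldsymbol{e}_{k}}u_{k}\) plus the remainder \(H_{k}(\boldsymbol{u})\), as implicit in (3.8), one has

$$\begin{aligned} \frac{d}{dt}\bigl(e^{-b^{(k)}_{\boldsymbol{e}_{k}}t}u_{k}(t)\bigr) =e^{-b^{(k)}_{\boldsymbol{e}_{k}}t}\Bigl(\frac{du_{k}}{dt}-b^{(k)}_{\boldsymbol{e}_{k}}u_{k}(t)\Bigr) =e^{-b^{(k)}_{\boldsymbol{e}_{k}}t}H_{k}(\boldsymbol{u}(t)), \end{aligned}$$

and integrating from 0 to t with \(u_{k}(0)=x_{k}\) gives (3.8).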

Since \(\boldsymbol{G}(t)\) is increasing and bounded, \(\lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t)\) exists. Letting \(t\rightarrow \infty \) in (3.8) yields

$$\begin{aligned} \textstyle\begin{cases} B_{1}(\lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t), \boldsymbol{y}) +\bar{B}_{1}(\lim \limits _{t\rightarrow \infty} \boldsymbol{G}(t))=0, \\ B_{2}(\lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t), \boldsymbol{z}) +\bar{B}_{2}(\lim \limits _{t\rightarrow \infty} \boldsymbol{G}(t))=0. \end{cases}\displaystyle \end{aligned}$$

Therefore,

$$\begin{aligned} \lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t)= \boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z}). \end{aligned}$$

(b) Suppose that \(f_{1}(\boldsymbol{x})\leq 0\), \(f_{2}(\boldsymbol{x})\leq 0\). We can prove that

$$\begin{aligned} \omega :=\sup \limits _{t\geq 0}\{\max (f_{1}(\boldsymbol{G}(t)),f_{2}( \boldsymbol{G}(t)))\}\leq 0. \end{aligned}$$

By a similar argument as in (a), it can be proved that \(\boldsymbol{G}(t)\) is decreasing in \(t\geq 0\) and

$$\begin{aligned} \lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t)= \boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z}). \end{aligned}$$

(c) Suppose that \(f_{1}(\boldsymbol{x})\geq 0\), \(f_{2}(\boldsymbol{x})< 0\). Let

$$\begin{aligned} \sigma =\inf \{t\geq 0: f_{1}(\boldsymbol{G}(t))\leq 0\ {\mathrm{{or}}}\ f_{2}( \boldsymbol{G}(t))\geq 0\}. \end{aligned}$$

If \(\sigma <+\infty \), then \(g_{1}(t)\) is increasing and \(g_{2}(t)\) is decreasing on \([0,\sigma )\). It is easily checked that \(\boldsymbol{G}(\sigma +t)\) is the solution of (3.1) with initial condition \(\boldsymbol{G}(\sigma )\). Furthermore, either \(f_{1}(\boldsymbol{G}(\sigma ))\geq 0\) and \(f_{2}(\boldsymbol{G}( \sigma ))=0\), or \(f_{1}(\boldsymbol{G}(\sigma ))=0\) and \(f_{2}(\boldsymbol{G}(\sigma ))<0\). In the case that \(f_{1}(\boldsymbol{G}(\sigma ))\geq 0\), \(f_{2}(\boldsymbol{G}( \sigma ))=0\), by (a), we know that \(g_{1}(t)\) and \(g_{2}(t)\) are both increasing in \(t\in [\sigma ,+\infty )\) and

$$\begin{aligned} \lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t)= \boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z}). \end{aligned}$$

In the case that \(f_{1}(\boldsymbol{G}(\sigma ))=0\), \(f_{2}(\boldsymbol{G}(\sigma ))<0\), by (b), we know that \(g_{1}(t)\) and \(g_{2}(t)\) are both decreasing in \(t\in [\sigma ,+\infty )\) and

$$\begin{aligned} \lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t)= \boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z}). \end{aligned}$$

If \(\sigma =+\infty \), then \(g_{1}(t)\) is increasing and \(g_{2}(t)\) is decreasing in \(t\geq 0\). By (3.8), we still have

$$\begin{aligned} \lim \limits _{t\rightarrow \infty}\boldsymbol{G}(t)= \boldsymbol{q}(\boldsymbol{y}, \boldsymbol{z}). \end{aligned}$$

(d) Suppose that \(f_{1}(\boldsymbol{x})<0\), \(f_{2}(\boldsymbol{x})\geq 0\). Let

$$\begin{aligned} \sigma =\inf \{t\geq 0: f_{1}(\boldsymbol{G}(t))\geq 0\ {\mathrm{{or}}}\ f_{2}( \boldsymbol{G}(t))\leq 0\}. \end{aligned}$$

A similar argument as in (c) yields the conclusion. The proof is complete. □

The following theorem gives the joint probability generating function of \((\boldsymbol{Y}(t),\boldsymbol{Z}(t))\).

Theorem 3.2

Suppose that \(\{\boldsymbol{X}(t):t \geq 0\} \) is a 2-type Markov branching process with \(\boldsymbol{X}(0) = \boldsymbol{e}_{k}\ (k=1\ {\mathrm{{or}}}\ 2)\), and let \(\boldsymbol{G}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) = (g_{1}(t,\boldsymbol{x}, \boldsymbol{y},\boldsymbol{z}),g_{2}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}))\) be the unique solution of (3.1). Then the joint probability generating function of \((\boldsymbol{Y}(t),\boldsymbol{Z}(t))\) is given by

$$ E[\boldsymbol{y}^{\boldsymbol{Y}(t)}\boldsymbol{z}^{\boldsymbol{Z}(t)}\mid \boldsymbol{X}(0)= \boldsymbol{e}_{k}]=g_{k}(t,\boldsymbol{1},\boldsymbol{y},\boldsymbol{z}), \quad ( \boldsymbol{y},\boldsymbol{z})\in [0,1)^{r_{1}+r_{2}},\ \ k=1,2. $$
(3.9)

In particular, the marginal probability generating functions of \(\boldsymbol{Y}(t)\) and \(\boldsymbol{Z}(t)\) are given by

$$ E[\boldsymbol{y}^{\boldsymbol{Y}(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]=g_{k}(t, \boldsymbol{1},\boldsymbol{y},\boldsymbol{1}), \quad \boldsymbol{y}\in [0,1)^{r_{1}},\ \ k=1,2. $$
(3.10)

and

$$ E[\boldsymbol{z}^{\boldsymbol{Z}(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]=g_{k}(t, \boldsymbol{1},\boldsymbol{1},\boldsymbol{z}), \quad \boldsymbol{z}\in [0,1)^{r_{2}},\ \ k=1,2, $$
(3.11)

respectively.

Proof

Let \(\tilde{P}(t)=(\tilde{p}_{{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t): (\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}})\) be the transition probability of \((\boldsymbol{X}(t),\boldsymbol{Y}(t),\boldsymbol{Z}(t))\). We need to prove that for any fixed \((\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) \in [0,1]^{2+r_{1}+r_{2}}\),

$$ g_{k}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})= F_{k}(t, \boldsymbol{x},\boldsymbol{y},\boldsymbol{z}),\quad k=1,2, $$
(3.12)

where \(F_{k}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\ (k=1,2)\) are given in Lemma 2.3. It is sufficient to prove that for any \((\boldsymbol{y},\boldsymbol{z})\in [0,1)^{r_{1}+r_{2}}\),

$$ u_{k}(t,\boldsymbol{x}):=F_{k}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}),\quad k=1,2, $$

is a solution of (3.1). Indeed, suppose \(k=1\) without loss of generality. By the Kolmogorov backward equation, for any \(t \ge 0\), we have

$$ \tilde{p}'_{{(\boldsymbol{e}_{1},\boldsymbol{0}, \tilde{\boldsymbol{0}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t) =\sum \limits _{(\boldsymbol{i}, \boldsymbol{k},\tilde{\boldsymbol{k}}) \in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}}q_{{( \boldsymbol{e}_{1},\boldsymbol{0}, \tilde{\boldsymbol{0}}), ( \boldsymbol{i},\boldsymbol{k},\tilde{\boldsymbol{k}})}} \tilde{p}_{{(\boldsymbol{i},\boldsymbol{k}, \tilde{\boldsymbol{k}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t). $$

Multiplying both sides of the above equality by \(\boldsymbol{x}^{\boldsymbol{j}}\boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}}\) and summing over \((\boldsymbol{j},\boldsymbol{l},\tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}\), we get

$$\begin{aligned} &\sum \limits _{(\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}}) \in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}} \tilde{p}'_{{(\boldsymbol{e}_{1},\boldsymbol{0}, \tilde{\boldsymbol{0}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t) \boldsymbol{x}^{\boldsymbol{j}} \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{ \tilde{\boldsymbol{l}}} = \sum \limits _{\boldsymbol{i}\in R_{1}}b^{(1)}_{ \boldsymbol{i}}F_{{\boldsymbol{i},\varepsilon _{ \boldsymbol{i}}, \tilde{\boldsymbol{0}}}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})+\sum \limits _{\boldsymbol{i}\in R_{1}^{c}}b^{(1)}_{ \boldsymbol{i}}F_{{\boldsymbol{i}, \boldsymbol{0}, \tilde{\boldsymbol{0}}}}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}) \end{aligned}$$

By (2.7),

$$\begin{aligned} \frac{\partial F_{1}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})}{\partial t} =B_{1}(\boldsymbol{F}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}), \boldsymbol{y})+\bar{B}_{1}(\boldsymbol{F}(t, \boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})). \end{aligned}$$

By a similar argument, we have

$$\begin{aligned} \frac{\partial F_{2}(t,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})}{\partial t} =B_{2}(\boldsymbol{F}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z}), \boldsymbol{z})+\bar{B}_{2}(\boldsymbol{F}(t, \boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})). \end{aligned}$$

Noting that \(F_{k}(0,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})=x_{k}\ (k=1,2)\), we conclude that \(u_{k}(t,\boldsymbol{x})=F_{k}(t,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})\ (k=1,2)\) is a solution of (3.1).

Therefore, (3.12) and hence (3.9) hold. Finally, (3.10) and (3.11) follow directly from (3.9). The proof is complete. □

The following proposition presents the probability generating function of \((\boldsymbol{Y}(t),\boldsymbol{Z}(t))\) when the process starts at \(\boldsymbol{X}(0)=\boldsymbol{i}\).

Proposition 3.1

Suppose that \(\{\boldsymbol{X}(t):t \geq 0\} \) is a 2-type Markov branching process with \(\boldsymbol{X}(0)=\boldsymbol{i}\). Then,

$$ E[\boldsymbol{y}^{\boldsymbol{Y}(t)}\boldsymbol{z}^{\boldsymbol{Z}(t)}\mid \boldsymbol{X}(0)= \boldsymbol{i}]=[\boldsymbol{G}(t,\boldsymbol{1},\boldsymbol{y},\boldsymbol{z})] ^{\boldsymbol{i}}, \quad (\boldsymbol{y},\boldsymbol{z})\in [0,1)^{r_{1}+r_{2}}. $$
(3.13)

In particular,

$$ E[\boldsymbol{y}^{\boldsymbol{Y}(t)}\mid \boldsymbol{X}(0)=\boldsymbol{i}]=[\boldsymbol{G}(t, \boldsymbol{1},\boldsymbol{y},\boldsymbol{1})] ^{\boldsymbol{i}}, \quad \boldsymbol{y}\in [0,1)^{r_{1}}. $$
(3.14)

and

$$ E[\boldsymbol{z}^{\boldsymbol{Z}(t)}\mid \boldsymbol{X}(0)=\boldsymbol{i}]=[\boldsymbol{G}(t, \boldsymbol{1},\boldsymbol{1},\boldsymbol{z})] ^{\boldsymbol{i}}, \quad \boldsymbol{z}\in [0,1)^{r_{2}}. $$
(3.15)

Proof

Since \(E[\boldsymbol{y}^{\boldsymbol{Y}(t)}\boldsymbol{z}^{ \boldsymbol{Z}(t)}\mid \boldsymbol{X}(0)=\boldsymbol{i}]=F_{ \boldsymbol{i}, \boldsymbol{0}, \tilde{\boldsymbol{0}}}(t, \boldsymbol{1},\boldsymbol{y}, \boldsymbol{z})\), by (2.7) and Theorem 3.2, we immediately obtain (3.13). Then (3.14) and (3.15) follow directly from (3.13). The proof is complete. □

As direct consequences of Theorem 3.2, the following corollaries give the probability generating functions of the pure death numbers of type-k individuals and the twin-birth numbers of type-k individuals.

Corollary 3.1

Suppose that \(\{\boldsymbol{X}(t):t \ge 0\}\) is a 2-type Markov branching process with \(\boldsymbol{X}(0)=\boldsymbol{e}_{k}\ (k=1,2)\), and let \(Y(t)\) and \(Z(t)\) be the pure death numbers of type-1 and type-2 individuals, respectively. Then,

$$ E[y^{Y(t)}z^{Z(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= g_{k}(t,y,z), \quad y,z\in [0,1),\ k=1,2. $$
(3.16)

In particular,

$$ E[y^{Y(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= g_{k}(t,y,1),\quad y\in [0,1), \ k=1,2 $$
(3.17)

and

$$ E[z^{Z(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= g_{k}(t,1,z),\quad z\in [0,1), \ k=1,2, $$
(3.18)

where \((g_{1}(t,y,z),g_{2}(t,y,z))\) is the unique solution of the equation

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u_{1}}{\partial t}=B_{1}(u_{1},u_{2})- b_{\boldsymbol{0}}^{(1)}(1-y), \\ \frac{\partial u_{2}}{\partial t}=B_{2}(u_{1},u_{2})- b_{\boldsymbol{0}}^{(2)}(1-z), \\ u_{1}(0)=u_{2}(0)=1. \end{cases}\displaystyle \end{aligned}$$

Proof

Take \(R_{1}=R_{2}=\{\boldsymbol{0}\}\subset \mathbf{Z}_{+}^{2}\). Then, we have

$$\begin{aligned} &B_{1}(\boldsymbol{u},y)+\bar{B}_{1}(\boldsymbol{u})=B_{1}( \boldsymbol{u}) -b^{(1)}_{\boldsymbol{0}}(1-y), \\ &B_{2}(\boldsymbol{u},z)+\bar{B}_{2}(\boldsymbol{u})=B_{2}( \boldsymbol{u}) -b^{(2)}_{\boldsymbol{0}}(1-z). \end{aligned}$$

By Theorem 3.2, we immediately obtain (3.16). Then (3.17) and (3.18) follow directly from (3.16). The proof is complete. □

Corollary 3.2

Suppose that \(\{\boldsymbol{X}(t):t \ge 0\}\) is a 2-type Markov branching process with \(\boldsymbol{X}(0)=\boldsymbol{e}_{k}\ (k=1,2)\), and let \(Y(t)\) be the \(2\boldsymbol{e}_{1}\)-birth number of type-1 individuals and \(Z(t)\) the \(2\boldsymbol{e}_{2}\)-birth number of type-2 individuals. Then,

$$ E[y^{Y(t)}z^{Z(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= g_{k}(t,y,z), \quad y,z\in [0,1),\ k=1,2. $$

In particular,

$$ E[y^{Y(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= g_{k}(t,y,1),\quad y\in [0,1), \ k=1,2 $$

and

$$ E[z^{Z(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= g_{k}(t,1,z),\quad z\in [0,1), \ k=1,2, $$

where \((g_{1}(t,y,z),g_{2}(t,y,z))\) is the unique solution of the equation

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u_{1}}{\partial t}=B_{1}(u_{1},u_{2})- b_{2\boldsymbol{e}_{1}}^{(1)}(1-y)u_{1}^{2}, \\ \frac{\partial u_{2}}{\partial t}=B_{2}(u_{1},u_{2})- b_{2\boldsymbol{e}_{2}}^{(2)}(1-z)u_{2}^{2}, \\ u_{1}(0)=u_{2}(0)=1. \end{cases}\displaystyle \end{aligned}$$

Proof

Take \(R_{1}=\{2\boldsymbol{e}_{1}\}\subset \mathbf{Z}_{+}^{2}\) and \(R_{2}=\{2\boldsymbol{e}_{2}\}\subset \mathbf{Z}_{+}^{2}\). Then we have

$$\begin{aligned} &B_{1}(\boldsymbol{u},y)+\bar{B}_{1}(\boldsymbol{u})=B_{1}( \boldsymbol{u}) -b^{(1)}_{2\boldsymbol{e}_{1}}(1-y)u_{1}^{2}, \\ &B_{2}(\boldsymbol{u},z)+\bar{B}_{2}(\boldsymbol{u})=B_{2}( \boldsymbol{u}) -b^{(2)}_{2\boldsymbol{e}_{2}}(1-z)u_{2}^{2}. \end{aligned}$$

By Theorem 3.2, we immediately obtain all the conclusions. The proof is complete. □

Since \(\boldsymbol{0}\) is an absorbing state of \(\{\boldsymbol{X}(t):t \geq 0\}\), we now consider the multiple birth property until the extinction of the system. Let

$$ \tau =\inf \{t\geq 0: \boldsymbol{X}(t)=\boldsymbol{0}\} $$

be the extinction time of \(\{\boldsymbol{X}(t):t \geq 0\}\).

The following theorem gives the joint probability generating function of the multiple birth numbers of individuals until the extinction of the system.

Theorem 3.3

Suppose that \(\{\boldsymbol{X}(t):t \geq 0\}\) is a 2-type Markov branching process with \(\boldsymbol{X}(0)=\boldsymbol{e}_{k}\ (k=1,2)\).

(i) If \(\rho (\boldsymbol{1})\leq 0\), then the probability generating function of \((\boldsymbol{Y}(\tau ),\boldsymbol{Z}(\tau ))\) is given by

$$ E[\boldsymbol{y}^{\boldsymbol{Y}(\tau )}\boldsymbol{z}^{\boldsymbol{Z}(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]=q_{k}(\boldsymbol{y},\boldsymbol{z}),\quad ( \boldsymbol{y},\boldsymbol{z})\in [0,1)^{r_{1}+r_{2}},\ k=1,2, $$

where \((q_{1}(\boldsymbol{y},\boldsymbol{z}),q_{2}(\boldsymbol{y},\boldsymbol{z}))\) is the unique solution of

$$\begin{aligned} \textstyle\begin{cases} B_{1}(\boldsymbol{u},\boldsymbol{y})+\bar{B}_{1}(\boldsymbol{u})=0, \\ B_{2}(\boldsymbol{u},\boldsymbol{z})+\bar{B}_{2}(\boldsymbol{u})=0. \end{cases}\displaystyle \end{aligned}$$

(ii) If \(\rho (\boldsymbol{1})>0\), then the probability generating function of \((\boldsymbol{Y}(\tau ),\boldsymbol{Z}(\tau ))\) conditioned on \(\tau <\infty \) is given by

$$ E[\boldsymbol{y}^{\boldsymbol{Y}(\tau )}\boldsymbol{z}^{\boldsymbol{Z}(\tau )}\mid \tau < \infty ,\boldsymbol{X}(0)=\boldsymbol{e}_{k}]= \frac{q_{k}(\boldsymbol{y},\boldsymbol{z})}{q_{k}},\quad (\boldsymbol{y}, \boldsymbol{z})\in [0,1)^{r_{1}+r_{2}},\ k=1,2, $$

where \((q_{1},q_{2})\) is the minimal non-negative solution of

$$\begin{aligned} \textstyle\begin{cases} B_{1}(\boldsymbol{u})=0, \\ B_{2}(\boldsymbol{u})=0. \end{cases}\displaystyle \end{aligned}$$

Proof

We first prove (i). It follows from Lemma 2.3(i) that for \(k=1,2\) and any \((\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}) \in [0,1]^{2} \times [0,1)^{r_{1}+r_{2}}\),

$$\begin{aligned} &\sum \limits _{(\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{2+r_{1}+r_{2}}} \tilde{p}_{{(\boldsymbol{e}_{k},\boldsymbol{0}, \tilde{\boldsymbol{0}}), (\boldsymbol{j},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t) \boldsymbol{x}^{\boldsymbol{j}} \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{ \tilde{\boldsymbol{l}}}-x_{k} \\ =&[B_{1}(\boldsymbol{x},\boldsymbol{y})+\bar{B} _{1}( \boldsymbol{x})]\int _{0}^{t} \frac{\partial F_{\boldsymbol{e}_{k},\boldsymbol{0},\tilde{\boldsymbol{0}}} (s,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})}{{\partial {x_{1}}}}ds+[B_{2}(\boldsymbol{x}, \boldsymbol{z})+\bar{B} _{2}(\boldsymbol{x})]\int _{0}^{t} \frac{\partial F_{\boldsymbol{e}_{k},\boldsymbol{0},\tilde{\boldsymbol{0}}}(s,\boldsymbol{x},\boldsymbol{y}, \boldsymbol{z})}{{\partial x_{2}}}ds. \end{aligned}$$

Letting \(\boldsymbol{x}=\boldsymbol{q}(\boldsymbol{y},\boldsymbol{z}) =(q_{1}( \boldsymbol{y},\boldsymbol{z}),q_{2}(\boldsymbol{y}, \boldsymbol{z}))\) in the above equality and then letting \(t\rightarrow \infty \) yield that

$$\begin{aligned} \sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}\tilde{p}_{{(\boldsymbol{e}_{k}, \boldsymbol{0}, \tilde{\boldsymbol{0}}), (\boldsymbol{0}, \boldsymbol{l},\tilde{\boldsymbol{l}})}}(\infty ) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{ \tilde{\boldsymbol{l}}}-q_{k}(\boldsymbol{y}, \boldsymbol{z})=0. \end{aligned}$$

If \(\rho (\boldsymbol{1})\leq 0\), then \(q_{k}=P(\tau <\infty \mid \boldsymbol{X}(0)=\boldsymbol{e}_{k})=1\). Therefore, noting that \((\boldsymbol{0},\boldsymbol{l},\tilde{\boldsymbol{l}})\) is an absorbing state, we have

$$\begin{aligned} &E[\boldsymbol{y}^{\boldsymbol{Y}(\tau )}\boldsymbol{z} ^{ \boldsymbol{Z} (\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}] \\ =&\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}P((\boldsymbol{Y}(\tau ), \boldsymbol{Z}(\tau )) =(\boldsymbol{l},\tilde{\boldsymbol{l}}) \mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}) \boldsymbol{y}^{ \boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}} \\ =&\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}\lim \limits _{t\rightarrow \infty}P(( \boldsymbol{Y}(\tau ),\boldsymbol{Z}(\tau )) =(\boldsymbol{l}, \tilde{\boldsymbol{l}}), \tau < t\mid \boldsymbol{X}(0)= \boldsymbol{e}_{k}) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}} \\ =&\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}\lim \limits _{t\rightarrow \infty}P(( \boldsymbol{Y}(t),\boldsymbol{Z}(t)) =(\boldsymbol{l}, \tilde{\boldsymbol{l}}), \tau < t\mid \boldsymbol{X}(0)= \boldsymbol{e}_{k}) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}} \\ =&\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}\lim \limits _{t\rightarrow \infty} \tilde{p}_{{(\boldsymbol{e}_{k},\boldsymbol{0}, \tilde{\boldsymbol{0}}), (\boldsymbol{0},\boldsymbol{l}, \tilde{\boldsymbol{l}})}}(t) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}} \\ =&\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}\tilde{p}_{{(\boldsymbol{e}_{k}, \boldsymbol{0}, \tilde{\boldsymbol{0}}), (\boldsymbol{0}, \boldsymbol{l},\tilde{\boldsymbol{l}})}}(\infty ) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{ \tilde{\boldsymbol{l}}} \\ =& q_{k}(\boldsymbol{y}, \boldsymbol{z}). \end{aligned}$$

(i) is proved.

Next we prove (ii). If \(\rho (\boldsymbol{1})> 0\), then \(q_{k}=P(\tau <\infty \mid \boldsymbol{X}(0)=\boldsymbol{e}_{k})<1\). Arguing as above, we have

$$\begin{aligned} &E[\boldsymbol{y}^{\boldsymbol{Y}(\tau )}\boldsymbol{z} ^{ \boldsymbol{Z} (\tau )}\mid \tau < \infty , \boldsymbol{X}(0)= \boldsymbol{e}_{k}] \\ =&q_{k}^{-1}\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}P(( \boldsymbol{Y}(\tau ),\boldsymbol{Z}(\tau )) =(\boldsymbol{l}, \tilde{\boldsymbol{l}}),\tau < \infty \mid \boldsymbol{X}(0)= \boldsymbol{e}_{k}) \boldsymbol{y}^{\boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}} \\ =&q_{k}^{-1}\sum \limits _{(\boldsymbol{l}, \tilde{\boldsymbol{l}})\in \mathbf{Z}_{+}^{r_{1}+r_{2}}}\lim \limits _{t\rightarrow \infty}P((\boldsymbol{Y}(\tau ), \boldsymbol{Z}(\tau )) =(\boldsymbol{l},\tilde{\boldsymbol{l}}), \tau < t\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}) \boldsymbol{y}^{ \boldsymbol{l}} \boldsymbol{z}^{\tilde{\boldsymbol{l}}} \\ =& \frac{q_{k}(\boldsymbol{y},\boldsymbol{z})}{q_{k}}. \end{aligned}$$

The proof is complete. □

By Theorem 3.3, we immediately obtain the following corollaries, which give the probability generating functions of the pure death numbers of type-k individuals and the twin-birth numbers of type-k individuals until the extinction of the system.

Corollary 3.3

Suppose that \(\{\boldsymbol{X}(t):t \ge 0\}\) is a 2-type Markov branching process with \(\boldsymbol{X}(0)=\boldsymbol{e}_{k}\ (k=1,2)\), and let \(Y(t)\) and \(Z(t)\) be the pure death numbers of type-1 and type-2 individuals, respectively. If \(\rho (\boldsymbol{1})\leq 0\), then

$$ E[y^{Y(\tau )}z^{Z(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= q_{k}(y,z), \quad y,z\in [0,1),\ k=1,2. $$

If \(\rho (\boldsymbol{1})>0\), then

$$ E[y^{Y(\tau )}z^{Z(\tau )}\mid \tau < \infty ,\boldsymbol{X}(0)=\boldsymbol{e}_{k}]= \frac{q_{k}(y,z)}{q_{k}},\quad y,z\in [0,1),\ k=1,2, $$

where \((q_{1}(y,z),q_{2}(y,z))\) is the unique solution of the equation

$$\begin{aligned} \textstyle\begin{cases} B_{1}(u_{1},u_{2})- b_{\boldsymbol{0}}^{(1)}(1-y)=0, \\ B_{2}(u_{1},u_{2})- b_{\boldsymbol{0}}^{(2)}(1-z)=0. \end{cases}\displaystyle \end{aligned}$$

Proof

Taking \(R_{1}=R_{2}=\{\boldsymbol{0}\}\) in Theorem 3.3, we immediately obtain the conclusions. □

Corollary 3.4

Suppose that \(\{\boldsymbol{X}(t):t \ge 0\}\) is a 2-type Markov branching process with \(\boldsymbol{X}(0)=\boldsymbol{e}_{k}\ (k=1,2)\), and let \(Y(t)\) be the \(2\boldsymbol{e}_{1}\)-birth number of type-1 individuals and \(Z(t)\) the \(2\boldsymbol{e}_{2}\)-birth number of type-2 individuals. If \(\rho (\boldsymbol{1})\leq 0\), then

$$ E[y^{Y(\tau )}z^{Z(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= q_{k}(y,z), \quad y,z\in [0,1),\ k=1,2. $$

If \(\rho (\boldsymbol{1})>0\), then

$$ E[y^{Y(\tau )}z^{Z(\tau )}\mid \tau < \infty ,\boldsymbol{X}(0)=\boldsymbol{e}_{k}]= \frac{q_{k}(y,z)}{q_{k}},\quad y,z\in [0,1),\ k=1,2, $$

where \((q_{1}(y,z),q_{2}(y,z))\) is the unique solution of the equation

$$\begin{aligned} \textstyle\begin{cases} B_{1}(u_{1},u_{2})- b_{2\boldsymbol{e}_{1}}^{(1)}(1-y)u_{1}^{2}=0, \\ B_{2}(u_{1},u_{2})- b_{2\boldsymbol{e}_{2}}^{(2)}(1-z)u_{2}^{2}=0. \end{cases}\displaystyle \end{aligned}$$

Proof

Taking \(R_{1}=\{2\boldsymbol{e}_{1}\}\) and \(R_{2}=\{2\boldsymbol{e}_{2}\}\) in Theorem 3.3, we immediately obtain the conclusions. □

Finally, we give an example to illustrate the main results obtained.

Example 3.1

Suppose that \(\{\boldsymbol{X}(t):t\geq 0\}\) is a 2-type birth-death branching process with

$$ B_{1}(\boldsymbol{x})=p-x_{1}+qx_{2}^{2},\quad B_{2}( \boldsymbol{x})=\alpha -x_{2}+\beta x_{1}, $$

where \(p,\ \alpha \in (0,1)\), \(q=1-p\), and \(\beta =1-\alpha \). Let \(Y(t)\) be the pure death number of type-1 individuals until time t and \(Z(t)\) the pure death number of type-2 individuals until time t. By Corollary 3.1, we know that

$$\begin{aligned} E[y^{Y(t)}z^{Z(t)}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{k}]= \textstyle\begin{cases} u(t,y,z),\ & k=1, \\ v(t,y,z),\ & k=2, \end{cases}\displaystyle \quad y,z\in [0,1), \end{aligned}$$

where \((u(t,y,z),v(t,y,z))\) is the unique solution of

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u}{\partial t}=qv^{2}-u+py, \\ \frac{\partial v}{\partial t}=\beta u-v+\alpha z, \\ u(0)=v(0)=1. \end{cases}\displaystyle \end{aligned}$$
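As an illustrative numerical sketch (not part of the paper's argument), one can integrate this concrete system and watch it settle at the stationary point \(\boldsymbol{q}(y,z)\), as asserted by Theorem 3.1(2); the parameter values below are arbitrary choices of our own.

```python
# Arbitrary illustrative parameters in (0,1) and arguments in [0,1)
p, alpha = 0.6, 0.3
q, beta = 1 - p, 1 - alpha
y, z = 0.5, 0.7

def f(u, v):
    # Right-hand side of the ODE system above
    return q * v * v - u + p * y, beta * u - v + alpha * z

# Classical RK4 from the initial condition u(0) = v(0) = 1
u, v, h = 1.0, 1.0, 0.01
for _ in range(5000):                      # integrate up to t = 50
    k1u, k1v = f(u, v)
    k2u, k2v = f(u + h / 2 * k1u, v + h / 2 * k1v)
    k3u, k3v = f(u + h / 2 * k2u, v + h / 2 * k2v)
    k4u, k4v = f(u + h * k3u, v + h * k3v)
    u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
    v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)

# The trajectory settles at a stationary point: both right-hand sides vanish
ru, rv = f(u, v)
assert abs(ru) < 1e-8 and abs(rv) < 1e-8
assert 0 < u < 1 and 0 < v < 1             # trajectory decreased from (1, 1)
```

Here \(f_{1}(1,1)=q-1+py<0\) and \(f_{2}(1,1)=\beta -1+\alpha z<0\), so by case (b) of Theorem 3.1 the trajectory is decreasing, consistent with what the integration shows.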

It is easy to see that the maximum eigenvalue of \((B_{ij}(\boldsymbol{1}):i,j=1,2)\) is \(\rho (\boldsymbol{1})=\sqrt{2q\beta}-1\). For \(y,z\in [0,1)\), solving the equation

$$\begin{aligned} \textstyle\begin{cases} qv^{2}-u+py=0, \\ \beta u-v+\alpha z=0, \end{cases}\displaystyle \end{aligned}$$

yields that

$$\begin{aligned} &u=u(y,z)=\frac{1}{2q\beta ^{2}}[1- \sqrt{1-4q\beta (p\beta y+\alpha z)}]-\frac{\alpha z}{\beta}, \\ &v=v(y,z)=\frac{1}{2q\beta}[1-\sqrt{1-4q\beta (p\beta y+\alpha z)}]. \end{aligned}$$
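A direct substitution check (again an illustrative sketch with arbitrary parameter values of our own) confirms that these closed forms solve the algebraic system, and that the maximal eigenvalue of the mean matrix \(\begin{pmatrix} -1 & 2q \\ \beta & -1 \end{pmatrix}\) is \(\sqrt{2q\beta}-1\):

```python
import math

p, alpha = 0.6, 0.3              # arbitrary illustrative parameters in (0,1)
q, beta = 1 - p, 1 - alpha
y, z = 0.5, 0.7                  # arbitrary arguments in [0,1)

# Closed-form pair (u(y,z), v(y,z)) from the display above
s = math.sqrt(1 - 4 * q * beta * (p * beta * y + alpha * z))
u = (1 - s) / (2 * q * beta ** 2) - alpha * z / beta
v = (1 - s) / (2 * q * beta)

# It solves q v^2 - u + p y = 0 and beta u - v + alpha z = 0
assert abs(q * v ** 2 - u + p * y) < 1e-12
assert abs(beta * u - v + alpha * z) < 1e-12

# Mean matrix at 1 is [[-1, 2q], [beta, -1]] with eigenvalues -1 ± sqrt(2 q beta),
# so rho(1) = sqrt(2 q beta) - 1
a, b, c, d = -1.0, 2 * q, beta, -1.0
lam_max = ((a + d) + math.sqrt((a - d) ** 2 + 4 * b * c)) / 2
assert abs(lam_max - (math.sqrt(2 * q * beta) - 1)) < 1e-12
```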

By Corollary 3.3, if \(2q\beta \leq 1\), then

$$\begin{aligned}& E[y^{Y(\tau )}z^{Z(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{1}] = \frac{1-\sqrt{1-4q\beta (p\beta y+\alpha z)}-2q\beta \alpha z}{2q\beta ^{2}}, \quad y,z\in [0,1),\\& E[y^{Y(\tau )}z^{Z(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{2}] =\frac{1-\sqrt{1-4q\beta (p\beta y+\alpha z)}}{2q\beta},\quad y,z\in [0,1). \end{aligned}$$

If \(2q\beta >1\), then

$$\begin{aligned}& E[y^{Y(\tau )}z^{Z(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{1}] = \frac{1-\sqrt{1-4q\beta (p\beta y+\alpha z)}-2q\beta \alpha z}{2(1-2q\beta +q\beta ^{2})}, \quad y,z\in [0,1),\\& E[y^{Y(\tau )}z^{Z(\tau )}\mid \boldsymbol{X}(0)=\boldsymbol{e}_{2}] =\frac{1-\sqrt{1-4q\beta (p\beta y+\alpha z)}}{2(1-q\beta )},\quad y,z \in [0,1). \end{aligned}$$
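As a final sanity check (an illustrative sketch; the helper function and parameter values are our own), the generating functions above must carry total mass 1 at \(y=z=1\) in both regimes, since \(q_{k}=1\) when \(\rho (\boldsymbol{1})\leq 0\) and we divide by \(q_{k}\) when \(\rho (\boldsymbol{1})>0\):

```python
import math

def pgf_pair(p, alpha, y, z):
    # Generating functions above for X(0) = e1 and e2; branch on 2*q*beta
    q, beta = 1 - p, 1 - alpha
    s = math.sqrt(1 - 4 * q * beta * (p * beta * y + alpha * z))
    num1, num2 = 1 - s - 2 * q * beta * alpha * z, 1 - s
    if 2 * q * beta <= 1:                      # rho(1) <= 0: q_k = 1
        return num1 / (2 * q * beta ** 2), num2 / (2 * q * beta)
    else:                                      # rho(1) > 0: divide by q_k < 1
        return (num1 / (2 * (1 - 2 * q * beta + q * beta ** 2)),
                num2 / (2 * (1 - q * beta)))

# Total mass at y = z = 1 is 1 in both regimes
for p, alpha in [(0.6, 0.3), (0.1, 0.1)]:      # 2*q*beta = 0.56 and 1.62
    g1, g2 = pgf_pair(p, alpha, 1.0, 1.0)
    assert abs(g1 - 1.0) < 1e-9 and abs(g2 - 1.0) < 1e-9
```

At \(y=z=1\) the square root equals \(|1-2q\beta |\), which makes both numerators and denominators match; this is exactly the identity \(2q\beta^{2}q_{1}=2(1-2q\beta +q\beta ^{2})\) used in the supercritical display.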

Data availability

No datasets were generated or analysed during the current study.

References

  1. Anderson, W.: Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer, New York (1991)


  2. Harris, T.E.: The Theory of Branching Processes. Springer, Berlin (1963)


  3. Athreya, K.B., Ney, P.E.: Branching Processes. Springer, Berlin (1972)


  4. Asmussen, S., Hering, H.: Branching Processes. Birkhäuser, Boston (1983)


  5. Athreya, K.B., Jagers, P.: Classical and Modern Branching Processes. Springer, Berlin (1997)


  6. Vatutin, V.A.: Asymptotic behavior of the probability of the first degeneration for branching processes with immigration. Teor. Veroâtn. Primen. 27(2), 26–35 (1974)


  7. Li, J.P., Chen, A.Y., Pakes, A.G.: Asymptotic properties of the Markov branching process with immigration. J. Theor. Probab. 25(1), 122–143 (2012)


  8. Chen, A.Y., Li, J.P., Ramesh, N.: Uniqueness and extinction of weighted Markov branching processes. Methodol. Comput. Appl. Probab. 7(4), 489–516 (2005)


  9. Chen, A.Y., Pollett, P., Li, J.P., Zhang, H.J.: A remark on the uniqueness of weighted Markov branching processes. J. Appl. Probab. 44(1), 279–283 (2007)


  10. Li, J.P., Chen, A.Y.: Generalized Markov interacting branching processes. Sci. China Math. 61(3), 545–561 (2018)


  11. Li, J.P.: Decay parameter and related properties of 2-type branching processes. Sci. China Math. 52(5), 875–894 (2009)


  12. Li, J.P., Wang, J.: Decay parameter and related properties of n-type branching processes. Sci. China Math. 55(12), 2535–2556 (2012)


  13. Meng, W.W., Li, J.P.: n-Type branching processes with immigration and disasters. Acta Math. Appl. Sin. 41(5), 608–619 (2018) (in Chinese)


  14. Li, Y.Y., Li, J.P.: Down/up crossing properties of weighted Markov collision processes. Front. Math. China 16(2), 525–542 (2021)


  15. Li, Y.Y., Li, J.P., Chen, A.Y.: The down/up crossing properties of Markov branching processes. Sci. Sin., Math. 52(4), 433–446 (2022)


  16. Li, J.P., Meng, W.W.: Regularity criterion for 2-type Markov branching processes with immigration. Stat. Probab. Lett. 121, 109–118 (2017)



Acknowledgements

This work was substantially supported by the National Natural Science Foundation of China (No. 11771452, No. 11971486).

Funding

Funding provided by the National Natural Science Foundation of China (No. 11771452, No. 11971486).

Author information


Contributions

Junping Li proved Theorem 3.1 and Theorem 3.3. Wanting Zhang proved Theorem 3.2 and gave Example 3.1.

Corresponding author

Correspondence to Junping Li.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Li, J., Zhang, W. The multiple birth properties of multi-type Markov branching processes. Bound Value Probl 2024, 105 (2024). https://doi.org/10.1186/s13661-024-01914-7

