In this section, we show the existence and uniqueness of the positive solution to \((OP;u)\) by applying a fixed point theorem for mixed monotone operators (Lemma 2.1). Throughout this section, let \(\widetilde{P}=\{u\in PC[J,R];u(t)\geq 0,\forall t\in J\}\). Obviously, \(\widetilde{P}\) is a normal cone in \(PC[J,R]\); moreover, the normality constant of \(\widetilde{P}\) is 1.
Definition 3.1
([14])
Let \(v\in H\) and M be a given constant. Then a function \(u\in PC[J,R]\cap C'[J',R]\) is called a solution to \((OP;v)\) on J if it satisfies (1.1).
Lemma 3.1
Assume that \(f:J\times R^{+}\times R^{+}\rightarrow R\) is continuous and \(u\in H\). Then \(x\in PC[J,R]\cap C'[J',R]\) is a solution to \((IP;u)\) on J if and only if \(x\in PC[J,R]\) is a solution to the following integral equation:
$$ x(t)=\textstyle\begin{cases} x(0)-\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}[f(s,x(s),x(s))+u(s)]\,ds,\quad t\in J_{0}; \\ x(0)-\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}[f(s,x(s),x(s))+u(s)]\,ds+I_{1}(x(t_{1}),x(t_{1})),\\ \quad t\in J_{1}; \\ x(0)-\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}[f(s,x(s),x(s))+u(s)]\,ds\\ \quad {}+\sum_{0< t_{k}< t}I_{k}(x(t_{k}),x(t_{k})),\quad t\in J_{k}. \end{cases} $$
(3.1)
Proof
If \(t\in J_{0}\), applying the \(\alpha \)th order integral \({}_{0}I_{t}^{\alpha }\) to both sides of the first equation of (1.1), we obtain
$$\begin{aligned} &{}_{0}I_{t}^{\alpha }{}_{0}^{C}D_{t}^{\alpha }x(t)={}_{0}I_{t}^{\alpha }{}_{0}I_{t}^{1-\alpha }x^{\prime }(t)= \int _{0}^{t}x^{\prime }(s)\,ds, \\ &\int _{0}^{t}x^{\prime }(s)\,ds=-{}_{0}I_{t}^{\alpha }\bigl[f\bigl(t,x(t),x(t)\bigr)+u(t)\bigr] =-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. \end{aligned}$$
Then
$$ x(t)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$
If \(t\in J_{1}\), applying \({}_{0}I_{t}^{\alpha }\) to both sides of the first equation of (1.1), we have
$$ {}_{0}I_{t}^{\alpha }{}_{0}^{C}D_{t}^{\alpha }x(t)=-{}_{0}I_{t}^{\alpha }\bigl[f\bigl(t,x(t),x(t)\bigr)+u(t)\bigr]=-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$
Since \(x(t)\) has a discontinuity point \(t=t_{1}\) within \((0, t) \), we get
$$ {}_{0}I_{t}^{\alpha }{}_{0}^{C}D_{t}^{\alpha }x(t)= {}_{0}I_{t}^{\alpha }{}_{0}I_{t}^{1-\alpha }x^{\prime }(t)= \int _{0}^{t}x^{\prime }(s)\,ds= \int _{0}^{t_{1}}x^{\prime }(s)\,ds+ \int _{t_{1}}^{t}x^{\prime }(s)\,ds $$
and
$$\begin{aligned} -x\bigl(t_{1}^{-}\bigr)+x(0)-x(t)+x\bigl(t_{1}^{+}\bigr)&={}_{0}I_{t}^{\alpha }\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr] \\ &=\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. \end{aligned}$$
Furthermore, we obtain
$$ x(t)=x(0)+I_{1}\bigl(x(t_{1}),x(t_{1})\bigr)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$
Similarly, if \(t\in J_{k}\), we have
$$\begin{aligned} {}_{0}I_{t}^{\alpha }{}_{0}^{C}D_{t}^{\alpha }x(t)={}&{}_{0}I_{t}^{\alpha }{}_{0}I_{t}^{1-\alpha }x^{\prime }(t)= \int _{0}^{t}x^{\prime }(s)\,ds \\ ={}& \int _{0}^{t_{1}}x^{\prime }(s)\,ds+ \int _{t_{1}}^{t_{2}}x^{\prime }(s)\,ds+\cdots + \int _{t_{k}}^{t}x^{\prime }(s)\,ds \\ ={}&x\bigl(t_{1}^{-}\bigr)-x(0)+x\bigl(t_{2}^{-}\bigr)-x\bigl(t_{1}^{+}\bigr)+\cdots +x(t)-x\bigl(t_{k}^{+}\bigr), \\ -\int _{0}^{t}x^{\prime }(s) \,ds={}&{}_{0}I_{t}^{\alpha }\bigl[f\bigl(t,x(t),x(t)\bigr)+u(t)\bigr]\\ = {}&\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \end{aligned}$$
and
$$\begin{aligned} &-\bigl[x\bigl(t_{1}^{-}\bigr)-x(0)+x\bigl(t_{2}^{-}\bigr)-x\bigl(t_{1}^{+}\bigr)+\cdots +x(t)-x\bigl(t_{k}^{+}\bigr)\bigr]\\ &\quad = \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. \end{aligned}$$
Finally, we get
$$\begin{aligned} x(t)={}&x(0)+\bigl(x\bigl(t_{1}^{+}\bigr)-x\bigl(t_{1}^{-}\bigr)\bigr)+\bigl(x\bigl(t_{2}^{+}\bigr)-x\bigl(t_{2}^{-}\bigr)\bigr)+\cdots +\bigl(x\bigl(t_{k}^{+}\bigr)-x\bigl(t_{k}^{-}\bigr)\bigr) \\ &{}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \\ ={}&x(0)+I_{1}\bigl(x(t_{1}),x(t_{1})\bigr)+I_{2}\bigl(x(t_{2}),x(t_{2})\bigr)+\cdots +I_{k}\bigl(x(t_{k}),x(t_{k})\bigr) \\ &{}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \\ ={}&x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds+ \sum_{0< t_{k}< t}I_{k}\bigl(x(t_{k}),x(t_{k})\bigr). \end{aligned}$$
Thus every solution of (1.1) satisfies the integral equation (3.1).
Conversely, we prove that every solution of (3.1) satisfies the differential system (1.1).
If \(t\in J_{0}\), letting \(t=0\) in (3.1), we get \(x(0)=x_{0}\).
If \(t\in J_{1}\), applying the Caputo derivative \({}_{0}^{C}D_{t}^{\alpha }\) to both sides of (3.1), we have
$$\begin{aligned} {}_{0}^{C}D_{t}^{\alpha }x(t)={}&{}_{0}^{C}D_{t}^{\alpha } \biggl\{ x_{0}+I_{1}\bigl(x(t_{1}),x(t_{1})\bigr)- \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \biggr\} \\ ={}&{}_{0}^{C}D_{t}^{\alpha }x_{0}+{}_{0}^{C}D_{t}^{\alpha }I_{1}\bigl(x(t_{1}),x(t_{1})\bigr) \\ &{}-{}_{0}^{C}D_{t}^{\alpha } \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \\ ={}&-f\bigl(t,x(t),x(t)\bigr)-u(t). \end{aligned}$$
Letting \(t\rightarrow t_{1}^{-}\) in the first case of (3.1), we have
$$ x\bigl(t_{1}^{-}\bigr)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$
Letting \(t\rightarrow t_{1}^{+}\) in the second case of (3.1), we have
$$ x\bigl(t_{1}^{+}\bigr)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t_{1}}(t_{1}-s)^{\alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr] \,ds+I_{1}\bigl(x(t_{1}),x(t_{1})\bigr), $$
and then we know
$$ I_{1}\bigl(x(t_{1}),x(t_{1})\bigr)=x\bigl(t_{1}^{+}\bigr)-x\bigl(t_{1}^{-}\bigr). $$
Hence, when \(t\in J_{1}\), (3.1) satisfies every equation of (1.1). Likewise, if \(t\in J_{k}\), (3.1) satisfies every equation of (1.1) as well, i.e., (3.1) and (1.1) are completely equivalent. This completes the proof. □
For convenience, define the operator \(A:PC[J,R]\times PC[J,R]\rightarrow PC[J,R]\) by
$$ A(x,y) (t)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,x(s),y(s)\bigr)+u(s)\bigr]\,ds+ \sum_{0< t_{k}< t}I_{k}\bigl(x(t_{k}),y(t_{k})\bigr). $$
(3.2)
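As a numerical illustration of the operator (3.2) (not part of the analysis), the fixed point \(x=A(x,x)\) can be approximated by successive substitution on a uniform grid. All concrete choices below (\(\alpha \), \(x_{0}\), f, \(I_{1}\), u) are assumptions made for this sketch, picked to be consistent with the sign and monotonicity hypotheses used below:

```python
import math

# Illustrative data (assumed for this sketch, not from the paper):
alpha = 0.5            # fractional order in (0, 1)
x0 = 1.0               # initial value x(0)
t1 = 0.5               # single impulse point t_1 in (0, 1)
N = 200                # grid size on J = [0, 1]
ts = [i / N for i in range(N + 1)]

def f(t, x, y):
    # sample right-hand side: nonpositive, decreasing in x, increasing in y
    return -(x ** 0.3) * (y ** -0.4)

def I1(x, y):
    # sample impulse: nonnegative, increasing in x, decreasing in y
    return 0.1 * (x ** 0.3) * (y ** -0.4)

def u(t):
    # admissible control with -M <= u(t) <= 0
    return -0.1

def A(x):
    # discrete version of (3.2) with y = x, left-rectangle quadrature
    out = []
    for i, t in enumerate(ts):
        acc = 0.0
        for j in range(i):                     # integrate over [0, t)
            s = ts[j]
            acc += (t - s) ** (alpha - 1) * (f(s, x[j], x[j]) + u(s)) / N
        val = x0 - acc / math.gamma(alpha)     # x0 minus the weighted integral
        if t > t1:                             # impulse jump after t_1
            k = int(t1 * N)
            val += I1(x[k], x[k])
        out.append(val)
    return out

x = [x0] * (N + 1)
for _ in range(30):                            # successive substitution
    x = A(x)
```

With these data the iterates stay positive and settle quickly, and the jump of size \(I_{1}\) is visible at \(t_{1}\); this is only a plausibility check, not a substitute for Lemma 2.1.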
Theorem 3.1
Assume that \(M>0\) and
 \((H_{1})\):

\(f:J\times R^{+}\times R^{+}\rightarrow R^{-}\) is continuous; for all \(t\in J\) and \(x,y\in R^{+}\), \(f(t,x,y)\) is monotone decreasing in x for each \(t\in J\) and \(y\in R^{+}\) and is monotone increasing in y for each \(t\in J\) and \(x\in R^{+}\); furthermore, \(f(t,\frac{1}{2},1)<0\) for all \(t\in J\);
 \((H_{2})\):

for each \(k=1,2,\ldots,m\), \(I_{k}\in C[R^{+}\times R^{+}]\) and \(I_{k}\geq 0\); \(I_{k}(x,y)\) is monotone increasing in x for each \(y\in R^{+}\) and is monotone decreasing in y for each \(x\in R^{+}\);
 \((H_{3})\):

for all \(\gamma \in (0,1)\) and \(x,y\in R^{+}\), there exists \(\varphi _{1}(\gamma )\in (\gamma,1]\) such that
$$ f\bigl(t,\gamma x,\gamma ^{-1}y\bigr)\leq \varphi _{1}( \gamma )f(t,x,y); $$
for all \(\gamma \in (0,1)\), \(x,y\in R^{+}\), and \(k=1,2,\ldots,m\), there exists \(\varphi _{2}(\gamma )\in (\gamma,1]\) such that
$$ I_{k}\bigl(\gamma x,\gamma ^{-1}y\bigr)\geq \varphi _{2}(\gamma )I_{k}(x,y). $$
Then, for all \(u\in H\) with \(-M\leq u(t)\leq 0\), the problem \((OP;u)\) has a unique positive solution \(x^{*}\in \widetilde{P}_{h}\), where \(h(t)=\frac{1}{2}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}\,ds\) and \(\widetilde{P}_{h}=\{u\in \widetilde{P}\mid u\sim h\}\).
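The weight h in Theorem 3.1 has the closed form \(h(t)=\frac{1}{2}+\frac{t^{\alpha }}{\Gamma (\alpha +1)}\), since \(\int _{0}^{t}(t-s)^{\alpha -1}\,ds=\frac{t^{\alpha }}{\alpha }\) and \(\Gamma (\alpha +1)=\alpha \Gamma (\alpha )\). As a hedged numerical sanity check (illustrative only; midpoint quadrature, whose accuracy is limited by the endpoint singularity of the integrand):

```python
import math

# Sanity check of the closed form for the weight h:
#   1/2 + (1/Gamma(alpha)) * int_0^t (t-s)**(alpha-1) ds
#     = 1/2 + t**alpha / Gamma(alpha+1)
def h_quad(t, alpha, n=100000):
    ds = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds            # midpoint rule avoids s = t exactly
        total += (t - s) ** (alpha - 1) * ds
    return 0.5 + total / math.gamma(alpha)

def h_closed(t, alpha):
    return 0.5 + t ** alpha / math.gamma(alpha + 1)

for alpha in (0.3, 0.5, 0.8):
    for t in (0.25, 0.5, 1.0):
        # loose tolerance: midpoint quadrature near the singular endpoint
        assert abs(h_quad(t, alpha) - h_closed(t, alpha)) < 5e-2
```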
Proof
From (3.2), \((H_{1})\), and \((H_{2})\), we have \((A(x,y))(t)\geq 0\) for all \(x,y\in \widetilde{P}\), that is, \(A:\widetilde{P}\times \widetilde{P}\rightarrow \widetilde{P}\). Moreover, the operator \(A:\widetilde{P}\times \widetilde{P}\rightarrow \widetilde{P}\) is a mixed monotone operator. Now, we show that A is a \(\varphi \)-concave-convex operator. Put \(\varphi (\gamma )=\min \{\varphi _{1}(\gamma ),\varphi _{2}(\gamma )\}\), where \(\gamma \in (0,1)\). Since \(\varphi _{1}(\gamma )\in (\gamma,1]\) and \(\varphi _{2}(\gamma )\in (\gamma,1]\), it is easy to see that \(\gamma <\varphi (\gamma )\leq 1\). Hence, from \((H_{1})\)–\((H_{3})\) and \(u(t)\leq 0\), for all \(\gamma \in (0,1)\) and \(x,y\in \widetilde{P}\), we obtain
$$\begin{aligned} A\bigl(\gamma x,\gamma ^{-1}y\bigr) (t) ={}&x_{0}- \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s, \gamma x(s),\gamma ^{-1}y(s)\bigr)+u(s)\bigr]\,ds \\ &{}+\sum_{0< t_{k}< t}I_{k}\bigl(\gamma x(t_{k}),\gamma ^{-1}y(t_{k})\bigr) \\ \geq {}& x_{0}-\frac{\varphi _{1}(\gamma )}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),y(s)\bigr)+u(s)\bigr]\,ds \\ &{}+\varphi _{2}(\gamma )\sum_{0< t_{k}< t}I_{k} \bigl(x(t_{k}),y(t_{k})\bigr) \\ \geq{} &\varphi (\gamma )A(x,y) (t),\quad \forall t\in J, \end{aligned}$$
that is, \(A(\gamma x,\gamma ^{-1}y)(t)\geq \varphi (\gamma )A(x,y)(t)\) for all \(x,y\in \widetilde{P}\) and \(\gamma \in (0,1)\).
Let \(h(t):=\frac{1}{2}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}\,ds=\frac{1}{2}+\frac{t^{\alpha }}{\Gamma (\alpha +1)}\), \(\forall t\in J\). Then we can easily obtain that \(\frac{1}{2}\leq h(t)\leq \frac{1}{2}+\frac{1}{\Gamma (\alpha +1)}\), \(\forall t\in J\). Set
$$ r_{1}=\min_{t\in J}\biggl[-f\biggl(t, \frac{1}{2},\frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)}\biggr) \biggr],\qquad r_{2}=\max_{t\in J}\biggl[-f\biggl(t, \frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2}\biggr)\biggr], $$
then \(0\leq r_{1}\leq r_{2}\). Furthermore, from \(-M\leq u\leq 0\), it is easy to know that
$$\begin{aligned} A(h,h) (t)={}&x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,h(s),h(s)\bigr)+u(s)\bigr]\,ds+\sum_{0< t_{k}< t}I_{k}\bigl(h(t_{k}),h(t_{k})\bigr) \\ \geq{} & x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}f\biggl(s,\frac{1}{2},\frac{1}{2}+\frac{1}{\Gamma (\alpha +1)}\biggr)\,ds \\ \geq{} &x_{0}+\frac{r_{1}}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\,ds \\ \geq{} &\frac{2\Gamma (\alpha +1)}{2+\Gamma (\alpha +1)} (x_{0}+r_{1} )h(t) \\ ={}&r_{3}h(t), \quad\forall t\in J, \end{aligned}$$
where \(r_{3}=\frac{2\Gamma (\alpha +1)}{2+\Gamma (\alpha +1)} (x_{0}+r_{1} )\). Furthermore,
$$\begin{aligned} A(h,h) (t)={}&x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f\bigl(s,h(s),h(s)\bigr)+u(s)\bigr]\,ds+\sum_{0< t_{k}< t}I_{k}\bigl(h(t_{k}),h(t_{k})\bigr) \\ \leq {}& x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}f\biggl(s,\frac{1}{2}+\frac{1}{\Gamma (\alpha +1)},\frac{1}{2}\biggr)\,ds \\ &{}+ \frac{M}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\,ds+\sum _{0< t_{k}< 1}I_{k}\biggl(\frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)}, \frac{1}{2}\biggr) \\ \leq {}&x_{0}+\frac{r_{2}}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\,ds+Mh(t)+\sum _{0< t_{k}< 1}I_{k}\biggl(\frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2}\biggr) \\ \leq {}&x_{0}+r_{2}h(t)+Mh(t)+\sum _{0< t_{k}< 1}I_{k} \biggl( \frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2} \biggr) \\ \leq {}&2 \biggl(x_{0}+r_{2}+M+\sum _{0< t_{k}< 1}I_{k} \biggl( \frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2} \biggr) \biggr)h(t) \\ ={}& r_{4}h(t),\quad \forall t\in J, \end{aligned}$$
where \(r_{4}=2 (x_{0}+r_{2}+M+\sum_{0< t_{k}<1}I_{k} ( \frac{1}{2}+\frac{1}{\Gamma (\alpha +1)},\frac{1}{2} ) )\).
From the above, we know that \(r_{3}h\leq A(h,h)\leq r_{4}h\), that is, \(A(h,h)\in \widetilde{P}_{h}\). Therefore, by employing Lemma 2.1, the equation \(x=A(x,x)\) has a unique positive solution \(x^{*}\in \widetilde{P}_{h}\), i.e., \((OP;u)\) has a unique positive solution on J. The proof is complete. □
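For intuition, hypotheses \((H_{1})\)–\((H_{3})\) are satisfied, for instance, by the power-law pair \(f(t,x,y)=-x^{a}y^{-b}\) and \(I_{k}(x,y)=cx^{a}y^{-b}\) with \(a,b>0\), \(a+b<1\), \(c\geq 0\), taking \(\varphi _{1}(\gamma )=\varphi _{2}(\gamma )=\gamma ^{a+b}\in (\gamma,1)\). These concrete choices are our own illustration, not from the paper; the scaling inequalities of \((H_{3})\) can be spot-checked numerically:

```python
# Spot-check of (H3) for the sample pair (illustrative choices):
#   f(t, x, y) = -x**a * y**(-b),  I(x, y) = c * x**a * y**(-b),
# with phi(gamma) = gamma**(a + b) and a + b < 1.
a, b, c = 0.3, 0.4, 0.5

def f(t, x, y):
    return -(x ** a) * (y ** -b)

def I(x, y):
    return c * (x ** a) * (y ** -b)

def phi(g):
    return g ** (a + b)

ok = True
grid = [0.1 * i for i in range(1, 21)]        # sample points in (0, 2]
for g in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert g < phi(g) <= 1                    # phi(gamma) lies in (gamma, 1]
    for x in grid:
        for y in grid:
            # (H3): f(t, g*x, y/g) <= phi(g) * f(t, x, y)
            ok &= f(0, g * x, y / g) <= phi(g) * f(0, x, y) + 1e-12
            # (H3): I(g*x, y/g) >= phi(g) * I(x, y)
            ok &= I(g * x, y / g) >= phi(g) * I(x, y) - 1e-12
print(ok)   # prints True
```

Here both inequalities actually hold with equality, since \(f(t,\gamma x,\gamma ^{-1}y)=\gamma ^{a+b}f(t,x,y)\) for this homogeneous choice.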
Corollary 3.1
Suppose that
 \((H_{1}')\):

\(f:J\times R^{+}\rightarrow (-\infty,0]\) is continuous; \(f(t,x)\) is nonincreasing in x for each \(t\in J\); moreover, \(f(t,\frac{1}{2})<0\) for all \(t\in J\);
 \((H_{2}')\):

for each \(k=1,2,\ldots,m\), \(I_{k}:R^{+}\rightarrow R^{+}\) and \(I_{k}(x)\) is nondecreasing in x;
 \((H_{3}')\):

for all \(\gamma \in (0,1)\) and \(x\in R^{+}\), there exists \(\varphi _{1}(\gamma )\in (\gamma,1]\) such that
$$ f(t,\gamma x)\leq \varphi _{1}(\gamma )f(t,x); $$
for all \(\gamma \in (0,1)\), \(x\in R^{+}\), and \(k=1,2,\ldots,m\), there exists \(\varphi _{2}(\gamma )\in (\gamma,1]\) such that
$$ I_{k}(\gamma x)\geq \varphi _{2}(\gamma )I_{k}(x). $$
Then, for all \(u\in H\) with \(-M\leq u(t)\leq 0\), the following impulsive initial value problem
$$ (IP_{1};u)\textstyle\begin{cases} -{}_{0}^{C}D_{t}^{\alpha }x(t)=f(t,x(t))+u(t),\quad t\in (0,1)\setminus \{t_{1},t_{2},\ldots,t_{m}\}, \\ \Delta x|_{t=t_{k}}=I_{k}(x(t_{k})),\quad k=1,2,\ldots,m, \\ x(0)=x_{0}, \end{cases} $$
has a unique positive solution \(x^{*}\in \widetilde{P}_{h}\) on J, where \(h(t)=\frac{1}{2}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}\,ds\).
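As a quick sanity check of the scaling conditions \((H_{3}')\), consider the illustrative pair \(f(t,x)=-(1+\sqrt{x})\) with \(\varphi _{1}(\gamma )=\sqrt{\gamma }\) and \(I_{k}(x)=x^{1/3}\) with \(\varphi _{2}(\gamma )=\gamma ^{1/3}\) (sample choices of ours, not taken from the paper):

```python
# Spot-check of (H3'): f(t, g*x) <= phi1(g) * f(t, x) and
#                      I(g*x)   >= phi2(g) * I(x)
# for the illustrative pair below.
def f(t, x):
    return -(1.0 + x ** 0.5)

def I(x):
    return x ** (1.0 / 3.0)

ok = True
for gi in range(1, 10):
    g = gi / 10.0
    phi1, phi2 = g ** 0.5, g ** (1.0 / 3.0)
    assert g < phi1 <= 1 and g < phi2 <= 1    # both lie in (gamma, 1]
    for xi in range(1, 41):
        x = xi / 10.0
        ok &= f(0, g * x) <= phi1 * f(0, x) + 1e-12
        ok &= I(g * x) >= phi2 * I(x) - 1e-12
print(ok)   # prints True
```

The f-inequality reduces to \(1\geq \sqrt{\gamma }\), and the \(I_{k}\)-inequality holds with equality, so both checks pass on the whole grid.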