
Classical and weak solutions of the partial differential equations associated with a class of two-point boundary value problems

Abstract

This paper is concerned with a class of first-order quasilinear parabolic partial differential equations associated with a class of two-point boundary value problems for ordinary differential equations. We prove that the function defined by the solution of the ordinary differential equations is the unique solution, in both the classical and weak senses, of the corresponding first-order quasilinear parabolic partial differential equation.

1 Introduction

In this paper, we study the problem of solving the following first-order quasilinear parabolic partial differential equation (PDE):

$$ \textstyle\begin{cases} \partial_{t} u(t,x)+\triangledown_{x} u(t,x)b(t,x,u(t,x))+f(t,x,u(t,x)) = 0,\\ u(T,x)=h(x),\quad(t,x)\in[0,T]\times\mathbb{R}^{n}, \end{cases} $$
(1)

where \(\triangledown_{x}u=(\frac{\partial u}{\partial x_{i}})_{1\leq i\leq n}\) is an n-dimensional row vector. We note that PDE (1) is nonstandard since its coefficient b depends on \(u(t,x)\), which differs from the traditional quasilinear form. Studying the existence and uniqueness of a solution of this kind of partial differential equation by the traditional methods of the theory of partial differential equations is quite complicated; some related studies can be found in [14]. However, PDE (1) is naturally related to a family of coupled ordinary differential equations (ODEs) associated with a class of two-point boundary value problems parameterized by \((t,x)\in[0,T]\times \mathbb{R}^{n}\) as follows:

$$ \textstyle\begin{cases} \dot{X}_{s}^{t,x}=b(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}),\\ -\dot{Y}_{s}^{t,x}=f(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}),\\ X_{t}^{t,x}=x,\qquad Y_{T}^{t,x}=h(X^{t,x}_{T}). \end{cases} $$

This two-point boundary value problem can be embedded into an optimal control problem when applying the maximum principle; the existence and uniqueness results were obtained in [5]. Peng and Pardoux [6] studied the relationship between a system of quasilinear PDEs and a class of backward stochastic differential equations. They proved that, under different assumptions, the function defined by the solution of the backward stochastic differential equation is a classical and a viscosity solution of a class of second-order quasilinear PDEs. In the related field, for the stochastic case, by introducing a family of coupled forward–backward stochastic differential equations (FBSDEs), Wu and Yu [7] gave a probabilistic interpretation, in the viscosity sense, for a class of systems of second-order quasilinear parabolic PDEs combined with algebraic equations. Ouknine and Turpin [8] studied weak solutions of second-order PDEs in Sobolev spaces and gave a probabilistic interpretation via FBSDEs (see also Wei and Wu [9] and Kunita [10]). Using analysis techniques from these references, in this paper we study PDE (1) in both the classical and weak senses, including the Sobolev weak solution and the viscosity solution associated with the two-point boundary value problem.

The paper is organized as follows. In Sect. 2, we recall some existence and uniqueness results for the two-point boundary value problem from [5] and give some regularity properties of the solutions of the ODEs. Then, in Sect. 3, we prove that the function defined by the solution of an ODE is the unique classical solution of PDE (1). We derive the existence and uniqueness of a solution of PDE (1) in the Sobolev space and in the viscosity sense in Sects. 4 and 5, respectively. Finally, we give some concluding remarks.

2 Preliminary results of the ODEs

In this paper, we work with a finite time horizon \(T>0\). We denote by \(\mathbb{R}^{n}\) the n-dimensional Euclidean space and by \(\mathbb {R}^{m\times n}\) the collection of \(m\times n\) matrices. For a given Euclidean space, we denote by \(\langle\cdot,\cdot\rangle\) (resp., \(|\cdot|\)) the inner product (resp., norm). The superscript \(\top\) denotes the transpose of vectors or matrices.

Let a (terminal) function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) and a couple of coefficients \((b,f):[0,T]\times\mathbb{R}^{n} \times\mathbb {R}^{m} \rightarrow\mathbb{R}^{n+m}\) be given. We introduce a family of coupled ODEs parameterized by the initial time \(t\in[0,T]\) and initial state \(x\in\mathbb{R}^{n}\):

$$ \textstyle\begin{cases} \dot{X}_{s}^{t,x}=b(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}),\\ -\dot{Y}_{s}^{t,x}=f(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}),\\ X^{t,x}_{t}=x,\qquad Y^{t,x}_{T}=h(X^{t,x}_{T}). \end{cases} $$
(2)

Let also an \(m\times n\) full-rank matrix G be given. For each \((x,y)\in\mathbb{R}^{n+m}\), we denote

$$\Gamma=\begin{pmatrix}{x}\\{y}\end{pmatrix}, \qquad A(t,\Gamma)=\begin{pmatrix}{-G^{\top}f}\\{Gb}\end{pmatrix} (t, \Gamma). $$

In this paper, we use the following standard assumptions:

(H1) For each Γ, \(A(\cdot,\Gamma)\) is in \(L^{2}(0,T)\); h and A are uniformly Lipschitz continuous with respect to x and Γ, respectively. Here \(L^{2}(0,T)=\{ \varphi:\int_{0}^{T}|\varphi (s)|^{2}\,ds< \infty\}\).

(H2) There exist three nonnegative constants μ, \(U_{1}\), and \(U_{2}\) satisfying \(U_{1}+U_{2}>0\) and \(\mu+U_{2}>0\). Moreover, \(\mu>0\) and \(U_{1}>0\) (resp., \(U_{2}>0\)) in the case of \(m>n\) (resp., \(n>m\)), and for all \(\Gamma=(x,y)^{\top}\) and \(\overline{\Gamma}=(\overline{x}, \overline{y})^{\top}\),

$$\bigl\langle h(x)-h(\overline{x}), G(x-\overline{x})\bigr\rangle \geq\mu \bigl\vert G(x-\overline{x}) \bigr\vert ^{2} $$

and

$$\bigl\langle A(t,\Gamma)-A(t,\overline{\Gamma}), \Gamma-\overline{\Gamma} \bigr\rangle \leq-U_{1} \bigl\vert G(x-\overline{x}) \bigr\vert ^{2}-U_{2} \bigl\vert G^{\top}(y-\overline{y}) \bigr\vert ^{2}. $$
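As a simple illustration (an example we add here for concreteness; it is not taken from [5]), let \(n=m=1\), \(G=1\),

$$ b(t,x,y)=-y,\qquad f(t,x,y)=x,\qquad h(x)=x. $$

Then \(A(t,\Gamma)=(-x,-y)^{\top}\), so

$$ \bigl\langle A(t,\Gamma)-A(t,\overline{\Gamma}),\Gamma-\overline{\Gamma}\bigr\rangle =- \vert x-\overline{x} \vert ^{2}- \vert y-\overline{y} \vert ^{2},\qquad \bigl\langle h(x)-h(\overline{x}),G(x-\overline{x})\bigr\rangle = \vert x-\overline{x} \vert ^{2}, $$

and (H1) and (H2) hold with \(\mu=U_{1}=U_{2}=1\). We will return to this example below.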

Under Assumptions (H1) and (H2), ODEs (2) admit a unique solution

$$\bigl(X_{s}^{t,x}, Y_{s}^{t,x} \bigr)_{s\in[t,T]} \in C\bigl([t,T]\bigr); $$

see Theorem 1.1 in [5]. In addition, we have the following \(L^{2}\)-estimates.

Proposition 2.1

Let \((b,f,h)\) and \((\overline{b},\overline{f},\overline{h})\) satisfy (H1) and (H2), and let \(x, x'\in\mathbb{R}^{n}\) and \(s\in[t,T]\). Let \(({X}_{s}^{t,x},{Y}_{s}^{t,x})\) (resp., \((\overline{X}_{s}^{t,x'},\overline {Y}_{s}^{t,x'})\)) denote the unique solution of ODEs

$$\begin{aligned} &\textstyle\begin{cases} \dot{X}_{s}^{t,x}=b(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}),\\ -\dot{Y}_{s}^{t,x}=f(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}),\\ X_{t}^{t,x}=x,\qquad Y_{T}^{t,x}=h(X^{t,x}_{T}). \end{cases}\displaystyle \\ & \left(\textit{resp.},\ \textstyle\begin{cases} \dot{\overline{X}}_{s}^{t,x'}=\overline{b}(s,\overline {X}_{s}^{t,x'},\overline{Y}_{s}^{t,x'}),\\ -\dot{\overline{Y}}_{s}^{t,x'}=\overline{f}(s,\overline {X}_{s}^{t,x'},\overline{Y}_{s}^{t,x'}),\\ \overline{X}_{t}^{t,x'}=x',\qquad\overline{Y}_{T}^{t,x'}=\overline{h}(\overline{X}^{t,x'}_{T}) \end{cases}\displaystyle \right). \end{aligned}$$

Then the following estimates hold:

$$\begin{aligned} &\sup_{t\leq s \leq T} \bigl\vert X_{s}^{t,x} \bigr\vert ^{2}+\sup_{t\leq s \leq T} \bigl\vert Y_{s}^{t,x} \bigr\vert ^{2}\leq C\bigl(1+ \vert x \vert ^{2}\bigr), \end{aligned}$$
(3)
$$\begin{aligned} &\begin{aligned}[b] &\sup_{t\leq s\leq T} \bigl\vert {X}_{s}^{t,x}-\overline{X}_{s}^{t,x'} \bigr\vert ^{2}+\sup_{t\leq s\leq T} \bigl\vert {Y}_{s}^{t,x}-\overline{Y}_{s}^{t,x'} \bigr\vert ^{2} \\ &\quad \leq C \biggl[ \bigl\vert x-x' \bigr\vert ^{2}+ \bigl\vert h\bigl(\overline{X}_{T}^{t,x'} \bigr)-\overline{h}\bigl(\overline {X}_{T}^{t,x'}\bigr) \bigr\vert ^{2} \\ &\qquad {}+ \int_{t}^{T} \bigl( \bigl\vert b\bigl(s, \overline{X}_{s}^{t,x'},\overline {Y}_{s}^{t,x'} \bigr)-\overline{b}\bigl(s,\overline{X}_{s}^{t,x'},\overline {Y}_{s}^{t,x'}\bigr) \bigr\vert ^{2} \\ &\qquad {}+ \bigl\vert f\bigl(s,\overline{X}_{s}^{t,x'}, \overline {Y}_{s}^{t,x'}\bigr)-\overline{f}\bigl(s, \overline{X}_{s}^{t,x'},\overline {Y}_{s}^{t,x'} \bigr) \bigr\vert ^{2} \bigr) \biggr]. \end{aligned} \end{aligned}$$
(4)

Proof

These \(L^{2}\)-estimates are standard (see [7, 11]). For the reader’s convenience, we only prove (4). Let

$$\begin{aligned} &(\hat{X}_{s},\hat{Y}_{s})=\bigl({X}_{s}^{t,x}- \overline {X}_{s}^{t,x'},{Y}_{s}^{t,x}- \overline{Y}_{s}^{t,x'}\bigr); \\ &\alpha(s):=\hat{f}\bigl(s,\overline{X}_{s}^{t,x'},\overline {Y}_{s}^{t,x'}\bigr)=f\bigl(s,\overline{X}_{s}^{t,x'}, \overline{Y}_{s}^{t,x'}\bigr)-\overline {f}\bigl(s, \overline{X}_{s}^{t,x'},\overline{Y}_{s}^{t,x'} \bigr); \\ &\beta(s):=\hat{b}\bigl(s,\overline{X}_{s}^{t,x'},\overline {Y}_{s}^{t,x'}\bigr)=b\bigl(s,\overline{X}_{s}^{t,x'}, \overline{Y}_{s}^{t,x'}\bigr)-\overline {b}\bigl(s, \overline{X}_{s}^{t,x'},\overline{Y}_{s}^{t,x'} \bigr). \end{aligned}$$

We differentiate \(\langle G\hat{X}_{s},\hat{Y}_{s}\rangle\) with respect to s, integrate from t to T, and use Assumptions (H1) and (H2) together with the elementary inequality \(2ab\leq\lambda a^{2}+\lambda^{-1}b^{2}\) to get

$$ \begin{aligned}[b] &\mu \vert \hat{X}_{T} \vert ^{2}+U_{1} \int_{t}^{T}\langle G\hat{X}_{s},G \hat{X}_{s}\rangle \,ds+U_{2} \int_{t}^{T}\bigl\langle G^{\top} \hat{Y}_{s},G^{\top}\hat{Y}_{s}\bigr\rangle \,ds \\ &\quad \leq C \lambda \bigl\vert x-x' \bigr\vert ^{2}+\frac{C}{\lambda}\sup_{t\leq s\leq T} \vert \hat {Y}_{s} \vert ^{2}+C \int_{t}^{T} \bigl( \bigl\vert \alpha(s) \bigr\vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds, \end{aligned} $$
(5)

where \(\lambda>0\) can be any positive number. For \(|\hat{X}_{s}|^{2}\) and \(|\hat{Y}_{s}|^{2}\), by the Gronwall inequality we have

$$\begin{aligned} &\sup_{t\leq s\leq T } \vert \hat{X}_{s} \vert ^{2}\leq C \bigl\vert x-x' \bigr\vert ^{2}+C \int_{t}^{T} \bigl( \vert \hat {Y}_{s} \vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds, \end{aligned}$$
(6)
$$\begin{aligned} &\sup_{t\leq s\leq T } \vert \hat{Y}_{s} \vert ^{2}\leq C \vert \hat{X}_{T} \vert ^{2}+C \int_{t}^{T} \bigl( \vert \hat{X}_{s} \vert ^{2}+ \bigl\vert \alpha(s) \bigr\vert ^{2} \bigr) \,ds. \end{aligned}$$
(7)

Combining (5) and (7), we have

$$\begin{aligned} &\biggl(\mu-\frac{C}{\lambda}\biggr) \vert \hat{X}_{T} \vert ^{2}+\biggl(U_{1}-\frac{C}{\lambda}\biggr) \int _{t}^{T} \vert \hat{X}_{s} \vert ^{2} \,ds+U_{2} \int_{t}^{T} \vert \hat{Y}_{s} \vert ^{2} \,ds \\ &\quad \leq C \lambda \bigl\vert x-x' \bigr\vert ^{2}+C \int_{t}^{T} \bigl( \bigl\vert \alpha(s) \bigr\vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds. \end{aligned}$$
(8)

Case 1: \(\mu>0\), \(U_{1}>0\): We can choose \(\lambda=\lambda_{0}\) such that

$$\mu-\frac{C}{\lambda_{0}} >0, \qquad U_{1}-\frac{C}{\lambda_{0}}>0. $$

Combining (8) with (7), we get

$$\sup_{t\leq s\leq T} \vert \hat{Y}_{s} \vert ^{2}\leq C \bigl\vert x-x' \bigr\vert ^{2}+C \int_{t}^{T} \bigl( \bigl\vert \alpha(s) \bigr\vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds. $$

Combining this with (6), we also bound \(\sup_{t\leq s\leq T}|\hat{X}_{s}|^{2}\), and (4) follows in this case.

Case 2: \(U_{2} >0\):

Combining (6) and (8), we have

$$\sup_{t\leq s\leq T} \vert \hat{X}_{s} \vert ^{2}\leq C \lambda \bigl\vert x-x' \bigr\vert ^{2}+\frac{C}{\lambda }\biggl[ \vert \hat{X}_{T} \vert ^{2}+ \int_{t}^{T} \vert \hat{X}_{s} \vert ^{2}\,ds\biggr]+C \int_{t}^{T} \bigl( \bigl\vert \alpha(s) \bigr\vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds. $$

Then

$$\sup_{t\leq s\leq T} \vert \hat{X_{s}} \vert ^{2}\leq C\lambda \bigl\vert x-x' \bigr\vert ^{2}+\frac{C}{\lambda }(1+T)\sup_{t\leq s\leq T} \vert \hat{X_{s}} \vert ^{2}+C \int_{t}^{T} \bigl( \bigl\vert \alpha (s) \bigr\vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds. $$

We choose \(\lambda=\lambda_{1}\) such that

$$\frac{C}{\lambda_{1}}(1+T) \leq\frac{1}{2}. $$

Then we get

$$ \sup_{t\leq s\leq T } \vert \hat{X_{s}} \vert ^{2}\leq C \bigl\vert x-x' \bigr\vert ^{2}+C \int_{t}^{T} \bigl( \bigl\vert \alpha(s) \bigr\vert ^{2}+ \bigl\vert \beta(s) \bigr\vert ^{2} \bigr) \,ds. $$
(9)

Combining (7) and (9), we get (4) and conclude the proof. □

Using \(Y_{t}^{t,x}\), we define a function u from \([0,T]\times\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\) by

$$ u(t,x):=Y_{t}^{t,x}. $$
(10)

Remark 2.1

Assumption (H2) can be relaxed while still ensuring the existence and uniqueness of the solution of ODEs (2); see [5].

Remark 2.2

For each \((t,x) \in[0,T]\times\mathbb{R}^{n} \), from the uniqueness of the solution of ODEs (2) we have

$$ u\bigl(s,X_{s}^{t,x}\bigr)=Y_{s}^{s,X_{s}^{t,x}}=Y_{s}^{t,x}. $$
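In the illustrative example introduced after (H2) (\(n=m=1\), \(b=-y\), \(f=x\), \(h(x)=x\)), system (2) is linear, and a direct computation gives

$$ X_{s}^{t,x}=Y_{s}^{t,x}=xe^{-(s-t)},\quad s\in[t,T], $$

so that \(u(t,x)=Y_{t}^{t,x}=x\). In particular, \(u(s,X_{s}^{t,x})=X_{s}^{t,x}=Y_{s}^{t,x}\), in agreement with Remark 2.2, and PDE (1) reduces to the Burgers-type equation \(\partial_{t}u-u\triangledown_{x}u+x=0\) with \(u(T,x)=x\), which is indeed solved by \(u(t,x)=x\).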

Proposition 2.2

Let Assumptions (H1) and (H2) hold. The function u defined by (10) is continuous with respect to \((t,x)\). Moreover, u is Lipschitz continuous in x.

Proof

First, from Proposition 2.1 we get the Lipschitz continuity of u in x:

$$\bigl\vert u(t,x+\delta)-u(t,x) \bigr\vert = \bigl\vert Y^{t,x+\delta}_{t}-Y^{t,x}_{t} \bigr\vert \leq C \vert \delta \vert . $$

Next, we prove that \(u(t,x)\) is continuous in t. We have

$$\begin{aligned} \bigl\vert u(t+\delta,x)-u(t,x) \bigr\vert =& \bigl\vert u(t,x)-u\bigl(t+ \delta,X_{t+\delta }^{t,x}\bigr)+u\bigl(t+\delta,X_{t+\delta}^{t,x} \bigr)-u(t+\delta,x) \bigr\vert \\ \leq& \bigl\vert Y_{t}^{t,x}-Y_{t+\delta}^{t+\delta,X_{t+\delta }^{t,x}} \bigr\vert + \bigl\vert Y_{t+\delta}^{t+\delta,X_{t+\delta}^{t,x}}-Y_{t+\delta }^{t+\delta,x} \bigr\vert \\ \leq& \bigl\vert Y_{t}^{t,x}-Y_{t+\delta}^{t,x} \bigr\vert +C \bigl\vert X_{t+\delta}^{t,x}-x \bigr\vert \\ \leq& \int_{t}^{t+\delta} \bigl\vert f\bigl(r,X_{r}^{t,x},Y_{r}^{t,x}\bigr) \bigr\vert \,dr+C \bigl\vert X_{t+\delta}^{t,x}-x \bigr\vert \\ :=& \rho(\delta), \end{aligned}$$

where \(\rho(\delta)\rightarrow0\) as \(\delta\rightarrow0\). The continuity in t and the Lipschitz continuity in x imply the joint continuity of u in \((t,x)\). □

To improve the smoothness of the solutions of ODEs (2), we add the following assumption:

(H3) For any \(s\in[t,T]\), the functions \(b(s,\cdot, \cdot)\), \(f(s,\cdot , \cdot)\), and \(h(\cdot)\) are of class \(C^{2}\) with bounded derivatives.

Theorem 2.1

Under Assumptions (H1)–(H3), the map \(x\mapsto (X^{t,x},Y^{t,x})\) is continuously differentiable, with derivatives \((\partial_{x_{i}} X^{t,x}, \partial_{x_{i}} Y^{t,x} )_{1\leq i\leq n}\) satisfying the following ODEs:

$$ \textstyle\begin{cases} d\partial_{x_{i}} X_{s}^{t,x}=\triangledown_{x} b(s,X_{s}^{t,x},Y_{s}^{t,x})\partial_{x_{i}} X_{s}^{t,x}\,ds+\triangledown _{y}b(s,X_{s}^{t,x},Y_{s}^{t,x})\partial_{x_{i}} Y_{s}^{t,x}\,ds,\\ -d\partial_{x_{i}} Y_{s}^{t,x}=\triangledown_{x} f(s,X_{s}^{t,x},Y_{s}^{t,x})\partial_{x_{i}} X_{s}^{t,x}\,ds+\triangledown _{y}f(s,X_{s}^{t,x},Y_{s}^{t,x})\partial_{x_{i}} Y_{s}^{t,x}\,ds,\\ \partial_{x_{i}} X_{t}^{t,x}=(0,0,\ldots,1_{i},\ldots,0,0)^{\top},\qquad \partial_{x_{i}} Y_{T}^{t,x}=\triangledown_{x} h(X_{T}^{t,x})\partial_{x_{i}} X_{T}^{t,x},\quad s\in[t,T]. \end{cases} $$
(11)

Proof

A similar technique can be found in [11] and [12]. Here we give a detailed proof. Let \(h\neq0\) be a real number, \(h_{i}=he_{i}\), \(\triangle_{h_{i}}X_{s}=h^{-1}(X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x})\), and \(\triangle_{h_{i}}Y_{s}=h^{-1}(Y_{s}^{t,x+{h_{i}}}-Y_{s}^{t,x})\). Then

$$ \textstyle\begin{cases} d\triangle _{h_{i}}{X}_{s}^{t,x}={h}^{-1}[b(s,{X}_{s}^{t,x+{h_{i}}},{Y}_{s}^{t,x+{h_{i}}})-b(s,{X}_{s}^{t,x},{Y}_{s}^{t,x})]\,ds,\\ -d\triangle _{h_{i}}{Y}_{s}^{t,x}={h}^{-1}[f(s,{X}_{s}^{t,x+{h_{i}}},{Y}_{s}^{t,x+{h_{i}}})-f(s,{X}_{s}^{t,x},{Y}_{s}^{t,x})]\,ds,\\ \triangle_{h_{i}} X_{t}^{t,x}=e_{i},\qquad\triangle _{h_{i}}{Y}_{T}^{t,x}={h}^{-1}[h(X^{t,x+{h_{i}}}_{T})-h(X^{t,x}_{T})]. \end{cases} $$
(12)

Hence we treat this equation as a linear one:

$$ \textstyle\begin{cases} d\triangle_{h_{i}}{X}_{s}^{t,x}=\phi_{h_{i}}(s,\triangle_{h_{i}}{X}_{s}^{t,x},\triangle _{h_{i}}{Y}_{s}^{t,x})\,ds,\\ -d\triangle_{h_{i}}{Y}_{s}^{t,x}=\psi_{h_{i}}(s,\triangle_{h_{i}}{X}_{s}^{t,x},\triangle _{h_{i}}{Y}_{s}^{t,x})\,ds,\\ \triangle_{h_{i}} X^{t,x}_{t}=e_{i},\qquad\triangle _{h_{i}}{Y}_{T}^{t,x}={h}^{-1}[h(X^{t,x+{h_{i}}}_{T})-h(X^{t,x}_{T})], \end{cases} $$
(13)

where \(\phi_{h_{i}}\) and \(\psi_{h_{i}}\) are defined by

$$\begin{aligned} &\phi_{h_{i}}(s,x,y)=A_{h_{i}}(s)x+B_{h_{i}}(s)y, \\ &\psi_{h_{i}}(s,x,y)=C_{h_{i}}(s)x+D_{h_{i}}(s)y, \end{aligned}$$

and

$$\begin{aligned} &A_{h_{i}}(s)= \textstyle\begin{cases} \frac{b(s,X^{t,x+{h_{i}}}_{s},Y^{t,x+{h_{i}}}_{s})-b(s,X^{t,x }_{s},Y^{t,x+{h_{i}} }_{s})}{X^{t,x+{h_{i}}}_{s}-X^{t,x }_{s}} \quad\mbox{if } X^{t,x+{h_{i}}}_{s}\neq X^{t,x }_{s} ,\\ \triangledown_{x} b(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}})\quad\mbox{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(14)
$$\begin{aligned} &B_{h_{i}}(s)= \textstyle\begin{cases} \frac{b(s,X^{t,x}_{s},Y^{t,x+{h_{i}}}_{s})-b(s,X^{t,x }_{s},Y^{t,x }_{s})}{Y^{t,x+{h_{i}}}_{s}-Y^{t,x }_{s}} \quad\mbox{if } Y^{t,x+{h_{i}}}_{s}\neq Y^{t,x }_{s}, \\ \triangledown_{y}b(s,X_{s}^{t,x}, Y_{s}^{t,x})\quad\mbox{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(15)
$$\begin{aligned} &C_{h_{i}}(s)= \textstyle\begin{cases} \frac{f(s,X^{t,x+{h_{i}}}_{s},Y^{t,x+{h_{i}}}_{s})-f(s,X^{t,x }_{s},Y^{t,x+{h_{i}} }_{s})}{X^{t,x+{h_{i}}}_{s}-X^{t,x }_{s}} \quad\mbox{if } X^{t,x+{h_{i}}}_{s}\neq X^{t,x }_{s}, \\ \triangledown_{x} f(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}})\quad \mbox{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(16)
$$\begin{aligned} &D_{h_{i}}(s)= \textstyle\begin{cases} \frac{f(s,X^{t,x}_{s},Y^{t,x+{h_{i}}}_{s})-f(s,X^{t,x }_{s},Y^{t,x }_{s})}{Y^{t,x+{h_{i}}}_{s}-Y^{t,x }_{s}} \quad\mbox{if }Y^{t,x+{h_{i}}}_{s}\neq Y^{t,x }_{s}, \\ \triangledown_{y}f(s,X_{s}^{t,x},Y_{s}^{t,x})\quad\mbox{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(17)

for \(h\neq0\). Let

$$\begin{aligned} &\phi_{0}(s,x,y)=\triangledown_{x} b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)x+\triangledown _{y}b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)y, \\ &\psi_{0}(s,x,y)=\triangledown_{x} f\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)x+\triangledown _{y}f\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)y. \end{aligned}$$

From Proposition 2.1 we know that \((X^{t,x+{h_{i}}},Y^{t,x+{h_{i}}})\) converges to \((X^{t,x},Y^{t,x})\) as \(h_{i}\rightarrow0\). We have to prove that \((\triangle _{h_{i}}X,\triangle_{h_{i}}Y)\) converges to \((\partial_{x_{i}} X,\partial _{x_{i}} Y)\), the solution of ODEs (11). To pass to the limit in (13), we have to show that \(\phi_{h_{i}}(s,\partial_{x_{i}} X^{t,x},\partial_{x_{i}} Y^{t,x})\) converges to \(\phi_{0}(s,\partial_{x_{i}} X^{t,x},\partial_{x_{i}} Y^{t,x})\) and \(\psi_{h_{i}}(s,\partial_{x_{i}} X^{t,x},\partial_{x_{i}} Y^{t,x})\) converges to \(\psi_{0}(s,\partial_{x_{i}} X^{t,x},\partial_{x_{i}} Y^{t,x})\) as \(h_{i}\) goes to 0. Notice that

$$ A_{h_{i}}(s)= \int_{0}^{1}\triangledown_{x} b \bigl(s,X_{s}^{t,x}+\lambda \bigl(X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x} \bigr),Y_{s}^{t,x+{h_{i}}}\bigr)\,d\lambda. $$

Thus

$$ \begin{aligned} & \int_{0}^{T}\bigl[A_{h_{i}}(s)- \triangledown_{x} b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}} \bigr)\bigr]^{2}\bigl(\partial_{x_{i}} X_{s}^{t,x} \bigr)^{2}\,ds \\ &\quad \leq \int_{0}^{T} \int_{0}^{1}\bigl[\triangledown_{x} b \bigl(s,X_{s}^{t,x}+\lambda \bigl(X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x} \bigr),Y_{s}^{t,x+{h_{i}}}\bigr) \\ &\qquad{}-\triangledown_{x} b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}} \bigr)\bigr]^{2}\bigl(\partial_{x_{i}} X_{s}^{t,x} \bigr)^{2}\,d\lambda \,ds. \end{aligned} $$

We split this integral into two terms on the sets \(\{ |X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x}|\leq\eta\}\) and \(\{ |X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x}|>\eta\}\). By Assumption (H3), \(\triangledown _{x} b(s,x,y)\) is uniformly continuous and bounded (by a constant K). It follows that, for each \(\varepsilon>0\), there exists \(\eta>0\) such that

$$ \begin{aligned} &\bigl\Vert \bigl(A_{h_{i}}(s)-\triangledown_{x} b \bigl(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}}\bigr)\bigr) \bigl(\partial _{x_{i}} X_{s}^{t,x}\bigr) \bigr\Vert ^{2} \\ &\quad \leq\varepsilon^{2} \bigl\Vert \partial_{x_{i}} X_{s}^{t,x} \bigr\Vert ^{2}+K^{2} \int_{0}^{T}\mathrm{1}_{\{ \vert X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x} \vert >\eta\}} \bigl\vert \partial_{x_{i}} X_{s}^{t,x} \bigr\vert ^{2}\,ds. \end{aligned} $$

We split the term into two parts corresponding to the set \(\{|\partial _{x_{i}} X_{s}^{t,x}|\leq M\}\) and its complement. Then we have

$$ \int_{0}^{T}\mathrm{1}_{\{ \vert X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x} \vert >\eta\}} \bigl\vert \partial _{x_{i}} X_{s}^{t,x} \bigr\vert ^{2}\,ds\leq\frac{M^{2}}{\eta ^{2}} \bigl\Vert X_{s}^{t,x+{h_{i}}}-X_{s}^{t,x} \bigr\Vert ^{2}+ \int_{0}^{T}\mathrm{1}_{\{ \vert \partial _{x_{i}} X_{s}^{t,x} \vert >M\}} \bigl\vert \partial_{x_{i}} X_{s}^{t,x} \bigr\vert ^{2} \,ds. $$

By the Lebesgue dominated convergence theorem, since \(\partial_{x_{i}} X^{t,x}\) is square integrable, \(\int_{0}^{T}\mathrm{1}_{\{|\partial_{x_{i}} X_{s}^{t,x}|>M\} }|\partial_{x_{i}} X_{s}^{t,x}|^{2}\,ds\) converges to 0 as \(M \rightarrow\infty \). Choosing M sufficiently large and using the convergence of \(X_{s}^{t,x+{h_{i}}}\) to \(X_{s}^{t,x}\), it follows that

$$ \lim_{{h_{i}}\rightarrow0} \bigl\Vert \bigl(A_{h_{i}}(s)- \triangledown_{x} b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}} \bigr)\bigr) \bigl(\partial_{x_{i}} X_{s}^{t,x}\bigr) \bigr\Vert =0. $$

By the same method we get

$$\lim_{{h_{i}}\rightarrow0}\bigl\Vert \triangledown_{x} b \bigl(s,X_{s}^{t,x},Y_{s}^{t,x+{h_{i}}}\bigr) \bigl(\partial_{x_{i}} X_{s}^{t,x}\bigr)-\triangledown _{x} b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr) \bigl(\partial_{x_{i}} X_{s}^{t,x}\bigr)\bigr\Vert =0. $$

Hence it follows that

$$\lim_{{h_{i}}\rightarrow0} \bigl\Vert \bigl(A_{h_{i}}(s)- \triangledown_{x} b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)\bigr) \bigl(\partial_{x_{i}} X_{s}^{t,x}\bigr) \bigr\Vert =0. $$

Similar arguments give that

$$\begin{aligned} &\lim_{{h_{i}}\rightarrow0} \bigl\Vert \bigl(B_{h_{i}}(s)- \triangledown _{y}b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)\bigr) \bigl(\partial_{x_{i}} Y_{s}^{t,x}\bigr) \bigr\Vert =0, \\ &\lim_{{h_{i}}\rightarrow0} \bigl\Vert \bigl(C_{h_{i}}(s)- \triangledown_{x} f\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)\bigr) \bigl(\partial_{x_{i}} X_{s}^{t,x}\bigr) \bigr\Vert =0, \\ &\lim_{{h_{i}}\rightarrow0} \bigl\Vert \bigl(D_{h_{i}}(s)- \triangledown _{y}f\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)\bigr) \bigl(\partial_{x_{i}} Y_{s}^{t,x}\bigr) \bigr\Vert =0. \end{aligned}$$

It is easy to verify that the functions \(\phi_{h_{i}}\) and \(\psi_{h_{i}}\) are continuous with respect to the parameter \(h_{i}\) at 0. Indeed, by the limits above, as \(h_{i}\rightarrow0\), \(\phi_{h_{i}}(s,\partial_{x_{i}} X_{s}^{t,x},\partial_{x_{i}} Y_{s}^{t,x})\) converges to \(\phi_{0}(s,\partial_{x_{i}} X_{s}^{t,x},\partial_{x_{i}} Y_{s}^{t,x})\), and \(\psi_{h_{i}}(s,\partial_{x_{i}} X_{s}^{t,x},\partial_{x_{i}} Y_{s}^{t,x})\) converges to \(\psi_{0}(s,\partial_{x_{i}} X_{s}^{t,x},\partial _{x_{i}} Y_{s}^{t,x})\). By the continuous dependence of the solutions of the linear ODEs (13) on their coefficients (as in Proposition 2.1), \((\triangle_{h_{i}}X,\triangle _{h_{i}}Y)\) converges to \((\partial_{x_{i}} X,\partial_{x_{i}} Y)\). □
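In the illustrative example (\(b=-y\), \(f=x\), \(h(x)=x\)), where \(X_{s}^{t,x}=Y_{s}^{t,x}=xe^{-(s-t)}\), we have \(\partial_{x} X_{s}^{t,x}=\partial_{x} Y_{s}^{t,x}=e^{-(s-t)}\), and one checks directly that this pair satisfies (11): \(d\partial_{x} X_{s}^{t,x}=-\partial_{x} Y_{s}^{t,x}\,ds\), \(-d\partial_{x} Y_{s}^{t,x}=\partial_{x} X_{s}^{t,x}\,ds\), \(\partial_{x} X_{t}^{t,x}=1\), and \(\partial_{x} Y_{T}^{t,x}=\triangledown_{x} h(X_{T}^{t,x})\partial_{x} X_{T}^{t,x}=e^{-(T-t)}\).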

We also have the following property.

Proposition 2.3

Under Assumptions (H1)–(H3), there exists a constant \(C>0\), depending only on the Lipschitz constant L from (H1), μ, and T, such that

$$ \sup_{0\leq t\leq T} \bigl\Vert \triangledown_{x} X_{s}^{t,x} \bigr\Vert ^{2}+\sup _{0\leq t\leq T} \bigl\Vert \triangledown_{x} Y_{s}^{t,x} \bigr\Vert ^{2}\leq C. $$

Proof

From Proposition 2.1 and Theorem 2.1 we get

$$ \begin{aligned} &\sup_{0\leq t\leq T} \bigl\Vert \triangledown_{x} X_{s}^{t,x} \bigr\Vert ^{2}+\sup_{0\leq t\leq T} \bigl\Vert \triangledown_{x} Y_{s}^{t,x} \bigr\Vert ^{2}\\ &\quad =\sup _{0\leq t\leq T}\lim_{h\rightarrow0} \bigl( \bigl\Vert \triangle_{h}{X}_{s}^{t,x} \bigr\Vert ^{2}+ \bigl\Vert \triangle_{h}{Y}_{s}^{t,x} \bigr\Vert ^{2} \bigr) \\ &\quad =\sup_{0\leq t\leq T}\lim_{h\rightarrow0} \vert h \vert ^{-2} \bigl( \bigl\Vert X_{s}^{t,x+h}-X_{s}^{t,x} \bigr\Vert ^{2}+ \bigl\Vert Y_{s}^{t,x+h}-Y_{s}^{t,x} \bigr\Vert ^{2} \bigr) \\ &\quad \leq C. \end{aligned} $$

 □

3 Classical solution to the PDE

We now relate the function u defined by (10) to the parabolic partial differential equation (1).

Theorem 3.1

Let Assumptions (H1)–(H3) hold. Then the function u defined by (10) is of class \(C^{1,1}([0,T]\times\mathbb{R}^{n})\) and solves PDE (1). Moreover, \(u(t,x)\) is the unique classical solution of PDE (1).

Proof

By Theorem 2.1 and Proposition 2.2, \(u\in C^{0,1}([0,T]\times\mathbb{R}^{n})\). Let \(h>0\) be such that \(t+h\leq T\). By Remark 2.2, \(Y_{t+h}^{t,x}=Y_{t+h}^{t+h,X_{t+h}^{t,x}}\). Hence

$$ u(t+h,x)-u(t,x)=\bigl[u(t+h,x)-u\bigl(t+h,X_{t+h}^{t,x}\bigr)\bigr]+\bigl[u\bigl(t+h,X_{t+h}^{t,x}\bigr)-u(t,x)\bigr]. $$
(18)

Differentiating \(u(t,X_{s}^{t,x})\), we get

$$ Du\bigl(t,X_{s}^{t,x}\bigr)= \triangledown_{x} u\bigl(t,X_{s}^{t,x}\bigr) b \bigl(s,X_{s}^{t,x},Y_{s}^{t,x}\bigr)\,ds. $$
(19)

Here \(Du(t,x)=(du_{1}(t,x),du_{2}(t,x),\ldots,du_{m}(t,x))^{\top}\) is an m-dimensional column vector. Combining (18) with (19), we get

$$ \begin{aligned}[b] u(t+h,x)-u(t,x)={}&{-} \int_{t}^{t+h}\triangledown_{x} u \bigl(t+h,X_{s}^{t,x}\bigr) b\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)\,ds \\ &{}- \int_{t}^{t+h} f\bigl(s,X_{s}^{t,x},Y_{s}^{t,x} \bigr)\,ds. \end{aligned} $$
(20)

Let \(t=t_{0}< t_{1}<\cdots<t_{n}=T\). We have

$$ \begin{aligned}[b] u(T,x)-u(t,x)={}&{-}\sum _{i=0}^{n-1} \int_{t_{i}}^{t_{i+1}}\triangledown_{x} u \bigl(t_{i+1},X_{s}^{t_{i},x}\bigr) b \bigl(s,X_{s}^{t_{i},x},Y_{s}^{t_{i},x}\bigr)\,ds \\ &{}-\sum_{i=0}^{n-1} \int_{t_{i}}^{t_{i+1}} f\bigl(s,X_{s}^{t_{i},x},Y_{s}^{t_{i},x} \bigr)\,ds. \end{aligned} $$
(21)

It follows from Theorem 2.1 that if we take a sequence of meshes \(t=t_{0}^{n}< t_{1}^{n}<\cdots<t_{n}^{n}=T\) such that \(\lim_{n\rightarrow\infty}\sup_{i\leq n-1}(t_{i+1}^{n}-t_{i}^{n})=0\), then we obtain

$$ u(t,x)=h(x)+ \int_{t}^{T} \bigl[\triangledown_{x} u(s,x)b \bigl(s,x,u(s,x)\bigr)+f\bigl(s,x,u(s,x)\bigr) \bigr]\,ds. $$

Differentiating with respect to t, we conclude that \(u\in C^{1,1}([0,T]\times\mathbb{R}^{n})\) and satisfies PDE (1).

Now we consider the uniqueness of the solution. Suppose that a function \(v(t,x) \in C^{1,1}([0,T]\times\mathbb{R}^{n})\) is a solution of PDE (1). It suffices to show that \((X_{s}^{t,x},v(s,X_{s}^{t,x}); t\leq s\leq T)\) solves ODEs (2), where \(X_{s}^{t,x}\) is defined below; then the uniqueness of the solution of ODEs (2) gives the uniqueness of the solution of PDE (1). Let \(X_{s}^{t,x}\) solve

$$ dX_{s}^{t,x}=b\bigl(s,X_{s}^{t,x},v \bigl(s,X_{s}^{t,x}\bigr)\bigr)\,ds,\qquad X_{t}^{t,x}=x. $$

Letting \(t=t_{0}< t_{1}< t_{2}<\cdots<t_{n}=T\), we have

$$ \begin{aligned} &\sum_{i=0}^{n-1} \bigl[v\bigl(t_{i},X_{t_{i}}^{t,x}\bigr)-v \bigl(t_{i+1},X_{t_{i+1}}^{t,x}\bigr)\bigr] \\ &\quad =\sum_{i=0}^{n-1}\bigl[v \bigl(t_{i},X_{t_{i}}^{t,x}\bigr)-v \bigl(t_{i},X_{t_{i+1}}^{t,x}\bigr)\bigr]+\sum _{i=0}^{n-1}\bigl[v\bigl(t_{i},X_{t_{i+1}}^{t,x} \bigr)-v\bigl(t_{i+1},X_{t_{i+1}}^{t,x}\bigr)\bigr] \\ &\quad =-\sum_{i=0}^{n-1} \int_{t_{i}}^{t_{i+1}}\triangledown_{x} v \bigl(t_{i},X_{s}^{t,x}\bigr)b\bigl(s,X_{s}^{t,x},v \bigl(s,X_{s}^{t,x}\bigr)\bigr)\,ds \\ &\qquad {}+\sum_{i=0}^{n-1} \int_{t_{i}}^{t_{i+1}} \bigl[\triangledown_{x} v \bigl(s,X_{t_{i+1}}^{t,x}\bigr)b\bigl(s,X_{t_{i+1}}^{t,x},v \bigl(s,X_{t_{i+1}}^{t,x}\bigr)\bigr)+f\bigl(s,X_{t_{i+1}}^{t,x},v \bigl(s,X_{t_{i+1}}^{t,x}\bigr)\bigr) \bigr]\,ds. \end{aligned} $$

Here we apply the differential equation for \(X_{s}^{t,x}\) to \(v(t_{i},\cdot)\) to compute \(v(t_{i},X_{t_{i}}^{t,x})-v(t_{i},X_{t_{i+1}}^{t,x})\), and we compute \(v(t_{i},X_{t_{i+1}}^{t,x})-v(t_{i+1},X_{t_{i+1}}^{t,x})\) from the PDE (1). Finally, using that \(v\in C^{1,1}([0,T]\times \mathbb{R}^{n})\), we let the mesh size go to zero to obtain

$$ v\bigl(s,X_{s}^{t,x}\bigr)-h\bigl(X_{T}^{t,x} \bigr)= \int_{s}^{T}f\bigl(r,X_{r}^{t,x},v \bigl(r,X_{r}^{t,x}\bigr)\bigr)\,dr, $$

Hence \((X_{s}^{t,x},v(s,X_{s}^{t,x});t\leq s\leq T)\) solves ODEs (2). By the uniqueness of the solution of ODEs (2), \((X_{s}^{t,x},v(s,X_{s}^{t,x}))=(X_{s}^{t,x},Y_{s}^{t,x})\). In particular, \(v(t,x)=Y_{t}^{t,x}\). □
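The representation \(u(t,x)=Y_{t}^{t,x}\) also suggests a simple numerical scheme. The following minimal Python sketch (our own illustration; the SciPy-based shooting approach, the chosen tolerances, and all names are assumptions, not part of the paper) computes \(Y_{t}^{t,x}\) for the toy coefficients \(b=-y\), \(f=x\), \(h(x)=x\) by shooting on the terminal condition \(Y_{T}=h(X_{T})\); the result can be compared with the exact value \(u(t,x)=x\).

```python
# Illustrative sketch: compute u(t, x) = Y_t^{t,x} for the toy coefficients
# b(s, x, y) = -y, f(s, x, y) = x, h(x) = x (exact solution: u(t, x) = x)
# by a shooting method on the two-point boundary value problem (2).
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

T = 1.0

def ode_rhs(s, z):
    x, y = z
    return [-y, -x]               # dX/ds = b = -Y,  dY/ds = -f = -X

def terminal_gap(y0, t, x):
    # Integrate forward from (X_t, Y_t) = (x, y0) and measure Y_T - h(X_T).
    sol = solve_ivp(ode_rhs, (t, T), [x, y0], rtol=1e-10, atol=1e-12)
    xT, yT = sol.y[:, -1]
    return yT - xT                # h(x) = x

def u(t, x):
    # Shooting: find the initial value Y_t = y0 matching the terminal condition.
    lo, hi = -10.0 * abs(x) - 10.0, 10.0 * abs(x) + 10.0
    return brentq(lambda y0: terminal_gap(y0, t, x), lo, hi)

if __name__ == "__main__":
    for (t, x) in [(0.0, 1.0), (0.3, -2.0), (0.7, 0.5)]:
        print(t, x, u(t, x))      # each value should be close to x
```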

4 Weak solutions in Sobolev space

In this section, we prove that the function \(u(t,x)\) defined by (10) is the unique weak solution of PDE (1) in the Sobolev space under some usual assumptions. First, we recall the definition of a Sobolev weak solution for PDE (1) from [8] and [9].

Definition 4.1

A function u is called a Sobolev weak solution (solution in \(L^{2}_{\rho}(\mathbb{R}^{n};\mathbb{R}^{m})\)) of PDE (1) if \(u\in L^{2}([0,T];L^{2}_{\rho}(\mathbb{R}^{n};\mathbb{R}^{m}))\) and for an arbitrary \(\varphi\in C_{c}^{1,\infty}([0,T]\times\mathbb{R}^{n};\mathbb{R}^{m})\),

$$ \begin{aligned}[b] & \int_{t}^{T} \int_{\mathbb{R}^{n}}u(s,x)\partial_{s}\varphi(s,x)\,dx\,ds+ \int _{\mathbb{R}^{n}}u(t,x)\varphi(t,x)\,dx- \int_{\mathbb{R}^{n}}u(T,x)\varphi (T,x)\,dx \\ &\qquad {}+ \int_{t}^{T} \int_{\mathbb{R}^{n}}\triangledown_{x} \bigl(b\bigl(s,x,u(s,x) \bigr)\varphi (s,x)\bigr)u(s,x)\,dx\,ds \\ &\quad = \int_{t}^{T} \int_{\mathbb{R}^{n}}f\bigl(s,x,u(s,x)\bigr)\varphi(s,x)\,dx\,ds, \end{aligned} $$
(22)

where ρ is the weight function defined as \(\rho(x):=(1+|x|^{2})^{q}, q\leq-2\), and \(L^{2}_{\rho}\) is the Hilbert space with the inner product

$$ \langle u_{1}, u_{2}\rangle_{L^{2}_{\rho}}= \int_{\mathbb {R}^{n}} u_{1}(x)u_{2}(x) \rho(x)\,dx, $$

where \(u_{1}(x)u_{2}(x)\) denotes the Euclidean inner product of \(u_{1}(x)\) and \(u_{2}(x)\).
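For orientation (a formal sketch we add here; it is not needed in the proofs below), (22) arises from multiplying PDE (1) by a test function φ and integrating. Indeed,

$$ \int_{t}^{T} \int_{\mathbb{R}^{n}}\partial_{s}u(s,x)\varphi(s,x)\,dx\,ds= \int_{\mathbb{R}^{n}}u(T,x)\varphi(T,x)\,dx- \int_{\mathbb{R}^{n}}u(t,x)\varphi(t,x)\,dx- \int_{t}^{T} \int_{\mathbb{R}^{n}}u(s,x)\partial_{s}\varphi(s,x)\,dx\,ds, $$

and, since φ has compact support in x,

$$ \int_{t}^{T} \int_{\mathbb{R}^{n}}\triangledown_{x}u(s,x)\,b\bigl(s,x,u(s,x)\bigr)\varphi(s,x)\,dx\,ds=- \int_{t}^{T} \int_{\mathbb{R}^{n}}\triangledown_{x}\bigl(b\bigl(s,x,u(s,x)\bigr)\varphi(s,x)\bigr)u(s,x)\,dx\,ds. $$

Inserting these two identities into \(\int_{t}^{T}\int_{\mathbb{R}^{n}}(\partial_{s}u+\triangledown_{x}u\,b+f)\varphi\,dx\,ds=0\) gives exactly (22).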

We make the following assumption:

(H4) For any \(s\in[t,T]\), the function \(b(s,\cdot, \cdot)\) is in \(C^{1,1}(\mathbb{R}^{n}\times\mathbb{R}^{m};\mathbb{R}^{n})\) with bounded derivatives.

Theorem 4.1

Let Assumptions (H1), (H2), and (H4) hold, and let \(X_{s}^{t,x}\) be the solution of the forward equation in ODEs (2). Then the map \(X_{s}^{t,\cdot}: \mathbb{R}^{n} \rightarrow\mathbb{R}^{n}\) is a homeomorphism: it is one-to-one and onto, so that its inverse map exists, and the inverse map, denoted by \(\hat{X}_{s}^{t,\cdot}: \mathbb{R}^{n} \rightarrow\mathbb {R}^{n}\), is also continuous.

Proof

The one-to-one property of the map \(X_{s}^{t,\cdot}\) follows from Proposition 2.1. The rest of the proof is similar to [10] (pp. 225–227), and hence we omit it. □

Lemma 4.1

(Norm equivalence principle)

Let Assumptions (H1), (H2), and (H4) hold, let \(X_{s}^{t,x}\) be the solution of the forward equation in ODEs (2), and let ρ be the weight function. Then there exist constants \(c, C>0\) such that, for any \(s\in[t,T]\) and \(\varphi\in L_{\rho}^{1}(\mathbb {R}^{n},\mathbb{R}^{m})\),

$$ c \int_{\mathbb{R}^{n}} \bigl\vert \varphi(x) \bigr\vert \rho(x)\,dx\leq \int_{\mathbb {R}^{n}} \bigl\vert \varphi\bigl(X_{s}^{t,x} \bigr) \bigr\vert \rho(x)\,dx\leq C \int_{\mathbb{R}^{n}} \bigl\vert \varphi (x) \bigr\vert \rho(x)\,dx, $$
(23)

and, for any \(\Psi\in L_{\rho}^{1}([t,T]\times\mathbb{R}^{n}; \mathbb{R}^{m})\),

$$ \begin{aligned}[b] c \int_{t}^{T} \int_{\mathbb{R}^{n}} \bigl\vert \Psi(s,x) \bigr\vert \rho(x)\,dx &\leq \int _{t}^{T} \int_{\mathbb{R}^{n}} \bigl\vert \Psi\bigl(s,X_{s}^{t,x} \bigr) \bigr\vert \rho(x)\,dx\\ &\leq C \int _{t}^{T} \int_{\mathbb{R}^{n}} \bigl\vert \Psi(s,x) \bigr\vert \rho(x)\,dx, \end{aligned} $$
(24)

where c and C depend on \(T, L, \rho\), and the bounds of the first derivatives of \(b,f,h\), but do not depend on the initial value x.

Proof

First, we take \(\rho(x):=(1+|x|^{2})^{q}\), \(q\in \mathbb{R}\). We claim that there exist constants \(c,C>0\) such that

$$ c\leq \frac{J(\hat{X}_{s}^{t,y})\rho(\hat{X}_{s}^{t,y})}{\rho(x)} \leq C, \quad\forall y\in \mathbb{R}^{n}, t\leq s\leq T. $$
(25)

Here \(\hat{X}_{s}^{t,y}\) is the inverse flow of \({X}_{s}^{t,y}\), and \(J(\hat {X}_{s}^{t,y}):=\operatorname{det} \triangledown_{y} \hat{X}_{s}^{t,y}\) is the determinant of the Jacobian matrix of \(\hat{X}_{s}^{t,y}\). The existence of \(\hat {X}_{s}^{t,y}\) is given by Theorem 4.1. Now we prove (25). Assume that \(T-h\leq t\leq T\) for some small \(h>0\). We substitute \(x=\hat{X}_{s}^{t,y}\) into ODEs (2) with \(X_{s}^{t,\hat{X}_{s}^{t,y}}=X_{s}^{t,\cdot}\circ\hat {X}_{s}^{t,y}=y\). Then

$$ \textstyle\begin{cases} \hat{X}_{s}^{t,y}=y-\int_{t}^{s}b(r,X_{r}^{t,\hat{X}_{s}^{t,y}},{Y}_{r}^{t,\hat {X}_{s}^{t,y}})\,dr,\\ Y_{s}^{t,\hat{X}_{s}^{t,y}}=h(X_{T}^{t,\hat{X}_{s}^{t,y}})+\int _{s}^{T}f(r,X_{r}^{t,\hat{X}_{s}^{t,y}},Y_{r}^{t,\hat{X}_{s}^{t,y}})\,dr. \end{cases} $$
(26)

We differentiate (26) with respect to y to get

$$\begin{aligned} \triangledown_{y} \hat{X}_{s}^{t,y}& = I- \int_{t}^{s}b'_{x} \bigl(r,X_{r}^{t,\hat {X}_{s}^{t,y}},Y_{r}^{t,\hat{X}_{s}^{t,y}}\bigr) \triangledown_{y} X_{r}^{t,\hat {X}_{s}^{t,y}}\,dr- \int_{t}^{s}b'_{y} \bigl(r,X_{r}^{t,\hat{X}_{s}^{t,y}},Y_{r}^{t,\hat {X}_{s}^{t,y}}\bigr) \triangledown_{y} Y_{r}^{t,\hat{X}_{s}^{t,y}}\,dr \\ &=:I+J_{s}^{t}(y). \end{aligned}$$
(27)

We write \(\rho(x)=e^{F(x)}\), where \(F(x)=q\log(1+|x|^{2})\) belongs to \(C_{l,b}^{2}(\mathbb{R}^{n})\). Using the differential equation for \(F(\hat{X}_{s}^{t,y})\) (see [10], pp. 262–263), we get

$$ F\bigl(\hat{X}_{s}^{t,y}\bigr)-F(y)= \int_{t}^{s}b\bigl(r,\hat{X}_{r}^{t,y},{Y}_{r}^{t,\hat {X}_{s}^{t,y}} \bigr)\triangledown_{x} F\bigl(\hat{X}_{r}^{t,y} \bigr)\,dr. $$

It follows that

$$ \frac{\rho(\hat{X}_{s}^{t,y})}{\rho(y)}=\exp\bigl(F\bigl(\hat{X}_{s}^{t,y} \bigr)-F(y)\bigr)=\exp \biggl( \int_{t}^{s}b\bigl(r,\hat{X}_{r}^{t,y},{Y}_{r}^{t,\hat{X}_{s}^{t,y}} \bigr)\triangledown _{x} F\bigl(\hat{X}_{r}^{t,y} \bigr)\,dr\biggr):=N_{t}^{s}(y). $$

Since the first derivatives of F are bounded, it is easy to verify that \(N_{t}^{s}(y)\) is bounded by Assumptions (H1) and (H4). Then there exist two constants \(r>0\) and \(R>0\) such that

$$ r\leq N_{t}^{s}(y) \leq R,\quad\forall y\in \mathbb{R}^{n}. $$
(28)

Since \(J(\hat{X}_{s}^{t,y}):=\operatorname{det}\triangledown_{y}\hat{X}_{s}^{t,y}\), from (27) we obtain

$$ 1- \bigl\Vert J_{s}^{t}(y) \bigr\Vert \leq J\bigl(\hat{X}_{s}^{t,y}\bigr) \leq1+ \bigl\Vert J_{s}^{t}(y) \bigr\Vert . $$

We consider (27), apply the differential equation, and use a similar method as in Proposition 2.3. Then there exists a constant \(c_{0}>0\), depending only on \(L,\mu,T\), and the bounds of the first-order derivatives of \(b,f\), and h, such that

$$ \sup_{t\leq r\leq T}\bigl( \bigl\Vert \triangledown_{y} X_{r}^{t,\hat {X}_{s}^{t,y}} \bigr\Vert ^{2}+ \bigl\Vert \triangledown_{y} Y_{r}^{t,\hat{X}_{s}^{t,y}} \bigr\Vert ^{2}\bigr)\leq c_{0}. $$

So

$$ \bigl\Vert J_{s}^{t}(y) \bigr\Vert ^{2}\leq c_{0}(s-t). $$
(29)

By (28) and (29) the upper and lower bounds can be estimated as

$$ r\bigl(1-\sqrt{c_{0}(s-t)}\bigr)\leq\frac{J(\hat{X}_{s}^{t,y})\rho(\hat {X}_{s}^{t,y})}{\rho(y)}\leq R\bigl(1+ \sqrt{c_{0}(s-t)}\bigr). $$

If \(s-t\) is small enough, then the lower bound \(r(1-\sqrt{c_{0}(s-t)})>0\). Therefore we can take h small enough such that (25) holds for \(T-h\leq s\leq T\). Note that c and C do not depend on the initial value y. So we use the flow property \(\hat {X}_{s}^{t,y}=\hat{X}_{r}^{t,\cdot}\circ\hat{X}_{s}^{r,y}\), \(t\leq r \leq s \leq T\) (which follows from the uniqueness of the solution, as in Remark 2.2), to drop the restriction \(T-h\leq t\leq T\) and extend inequality (25) to the whole interval \([t,T]\).

Finally, we prove (23). Using the change of variable \(y={X}_{s}^{t,x}\), we get

$$ \int_{\mathbb{R}^{n}} \bigl\vert \varphi \bigl(X_{s}^{t,x}\bigr) \bigr\vert \rho(x)\,dx= \int_{\mathbb {R}^{n}} \bigl\vert \varphi(y) \bigr\vert \rho(y)\frac{J(\hat{X}_{s}^{t,y})\rho(\hat {X}_{s}^{t,y})}{\rho(y)}\,dy. $$

By (25) we get (23). Moreover, for a function \((s,x)\rightarrow\Psi(s,x)\), we consider \(x\rightarrow\Psi (s,x)\) in the same way as before. We integrate with respect to \(s\in [t,T]\) to get (24). The lemma is proved. □

Let the mollifier \(K_{d}\) be defined as \(K_{d}(x):=C_{d}\exp(\frac {-1}{1-|x|^{2}})\) for \(|x|<1\) and \(K_{d}(x)=0\) otherwise, where \(C_{d}\) is chosen such that \(\int_{\mathbb{R}^{n}} K_{d}(x)\,dx=1\). Denote \(K_{d}^{m}(x):=m^{d}K_{d}(mx)\). Suppose that \(\phi:\mathbb{R}^{n} \rightarrow \mathbb{R}\) is a Hölder-continuous function with exponent \(\gamma\in (0,1)\) and define

$$ \phi^{m}(x):= \int_{\mathbb{R}^{n}} K_{d}^{m}(x-y) \phi(y)\,dy $$

for \(m>0\).

By [13], \(\phi^{m}\) is a \(C^{\infty}\) function and is Hölder-continuous with exponent γ. Moreover, \(\phi ^{m}\rightarrow\phi\) uniformly on \(\mathbb{R}^{n}\) as \(m\rightarrow\infty\). Similarly, we define

$$ \begin{aligned} &h^{m}(x)= \int_{\mathbb{R}^{n}} K_{d}^{m} \bigl(x-x'\bigr)h\bigl(x'\bigr)\,dx', \\ &b^{m}(r,x,y)= \int_{\mathbb{R}^{n}\times\mathbb {R}^{m}} K_{d}^{m} \bigl(x-x'\bigr)K_{k}^{m}\bigl(y-y' \bigr)b\bigl(r,x',y'\bigr)\,dx'\,dy', \\ &f^{m}(r,x,y)= \int_{\mathbb{R}^{n}\times\mathbb {R}^{m}} K_{d}^{m} \bigl(x-x'\bigr)K_{k}^{m}\bigl(y-y' \bigr)f\bigl(r,x',y'\bigr)\,dx'\,dy'. \end{aligned} $$
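A minimal one-dimensional numerical sketch of the mollification \(\phi^{m}\) defined above (our own illustration; the Riemann-sum discretization, the window size, and all names are our choices, and NumPy is assumed) is the following.

```python
# One-dimensional sketch of the mollification phi^m defined above:
# K_1(x) = C_1 exp(-1/(1-x^2)) for |x| < 1, K_1^m(x) = m K_1(m x),
# phi^m(x) = \int K_1^m(x - y) phi(y) dy, approximated by a Riemann sum.
import numpy as np

def K1(x):
    # Bump kernel supported on (-1, 1), before normalization.
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# Normalizing constant C_1 so that the mollifier integrates to one.
_y = np.linspace(-1.0, 1.0, 20001)
C1 = 1.0 / (K1(_y).sum() * (_y[1] - _y[0]))

def phi_m(phi, x, m, half_width=1.5):
    # Riemann-sum approximation of (K_1^m * phi)(x); the window covers supp K_1^m.
    y = np.linspace(x - half_width, x + half_width, 4001)
    kernel = m * C1 * K1(m * (x - y))
    return (kernel * phi(y)).sum() * (y[1] - y[0])

if __name__ == "__main__":
    phi = np.abs                      # a Lipschitz (hence Hoelder) test function
    for m in (2, 8, 32):
        errs = [abs(phi_m(phi, x, m) - phi(x)) for x in np.linspace(-2.0, 2.0, 9)]
        print(m, max(errs))           # the uniform error shrinks as m grows
```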

It is easy to see that \((h^{m}(\cdot),b^{m}(t,\cdot,\cdot),f^{m}(t,\cdot,\cdot ))_{m\in N}\) are \(C^{\infty}\) functions such that, for any \(t\in [0,T]\), \(x\in\mathbb{R}^{n}\), and \(y\in\mathbb{R}^{m}\), \((b^{m},f^{m},h^{m})(t,x,y)\rightarrow(b,f,h)(t,x,y)\) as \(m\rightarrow\infty \). From the definition we can easily check that, for m large enough, \(h^{m},b^{m},f^{m}\) also satisfy Assumptions (H1)–(H3) with constants independent of m. By [5] and Proposition 2.1 the mollified ODEs

$$ \textstyle\begin{cases} \dot{X}_{s,m}^{t,x}=b^{m}(s,{X}_{s,m}^{t,x},{Y}_{s,m}^{t,x}),\\ -\dot{Y}_{s,m}^{t,x}=f^{m}(s,{X}_{s,m}^{t,x},{Y}_{s,m}^{t,x}),\\ X_{t,m}^{t,x}=x,\qquad Y_{T,m}^{t,x}=h^{m}(X^{t,x}_{T,m}), \end{cases} $$
(30)

have a unique solution \((X_{\cdot,m}^{t,\cdot},Y_{\cdot,m}^{t,\cdot })\in L^{2}([0,T];L_{\rho}^{2}(\mathbb{R}^{n};\mathbb{R}^{n}))\otimes L^{2}([0,T];L_{\rho}^{2}(\mathbb{R}^{n};\mathbb{R}^{m}))\).

Using \(Y_{t,m}^{t,x}\), we define a function \(u_{m}\) from \([0,T]\times\mathbb {R}^{n}\) to \(\mathbb{R}^{m}\) by

$$ u_{m}(t,x):=Y_{t,m}^{t,x}. $$

Lemma 4.2

Under Assumptions (H1) and (H2), \((X_{\cdot,m}^{t,\cdot},Y_{\cdot ,m}^{t,\cdot})\rightarrow(X_{\cdot}^{t,\cdot},Y_{\cdot}^{t,\cdot})\) in \(L^{2}([0,T]; L_{\rho}^{2}(\mathbb{R}^{n};\mathbb{R}^{n}))\otimes L^{2}([0,T];L_{\rho}^{2}(\mathbb{R}^{n};\mathbb{R}^{m}))\) as \(m\rightarrow\infty\).

Proof

We set \(\tilde{X}_{s}^{t,x}={X}_{s,m}^{t,x}-{X}_{s}^{t,x}\) and \(\tilde {Y}_{s}^{t,x}={Y}_{s,m}^{t,x}-{Y}_{s}^{t,x}\). Differentiating \(\langle G\tilde{X}_{s}^{t,x}, \tilde{Y}_{s}^{t,x} \rangle\) and arguing as in Proposition 2.1, we get

$$ \begin{aligned} &\bigl\langle G\tilde{X}_{T}^{t,x},h^{m}\bigl(X_{T}^{t,x}\bigr)-h\bigl(X_{T}^{t,x}\bigr) \bigr\rangle +U_{1} \int _{t}^{T}\langle G\tilde{X}_{s},G \tilde{X}_{s}\rangle \,ds+U_{2} \int_{t}^{T}\bigl\langle G^{\top} \tilde{Y}_{s},G^{\top}\tilde{Y}_{s}\bigr\rangle \,ds \\ &\quad \leq- \int_{t}^{T}\bigl\langle G\tilde{X}_{s}, f^{m}\bigl(s,{X}_{s}^{t,x},{Y}_{s}^{t,x} \bigr)-f\bigl(s,{X}_{s}^{t,x},{Y}_{s}^{t,x} \bigr)\bigr\rangle \,ds \\ &\qquad {}+ \int_{t}^{T}\bigl\langle G^{\top}\tilde{Y}_{s}, b^{m}\bigl(s,{X}_{s}^{t,x},{Y}_{s}^{t,x} \bigr)-b\bigl(s,{X}_{s}^{t,x},{Y}_{s}^{t,x}\bigr)\bigr\rangle \,ds \\ &\quad \rightarrow0 \quad\mbox{as } m \rightarrow\infty. \end{aligned} $$

Finally, by (3) and the definition of ρ, arguing as in Proposition 2.1, we obtain \((X_{\cdot ,m}^{t,\cdot},Y_{\cdot,m}^{t,\cdot})\rightarrow(X_{\cdot}^{t,\cdot },Y_{\cdot}^{t,\cdot})\) in \(L^{2}([0,T];L_{\rho}^{2}(\mathbb{R}^{n};\mathbb {R}^{n}))\otimes L^{2}([0,T];L_{\rho}^{2}(\mathbb{R}^{n};\mathbb{R}^{m}))\) as \(m\rightarrow\infty\). □

Theorem 4.2

Under Assumptions (H1), (H2), and (H4), the function \(u(t,x)\) defined by (10) is the unique Sobolev weak solution of PDE (1) with \(u(T,x)=h(x)\).

Proof

Existence. By Lemma 4.1 and Proposition 2.1 we have

$$ \begin{aligned} \int_{0}^{T} \int_{\mathbb{R}^{n}} \bigl\vert u(s,x) \bigr\vert ^{2}\rho(x)\,dx\,ds&\leq C \int _{0}^{T} \int_{\mathbb{R}^{n}} \bigl\vert u\bigl(s,X_{s}^{t,x} \bigr) \bigr\vert ^{2}\rho(x)\,dx\,ds \\ &=C \int_{0}^{T} \int_{\mathbb{R}^{n}} \bigl\vert Y_{s}^{t,x} \bigr\vert ^{2}\rho(x)\,dx\,ds< \infty. \end{aligned} $$

So \(u(s,x)\in L^{2}([0,T];L_{\rho}^{2}(\mathbb{R}^{n};\mathbb{R}^{m}))\).

From the structure of the mollified ODEs (30) and Theorem 3.1 we get that \(u_{m}(t,x):=Y_{t,m}^{t,x}\) is the unique classical solution of the following PDE:

$$ \textstyle\begin{cases} \partial_{t} u_{m}(t,x)+\triangledown_{x} u_{m}(t,x)b^{m}(t,x,u_{m}(t,x))+f^{m}(t,x,u_{m}(t,x)) = 0,\\ u_{m}(T,x)=h^{m}(x),\quad(t,x)\in[0,T]\times\mathbb{R}^{n}. \end{cases} $$

Moreover, by the integration-by-parts formula it is easy to verify that \(u_{m}(t,x)\) also satisfies the following weak formulation: for any smooth test function \(\varphi\in C_{c}^{1,\infty}([0,T]\times\mathbb {R}^{n};\mathbb{R}^{m})\),

$$ \begin{aligned}[b] & \int_{t}^{T} \int_{\mathbb{R}^{n}}u_{m}(s,x)\partial_{s}\varphi(s,x) \,dx\,ds+ \int _{\mathbb{R}^{n}}u_{m}(t,x)\varphi(t,x)\,dx- \int_{\mathbb{R}^{n}}u_{m}(T,x)\varphi (T,x)\,dx \\ &\qquad {}+ \int_{t}^{T} \int_{\mathbb{R}^{n}}\triangledown_{x} \bigl(b^{m} \bigl(s,x,u_{m}(s,x)\bigr)\varphi (s,x)\bigr)u_{m}(s,x)\,dx \,ds \\ &\quad = \int_{t}^{T} \int_{\mathbb{R}^{n}}f^{m}\bigl(s,x,u_{m}(s,x)\bigr)\varphi(s,x)\,dx \,ds. \end{aligned} $$
(31)

Now we verify that \(u(t,x)\) satisfies (22) with \(u(T,x)=h(x)\) by passing to the limit in (31), using the \(L^{2}_{\rho}\) convergence of \(u_{m}\) from Lemma 4.2. We only show the convergence of the last term. By the Lipschitz assumption and the fact that \(f^{m}\rightarrow f\) in the \(L^{2}_{\rho}\) sense as \(m\rightarrow\infty\), for any \(\Psi\in C^{1,\infty }_{c}([0,T]\times\mathbb{R}^{n})\), we have

$$ \begin{aligned} & \biggl\vert \int_{t}^{T} \int_{\mathbb{R}^{n}} f^{m}\bigl(s,x,u_{m}(s,x) \bigr)\Psi(s,x)\,dx\,ds- \int _{t}^{T} \int_{\mathbb{R}^{n}} f\bigl(s,x,u(s,x)\bigr)\Psi(s,x)\,dx \,ds \biggr\vert ^{2} \\ &\quad \leq C_{p} \int_{t}^{T} \int_{\mathbb {R}^{n}} \bigl\vert f^{m} \bigl(s,x,u_{m}(s,x)\bigr)-f^{m}\bigl(s,x,u(s,x)\bigr) \bigr\vert ^{2}\Psi(s,x)\,dx\,ds \\ &\qquad {}+C_{p} \int_{t}^{T} \int_{\mathbb {R}^{n}} \bigl\vert f^{m} \bigl(s,x,u(s,x)\bigr)-f\bigl(s,x,u(s,x)\bigr) \bigr\vert ^{2} \Psi(s,x)\,dx\,ds \\ &\quad \leq C_{p,L} \int_{t}^{T} \int_{\mathbb{R}^{n}} \bigl\vert u_{m}(s,x)-u(s,x) \bigr\vert ^{2}\Psi (s,x)\,dx\,ds \\ &\qquad {}+C_{p} \int_{t}^{T} \int_{\mathbb {R}^{n}} \bigl\vert f^{m} \bigl(s,x,u(s,x)\bigr)-f\bigl(s,x,u(s,x)\bigr) \bigr\vert ^{2} \Psi(s,x)\,dx\,ds \\ &\quad \rightarrow 0 \quad\mbox{as } m\rightarrow\infty. \end{aligned} $$

Therefore \(u(t,x)\) satisfies (22) and is a Sobolev weak solution of (1) with \(u(T,x)=h(x)\).

Uniqueness. Let v be another Sobolev weak solution of PDE (1). By Definition 4.1 we get

$$ \begin{aligned}[b] & \int_{t}^{T} \int_{\mathbb{R}^{n}}v(s,x)\partial_{s}\varphi(s,x)\,dx\,ds+ \int _{\mathbb{R}^{n}}v(t,x)\varphi(t,x)\,dx- \int_{\mathbb{R}^{n}}v(T,x)\varphi (T,x)\,dx \\ &\qquad {}+ \int_{t}^{T} \int_{\mathbb{R}^{n}}v(s,x)\triangledown_{x} \bigl(b \bigl(s,x,u(s,x)\bigr)\varphi(s,x)\bigr)\,dx\,ds \\ &\quad = \int_{t}^{T} \int_{\mathbb{R}^{n}}f\bigl(s,x,u(s,x)\bigr)\varphi(s,x)\,dx\,ds. \end{aligned} $$
(32)

By Lemma 4.3 in [8], for the test function \(\psi_{t}(s)=\varphi(\hat{X}_{s}^{t})J(\hat{X}_{s}^{t})\), we obtain

$$ \int_{t}^{T} \int_{\mathbb{R}^{n}} v(s,x)\partial_{s} \varphi(s,x)\,dx\,ds= - \int_{t}^{T} \int_{\mathbb{R}^{n}}v(s,x)\triangledown _{x}\bigl(b \bigl(s,x,u(s,x)\bigr)\varphi(s,x)\bigr)\,dx\,ds. $$
(33)

Substituting (33) into (32), we get

$$ \int_{\mathbb{R}^{n}}v(t,x)\varphi(t,x)\,dx- \int_{\mathbb{R}^{n}}v(T,x)\varphi (T,x)\,dx= \int_{t}^{T} \int_{\mathbb{R}^{n}}f\bigl(s,x,u(s,x)\bigr)\varphi(s,x)\,dx\,ds. $$
(34)

Let us make the change of variable \(y=\hat{X}_{s}^{t,x}\) in each term of (34). Then (34) becomes

$$ \int_{\mathbb{R}^{n}}v\bigl(t,X_{s}^{t,y}\bigr) \varphi(y)\,dy- \int_{\mathbb {R}^{n}}h\bigl(X_{T}^{t,y}\bigr) \varphi(y)\,dy= \int_{t}^{T} \int_{\mathbb {R}^{n}}f\bigl(s,X_{s}^{t,y},u \bigl(s,X_{s}^{t,y}\bigr)\bigr)\varphi(y)\,dy\,ds. $$

Since φ is arbitrary, we can prove that, for almost every y,

$$ v\bigl(t,X_{s}^{t,y}\bigr)=h\bigl(X_{T}^{t,y} \bigr)+ \int_{t}^{T}f\bigl(s,X_{s}^{t,y},u \bigl(s,X_{s}^{t,y}\bigr)\bigr)\,ds. $$

Hence \((X_{s}^{t,y},v(s,X_{s}^{t,y}))\) solves ODEs (2), and the uniqueness of the solution of ODEs (2) yields the uniqueness of the weak solution, which completes the proof. □

5 Viscosity solution to the PDE

In this section, we prove that the function \(u(t,x)\) defined by (10) is the unique viscosity solution of PDE (1) under Assumptions (H1) and (H2). We first recall the definition of a viscosity solution for (1) from [14] and [15].

Definition 5.1

Let u be a continuous function from \([0,T] \times\mathbb {R}^{n}\) to \(\mathbb{R}^{m}\) satisfying \(u_{i}(T,x)=h_{i}(x)\), \(x \in \mathbb{R}^{n}\), \(1\leq i\leq m\). It is called a viscosity subsolution (resp., supersolution) of PDE (1) if for any \(1\leq i \leq m\), \((t,x)\in[0,T)\times\mathbb{R}^{n}\), and \(\varphi\in C^{1,1}([0,T] \times\mathbb{R}^{n};\mathbb{R})\) such that \(\varphi-u_{i}\) attains a local minimum (resp., maximum) at \((t,x)\) with \(\varphi(t,x)-u_{i}(t,x)=0\), we have

$$\begin{aligned} &\partial_{t} \varphi(t,x)+ \triangledown_{x} \varphi (t,x)b \bigl(t,x,u(t,x)\bigr)+f_{i}\bigl(t,x,u(t,x)\bigr) \geq0, \\ &\bigl( \textit{resp. }\partial_{t}\varphi(t,x)+ \triangledown_{x} \varphi (t,x)b\bigl(t,x,u(t,x)\bigr)+f_{i}\bigl(t,x,u(t,x)\bigr) \leq0 \bigr). \end{aligned}$$

A function u is called a viscosity solution of PDE (1) if it is both a viscosity subsolution and a viscosity supersolution.
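For orientation (a standard observation that we add here), any classical solution is also a viscosity solution: if \(u\in C^{1,1}([0,T]\times\mathbb{R}^{n})\) solves (1) and \(\varphi-u_{i}\) attains a local minimum at \((t,x)\in[0,T)\times\mathbb{R}^{n}\) with \(\varphi(t,x)=u_{i}(t,x)\), then \(\triangledown_{x}\varphi(t,x)=\triangledown_{x}u_{i}(t,x)\) and \(\partial_{t}\varphi(t,x)\geq\partial_{t}u_{i}(t,x)\), so that \(\partial_{t}\varphi+\triangledown_{x}\varphi b+f_{i}\geq\partial_{t}u_{i}+\triangledown_{x}u_{i}b+f_{i}=0\); the supersolution inequality follows symmetrically. In particular, under (H1)–(H3) the classical solution of Theorem 3.1 is a viscosity solution of PDE (1).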

Theorem 5.1

Let Assumptions (H1) and (H2) hold. The function \(u(t,x)\) defined by (10) is continuous and is a viscosity solution of PDE (1).

Proof

The continuity of u follows from Proposition 2.2. Next, we only show that u is a viscosity subsolution of PDE (1). A similar argument would show that u is also a viscosity supersolution.

Let \((t,x) \in[0,T)\times\mathbb{R}^{n}\), \(\varphi\in C^{1,1}([0,T)\times\mathbb{R}^{n};\mathbb{R})\), and let \(\varphi-u_{i}\) attain a local minimum at \((t,x)\) with \(\varphi(t,x)-u_{i}(t,x)=0\). Suppose that

$$\partial_{t}\varphi(t,x)+ \triangledown_{x} \varphi (t,x)b \bigl(t,x,u(t,x)\bigr)+f_{i}\bigl(t,x,u(t,x)\bigr) < 0, $$

we will obtain a contradiction.

It follows from the above and the continuity of the data that there exists \(0<\alpha<T-t\) such that, for all \((s,y)\in[t,T]\times\mathbb{R}^{n}\) satisfying \(t \leq s \leq t+\alpha\) and \(|y-x|\leq\alpha\),

$$u_{i}(s,y) \leq\varphi(s,y) $$

and

$$\partial_{t} \varphi(s,y)+ \triangledown_{x} \varphi (s,y)b\bigl(s,y,u(s,y)\bigr)+f_{i}\bigl(s,y,u(s,y) \bigr) < 0. $$

We denote

$$t^{\star}:=\inf\bigl\{ s>t: \bigl\vert X_{s}^{t,x}-x \bigr\vert \geq\alpha \bigr\} \wedge(t+\alpha). $$

From ODEs (2) we have

$$\begin{aligned} &Y_{s}^{t,x}=Y_{T}^{t,x}+ \int_{s}^{T}f\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\,dr, \\ &Y_{t^{\star}}^{t,x}=Y_{T}^{t,x}+ \int_{t^{\star}}^{T}f\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\,dr. \end{aligned}$$

Then we get

$$Y_{s}^{t,x}=Y_{t^{\star}}^{t,x}+ \int_{s\wedge t^{\star}}^{t^{\star}}f\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\,dr. $$

Since \(Y_{t^{\star}}^{t,x}=u(t^{\star},X_{t^{\star}}^{t,x})\), we have

$$ \begin{aligned} &Y_{s}^{t,x}=u\bigl(t^{\star},X_{t^{\star}}^{t,x} \bigr)+ \int_{s\wedge t^{\star}}^{t^{\star}}f\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\,dr, \\ &Y_{s}^{t,x,i}=u_{i} \bigl(t^{\star},X_{t^{\star}}^{t,x}\bigr)+ \int_{s\wedge t^{\star}}^{t^{\star}}f_{i}\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\,dr. \end{aligned} $$
(35)

Let \(\hat{Y}_{s}=\varphi(s,X_{s}^{t,x})\). By the differential equation,

$$ \begin{aligned} &d\hat{Y}_{s}= \partial_{t}\varphi\bigl(s,X_{s}^{t,x}\bigr)\,ds+ \triangledown_{x} \varphi \bigl(s,X_{s}^{t,x}\bigr)b \bigl(s,X_{s}^{t,x},u\bigl(s,X_{s}^{t,x} \bigr)\bigr)\,ds, \\ &\hat{Y}_{s}=\varphi\bigl(t^{\star},X_{t^{\star}}^{t,x} \bigr)- \int_{s\wedge t^{\star}}^{t^{\star}} \bigl[\partial_{t}\varphi \bigl(r,X_{r}^{t,x}\bigr)+\triangledown_{x} \varphi \bigl(r,X_{r}^{t,x}\bigr)b\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\bigr]\,dr. \end{aligned} $$
(36)

We consider the difference between (35) and (36):

$$\begin{aligned} \hat{Y}_{s}-Y_{s}^{t,x,i} =&\varphi \bigl(t^{\star},X_{t^{\star }}^{t,x}\bigr)-u_{i} \bigl(t^{\star},X_{t^{\star}}^{t,x}\bigr) \\ &{}- \int_{s\wedge t^{\star}}^{t^{\star}} \bigl[\partial_{t}\varphi \bigl(r,X_{r}^{t,x}\bigr)+\triangledown_{x} \varphi \bigl(r,X_{r}^{t,x}\bigr)b\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr) \\ &{}+f_{i}\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr)\bigr]\,dr. \end{aligned}$$

From the definition of \(t^{\star}\) we have

$$\varphi\bigl(t^{\star},X_{t^{\star}}^{t,x} \bigr)-u_{i}\bigl(t^{\star},X_{t^{\star }}^{t,x} \bigr) \geq0 $$

and

$$ -\beta(r)=\partial_{t}\varphi \bigl(r,X_{r}^{t,x}\bigr)+\triangledown_{x} \varphi \bigl(r,X_{r}^{t,x}\bigr)b\bigl(r,X_{r}^{t,x},u \bigl(r,X_{r}^{t,x}\bigr)\bigr) +f_{i} \bigl(r,X_{r}^{t,x},u\bigl(r,X_{r}^{t,x} \bigr)\bigr) < 0. $$

Then

$$\hat{Y}_{t}-Y_{t}^{t,x,i}=\varphi \bigl(t^{\star},X_{t^{\star }}^{t,x}\bigr)-u_{i} \bigl(t^{\star},X_{t^{\star}}^{t,x}\bigr)+ \int_{t}^{t^{\star}}\beta(r)\,dr >0, $$

which contradicts \(\hat{Y}_{t}=\varphi(t,x)=u_{i}(t,x)=Y_{t}^{t,x,i}\). □

Remark 5.1

It is obvious that if a function u is Lipschitz continuous in x and continuous in t, then u satisfies the linear growth assumption, that is, there exists a constant \(C>0\) such that

$$ \bigl\vert u(t,x) \bigr\vert \leq C\bigl(1+ \vert x \vert \bigr), \quad \forall(t,x)\in[0,T]\times\mathbb{R}^{n}. $$
(37)

Lemma 5.1

Let Assumptions (H1) and (H2) hold, let \(u,v\in C([0,T]\times\mathbb{R}^{n})\) be Lipschitz continuous in x, and let u be a viscosity subsolution and v be a viscosity supersolution of PDE (1). Then the function \(\omega:=u-v\) is a viscosity subsolution of the following PDE:

$$ \textstyle\begin{cases} \partial_{t}\omega(t,x)+C(1+ \vert x \vert ) \vert \triangledown_{x}\omega(t,x) \vert +C \vert \omega (t,x) \vert =0,\\ \omega(T,x)=0, \end{cases} $$
(38)

where C is a constant depending only on the Lipschitz constants and the linear growth constants of b and f.

Proof

Let \(\varphi\in C^{1,1}([0,T]\times\mathbb{R}^{n})\), and let \((t_{0},x_{0})\in[0,T]\times\mathbb{R}^{n}\) be a global maximum point of \(\omega_{i}-\varphi\) for some \(1\leq i \leq m\).

We introduce the function

$$\psi_{\alpha,\beta}(t,x,s,y)=u_{i}(t,x)-v_{i}(s,y)- \frac{ \vert x-y \vert ^{2}}{\alpha }-\frac{(t-s)^{2}}{\beta}-\varphi(t,x), $$

where \(\alpha,\beta\) are positive parameters that will tend to zero.

Let \((\bar{t},\bar{x},\bar{s},\bar{y})\) be a global maximum point of \(\psi_{\alpha,\beta}\) in \(([0,T]\times\bar{B}_{R})^{2}\), where \(B_{R}\) is a ball with large radius R. We drop the dependence of \(\bar{t},\bar {x},\bar{s},\bar{y}\) on α and β for simplicity of notations.

Noting that

$$\psi_{\alpha,\beta}(\bar{t},\bar{x},\bar{s},\bar{y})\geq\max _{(t,x)\in [0,T]\times\bar{B}_{R}}\psi_{\alpha,\beta}({t},{x},{t},{x}), $$

we have

$$ \begin{aligned}[b] &u_{i}(\bar{t}, \bar{x})-v_{i}(\bar{s},\bar{y})-\frac{ \vert \bar{x}-\bar {y} \vert ^{2}}{\alpha}-\frac{ \vert \bar{t}-\bar{s} \vert ^{2}}{\beta}- \varphi(\bar{t},\bar {x}) \\ &\quad \geq\max_{[0,T]\times\bar{B}_{R}}\bigl[u_{i}(t,x)-v_{i}(t,x)- \varphi(t,x)\bigr]:=M. \end{aligned} $$
(39)

We set

$$N=\max_{(t,x,s,y)\in([0,T]\times\bar{B}_{R})^{2}} \bigl(u_{i}({t},{x})-v_{i}({s},{y}) -\varphi({t},{x}) \bigr). $$

Then we have

$$\frac{ \vert \bar{x}-\bar{y} \vert ^{2}}{\alpha}+\frac{ \vert \bar{t}-\bar{s} \vert ^{2}}{\beta }\leq-M+N. $$

Since \([0,T]\times\bar{B}_{R}\) is compact, we can find a point \((t_{0},x_{0})\in [0,T]\times\bar{B}_{R}\) and sequences \(\alpha_{k}, \beta_{k}\rightarrow0\) along which \((\bar {t},\bar{x})\) and \((\bar{s},\bar{y})\) tend to \((t_{0},x_{0})\).

Relation (39) again implies

$$\begin{aligned} 0 \leq& \liminf_{\alpha,\beta\rightarrow0}\biggl(\frac{ \vert \bar{x}-\bar {y} \vert ^{2}}{\alpha}+ \frac{ \vert \bar{t}-\bar{s} \vert ^{2}}{\beta}\biggr) \\ \leq&\limsup_{\alpha,\beta\rightarrow0}\biggl(\frac{ \vert \bar{x}-\bar {y} \vert ^{2}}{\alpha}+ \frac{ \vert \bar{t}-\bar{s} \vert ^{2}}{\beta}\biggr) \\ \leq&\limsup_{\alpha,\beta\rightarrow0}\bigl(u_{i}(\bar{t}, \bar{x})-v_{i}(\bar {s},\bar{y}) -\varphi(\bar{t},\bar{x})\bigr)-M \\ =&0. \end{aligned}$$

We have

$$\lim_{\alpha\rightarrow0}\frac{ \vert \bar{x}-\bar{y} \vert ^{2}}{\alpha}=\lim_{\beta \rightarrow0} \frac{ \vert \bar{t}-\bar{s} \vert ^{2}}{\beta}=0. $$

Letting α and β tend to 0, we get that \((t_{0},x_{0})\) is a global maximum point of \(u_{i}-v_{i}-\varphi\) in \([0,T]\times\bar{B}_{R}\).

Now we are going to use the definition of the viscosity solution. By the definition of \((\bar{t},\bar{x},\bar{s},\bar{y})\) the function

$$(t,x)\mapsto u_{i}(t,x)- \biggl(v_{i}(\bar{s},\bar{y})+ \frac{ \vert x-\bar{y} \vert ^{2}}{\alpha}+\frac {(t-\bar{s})^{2}}{\beta}+\varphi(t,x) \biggr) $$

attains a global maximum point at \((\bar{t},\bar{x})\) in \([0,T]\times \bar{B}_{R}\). Hence we have

$$\frac{2(\bar{t}-\bar{s})}{\beta}+\partial_{t}\varphi(\bar{t},\bar {x})+\biggl( \frac{2(\bar{x}-\bar{y})^{\top}}{\alpha}+\triangledown_{x}\varphi (\bar{t},\bar{x})\biggr)b \bigl(\bar{t},\bar{x},u(\bar{t},\bar{x})\bigr)+f_{i}\bigl(\bar{t},\bar {x},u(\bar{t},\bar{x})\bigr)\geq0. $$

Similarly, the function

$$(s,y)\mapsto v_{i}(s,y)- \biggl(u_{i}(\bar{t},\bar{x})- \frac{ \vert \bar{x}-y \vert ^{2}}{\alpha}-\frac {(\bar{t}-s)^{2}}{\beta}-\varphi(\bar{t},\bar{x}) \biggr) $$

attains a global minimum point at \((\bar{s},\bar{y})\) in \([0,T]\times \bar{B}_{R}\). Thus

$$\frac{2(\bar{t}-\bar{s})}{\beta} +\frac{2(\bar{x}-\bar{y})^{\top }}{\alpha}b\bigl(\bar{s},\bar{y},v(\bar{s},\bar{y}) \bigr)+f_{i}\bigl(\bar{s},\bar {y},v(\bar{s},\bar{y})\bigr)\leq0. $$

Considering the difference between the last inequalities, we obtain

$$\begin{aligned} 0 \leq&\partial_{t}\varphi(\bar{t},\bar{x})+\frac{2(\bar{x}-\bar{y})^{\top }}{\alpha}\bigl[b \bigl(\bar{t},\bar{x},u(\bar{t},\bar{x})\bigr)-b\bigl(\bar{s},\bar {y},v(\bar{s}, \bar{y})\bigr)\bigr] \\ &{} +\triangledown_{x}\varphi(\bar{t},\bar{x})b\bigl(\bar{t},\bar{x},u(\bar{t},\bar{x}) \bigr)+f_{i}\bigl(\bar {t},\bar{x},u(\bar{t},\bar{x}) \bigr)-f_{i}\bigl(\bar{s},\bar{y},v(\bar{s},\bar {y})\bigr) \\ \leq&\partial_{t}\varphi(\bar{t},\bar{x})+\frac{2 \vert \bar{x}-\bar {y} \vert }{\alpha} \bigl[ \bigl\vert b\bigl(\bar{t},\bar{x},u(\bar{t},\bar{x})\bigr)-b\bigl(\bar{t},\bar {y},u(\bar{t},\bar{x})\bigr) \bigr\vert \\ &{}+ \bigl\vert b\bigl(\bar{t},\bar{y},u(\bar{t},\bar{x})\bigr) -b\bigl(\bar{t}, \bar{y},v(\bar{t},\bar{x})\bigr) \bigr\vert + \bigl\vert b\bigl(\bar{t}, \bar{y},v(\bar {t},\bar{x})\bigr)-b\bigl(\bar{t},\bar{y},v(\bar{t},\bar{y})\bigr) \bigr\vert \bigr] \\ &{}+ \bigl\vert \triangledown_{x}\varphi(\bar{t},\bar{x})b\bigl(\bar{t},\bar{x},u(\bar{t},\bar {x})\bigr) \bigr\vert + \bigl\vert f_{i}\bigl(\bar{t},\bar{x},u( \bar{t},\bar{x})\bigr)-f_{i}\bigl(\bar{t},\bar {y},u(\bar{t},\bar{x}) \bigr) \bigr\vert \\ &{}+ \bigl\vert f_{i}\bigl(\bar{t},\bar{y},u(\bar{t},\bar{x})\bigr) -f_{i}\bigl(\bar{t},\bar{y},v(\bar{t},\bar{x})\bigr) \bigr\vert + \bigl\vert f_{i}\bigl(\bar{t},\bar{y},v(\bar {t},\bar{x}) \bigr)-f_{i}\bigl(\bar{t},\bar{y},v(\bar{t},\bar{y})\bigr) \bigr\vert . \end{aligned}$$

By the Lipschitz continuity and the linear growth property of b and f, letting β tend to zero, the above inequality becomes

$$\begin{aligned} 0 \leq& \partial_{t}\varphi(\bar{t},\bar{x})+\frac{2 \vert \bar{x}-\bar {y} \vert }{\alpha}\times C \bigl[ \vert \bar{x}-\bar{y} \vert + \bigl\vert u(\bar{t},\bar{x})-v(\bar {t},\bar{x}) \bigr\vert + \bigl\vert v(\bar{t},\bar{x})-v(\bar{t},\bar{y}) \bigr\vert \bigr] \\ & {}+C\bigl(1+ \vert \bar{x} \vert + \bigl\vert u(\bar{t},\bar{x}) \bigr\vert \bigr) \bigl\vert \triangledown_{x}\varphi(\bar {t},\bar{x}) \bigr\vert \\ &{}+C \bigl[ \vert \bar{x}-\bar{y} \vert + \bigl\vert u(\bar{t},\bar{x})-v( \bar{t},\bar {x}) \bigr\vert + \bigl\vert v(\bar{t},\bar{x})-v(\bar{t}, \bar{y}) \bigr\vert \bigr]. \end{aligned}$$

Letting \(\alpha\rightarrow0\), we have \(\frac{2|\bar{x}-\bar {y}|}{\alpha} \rightarrow0\). We get

$$\partial_{t}\varphi(t_{0},x_{0})+C\bigl(1+ \vert x_{0} \vert \bigr) \bigl\vert \triangledown_{x} \varphi (t_{0},x_{0}) \bigr\vert +C \bigl\vert \omega_{i}(t_{0},x_{0}) \bigr\vert \geq0. $$

Since \((t_{0},x_{0})\) is a maximum point of \(\omega_{i}-\varphi\), by Definition 5.1 the function ω is a viscosity subsolution of PDE (38), and we conclude the proof. □

Lemma 5.2

Let Assumptions (H1) and (H2) hold. For any \(A>0\), there exists \(C_{1}>0\) such that the function

$$\chi(t,x)=\exp\bigl\{ \bigl(C_{1}(T-t)+A\bigr)\psi(x)\bigr\} , $$

where

$$\psi(x)= \bigl[\log \bigl( \bigl( \vert x \vert ^{2}+1 \bigr)^{\frac{1}{2}} \bigr) \bigr]^{2}, $$

satisfies

$$\partial_{t}\chi(t,x)+C\bigl(1+ \vert x \vert \bigr) \bigl\vert \triangledown_{x}\chi(t,x) \bigr\vert +C\chi(t,x)< 0 $$

in \([t_{1},T]\times\mathbb{R}^{n}\). Here \(t_{1}=T-\frac{A}{C_{1}}\).

Proof

By the definition of χ and ψ, since \(\triangledown_{x}\psi(x)=\frac{2[\psi(x)]^{\frac{1}{2}}}{1+ \vert x \vert ^{2}}\,x^{\top}\), we have

$$\bigl\vert \triangledown_{x}\psi(x) \bigr\vert \leq \frac{2[\psi(x)]^{\frac {1}{2}}}{(1+ \vert x \vert ^{2})^{\frac{1}{2}}}. $$

Then we have

$$\begin{aligned} \bigl\vert \triangledown_{x}\chi(t,x) \bigr\vert \leq&\bigl(C_{1}(T-t)+A\bigr) \chi(t,x) \bigl\vert \triangledown_{x}\psi (x) \bigr\vert \\ \leq& C\chi(t,x)\frac{[\psi(x)]^{\frac{1}{2}}}{(1+ \vert x \vert ^{2})^{\frac{1}{2}}}. \end{aligned}$$

Because of the choice of \(t_{1}\), these estimates do not depend on \(C_{1}\). Easy computations yield

$$ \begin{aligned} &\partial_{t}\chi(t,x)+C\bigl(1+ \vert x \vert \bigr) \bigl\vert \triangledown_{x}\chi(t,x) \bigr\vert +C\chi(t,x) \\ &\quad \leq \biggl(-C_{1}\psi(x)+C\bigl(1+ \vert x \vert \bigr) \frac{[\psi(x)]^{\frac{1}{2}}}{(1+ \vert x \vert ^{2})^{\frac {1}{2}}}+C\biggr)\chi(t,x). \end{aligned} $$

Since \(\psi(x)>1\), it is clear that when \(C_{1}\) is large enough, the quantity in the brackets is negative, and the proof is complete. □

Theorem 5.2

Let Assumptions (H1) and (H2) hold. Then there exists at most one viscosity solution of PDE (1) in the class of continuous functions that are Lipschitz continuous in the spatial variable x.

In particular, the function \(u(t,x)=Y_{t}^{t,x}\) is the unique viscosity solution of (1) in the class of continuous functions that are Lipschitz continuous in the spatial variable x.

Proof

Let u and v be two viscosity solutions of PDE (1) that are Lipschitz continuous in the spatial variable x. We first show that \(\omega=u-v\) satisfies

$$\bigl\vert \omega(t,x) \bigr\vert \leq\alpha\chi(t,x), \quad(t,x)\in[0,T]\times \mathbb{R}^{n}, $$

for any \(\alpha>0\). Then, we let α tend to zero.

To prove this inequality, we first remark that, by (37),

$$\lim_{ \vert x \vert \rightarrow\infty} \bigl\vert \omega(t,x) \bigr\vert e^{-A[\log(( \vert x \vert ^{2}+1)^{\frac {1}{2}})]^{2}}=0 $$

uniformly for \(t\in[0,T]\) and for some \(A>0\). This implies, in particular, that \(|\omega(t,x)|-\alpha\chi(t,x)\) is bounded from above and tends to \(-\infty\) as \(|x|\rightarrow\infty\) in \([t_{1},T]\times\mathbb{R}^{n}\) for any \(\alpha>0\), so that

$$M:=\max_{1\leq i \leq m}\max_{[t_{1},T]\times\mathbb{R}^{n}}\bigl( \vert \omega _{i} \vert -\alpha\chi\bigr) (t,x)e^{-L(T-t)} $$

is achieved at some point \((t_{0},x_{0})\in[t_{1},T]\times\mathbb{R}^{n}\) and some index \(i_{0}\in\{1,\ldots,m\}\). Since \(|\cdot|\) is the supremum norm in \(\mathbb{R}^{m}\), we have

$$ M=\max_{[t_{1},T]\times\mathbb{R}^{n}}\bigl( \vert \omega \vert -\alpha\chi\bigr) (t,x)e^{-L(T-t)} $$

and \(|\omega_{i_{0}}(t_{0},x_{0})|=|\omega(t_{0},x_{0})|\). We may assume w.l.o.g. that \(\omega_{i_{0}}(t_{0},x_{0})>0\); otherwise, we either are done (if \(\omega(t_{0},x_{0})=0\)) or replace \(\omega\) by \(-\omega\), which is again the difference of two viscosity solutions.

From the maximum point property we deduce that

$$\omega_{i_{0}}(t,x)-\alpha\chi(t,x)\leq\bigl(\omega_{i_{0}}(t_{0},x_{0})- \alpha\chi (t_{0},x_{0})\bigr)e^{-L(t-t_{0})},\quad(t,x) \in[t_{1},T]\times\mathbb{R}^{n}. $$
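Indeed, by the definition of M, for every \((t,x)\in[t_{1},T]\times\mathbb{R}^{n}\),

$$\bigl(\omega_{i_{0}}(t,x)-\alpha\chi(t,x)\bigr)e^{-L(T-t)}\leq\bigl( \bigl\vert \omega_{i_{0}}(t,x) \bigr\vert -\alpha\chi(t,x)\bigr)e^{-L(T-t)}\leq M=\bigl(\omega_{i_{0}}(t_{0},x_{0})-\alpha\chi(t_{0},x_{0})\bigr)e^{-L(T-t_{0})}, $$

and multiplying both sides by \(e^{L(T-t)}\) yields the displayed inequality.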

We define

$$\varphi(t,x)=\alpha\chi(t,x)+\bigl(\omega_{i_{0}}(t_{0},x_{0})- \alpha\chi (t_{0},x_{0})\bigr)e^{-L(t-t_{0})} $$

and get

$$(\omega_{i_{0}}-\varphi) (t,x)\leq(\omega_{i_{0}}-\varphi) (t_{0},x_{0}),\quad (t,x)\in[t_{1},T]\times \mathbb{R}^{n}. $$

Since \(\varphi(t_{0},x_{0})=\omega_{i_{0}}(t_{0},x_{0})>0\), by Lemma 5.1 we have

$$\partial_{t}\varphi(t_{0},x_{0})+C\bigl(1+ \vert x_{0} \vert \bigr) \bigl\vert \triangledown_{x} \varphi (t_{0},x_{0}) \bigr\vert \geq0 $$

provided that \(t_{0}\in[t_{1},T)\). By the definition of φ we can rewrite this inequality as

$$\partial_{t}\chi(t_{0},x_{0})+C\bigl(1+ \vert x_{0} \vert \bigr) \bigl\vert \triangledown_{x} \chi(t_{0},x_{0}) \bigr\vert +C\chi (t_{0},x_{0}) \geq0. $$

This contradicts Lemma 5.2. Thus \(t_{0}=T\). Since \(|\omega(T,x)|=0\), we have

$$ \bigl\vert u(t,x)-v(t,x) \bigr\vert \leq\alpha\chi(t,x),\quad(t,x) \in[t_{1},T]\times\mathbb{R}^{n}. $$

Letting α tend to zero, we obtain

$$u(t,x)=v(t,x),\quad(t,x)\in[t_{1},T]\times\mathbb{R}^{n}. $$

Applying the same argument successively on the interval \([t_{2},t_{1}]\), where \(t_{2}=(t_{1}-A/C_{1})^{+}\), then, if \(t_{2}>0\), on \([t_{3},t_{2}]\), where \(t_{3}=(t_{2}-A/C_{1})^{+}\), and so on, we finally obtain

$$u(t,x)=v(t,x),\quad(t,x)\in[0,T]\times\mathbb{R}^{n}. $$

The proof is complete. □

6 Conclusion

In this paper, to the best of our knowledge, we are the first to study this kind of PDE system associated with two-point boundary value problems. The distinguishing feature is that the coefficient b of PDE (1) is allowed to depend on \(u(t,x)\). We give three kinds of solutions of PDE (1). The first is the classical solution, which requires the coefficients of ODEs (2) to be twice continuously differentiable with bounded derivatives in addition to the usual assumptions in [5]. If the coefficient b of ODEs (2) is only once continuously differentiable with bounded derivatives and f satisfies the usual Lipschitz condition, then we prove that the associated PDE has a unique weak solution in the Sobolev space. In addition, we prove that the function defined by the solution of ODEs (2) is the unique viscosity solution of PDE (1) if the coefficients of ODEs (2) satisfy only the usual assumptions and the Lipschitz condition. Two-point boundary value problems of this kind are quite important in the theory of ordinary differential equations and have meaningful applications in optimal control theory. By virtue of the solution of PDE (1), we obtain a way to compute the numerical solution of the two-point boundary value problem, which provides a powerful tool for solving the related optimal control problems.
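To illustrate the last remark, the following minimal sketch (our own illustration, not a scheme developed in this paper) solves PDE (1) numerically in one spatial dimension with a first-order upwind finite-difference discretization and then recovers an approximate solution of the two-point boundary value problem through the relation \(u(t,x)=Y_{t}^{t,x}\) of Theorem 5.2 together with the flow of the forward ODE. The coefficients b, f, h, the grid sizes, and the explicit Euler time stepping are hypothetical choices made only for demonstration.

```python
import numpy as np

# Hypothetical coefficients chosen only for illustration; they are smooth with
# bounded derivatives, in the spirit of the assumptions used in the paper.
def b(t, x, y):
    return 0.5 * np.sin(x) + 0.2 * y

def f(t, x, y):
    return -0.3 * y + 0.1 * np.cos(x)

def h(x):
    return np.tanh(x)

T = 1.0
x_grid = np.linspace(-5.0, 5.0, 401)      # one-dimensional spatial grid
dx = x_grid[1] - x_grid[0]
n_steps = 2000
dt = T / n_steps

# Solve PDE (1) backward in time from u(T, x) = h(x) with an explicit
# first-order upwind scheme (an illustrative choice, not the paper's method).
u = h(x_grid)
u_hist = [u.copy()]                        # u_hist[k] holds u(T - k*dt, .)
for k in range(n_steps):
    t = T - k * dt
    bb = b(t, x_grid, u)
    du_fwd = np.empty_like(u)
    du_bwd = np.empty_like(u)
    du_fwd[:-1] = (u[1:] - u[:-1]) / dx
    du_fwd[-1] = du_fwd[-2]                # crude treatment of the boundary
    du_bwd[1:] = (u[1:] - u[:-1]) / dx
    du_bwd[0] = du_bwd[1]
    du = np.where(bb >= 0.0, du_fwd, du_bwd)
    u = u + dt * (bb * du + f(t, x_grid, u))   # step from t to t - dt
    u_hist.append(u.copy())
u_hist = u_hist[::-1]                      # now u_hist[k] ~ u(k*dt, .)

def u_eval(k, x):
    """Interpolate u(k*dt, x) on the spatial grid."""
    return float(np.interp(x, x_grid, u_hist[k]))

# Recover the two-point boundary value problem: starting from X_{t0} = x0,
# integrate both ODEs with the Euler scheme, using Y_{t0} = u(t0, x0).
t0, x0 = 0.0, 0.3
k0 = int(round(t0 / dt))
X, Y = x0, u_eval(k0, x0)
for k in range(k0, n_steps):
    s = k * dt
    X, Y = X + dt * b(s, X, Y), Y - dt * f(s, X, Y)

# Consistency check of the boundary condition Y_T = h(X_T).
print("Y_T =", Y, "  h(X_T) =", h(X))
```

The final printout compares \(Y_{T}\) with \(h(X_{T})\); under the assumptions of the paper, the two values should agree up to discretization and interpolation errors.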

Abbreviations

PDE: partial differential equation

ODEs: ordinary differential equations

PDEs: partial differential equations

FBSDEs: forward–backward stochastic differential equations

References

  1. Papageorgiou, N.S., Rădulescu, V.D., Repovš, D.D.: Sensitivity analysis for optimal control problems governed by nonlinear evolution inclusions. Adv. Nonlinear Anal. 6(2), 199–235 (2017)

  2. Fragnelli, G., Mugnai, D.: Carleman estimates for singular parabolic equations with interior degeneracy and non smooth coefficients. Adv. Nonlinear Anal. 6(2), 339–378 (2016)

  3. Nursultanov, M., Rozenblum, G.: Eigenvalue asymptotics for the Sturm–Liouville operator with potential having a strong local negative singularity. Opusc. Math. 37(1), 109 (2017)

  4. Ghergu, M., Radulescu, V.D.: Nonlinear PDEs: Mathematical Models in Biology, Chemistry and Population Genetics. Monographs in Mathematics. Springer, Heidelberg (2012)

  5. Wu, Z.: A class of ordinary differential equations of two-point boundary value problems and applications. J. Shandong Univ. Sci. Technol. Nat. Sci. 1, 16–23 (1997)

  6. Pardoux, E., Peng, S.: Backward stochastic differential equations and quasilinear parabolic partial differential equations. In: Stochastic Partial Differential Equations and Their Applications, pp. 200–217 (1992)

  7. Wu, Z., Yu, Z.: Probabilistic interpretation for a system of quasilinear parabolic partial differential equation combined with algebra equations. Stoch. Process. Appl. 124(12), 3921–3947 (2014)

  8. Ouknine, Y., Turpin, I.: Weak solutions of semilinear PDEs in Sobolev spaces and their probabilistic interpretation via the FBSDEs. Stoch. Anal. Appl. 24(4), 871–888 (2006)

  9. Wei, L., Wu, Z., Zhao, H.: Sobolev weak solutions of the Hamilton–Jacobi–Bellman equations. SIAM J. Control Optim. 52(3), 1499–1526 (2014)

  10. Kunita, H.: Stochastic differential equations and stochastic flows of diffeomorphisms. In: École d’Été de Probabilités de Saint-Flour XII – 1982, pp. 143–303 (1984)

  11. Wu, Z.: Adapted solution of generalized forward–backward stochastic differential equations and its dependence on parameters. Chin. J. Contemp. Math. 19(1) (1998)

  12. El Karoui, N., Peng, S., Quenez, M.C.: Backward stochastic differential equations in finance. Math. Finance 7(1), 1–71 (1997)

  13. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, Heidelberg (2010)

  14. Barles, G., Buckdahn, R., Pardoux, E.: Backward stochastic differential equations and integral-partial differential equations. Stoch. Int. J. Probab. Stoch. Process. 60(1–2), 57–83 (1997)

  15. Crandall, M.G., Ishii, H., Lions, P.-L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27, 1–67 (1992)

Acknowledgements

The authors are grateful to the anonymous reviewers for carefully reading this paper and for their comments and suggestions which have improved the paper.

Availability of data and materials

Not applicable.

Authors’ information

Not applicable.

Funding

This work was supported by the Natural Science Foundation of China (61573217), the National High-Level Personnel of Special Support Program, and the Chang Jiang Scholars Program of the Chinese Ministry of Education.

Author information

Contributions

Both authors contributed equally to this paper. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhen Wu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Ma, N., Wu, Z. Classical and weak solutions of the partial differential equations associated with a class of two-point boundary value problems. Bound Value Probl 2018, 120 (2018). https://doi.org/10.1186/s13661-018-1040-9

Keywords