• Research
• Open Access

# A reduced-order extrapolating collocation spectral method based on POD for the 2D Sobolev equations

Boundary Value Problems 2019, 2019:63

https://doi.org/10.1186/s13661-019-1176-2

• Accepted: 21 March 2019
• Published:

## Abstract

In this paper, we mainly use proper orthogonal decomposition (POD) to reduce the order of the coefficient vectors of the solutions for the classic collocation spectral (CS) method for two-dimensional (2D) Sobolev equations. We first establish a reduced-order extrapolating collocation spectral (ROECS) method for the 2D Sobolev equations so that the ROECS method keeps the same basis functions as the classic CS method and thus retains its superior accuracy. Then we use matrix tools to discuss the existence, stability, and convergence of the ROECS solutions, which makes the theoretical analysis very concise. Lastly, we present two sets of numerical examples to validate the theoretical conclusions and to show that the ROECS method is far superior to the classic CS method, which demonstrates that the ROECS method is quite effective for solving Sobolev equations. Both the theory and the method of this paper are completely different from the existing reduced-order methods.

## Keywords

• Proper orthogonal decomposition
• Classic collocation spectral method
• Sobolev equations
• Reduced-order extrapolating collocation spectral method
• Existence, stability, and convergence

## MSC

• 65N30
• 65N12
• 65M15

## 1 Introduction

Let $$\varOmega \subset \mathbb{R}^{2}$$ be an open bounded domain with boundary ∂Ω. We consider the following two-dimensional (2D) Sobolev equation:
$$\textstyle\begin{cases} u_{t}-\varepsilon \Delta u_{t} - \gamma \Delta u = f(x,y,t), \quad (x,y,t) \in \varOmega \times (0,T), \\ u(x,y,t) = \varphi (x,y,t), \quad (x,y,t)\in \partial \varOmega \times (0,T), \\ u(x,y,0) = u_{0}(x,y), \quad (x,y)\in \varOmega , \end{cases}$$
(1)
where $$u_{t} = {\partial u}/{\partial t}$$, $$\Delta u = {\partial ^{2} u}/{\partial x^{2}}+{\partial ^{2} u}/{\partial y^{2}}$$, $$\Delta u_{t} = {\partial ^{2} u_{t}}/{\partial x^{2}}+{\partial ^{2} u_{t}}/{\partial y^{2}}$$, T represents the final moment, $$\varepsilon >0$$ and $$\gamma >0$$ are two given constants, $$f(x,y,t)$$ is the known source item, and $$u_{0}(x,y)$$ and $$\varphi (x,y,t)$$ are the known initial and boundary values, respectively. For simplicity but without loss of generality, we further assume that $$\varphi (x,y,t) = 0$$.

The system of Sobolev equations (1) is a class of significant partial differential equations (PDEs) with a practical physical background, which can favorably simulate many engineering problems (see [1, 2]). In particular, it can be applied to simulate phenomena in porous media (see [3, 4]). Nevertheless, in actual applications, Sobolev equations frequently contain intricate boundary and initial values, complicated source terms, or discontinuous coefficients. As a result, we generally cannot find analytic solutions, so we can only rely on numerical methods.

Currently, the finite difference (FD), finite volume element (FVE), finite element (FE), and spectral methods are four famous computational techniques. The spectral method gives the highest accuracy among the four since the unknown functions in spectral methods are approximated with sufficiently smooth functions, such as Chebyshev polynomials, trigonometric functions, Legendre polynomials, or Jacobi polynomials, whereas the unknown functions in the FVE and FE methods are commonly approximated with standard polynomials, and the derivatives in the FD scheme are approximated with difference quotients. The spectral method is commonly classified into the collocation spectral (CS) method, the Galerkin spectral method, and the spectral tau method. It has been applied to solve various PDEs, including second-order elliptic, parabolic, hyperbolic, telegraph, and hydromechanical equations (see ).

However, Sobolev equations are mainly solved with the FD scheme, the FE method, and the FVE method (see, e.g., [3, 4, 9–14]), except that one-dimensional Sobolev equations have been solved by the Fourier spectral method , and 2D Sobolev equations have recently been solved by the classic CS method . Though the classic CS method (see ) for 2D Sobolev equations can attain higher accuracy than the FD scheme, FE method, and FVE method, it also contains a lot of degrees of freedom (unknowns). Because of the round-off error accumulation in numerical calculations, after several computational steps a floating-point overflow generally occurs, so that we cannot obtain the desired results. Hence, to ensure a sufficiently high precision of the classic CS solutions, the crucial question is how to lessen the unknowns (i.e., degrees of freedom) of the CS method so as to ease the round-off error accumulation in the calculations, which is also the central task of this paper.

Many examples have proven that the proper orthogonal decomposition (POD) can significantly reduce the order of numerical methods (see ) and vastly decrease their degrees of freedom. It has been applied in many fields, including pattern recognition and signal analysis , statistical calculations , and computational fluid mechanics . For the past few years, it has successfully been used for the order reduction of the Galerkin methods [24, 25], the FE methods [26, 27], the FD schemes , the FVE methods [31, 32], and the reduced basis methods for PDEs . Nevertheless, the existing POD-based reduced-order methods (see [17–27, 29–35]) are mostly created with the POD bases produced by the classic solutions at all time nodes on $$[0, T]$$, before repeatedly computing the reduced-order solutions on the same time nodes, which amounts to undesirable repeated computation. To get rid of the repeated computations, several reduced-order extrapolated approaches based on POD have been proposed .

Nevertheless, to our knowledge, there is no reduced-order extrapolating CS (ROECS) method for 2D Sobolev equations created by reducing the order of the coefficient vectors of the CS solutions of the classic CS method via POD. Hence, in this paper, utilizing POD to reduce the order of the coefficient vectors of the CS solutions for the classic CS method, we construct a ROECS method holding only few degrees of freedom. We employ matrix tools to discuss the existence, convergence, and stability of the ROECS solutions, so that the theoretical analysis becomes very concise. In particular, we only employ the classic CS solutions on the first several time nodes to form the snapshots, then use them to produce the POD bases and create the ROECS format, and thus obtain the ROECS solutions on all the time nodes. In this way, we avoid the repeated computations. Moreover, we adopt the error estimates as a guide for the choice of POD bases. The ROECS format combines both advantages, namely that the POD method reduces the unknowns and that the CS method has higher accuracy, so it is an innovation and development of the existing reduced-order methods.

The main merits of the ROECS method are the following. First, we only reduce the order of the coefficient vectors of the solutions for the classic CS method by POD and do not alter the basis functions of the classic CS method, so that the ROECS method simultaneously holds both virtues: POD reduces the unknowns, and the classic CS method keeps its higher accuracy. Second, the classic CS method is totally different from the Galerkin spectral method, and the Sobolev equations not only include a first-order time derivative term and two second-order spatial derivative terms but also contain two mixed derivative terms of first order in time and second order in the spatial variables; that is, the 2D Sobolev equations are more complex than the hyperbolic and parabolic equations in [42, 43]. So the ROECS method is totally different from the methods in [42, 43], and the 2D Sobolev equations have the special applications stated before. Third, we use matrix tools to discuss the existence, convergence, and stability of the ROECS solutions, so that the theoretical analysis becomes very concise, and our theory and methods are totally different from the other existing order reduction methods. Therefore, our method is new and superior to the existing order reduction methods.

The rest of this paper is organized as follows. In Sect. 2, we first review the classic CS format for the 2D Sobolev equations and extract snapshots from the first few classic CS solutions. Then, in Sect. 3, we produce a set of POD basis vectors from the snapshots, develop the ROECS format, prove the existence, convergence, and stability of the ROECS solutions by matrix tools, and supply the flowchart for solving the ROECS format. Next, in Sect. 4, we use two sets of numerical examples to illustrate that the ROECS format is distinctly superior to the classic CS model, to validate that the numerical results accord with the theoretical ones and that the ROECS format is quite effective for solving Sobolev equations, and to confirm that the ROECS format can greatly lessen the unknowns (i.e., degrees of freedom), the computational load, the CPU elapsed time, and the required storage in numerical computations. Finally, in Sect. 5, we provide the main conclusions and discussions.

## 2 The classic CS method for 2D Sobolev equations

Because any bounded closed domain Ω̅ in $$\mathbb{R}^{2}$$ can be approximately covered with several rectangles $$[a_{i}, b_{i}] \times [c_{i}, d_{i}]$$ ($$i=1, 2, \ldots, I$$), for simplicity and without loss of generality, let $$\overline{\varOmega }=[a, b]\times [c, d] \subset \mathbb{R}^{2}$$. Moreover, using the transforms $$x' = -1+{2(x-a)}/ {(b-a)}$$ and $$y' = -1+{2(y-c)}/{(d-c)}$$, we can ensure $$[a,b]\leftrightarrow [-1,1]$$ and $$[c,d]\leftrightarrow [-1,1]$$, respectively. Thus, for convenience, we can further assume that $$a = c = -1$$ and $$b = d =1$$.
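As a quick illustration, the two affine transforms above can be written as a pair of helper functions (a minimal sketch; the function names are our own):

```python
def to_reference(s, lo, hi):
    """Map s in [lo, hi] to s' in [-1, 1] via s' = -1 + 2(s - lo)/(hi - lo)."""
    return -1.0 + 2.0 * (s - lo) / (hi - lo)

def from_reference(sp, lo, hi):
    """Inverse map: s' in [-1, 1] back to s in [lo, hi]."""
    return lo + (sp + 1.0) * (hi - lo) / 2.0
```

Applying `to_reference` in each coordinate direction carries the rectangle $$[a,b]\times[c,d]$$ onto the reference square $$[-1,1]^{2}$$.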

### 2.1 The variational formulation for the 2D Sobolev equations

The Sobolev spaces and norms used in this paper are standard, whose detailed descriptions can be found in . For example, we set $$\omega ={1}/{\sqrt{(1-x^{2})(1-y^{2})}}$$, and $$L^{2}_{\omega }( \varOmega )$$ denotes the set of all square-integrable functions on Ω equipped with inner product and norm
$$(u,v)_{\omega }= \int _{\varOmega }uv\omega \,\mathrm{d}x\,\mathrm{d}y, \qquad \|u\|_{0,\omega } = \biggl( \int _{\varOmega }|u|^{2}\omega \,\mathrm{d}x\,\mathrm{d}y\biggr) ^{1/2}, \quad \forall u, v\in L^{2}_{\omega }(\varOmega ),$$
whereas $$H^{m}_{\omega }(\varOmega ):=\{u\in L^{2}_{\omega }(\varOmega ): D ^{\alpha }u\in L^{2}_{\omega }(\varOmega ),0\leqslant |\alpha | \leqslant m\}$$ denotes the weighted Sobolev space on Ω with the CGL quadrature weight function, equipped with the norm
$$\|u\|_{m,\omega } = \biggl(\sum_{0\leqslant |\alpha |\leqslant m} \bigl\Vert D^{\alpha }u \bigr\Vert ^{2}_{0,\omega } \biggr)^{\frac{1}{2}}.$$
Furthermore, set $$H_{0,\omega }^{1}(\varOmega ) = \{u\in H^{1}_{\omega }( \varOmega ): u|_{\partial \varOmega } = 0\}$$, and let $$\|\cdot \|_{H^{l}(H ^{m}_{\omega })}$$ be the norm in the space
$$H^{l}\bigl(0, T; H^{m}_{\omega }(\varOmega )\bigr)\equiv \Biggl\{ v(t)\in H^{m}_{ \omega }(\varOmega ): \|v\|_{H^{l}(H^{m}_{\omega })}^{2} \equiv \int _{0} ^{T}\sum_{i=0}^{l} \biggl\Vert \frac{\mathrm{d}^{i}}{\mathrm{d}t ^{i}}v(t) \biggr\Vert _{m, \omega }^{2} \,\mathrm{d}t< \infty \Biggr\} .$$
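For intuition, the weighted inner product $$(u,v)_{\omega }$$ can be approximated by Chebyshev–Gauss–Lobatto quadrature (the CGL points are introduced formally in Sect. 2.2). The following one-dimensional sketch uses the standard CGL weights $$\pi /N$$, halved at the two endpoints; the helper names are our own:

```python
import numpy as np

def cgl_nodes_weights(N):
    """Chebyshev-Gauss-Lobatto nodes x_j = -cos(j*pi/N) and quadrature weights
    for integrals against the Chebyshev weight 1/sqrt(1 - x^2) on [-1, 1]."""
    j = np.arange(N + 1)
    x = -np.cos(j * np.pi / N)
    w = np.full(N + 1, np.pi / N)
    w[0] = w[N] = np.pi / (2 * N)
    return x, w

def weighted_inner_product_1d(f, g, N):
    """Approximate (f, g)_omega = int f g (1-x^2)^{-1/2} dx by CGL quadrature."""
    x, w = cgl_nodes_weights(N)
    return np.sum(w * f(x) * g(x))
```

For example, with $$f=g=1$$ the quadrature reproduces $$\int _{-1}^{1}(1-x^{2})^{-1/2}\,\mathrm{d}x=\pi $$ to machine precision.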

We consider the following variational formulation for 2D Sobolev equations.

### Problem 1

For $$t\in (0,T)$$, find $$u\in H_{0,\omega }^{1}(\varOmega )$$ such that
$$\textstyle\begin{cases} (u_{t},v)_{\omega } + \varepsilon (\nabla u_{t},\nabla v)_{\omega } + \gamma (\nabla u, \nabla v)_{\omega } = (f,v)_{\omega }, \quad \forall v\in H_{0,\omega }^{1}(\varOmega ), \\ u(x,y,0) = u_{0}(x,y), \quad (x,y) \in \varOmega . \end{cases}$$
(2)

The following result on the existence, uniqueness, and stability of the generalized solution for Problem 1 has been provided in .

### Theorem 2

If $$f\in L^{2}(0,T;H^{-1}_{\omega }(\varOmega ))$$ and $$u_{0}\in H^{1} _{\omega }(\varOmega )$$, then there exists a unique generalized solution for the variational formulation (2) satisfying the following stability:
$$\|u\|_{1,\omega }\leqslant \tilde{c}\bigl( \Vert u_{0} \Vert _{1,\omega }+\|f\|_{L^{2}(H ^{-1}_{\omega })}\bigr),$$
(3)
where $$\tilde{c}=\sqrt{\max \{1, \varepsilon , {1}/{(\gamma c_{p} ^{2})}\}/\min \{1, \varepsilon \}}$$, and $$c_{p}$$ is the Poincaré coefficient.

### 2.2 The classic CS method for the 2D Sobolev equations

To solve time-dependent PDEs by the CS format, it is necessary to discretize $$u_{t}$$ by means of a difference quotient and the spatial variables by means of the CS method. The CS method consists in seeking approximate solutions at time and spatial nodes. In this paper, we take the Chebyshev–Gauss–Lobatto (CGL) interpolation points (see ) as the space nodes, namely, let $$\{x_{j}\}_{j=0} ^{N}$$ and $$\{y_{k}\}_{k=0}^{N}$$ be the space nodes in the x and y directions, respectively, with
$$x_{j} = -\cos \frac{j\pi }{N}, \qquad y_{k} = -\cos \frac{k\pi }{N},$$
where the positive integer N denotes the number of nodes in a certain direction. For integer $$K>0$$, let $$\Delta t=T/K$$ be the time step, that is, $$K\Delta t = T$$. We approximate $$u(x,y,n\Delta t)$$ with $$u^{n}$$, the time derivative $$u_{t}$$ of $$u(x,y,t)$$ at time $$t_{n}=n \Delta t$$ with $${(u^{n+1}-u^{n})}/{\Delta t}$$, and $$u^{n}(x,y)$$ with $$u_{N}^{n}(x,y)$$, namely,
$$u^{n}(x,y) \approx u_{N}^{n}(x,y) = \sum _{j=0}^{N}\sum_{k=0}^{N}u_{N} ^{n}(x_{j},y_{k})h_{j}(x)h_{k}(y), \quad 0\leqslant n\leqslant K,$$
where $$\{h_{j}(x)\}^{N}_{j=0}$$ and $$\{h_{k}(y)\}^{N}_{k=0}$$ are the Lagrange basis polynomials associated with the sets of the CGL points $$\{x_{j}\}^{N}_{j=0}$$ and $$\{y_{k}\}^{N}_{k=0}$$, respectively.
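The CGL nodes and their Lagrange basis polynomials can be evaluated stably with the barycentric interpolation formula, whose weights for CGL nodes are $$(-1)^{j}$$, halved at the two endpoints. A minimal one-dimensional sketch (the function name is our own):

```python
import numpy as np

def lagrange_basis_cgl(N, x_eval):
    """Evaluate the Lagrange basis polynomials h_j attached to the CGL nodes
    x_j = -cos(j*pi/N) at the points x_eval, via the barycentric formula.
    Returns an array H with H[j, m] = h_j(x_eval[m])."""
    j = np.arange(N + 1)
    nodes = -np.cos(j * np.pi / N)
    bw = (-1.0) ** j          # barycentric weights for CGL nodes ...
    bw[0] *= 0.5              # ... halved at both endpoints
    bw[N] *= 0.5
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    H = np.zeros((N + 1, x_eval.size))
    for m, x in enumerate(x_eval):
        diff = x - nodes
        hit = np.isclose(diff, 0.0)
        if hit.any():                 # x coincides with a node: h_j(x_k) = delta_jk
            H[hit, m] = 1.0
        else:
            t = bw / diff
            H[:, m] = t / t.sum()     # normalized barycentric form
    return H
```

At the nodes themselves the basis reduces to the identity, $$h_{j}(x_{k})=\delta _{jk}$$, and at any other point the values sum to 1 (partition of unity).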
Define the $$H_{\omega }^{1}$$-orthogonal projection $$R_{N}: H_{0, \omega }^{1}(\varOmega )\rightarrow P_{N}$$, that is, for any $$u\in H_{0, \omega }^{1}(\varOmega )$$, it satisfies
$$\bigl(\nabla (R_{N}u - u),\nabla v\bigr)_{\omega } = 0, \quad \forall v\in P_{N}.$$
Thus, $$R_{N}$$ has the following important property (see ).

### Theorem 3

For any $$u\in H^{q}_{\omega }(\varOmega )$$ with $$q\geqslant 1$$, we have
$$\Vert \nabla R_{N}u \Vert _{0,\omega }\leqslant \Vert \nabla u \Vert _{0,\omega },\qquad \bigl\Vert \partial ^{k}(R_{N}u-u) \bigr\Vert _{0,\omega } \leqslant CN^{k-q}, \quad 0\leqslant k \leqslant q\leqslant N+1,$$
where C is a general positive constant independent of N and Δt.

Now, we obtain the following CS format for 2D Sobolev equations.

### Problem 4

Find $$u_{N}^{n} \in U_{N} \equiv H_{0,\omega }^{1}(\varOmega )\cap P _{N}$$ such that
$$\textstyle\begin{cases} (u^{n+1}_{N}-u^{n}_{N},v_{N})_{\omega } + \varepsilon (\nabla u_{N} ^{n+1}-\nabla u_{N}^{n},\nabla v_{N})_{\omega }+\gamma \Delta t ( \nabla u_{N}^{n+1},\nabla v_{N})_{\omega } \\ \quad = \Delta t(f(t_{n+1}),v_{N})_{\omega },\quad \forall v_{N}\in U_{N}, 0\leqslant n\leqslant K-1, \\ u^{0}_{N}(x,y) =R_{N} u_{0}(x,y),\quad (x,y)\in \varOmega , \end{cases}$$
(4)
where $$f(t_{n})=f(x,y,t_{n})$$.

The result on the existence, uniqueness, stability, and convergence about the CS solutions for Problem 4 is given in .

### Theorem 5

If $$f\in L^{2}(0,T;L^{2}_{\omega }(\varOmega ))$$ and $$u_{0}\in H^{1}_{ \omega }(\varOmega )$$, then there exists a unique series of solutions $$u_{N}^{n}\in U_{N}$$ ($$n=1, 2, \ldots, K$$) for the CS format (4) satisfying the following stability:
$$\bigl\Vert \nabla u_{N}^{n} \bigr\Vert ^{2}_{0,\omega } \leqslant \Vert \nabla u_{0} \Vert ^{2}_{0, \omega }+\frac{\Delta t}{\gamma } \sum _{j=1}^{n} \bigl\Vert f(t_{j}) \bigr\Vert _{0, \omega }^{2}, \quad n=1, 2, \ldots, K.$$
(5)
Furthermore, when $$\Delta t=O(N^{-1})$$ and solutions of Problem 1 $$u(t_{n})\in H_{\omega }^{q}(\varOmega )$$ ($$2\leqslant q\leqslant N+1$$), the errors between the solution for Problem 1 and the series of solutions of Problem 4 have the following estimates:
\begin{aligned} & \bigl\Vert \nabla \bigl(u(t_{n})-u_{N}^{n} \bigr) \bigr\Vert _{0,\omega }\leqslant C\bigl(\Delta t+N ^{-q+1} \bigr), \quad n=1, 2, \ldots, K, \end{aligned}
(6)
\begin{aligned} & \bigl\Vert u(t_{n})-u_{N}^{n} \bigr\Vert _{0,\omega } \leqslant C\bigl(\Delta t^{2}+N^{-q} \bigr), \quad n=1, 2, \ldots, K. \end{aligned}
(7)

### Remark 1

The error estimates in Theorem 5 attain an optimal order. Theorem 5 shows that the classic CS format, that is, Problem 4, for the 2D Sobolev equations has a unique series of solutions, which is stable and depends continuously on the initial value and the source function. This theoretically ensures that Problem 4 is effective and reliable for solving 2D Sobolev equations.

### 2.3 The matrix representation of the classic CS format

To make the classic CS format for the 2D Sobolev equations easier to understand and to program, we rewrite the classic CS format (4) in matrix form. Let $$u_{N_{j,k}}^{n}$$ ($$0\leqslant j,k\leqslant N$$) denote the spectral approximate values of $$u(x_{j},y_{k},n\Delta t)$$, namely
$$u(x,y,n\Delta t) \approx u_{N}^{n} = \sum _{j=0}^{N}\sum_{k=0} ^{N}u_{N_{j,k}}^{n}h_{j}(x)h_{k}(y).$$
(8)
Taking $$v_{N} = h_{m}(x)h_{l}(y)\in U_{N}$$ ($$0\leqslant m,l\leqslant N$$) in scheme (4), we obtain
\begin{aligned}& \bigl(u_{N}^{n+1},v_{N}\bigr)_{\omega } = \sum_{j=0}^{N}\sum _{k=0}^{N}u _{N_{j,k}}^{n+1} \bigl(h_{j}(x)h_{k}(y),h_{m}(x)h_{l}(y) \bigr)_{\omega }, \\& \begin{aligned} \bigl(\nabla u_{N}^{n+1},\nabla v_{N}\bigr)_{ \omega } &= \sum_{j=0}^{N} \sum_{k=0}^{N}u_{N_{j,k}}^{n+1} \bigl(h^{\prime }_{j}(x)h _{k}(y),h^{\prime }_{m}(x)h_{l}(y) \bigr)_{\omega } \\ &\quad {}+\sum_{j=0}^{N}\sum _{k=0}^{N}u_{N_{j,k}}^{n+1} \bigl(h_{j}(x)h^{\prime }_{k}(y),h _{m}(x)h^{\prime }_{l}(y) \bigr)_{\omega }. \end{aligned} \end{aligned}
From Problem 4 we obtain
\begin{aligned}& \begin{aligned}[b] A_{jm,kl}&=\bigl(h_{j}(x)h_{k}(y),h_{m}(x)h_{l}(y) \bigr)_{\omega } \\ & =\sum_{p=0}^{N}\sum _{q=0}^{N}h_{j}(x_{p})h_{m}(x_{p}) \omega _{p}h_{k}(y _{q})h_{l}(y_{q}) \omega _{q}, \end{aligned} \end{aligned}
(9)
\begin{aligned}& \begin{aligned}[b] B_{jm,kl}&=\bigl(h_{j}^{\prime }(x)h_{k}(y),h_{m}^{\prime }(x)h_{l}(y) \bigr)_{\omega }+\bigl(h _{j}(x)h_{k}^{\prime }(y),h_{m}(x)h_{l}^{\prime }(y) \bigr)_{\omega } \\ & =\sum_{p=0}^{N}\sum _{q=0}^{N}h^{\prime }_{j}(x_{p})h^{\prime }_{m}(x_{p}) \omega _{p}h_{k}(y_{q})h_{l}(y_{q}) \omega _{q} \\ &\quad {} +\sum_{p=0}^{N}\sum _{q=0}^{N}h_{j}(x_{p})h_{m}(x_{p}) \omega _{p}h^{\prime } _{k}(y_{q})h^{\prime }_{l}(y_{q}) \omega _{q}, \end{aligned} \end{aligned}
(10)
where $$0\leqslant j, m, k, l\leqslant N$$.
Then we can rewrite the classic CS format (4) for the 2D Sobolev equations in the following matrix form with $$(N+1)^{2}$$ equations for $$\{\boldsymbol{U}_{N}^{n}\}_{n=0}^{K}$$:
$$\textstyle\begin{cases} (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B})\boldsymbol{U}_{N}^{n+1} = \Delta t\boldsymbol{F}^{n+1}+(\boldsymbol{A}+\varepsilon \boldsymbol{B})\boldsymbol{U}_{N}^{n},\quad 0\leqslant n \leqslant K-1, \\ \boldsymbol{U}_{N}^{0} = \boldsymbol{U}_{0}, \end{cases}$$
(11)
where
\begin{aligned}& \boldsymbol{A} = [A_{jm,kl}]_{(N+1)^{2}\times (N+1)^{2}}, \qquad \boldsymbol{B} =[B_{jm,kl}]_{(N+1)^{2}\times (N+1)^{2}}, \\& \boldsymbol{U}_{N}^{n+1} = \bigl[u_{N_{0,0}}^{n+1},u_{N_{1,0}}^{n+1}, \ldots ,u _{N_{N,0}}^{n+1},u_{N_{0,1}}^{n+1}, u_{N_{1,1}}^{n+1},\ldots ,u_{N _{N,1}}^{n+1},\ldots ,u_{N_{0,N}}^{n+1}, \ldots , u_{N_{N,N}}^{n+1} \bigr]^{T}, \\& \boldsymbol{F}^{n+1} = \bigl[F^{n+1}_{0,0},F^{n+1}_{1,0}, \ldots ,F^{n+1}_{N,0},F ^{n+1}_{0,1},\ldots ,F^{n+1}_{N,1}, \ldots ,F^{n+1}_{0,N},\ldots ,F ^{n+1}_{N,N}\bigr]^{T}, \\& F^{n+1}_{m,l} = f\bigl(x_{m},y_{l},(n+1) \Delta t\bigr), \quad 0\leqslant n\leqslant K-1, \\& \boldsymbol{U}_{0} = \bigl[u_{0}(x_{0},y_{0}),u_{0}(x_{1},y_{0}), \ldots ,u_{0}(x _{N},y_{0}),u_{0}(x_{0},y_{1}), \ldots ,u_{0}(x_{N},y_{1}),\ldots , \\& \hphantom{\boldsymbol{U}_{0} ={}}{}u_{0}(x_{0},y_{N}),\ldots ,u_{0}(x_{N},y_{N})\bigr]^{T}. \end{aligned}
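Per time step, the marching scheme (11) amounts to one linear solve with the fixed matrix $$\boldsymbol{A}+\varepsilon \boldsymbol{B}+\gamma \Delta t\boldsymbol{B}$$. The following minimal sketch uses small random symmetric positive-definite matrices as stand-ins for the assembled spectral matrices A and B (whose actual entries come from (9)–(10)) and a zero source:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, K, dt, eps, gamma = 16, 50, 0.01, 1.0, 1.0

M = rng.standard_normal((n_dof, n_dof))
A = M @ M.T + n_dof * np.eye(n_dof)   # placeholder for the matrix A
M = rng.standard_normal((n_dof, n_dof))
B = M @ M.T + n_dof * np.eye(n_dof)   # placeholder for the matrix B

En = A + eps * B                       # right-hand-side matrix A + eps*B
lhs = En + gamma * dt * B              # left-hand-side matrix of (11)
F = np.zeros(n_dof)                    # homogeneous source for the demo
U0 = rng.standard_normal(n_dof)        # placeholder initial vector U_0

U = U0.copy()
for n in range(K):                     # march (11) from t_0 to t_K
    U = np.linalg.solve(lhs, dt * F + En @ U)
```

With a zero source the step matrix is a contraction in the energy norm induced by $$\boldsymbol{A}+\varepsilon \boldsymbol{B}$$, so the iterates decay, mirroring the stability bound (5).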

### Remark 2

Because the classic CS format adopts the Chebyshev polynomials as basis functions, it has a higher accuracy than general numerical methods, such as the FE method, FD scheme, and FVE method, but it contains as many unknowns as those methods, so it has to bear a heavy computing load. Thus, reducing the order of the classic CS format is even more significant than for other numerical methods. For this purpose, we extract the initial L coefficient vectors $${\boldsymbol{U}}_{N}^{1}, {\boldsymbol{U}}_{N}^{2}, \ldots, {\boldsymbol{U}}_{N}^{L}$$ ($$L\ll K$$) from the series of coefficient vectors $$\{{\boldsymbol{U}}_{N}^{n}\}_{n=1}^{K}$$ of the classic CS matrix format (11) to form a set of snapshots.

## 3 The ROECS method based on POD for 2D Sobolev equations

### 3.1 Formulation of POD basis

We use the set of snapshots obtained in Sect. 2.3 to form a snapshot matrix $$\boldsymbol{P}=({\boldsymbol{U}}_{N}^{1}, {\boldsymbol{U}}_{N}^{2}, \ldots, {\boldsymbol{U}}_{N}^{L})$$ of size $$(N+1)^{2}\times L$$. Let $$\lambda _{j}>0$$ ($$j=1, 2, \ldots , r=: \operatorname{rank}(\boldsymbol{P})$$) be the positive eigenvalues of $$\boldsymbol{P}\boldsymbol{P}^{T}$$ arranged nonincreasingly, and let $$\boldsymbol{U}=(\boldsymbol{\phi }_{1},\boldsymbol{\phi }_{2},\ldots , \boldsymbol{\phi }_{r}) \in \mathbb{R}^{(N+1)^{2}\times r}$$ be the associated orthonormal eigenvectors of $$\boldsymbol{P}\boldsymbol{P}^{T}$$. Thus, the POD basis $$\boldsymbol{\varPhi }=(\boldsymbol{\phi } _{1},\boldsymbol{\phi }_{2},\ldots , \boldsymbol{\phi }_{d})$$ ($$d\leqslant r$$) is formed by the first d vectors in U and satisfies (see )
$$\bigl\Vert \boldsymbol{P}-\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \boldsymbol{P} \bigr\Vert _{2,2}=\sqrt{ \lambda _{d+1}},$$
(12)
where $$\|{\boldsymbol{P}}\|_{2,2}=\sup_{\boldsymbol{\chi }\neq\boldsymbol{0}}{\|{\boldsymbol{P}} \boldsymbol{\chi }\|_{2}}/{\| \boldsymbol{\chi }\|_{2}}$$, and $$\|\boldsymbol{\chi }\|_{2}$$ is the Euclidean norm of the vector χ. Further, we obtain
\begin{aligned} \bigl\Vert \boldsymbol{U}_{N}^{n}-\boldsymbol{\varPhi } \boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n} \bigr\Vert _{2} =& \bigl\Vert \bigl(\boldsymbol{P}-\boldsymbol{\varPhi } \boldsymbol{\varPhi }^{T}\boldsymbol{P}\bigr)\boldsymbol{e}_{n} \bigr\Vert _{2} \\ \leqslant& \bigl\Vert \boldsymbol{P}-\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{P} \bigr\Vert _{2,2} \bigl\Vert \boldsymbol{e}_{n} \bigr\Vert _{2}\leqslant \sqrt{ \lambda _{d+1}},\quad n=1,2, \ldots, L, \end{aligned}
(13)
where $$\boldsymbol{e}_{n}=(0, \ldots, 0, 1, 0, \ldots, 0)^{T}$$ ($$n=1,2,\ldots ,L$$) with the nth component equal to 1. Hence, $$\boldsymbol{\varPhi }=(\boldsymbol{\phi } _{1},\boldsymbol{\phi }_{2},\ldots , \boldsymbol{\phi }_{d})$$ is an optimal POD basis.
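The POD basis and the bound (12) can be obtained directly from a singular value decomposition of P, since the squared singular values of P are the eigenvalues $$\lambda _{j}$$ of $$\boldsymbol{P}\boldsymbol{P}^{T}$$. A sketch with a stand-in random snapshot matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((100, 10))        # stand-in snapshot matrix

# P = U Sigma V^T; the columns of U are the eigenvectors of P P^T and the
# squared singular values are its eigenvalues, in nonincreasing order.
U, sig, _ = np.linalg.svd(P, full_matrices=False)

d = 4
Phi = U[:, :d]                            # first d POD modes
err = np.linalg.norm(P - Phi @ Phi.T @ P, 2)
lam = sig ** 2                            # eigenvalues lambda_j of P P^T
```

Here `err` equals $$\sqrt{\lambda _{d+1}}$$ exactly (with 0-based indexing, `lam[d]`), which is precisely the optimality property (12).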

### Remark 3

Since the order $$(N+1)^{2}$$ of the matrix $${\boldsymbol{P}}{\boldsymbol{P}}^{T}$$ is far larger than the order L of the matrix $${\boldsymbol{P}}^{T}{\boldsymbol{P}}$$, that is, the number $$(N+1)^{2}$$ of spatial mesh nodes is far larger than the number L of extracted snapshots, whereas both matrices have the same positive eigenvalues $$\lambda _{i}$$ ($$i=1,2,\ldots,r$$), we may first find the eigenvalues $$\lambda _{i}$$ ($$i=1,2,\ldots,r$$) of $${\boldsymbol{P}}^{T}{\boldsymbol{P}}$$ and the associated eigenvectors $${\boldsymbol{\varphi }} _{i}$$ ($$i=1,2,\ldots,r$$), and then by the formula $${\boldsymbol{\phi }}_{i}= {\boldsymbol{P}}{\boldsymbol{\varphi }}_{i}/\sqrt{{\lambda _{i}}}$$ ($$i=1,2,\ldots,r$$) gain the eigenvectors $${\boldsymbol{\phi }}_{i}$$ associated with the positive eigenvalues $$\lambda _{i}$$ of $${\boldsymbol{P}}{\boldsymbol{P}}^{T}$$, so that we can conveniently obtain the POD basis.
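Remark 3 in code (the "method of snapshots"): eigendecompose the small L × L matrix $$\boldsymbol{P}^{T}\boldsymbol{P}$$ and lift, rather than working with the large matrix $$\boldsymbol{P}\boldsymbol{P}^{T}$$. A sketch with a stand-in tall snapshot matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((200, 8))        # tall stand-in snapshot matrix (L = 8)

lam, V = np.linalg.eigh(P.T @ P)         # eigenpairs of the small L x L matrix
lam, V = lam[::-1], V[:, ::-1]           # reorder nonincreasingly
Phi = P @ V / np.sqrt(lam)               # phi_i = P varphi_i / sqrt(lambda_i)
```

The lifted columns of `Phi` are orthonormal eigenvectors of $$\boldsymbol{P}\boldsymbol{P}^{T}$$ for the same eigenvalues, as Remark 3 asserts.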

### 3.2 Establishment of the ROECS model

By (13) in Sect. 3.1 we can obtain the first L ($$L\leqslant K$$) coefficient vectors of the ROECS solutions: $$\boldsymbol{U}_{d} ^{n}=\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n}=:\boldsymbol{\varPhi }\boldsymbol{\beta } _{d}^{n}$$ ($$n=1,2,\ldots , L$$), where $$\boldsymbol{U}_{d}^{n}=(u_{d,0,0}^{n}, u_{d,1,0}^{n},\ldots ,u_{d, N,0}^{n},u_{d,0,1}^{n}, u_{d,1,1}^{n}, \ldots , u_{d,N,1}^{n},\ldots ,u_{d,0,N}^{n},u_{d,1,N}^{n},\ldots , u _{d,N,N}^{n})^{T}$$ and $$\boldsymbol{\beta }_{d}^{n}=(\beta _{1}^{n}, \beta _{2}^{n},\ldots ,\beta _{d}^{n})^{T}$$. When the coefficient vectors $$\boldsymbol{U}_{N}^{n}$$ in (11) are replaced with $$\boldsymbol{U}_{d}^{n}= \boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n}$$ ($$n=L+1, L+2, \ldots ,K$$), we obtain the following ROECS format:
$$\textstyle\begin{cases} \boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n} =\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n},\quad 1\leqslant n\leqslant L, \\ (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B})\boldsymbol{\varPhi } \boldsymbol{\beta }_{d}^{n+1} =(\boldsymbol{A}+\varepsilon \boldsymbol{B})\boldsymbol{\varPhi } \boldsymbol{\beta }_{d}^{n} +\Delta t\boldsymbol{F}^{n+1},\quad L\leqslant n \leqslant K-1, \\ \boldsymbol{U}_{d}^{n}=\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n}, \quad n=1,2, \ldots, K, \end{cases}$$
(14)
where $$\boldsymbol{U}_{N}^{n}$$ ($$n=1,2,\ldots ,L$$) are the initial L coefficient vectors in (11), and the matrices A and B are provided in (11). Further, due to the reversibility of the matrix $$(\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t \boldsymbol{B} )$$, the format (14) is abbreviated as follows:
$$\textstyle\begin{cases} \boldsymbol{\beta }^{n}_{d}=\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n},\quad 1\leqslant n\leqslant L, \\ \boldsymbol{\beta }_{d}^{n+1} =\boldsymbol{\beta }_{d}^{n}-\gamma \Delta t\varPhi ^{T}( \boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B})^{-1}\boldsymbol{B} \boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n} \\ \hphantom{\boldsymbol{\beta }_{d}^{n+1} ={}}{}+\Delta t\varPhi ^{T}(\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B})^{-1} \boldsymbol{F}^{n+1}, \quad L\leqslant n \leqslant K-1, \\ \boldsymbol{U}_{d}^{n}=\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n}, \quad n=1, 2, \ldots, K. \end{cases}$$
(15)
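Since the matrices in (15) do not change with n, the reduced march can precompute two small operators once and then advance only the d-vector β at cost $$O(d^{2})$$ per step. A sketch with random symmetric positive-definite placeholders for A and B, an orthonormal placeholder for Φ, and a placeholder snapshot coefficient vector:

```python
import numpy as np

rng = np.random.default_rng(3)
n_dof, d, K, L, dt, eps, gamma = 64, 6, 40, 10, 0.01, 1.0, 1.0

M = rng.standard_normal((n_dof, n_dof)); A = M @ M.T + n_dof * np.eye(n_dof)
M = rng.standard_normal((n_dof, n_dof)); B = M @ M.T + n_dof * np.eye(n_dof)
Phi, _ = np.linalg.qr(rng.standard_normal((n_dof, d)))  # orthonormal POD basis

inv = np.linalg.inv(A + eps * B + gamma * dt * B)
G = np.eye(d) - gamma * dt * Phi.T @ inv @ B @ Phi      # d x d step matrix
Q = dt * Phi.T @ inv                                    # reduced load operator

beta = Phi.T @ rng.standard_normal(n_dof)               # beta^L from a snapshot
F = np.zeros(n_dof)                                     # zero source for the demo
for n in range(L, K):
    beta = G @ beta + Q @ F                             # cost O(d^2) per step
U_d = Phi @ beta                                        # lift back, as in (15)
```

Only the d-dimensional recursion runs at every time step; the full-order quantities appear solely in the one-time precomputation and in the final lift.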

### Remark 4

Equation (11) contains $$(N+1)^{2}$$ unknowns at each time node, whereas the ROECS model, that is, the format (15), involves only d unknowns at the same time node ($$d\leqslant L\ll (N+1)^{2}$$; for example, $$d=6$$, but $$(N+1)^{2}=10\text{,}201$$ in Sect. 4), so the format (15) is obviously superior to equation (11). After we have gained $$\boldsymbol{U}_{d}^{n}=(u_{d,0,0}^{n}, u_{d,1,0}^{n},\ldots ,u _{d, N,0}^{n},u_{d,0,1}^{n}, u_{d,1,1}^{n},\ldots ,u_{d,N,1}^{n}, \ldots ,u_{d,0,N}^{n},u_{d,1,N}^{n},\ldots , u_{d,N,N}^{n})^{T}$$ ($$1\leqslant n\leqslant K$$) via (15), we can obtain the ROECS solutions by the formula $$u_{d}^{n}(x,y) = \sum_{j=0}^{N}\sum_{k=0}^{N}u_{d,j,k} ^{n}h_{j}(x) h_{k}(y)$$ ($$n=1, 2, \ldots, K$$).

### 3.3 The existence, stability, and convergence for the ROECS solutions

To discuss the existence, stability, and convergence of the ROECS solutions, we consider the max-norms of a matrix and a vector (for more detail, see ), which are, respectively, defined by
\begin{aligned}& \|{\boldsymbol{D}}\|_{\infty }=\max_{1\leqslant i\leqslant m}\sum _{j=1}^{l}|d_{ij}|,\quad \forall { \boldsymbol{D}}=(d_{ij})_{m\times l}\in \mathbb{R}^{m\times l}, \\& \|{\boldsymbol{\chi }}\|_{\infty }=\max_{1\leqslant j\leqslant m}|\chi _{j}|,\quad \forall {\boldsymbol{\chi }}=(\chi _{1}, \chi _{2}, \ldots, \chi _{m})^{T}\in \mathbb{R}^{m}. \end{aligned}
We also employ the following discrete Gronwall inequality (see [46, Lemma 3.4] or [36, Lemma 1.4.1]).

### Lemma 6

If $$\{a_{n}\}$$ and $$\{b_{n}\}$$ are two nonnegative sequences and $$\{c_{n}\}$$ is a positive monotone sequence satisfying
$$a_{n}+b_{n}\leqslant c_{n}+\bar{\lambda }\sum _{i=0}^{n-1}a_{i} \quad (\bar{ \lambda }>0), \qquad a_{0}+b_{0}\leqslant c_{0},$$
then
$$a_{n}+b_{n}\leqslant c_{n}\exp (n\bar{\lambda }), \quad n=0, 1,2, \ldots .$$
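A quick numerical sanity check of Lemma 6: taking $$b_{n}=0$$, constant $$c_{n}=c$$, and $$a_{n}$$ defined so that the hypothesis holds with equality gives $$a_{n}=c(1+\bar{\lambda })^{n}$$, which indeed stays below the bound $$c\exp (n\bar{\lambda })$$:

```python
import numpy as np

# Worst case of Lemma 6 with b_n = 0 and c_n = c: equality in the hypothesis.
c, lam, n_max = 1.0, 0.3, 20
a = np.empty(n_max + 1)
a[0] = c
for n in range(1, n_max + 1):
    a[n] = c + lam * a[:n].sum()     # a_n = c + lam * (a_0 + ... + a_{n-1})
```

By induction this recursion gives $$a_{n}=c(1+\bar{\lambda })^{n}\leqslant c e^{n\bar{\lambda }}$$, since $$1+\bar{\lambda }\leqslant e^{\bar{\lambda }}$$.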

We have the following main result of the existence, stability, and convergence of the ROECS solutions for the format (15).

### Theorem 7

Under the conditions of Theorem 5, there exists a unique series of ROECS solutions $$u_{d}^{n}$$ ($$n=1, 2, \ldots, K$$) satisfying the following stability:
$$\bigl\Vert \nabla u_{d}^{n} \bigr\Vert _{0,\omega }\leqslant C(u_{0},f),\quad n=1, 2, \ldots, L, L+1, L+2, \ldots, K,$$
(16)
where $$C(u_{0},f)$$ is a positive constant dependent on $$u_{0}$$ and f but independent of N and Δt. Moreover, when $$u(t_{n}) \in H_{\omega }^{q}(\varOmega )$$ ($$2\leqslant q\leqslant N+1$$), the errors between the solutions for Problem 1 and the ROECS solutions $$u_{d}^{n}$$ ($$n=1, 2, \ldots, K$$) have the following estimates:
$$\bigl\Vert u(t_{n})-u_{d}^{n} \bigr\Vert _{0,\omega } \leqslant {C} \bigl(\Delta t^{2}+N ^{-q}+\sqrt{\lambda _{d+1}} \bigr), \quad 1\leqslant n \leqslant K.$$
(17)

### Proof

(1) The existence and stability for the ROECS solutions.

Due to the reversibility of the matrix $$(\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} )$$, from the format (15) and Remark 4 we can conclude that the format (15) has a unique series of the ROECS solutions.

From (14) we can recover the following format:
\begin{aligned}& \boldsymbol{U}_{d}^{n} =\boldsymbol{\varPhi } \boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n}, \quad 1\leqslant n\leqslant L; \end{aligned}
(18)
\begin{aligned}& \begin{aligned}[b] \boldsymbol{U}_{d}^{n+1}&= \boldsymbol{U}_{d}^{n}-\gamma \Delta t (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} )^{-1} \boldsymbol{B}\boldsymbol{U}_{d}^{n} \\ &\quad {} +\Delta t (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} ) ^{-1}\boldsymbol{F}^{n},\quad L\leqslant n \leqslant K-1. \end{aligned} \end{aligned}
(19)

Write $$\boldsymbol{H}(x,y)=(h_{0}(x)h_{0}(y),h_{1}(x)h_{0}(y),\ldots ,h_{N}(x)h _{0}(y),h_{0}(x)h_{1}(y), h_{1}(x)h_{1}(y), \ldots , h_{N}(x) h _{1}(y), \ldots , h_{0}(x)h_{N}(y), h_{1}(x)h_{N}(y), \ldots , h_{N}(x)h _{N}(y))^{T}$$. Then we denote the solutions for Problem 4 by $$u_{N}^{n} =(\boldsymbol{U}_{N}^{n})^{T}\boldsymbol{H}(x,y)=\boldsymbol{U} _{N}^{n}\cdot \boldsymbol{H}(x,y)$$. Similarly, $$u_{d}^{n} = (\boldsymbol{U}_{d}^{n})^{T} \boldsymbol{H}(x,y) = \boldsymbol{U}_{d}^{n}\cdot \boldsymbol{H}(x,y)$$.

When $$n=1, 2, \ldots, L$$, we have
\begin{aligned} \bigl\| u_{d}^{n}\bigr\| _{0,\omega } =&\bigl\| \boldsymbol{\varPhi } \boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N} ^{n}\cdot \boldsymbol{H}(x,y)\bigr\| _{0,\omega }\leqslant \bigl\| \boldsymbol{\varPhi }\boldsymbol{ \varPhi }^{T}\bigr\| _{\infty }\bigl\| \boldsymbol{U}_{N}^{n} \cdot \boldsymbol{H}(x,y)\bigr\| _{0,\omega } \\ \leqslant& \bigl\| u_{N}^{n}\bigr\| _{0,\omega }, \quad n=1,2, \ldots, L. \end{aligned}
(20)
Furthermore, by Theorem 5 we conclude that (16) is correct for $$n=1, 2, \ldots, L$$.
When $$n=L+1, L+2, \ldots, K$$, we rewrite (19) as follows:
\begin{aligned} \bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{\infty } \leqslant& \bigl\Vert \boldsymbol{U}_{d}^{n} \bigr\Vert _{\infty }+\gamma \Delta t \bigl\Vert (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} ) ^{-1} \boldsymbol{B} \bigr\Vert _{\infty } \bigl\Vert \boldsymbol{U}_{d}^{n} \bigr\Vert _{\infty } \\ &{} +\Delta t \bigl\Vert (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} ) ^{-1} \bigr\Vert _{\infty } \bigl\Vert \boldsymbol{F}^{n} \bigr\Vert _{\infty },\quad L \leqslant n \leqslant K-1. \end{aligned}
(21)
Moreover, from the FE method (see, e.g., [46, Lemmas 1.18 and 1.22]), the CS method (see, e.g., [6, Chapters II and III]), and the properties of matrix norms we can attain the inequalities
$$\Vert {\boldsymbol{A}} \Vert _{\infty }\leqslant C; \qquad \bigl\Vert {\boldsymbol{A}}^{-1} \bigr\Vert _{\infty } \leqslant C; \qquad \Vert {\boldsymbol{B}} \Vert _{\infty }\leqslant CN; \qquad \bigl\Vert {\boldsymbol{B}}^{-1} \bigr\Vert _{\infty } \leqslant CN^{-1}.$$
(22)
Furthermore, by the properties of matrices (see [45, Lemma 1.4.1]) and (22) we obtain
\begin{aligned}& \begin{aligned}[b] \bigl\Vert (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} ) ^{-1} \bigr\Vert _{\infty }&=\frac{1}{\varepsilon +\gamma \Delta t} \bigl\Vert \bigl(\gamma \Delta t \boldsymbol{A}\boldsymbol{B}^{-1} +\boldsymbol{I} \bigr)^{-1} \boldsymbol{B}^{-1} \bigr\Vert _{\infty } \\ &\leqslant \frac{1}{\varepsilon +\gamma \Delta t} \bigl\Vert \boldsymbol{B}^{-1} \bigr\Vert _{\infty }\leqslant CN ^{-1}, \end{aligned} \end{aligned}
(23)
\begin{aligned}& \bigl\Vert (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} ) ^{-1}\boldsymbol{B} \bigr\Vert _{\infty }=\frac{1}{\varepsilon +\gamma \Delta t} \bigl\Vert \bigl(\gamma \Delta t \boldsymbol{A}\boldsymbol{B}^{-1} +\boldsymbol{I} \bigr)^{-1} \bigr\Vert _{\infty } \leqslant \frac{1}{ \varepsilon +\gamma }. \end{aligned}
(24)
Thus, from (21), (23), and (24) we get
$$\bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{\infty }\leqslant \bigl\Vert \boldsymbol{U}_{d}^{n} \bigr\Vert _{2}+C\Delta t \bigl\Vert \boldsymbol{U}_{d}^{n} \bigr\Vert _{\infty } +C\Delta tN^{-1} \bigl\Vert \boldsymbol{F}^{n} \bigr\Vert _{\infty },\quad L\leqslant n \leqslant K-1.$$
(25)
Summing (25) from L to n, we obtain
$$\bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{\infty }\leqslant \bigl\Vert \boldsymbol{U}_{d}^{L} \bigr\Vert _{2}+C\Delta t \sum_{i=L}^{n} \bigl\Vert \boldsymbol{U}_{d}^{i} \bigr\Vert _{\infty } +C\Delta tN^{-1}\sum_{i=L} ^{n} \bigl\Vert \boldsymbol{F}^{i} \bigr\Vert _{\infty }, \quad L\leqslant n \leqslant K-1.$$
(26)
Applying the discrete Gronwall lemma (Lemma 6) to (26), we get
$$\bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{\infty }\leqslant \Biggl( \bigl\Vert \boldsymbol{U}_{d}^{L} \bigr\Vert _{2}+C \Delta tN^{-1}\sum _{i=L}^{n} \bigl\Vert \boldsymbol{F}^{i} \bigr\Vert _{\infty } \Biggr)\exp \bigl[C(n-L) \Delta t\bigr],$$
(27)
where $$L\leqslant n \leqslant K-1$$. Thus, we obtain
\begin{aligned} \bigl\Vert u_{d}^{n} \bigr\Vert _{0,\omega } =& \bigl\Vert \boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \boldsymbol{U}_{N}^{n} \cdot \boldsymbol{H}(x,y) \bigr\Vert _{0,\omega } \\ \leqslant& \bigl\Vert \boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \bigr\Vert _{\infty } \bigl\Vert \boldsymbol{U}_{N}^{n} \bigr\Vert _{\infty } \bigl\Vert \boldsymbol{H}(x,y) \bigr\Vert _{0,\omega }\leqslant C(u_{0},f), \quad L+1\leqslant n \leqslant K, \end{aligned}
(28)
which shows that (16) is correct for $$n=L+1, L+2, \ldots, K$$.
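For completeness, the discrete Gronwall lemma (Lemma 6) used in passing from (26) to (27), and again later in the error estimate, can be stated in the following standard form; the constants here are generic and may differ slightly in form from the paper's Lemma 6:

```latex
\textbf{Lemma (discrete Gronwall, standard form).}
Let $\{a_n\}_{n\geqslant L}$ be nonnegative and let $\beta, b \geqslant 0$ satisfy
\[
  a_{n+1} \leqslant \beta + b + C\,\Delta t\sum_{i=L}^{n} a_i ,
  \quad n = L, L+1, \ldots
\]
Then
\[
  a_{n+1} \leqslant (\beta + b)\exp \bigl[C(n-L+1)\,\Delta t\bigr],
  \quad n = L, L+1, \ldots
\]
```

Taking $$a_{n}=\|\boldsymbol{U}_{d}^{n}\|_{\infty }$$, $$\beta =\|\boldsymbol{U}_{d}^{L}\|_{2}$$, and $$b=C\Delta tN^{-1}\sum_{i=L}^{n}\|\boldsymbol{F}^{i}\|_{\infty }$$ yields (27) up to the value of the generic constant C.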

(2) Error estimates (17).

Set $$\boldsymbol{e}^{n}=\boldsymbol{U}_{N}^{n}-\boldsymbol{U}_{d}^{n}$$. For $$n=1, 2, \ldots, L$$, from (13) we get
\begin{aligned} \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty } \leqslant& \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{2}= \bigl\Vert \boldsymbol{U}_{N}^{n}- \boldsymbol{U}_{d}^{n} \bigr\Vert _{2} \\ =& \bigl\Vert \boldsymbol{U}_{N}^{n}-\boldsymbol{\varPhi } \boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n} \bigr\Vert _{2}\leqslant \sqrt{ \lambda _{d+1}},\quad n=1,2, \ldots, L. \end{aligned}
(29)
For $$n=L+1, L+2, \ldots, K$$, from (11) and (19) we obtain
$$\boldsymbol{e}^{n+1}=\boldsymbol{e}^{n}- \gamma \Delta t (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t \boldsymbol{B} )^{-1} \boldsymbol{B}\boldsymbol{e}^{n}, \quad L\leqslant n \leqslant K-1.$$
(30)
Further, from (24) we get
\begin{aligned} \bigl\Vert \boldsymbol{e}^{n+1} \bigr\Vert _{\infty } \leqslant& \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty }+\gamma \Delta t \bigl\Vert (\boldsymbol{A} + \varepsilon \boldsymbol{B} +\gamma \Delta t\boldsymbol{B} ) ^{-1} \boldsymbol{B} \bigr\Vert _{\infty } \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty } \\ \leqslant& \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty }+C\Delta t \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty }, \quad L\leqslant n \leqslant K-1. \end{aligned}
(31)
Summing (31) from L to n, we obtain
$$\bigl\Vert \boldsymbol{e}^{n+1} \bigr\Vert _{\infty }\leqslant \bigl\Vert \boldsymbol{e}^{L} \bigr\Vert _{\infty }+C\Delta t\sum_{i=L}^{n} \bigl\Vert \boldsymbol{e}^{i} \bigr\Vert _{\infty },\quad L\leqslant n \leqslant K-1.$$
(32)
Applying the discrete Gronwall lemma to (32), from (29) we get
$$\bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty }\leqslant \bigl\Vert \boldsymbol{e}^{L} \bigr\Vert _{\infty }\exp \bigl[C\Delta t(n-L)\bigr] \leqslant C\sqrt{\lambda _{d+1}} , \quad L+1\leqslant n \leqslant K.$$
(33)
Thus, by $$u_{N}^{n}={\boldsymbol{U}}_{N}^{n}\cdot \boldsymbol{H}$$, $$u_{d}^{n}={\boldsymbol{U}} _{d}^{n}\cdot \boldsymbol{H}$$, and $$\|{\boldsymbol{H}}\|_{0,\omega }\leqslant 1$$, with the orthogonality of elements in $$\boldsymbol{H}(x,y)$$ and the inverse estimate theorem, we attain
\begin{aligned} \bigl\Vert u_{N}^{n}-u_{d}^{n} \bigr\Vert _{0,\omega } =& \bigl\Vert \boldsymbol{e}^{n}\cdot \boldsymbol{H}(x,y) \bigr\Vert _{0,\omega }\leqslant \bigl\Vert \boldsymbol{e}^{n} \bigr\Vert _{\infty } \bigl\Vert \boldsymbol{H}(x,y) \bigr\Vert _{0,\omega } \\ \leqslant& C\sqrt{\lambda _{d+1}}, \quad 1\leqslant n \leqslant K. \end{aligned}
(34)
Combining Theorem 5 and (34), we obtain (17). This completes the proof of Theorem 7. □

### Remark 5

We make two remarks about the ROECS format.
1. (1)

The factor $$\sqrt{\lambda _{d+1}}$$ in Theorem 7 is caused by the order reduction of the CS format and can serve as the criterion for choosing the POD basis; that is, the number d of POD basis vectors and the number L of snapshots should be chosen so that $$\sqrt{\lambda _{d+1}}\leqslant \max \{\Delta t^{2},N^{-2}\}$$.

2. (2)

Clearly, the matrix form of the classic CS format (11) contains $$(N+1)^{2}$$ unknowns at each time node, whereas the ROECS format (15) has only d unknowns ($$d\leqslant L\ll (N+1)^{2}$$; for instance, $$d=6$$, but $$(N+1)^{2}=10\text{,}201$$ in Sect. 4) at the same time node. Therefore, compared with the classic CS format, the ROECS format can greatly reduce the number of unknowns, thereby alleviating the computational load and saving CPU time and storage in solving the 2D Sobolev equations.
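The savings described in this remark come from projecting the full-order operators onto span(Φ) once, after which each time step acts only on d coefficients. The following is an illustrative sketch of that idea in plain Galerkin form, assuming Φ has orthonormal columns; it is not the paper's exact format (15), and the function names are ours:

```python
import numpy as np

def build_reduced_operators(Phi, A, B, dt, eps, gamma):
    """Precompute once: project the full-order operators of (19) onto span(Phi)."""
    M = A + (eps + gamma * dt) * B
    G = np.linalg.solve(M, B @ Phi)   # M^{-1} B Phi, shape (N+1)^2 x d
    R = Phi.T @ G                     # d x d reduced operator
    S = Phi.T @ np.linalg.inv(M)      # d x (N+1)^2, applied to F^n each step
    return R, S

def reduced_step(alpha_n, F_n, R, S, dt, gamma):
    """One reduced step for the d coefficients alpha (U_d = Phi @ alpha).
    Each step now costs O(d^2) instead of a full (N+1)^2 solve."""
    return alpha_n - gamma * dt * (R @ alpha_n) + dt * (S @ F_n)
```

After the one-time setup, the time loop never touches an $$(N+1)^{2}$$-dimensional system, which is the source of the CPU-time savings reported in Sect. 4.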

### 3.4 The flowchart for solving the ROECS format

We further provide the flowchart of finding the ROECS solutions for 2D Sobolev equations, which consists of the following five steps.
Step 1.:

Given the parameters ε and γ, the source term $$f(x,y,t)$$, the initial function $$u_{0}(x,y)$$, the number of nodes N in each of the x and y directions, the nodes $$x_{m}=-\cos (m\pi /N)$$ ($$m=0,1,\ldots ,N$$) and $$y_{l}=-\cos (l\pi /N)$$ ($$l=0,1,\ldots ,N$$), and the time increment Δt, solve the classic CS format (11) on the first L steps to obtain the numerical solutions $$\boldsymbol{U}^{n}_{N}$$ ($$1\leqslant n\leqslant L$$).

Step 2.:

Put $$\boldsymbol{P}=(\boldsymbol{U}^{1}_{N}, \boldsymbol{U}^{2}_{N}, \ldots, \boldsymbol{U}^{L}_{N})_{(N+1)^{2}\times L}$$ and seek the positive eigenvalues $$\lambda _{1}\geqslant \lambda _{2}\geqslant \cdots \geqslant \lambda _{r}> 0$$, where $$r=\dim \{u_{N}^{n}:1\leqslant n\leqslant L\}$$, and the associated eigenvectors $$\boldsymbol{\varphi }_{i}$$ ($$i=1,2,\ldots ,r$$) of $$\boldsymbol{P}^{T}\boldsymbol{P}$$.

Step 3.:

Determine the number d of POD basis vectors by means of the inequality $$\lambda _{d+1}\leqslant \max \{\Delta t^{4},N^{-2q}\}$$ and produce the POD basis $$\boldsymbol{\varPhi }= (\phi _{1},\phi _{2},\ldots ,\phi _{d})$$ by the formula $${\boldsymbol{\phi }}_{i}={\boldsymbol{P}}{\boldsymbol{\varphi }}_{i}/\sqrt{ {\lambda _{i}}}$$ ($$1\leqslant i\leqslant d$$).

Step 4.:

First, obtain the coefficient vectors $$\boldsymbol{U}_{d}^{n}=(u _{d,0,0}^{n}, u_{d,1,0}^{n},\ldots ,u_{d, N,0}^{n},u_{d,0,1}^{n}, u_{d,1,1}^{n},\ldots , u_{d,N,1}^{n},\ldots , u_{d,0,N}^{n},u _{d,1,N}^{n},\ldots , u_{d,N,N}^{n})^{T}$$ ($$1\leqslant n\leqslant K$$) by solving the ROECS format (15), and then recover the ROECS solutions by the formula $$u_{d}^{n}(x,y) = \sum_{j=0} ^{N}\sum_{k=0}^{N}u_{d,j,k}^{n}h_{j}(x)h_{k}(y)$$ ($$n=1, 2, \ldots, K$$).

Step 5.:

If $$\|u_{d}^{n}-u_{d}^{n+1}\|_{0,\omega }\leqslant \max \{\Delta t^{2},N^{-q}\}$$ ($$n=L,L+1,\ldots ,K-1$$), then end. Otherwise, set $$\boldsymbol{U}_{N}^{i}=\boldsymbol{U}_{d}^{n-L+i}$$ ($$i=1,2,\ldots ,L$$) and return to Step 2.
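Steps 1 to 3 of the flowchart above can be sketched compactly in NumPy. This is a minimal illustration assuming the snapshot matrix **P** is available as an array; the function names are ours, and the tolerance `tol` stands for $$\max \{\Delta t^{2},N^{-q}\}$$ applied to $$\sqrt{\lambda _{d+1}}$$:

```python
import numpy as np

def cgl_nodes(N):
    # Chebyshev-Gauss-Lobatto nodes of Step 1: x_m = -cos(m*pi/N).
    return -np.cos(np.arange(N + 1) * np.pi / N)

def pod_basis(P, tol):
    """Steps 2-3: eigen-decompose P^T P (an L x L problem), keep the d leading
    modes with sqrt(lambda_{d+1}) <= tol, and form phi_i = P varphi_i / sqrt(lambda_i)."""
    lam, V = np.linalg.eigh(P.T @ P)
    lam, V = lam[::-1], V[:, ::-1]        # eigh is ascending; sort descending
    lam = np.clip(lam, 0.0, None)         # guard tiny negative round-off
    d = next((k for k in range(len(lam)) if np.sqrt(lam[k]) <= tol), len(lam))
    d = max(d, 1)                         # keep at least one mode
    Phi = P @ V[:, :d] / np.sqrt(lam[:d])
    return Phi, lam, d
```

Working with the small $$L\times L$$ matrix $$\boldsymbol{P}^{T}\boldsymbol{P}$$ rather than the $$(N+1)^{2}\times (N+1)^{2}$$ matrix $$\boldsymbol{P}\boldsymbol{P}^{T}$$ is what makes the snapshot approach cheap; the resulting columns of Φ are orthonormal.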

## 4 Some numerical examples

In this section, we present two sets of comparative numerical examples to show the advantages of the ROECS method for the 2D Sobolev equations.

### Example 1

In the Sobolev equations, we take $$f(x,y,t) = 2(\cos 2\pi x\cos 2\pi y -1)\exp (-2t)$$, $$u_{0}(x,y) = 1-\cos 2\pi x\cos 2\pi y$$ (depicted in Fig. 1), $$\varphi (x,y,t)=u _{0}(x,y)\exp (-2t)$$, $$\varepsilon = {1}/{\pi ^{2}}$$, $$\gamma ={2}/ {\pi ^{2}}$$, the time step $$\Delta t = 0.01$$, the number of nodes in each direction $$N = 100$$, and $$q=2$$.

Figure 1 The initial value function when $$t=0$$
We first compute the initial $$L=20$$ coefficient vectors $$\boldsymbol{U}_{N} ^{n}$$ of the CS solutions of (11) at the time nodes $$t_{n}$$ ($$n=1,2,\ldots ,20$$) to form the snapshot matrix $$\boldsymbol{P}=(\boldsymbol{U}_{N} ^{1}, \boldsymbol{U}_{N}^{2}, \ldots, \boldsymbol{U}_{N}^{20})$$. Then we find the eigenvalues $$\lambda _{1}\geqslant \lambda _{2}\geqslant \cdots \geqslant \lambda _{20}\geqslant 0$$ and the associated eigenvectors $$\boldsymbol{\varphi }_{i}$$ ($$i=1,2,\ldots ,r$$) of $$\boldsymbol{P}^{T}\boldsymbol{P}$$ according to Step 2 in Sect. 3.4. By computation we find that the error factor satisfies $$\sqrt{\lambda _{7}}\leqslant 4\times 10^{-4}$$. Thus we only need $$d=6$$ POD basis vectors $$\boldsymbol{\varPhi }= (\phi _{1},\phi _{2},\ldots ,\phi _{d})$$, produced by the formula $${\boldsymbol{\phi }}_{i}={\boldsymbol{P}}{\boldsymbol{\varphi }}_{i}/\sqrt{ {\lambda _{i}}}$$ ($$1\leqslant i\leqslant d$$). Then, by the ROECS format, we find the ROECS solutions at $$T =0.3, 0.6, 0.9$$, depicted in panels (b) of Figs. 2 to 4, respectively.

Figure 2 (a) The classic CS solution; (b) the ROECS solution; (c) the error between the ROECS solution and the CS solution when $$t=0.3$$

Figure 3 (a) The classic CS solution; (b) the ROECS solution; (c) the error between the ROECS solution and the CS solution when $$t=0.6$$

Figure 4 (a) The classic CS solution; (b) the ROECS solution; (c) the error between the ROECS solution and the CS solution when $$t=0.9$$

To make a reasonable comparison, we also compute the classic CS solutions at $$T =0.3, 0.6, 0.9$$ by the CS format (4), depicted in panels (a) of Figs. 2 to 4, respectively.

The comparison of panels (a) and (b) in Figs. 2 to 4 shows that the CS and ROECS solutions are almost identical. In the computational process, the ROECS format contains only six unknowns at each time level, whereas the classic CS format has 10,201 unknowns. Therefore, the ROECS format can not only alleviate the computational load and reduce the accumulation of round-off errors, but also save CPU time and resources. Panels (c) in Figs. 2 to 4 show the errors between the ROECS solutions and the CS solutions at $$t=0.3, 0.6, 0.9$$, respectively, which accord with the theoretical estimates, since both errors are $$O(10^{-4})$$.
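The errors in panels (c) are measured in the Chebyshev-weighted norm $$\|\cdot \|_{0,\omega }$$; on the CGL grid this norm can be approximated by Gauss-Chebyshev-Lobatto quadrature, whose weights are π/N at interior nodes and π/(2N) at the endpoints. A small sketch (the function name is ours):

```python
import numpy as np

def weighted_l2_norm(U, N):
    """Approximate the Chebyshev-weighted L2 norm ||u||_{0,omega} of grid
    values U[j, k] = u(x_j, y_k) on the (N+1) x (N+1) CGL nodes, using
    Gauss-Chebyshev-Lobatto quadrature in each direction."""
    w = np.full(N + 1, np.pi / N)      # interior weights pi/N
    w[0] = w[-1] = np.pi / (2 * N)     # endpoint weights pi/(2N)
    W = np.outer(w, w)                 # tensor-product 2D weights
    return float(np.sqrt(np.sum(W * U**2)))
```

As a sanity check, the constant function u ≡ 1 has $$\|1\|_{0,\omega }=\pi $$ in 2D, since $$\int _{-1}^{1}(1-x^{2})^{-1/2}\,dx=\pi $$ in each direction, and the quadrature reproduces this exactly.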

From the run records of solving the classic CS format and the ROECS format on the same laptop (Microsoft Surface Book: Intel Core i7 processor, 16 GB RAM), we find that the CPU time for solving the classic CS format on $$0\leqslant t\leqslant 0.9$$ is about 6059 seconds, whereas the CPU time for solving the ROECS format is about 48 seconds; that is, solving the classic CS format takes about 125 times as long as solving the ROECS format. This shows that the ROECS format is far superior to the classic CS format.

### Example 2

To further compare the CS and ROECS methods, we give another example, this one with a known analytical solution. In the 2D Sobolev equation (1), we take $$\varepsilon = 1$$, $$\gamma = 10$$, $$f(x,y,t) = (\frac{1}{2}+ \varepsilon \pi ^{2} + 2\gamma \pi ^{2})\sin \pi x \sin \pi y \exp ({t/2})$$, $$u_{0}(x,y) = \sin \pi x \sin \pi y$$, and $$\varphi (x,y,t)=0$$. Then this Sobolev equation has the exact solution $$u(x,y,t) =\sin \pi x\sin \pi y \exp ({t/2})$$.

First, we depict the exact solution $$u(x,y,t) = e^{t/2}\sin \pi x \sin \pi y$$ at $$T = 0.9$$ in (a) of Fig. 5.

Figure 5 (a) The analytical solution when $$t=0.9$$; (b) the CS solution when $$t=0.9$$; (c) the ROECS solution when $$t=0.9$$

Next, we take the time step $$\Delta t = 0.01$$ and $$N = 100$$ nodes in each of the x and y directions. By the classic CS format (4) we compute the classic CS solution at $$T=0.9$$, depicted in (b) of Fig. 5.

Finally, to make a reasonable comparison, in solving the ROECS format, we adopt the same time step and number of nodes as in the classic CS format. We again compute the initial $$L=20$$ coefficient vectors $$\boldsymbol{U}_{N}^{n}$$ of the CS solutions with the classic CS format (11) at the time nodes $$t_{n}$$ ($$n=1,2,\ldots ,20$$) to form the snapshot matrix $$\boldsymbol{P}=(\boldsymbol{U}_{N}^{1}, \boldsymbol{U}_{N}^{2}, \ldots, \boldsymbol{U} _{N}^{20})$$. Then we find the eigenvalues $$\lambda _{1}\geqslant \lambda _{2}\geqslant \cdots \geqslant \lambda _{20}\geqslant 0$$ and the associated eigenvectors $$\boldsymbol{\varphi }_{i}$$ ($$i=1,2,\ldots ,r$$) of $$\boldsymbol{P}^{T}\boldsymbol{P}$$ according to Step 2 in Sect. 3.4. By computation we find that the error factor satisfies $$\sqrt{\lambda _{7}}\leqslant 2\times 10^{-4}$$, so that it suffices to adopt the six leading POD basis vectors $$\boldsymbol{\varPhi }= (\phi _{1}, \phi _{2},\ldots ,\phi _{6})$$, produced by the formula $${\boldsymbol{\phi }}_{i}={\boldsymbol{P}} {\boldsymbol{\varphi }}_{i}/\sqrt{{\lambda _{i}}}$$ ($$1\leqslant i\leqslant 6$$), when solving the ROECS format. The resulting ROECS solution at $$T =0.9$$ is depicted in (c) of Fig. 5.

Moreover, we depict the errors between the analytical solution and the CS solution, between the analytical solution and the ROECS solution, and between the CS solution and the ROECS solution at $$T=0.9$$ in panels (a), (b), and (c) of Fig. 6, respectively.

Figure 6 (a) The error between the CS solution and the analytical solution when $$t=0.9$$; (b) the error between the ROECS solution and the analytical solution when $$t=0.9$$; (c) the error between the ROECS solution and the CS solution when $$t=0.9$$

By observing the three panels of Fig. 5 and the error plots of Fig. 6, we clearly find that the analytical solution, the CS numerical solution, and the ROECS numerical solution are basically identical and that the errors between the analytical solution and the CS solution, between the analytical solution and the ROECS solution, and between the CS solution and the ROECS solution are all less than $$2\times 10^{-4}$$, which verifies the correctness of the error analysis, since the theoretical error is $$O(10^{-4})$$ by Theorem 7. In particular, the number of unknowns in the ROECS format (only six) is far smaller than that in the classic CS format ($$(N+1)^{2}=10\text{,}201$$), so the CPU time of the ROECS format is far less than that of the classic CS format; for instance, computing the ROECS solution at $$T=0.9$$ takes only about $$260~\mbox{s}$$, whereas computing the classic CS solution up to the same time takes about $$7033~\mbox{s}$$ on the same laptop (Microsoft Surface Book: Intel Core i7 processor, 16 GB RAM).

## 5 Conclusions and discussions

In this study, we have investigated the order reduction of the coefficient vectors of the solutions for the classic CS method of the 2D Sobolev equations. We have established the ROECS format in matrix form via the POD technique, proven the existence, uniqueness, stability, and convergence of the ROECS solutions by matrix means, and given the flowchart for solving the ROECS format. Moreover, we have supplied two numerical examples to verify the correctness of the theoretical analysis and to show that the ROECS format is far superior to the classic CS format: because the ROECS format has far fewer unknowns, it can greatly lessen the computational load, retard the accumulation of round-off errors, and save CPU time in the operational process.

In particular, the ROECS format for the 2D Sobolev equations is presented here for the first time and is a development and improvement of the existing reduced-order methods, since it attains higher accuracy than other reduced-order methods such as the reduced-order FE, FVE, and FD schemes. Both the theory and the method of this paper are new and completely different from the existing reduced-order methods.

Although we restrict our ROECS method to Sobolev equations on the rectangular domain $$\overline{\varOmega }=[a, b]\times [c, d]$$, our technique can be extended to more general domains and applied to more complex engineering problems. Therefore, it has promising prospects for application.

## Declarations

### Acknowledgements

The authors are thankful to the honorable reviewers and editors for their valuable suggestions and comments, which improved the paper.

### Availability of data and materials

The authors declare that all data and materials in the paper are available and authentic.

### Funding

This research was supported by the National Science Foundation of China grant 11671106 and Qian Science Cooperation Platform Talent grant 5726-41.

### Authors’ contributions

Both authors contributed equally and significantly in writing this article. Both authors wrote, read, and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests. 