A class of singular diffusion equations based on the convex–nonconvex variation model for noise removal
Boundary Value Problems volume 2021, Article number: 8 (2021)
Abstract
This paper focuses on the problem of noise removal. First, we propose a new convex–nonconvex variation model for noise removal and consider the nonexistence of solutions of the variation model. Based on the new variation method, we propose a class of singular diffusion equations and prove the existence of solutions and a comparison principle for the new equations. Finally, experimental results illustrate the effectiveness of the model in noise reduction.
1 Introduction and motivation
Image denoising is used to recover/decompose a true image from an observed noisy image. Specifically, let \(f:\Omega \to \mathbb{R}\) be a given image defined on the domain \(\Omega \subset \mathbb{R}^{N}\). Image denoising decomposes f into two functions u and n with \(f=u+n\), where u contains the most meaningful signals depicted by f and n represents the noise. In the ideal case, the noise part n carries no signal information. The task of removing noise can be accomplished in traditional ways such as employing linear filters, which, though very simple to implement, may cause the restored image to be blurred at the edges. Various adaptive filters for noise removal have been proposed. Among these, the variational method is one of the most extensively used techniques. In general, nonlinear PDEs associated with the variational method are used as anisotropic diffusion filters because they apply different strengths of diffusivity at different locations in the image. These variational methods can be classified into the following two cases.
1.1 Convex variational model and forward diffusion equation
A classical variational model for image denoising was proposed by Rudin, Osher, and Fatemi [1]. In [1], for a given noisy image \(f\in L^{2}(\Omega )\), the image denoising problem is equivalent to the following minimization problem (the ROF model):
$$ \min_{u\in \operatorname{BV}(\Omega )} \int _{\Omega } \vert \nabla u \vert \,dx+\frac{\lambda }{2} \int _{\Omega }(f-u)^{2}\,dx, $$
where \(\lambda >0\) is a tuning parameter. In [2], Vese proposed the following general framework of variational model for image denoising:
$$ E(u)= \int _{\Omega }\phi \bigl( \vert \nabla u \vert \bigr)\,dx+ \frac{\lambda }{2} \int _{\Omega }(f-u)^{2}\,dx. $$
The author discussed the minimizing problem \(\min_{u\in \operatorname{BV}(\Omega )} E(u)\) when \(\phi (s)\) is a strictly convex function. In order to use the direct method of the calculus of variations, the convexity of the function \(\phi (s)\) is always assumed. The BV norm, i.e., the total variation, is well suited for \(\phi (\vert \nabla u\vert )\). The total variation has also been widely used in other image processing tasks, since it helps prevent the noise from remaining in the denoised image u: the noise part yields a large total variation of u.
The ROF model yields very satisfactory results for removing image noise while preserving edges and contours of objects. However, it also possesses some unfavorable properties under some circumstances, such as the loss of image contrast, the smearing of corners, and the staircase effect. For instance, in [3], Meyer showed that the ROF model cannot preserve image contrast (cf. Theorem 3, p. 32) and cannot keep corners (cf. Proposition 6, p. 39). A study of the loss of image contrast can also be found in [4]. And in [5], Bellettini, Caselles, and Novaga pointed out what kind of shapes can be preserved by the ROF model, which indicates that the ROF model will smear object corners. A full discussion of these undesirable properties of the ROF model can also be found in [6].
To remedy these unfavorable properties of the ROF model, new models or techniques have been proposed [7–17]. Chan and Strong [7] proposed an adaptive total variation based on a control factor. Chambolle and Lions [8] proposed to minimize a combination of the total variation and the integral of the squared norm of the gradient. Chen et al. [9] observed that this model is successful in restoring images where homogeneous regions are separated by distinct edges, but may become sensitive to the thresholding parameter in the event of nonuniform image intensities or heavy degradation. They then proposed a variable-exponent adaptive model which exploits the benefits of Gaussian smoothing and the strength of TV regularization. On the other hand, in [10, 11], the authors introduced new variational models based on high-order derivatives of the denoised image u. In addition to the basic requirements of image denoising, such as edge preservation and noise removal, these new models effectively ameliorate the staircase effect.
It is worth mentioning that the diffusion equations associated with these methods are forward anisotropic diffusion equations, which smooth homogeneous regions while preserving edges. However, these diffusion equations cannot enhance the image, for example, by preserving corners, smooth parts of objects, and image greyscale intensity contrasts.
1.2 Nonconvex variational model and backward diffusion equation
Most existing algorithms are based on a convex potential. For a convex potential, ϕ must grow at least nearly linearly; but for better edge preservation, ϕ should grow sublinearly, in which case ϕ becomes nonconvex, and such forms of ϕ have been suggested in [18–23]. It is interesting that Vese proposed several variational models for image denoising with nonconvex \(\phi (s)\) and implemented numerical simulations for this case in [2]. Unfortunately, a variational model with a nonconvex potential need not have a unique solution, and Chipot et al. [22] proved that there is no minimizer in any reasonable space if f is not a constant. They introduced the following energy:
They proved that \(E_{\varepsilon }(u)\) is convex for \(\varepsilon \geq \lambda /4\) and nonconvex for \(\varepsilon <\lambda /4\); for \(\varepsilon <\lambda /4\), \(E_{\varepsilon }(u)\) has quadratic growth at infinity, and they then use convexification tools to obtain the existence of a minimizer for \(E_{\varepsilon }(u)\) in the one-dimensional case. For dimensions greater than one, the problem is quite open. The behavior of the minimizing sequence is also a challenging problem, which is closely related to the Perona–Malik anisotropic diffusion [23], whose associated potential is also nonconvex. In spite of the lack of a rigorous mathematical theory for the continuous minimization problem with a nonconvex potential, its associated discrete version can be solved numerically, for example, with the gradient descent algorithm [23], the simulated annealing algorithm [24], the half-quadratic algorithms [18, 20, 21, 25–29], and so on. A nonconvex potential always leads to a backward diffusion equation or a forward–backward diffusion equation, which can sharpen edges, corners, and other singular features.
In this paper, we intend to propose a new convex–nonconvex variational model for image denoising. In addition to removing noise and keeping edges and contours of objects, the new model aims at preserving corners, smooth parts of objects, and image greyscale intensity contrasts. As corners and edges differ from ordinary points or contours in their singularities, a natural idea is to incorporate the related geometric quantities into the process of denoising. Our idea can be described as follows. First, inspired by [22], instead of \(\varepsilon \int _{\Omega }\vert \nabla u\vert ^{2}\,dx\), we consider the linear growth functional \(\varepsilon \int _{\Omega }\psi (\vert \nabla u\vert )\,dx\), which can also preserve edges and corners. The new variational model is a combination of the convex and nonconvex variational models. Second, based on this idea, we propose a new variational framework for image denoising, which rests on some basic hypotheses and need not satisfy a convexity condition. Third, we propose and analyze a class of singular diffusion equations associated with the new variational model. To efficiently solve the singular diffusion equations, one might employ some fast methods, such as AOS [30]. In this paper, we use the standard time marching scheme [1] and the PM scheme [23].
In fact, the anisotropic diffusion equation has been widely used in the modeling of image processing during the last two decades. In the famous work [23], Perona and Malik proposed a framework to deal with the denoising problem based on the diffusion equation. To make the images more pleasing to the eye, it would be useful to reduce staircasing effects, and many models reducing this effect have been proposed in the literature. In [31, 32], Charbonnier and Weickert developed and studied forward diffusion equations by proposing different diffusivities. In [33], Catté et al. proposed a regularization of the Perona–Malik model to obtain a smoother image. In [34], Keeling et al. proposed nonlinear anisotropic diffusion filtering for multiscale edge enhancement. In [35], Gilboa et al. proposed forward-and-backward diffusion processes for adaptive image enhancement and denoising. In [36], Smolka proposed combined forward and backward anisotropic diffusion filtering of color images. These forward–backward diffusions are related to nonconvex potentials. In all these works, nonlinear anisotropic diffusion equations were considered, while in the present work, based on the new nonconvex variational model, we consider a singular forward–backward diffusion equation for denoising, which admits singular solutions that preserve the singular parts of the image, such as edges and corners.
The rest of this paper is organized as follows. In Sect. 2, our convex–nonconvex variational model is introduced in detail, and the ill-posedness of the model is discussed. We prove the existence of Young measure solutions in Sect. 3. The properties of Young measure solutions are investigated in Sect. 4. The numerical implementation is then developed in Sect. 5. We present numerical experiments for synthetic as well as real-world image denoising and compare our results with those obtained by the ROF model. A conclusion is given in Sect. 6.
2 Convex–nonconvex variational framework for denoising model
The following new variational model is proposed:
where \(\mu _{1}>0\), \(\mu _{2}>0\). The functions \(\psi _{C}(s)\) and \(\psi _{\mathrm{NC}}(s)\) are convex and nonconvex, respectively. For image processing, \(\psi _{C}(s)\) satisfies the following assumptions [2]:
-
\(\psi _{C}\) is a strictly convex, nondecreasing function from \(\mathbb{R}^{+}\) to \(\mathbb{R}^{+}\), with \(\psi _{C}(0)=0\) (without loss of generality);
-
\(\lim_{s\to +\infty }\psi _{C}(s)=+\infty \);
-
There exist two constants \(c>0\) and \(b\geq 0\) such that
$$ cs-b\leq \psi _{C}(s)\leq cs +b,\quad \forall s\geq 0, $$
and \(\psi _{\mathrm{NC}}(s)\) satisfies the following assumptions [18]:
-
\(\psi _{\mathrm{NC}}\) is nonconvex;
-
\(\psi _{\mathrm{NC}}\approx cs^{2}\) as \(s\to 0^{+}\);
-
\(\lim_{s\to +\infty }\psi _{\mathrm{NC}}(s)\approx \gamma >0\).
Compared with the conditions on ϕ in [2] and [18], the hypotheses on \(\psi _{C}\) and \(\psi _{\mathrm{NC}}\) in this paper are as follows:
-
(H1)
\(\psi _{C}\in C^{1}(\mathbb{R}^{N})\). There exist two constants \(0<\lambda \leq \Lambda \) such that
$$ \bigl(\lambda \vert X \vert -1 \bigr)^{+}\leq \psi _{C} \bigl( \vert X \vert \bigr)\leq \Lambda \vert X \vert +1,\quad \forall X \in \mathbb{R}^{N}; $$
-
(H2)
\(Z(X)=\nabla \psi _{C}(X)\) and \(\vert Z(X)\vert \leq \Lambda \);
-
(H3)
Moreover, we assume that there exist a sequence \(\{\varphi _{p}\}_{1< p<2}\subset C^{1}(\mathbb{R}^{N})\) and \(C_{0}>0\) such that \(\{Z_{p}=\nabla \varphi _{p}\}_{1< p<2}\) locally and uniformly converges to Z in \(\mathbb{R}^{N}\). For all \(p\in (1,2)\), \(\varphi _{p}\) and \(Z_{p}\) satisfy the structure conditions
$$ \bigl(\lambda \vert X \vert ^{p}-1 \bigr)^{+}\leq \varphi _{p}(X)\leq \Lambda \vert X \vert ^{p}+1, \qquad \forall X\in \mathbb{R}^{N}, $$
and
$$ \bigl\vert Z_{p}(X) \bigr\vert \leq \Lambda \vert X \vert ^{p-1},\quad \forall X\in \mathbb{R}^{N}; $$
-
(H4)
\(\psi _{\mathrm{NC}}\in C^{1}(\mathbb{R}^{N})\) and \(\psi _{\mathrm{NC}}\) is a nonconvex function;
-
(H5)
\(\lim_{s\to +\infty }\frac{\psi _{\mathrm{NC}}(s)}{s}=0\).
The new variational model is a combination of the convex and nonconvex variational models. Hence, the new model is designed to achieve a good trade-off between noise removal and edge preservation, which the convex and nonconvex variational models are respectively good at. This is not a simple combination: \(\psi _{C}\) controls the growth of the new functional and the regularity of the solution; \(\psi _{\mathrm{NC}}\) not only influences the growth of the new functional but also preserves the singular parts of the image, such as corners, image contrast, and edges; furthermore, \(\psi _{\mathrm{NC}}\) controls the convexity of the functional.
In order to use the Young measure theory in [37–40], we have to impose Hypothesis (H3) on \(\psi _{C}\). However, Hypothesis (H3) is easy to satisfy; for example, it holds for \(\psi _{C}(s)=\sqrt{1+ s ^{2}}\). On the other hand, the new hypotheses are different from the assumptions in [2, 18], since the new hypotheses do not restrict the convexity of the functionals. Hence, based on the new framework, we can propose many interesting models. Under the hypotheses above, it is difficult to confirm the convexity of the new variational model, which may yield an ill-posed problem; the existence of solutions is indeed not straightforward. In the next section, the existence of solutions of the singular diffusion equations based on the convex–nonconvex variation is considered. In [41, 42], Guidotti proposed two types of backward–forward regularization of the Perona–Malik equation. The two models are contained in the new framework when \(\psi _{\mathrm{NC}}=\ln (1+\vert s\vert ^{2})\) with \(\psi _{C}=\vert s\vert ^{2}\) and \(\psi _{C}=\vert s\vert ^{p-2}s\), respectively.
2.1 Some special examples
For instance, Hypotheses (H1)–(H3) are satisfied by potentials such as
and so on. On the other hand, Hypotheses (H4)–(H5) are satisfied by many nonconvex potentials, as in [2, 18–23], such as
and so on. Moreover,
where \(\psi _{\mathrm{NC}}\) is any nonconvex functional which satisfies Hypotheses (H4)–(H5).
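To make these hypotheses concrete, the following small sketch (our own illustration; the particular pair and the constants are not prescribed by the paper) implements one admissible choice, \(\psi _{C}(s)=\sqrt{1+s^{2}}\) for (H1)–(H3) and the Geman–McClure-type \(\psi _{\mathrm{NC}}(s)=s^{2}/(1+s^{2})\) for (H4)–(H5), together with the combined potential \(\mu _{1}\psi _{C}+\mu _{2}\psi _{\mathrm{NC}}\):

```python
import numpy as np

# One admissible pair (illustrative choice, not prescribed by the paper).
psi_C = lambda s: np.sqrt(1.0 + s**2)      # convex, linear growth: (H1)-(H3)
psi_NC = lambda s: s**2 / (1.0 + s**2)     # bounded and nonconvex: (H4)-(H5)

def potential(s, mu1=1.0, mu2=1.0):
    """Combined convex-nonconvex potential mu1*psi_C(s) + mu2*psi_NC(s)."""
    return mu1 * psi_C(s) + mu2 * psi_NC(s)

# Numerical check of the growth bounds in (H1) with lambda = Lambda = 1:
# (|s| - 1)^+ <= psi_C(|s|) <= |s| + 1.
s = np.linspace(0.0, 100.0, 10001)
assert np.all(np.maximum(s - 1.0, 0.0) <= psi_C(s))
assert np.all(psi_C(s) <= s + 1.0)
```

Any other bounded nonconvex potential satisfying (H4)–(H5) could replace \(\psi _{\mathrm{NC}}\) here without changing the checks on \(\psi _{C}\).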
In this paper, the following model is considered:
which can be rewritten as
where
and
Following the proof given by Chipot et al. [22], we have
Theorem 1
If \(f(x)\) is not a constant and \(f\in L^{\infty }(\Omega )\), then the function \(E_{\mathrm{NC}}(u)\) has no minimizer in \(W^{1,2}(\Omega )\) and \(\inf_{u\in W^{1,2}(\Omega )} E_{\mathrm{NC}}(u)=0\).
Proof
For the sake of clarity, we prove the theorem in the one-dimensional case \(\Omega =(a,b)\); the same proof works for \(N\geq 2\). It is clear that
Let
and then we will prove that the theorem is true for \(E_{\alpha }(u)\) with \(0<\alpha <1\).
By density, we may always find a sequence of step functions \(\tilde{u}_{n}\) such that
In fact, we can find a partition \(a=x_{0}< x_{1}<\cdots <x_{n}=b\) such that \(\tilde{u}_{n}\) is the constant \(\tilde{u}_{n,i}\) on each interval \((x_{i-1},x_{i})\), \(h_{n}=\max_{i}(x_{i}-x_{i-1})<1\) with \(\lim_{n\to +\infty }h_{n}=0\). Let us set \(\sigma _{i}=x_{i}-x_{i-1}\). Next, we define a sequence of continuous functions \(u_{n}\) by
Note that
and therefore,
Since
taking the limit on both sides yields
Moreover,
Thus
and finally,
i.e.,
Now, if there exists a minimizer \(u\in W^{1,2}(\Omega )\), then necessarily \(E_{\alpha }(u)=0\), which implies
The first equality is possible only if \(f\in W^{1,2}(\Omega )\), and in this case the second equality implies \(f'=0\), which is possible only if f is a constant. Therefore, excluding this trivial case, \(E_{\mathrm{NC}}(u)\) has no minimizer in \(W^{1,2}(\Omega )\). □
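The construction in the proof can also be checked numerically. Since the displayed definition of \(E_{\alpha }\) did not survive extraction, the sketch below assumes the typical form \(E_{\alpha }(u)=\int _{a}^{b}\vert u'\vert ^{\alpha }\,dx+\int _{a}^{b}(u-f)^{2}\,dx\) with \(0<\alpha <1\) and evaluates it on staircases with n steps joined by ramps of width \(1/n^{2}\); each ramp contributes roughly \(\sigma ^{\alpha }\delta ^{1-\alpha }\) to the gradient term, so the printed values decay toward zero, in line with \(\inf E_{\alpha }=0\):

```python
import numpy as np

def E_alpha(u, x, fx, alpha):
    """Discrete version of the assumed energy: sum |u'|^alpha dx + sum (u - f)^2 dx."""
    dx = np.diff(x)
    grad = np.diff(u) / dx
    return np.sum(np.abs(grad)**alpha * dx) + np.sum((u[:-1] - fx[:-1])**2 * dx)

f = lambda x: np.sin(2.0 * np.pi * x)        # any non-constant datum on (0, 1)
alpha = 0.5
for n in (10, 100, 1000):
    delta = 1.0 / n**2                       # ramp width, much smaller than 1/n
    xs, us = [0.0], [f(0.5 / n)]
    for i in range(1, n):                    # steep ramp at each boundary i/n
        xs += [i / n - delta / 2, i / n + delta / 2]
        us += [f((i - 0.5) / n), f((i + 0.5) / n)]
    xs.append(1.0)
    us.append(f((n - 0.5) / n))
    x, u = np.array(xs), np.array(us)
    print(n, E_alpha(u, x, f(x), alpha))     # decays toward 0 as n grows
```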
Remark 1
As we know, if the region Ω is bounded,
Then
Note that \(E_{\mathrm{NC}}(u)\geq 0\), and therefore
However, we cannot obtain any information about the minimizer of \(E_{\mathrm{NC}}(u)\) in \(\operatorname{BV}(\Omega )\).
3 A class of singular diffusion equations for denoising model
3.1 Singular diffusion equations based on the convex–nonconvex variation
Based on the new variational model, the following diffusion equation is proposed:
For this special equation, what we obtain in this paper reveals another aspect of the existence theory, namely the existence of a discontinuous solution. Note that the equation is strongly degenerate at the discontinuous points of such a solution. On the other hand, the new equation can be considered as a perturbation of the Perona–Malik model [23]. Such a perturbation is not the usual viscous one, for example, Δu or \(\Delta ^{2} u\), which has standard regularizing effects. The perturbation does not prevent the equation from admitting discontinuous solutions, which has a particular meaning: with the new perturbation, the new model is still an anisotropic diffusion equation. That is to say, inside the regions where the magnitude of the gradient of u is weak, the new equation acts as Gaussian smoothing, resulting in isotropic smoothing; near the region boundaries where the magnitude of the gradient is large, the regularization is "stopped" and the edges are preserved.
Let
and
for \(X\in \mathbb{R}^{N}\). Therefore, the new diffusion equation can be rewritten as
Let \(\varphi ^{**}\) denote the convexification of φ, namely,
and
Since \(\varphi \in C^{1}(\mathbb{R}^{N})\), \(\varphi ^{**}\in C^{1}(\mathbb{R}^{N})\) is convex.
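Since \(\varphi ^{**}\) enters the analysis throughout, it may help to see it computed. The sketch below (ours; the window, the slope grid, and the sample potential are illustrative) evaluates the convexification of a one-dimensional potential by a double discrete Legendre transform, which returns the lower convex hull of the graph provided the slope grid covers every slope attained on the window:

```python
import numpy as np

def convexify(phi, s, p_max=50.0, n=2001):
    """Convex envelope phi** on the grid s via a double discrete Legendre
    transform: phi*(p) = sup_s (p*s - phi(s)), phi**(s) = sup_p (p*s - phi*(p))."""
    p = np.linspace(-p_max, p_max, n)
    vals = phi(s)
    star = np.max(p[:, None] * s[None, :] - vals[None, :], axis=1)
    return np.max(s[:, None] * p[None, :] - star[None, :], axis=1)

# Example: the nonconvex Perona-Malik-type potential log(1 + s^2); the result
# is the lower convex hull of its graph on the chosen window.
s = np.linspace(-10.0, 10.0, 1001)
phi_cc = convexify(lambda t: np.log1p(t**2), s)
```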
Definition 1
A Young measure solution to problem (3)–(5) is a function
and there exists a \(W^{1,1}(Q_{T})\)-gradient Young measure \(\nu =(\nu _{x,t})_{(x,t)\in Q_{T}}\) on \(\mathbb{R}^{N}\) such that
where id is the identity mapping in \(\mathbb{R}^{N}\),
and
in the sense of trace.
Theorem 2
Let \(f\in \operatorname{BV}(\Omega )\cap L^{\infty }(\Omega )\). Then problem (3)–(5) admits at least one Young measure solution.
3.2 Preliminaries
We use \(C_{0}(\mathbb{R}^{d})\) to denote the closure of the set of continuous functions on \(\mathbb{R}^{d}\) with compact supports. The dual of \(C_{0}(\mathbb{R}^{d})\) can be identified with the space \(\mathcal{M}(\mathbb{R}^{d})\) of signed Radon measures with finite mass via the pairing
Let \(D\subset \mathbb{R}^{n}\) be a measurable set of finite measure. A map \(\nu :D\to \mathcal{M}(\mathbb{R}^{d})\) is called weakly* measurable if the functions \(x\mapsto \int _{\mathbb{R}^{d}}f\,d\nu _{x}\) are measurable for all \(f\in C_{0}(\mathbb{R}^{d})\), where \(\nu _{x}=\nu (x)\).
For \(p\geq 1\), define
As noted in [11], the space \(\mathcal{E}_{0}^{p}(\mathbb{R}^{d})\) is a separable Banach space with the norm
We define
which is a nonseparable space under the above norm.
Definition 2
Let \(p\geq 1\). A Young measure \(\nu =(\nu _{x})_{x\in D}\) on \(\mathbb{R}^{d}\) is called a \(W^{1,p}\)-gradient Young measure if
-
(i)
\(x\in D\mapsto \int _{\mathbb{R}^{d}}f \,d\nu _{x}\in \mathbb{R}\) is a Lebesgue measurable function for all f bounded and continuous on \(\mathbb{R}^{d}\);
-
(ii)
There is a sequence of functions \(\{u^{k}\}_{k=1}^{\infty }\subset W^{1,p}(D)\) for which the representation formula
$$\begin{aligned}& \lim_{k\to \infty } \int _{E}\psi \bigl(\nabla u^{k}(x) \bigr)\,dx= \int _{E} \langle \nu _{x},\psi \rangle \,dx \end{aligned}$$
(11)
holds for all measurable \(E\subset D\) and all \(\psi \in \mathcal{E}_{0}^{p}(\mathbb{R}^{d})\), where \(\langle \nu _{x},\psi \rangle =\int _{\mathbb{R}^{d}}\psi \,d\nu _{x}\).
We also call ν the \(W^{1,p}(D)\)-gradient Young measure generated by \(\{\nabla u^{k}\}_{k=1}^{\infty }\) and \(\{\nabla u^{k}\}_{k=1}^{\infty }\) the \(W^{1,p}(D)\)-gradient generating sequence of ν. In addition, the representation formula (11) also holds for \(\psi \in \mathcal{E}^{p}(\mathbb{R}^{d})\). By the fundamental theorem for Young measures, we see that
Definition 3
Let \(\{z^{k}\}_{k=1}^{\infty }\subset L^{1}(D)\) and \(z\in L^{1}(D)\). We say that \(\{z^{k}\}_{k=1}^{\infty }\) converges to z in the biting sense if there is a decreasing sequence of subsets \(E_{j+1}\subset E_{j}\) of D with \(\lim_{j\to \infty }\operatorname{meas}(E_{j})=0\) such that \(\{z^{k}\}_{k=1}^{\infty }\) converges weakly to z in \(L^{1}(D\backslash E_{j})\) for all j.
Definition 4
Let \(p\geq 1\). A Young measure \(\nu =(\nu _{x})_{x\in D}\) on \(\mathbb{R}^{d}\) is called a \(W^{1,p}(D)\)-biting Young measure if there is a sequence \(\{z^{k}\}_{k=1}^{\infty }\subset L^{p}(D)\) and \(z\in L^{1}(D)\) such that \(\{\vert z^{k}\vert ^{p}\}_{k=1}^{\infty }\) converges to z in the biting sense and \(\{\psi (z^{k}(x))\}_{k=1}^{\infty }\) converges to \(\langle \nu _{x},\psi \rangle \) in the biting sense for all \(\psi \in \mathcal{E}_{0}^{p}(\mathbb{R}^{d})\) (or \(\mathcal{E}^{p}(\mathbb{R}^{d})\)).
We also call ν the \(W^{1,p}(D)\)-biting Young measure generated by \(\{z^{k}\}_{k=1}^{\infty }\) and \(\{z^{k}\}_{k=1}^{\infty }\) the \(W^{1,p}(D)\)-biting generating sequence of ν. By the fundamental theorem for Young measures, we see that
Kinderlehrer and Pedregal [37] showed a property which characterizes \(W^{1,p}\)-gradient Young measures as described in the following lemma.
Lemma 1
Let \(\nu =(\nu _{x})_{x\in D}\) be a Young measure on \(\mathbb{R}^{d}\). Then \(\nu =(\nu _{x})_{x\in D}\) is a \(W^{1,p}(D)\)-gradient Young measure if and only if
-
(i)
There exists \(u\in W^{1,p}(D)\) such that
$$ \nabla u(x)= \int _{\mathbb{R}^{d}}A\,d\nu _{x}(A)\quad \textit{a.e. }x\in D; $$
-
(ii)
Jensen’s inequality
$$ \psi \bigl(\nabla u(x) \bigr)\leq \int _{\mathbb{R}^{d}}\psi (A)\,d\nu _{x}(A) $$
holds for all \(\psi \in \mathcal{E}^{p}(\mathbb{R}^{d})\) continuous, quasiconvex, and bounded below;
-
(iii)
The function
$$ x\mapsto \int _{\mathbb{R}^{d}} \vert A \vert ^{p}\,d\nu _{x}(A) $$
is in \(L^{1}(D)\).
We give the following two lemmas. The proofs can be found in [38, 39].
Lemma 2
Suppose \(f\in \mathcal{E}^{p}(\mathbb{R}^{d})\), for some \(p\geq 1\), is quasiconvex and bounded below and let \(\{u^{k}\}_{k=1}^{\infty }\) converge weakly to u in \(W^{1,p}(D)\). Then
-
(i)
For all measurable \(E\subset D\),
$$ \int _{E}f(\nabla u)\,dx\leq \liminf_{k\to \infty } \int _{E}f \bigl(\nabla u^{k} \bigr)\,dx; $$
-
(ii)
If
$$ \lim_{k\to \infty } \int _{D}f \bigl(\nabla u^{k} \bigr)\,dx= \int _{D}f(\nabla u)\,dx, $$
then \(\{f(\nabla u^{k})\}_{k=1}^{\infty }\) is weakly sequentially precompact in \(L^{1}(D)\) and the sequence converges weakly to \(f(\nabla u)\).
Lemma 3
Let f and \(\{u^{k}\}_{k=1}^{\infty }\) be as in Lemma 2 (ii) and assume in addition that
for \(0< c\leq C\). Let ν be the Young measure generated by the gradients \(\{\nabla u^{k}\}_{k=1}^{\infty }\). Then ν is a \(W^{1,p}(D)\)-gradient Young measure.
We now state a result for sequences of gradient-generated Young measures [40].
Lemma 4
Let \(1\leq p<2\). Suppose that \(\{\nu ^{\alpha }=(\nu _{x}^{\alpha })_{x\in D}\}_{\alpha >0}\) is a family of \(W^{1,p}(D)\)-gradient Young measures and each is generated by \(\{\nabla u^{\alpha ,m}\}_{m=1}^{\infty }\), where \(u^{\alpha ,m}\) is in \(W^{1,p}(D)\) uniformly bounded in α and m. Then there exist a subsequence of \(\{\nu ^{\alpha }\}_{\alpha >0}\), denoted by \(\{\nu ^{\alpha _{i}}\}_{i=1}^{\infty }\), and a \(W^{1,p}(D)\)-gradient Young measure ν such that
-
(i)
\(\{\nu ^{\alpha _{i}}\}_{i=1}^{\infty }\) converges weakly* to ν in \(L^{\infty }(D;\mathcal{M}(\mathbb{R}^{d}))\), namely, \(\{\langle \nu ^{\alpha _{i}},\psi \rangle \}_{i=1}^{\infty }\) converges weakly* to \(\langle \nu ,\psi \rangle \) in \(L^{\infty }(D)\) for all \(\psi \in C_{0}(\mathbb{R} ^{d})\);
-
(ii)
For \(1\leq q< p\), \(\{\nu ^{\alpha _{i}}\}_{i=1}^{\infty }\) converges weakly to ν in \(L^{1}(D;(\mathcal{E}_{0}^{q}(\mathbb{R}^{d}))')\), namely, \(\{\langle \nu ^{\alpha _{i}},\psi \rangle \}_{i=1}^{\infty }\) converges weakly to \(\langle \nu ,\psi \rangle \) in \(L^{1}(D)\) for all \(\psi \in \mathcal{E}_{0}^{q}(\mathbb{R}^{d})\);
-
(iii)
\(\{\nu ^{\alpha _{i}}\}_{i=1}^{\infty }\) converges to ν in \(L^{1}(D;(\mathcal{E}_{0}^{q}(\mathbb{R}^{d}))')\) in the biting sense, namely, for all \(\psi \in \mathcal{E}_{0}^{p}(\mathbb{R}^{d})\), \(\{\langle \nu ^{\alpha _{i}},\psi \rangle \}_{i=1}^{\infty }\) converges to \(\langle \nu ,\psi \rangle \) in the biting sense.
3.3 Existence of solution to the approximation problem
Since equation (3) is degenerate, singular, and of forward–backward type, some approximations are required for discussing the existence of solutions. Our approximation proceeds in two steps. For this purpose, we need to approximate the initial datum f. By the density properties of BV functions in [7], there exists a sequence \(\{f_{p}\}_{1< p<2}\subset C_{0}^{\infty }(\Omega )\) such that \(\Vert f_{p}\Vert _{L^{\infty }(\Omega )}\) and \(\Vert \nabla f_{p}\Vert _{L^{1}(\Omega )}\) are uniformly bounded in p, and \(\{f_{p}\}_{1< p<2}\) converges to f in \(L^{1}(\Omega )\).
As the first step, we consider the following evolution problem:
where
and
Let \(\varphi ^{**}_{p}\) denote the convexification of \(\varphi _{p}\), namely,
and
Since \(\varphi _{p}\in C^{1}(\mathbb{R}^{N})\), \(\varphi ^{**}_{p}\in C^{1}(\mathbb{R}^{N})\) is convex. In addition,
Definition 5
A Young measure solution to problem (12)–(14) is a function
and there exists a \(W^{1,p}(Q_{T})\)-gradient Young measure \(\nu =(\nu _{x,t})_{(x,t)\in Q_{T}}\) on \(\mathbb{R}^{N}\) such that
and
in the sense of trace.
Theorem 3
Let \(f_{p}\in W^{1,p}(\Omega )\cap L^{\infty }(\Omega )\). Then problem (12)–(14) admits at least one Young measure solution.
The following existence proof follows the ideas due to Kinderlehrer and Pedregal [39], Demoulini [38], and Yin and Wang [40].
In order to obtain the theorem above, the following functionals defined on \(W^{1,p}(\Omega )\) are considered:
and
where \(0< h<1\), \(u^{h,0}=f_{p}\), j is an integer and \(1\leq j\leq T/h+1\).
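The displayed definitions of \(\mathcal{F}_{h}\) and \(\mathcal{F}_{h}^{**}\) did not survive extraction; for orientation, the sketch below assumes the usual minimizing-movement form \(\mathcal{F}_{h}(v;w)=\int _{\Omega }\varphi _{p}(\nabla v)\,dx+\frac{1}{2h}\int _{\Omega }(v-w)^{2}\,dx\) and shows, in one dimension, how the iterates \(u^{h,j}\) are produced, with a generic quasi-Newton solver standing in for the relaxation argument of the proof:

```python
import numpy as np
from scipy.optimize import minimize

def implicit_step(u_prev, phi_p, h, dx=1.0):
    """One time-discrete step: minimize (assumed form)
    F_h(v; u_prev) = sum phi_p(Dv) dx + (1/(2h)) sum (v - u_prev)^2 dx
    over 1-D signals v, starting the solver from the previous iterate."""
    def F(v):
        dv = np.diff(v) / dx
        return np.sum(phi_p(dv)) * dx + np.sum((v - u_prev)**2) * dx / (2.0 * h)
    return minimize(F, u_prev, method="L-BFGS-B").x

# Iteration u^{h,0} = f_p, with u^{h,j} minimizing F_h( . ; u^{h,j-1}):
rng = np.random.default_rng(0)
f_p = np.sin(np.linspace(0.0, np.pi, 64)) + 0.1 * rng.standard_normal(64)
phi_p = lambda s: np.abs(s)**1.5           # a sample phi_p with p = 3/2
u = f_p
for j in range(5):
    u = implicit_step(u, phi_p, h=0.1)
```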
Lemma 5
There exists \(u^{h,j}\in W^{1,p}(\Omega )\cap L^{\infty }(\Omega )\) such that \(u^{h,j}\) is a minimum of \(\mathcal{F}_{h}^{**}(v;u^{h,j-1})\) and
Moreover,
and
where Λ depends only on \(\mu _{1}\) and \(\mu _{2}\).
Proof
By the relaxation theorem (cf. [43]), we get that
Let \(\{u^{h,j,k}\}_{k=1}^{\infty }\subset W^{1,1+\delta }(\Omega )\) be a minimizing sequence of \(\mathcal{F}_{h}\) and \(\mathcal{F}_{h}^{**}\). Then
and, for k sufficiently large,
From the growth condition, we see that \(\{u^{h,j,k}\}_{k=1}^{\infty }\) is bounded in \(W^{1,p}(\Omega )\cap L^{\infty }(\Omega )\), and therefore
where \(M_{1}\) is a constant independent of p, h, j, and k. Hence there exist \(u^{h,j}\in W^{1,p}(\Omega )\cap L^{\infty }(\Omega )\) and a subsequence of \(\{u^{h,j,k}\}_{k=1}^{\infty }\), denoted the same, such that
and \(\{u^{h,j,k}\}_{k=1}^{\infty }\subset L^{\infty }(\Omega )\) yields
and
Thus \(u^{h,j}\) is a minimum of \(\mathcal{F}_{h}^{**}(v;u^{h,j-1})\) and
Note that
and then
Thus
Hence
□
Let \(\nu ^{h,j}=(\nu _{x}^{h,j})_{x\in \Omega }\) be the Young measure generated by \(\{\nabla u^{h,j,k}\}_{k=1}^{\infty }\) in the proof of Lemma 5. By Lemma 3, \(\nu ^{h,j}\) is a \(W^{1,p}\)-gradient Young measure. Then
Noticing that \(\varphi ^{**}_{p}\leq \varphi _{p}\), we see that
Thus,
Let \(\chi ^{h,j}\) be the indicator function of \([hj,h(j+1))\) and
Define
Then
Based on the facts above, we can obtain the following lemma.
Lemma 6
The functions \(u^{h}\), \(w^{h}\), and the Young measure \(\nu ^{h}\) defined above satisfy
for \(\zeta \in C^{\infty }(\overline{Q}_{T})\), with \(\zeta (x,0)=\zeta (x,T)=0\). Moreover,
where M is independent of p and h.
Proof
Let \(\xi \in C_{0}^{\infty }(\Omega )\), \(-1<{\epsilon }<1\). Then there exists \(C>0\) such that
By Lemma 2, we can see that
which implies the equilibrium equation
\(\forall \xi \in C_{0}^{\infty }(\Omega )\). At the minimizer \(u^{h,j}\), the Gâteaux derivative of \(\mathcal{F}^{**}_{h}\) is zero, and we obtain
\(\forall \xi \in C_{0}^{\infty }(\Omega )\). Thus
and, by Lemma 5, we get the estimate
which implies that
for \(\zeta \in C^{\infty }(\overline{Q}_{T})\), with \(\zeta (x,0)=\zeta (x,T)=0\). From the direct calculation, we see that
Because
\(\partial _{t} u^{h}\) can be chosen as the test function in (27), and therefore
Then
and therefore
which implies
□
Let
for \(k\geq 1\). By Lemma 5, \(\{v^{h,k}\}_{k=1}^{\infty }\subset L^{\infty }((0,T);W^{1,p}(\Omega )) \cap L^{\infty }(Q_{T})\), \(\{v^{h,k}\}_{k=1}^{\infty }\) is the \(W^{1,p}(Q_{T})\)-gradient generating sequence of \(\nu ^{h}\) and
When \(h\to 0\) and \(k\to \infty \), we have the following lemma.
Lemma 7
There exist \(u,w\in L^{\infty }((0,T);W^{1,p}(\Omega ))\cap L^{\infty }(Q_{T})\), \(\nu \in L^{\infty }((0,T);(\mathcal{E}^{p}(\mathbb{R}^{N}))')\), \(\frac{\partial u}{\partial t}\in L^{2}(Q_{T})\), and subsequences of \(\{u^{h}\}_{0< h<1}\), \(\{w^{h}\}_{0< h<1}\), \(\{v^{h,k}\}_{0< h<1,k=1}^{\infty }\), and \(\{\nu ^{h}\}_{0< h<1}\), denoted by \(\{u^{h_{m}}\}_{m=1}^{\infty }\), \(\{w^{h_{m}}\}_{m=1}^{\infty }\), \(\{v^{m}=v^{h_{m},k_{m}}\}_{m=1}^{\infty }\), and \(\{\nu ^{h_{m}}\}_{m=1}^{\infty }\), respectively, such that
and
Proof
From Lemma 6, we see that \(\{u^{h}\}_{0< h<1}\) and \(\{w^{h}\}_{0< h<1}\) are bounded in \(L^{\infty }((0,T); W^{1,p}(\Omega ))\cap L^{\infty }(Q_{T})\), \(\{\partial _{t} u^{h}\}_{0< h<1}\) is bounded in \(L^{2}(Q_{T})\), \(\{\langle \nu ^{h},Z_{p} \rangle \}_{0< h<1}\) is bounded in \(L^{p/(p-1)}(Q_{T})\), \(\{\nu ^{h}\}_{0< h<1}\) is bounded in \(L^{\infty }((0,T);(\mathcal{E}_{0}^{p}(\mathbb{R}^{N}))')\), and the bounds are independent of δ and h. By weak compactness, we can obtain the convergence of the subsequences \(\{u^{h_{m}}\}_{m=1}^{\infty }\), \(\{w^{h_{m}}\}_{m=1}^{\infty }\), and \(\{\nu ^{h_{m}}\}_{m=1}^{\infty }\). From (26), we can see that (12) holds, namely
From (31), by Lemma 4 and the weak sequential precompactness of \(L^{\infty }((0,T); W^{1,p}(\Omega ))\cap L^{\infty }(Q_{T})\), there exist \(w\in L^{\infty }((0,T); W^{1,p}(\Omega ))\cap L^{\infty }(Q_{T})\) and a diagonal subsequence of \(\{v^{h_{m},k}\}_{m=1}^{\infty }\), denoted by \(\{v^{m}=v^{h_{m},k_{m}}\}_{m=1}^{\infty }\), such that \(\{v^{m}\}_{m=1}^{\infty }\) converges to w weakly in \(L^{\infty }((0,T); W^{1,p}(\Omega ))\cap L^{\infty }(Q_{T})\) and strongly in \(L^{p}(Q_{T})\),
\(\{v^{m}\}_{m=1}^{\infty }\subset L^{\infty }((0,T); W^{1,p}(\Omega )) \cap L^{\infty }(Q_{T})\),
and \(\{\nabla v^{m}\}_{m=1}^{\infty }\) is the \(W^{1,1}(Q_{T})\)-biting generating sequence of ν, and then
Now let us prove that
Since
by Lemma 5, we have
Since \(\{u^{h_{m}}\}_{m=1}^{\infty }\) converges to u in \(L^{p}(Q_{T})\) and \(\{v^{h_{m},k_{m}}\}_{m=1}^{\infty }\) converges to w in \(L^{p}(Q_{T})\), we see that
Hence
which implies (7). Similar to the above arguments, we also obtain that \(\{v^{h_{m}}\}_{m=1}^{\infty }\) converges to u in \(L^{p}(Q_{T})\). □
Proof of Theorem 3
From Lemmas 6 and 7, we can obtain
for \(\zeta \in C^{\infty }(\overline{Q}_{T})\), with \(\zeta (x,0)=\zeta (x,T)=0\). Therefore, if we prove (8), we will obtain the Young measure solution of problem (12)–(14). Let \(\{u^{h,j,k}\}_{k=1}^{\infty }\subset W^{1,p}(\Omega )\) be a minimizing sequence of \(\mathcal{F}_{h}\) in the proof of Lemma 5. For all \(\xi \in W^{1,p}(\Omega )\), we see that
and
Since \(Z_{p}^{**}(\nabla u^{h,j,k})\cdot \nabla u^{h,j,k}\) converges weakly to \(\langle \nu ^{h,j},Z^{**}_{p}\cdot {\mathrm{id}}\rangle \) in \(L^{1}(\Omega )\), \(Z_{p}^{**}(\nabla u^{h,j,k})\) converges weakly to \(\langle \nu ^{h,j},Z^{**}_{p} \rangle \) in \(L^{p/(p-1)}(\Omega )\), \(\nabla u^{h,j,k}\) converges weakly to \(\langle \nu ^{h,j},{\mathrm{id}}\rangle \) in \(L^{p}(\Omega )\) as k tends to infinity, we get that
Thus (23) implies that
By the definition of \(\nu ^{h}\) in (26), we see that
From Lemma 7, we can obtain that \(\langle \nu _{x,t}^{h_{m}},Z_{p}\cdot {\mathrm{id}}\rangle \) converges weakly to \(\langle \nu _{x,t},Z_{p}\cdot {\mathrm{id}}\rangle \) in the biting sense, \(\langle \nu _{x,t}^{h_{m}},Z_{p} \rangle \) converges weakly to \(\langle \nu _{x,t},Z_{p} \rangle \) in \(L^{p/(p-1)}(Q_{T})\), \(\langle \nu _{x,t}^{h_{m}},{\mathrm{id}}\rangle \) converges weakly to \(\langle \nu _{x,t},{\mathrm{id}}\rangle \) in \(L^{p}(Q_{T})\) as m tends to infinity. Thus for all \(\eta \in C^{\infty }(\overline{Q}_{T})\), with \(\eta (x,0)=\eta (x,T)=0\), (17) and (19) imply that
Hence,
So \(\langle \nu _{x,t}^{h_{m}},Z_{p} \rangle \cdot \langle \nu ^{h_{m}}_{x,t}, {\mathrm{id}}\rangle \) converges weakly to \(\langle \nu _{x,t},Z_{p} \rangle \cdot \langle \nu _{x,t},{\mathrm{id}}\rangle \) in \(L^{1}(Q_{T})\). Since \(\langle \nu _{x,t}^{h_{m}},Z_{p}\cdot {\mathrm{id}}\rangle \) converges to \(\langle \nu _{x,t},Z_{p}\cdot {\mathrm{id}}\rangle \) in the biting sense, we obtain that
Since \(\langle \nu _{x,t},Z_{p}\cdot {\mathrm{id}}\rangle \in L^{1}(Q_{T})\),
which implies (8). Hence, u is the desired Young measure solution of problem (12)–(14). The proof of Theorem 3 is complete. □
Remark 2
Let u be the Young measure solution of problem (12)–(14) obtained in the proof of Theorem 3. Then from the proof we see that there exists a constant M depending only on \(\Vert f_{p}\Vert _{W^{1,p}(\Omega )}\), \(\Vert f_{p}\Vert _{L^{\infty }(\Omega )}\), Λ, and \(\operatorname{meas}(\Omega )\), but independent of p and T, such that
namely, \(u\in L^{\infty }(\mathbb{R}^{+}; W^{1,p}(\Omega ))\cap L^{\infty }(Q_{\infty })\), \(\partial u/\partial t\in L^{2}(Q_{\infty })\), where \(Q_{\infty }=\Omega \times \mathbb{R}^{+}\).
3.4 Existence of solution to problem (3)–(5)
In this subsection, we consider the limit case of problem (3)–(5), namely, \(p\to 1\).
Proof of Theorem 2
Let \(u_{p}\) be the Young measure solution of problem (12)–(14) with the initial data \(f_{p}\) with respect to the \(W^{1,p}(Q_{T})\)-gradient Young measures \(\nu ^{p}\) generated by the sequence \(\{\nabla w^{p,k}\}_{k=1}^{\infty }\), which we obtained in the proof of Theorem 3. We see that
and there exists a constant \(M_{0}\) depending only on \(\Vert f_{p}\Vert _{W^{1,p}(\Omega )}\), \(\Vert f_{p}\Vert _{L^{\infty }(\Omega )}\), Λ, and \(\operatorname{meas}(\Omega )\), but independent of p, such that
So there exist \(u\in L^{\infty }((0,T);\operatorname{BV}(\Omega ))\cap L^{\infty }(Q_{T})\) with \(\partial u/\partial t\in L^{2}(Q_{T})\) and a subsequence of \(\{u_{p}\}_{1< p<2}\), denoted by \(\{u_{p_{m}}\}_{m=1}^{\infty }\), such that
By Lemma 4, there exist a \(W^{1,1}(Q_{T})\)-gradient Young measure \(\nu \in L^{\infty }((0,T);(\mathcal{E}_{0}^{1}(\mathbb{R}^{N}))')\) and a subsequence of \(\{\nu ^{p_{m}}\}_{m=1}^{\infty }\), denoted the same, such that
which implies that there is a decreasing sequence of subsets \(E_{j+1}\subset E_{j}\) of \(Q_{T}\) with \(\lim_{j\to \infty }\operatorname{meas}(E_{j})=0\) such that \(\langle \nu ^{p_{m}},\psi \rangle \) converges weakly to \(\langle \nu ,\psi \rangle \) in \(L^{1}(Q_{T}\backslash E_{j})\) for all \(\psi \in \mathcal{E}_{0}^{1}(\mathbb{R}^{d})\) and all \(j\geq 1\). By (18) we get that
which implies (9). By Lemma 4, there exist \(w\in L^{\infty }((0,T);\operatorname{BV}(\Omega ))\) and a subsequence of \(\{w^{p_{m},k}\}_{m,k=1}^{\infty }\), denoted by \(\{w^{k}\}_{k=1}^{\infty }\), such that \(\{w^{k}\}_{k=1}^{\infty }\) converges to w in \(L^{1}(Q_{T})\) and \(\{\nabla w^{k}\}_{k=1}^{\infty }\) is the \(W^{1,1}(Q_{T})\)-biting generating sequence of ν, namely, there is a decreasing sequence of subsets \(G_{j+1}\subset G_{j}\) of \(Q_{T}\) with \(\lim_{j\to \infty }\operatorname{meas}(G_{j})=0\) such that \(\langle \nu ^{p_{m}},\psi \rangle \) converges weakly to \(\langle \nu ,\psi \rangle \) in \(L^{1}(Q_{T}\backslash G_{j})\) for all \(\psi \in \mathcal{E}_{0}^{1}(\mathbb{R}^{d})\) and all \(j\geq 1\).
To prove (6), we first prove that \(\{\langle \nu ^{p_{m}},Z_{p_{m}} \rangle \}_{m=1}^{\infty }\) converges weakly to \(\langle \nu ,Z \rangle \) in \(L^{1}(Q_{T})\). For \(i\geq 1\), define
Let \(\eta \in L^{\infty }(Q_{T};\mathbb{R}^{N})\). Then
Noticing that
we see that I tends uniformly to 0 in p as \(i\to \infty \). For II, we get that
So
Therefore,
So \(\{\langle \nu ^{p_{m}},Z_{p_{m}} \rangle \}_{m=1}^{\infty }\) converges weakly to \(\langle \nu ,Z \rangle \) in \(L^{1}(Q_{T})\). Thus (6) holds, namely,
Since \(\{w^{k}\}_{k=1}^{\infty }\) converges to w in \(L^{1}(Q_{T})\), we get that for all \(\eta \in C_{0}^{\infty }(Q_{T}\backslash G_{j};\mathbb{R}^{N})\),
Thus for all \(j\geq 1\),
in the sense of measure. Let \(G=\bigcap_{j=1}^{\infty }G_{j}\). Then \(\operatorname{meas}(G)=0\) and
in the sense of measure. Similar to the proof of Theorem 3, we get that
Thus (7) holds, namely,
in the sense of measure.
We now prove (8). For all \(\eta \in C_{0}^{\infty }(Q_{T})\), we get that
Since \(\langle \nu _{x,t}^{p_{m}},{\mathrm{id}}\rangle \) converges to \(\langle \nu _{x,t},{\mathrm{id}}\rangle \) in the biting sense, we see that for all \(j\geq 1\) and all \(\eta \in C_{0}^{\infty }(Q_{T}\backslash E_{j})\),
Thus for all \(j\geq 1\),
Fix \(i\geq C_{0}\). Let \(0\leq \eta \in C_{0}^{\infty }(Q_{T}\backslash E_{j})\). Then
Because \(\theta ^{i}(Z_{p_{m}}\cdot {\mathrm{id}})\) converges uniformly to \(\theta ^{i}(Z\cdot {\mathrm{id}})\) in \(\mathbb{R}^{N}\), we see that
Hence
From \((1-\theta ^{i})(Z_{p_{m}}\cdot {\mathrm{id}})\geq (1-\theta ^{i})(Z\cdot {\mathrm{id}})\), we get that
Thus
By the arbitrariness of \(0\leq \eta \in C_{0}^{\infty }(Q_{T}\backslash E_{j})\), we see that
Thus for all \(j\geq 1\),
Since \(\langle \nu _{x,t},Z\cdot {\mathrm{id}}\rangle \), \(\langle \nu _{x,t},Z \rangle \cdot \langle \nu _{x,t},{\mathrm{id}}\rangle \in L^{1}(Q_{T})\), we see that
So (8) holds, and therefore u is the Young measure solution of the problem (3)–(5). The proof of the theorem is complete. □
Remark 3
Note that if the initial datum \(f\equiv C\) is a constant, then f itself is the Young measure solution of problem (3)–(5).
4 Properties of the Young measure solution
Theorem 4
Let \(u_{1},u_{2}\in \mathscr{B}\) be the Young measure solutions of problem (3)–(5) with initial data \(f_{1}\) and \(f_{2}\) satisfying (H6), respectively. Then for a.e. \((x,t)\) in \(Q_{T}\),
In particular,
Proof
Denote
Let \(G(t)\in C^{1}(\mathbb{R})\) be such that
where B is a positive constant, and
Then \(\psi \in C(\mathbb{R}^{+})\cap H^{1}(\mathbb{R}^{+})\), \(\psi (0)=0\), \(\psi (t)\geq 0\), for \(t\in \mathbb{R}^{+}\), and \(G(u_{1}(x,t)-u_{2}(x,t)-K)\in \operatorname{BV}(\Omega )\). Note that \(G(u_{1}^{k}(x,t)-u_{2}^{k}(x,t)-K)\in W^{1,1}(\Omega )\) can be chosen as the test function and, taking \(Q_{s}\) for \(s\in [0,T]\) as the domain of integration, we obtain
From the proof of Theorem 4 and due to \(0< G'\leq B\),
Hence
And therefore
Then
which implies the right-hand side inequality of (33). Exchanging the roles of \(u_{1}\) and \(u_{2}\) yields the left-hand side inequality of (33). When the initial datum \(f\equiv \operatorname{ess}\sup_{x\in \Omega }u_{0}^{+}\), f is the Young measure solution of (3)–(5); when the initial datum \(f\equiv -\operatorname{ess}\sup_{x\in \Omega }u_{0}^{-}\), f is also the Young measure solution of (3)–(5), which completes the proof of the theorem. □
Corollary 1
Let \(u_{1},u_{2}\in \mathscr{B}\) be the Young measure solutions of problem (3)–(5) with initial data \(f_{1}\) and \(f_{2}\) satisfying (H1), respectively. Assume
Then
5 Numerical scheme
In this section, numerical results are presented to demonstrate the performance of our proposed algorithm for restoring images corrupted by white Gaussian noise. The results are compared with those obtained by the PM method in [23] and the TV method in [8]. In the next two subsections, two discrete numerical schemes, the PM scheme (PMS) and the AOS scheme, are presented.
5.1 The AOS scheme
Let
Using the AOS scheme in [30], problem (3)–(5) can be discretized as
where \(A_{l}(u^{n})=[a_{i,j}(u^{n})]\),
and
where
with \(\mathcal{N}(i)\) being the set of the two neighbors of pixel i (boundary pixels have only one neighbor).
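For concreteness, one AOS update can be realized as below (a sketch in the spirit of [30]; the arithmetic mean of the diffusivity at half-grid points and the reflecting boundary treatment are our assumptions, not necessarily the paper's exact \(A_{l}\)):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a, diagonal b, super-diagonal c."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def aos_step(u, g, tau):
    """One AOS update u_{n+1} = (1/2) * sum_l (I - 2*tau*A_l(u_n))^{-1} u_n,
    where A_l is the 1-D diffusion matrix along axis l and g holds the
    diffusivity at each pixel (reflecting boundaries)."""
    out = np.zeros_like(u, dtype=float)
    for axis in (0, 1):
        v = np.moveaxis(u, axis, 0)
        w = np.moveaxis(g, axis, 0)
        res = np.empty_like(v, dtype=float)
        for j in range(v.shape[1]):
            gh = 0.5 * (w[1:, j] + w[:-1, j])  # diffusivity at half-grid points
            lo = np.concatenate(([0.0], gh))   # couples pixel i to pixel i-1
            hi = np.concatenate((gh, [0.0]))   # couples pixel i to pixel i+1
            diag = 1.0 + 2.0 * tau * (lo + hi)
            res[:, j] = thomas(-2.0 * tau * lo, diag, -2.0 * tau * hi, v[:, j])
        out += 0.5 * np.moveaxis(res, 0, axis)
    return out
```

Each half-step solves a tridiagonal system exactly, which is what makes the scheme stable for large time steps τ, the main practical advantage over the explicit scheme of the next subsection.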
5.2 The PM scheme
Similar to the original PM method, the discrete explicit scheme of the problem (3)–(5) is as follows:
where
for \(0\leq i\leq I\), \(0\leq j\leq J\).
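A minimal explicit update of this type reads as follows (our sketch: the diffusivity \(g(s)=1/(1+(s/K)^{2})\) is the classical Perona–Malik choice and need not coincide with the paper's coefficients; stability requires a small time step, e.g. \(\tau \leq 1/4\)):

```python
import numpy as np

def pm_explicit_step(u, tau=0.2, K=10.0):
    """One explicit Perona-Malik-type step: four one-sided differences, each
    weighted by the diffusivity g(s) = 1 / (1 + (s/K)^2); boundaries are
    handled by replication (homogeneous Neumann)."""
    dN = np.vstack([u[:1, :], u[:-1, :]]) - u   # u[i-1, j] - u[i, j]
    dS = np.vstack([u[1:, :], u[-1:, :]]) - u   # u[i+1, j] - u[i, j]
    dW = np.hstack([u[:, :1], u[:, :-1]]) - u   # u[i, j-1] - u[i, j]
    dE = np.hstack([u[:, 1:], u[:, -1:]]) - u   # u[i, j+1] - u[i, j]
    g = lambda s: 1.0 / (1.0 + (s / K)**2)
    return u + tau * (g(np.abs(dN)) * dN + g(np.abs(dS)) * dS
                      + g(np.abs(dW)) * dW + g(np.abs(dE)) * dE)
```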
6 Denoising performance
The denoising algorithms are tested on three images: a synthetic image (\(128 \times 128\) pixels), the Lena image (\(300\times 300\) pixels), and a tower image (\(500\times 500\) pixels). For each image, a noisy observation is generated by adding Gaussian noise with standard deviation \(\sigma \in \{30,50\}\) to the original image.
Peak signal-to-noise ratio (PSNR) and the mean absolute deviation error (MAE) are used to measure the quality of the restoration results. They are defined as
where \(u_{O}\) and u are the original and restored image, respectively. The stopping criterion of all methods is set to achieve the maximal PSNR or the best MAE. All methods are implemented in Matlab R2007b on a 2.8 GHz Pentium 4 processor.
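The displayed formulas did not survive extraction; for reference, the usual definitions (assuming a gray-scale peak of 255) can be computed as follows:

```python
import numpy as np

def psnr(u, u_o, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u_o, float))**2)
    return 10.0 * np.log10(peak**2 / mse)

def mae(u, u_o):
    """Mean absolute deviation error: mean of |u - u_O|."""
    return np.mean(np.abs(np.asarray(u, float) - np.asarray(u_o, float)))
```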
6.1 Measure of similarity of edges
The PSNR does not always give a clear indication as to whether one image is less staircased than another, so the authors of [44] also consider the value of \(\mathrm{PSNR}_{\mathrm{grad}}\), which they define as \(1/2(\operatorname{PSNR}(\partial _{x} u, \partial _{x} u_{O})+\operatorname{PSNR}( \partial _{y}u,\partial _{y}u_{O}))\); this measures how well the derivatives of the reconstruction match those of the true image.
The edge maps are defined as
where \(G_{\sigma }(x)=\frac{1}{4\pi \sigma }\exp (- \frac{\vert x\vert ^{2}}{4\sigma } ) \). All images are normalized so that their gray-scale values lie in the range \([0, 255]\). The authors of [45] find that a value of \(0.0025 \leq c \leq 0.025\) and \(\sigma = 0.5\) give the best edge map. In [46], the authors propose the following PSNR:
where \(\vert \max u_{O}-\min u_{O}\vert \) gives the gray-scale range of the original image. Note that PSNR̃ measures how well the reconstructed data match the true data, and the data need not be an image. Combining (35) and (36), in [47] we defined the following \(\mathrm{PSNR}_{\mathrm{E}}\) in order to measure the similarity of edges:
If some method produces wrong edges in the restored image, then the \(\mathrm{PSNR}_{\mathrm{E}}\) will be positive.
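Since the displayed formulas (35)–(37) did not survive extraction, the following sketch implements only the two ingredients that the text spells out: the generalized PSNR̃ of [46], with the peak replaced by the data range (our reading), and the \(\mathrm{PSNR}_{\mathrm{grad}}\) of [44]:

```python
import numpy as np

def psnr_tilde(v, v_o):
    """Generalized PSNR (assumed form): the peak value is replaced by the
    gray-scale range |max v_O - min v_O| of the true data v_O."""
    rng = np.max(v_o) - np.min(v_o)
    mse = np.mean((v - v_o)**2)
    return 10.0 * np.log10(rng**2 / mse)

def psnr_grad(u, u_o):
    """PSNR_grad: 1/2 * (PSNR(d_x u, d_x u_O) + PSNR(d_y u, d_y u_O)),
    with derivatives taken by central differences."""
    dux, duy = np.gradient(np.asarray(u, float))
    dox, doy = np.gradient(np.asarray(u_o, float))
    return 0.5 * (psnr_tilde(dux, dox) + psnr_tilde(duy, doy))
```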
6.2 Comparison with other methods
The results are depicted in Figs. 1–3 for the synthetic image and Figs. 4–5 for the Lena image. Our methods do a good job of restoring faint geometrical structures of the images even for high values of σ, for instance, in the results on the Lena image for \(\sigma = 50\). Our algorithm distinguishes itself from its competitors most of the time both visually and quantitatively, as revealed by the PSNR and MAE values. For the TV method, the number of iterations necessary to satisfy the stopping rule increases rapidly when σ increases. For the PM method, an appropriate parameter K is indispensable for obtaining the best result.
Figures 1–3 illustrate that the proposed model is able to reconstruct sharp edges and nonuniform regions while avoiding staircasing. TV-based diffusion reconstructs sharp edges, but the staircasing effect is obvious. PM-based diffusion also reconstructs sharp edges, but it creates isolated black and white speckles in the restored image. The proposed model reconstructs sharp edges as effectively as PM-based diffusion and meanwhile recovers smooth regions as effectively as pure isotropic diffusion (in particular, without staircasing). Figure 3 shows the edge functions when the smoothed images by the new methods, TV, and PM attain the largest PSNR, respectively. Note that PM and TV methods create many new edges in the restored images.
Figures 4–5 show the restored Lena images produced by the Perona–Malik equation, the TV method, PMS, and AOS. Figures 4(e) and 5(e) show the images processed by PM diffusion, which contain isolated black and white speckles. By contrast, in Figs. 4(c)–4(d), the images processed by the new methods are very clear. Inside the regions, the new diffusion acts as Gaussian smoothing, so our method effectively avoids the staircase effect. In Tables 1 and 2, we observe that both the PSNRs and MAEs of the restored images for our methods are better than those for the PM and TV methods, and the increase in \(\mathrm{PSNR}_{\mathrm{E}}\) with the new diffusion operators is obvious.
The denoising performance results are tabulated in Tables 1–2, where the best PSNR, MAE, \(\mathrm{PSNR}_{\mathrm{E}}\), and CPU time are shown in boldface. The PSNR improvement brought by our approach can be quite high, particularly for \(\sigma = 50\) (see, e.g., Figs. 1–2), and the visual resolution is quite remarkable. For \(\sigma = 30\), the PSNRs of our algorithm are also higher than those of the PM and TV methods. Moreover, the new algorithm with the AOS scheme shows high PSNRs on real images (Figs. 3–4). Note that for large images (Figs. 3–4, \(300\times 300\)), the new methods take less time than the TV and PM methods.
7 Conclusion
In this paper, based on convex and nonconvex linear growth functionals, we proposed a class of singular diffusion equations for noise removal. In our method, the convex and nonconvex functionals are combined into a new functional. Although the analysis of the new functional itself is hard, the new singular forward–backward diffusion equation is derived from it, and the existence, uniqueness, and long-time behavior of solutions for the new equation are investigated. Finally, experimental results illustrate the effectiveness of the model in noise reduction.
This work proposes quite an original and efficient method for noise removal. Noise removal is a difficult problem that arises in various applications relevant to active imaging systems. The main ingredients of our method are as follows: (1) the new framework is based on a combination of convex and nonconvex functions; (2) the new equation is a singular forward–backward diffusion; (3) the Young measure solution is obtained, and some useful properties of the solution are considered, such as long-time behavior, stability, and the maximum principle; (4) the new diffusion can be simulated by the efficient AOS scheme.
The obtained numerical results are really encouraging since they outperform the most recent methods in this field.
Availability of data and materials
Not applicable.
Abbreviations
- AOS:
-
additive operator splitting
- PSNR:
-
peak signal-to-noise ratio
- MAE:
-
mean absolute deviation error
- TV:
-
total variation
- PM:
-
Perona–Malik
- CPU:
-
central processing unit
References
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D: Nonlinear Phenom. 60(1–4), 259–268 (1992)
Vese, L.: Problemes variationnels et edp pour l’analyse d’images et levolution de courbes. PhD thesis, Nice (1996)
Meyer, Y.: Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures, vol. 22. Am. Math. Soc., Providence (2001)
Strong, D., Chan, T.: Edge-preserving and scale-dependent properties of total variation regularization. Inverse Probl. 19(6), 165 (2003)
Bellettini, G., Caselles, V., Novaga, M.: The total variation flow in \(\mathbb{R}^{N}\). J. Differ. Equ. 184(2), 475–525 (2002)
Chan, T., Esedoglu, S., Park, F.: Total variation image restoration: overview and recent developments. In: Handbook of Mathematical Models in Computer Vision. Springer, Berlin (2006)
Strong, D.M., Chan, T.F. (eds.): Spatially and scale adaptive total variation based regularization and anisotropic diffusion in image processing. Diffusion in image processing. UCLA Math Department CAM report (1996)
Chambolle, A., Lions, P.L.: Image recovery via total variation minimization and related problems. Numer. Math. 76, 167–188 (1997)
Chen, Y.M., Levine, S., Rao, M.: Variable exponent, linear growth functionals in image restoration. SIAM J. Appl. Math. 66(4), 1383–1406 (2006)
Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. SIAM J. Sci. Comput. 22, 503–516 (2000)
Lysaker, M., Lundervold, A., Tai, X.C.: Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Process. 12(12), 1579–1590 (2003)
Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. PhD thesis (2000)
You, Y.L., Kaveh, M.: Fourth-order partial differential equations for noise removal. IEEE Trans. Image Process. 9(10), 1723–1730 (2000)
Theljani, A., Belhachmi, Z., Moakher, M.: High-order anisotropic diffusion operators in spaces of variable exponents and application to image inpainting and restoration problems. Nonlinear Anal., Real World Appl. 47, 251–271 (2019)
Mei, J.-J., Huang, T.-Z.: Primal–dual splitting method for high-order model with application to image restoration. Appl. Math. Model. 40(3), 2322–2332 (2016)
Liu, C., Jin, M.: Some properties of solutions of a fourth-order parabolic equation for image processing. Bull. Malays. Math. Sci. Soc. 43(1), 333–353 (2020)
Rathish Kumar, B.V., Halim, A.: A linear fourth-order PDE-based gray-scale image inpainting model. Comput. Appl. Math. 38(1), 6–21 (2019)
Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Springer, New York (2002)
Hebert, T., Leahy, R.: A generalized EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors. IEEE Trans. Med. Imaging 8(2), 194–202 (1989)
Geman, S., Reynolds, G.: Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 14, 367–383 (1992)
Geman, D., Yang, C.: Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 4, 932–946 (1995)
Chipot, M., March, R., Rosati, M., Vergara Caffarelli, G.: Analysis of a nonconvex problem related to signal selective smoothing. Math. Models Methods Appl. Sci. 7, 313–328 (1997)
Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990)
Blake, A.: Comparison of the efficiency of deterministic and stochastic algorithms for visual reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 11(1), 2–12 (1989)
Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 6(2), 298–311 (1997)
Nikolova, M., Ng, M.K.: Analysis of half-quadratic minimization methods for signal and image recovery. SIAM J. Sci. Comput. 27(3), 937–966 (2005)
Xiong, K., Iu, H.H.C., Wang, S.: Kernel correntropy conjugate gradient algorithms based on half-quadratic optimization. IEEE Trans. Cybern., 1–14 (2020)
Bergmann, R., Chan, R.H., Hielscher, R., Persch, J., Steidl, G.: Restoration of manifold-valued images by half-quadratic minimization. Inverse Probl. Imaging 10(2), 281–304 (2016)
Robini, M.C., Zhu, Y.: Generic half-quadratic optimization for image reconstruction. SIAM J. Imaging Sci. 8(3), 1752–1797 (2015)
Weickert, J., Romeny, B.M.T.H., Viergever, M.A.: Efficient and reliable schemes for nonlinear diffusion filtering. IEEE Trans. Image Process. 7(3), 398–410 (1998)
Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Two deterministic half-quadratic regularization algorithms for computed imaging. In: Proceedings of 1st International Conference on Image Processing, Nov. 1994, Austin, TX, USA, vol. 2, pp. 168–172. IEEE, Austin (1994)
Weickert, J.: Anisotropic Diffusion in Image Processing. Teubner, Stuttgart (1998)
Catté, F., Lions, P.-L., Morel, J.-M., Coll, T.: Image selective smoothing and edge detection by nonlinear diffusion. SIAM J. Numer. Anal. 29(1), 182–193 (1992)
Keeling, S.L., Stollberger, R.: Nonlinear anisotropic diffusion filtering for multiscale edge enhancement. Inverse Probl. 18(1), 175–190 (2002)
Gilboa, G., Sochen, N., Zeevi, Y.Y.: Forward-and-backward diffusion processes for adaptive image enhancement and denoising. IEEE Trans. Image Process. 11(7), 689–703 (2002)
Smolka, B.: Combined forward and backward anisotropic diffusion filtering of color images. In: Joint Pattern Recognition Symposium, pp. 314–322. Springer, Berlin (2002)
Kinderlehrer, D., Pedregal, P.: Remarks about the analysis of gradient young measures. J. Geom. Anal. 4(1), 59–90 (1994)
Demoulini, S.: Young measure solutions for a nonlinear parabolic equation of forward–backward type. SIAM J. Math. Anal. 27(2), 376–403 (1996)
Kinderlehrer, D., Pedregal, P.: Weak convergence of integrands and the young measure representation. SIAM J. Math. Anal. 23(1), 1–19 (1992)
Yin, J., Wang, C.: Young measure solutions of a class of forward–backward diffusion equations. J. Math. Anal. Appl. 279(2), 659–683 (2003)
Guidotti, P.: A backward–forward regularization of the Perona–Malik equation. J. Differ. Equ. 252, 3226–3244 (2012)
Guidotti, P., Kim, Y., Lambers, J.: Image restoration with a new class of forward–backward–forward diffusion equations of Perona–Malik type with applications to satellite image enhancement. SIAM J. Imaging Sci. 6(3), 1416–1444 (2013)
Dacorogna, B.: Direct Methods in the Calculus of Variations, vol. 78. Springer, Berlin (2007)
Tai, X.C., Lie, K.A., Chan, T.F., Osher, S.: Image Processing Based on Partial Differential Equations. Springer, Heidelberg (2006)
Levine, S., Chen, Y., Stanich, J.: Image restoration via nonstandard diffusion. Technical Report 04-01, Department of Mathematics and Computer Science, Duquesne University (2004)
Durand, S., Fadili, J., Nikolova, M.: Multiplicative noise removal using \(l_{1}\) fidelity on frame coefficients. J. Math. Imaging Vis. 36, 201–226 (2010)
Guo, Z., Sun, J., Zhang, D., Wu, B.: Adaptive Perona–Malik model based on the variable exponent for image denoising. IEEE Trans. Image Process. 21(3), 958–967 (2012)
Acknowledgements
Not applicable.
Funding
The authors would like to express their sincere thanks to the referees for their valuable suggestions in the revision of the paper which contributed greatly to this work. This work was partially supported by the National Science Foundation of China (Grant no. 11271100) and the 985 project of Harbin Institute of Technology, and the Natural Science Foundation of Heilongjiang Province (Grant no. LH2020A015).
Author information
Contributions
The authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Dong, G., Wu, B. A class of singular diffusion equations based on the convex–nonconvex variation model for noise removal. Bound Value Probl 2021, 8 (2021). https://doi.org/10.1186/s13661-021-01485-x