The image registration problem can be expressed as an optimal control problem by

$\underset{\mathit{\varphi}\in \mathbf{\Gamma}}{min}\mathcal{J}[\mathbf{R},\mathbf{T};{\mathit{\varphi}}_{u}]$

(5)

for the functional

$\mathcal{J}[\mathbf{R},\mathbf{T};{\mathit{\varphi}}_{u}]={C}_{\mathrm{sim}}[\mathbf{R},\mathbf{T};{\mathit{\varphi}}_{u}]+\lambda {C}_{\mathrm{reg}}[u],$

(6)

where ${C}_{\mathrm{sim}}[\mathbf{R},\mathbf{T};{\mathit{\varphi}}_{u}]$ denotes a similarity measure between the template image **T** and the reference image **R**, ${\mathit{\varphi}}_{u}(\mathbf{x}):=\mathbf{x}+u(\mathbf{x})$ is the deformation field, *u* is the displacement field, Γ is the set of admissible transformations, ${C}_{\mathrm{reg}}[u]$ is a regularization term, and *λ* is a regularization constant.
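The decomposition in (6) can be illustrated with a small discrete sketch. The 1-D nearest-neighbor warp, the sum-of-squares similarity, and the quadratic regularizer below are illustrative placeholders of our own, not the measures chosen later in this section:

```python
import numpy as np

def warp_identity_plus_u(T, u):
    """Apply phi_u(x) = x + u(x) to a 1-D 'image' T by nearest-neighbor
    resampling (illustrative only; real registration uses interpolation)."""
    n = len(T)
    idx = np.clip(np.round(np.arange(n) + u).astype(int), 0, n - 1)
    return T[idx]

def J(R, T, u, C_sim, C_reg, lam):
    """Generic registration objective: similarity of the warped template
    to the reference, plus a weighted regularizer on the displacement u."""
    return C_sim(R, warp_identity_plus_u(T, u)) + lam * C_reg(u)

# Placeholder choices (sum-of-squares similarity, quadratic regularizer):
C_sim = lambda R, Tw: float(np.sum((Tw - R) ** 2))
C_reg = lambda u: float(np.sum(np.diff(u) ** 2))
```

A displacement that aligns the template with the reference drives the similarity term to zero, which is the behavior the functional (6) is designed to reward.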

We choose the ${L}^{2}$-norm-type similarity measure defined as

${C}_{\mathrm{sim}}[\mathbf{R}(\mathbf{x}),\mathbf{T}(\mathbf{x});{\mathit{\varphi}}_{u}(\mathbf{x})]:={\int}_{\mathrm{\Omega}}\mathrm{\nabla}\cdot (\mathbf{T}(\mathbf{x}+u(\mathbf{x}))-\mathbf{R}(\mathbf{x}))\phantom{\rule{0.2em}{0ex}}d\mathbf{x}.$

(7)

Note that other similarity measures could be selected depending on the problem. We choose (7) because, to the best of our knowledge, this similarity measure has not been associated with any volumetric image registration algorithm in the literature, and we wish to test its suitability for these types of applications.

Without the regularizing term in the functional (6), the image registration problem (5) is ill-posed [8]; furthermore, imaging data are usually not smooth due to edges, folding, or other unwanted deformations. Ill-posed problems arise widely in PDE-based image processing and inverse problems. An optimization problem is said to be well posed if a solution exists uniquely and depends continuously on the data of the problem; if either of these conditions is not satisfied, the problem is called ill-posed. Image registration is an ill-posed optimal control problem. In order to overcome the ill-posedness of the optimization problem (5) and to ensure smooth solutions, we introduce additional regularization terms. The main idea behind adding a regularization term is to smooth the problem with respect to both the functional and the solution, so that well-posedness is ensured and efficient computational methods can be defined to determine minimizers. Typical regularization terms associated with image registration problems include curvature, diffusion, elastic, and fluid regularization; details about each of these approaches can be found, for example, in [1] and the references therein.
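The stabilizing effect of a regularizer can be seen in miniature with classical Tikhonov regularization of a nearly rank-deficient least-squares problem. This is a standard textbook example, unrelated to the specific registration functional used here:

```python
import numpy as np

# Nearly rank-deficient design matrix: the unregularized normal equations
# are numerically unstable because A^T A is almost singular.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
b = np.array([2.0, 2.0])

def solve_regularized(A, b, lam):
    """Minimize |Ax - b|^2 + lam * |x|^2 via the regularized normal
    equations (A^T A + lam * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Even a tiny lam produces a bounded solution that still fits the data:
x_reg = solve_regularized(A, b, 1e-6)
```

The regularized problem is strictly convex, so its solution exists, is unique, and varies continuously with the data, which is exactly the well-posedness that the term ${C}_{\mathrm{reg}}[u]$ restores in (6).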

In this paper, we introduce a regularization term that is the sum of two terms, each with its own weight (the constant *λ* in (6) is absorbed into ${\lambda}_{1}$ and ${\lambda}_{2}$), defined as follows:

${C}_{\mathrm{reg}}[u(\mathbf{x})]:={\lambda}_{1}{\int}_{\mathrm{\Omega}}\sqrt{{|\mathrm{\nabla}u(\mathbf{x})|}^{2}+\beta}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}+{\lambda}_{2}{\int}_{\mathrm{\Omega}}log(u(\mathbf{x}))\phantom{\rule{0.2em}{0ex}}d\mathbf{x}.$

(8)
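A possible finite-difference evaluation of (8) on a 1-D displacement grid is sketched below. The forward-difference gradient and the grid spacing `h` are our own discretization choices, and the logarithmic term requires *u* > 0 pointwise:

```python
import numpy as np

def C_reg(u, lam1, lam2, beta, h=1.0):
    """Discretize (8): perturbed-TV term plus logarithmic term.
    u must be strictly positive for the log term to be defined."""
    grad_u = np.diff(u) / h                        # forward differences
    tv_term = np.sum(np.sqrt(grad_u ** 2 + beta)) * h
    log_term = np.sum(np.log(u)) * h
    return lam1 * tv_term + lam2 * log_term
```

For a constant displacement u ≡ 1 the gradient vanishes and log(1) = 0, so the value reduces to ${\lambda}_{1}$ times the number of interior differences times √β, which makes the effect of the perturbation parameter directly visible.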

Let us further point out that the regularization term (8) also has not been associated with any volumetric data integration problem in the literature. The term ${\int}_{\mathrm{\Omega}}\sqrt{{|\mathrm{\nabla}u(\mathbf{x})|}^{2}+\beta}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}$ is known as the perturbed total-variation model and has been used in image restoration problems. It is obtained by modifying the Dirichlet regularization term

${\int}_{\mathrm{\Omega}}{|\mathrm{\nabla}u(\mathbf{x})|}^{2}\phantom{\rule{0.2em}{0ex}}d\mathbf{x},$

(9)

which penalizes non-smooth images. A major shortcoming of (9) is that some image features, such as the edges of the original image, may appear blurred in the reconstructed image. To overcome this drawback, Rudin, Osher, and Fatemi (ROF) proposed replacing (9) with the so-called total-variation (TV) seminorm ${\int}_{\mathrm{\Omega}}|\mathrm{\nabla}u(\mathbf{x})|\phantom{\rule{0.2em}{0ex}}d\mathbf{x}$. In the solution of the optimal control problem (5), in order to prevent degeneracy of the resulting Euler-Lagrange equations, we modify the TV model as ${\int}_{\mathrm{\Omega}}\sqrt{{|\mathrm{\nabla}u(\mathbf{x})|}^{2}+\beta}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}$, where *β* is an arbitrarily small perturbation parameter. The other regularization term that we use is ${\int}_{\mathrm{\Omega}}log(u(\mathbf{x}))\phantom{\rule{0.2em}{0ex}}d\mathbf{x}$; this term is added to make the regularization term novel and to assess its impact on volumetric data integration problems.
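The role of *β* can be checked numerically: the TV integrand |g| has no derivative at g = 0, while the perturbed integrand √(g² + β) has derivative g/√(g² + β), which is defined and bounded everywhere. The value of `beta` below is an arbitrary small choice:

```python
import numpy as np

beta = 1e-3  # arbitrary small perturbation parameter

def d_perturbed_tv(g):
    """Derivative of sqrt(g^2 + beta) with respect to g: smooth for all g,
    unlike the derivative of |g|, which is undefined at g = 0."""
    return g / np.sqrt(g ** 2 + beta)

# At g = 0 the perturbed model has a well-defined, vanishing slope:
slope_at_zero = d_perturbed_tv(0.0)
```

This is precisely why the Euler-Lagrange equations of the perturbed model avoid the degeneracy of the pure TV model wherever ∇u vanishes.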

With these choices, we can express the cost functional of the optimization problem (5) as

$\begin{array}{rcl}\mathcal{J}[\mathbf{R},\mathbf{T};{\mathit{\varphi}}_{u}]& =& {\int}_{\mathrm{\Omega}}\mathrm{\nabla}\cdot (\mathbf{T}(\mathbf{x}+u(\mathbf{x}))-\mathbf{R}(\mathbf{x}))\phantom{\rule{0.2em}{0ex}}d\mathbf{x}\\ +{\lambda}_{1}{\int}_{\mathrm{\Omega}}\sqrt{{|\mathrm{\nabla}u(\mathbf{x})|}^{2}+\beta}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}+{\lambda}_{2}{\int}_{\mathrm{\Omega}}log(u(\mathbf{x}))\phantom{\rule{0.2em}{0ex}}d\mathbf{x}.\end{array}$

This is a variational [7] convex optimization problem. Necessary and sufficient conditions for the existence and uniqueness of solutions were given in [4]. Because we establish a connection between this variational optimization problem and 3-D wavelet transforms, for a given scale *m*, the optimal control problem can be expressed as

${\stackrel{\u02c6}{\alpha}}^{m}=\underset{{\alpha}^{m}\in {\mathcal{A}}^{m}}{argmin}\mathcal{J}[{C}_{\mathrm{sim}}(\mathbf{x}),{C}_{\mathrm{reg}}(\mathbf{x}),{\varphi}_{u}(\mathbf{x},{\alpha}^{m})],$

where ${\mathcal{A}}^{m}$ stands for the admissible parameter set. We apply a blockwise descent algorithm. During the minimization, the cost functional $\mathcal{J}$ needs to be evaluated only on ${\mathrm{\Omega}}_{i,j,k}^{m}$, defined as

${\mathrm{\Omega}}_{i,j,k}^{m}:=[\frac{i-1}{{2}^{m}},\frac{i+1}{{2}^{m}}]\times [\frac{j-1}{{2}^{m}},\frac{j+1}{{2}^{m}}]\times [\frac{k-1}{{2}^{m}},\frac{k+1}{{2}^{m}}],$

which is the support of ${\mathrm{\Phi}}_{i,j,k}^{m}$. Inside the block, the descent direction $d\in {\mathbb{R}}^{3}$ is computed as the negative of the gradient $\frac{\partial \mathcal{J}}{\partial {\alpha}^{m}}$ of the cost functional $\mathcal{J}$, where

$\begin{array}{rcl}\frac{\partial \mathcal{J}}{\partial {\alpha}^{m}}& =& {\int}_{\mathrm{\Omega}}\mathrm{\Delta}({\mathbf{T}}_{u}(\mathbf{x})-{\mathbf{R}}_{u}(\mathbf{x}))\mathrm{\nabla}{\mathbf{T}}_{u}(\mathbf{x}){\left(\frac{\partial u(\mathbf{x})}{\partial {({\alpha}^{m})}^{t}}\right)}^{t}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}\\ +{\lambda}_{1}{\int}_{\mathrm{\Omega}}\frac{\mathrm{\nabla}u(\mathbf{x})}{\sqrt{{|\mathrm{\nabla}u(\mathbf{x})|}^{2}+\beta}}{\left(\frac{\partial u(\mathbf{x})}{\partial {({\alpha}^{m})}^{t}}\right)}^{t}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}+{\lambda}_{2}{\int}_{\mathrm{\Omega}}\frac{1}{u(\mathbf{x})}{\left(\frac{\partial u(\mathbf{x})}{\partial {({\alpha}^{m})}^{t}}\right)}^{t}\phantom{\rule{0.2em}{0ex}}d\mathbf{x}.\end{array}$
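The blockwise descent described above can be sketched as follows. The finite-difference gradient and the quadratic toy objective stand in for the analytic gradient and the actual registration functional, and the step size and iteration count are arbitrary choices:

```python
import numpy as np

def support_block(i, j, k, m):
    """Support Omega^m_{i,j,k}: the product of the intervals
    [(n - 1) / 2^m, (n + 1) / 2^m] for n in (i, j, k)."""
    s = 2 ** m
    return tuple(((n - 1) / s, (n + 1) / s) for n in (i, j, k))

def fd_gradient(J, alpha, eps=1e-6):
    """Central-difference approximation of dJ/dalpha^m (stands in for
    the analytic gradient above)."""
    g = np.zeros_like(alpha)
    for idx in range(alpha.size):
        e = np.zeros_like(alpha)
        e[idx] = eps
        g[idx] = (J(alpha + e) - J(alpha - e)) / (2 * eps)
    return g

def block_descent(J, alpha0, step=0.1, iters=100):
    """Update one coefficient block along the direction d = -grad J."""
    alpha = alpha0.astype(float).copy()
    for _ in range(iters):
        alpha -= step * fd_gradient(J, alpha)
    return alpha

# Placeholder quadratic objective standing in for J restricted to one block:
target = np.array([1.0, 2.0, 3.0])
J_toy = lambda a: float(np.sum((a - target) ** 2))
alpha_hat = block_descent(J_toy, np.zeros(3))
```

Restricting the integrals to `support_block(i, j, k, m)` is what makes the blockwise scheme cheap: each coefficient update only touches the voxels where the corresponding basis function ${\mathrm{\Phi}}_{i,j,k}^{m}$ is nonzero.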