
for a given Lagrange multiplier λ. The above functional is strictly convex and lower
semicontinuous with respect to the weak-star topology of BV . Therefore the minimum
exists, is unique, and is computable (see, e.g., [6]). The parameter λ controls the
tradeoff between the regularity and fidelity terms. As λ gets smaller the weight of
the regularity term increases. Therefore λ is related to the degree of filtering of the
solution of the minimization problem. Let us denote by $\mathrm{TVF}_\lambda(v)$ the solution of problem (2.2) for a given value of λ. The Euler–Lagrange equation associated with the minimization problem is given by
\[
(u(x) - v(x)) - \frac{1}{2\lambda}\,\mathrm{curv}(u)(x) = 0
\]
(see [29]). Thus, we have the following theorem.
Theorem 2.5. The image method noise of the total variation minimization (2.2) is
\[
u(x) - \mathrm{TVF}_\lambda(u)(x) = -\frac{1}{2\lambda}\,\mathrm{curv}\big(\mathrm{TVF}_\lambda(u)\big)(x).
\]
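Indeed, $\mathrm{TVF}_\lambda(u)$ is by definition the minimizer of (2.2) with $v$ replaced by $u$, so the Euler–Lagrange equation above, applied to this solution, reads
\[
\big(\mathrm{TVF}_\lambda(u)(x) - u(x)\big) - \frac{1}{2\lambda}\,\mathrm{curv}\big(\mathrm{TVF}_\lambda(u)\big)(x) = 0,
\]
and the identity follows by rearranging.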
As in the anisotropic case, straight edges are maintained because of their small
curvature. However, details and texture can be oversmoothed if λ is too small, as is
shown in Figure 3.
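To make the method noise concrete, here is a minimal numerical sketch that computes $u - \mathrm{TVF}_\lambda(u)$ with Chambolle's TV solver from scikit-image (the solver choice, test image, and parameter values are illustrative assumptions, not part of the original text). In `denoise_tv_chambolle`, the `weight` parameter multiplies the total variation term, so it plays the role of $1/(2\lambda)$ up to normalization: a larger weight corresponds to a smaller λ and hence stronger filtering.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

u = img_as_float(data.camera())            # any grayscale test image

for weight in (0.05, 0.1, 0.2):            # increasing weight ~ decreasing lambda
    tvf = denoise_tv_chambolle(u, weight=weight)
    method_noise = u - tvf                 # by Theorem 2.5: -(1/(2*lambda)) curv(TVF_lambda(u))
    print(f"weight={weight}: mean |method noise| = {np.abs(method_noise).mean():.4f}")
```

Displaying `method_noise` as an image shows that, as λ decreases, it contains more and more visible edges and texture, which corresponds to the oversmoothing shown in Figure 3.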
2.4. Iterated total variation refinement. In the original total variation model
the removed noise, v(x) − u(x), is treated as an error and is no longer studied. In
practice, some structures and texture are present in this error. Several recent works
have tried to avoid this effect [35, 25].
2.4.1. The Tadmor–Nezzar–Vese approach. In [35], the authors have proposed to use the Rudin–Osher–Fatemi model iteratively. They decompose the noisy image, $v = u_0 + n_0$, by the total variation model. So, taking $u_0$ to contain only geometric information, they decompose by the very same model $n_0 = u_1 + n_1$, where $u_1$ is assumed to be again a geometric part and $n_1$ contains less geometric information than $n_0$. Iterating this process, one obtains $u = u_0 + u_1 + u_2 + \cdots + u_k$ as a refined geometric part and $n_k$ as the noise residue. This strategy is in some sense close to the matching pursuit methods [20]. Of course, the weight parameter in the Rudin–Osher–Fatemi model has to grow at each iteration, and the authors propose the geometric series $\lambda, 2\lambda, \ldots, 2^k\lambda$. In that way, the extraction of the geometric part from each residue becomes twice as exacting at each step. The new algorithm is then as follows:
1. Starting with an initial scale $\lambda = \lambda_0$,
\[
v = u_0 + n_0, \qquad [u_0, n_0] = \arg\min_{v = u + n} \int |Du| + \lambda_0 \int |v(x) - u(x)|^2 \, dx.
\]
2. Proceed with successive applications of the dyadic refinement $n_j = u_{j+1} + n_{j+1}$,
\[
[u_{j+1}, n_{j+1}] = \arg\min_{n_j = u + n} \int |Du| + \lambda_0\, 2^{j+1} \int |n_j(x) - u(x)|^2 \, dx.
\]
3. After k steps, we get the following hierarchical decomposition of v:
\[
v = u_0 + n_0 = u_0 + u_1 + n_1 = \cdots = u_0 + u_1 + \cdots + u_k + n_k.
\]
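Below is a minimal numerical sketch of this hierarchical decomposition, again using Chambolle's TV solver as the inner Rudin–Osher–Fatemi step (the solver, the noise model, $\lambda_0$, and $k$ are illustrative assumptions rather than the authors' implementation). Since the scikit-image `weight` behaves like $1/(2\lambda)$, doubling λ at each step amounts to halving the weight:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle
from skimage.util import random_noise

v = img_as_float(data.camera())
v = random_noise(v, var=0.01)              # synthetic noisy image v

weight0 = 0.3                              # stands in for the initial scale lambda_0
k = 4                                      # number of dyadic refinement steps

parts = []                                 # geometric parts u_0, ..., u_k
residual = v.copy()                        # current residue; starts as v = u_0 + n_0
for j in range(k + 1):
    u_j = denoise_tv_chambolle(residual, weight=weight0 / 2**j)  # lambda ~ lambda_0 * 2^j
    parts.append(u_j)                      # extract geometric part u_j
    residual = residual - u_j              # remaining residue n_j, decomposed next

u = np.sum(parts, axis=0)                  # refined geometric part u_0 + ... + u_k
assert np.allclose(v, u + residual)        # v = u_0 + ... + u_k + n_k by construction
```

The identity $v = u_0 + \cdots + u_k + n_k$ holds exactly by construction; what the doubling of λ controls is how much of each residue is reinterpreted as geometry at the next level.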