Journal of Mathematical Imaging and Vision 20: 89–97, 2004
© 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.
An Algorithm for Total Variation Minimization and Applications
ANTONIN CHAMBOLLE
CEREMADE–CNRS UMR 7534, Université de Paris-Dauphine, 75775 Paris Cedex 16, France
antonin@ceremade.dauphine.fr
Abstract. We propose an algorithm for minimizing the total variation of an image, and provide a proof of
convergence. We show applications to image denoising, zooming, and the computation of the mean curvature
motion of interfaces.
Keywords: total variation, image reconstruction, denoising, zooming, mean curvature motion
1. Introduction
The total variation was first introduced in computer vision by Rudin, Osher and Fatemi [17] as a regularizing criterion for solving inverse problems. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects.
In this paper we propose an algorithm for minimizing the total variation, which we claim to be quite fast. It is based on a dual formulation and is related to the work of Chan, Golub, and Mulet [6] and of Carter [3]. However, our presentation is slightly different, and we can provide a proof of convergence. We then show how our algorithm can be applied to two standard inverse problems in image processing, namely image denoising and zooming. We refer to [5, 8–10, 15, 19] for other algorithms solving the same problem (as well as to the other papers on total variation quoted in this note).
2. Notations and Preliminary Remarks
Let us fix our main notations. To simplify, our images will be 2-dimensional matrices of size N × N (adaptation to other cases or to higher dimension is not difficult). We denote by X the Euclidean space $\mathbb{R}^{N \times N}$. To define the discrete total variation, we introduce a discrete (linear) gradient operator. If $u \in X$, the gradient $\nabla u$ is a vector in $Y = X \times X$ given by
$$(\nabla u)_{i,j} = \big( (\nabla u)^1_{i,j},\ (\nabla u)^2_{i,j} \big),$$
with
$$(\nabla u)^1_{i,j} = \begin{cases} u_{i+1,j} - u_{i,j} & \text{if } i < N,\\ 0 & \text{if } i = N, \end{cases}
\qquad
(\nabla u)^2_{i,j} = \begin{cases} u_{i,j+1} - u_{i,j} & \text{if } j < N,\\ 0 & \text{if } j = N, \end{cases}$$
for $i, j = 1, \ldots, N$. Other choices of discretization are of course possible for the gradient, as long as it is a linear operator. Our choice seems to offer a good compromise between isotropy and stability.
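As a sanity check, this forward-difference gradient can be written in a few lines of NumPy. This is our own illustration, not code from the paper; the array layout `g[i, j, k]` for $(\nabla u)^k_{i,j}$ is a convention we choose here:

```python
import numpy as np

def grad(u):
    """Discrete gradient of an N x N image u: forward differences,
    with the convention that the difference is zero at the last
    row (i = N) and last column (j = N)."""
    N = u.shape[0]
    g = np.zeros((N, N, 2))
    g[:-1, :, 0] = u[1:, :] - u[:-1, :]   # (grad u)^1_{i,j}
    g[:, :-1, 1] = u[:, 1:] - u[:, :-1]   # (grad u)^2_{i,j}
    return g
```

The last row and column of each component are left at zero, which matches the boundary convention above (a discrete Neumann condition).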
Then, the total variation of u is defined by
$$J(u) = \sum_{1 \le i,j \le N} |(\nabla u)_{i,j}|, \qquad (1)$$
with $|y| := \sqrt{y_1^2 + y_2^2}$ for every $y = (y_1, y_2) \in \mathbb{R}^2$.
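Equation (1) amounts to summing, over all pixels, the Euclidean norm of the discrete gradient. A self-contained NumPy sketch of this computation (our own illustration, not code from the paper):

```python
import numpy as np

def total_variation(u):
    """Discrete total variation J(u) of Eq. (1): the sum over i, j of
    the Euclidean norm of the forward-difference gradient, with zero
    differences at the last row/column."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]   # (grad u)^1, zero at i = N
    gy[:, :-1] = u[:, 1:] - u[:, :-1]   # (grad u)^2, zero at j = N
    return np.sqrt(gx**2 + gy**2).sum()
```

For instance, an image that is zero except for one pixel of value 1 has gradient contributions from the four edges of that pixel, giving $J(u) = 2 + \sqrt{2}$ with this discretization.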
Let us observe here that this functional J is a discretization of the standard total variation, defined in the continuous setting for a function $u \in L^1(\Omega)$ ($\Omega$ an open subset of $\mathbb{R}^2$) by
$$J(u) = \sup\left\{ \int_\Omega u(x)\,\mathrm{div}\,\xi(x)\,dx \;:\; \xi \in C^1_c(\Omega;\mathbb{R}^2),\ |\xi(x)| \le 1\ \forall x \in \Omega \right\}. \qquad (2)$$
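A standard example (our own illustration, not in the original text) shows why (2) measures the length of object boundaries. If $u = \chi_B$ is the characteristic function of a bounded set $B$ with smooth boundary, compactly contained in $\Omega$, then the divergence theorem gives, for any admissible test field $\xi$,
$$\int_\Omega \chi_B(x)\,\mathrm{div}\,\xi(x)\,dx \;=\; \int_{\partial B} \xi \cdot \nu \, d\mathcal{H}^1 \;\le\; \mathcal{H}^1(\partial B),$$
where $\nu$ is the outer unit normal to $B$, and the bound is attained in the limit by fields $\xi$ approximating $\nu$ along $\partial B$. Hence $J(\chi_B)$ equals the perimeter of $B$: penalizing the total variation shortens boundaries without smoothing them away, which is the edge-preserving behavior noted in the introduction.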