via group sparse representation. Since the group constraint is a rather strong restriction on accurate signal representation, clean images cannot be sufficiently restored from the noisy ones, especially for noise of small deviations.
On the other hand, graph theory has been widely employed in a variety of image applications in recent decades. For image denoising in particular, reports show that denoising performance can be improved when graph theory is incorporated. For example, a graph regularized sparse representation method17 is proposed to perform a manifold embedding on the sparse coefficients, in which the graph Laplacian matrix is viewed as a useful tool to exploit the geometrical structures of image patches. Subsequently, a robust graph Laplacian-based image denoising method is presented with optimal edge weights of the graph to cope with the noise.18
Unlike the direct utilization of the graph Laplacian, a denoising method based on eigenvectors of the graph Laplacian (EGL) has recently been developed.19 In that work, the eigenvectors of the graph Laplacian of patches not only represent the global features of images but also serve as a set of basis functions to reconstruct images, i.e., guided images and denoised images (see more details in Sec. 2). Very recently, this idea was further incorporated into the sparse denoising model to enhance the relationships among sparse coefficients, achieving a remarkable result.20
However, there are still some problems in the EGL method. One of them is the selection of eigenvectors. To achieve a better denoising performance, only a part of the eigenvectors is adopted in the EGL, where the number of used eigenvectors fluctuates across images and across noise of various deviations. Due to the lack of a selection strategy on eigenvectors for the guided and denoised images, the appropriate eigenvector number must be tested by trial and error for each image and each noise deviation. In practice, this is hardly effective, since no clean images are given as a reference for the quality assessment of denoised images. In other words, the selection of eigenvectors becomes a bottleneck for the development of EGL-based methods.
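As an illustration of the EGL idea sketched above, the following Python snippet is an illustrative sketch only, with an assumed Gaussian patch-affinity kernel and hypothetical parameters sigma and k (the construction in Ref. 19 may differ); it builds the graph Laplacian of a set of vectorized patches and extracts its leading eigenvectors for use as a reconstruction basis.

```python
import numpy as np

def laplacian_eigenbasis(patches, sigma=1.0, k=20):
    """Illustrative sketch: build a patch-similarity graph, form its
    Laplacian, and return the k eigenvectors with smallest eigenvalues.
    'patches' is an (N, d) array of vectorized image patches."""
    # Pairwise squared distances between patches
    d2 = np.sum((patches[:, None, :] - patches[None, :, :]) ** 2, axis=-1)
    # Gaussian affinity (edge weights of the graph); assumed kernel choice
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W
    # Eigenvectors with the smallest eigenvalues capture the smooth
    # (global) structure of the patch set
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :k]

# A guided or denoised image could then be formed by projecting the noisy
# patch matrix onto the span of the selected eigenvectors, e.g.:
# U = laplacian_eigenbasis(noisy_patches); recon = U @ (U.T @ noisy_patches)
```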
Motivated by this recent progress, we propose a denoising method, called adaptive EGL (A-EGL), to solve the eigenvector selection problem and thereby achieve better guided and denoised images. The major contribution of our work is threefold. (1) In the eigenvector selection for guided images, a simple strategy is adopted that uses the deviation estimation of clean images, so that the major structures are well extracted from noisy images and then preserved in the guided images. (2) To obtain denoised images, a group sparse model is introduced into the EGL method. Denoised images are effectively restored with the grouped sparse coefficients in the subspace spanned by the selected eigenvectors. Moreover, an error control strategy is employed to keep the denoised images within an acceptable scale relative to the noisy ones. Unlike the heuristic setting of the eigenvector number in the traditional EGL, here the eigenvectors are adaptively selected from the proposed group sparse model with the error control. (3) A modified group orthogonal matching pursuit (GOMP) algorithm is then presented to efficiently solve the optimization problem in our group sparse model while taking into account the reliability of the eigenvectors. The experiments show that our A-EGL method not only improves the practicality of the EGL by relaxing the parameter dependence in the eigenvector selection procedure, but can also outperform some well-developed denoising methods, especially for noise of large deviations.
This paper is organized as follows. Several related works
are reviewed in Sec. 2. The A-EGL method, which uses different eigenvector selection strategies for guided and denoised images, is discussed in Sec. 3. Simulation results are
presented in Sec. 4. Relevant conclusions are finally given in
Sec. 5.
2 Related Work
2.1 Sparse Representation and Reconstruction
In general, sparse representation focuses on two tasks, i.e., dictionary learning and sparse approximation (a.k.a. sparse coding).21,22 The target of dictionary learning is to search for an optimal signal space to support the attribution of a sparse vector under a certain measure. As for sparse approximation, it is dedicated to finding a sparse solution for an underdetermined linear system. In this paper, we pay more attention to the sparse approximation problem. Given a measurement data matrix $Y = [y_1\; y_2\; \ldots\; y_N]$, the basic sparse representation problem can be described with the residual error constraint as
$$\tilde{x}_i = \arg\min_{x_i} \|x_i\|_0 \quad \text{s.t.} \quad \|y_i - D x_i\|_2^2 \le \varepsilon, \qquad i = 1,2,\ldots,N, \tag{1}$$
where $D = [d_1\; d_2\; \ldots\; d_K]$ is the dictionary with atoms $\{d_i\}_{i=1}^{K}$, $X = [x_1\; x_2\; \ldots\; x_N]$ is the coefficient matrix with the sparse coefficients $\{x_i\}_{i=1}^{N}$, and $\varepsilon$ is a threshold on the residual error. In the image denoising application, each $y_i$ represents a noisy image patch and $N$ is the total number of patches.
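As a side illustration, the measurement matrix $Y$ in the denoising setting is simply the collection of vectorized noisy patches; a minimal sketch of this construction (with an assumed patch size and stride, chosen here purely for illustration) is given below.

```python
import numpy as np

def image_to_patch_matrix(image, patch_size=8, stride=4):
    """Stack overlapping patches of a 2-D image as columns of Y."""
    H, W = image.shape
    cols = []
    for r in range(0, H - patch_size + 1, stride):
        for c in range(0, W - patch_size + 1, stride):
            patch = image[r:r + patch_size, c:c + patch_size]
            cols.append(patch.reshape(-1))  # vectorize the patch
    # Y has one column y_i per patch, as in Eq. (1)
    return np.stack(cols, axis=1)
```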
Note that problem (1) is formulated as an $\ell_0$-norm optimization problem, which is NP-hard. To deal with this problem, some greedy algorithms, e.g., matching pursuit23 and orthogonal matching pursuit,24 have been successfully developed to obtain an approximate solution. However, in practice, this $\ell_0$-norm problem can very often be transformed into an $\ell_1$-norm problem, provided the dictionary satisfies the corresponding mutual incoherence condition.25 Therefore, a number of effective algorithms, such as basis pursuit,26 iterative shrinkage,27 and split Bregman,28 have been proposed to tackle the $\ell_1$-norm-based sparse representation problem in different forms.
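For concreteness, a bare-bones version of orthogonal matching pursuit for a single column $y_i$ of Eq. (1) is sketched below; this is a textbook variant that assumes approximately unit-norm dictionary columns, not the exact implementations of Refs. 23 and 24.

```python
import numpy as np

def omp(D, y, eps, max_atoms=None):
    """Greedy OMP: pick the atom most correlated with the residual,
    re-fit by least squares, stop when ||y - D x||_2^2 <= eps.
    Assumes the columns of D are (approximately) unit-norm."""
    K = D.shape[1]
    max_atoms = max_atoms or K
    support, residual = [], y.copy()
    x = np.zeros(K)
    while residual @ residual > eps and len(support) < max_atoms:
        # Atom with the largest absolute correlation to the residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        # Least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(K)
        x[support] = coef
        residual = y - D @ x
    return x
```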
As for group sparse representation, it approximates the signals by a union of a few subdictionaries.29 Here, we just introduce the group sparse model in matrix form30 as

$$\tilde{X} = \arg\min_{X} \|X\|_{2,0} \quad \text{s.t.} \quad \|Y - DX\|_F^2 \le \varepsilon, \tag{2}$$
where $\|\cdot\|_F$ is the Frobenius norm, $\|\cdot\|_{2,0}$ is the $\ell_{2,0}$-norm with $\|X\|_{2,0} = \sum_{i=1}^{K} I(\|l_i\|_2)$, $l_i^{T}$ is the $i$-th row vector of $X$, and $I(\cdot)$ is an indicator function defined as

$$I(\|l_i\|_2) = \begin{cases} 1, & \text{if } \|l_i\|_2 > 0, \\ 0, & \text{otherwise}. \end{cases} \tag{3}$$
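For clarity, the $\ell_{2,0}$ penalty in Eqs. (2) and (3) simply counts the nonzero rows of $X$; the short illustrative check below makes this explicit.

```python
import numpy as np

def l20_norm(X, tol=0.0):
    """Count rows of X whose l2 norm exceeds tol, i.e. ||X||_{2,0}
    with the indicator function of Eq. (3)."""
    row_norms = np.linalg.norm(X, axis=1)   # ||l_i||_2 for each row
    return int(np.sum(row_norms > tol))

# Example: a matrix with two nonzero rows has ||X||_{2,0} = 2
X = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.2, 0.0, 0.3]])
assert l20_norm(X) == 2
```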
Unfortunately, this group sparse model still belongs to the class of NP-hard problems, since the $\ell_0$-norm is inherently used in the indicator function $I(\cdot)$. Therefore, a number of effective algorithms have been developed to deal with Eq. (2). However, most of them can be viewed as variants of the aforementioned