Structural Atomic Representation for Classification
Yuan Yan Tang, Fellow, IEEE, Yulong Wang, Student Member, IEEE, Luoqing Li,
and C. L. Philip Chen, Fellow, IEEE
Abstract—Recently, a large family of representation-based classification methods has been proposed and has attracted great interest in pattern recognition and computer vision. This paper presents a general framework, termed the atomic representation-based classifier (ARC), to systematically unify many of them. By defining different atomic sets, most popular representation-based classifiers (RCs) arise as special cases of ARC. Despite their good performance, most RCs treat test samples separately and fail to consider the correlation among the test samples. In this paper, we develop a structural ARC (SARC) based on Bayesian analysis that generalizes a Markov random field-based multilevel logistic prior. The proposed SARC can exploit the structural information among the test data to further improve the performance of every RC belonging to the ARC framework. Experimental results on both synthetic and real databases demonstrate the effectiveness of the proposed framework.
Index Terms—Atomic representation (AR), Bayesian analysis, greedy coordinate descent, Markov random field (MRF), subspace.
I. INTRODUCTION
Over the past years, representation-based classifiers (RCs) have shown significant potential in pattern recognition and computer vision [1]–[3]. They have also been widely used in various classification tasks, such as handwritten digit recognition [4] and face recognition [5]–[7].
Most RCs consist of two main steps. The first step is to learn the representation matrix of the test data samples. Merely minimizing the reconstruction error, however, makes this an underdetermined, ill-posed inverse problem. The atomic norm [8]–[12], as a general formulation, regularizes the problem by encouraging solutions represented by a few atoms from some elementary atomic set. As shown in the experiments in this paper, the atomic norm can greatly enhance the discriminative property of the representation matrix of the test
data. This gives rise to the success of the atomic representation-based classifier (ARC) in classification [1], [13], [14].
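For concreteness, the atomic norm used throughout this literature is the gauge of the convex hull of an atomic set $\mathcal{A}$; the following is the standard definition (cf. [8]), with notation of our own choosing:

$$\|z\|_{\mathcal{A}} = \inf\{t > 0 : z \in t \cdot \operatorname{conv}(\mathcal{A})\}.$$

For instance, taking $\mathcal{A}$ to be the set of signed canonical basis vectors recovers the $\ell_1$ norm, while taking it to be the set of unit-norm rank-one matrices recovers the nuclear norm; different atomic sets thus instantiate different RCs.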
After solving the representation matrix, the second step is to assign each test sample to the class that minimizes the class-dependent residuals [1]. Thus, test samples are treated separately and the structural information among them is ignored. In recent years, the structural information among data has been successfully exploited in many classification methods [15], [16]. To alleviate this drawback, in this paper we propose a structural ARC (SARC) by generalizing a Markov random field (MRF)-based multilevel logistic (MLL) prior [17]. As shown in our analysis, SARC can encode any structure and encourage data samples with the same structure to have the same label. In this way, each RC in the ARC framework can be greatly improved.
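For orientation, the classical MLL prior that SARC generalizes is a Potts-type MRF prior over the label vector $y$; in its standard form (our rendering of the classical model [17], not of the paper's generalized version) it reads

$$p(y) \propto \exp\Big(\beta \sum_{(i,j) \in \mathcal{E}} \delta(y_i = y_j)\Big), \qquad \beta > 0$$

where $\mathcal{E}$ collects pairs of neighboring samples and $\delta(\cdot)$ is the indicator function. A larger $\beta$ more strongly encourages neighboring samples to share a label, which is precisely the label-smoothness effect SARC exploits.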
A. Related Work
Existing RCs can be roughly divided into three main
categories: 1) sparse representation-based classifier (SRC);
2) low-rank representation-based classifier (LRRC); and
3) collaborative representation-based classifier (CRC).
1) SRC: Table I gives some notations used throughout this paper. Given the training and test data matrices $A$ and $X$, the goal of classification is to estimate the true label vector $y$ of all test samples. For ease of presentation, we first define the truncated operator as follows.
Definition 1: For any vector $z \in \mathbb{R}^m$ and any index set $\mathcal{J} \subseteq \{1, 2, \ldots, m\}$, the truncated operator $T_{\mathcal{J}}$ is defined as

$$T_{\mathcal{J}} : \mathbb{R}^m \to \mathbb{R}^{\#\mathcal{J}}, \qquad z \mapsto T_{\mathcal{J}}(z)$$

where $\#\mathcal{J}$ denotes the number of elements in $\mathcal{J}$, and $T_{\mathcal{J}}(z)$ consists of the entries of $z$, in order, whose indexes are in $\mathcal{J}$.
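As a minimal NumPy illustration of Definition 1 (the function name is ours; NumPy indexes from 0, so the 1-based indexes in $\mathcal{J}$ are shifted accordingly):

    import numpy as np

    def truncate(z, J):
        # Truncated operator T_J: keep the entries of z, in order,
        # whose 1-based indexes lie in the index set J.
        return z[[j - 1 for j in sorted(J)]]

    z = np.array([5.0, -2.0, 7.0, 0.5])
    print(truncate(z, {1, 3}))  # [5. 7.], a vector in R^{#J} with #J = 2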
SRC [1] aims at exploiting the discriminative property of sparse representation to perform classification. It seeks the sparse coefficient vector for each test sample $x_i$ as follows:

$$\min_{z_i \in \mathbb{R}^m} \|z_i\|_1 \quad \text{s.t.} \quad x_i = A z_i, \qquad i = 1, 2, \ldots, n. \tag{1}$$
For noisy data, SRC solves the following modified stable $\ell_1$-minimization problem:

$$\min_{z_i \in \mathbb{R}^m} \|z_i\|_1 \quad \text{s.t.} \quad \|x_i - A z_i\|_2 \le \epsilon, \qquad i = 1, 2, \ldots, n \tag{2}$$
where $\epsilon$ denotes the error tolerance parameter. Then, $x_i$ is assigned to the class minimizing the residuals $r_k(x_i) = \|x_i - A_k T_k(z_i)\|_2$, $k \in \mathcal{K}$, where $T_k$ truncates $z_i$ to the indexes of the class-$k$ training samples.
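The following sketch puts the two SRC steps together for a single test sample (illustrative code of our own, not the authors' implementation; it assumes cvxpy is available to solve problem (2), a training matrix A whose columns are training samples, and a per-column label array labels):

    import cvxpy as cp
    import numpy as np

    def src_classify(A, labels, x, eps=0.1):
        # Step 1: solve the stable l1-minimization problem (2) for x.
        z = cp.Variable(A.shape[1])
        cp.Problem(cp.Minimize(cp.norm1(z)),
                   [cp.norm(x - A @ z, 2) <= eps]).solve()
        z_hat = z.value
        # Step 2: assign x to the class with the smallest residual
        # r_k(x) = ||x - A_k T_k(z_hat)||_2 over all classes k.
        best_class, best_res = None, np.inf
        for k in np.unique(labels):
            mask = (labels == k)
            res = np.linalg.norm(x - A[:, mask] @ z_hat[mask])
            if res < best_res:
                best_class, best_res = k, res
        return best_class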
Starting with the pioneering work