Effective use of color information for large scale face verification
Chengjun Liu
Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, United States
article info
Article history:
Received 1 November 2011
Received in revised form
23 March 2012
Accepted 27 May 2012
Communicated by Xiaoqin Zhang
Available online 4 September 2012
Keywords:
New color model
New similarity measure
Compact color image representation
Effective color feature extraction
Discriminant analysis
Pattern recognition
Face Recognition Grand Challenge (FRGC)
abstract
This paper presents a method for effective use of color information and a new similarity measure with
application to large scale face verification. Specifically, three effective color component images are first
obtained from a new color model that takes advantage of the subtraction of the primary colors. A compact
color image representation is then derived through discrete cosine transform and feature selection for
redundancy reduction and computational efficiency. The effective color features in terms of class separability
are further extracted by means of discriminant analysis. A new similarity measure is finally presented for
improving pattern recognition performance. The effectiveness of the proposed method is evaluated using a
large scale, grand challenge pattern recognition problem, namely, the Face Recognition Grand Challenge
(FRGC) problem. Specifically, the experiments using 36,818 FRGC color images show that the new color
model improves upon other image modalities, such as the RGB color image and the grayscale image; and the
new similarity measure consistently performs better than other popular similarity measures, such as the
Euclidean distance measure, the cosine similarity measure, and the normalized correlation.
© 2012 Elsevier B.V. All rights reserved.
1. Introduction
For historical reasons, pattern recognition has dealt predominantly with
grayscale images. Nowadays, however, color images are pervasive, coming
from digital cameras, cellphones, the Internet, and image/video databases.
Because color provides additional discriminative information, it should be
further investigated for improving pattern recognition performance
[21,7,3,5,15,16]. Different color
models usually display different discriminatory power, as shown in
the comparative assessment of 12 commonly used color spaces in
face recognition [20]. These findings motivate us to explore novel
color spaces with enhanced discriminatory power. Color image
classification, however, typically relies on color models different from those
used for color image representation. The popular color models for color
image representation are usually formed by the primary colors, such
as the RGB color space, or by the addition of the primary colors (e.g.,
the secondary colors), such as the CMY color space [6]. The effective
color models for color image classification, on the other hand, often
involve color component images that are the weighted subtraction of
the primary colors, such as the Uncorrelated Color Space (UCS), the
Independent Color Space (ICS), and the Discriminating Color Space
(DCS) [13]. The UCS extracts three new statistically uncorrelated color
component images by decorrelating the red, green, and blue compo-
nent images of the RGB color space using the Principal Component
Analysis (PCA) [4]. The ICS defines three new statistically indepen-
dent color component images by means of a blind source separation
procedure, such as Independent Component Analysis (ICA)
[2,8,10]. The DCS derives three new discriminatory color component
images through discriminant analysis that optimizes a class separ-
ability criterion [4]. Compared to the RGB color image representation
model, the three new color models exploit the weighted subtraction
of the primary colors for improving pattern recognition performance.
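To illustrate how such color components arise as weighted subtractions of the primary colors, the following minimal Python sketch derives three statistically uncorrelated component images in the spirit of the UCS by applying PCA to the R, G, and B channels of a color image. The function and variable names are our own illustrative assumptions, not the exact formulation of [13].

```python
# Minimal sketch (illustrative, not the paper's exact formulation):
# derive three statistically uncorrelated color component images from
# the R, G, B channels via PCA, as in a UCS-style transform.
import numpy as np

def uncorrelated_color_components(image):
    """image: H x W x 3 float array with R, G, B channels."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)              # each row is one (R, G, B) pixel
    pixels = pixels - pixels.mean(axis=0)      # center the color channels
    cov = np.cov(pixels, rowvar=False)         # 3 x 3 covariance of R, G, B
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvectors define the new color axes
    order = np.argsort(eigvals)[::-1]          # sort axes by decreasing variance
    basis = eigvecs[:, order]                  # each column is a weighted combination
                                               # (subtraction) of the primary colors
    components = pixels @ basis                # project pixels onto the new axes
    return components.reshape(h, w, 3)         # three decorrelated component images

# Example usage on a random "color image"
if __name__ == "__main__":
    rgb = np.random.rand(128, 128, 3)
    c1c2c3 = uncorrelated_color_components(rgb)
    print(c1c2c3.shape)                        # (128, 128, 3)
```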
To further investigate novel color models and color feature
extraction and recognition approaches, we present in this paper a
new method for effective use of color information in pattern
recognition. The novelty of our new method comes from the
following aspects. First, we derive three effective color component
images from a new color model that takes advantage of the
subtraction of the primary colors. The idea of applying the subtrac-
tion of the primary colors is motivated by our recent research on
new color models, such as the UCS, the ICS, and the DCS, which
involve the weighted subtraction of the primary colors for improv-
ing pattern recognition performance [13]. Second, we derive the
compact color image representation through Discrete Cosine Trans-
form (DCT) and feature selection for redundancy reduction and
computational efficiency. Even though PCA is the optimal repre-
sentation method in terms of mean square error, it requires a time-
consuming eigenvalue decomposition to derive its data-dependent
basis vectors. The DCT, on the other hand, has fixed basis vectors
and broad applications in image compression, such as in the JPEG
and MPEG standards. Third, we extract the most effective color
features in terms of class separability by means of discriminant analysis.
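To make the representation step concrete, the sketch below applies a 2-D DCT to one color component image and retains a low-frequency block of coefficients as the compact feature vector. The block-selection strategy and the function name compact_dct_features are assumptions for illustration; the paper's actual feature selection may differ.

```python
# Minimal sketch (one reasonable realization, not the paper's exact
# procedure): 2-D DCT of a color component image, followed by selection
# of the low-frequency coefficients in the upper-left block.
import numpy as np
from scipy.fft import dctn

def compact_dct_features(component_image, block=16):
    """component_image: H x W array (one color component image);
    block: side length of the retained low-frequency square."""
    coeffs = dctn(component_image, norm='ortho')   # 2-D DCT with fixed basis vectors
    low_freq = coeffs[:block, :block]              # keep the upper-left (low-frequency) block
    return low_freq.ravel()                        # compact feature vector (block**2 values)

# Example: a 64 x 64 component image reduced to 256 DCT features
features = compact_dct_features(np.random.rand(64, 64), block=16)
print(features.shape)                              # (256,)
```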
Tel.: +1 973 596 5280; fax: +1 973 596 5777.
E-mail address: chengjun.liu@njit.edu