Constrained discriminant neighborhood embedding for high dimensional data feature extraction
Bo Li a,b,c,*, Lei Lei a,b, Xiao-Ping Zhang c,d

a School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, Hubei 430065, China
b Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, Hubei 430065, China
c Department of Electrical and Computer Engineering, Ryerson University, Toronto, Ontario, Canada M5B 2K3
d School of Electronics and Information Engineering, Tongji University, Shanghai 201804, China

* Corresponding author. E-mail address: liberol@126.com (B. Li).
Article info

Article history:
Received 30 June 2014
Received in revised form 3 October 2014
Accepted 1 January 2015
Available online 1 September 2015

Keywords:
Dimensionality reduction
Feature extraction
Local uncorrelation
Neighborhood embedding

Abstract
When handling pattern classification problems such as face recognition and handwritten digit identification, image data are usually represented as high-dimensional vectors, from which discriminant features are extracted using dimensionality reduction methods. In this paper, we present a supervised manifold learning based dimensionality reduction method named constrained discriminant neighborhood embedding (CDNE). In the proposed CDNE, on one hand, the class information of samples is taken into account to construct both an inter-class graph and an intra-class graph, where the neighborhood points of any point in the intra-class graph are selected from samples with the same label, while any point in the inter-class graph takes samples with different labels as its neighborhood points. On the other hand, a locally linear least squares reconstruction technique is introduced to model an objective function under a local uncorrelation constraint, which is used to explore a discriminant subspace. Compared with several related state-of-the-art dimensionality reduction methods, such as discriminant neighborhood embedding (DNE), supervised locality discriminant manifold learning (SLDML), discriminant sparse neighborhood preserving embedding (DSNPE), local graph embedding based on maximum margin criterion (LGE/MMC), uncorrelated discriminant locality preserving projection (UDLPP) and locally uncorrelated discriminant projection (LUDP), the proposed CDNE is validated to be efficient and feasible by experimental results on several benchmark face data sets, including CMU PIE, ORL and FERET.
© 2015 Elsevier B.V. All rights reserved.
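Although the paper's formal construction follows in later sections, the two class-aware graphs described in the abstract can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical illustration only; the function name class_aware_neighbors, the parameter k and the toy data are our own assumptions, not the authors' implementation.

import numpy as np

def class_aware_neighbors(X, y, k, same_class=True):
    # Hypothetical sketch of class-aware neighbor selection: for each
    # sample, pick its k nearest neighbors (Euclidean distance) either
    # among samples of the SAME class (intra-class graph) or among
    # samples of DIFFERENT classes (inter-class graph).
    n = X.shape[0]
    # pairwise squared Euclidean distances, shape (n, n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    neighbors = []
    for i in range(n):
        same = (y == y[i])
        mask = same if same_class else ~same
        mask[i] = False                      # never pick the point itself
        candidates = np.where(mask)[0]
        order = candidates[np.argsort(d2[i, candidates])]
        neighbors.append(order[:k])          # k nearest eligible points (fewer if scarce)
    return neighbors

# toy usage: two classes in 2-D
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
intra = class_aware_neighbors(X, y, k=1, same_class=True)   # same-label neighbors
inter = class_aware_neighbors(X, y, k=1, same_class=False)  # different-label neighbors

Under these toy assumptions, each point's intra-class neighbor is its same-label partner and its inter-class neighbor is the nearest point of the other class, which is exactly the pairing the two graphs in CDNE are built on.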
1. Introduction
Dimensionality reduction is indispensable in many applications, such as bioinformatics, pattern recognition, machine learning and data mining, because it provides a way to overcome the curse of dimensionality [1,2]. Moreover, dimensionality reduction also contributes to extracting discriminant features from the original data. When carrying out a classification task, objects are expected to be described in terms of a set of measurable features, and the selection and quality of the features representing different patterns have a considerable impact on the final performance. Dimensionality reduction or feature extraction is therefore employed to derive new features from the original data, so as to reduce the cost of feature measurement and improve classification accuracy with high efficiency. To date, many techniques involving linear and nonlinear transformations have been applied to all kinds of data classification [3–15,51].
In the last decade, manifold learning, a family of nonlinear methods, has attracted more and more attention; among these methods, locally linear embedding (LLE) and its supervised extensions have been widely used due to their high efficiency for dimensionality reduction [16,17]. In the original LLE, the local geometry of the manifold is explored using the k nearest neighbors (KNN) criterion, with neighbors determined by ascending Euclidean distances. In some cases, points with different labels may have shorter Euclidean distances than points with the same label and are therefore selected as neighbors, which degrades the performance of data classification, as the sketch below illustrates.
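To make the issue concrete, the following toy sketch (our own illustration; the 1-D data and the constant alpha are assumed, not from the paper) shows plain Euclidean KNN picking a wrong-class neighbor, and how enlarging inter-class distances by a constant, anticipating the supervised fix of de Ridder et al. [18] discussed next, corrects the selection.

import numpy as np

X = np.array([[0.0], [0.9], [2.0]])   # 1-D samples
y = np.array([0, 1, 0])               # the point nearest to X[0] has a different label

def knn(dist_row, k):
    # indices of the k smallest entries, skipping the query itself
    # (self-distance is 0, hence always first in the sort order)
    return np.argsort(dist_row)[1:k + 1]

d = np.abs(X - X.T)                   # pairwise Euclidean distances
print(knn(d[0], 1))                   # -> [1]: wrong-class neighbor chosen

alpha = 10.0                          # assumed constant added to inter-class distances
d_sup = d + alpha * (y[:, None] != y[None, :])
print(knn(d_sup[0], 1))               # -> [2]: same-class neighbor now chosen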
Under such circumstances, some researchers have reported attempts to address this problem: when constructing the KNN graph, class information is taken into account either to adjust the weights between KNN neighbors or to select the corresponding neighbors. Such work was first presented by de Ridder et al., where the Euclidean distances between points labeled with different classes are simply enlarged by adding a constant [18]. Instead of