Efficient Low-rank Supported Extreme Learning
Machine for Robust Face Recognition
Yingjie Guan¹, Tao Lu¹,², Yanduo Zhang¹, Bo Wang², Xiaolin Li¹, Zixiang Xiong²
1. School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan, China, 430073
2. Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843
lutxyl@gmial.com, zhangyanduo@hotmail.com, william.bowang@gmail.com, zx@ece.tamu.edu
Abstract—Recently, deep learning based face recognition algorithms have achieved great success in recognition performance. However, designing and training such complex learning models is time- and labor-intensive. In this paper, we propose a novel three-layer low-rank supported extreme learning machine (LSELM) algorithm that combines robust feature representation with fast classification for efficient recognition. Each probe sample is first clustered into a sub-class spanned by a linear representation. Within this sub-class, low-rank, robust features that are insensitive to disguise, noise, and variations in expression or illumination are recovered. These discriminative features are then coded to support a feed-forward neural network for efficient prediction. Experimental results show that LSELM is on par with deep learning based face recognition algorithms in recognition performance but has lower time complexity on both the AR and Extended Yale-B datasets.
Index Terms—Face Recognition, Robust Feature, Low-rank Matrix Recovery, Extreme Learning Machine, Time Complexity
I. INTRODUCTION
Face recognition (FR) is a long-standing topic with a variety of real-world applications ranging from surveillance to information security. Many FR-based applications on mobile devices and even robots are becoming increasingly popular. Because mobile devices are limited in computational capability and battery power, numerous works have focused on designing FR algorithms with both high accuracy and high efficiency so that they are suitable for real-world applications [1], [2]. FR mainly involves extracting robust features and making decisions with a high-performance classifier.
First, extracting robust features is essential to FR. Classical FR algorithms such as Eigenface, Fisherface, and Laplacianface [2], which use subspace-learning methods to represent the intrinsic characteristics of faces, achieve satisfactory results. They are efficient but suffer from degraded recognition performance when face images contain complex and large intra-personal variations. Later on, new image features such as local binary patterns (LBP) [3] and local ternary patterns (LTP) [4] were introduced into FR algorithms to achieve respectable recognition performance. Sun et al. [5] claim that the boosted recognition performance of deeply learned face representations stems from their sparsity, selectivity, and robustness. However, these robust features are either time-consuming to extract or require the support of expensive and complex network structures.
The second crucial factor in FR is high-performance classifier design. Traditional classification algorithms such as K-nearest neighbors (KNN), support vector machines (SVM), random forests (RF), and their variants are typically cascaded with different feature extraction algorithms. In the past few years, sparse representation based classification (SRC) [6] has been proposed as a new classification approach that achieves good performance. SRC is robust to noise and occlusions but carries a heavy computational burden. Collaborative representation based classification (CRC) [7] utilizes the local manifold structure to constrain the coding process, resulting in improved performance. Du et al. [8] proposed a low-rank and sparse representation based method (LSRC) for classification; again, LSRC incurs a heavy computational burden. Recently emerging deep-learning based FR algorithms such as PCANet proposed by Chan et al. [9] and DeepID [5] achieve promising performance. However, designing and training these complex networks is labor-intensive and time-consuming, which limits their large-scale real-world application.
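To make the representation-based classification idea concrete, the following is a minimal sketch of CRC-style classification: the probe is coded over all training samples with a ridge (l2) penalty, then assigned to the class with the smallest reconstruction residual. The variable names and the regularization value are illustrative assumptions, not taken from [7].

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """Collaborative-representation-style classification (sketch).

    X      : d x n matrix of training samples (columns are face vectors).
    labels : length-n array of class labels for the columns of X.
    y      : d-dimensional probe sample.
    lam    : ridge regularization weight (illustrative value).
    """
    # Code the probe over ALL training samples with an l2 penalty;
    # the closed-form ridge solution is what makes CRC fast.
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    # Assign the class whose coefficients reconstruct the probe best.
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        residual = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        residual /= (np.linalg.norm(alpha[idx]) + 1e-12)  # CRC-RLS style normalization
        if residual < best_res:
            best_class, best_res = c, residual
    return best_class
```

The closed-form coding step is what makes CRC far cheaper than the l1-regularized coding used by SRC, at the cost of weaker robustness to gross corruption.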
From the above, we see that, on the one hand, FR with complex models (such as PCANet and DeepID) is time- and labor-intensive to train; on the other hand, FR with simple models (such as Eigenface and SRC) yields low recognition performance in complex scenarios. Recently, Candès et al. [10] theoretically proved that, under certain conditions, observed images can be decomposed into a low-rank subspace and a noise subspace. This low-rank recovery (LR) feature representation scheme has been widely used in robust estimation. Inspired by the highly efficient classification algorithm known as the extreme learning machine (ELM) [11], we propose a three-layer low-rank supported ELM (LSELM) based method that takes full advantage of both low-rank, robust feature extraction and fast classification, while balancing recognition performance and computational efficiency. First, for each given probe image, using the known label information of the training gallery dataset, we pre-cluster it into a certain sub-class; this forms the first layer. The rationale is that a testing image should lie in the sub-space spanned by a linear representation of related training samples. Then, each sub-class, together with the testing sample, is decomposed into a clean low-rank subspace with the noise separated out. This low-rank recovery step can be considered the second layer of our proposed approach.
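As a rough illustration of this second layer, the sketch below recovers a low-rank component from a sub-class data matrix (probe included) using a standard robust-PCA-style iteration with singular value thresholding and soft thresholding. The solver, step sizes, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(M, tau):
    """Entrywise soft-thresholding (shrinkage) operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_recovery(D, n_iter=200, tol=1e-6):
    """Decompose D into low-rank L plus sparse error S (robust PCA sketch).

    D : d x m matrix whose columns are the sub-class samples plus the probe.
    Solves min ||L||_* + lam*||S||_1  s.t.  D = L + S  with an inexact
    ALM-style iteration; the constants below are common illustrative defaults.
    """
    d, m = D.shape
    lam = 1.0 / np.sqrt(max(d, m))
    mu = 1.25 / (np.linalg.norm(D, 2) + 1e-12)   # initial penalty weight
    Y = np.zeros_like(D)                          # Lagrange multipliers
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Singular value thresholding yields the low-rank estimate.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Shrinkage isolates sparse corruption (occlusion, noise).
        S = soft_threshold(D - L + Y / mu, lam / mu)
        # Dual update and convergence check.
        R = D - L - S
        Y = Y + mu * R
        mu *= 1.5
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S
```

In our setting, the recovered low-rank component plays the role of the clean sub-class data that the subsequent classifier layer operates on.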
Finally, as the last classifier layer, with the support of the clean low-rank subspace, eigenface is used to extract features, which are then fed to an ELM network for efficient prediction.
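For completeness, the classifier layer can be understood through the basic ELM recipe [11]: the hidden layer is random and fixed, and only the output weights are solved in closed form. The sketch below is a generic single-hidden-layer ELM, not the authors' exact configuration; the hidden size and regularization value are illustrative assumptions.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine sketch: random hidden layer plus
    closed-form, ridge-regularized output weights."""

    def __init__(self, n_hidden=1000, reg=1e-3, seed=0):
        self.n_hidden = n_hidden
        self.reg = reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random projection followed by a sigmoid nonlinearity.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # X: n x d feature matrix (e.g., eigenface coefficients); y: labels.
        n, d = X.shape
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)  # one-hot targets
        self.W = self.rng.standard_normal((d, self.n_hidden))    # fixed random weights
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Only the output weights are learned, via regularized least squares.
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        scores = self._hidden(X) @ self.beta
        return self.classes[np.argmax(scores, axis=1)]
```

Because the hidden weights are never trained, the only training cost is a single matrix solve, which is the source of the fast classification that LSELM builds on.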