Editor's Choice Article
Robust regional bounding spherical descriptor for 3D face recognition and emotion analysis
Yue Ming
School of Electronic Engineering, Beijing Key Laboratory of Work Safety Intelligent Monitoring, Beijing University of Posts and Telecommunications, Beijing 100876, PR China
Article info
Article history:
Received 23 August 2013
Received in revised form 29 April 2014
Accepted 25 December 2014
Available online 7 January 2015
Keywords:
3D face recognition
Emotion analysis
Regional bounding spherical descriptor
Regional and global regression
Kullback–Leibler divergence (KLD)
Abstract

3D face recognition and emotion analysis play important roles in many fields of communication and edutainment. Designing an effective facial descriptor, with high discriminating capability for face recognition and high descriptiveness for facial emotion analysis, is a challenging issue: in practical applications, descriptiveness and discrimination are independent and often contradictory requirements. 3D facial data provide a promising way to balance these two aspects. In this paper, a robust regional bounding spherical descriptor (RBSR) is proposed to facilitate 3D face recognition and emotion analysis. In our framework, we first segment a group of regions on each 3D facial point cloud using the shape index and spherical bands on the human face. The corresponding facial areas are then projected onto regional bounding spheres to obtain our regional descriptor. Finally, a regional and global regression mapping (RGRM) technique is applied to the weighted regional descriptor to boost classification accuracy. The three largest available databases, FRGC v2, CASIA and BU-3DFE, are used for performance comparison, and the experimental results show consistently better performance for 3D face recognition and emotion analysis.
© 2015 Elsevier B.V. All rights reserved.
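The shape index used for region segmentation is not defined in this excerpt. As an illustrative sketch only, one common convention (mapping the principal curvatures k1 ≥ k2 of a surface point to a value in [−1, 1], with +1 for a convex dome and −1 for a concave cup) could be implemented as follows, assuming per-point principal curvatures have already been estimated from the facial mesh; the paper's exact convention may differ:

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index SI = (2/pi) * arctan((k1 + k2) / (k1 - k2)), k1 >= k2.

    Maps principal curvatures to [-1, 1]: +1 for a convex dome,
    0 for a saddle, -1 for a concave cup.
    """
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    # Enforce the k1 >= k2 ordering so the sign convention holds.
    hi, lo = np.maximum(k1, k2), np.minimum(k1, k2)
    # arctan2 handles umbilic points (k1 == k2), where the plain
    # quotient would divide by zero: arctan2(+, 0) = pi/2, so a
    # spherical dome maps to exactly +1.
    return (2.0 / np.pi) * np.arctan2(hi + lo, hi - lo)
```

Per-point shape index values can then be thresholded into curvature classes (cup, rut, saddle, ridge, dome) to segment candidate facial regions.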
1. Introduction
Face recognition and emotion analysis are two important branches of biometric systems, with applications in remote communication, medical rescue, intelligent monitoring and so on. A large number of demands for face recognition and expression control have emerged with the rapid development of 3D movies and entertainment [1–3]. Increasingly, practical requirements are no longer satisfied by facial recognition or emotion analysis alone. The emerging industry needs an effective facial representation that not only achieves high-quality discriminative power over large numbers of individuals but also provides a good expression description for control.
Our previous bounding sphere descriptor demonstrated superior performance in 3D face recognition, especially where there were large pose variations. For practical applications, the proposed feature descriptor should be able to simultaneously perform face recognition and expression analysis. Expression variations imply variations in specific facial regions. Therefore, in this paper, we extend the global bounding sphere descriptor to regional bounding sphere descriptors. Regional weighted selection is used to form a general feature descriptor, which can be used for face recognition and expression analysis simultaneously. The novel low-dimensional features of this approach facilitate both theoretical innovations and practical applications.
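The construction of a bounding sphere for a facial region is not reproduced in this excerpt. As a hedged sketch under the assumption that each segmented region is a 3D point cloud, an approximate minimal bounding sphere can be obtained with Ritter's classic two-pass algorithm (the paper's actual construction may differ):

```python
import numpy as np

def bounding_sphere(points):
    """Approximate minimal bounding sphere of a 3D point cloud
    via Ritter's two-pass algorithm. Returns (center, radius)."""
    pts = np.asarray(points, dtype=float)
    # Pass 1: from an arbitrary start point, find the farthest point a,
    # then the point b farthest from a; their midpoint seeds the sphere.
    p = pts[0]
    a = pts[np.argmax(np.linalg.norm(pts - p, axis=1))]
    b = pts[np.argmax(np.linalg.norm(pts - a, axis=1))]
    center = (a + b) / 2.0
    radius = np.linalg.norm(b - center)
    # Pass 2: grow the sphere minimally to enclose any remaining outliers.
    for q in pts:
        d = np.linalg.norm(q - center)
        if d > radius:
            radius = (radius + d) / 2.0
            center += (1.0 - radius / d) * (q - center)
    return center, radius
```

Projecting a region's points onto its bounding sphere then yields a pose-tolerant regional representation, since the sphere is determined by the region's geometry rather than its orientation.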
Broad investigations of 3D face processing have been carried out in the literature [4,5]. Although most of them treat 3D face recognition and 3D facial expression classification independently, the literature shows that facial regional descriptor analysis can provide better accuracy for both tasks simultaneously. For 3D face recognition, Faltemier et al. [6] divided a face into a group of regions, leading to better recognition performance. A region-based registration was employed to establish correspondence, and a 3D shape descriptor was used for statistical feature extraction [7]. Passalis et al. [8] used facial symmetry to overcome the challenges of large pose variations, and a wavelet-based biometric signature was used to evaluate real-world applications. Local shape difference boosting [9] selected optimal local features to assemble three collective strong classifiers and found the most discriminative feature for 3D face recognition. Marras et al. [10] introduced novel subspace-based methods for learning the azimuth angle of the subspace normal, which are well suited to all types of 3D facial data for recognition. However, note that these algorithms focus on 3D face recognition; they cannot perform facial expression recognition.
Other scholars are concerned with describing the different facial expressions, without discriminating between individuals. For example, the Facial Action Coding System (FACS) [11], a representation of human facial expressions, was introduced over 25 years ago. Prominent regions can describe the varying facial characteristics of different individuals [12]. However, practical application
Image and Vision Computing 35 (2015) 14–22
☆ Editor's Choice Articles are invited and handled by a select rotating 12-member Editorial Board committee. This paper has been recommended for acceptance by Dr. Stefanos Zafeiriou.
☆☆ The work presented in this paper was supported by the National Natural Science Foundation of China (Grant No. NSFC-61402046) and the President Funding of Beijing University of Posts and Telecommunications (Grant No. 2013XZ10).
E-mail address: myname35875235@126.com.
http://dx.doi.org/10.1016/j.imavis.2014.12.003
0262-8856/© 2015 Elsevier B.V. All rights reserved.