Signal Processing: Image Communication 81 (2020) 115684
Robust L1-norm two-dimensional collaborative representation-based projection for dimensionality reduction ✩

Lulu He, Jimin Ye ∗, Jianwei E

School of Mathematics and Statistics, Xidian University, Xi’an 710071, China
ARTICLE INFO

Keywords:
Collaborative representation-based projection (CRP)
L1-2DCRP
L1-norm
Face recognition
Dimensionality reduction

ABSTRACT
Collaborative representation-based projection (CRP) is a well-known dimensionality reduction technique that has been shown to outperform sparse representation-based projection (SRP) in recognition and computer vision tasks. However, classical CRP is sensitive to noise and outliers because its objective function is based on the L2-norm, and it suffers from the curse of dimensionality when applied to image processing. In this paper, a novel CRP model, named L1-norm two-dimensional collaborative representation-based projection (L1-2DCRP), and an efficient iterative algorithm to solve it are proposed. Unlike conventional CRP, the optimization problem in the proposed model is an L1-norm-based maximization, and the vector data are extended to matrix data. The proposed algorithm is theoretically proved to be monotonically convergent and, owing to the use of the L1-norm, is more robust to noise and outliers. Experimental results on the CMU Multi-PIE, COIL20, FERET and ORL face databases validate the effectiveness of L1-2DCRP compared with several state-of-the-art approaches.
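As background for the coding step that CRP builds on, the following is a minimal NumPy sketch of L2-regularized collaborative coding (the standard ridge-regression form; this is an illustrative assumption, not the paper's L1-2DCRP model, which replaces the L2-norm objective with an L1-norm maximization over matrix data):

```python
import numpy as np

def collaborative_code(X, y, lam=1e-2):
    """Code a query y over the training dictionary X (columns = samples)
    with L2-regularized least squares:
        min_a ||y - X a||_2^2 + lam * ||a||_2^2,
    whose closed-form solution is a = (X^T X + lam I)^{-1} X^T y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# toy example: 5-dimensional samples, 3 training columns
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
y = rng.standard_normal(5)
a = collaborative_code(X, y)
```

The closed form exists precisely because the penalty is the L2-norm; an L1-based objective, as in the proposed model, requires an iterative algorithm instead.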
1. Introduction
Dimensionality is a persistent problem in data analysis and pattern recognition because real-world data are typically high-dimensional. In data analysis, high-dimensional data require more training time and storage space, and it is usually impractical to perform analysis on them directly. Dimensionality reduction is the key step for reducing the dimension of the input space and thereby simplifying the analysis problem. In the last few decades, various dimensionality reduction techniques have been developed [1]. Among these methods, principal component analysis (PCA) [2] and linear discriminant analysis (LDA) [3] are two of the most popular. PCA is the most widely used linear dimensionality reduction algorithm; its core idea is to project data along the directions of maximum sample variance in the feature space, which minimizes the reconstruction error. However, PCA is an unsupervised method. LDA makes full use of label information, seeking optimal projection vectors by maximizing the Fisher discriminant criterion. LDA is generally superior to PCA since it has more discriminative ability.
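The maximum-variance idea behind PCA described above can be sketched in a few lines of NumPy (a minimal illustration of the standard eigendecomposition route, not the authors' implementation):

```python
import numpy as np

def pca_project(X, k):
    """PCA: project the rows of X onto the k directions of maximum
    sample variance, i.e. the top-k eigenvectors of the covariance."""
    Xc = X - X.mean(axis=0)              # center the data
    cov = Xc.T @ Xc / (len(X) - 1)       # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]             # top-k principal directions
    return Xc @ W

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
Z = pca_project(X, 2)
```

Since the projected variance along each eigenvector equals its eigenvalue, the first component of `Z` carries at least as much variance as the second.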
Recently, sparse representation (SR) has gained wide attention and has been successfully applied to many practical problems [4–6]. SR-based methods construct a sparse L1-graph, which is robust to outliers, and can be divided into unsupervised and supervised categories.

✩ No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have an impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.image.2019.115684.
∗ Corresponding author. E-mail address: jmye@mail.xidian.edu.cn (J. Ye).

Qiao
et al. [7] proposed sparsity preserving projections (SPP), in which the sparse reconstructive relationship of the data is preserved by minimizing an L1-regularization-related objective function. Chen et al. [8] presented sparse neighborhood preserving embedding (SNPE), which is to some extent identical to SPP. While both achieve good performance in data classification, they are unsupervised. To further improve the performance of SPP and SNPE, Gui et al. [9] and Zhang et al. [10] embedded discriminant information into these two methods, yielding discriminant sparse neighborhood preserving embedding (DSNPE) and sparse locality preserving discriminative projections (SLPDP), respectively.
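The L1-regularized reconstruction at the heart of SPP's sparse coding can be solved with any lasso solver; below is a minimal sketch using plain ISTA (proximal gradient descent) — a generic solver chosen here for self-containedness, not the one used in the cited works:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1-norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, alpha=0.1, n_iter=2000):
    """Solve min_w 0.5*||b - A w||^2 + alpha*||w||_1 by ISTA,
    the kind of L1-regularized coding problem used in SPP."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - step * A.T @ (A @ w - b), step * alpha)
    return w

# toy problem with a sparse ground truth
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))
w_true = np.zeros(10)
w_true[[1, 4]] = [2.0, -1.5]
b = A @ w_true
w = lasso_ista(A, b, alpha=0.01)
```

The L1 penalty drives most coefficients to exactly zero, which is what makes the resulting graph sparse and robust to outliers.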
The aforementioned approaches are all 1D vector-based algorithms, in which 2D matrix data must first be converted into 1D vector form. As is well known, real-world data are typically high-dimensional, and this transformation causes the so-called ‘‘curse of dimensionality’’. In addition, the transformation destroys the inner local characteristics hidden in the data. To address these shortcomings, several 2D matrix-based methods have been developed. Yong et al. [11] incorporated 2D matrices into PCA and proposed 2DPCA, in which the covariance matrix is constructed directly from the 2D matrix data. Ye et al. [12] proposed 2DLDA by maximizing the ratio of the 2D between-class scatter matrix to the 2D within-class scatter matrix. Compared with PCA and LDA, 2DPCA and 2DLDA can achieve better performance
https://doi.org/10.1016/j.image.2019.115684
Received 10 June 2019; Received in revised form 29 October 2019; Accepted 1 November 2019
Available online 8 November 2019
0923-5965/© 2019 Elsevier B.V. All rights reserved.
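The 2DPCA construction mentioned in the introduction — building the covariance matrix directly from 2D image matrices rather than flattened vectors — can be sketched as follows (a minimal NumPy illustration of the standard 2DPCA formulation, assuming right-side projection; not the authors' implementation):

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: form the image covariance matrix directly from 2D samples,
        G = (1/n) * sum_i (A_i - mean)^T (A_i - mean),
    then project each image as Y_i = A_i @ W, with W holding the
    top-k eigenvectors of G."""
    mean = images.mean(axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    vals, vecs = np.linalg.eigh(G)       # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]             # top-k projection directions
    return np.stack([A @ W for A in images]), W

rng = np.random.default_rng(2)
imgs = rng.standard_normal((20, 8, 6))   # 20 images of size 8x6
Y, W = two_d_pca(imgs, 2)
```

Because `G` has the size of the image width rather than the flattened dimension, the eigendecomposition stays small, which is why 2D methods sidestep the curse of dimensionality that vectorization incurs.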