Fast Principal Component Extraction
Using Givens Rotations
S. Bartelmaos and K. Abed-Meraim
Abstract—In this letter, we develop a new version of the orthogonal projection approximation and subspace tracking (OPAST) algorithm for the extraction and tracking of the principal eigenvectors of a positive Hermitian covariance matrix. The proposed algorithm, referred to as principal component OPAST (PC-OPAST), estimates the principal eigenvectors (not only a random basis of the principal subspace, as OPAST does) of the considered covariance matrix. It also guarantees the orthogonality of the weight matrix at each iteration and requires $O(np)$ flops per iteration, where $n$ is the size of the observation vector and $p$ is the number of eigenvectors to estimate. (The number of flops per iteration represents the total number of multiplication, division, and square root operations required to extract the desired eigenvectors at each iteration.)
The estimation accuracy and tracking properties of PC-OPAST
are illustrated through simulation results and compared with the
well-known singular value decomposition (SVD) method and other
recently proposed PCA algorithms.
Index Terms—Fast adaptive algorithms, Givens rotations, prin-
cipal component analysis (PCA).
I. INTRODUCTION
PRINCIPAL component analysis (PCA) and principal
subspace analysis (PSA), as feature extraction techniques,
have been widely used in many modern information processing
fields such as telecommunications, adaptive filtering, antenna
array processing, statistical parameter estimation, neural net-
works, etc. The main difference between the PSA (e.g., [1],
[2]) and the PCA is that the latter requires the estimation of
the eigenvectors of the covariance matrix of an input vector
sequence while the former (PSA) estimates only a basis of
the principal subspace. The conventional matrix algebraic ap-
proaches, which include singular value decomposition (SVD)
of the data matrix, are often unsuitable for large dimensional
input data as well as for the adaptive PCA due to their high
computational load. Hence, seeking fast, online algorithms that compute the PCA adaptively is an ongoing research topic in signal processing and communications. As shown in [6], PCA methods may be grouped into two categories: sequential and parallel. In the first group [4], [7], the implementation
is based on the deflation technique. The basic idea of the defla-
tion is the sequential estimation of the principal components.
First, the most dominant eigenvector is updated by applying a
subspace algorithm with $p = 1$. Then we remove the projection of the current data vector $\mathbf{x}(t)$ onto this eigenvector from $\mathbf{x}(t)$ itself. Thus, the second dominant eigenvector can be extracted in the same way, as it becomes the most dominant one in the updated data vector. This process is repeated $p$ times to extract the $p$ dominant eigenvectors. In the parallel version, the de-
sired principal components are extracted simultaneously. The
parallel version has been studied extensively [3], [5], [6], [8]
since it overcomes the drawbacks of the sequential version, which requires a large amount of memory to store the repeatedly used input samples and incurs a long processing delay because the desired components are extracted one after another.
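To make the deflation idea concrete, the following is a minimal batch sketch (our own illustration, not the adaptive algorithms of [4], [7]; the real-valued setting and all function and variable names are ours), combining power iterations with the projection-removal step described above:

```python
import numpy as np

def deflation_pca(X, p, n_iter=200):
    """Sequential (deflation) PCA sketch: extract the p dominant
    eigenvectors of the sample covariance of X, one at a time.
    X is an (n, T) real data matrix whose columns are observations."""
    n, T = X.shape
    W = np.zeros((n, p))
    Xd = X.copy()                      # deflated data
    for k in range(p):
        C = Xd @ Xd.T / T              # covariance of the deflated data
        w = np.random.randn(n)
        for _ in range(n_iter):        # power iteration converges to the
            w = C @ w                  # dominant eigenvector of C
            w /= np.linalg.norm(w)
        W[:, k] = w
        # deflation step: remove the projection onto w from every data
        # vector, so the next eigenvector becomes the most dominant one
        Xd = Xd - np.outer(w, w @ Xd)
    return W
```

Note how each pass revisits the whole data set; this is precisely the memory and processing-delay drawback that motivates the parallel algorithms.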
In this letter, we propose to modify the orthogonal projection approximation and subspace tracking (OPAST) algorithm [1] in order to perform the PCA while preserving the same linear complexity as OPAST. This modification is accomplished by means of Givens rotations, which allow us to transform the principal subspace matrix obtained by the OPAST algorithm into the desired principal component
matrix. We present some simulation results to assess the perfor-
mance of our algorithm and compare it with the well-known and
efficient SVD method and other recently proposed fast PCA al-
gorithms [6], [8].
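To illustrate how Givens rotations can turn a subspace basis into eigenvector estimates, consider the following sketch (a generic Jacobi-style diagonalization written by us for illustration, restricted to the real case; it is not the PC-OPAST update itself, which operates adaptively at linear cost). Rotating an orthonormal basis $\mathbf{W}$ so that $\mathbf{W}^T\mathbf{C}\mathbf{W}$ becomes diagonal makes the rotated columns estimates of the individual eigenvectors:

```python
import numpy as np

def rotate_basis(W, C, sweeps=3):
    """Sketch: rotate an orthonormal subspace basis W (n x p) with
    Givens rotations so that W^T C W becomes diagonal; the rotated
    columns then estimate the individual principal eigenvectors."""
    p = W.shape[1]
    S = W.T @ C @ W                    # p x p projected covariance
    for _ in range(sweeps):            # Jacobi-style sweeps
        for i in range(p - 1):
            for j in range(i + 1, p):
                if abs(S[i, j]) < 1e-12:
                    continue
                # rotation angle that annihilates the (i, j) entry of S
                theta = 0.5 * np.arctan2(2 * S[i, j], S[j, j] - S[i, i])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(p)          # plane (Givens) rotation
                G[i, i] = G[j, j] = c
                G[i, j], G[j, i] = s, -s
                S = G.T @ S @ G        # two-sided update keeps S symmetric
                W = W @ G              # apply the same rotation to the basis
    # columns of W estimate eigenvectors; diag(S) the eigenvalues
    # (possibly needing a final reordering by eigenvalue)
    return W, np.diag(S)
```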
II. PC-OPAST ALGORITHM
Let $\{\mathbf{x}(t)\}$ be a sequence of random vectors with covariance matrix $\mathbf{C} = E[\mathbf{x}(t)\mathbf{x}(t)^H]$. Consider the problem of extracting the $p$ principal eigenvectors of the covariance matrix. To solve this problem, several principal component algorithms have been proposed, such as the adaptive methods of [3]–[6], [8].
Let us start by presenting the OPAST algorithm for PSA. For that, consider the scalar function

$$J(\mathbf{W}) = E\big[\|\mathbf{x}(t) - \mathbf{W}\mathbf{W}^H\mathbf{x}(t)\|^2\big] = \mathrm{tr}(\mathbf{C}) - 2\,\mathrm{tr}(\mathbf{W}^H\mathbf{C}\mathbf{W}) + \mathrm{tr}\big(\mathbf{W}^H\mathbf{C}\mathbf{W}\,\mathbf{W}^H\mathbf{W}\big) \qquad (1)$$

with a matrix argument $\mathbf{W} \in \mathbb{C}^{n \times p}$. Here, $\mathrm{tr}(\cdot)$ denotes the trace of a square matrix and $\|\cdot\|$ is the Frobenius norm. It is shown in [4] that
• $\mathbf{W}$ is a stationary point of $J(\mathbf{W})$ if and only if $\mathbf{W} = \mathbf{U}_p\mathbf{Q}$, where $\mathbf{U}_p$ is a matrix containing any $p$ distinct eigenvectors of $\mathbf{C}$ and $\mathbf{Q}$ is any $p \times p$ unitary matrix;
• all stationary points of $J(\mathbf{W})$ are saddle points, except when $\mathbf{U}_p$ contains the $p$ dominant eigenvectors of $\mathbf{C}$. In that case, $J(\mathbf{W})$ attains its global minimum.
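As a quick numerical check of these two properties (our own real-valued illustration; the variable names are not from the paper), the following evaluates the cost (1) at the dominant-eigenvector solution, at an arbitrary orthonormal basis, and at a rotated copy $\mathbf{U}_p\mathbf{Q}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
A = rng.standard_normal((n, n))
C = A @ A.T / n                        # a synthetic covariance matrix

def J(W):
    """Cost (1), real case: tr(C) - 2 tr(W'CW) + tr(W'CW W'W)."""
    WCW = W.T @ C @ W
    return np.trace(C) - 2 * np.trace(WCW) + np.trace(WCW @ (W.T @ W))

vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
W_opt = vecs[:, -p:]                   # the p dominant eigenvectors
W_arb, _ = np.linalg.qr(rng.standard_normal((n, p)))  # arbitrary basis

# the dominant-eigenvector basis attains the global minimum of J
assert J(W_opt) <= J(W_arb)
# any rotation W_opt @ Q of that basis gives the same (minimal) cost,
# which is why OPAST alone recovers the subspace but not the eigenvectors
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
assert np.isclose(J(W_opt @ Q), J(W_opt))
```

The rotation invariance checked in the last line is exactly the ambiguity that PC-OPAST resolves with Givens rotations.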