Tensor completion via a multi-linear low-n-rank factorization model☆

Huachun Tan a,*, Bin Cheng a, Wuhong Wang a, Yu-Jin Zhang b, Bin Ran c

a Department of Transportation Engineering, Beijing Institute of Technology, Beijing 100081, PR China
b Department of Electronic Engineering, Tsinghua University, Beijing 100084, PR China
c Department of Civil and Environmental Engineering, University of Wisconsin-Madison, 1415 Engineering Drive, Madison, WI 53706, USA
article info

Article history:
Received 19 April 2013
Received in revised form 15 September 2013
Accepted 18 November 2013
Available online 20 January 2014
Communicated by Tao Mei

Keywords:
Tensor completion
Multi-linear low-n-rank factorization
Nonlinear Gauss–Seidel method
Singular value decomposition
abstract
The tensor completion problem is to recover a low-n-rank tensor from a subset of its entries. The main solution strategy has been based on extensions of the trace norm for minimizing tensor rank via convex optimization. This strategy bears the computational cost of the singular value decomposition (SVD), which becomes increasingly expensive as the size of the underlying tensor increases. To reduce this cost, we propose a multi-linear low-n-rank factorization model and apply a nonlinear Gauss–Seidel method, which requires only the solution of a linear least squares problem per iteration, to solve it. Numerical results show that the proposed algorithm can reliably solve a wide range of problems at least several times faster than trace norm minimization algorithms.
© 2014 The Authors. Published by Elsevier B.V. All rights reserved.
1. Introduction
A tensor is a multidimensional array, the higher-order generalization of vectors and matrices. It has many applications in information science, computer vision and graph analysis [1]. In the real world, the size and redundancy of data are growing rapidly, and nearly all existing high-dimensional real-world data either have the natural form of a tensor (e.g. multi-channel images and videos) or can be grouped into tensor form (e.g. tensor faces [2]). Challenges therefore arise in many areas when one confronts such high-dimensional data. Tensor decomposition is a popular tool for high-dimensional data processing, analysis and visualization. Two particular tensor decomposition methods can be considered higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) [3,4] and the Tucker decomposition [5]. Tensor decomposition gives a concise representation of the underlying structure of a tensor, revealing when the tensor data can be modeled as lying close to a low-dimensional subspace. Although useful, these decompositions are not as powerful as the matrix SVD: for general tensors they do not deliver the best low-rank approximation, which limits their applications.
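To make the Tucker-style extension of the SVD concrete, the sketch below (an illustrative NumPy implementation, not code from this paper; the function names `unfold` and `hosvd` are our own) computes the mode-n unfoldings of a tensor and a truncated higher-order SVD from them:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, then flatten
    the remaining axes into the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD (a Tucker-style decomposition).
    Each factor matrix holds the leading left singular vectors of the
    corresponding mode-n unfolding; the core tensor is obtained by
    projecting T onto those factors along every mode."""
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for n, U in enumerate(factors):
        # Contract mode n of the core with U^T, then restore the axis order.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors
```

The rank of each mode-n unfolding is exactly the n-rank the paper's title refers to; truncating the `ranks` tuple below those values yields a (generally non-optimal) low-n-rank approximation.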
In this paper, we try to recover a low-n-rank tensor from a subset of its entries. This is called the tensor completion problem, or the missing value estimation problem for tensors; in computer vision and graphics it is known as the image and video in-painting problem [6,7]. The key to solving it is how to build up the relationship between the known entries and the unknown ones. On this basis, algorithms for completing tensors can be coarsely divided into local and global algorithms. Local algorithms [8,9] assume that the further apart two points are, the weaker their dependence is, so that missing entries depend mainly on their neighbors. Thus, local algorithms can only exploit the information of adjacent entries. Sometimes, however, the values of the missing entries depend on entries that are far away, and local algorithms cannot take advantage of the global properties of the tensor. Therefore, in order to utilize as much of the tensor's information as possible, it is necessary to develop global algorithms that directly capture the complete information of the tensor to solve the tensor completion problem.
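The flavor of such a global, factorization-based approach (the abstract's Gauss–Seidel scheme that solves a linear least squares problem per iteration) can be illustrated in the matrix case. The sketch below is our own toy illustration under stated assumptions, not the authors' algorithm: it alternately updates the two factors of a low-rank model by least squares while re-imposing the known entries; the paper's model applies this kind of idea to the mode-n unfoldings of a tensor.

```python
import numpy as np

def als_complete(M, mask, rank, iters=200, seed=0):
    """Toy global completion by alternating least squares (matrix case).
    Fit a rank-`rank` factorization A @ B to the entries of M where
    mask is True, imputing the unknown entries with the current model."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    X = np.where(mask, M, A @ B)        # unknowns start at the initial model
    for _ in range(iters):
        A = X @ np.linalg.pinv(B)       # linear least squares update of A
        B = np.linalg.pinv(A) @ X       # linear least squares update of B
        X = np.where(mask, M, A @ B)    # re-impose the observed entries
    return A @ B
```

Because every update is a closed-form least squares solve, each iteration avoids the full SVD that trace-norm methods require, which is the source of the speedup claimed in the abstract.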
In the two-dimensional case, i.e. the matrix, the rank is a powerful tool for capturing global information and can be computed directly. But in the high-dimensional case, i.e. the tensor, there is no polynomial-time algorithm to determine the rank of a specific given tensor. Recently, based on extensions of the trace norm for minimizing tensor rank, several global algorithms [6,7,10–12] that solve the tensor completion problem via convex optimization have been proposed.
optimization have been proposed. Liu et al. [6] first proposed the
Neurocomputing
http://dx.doi.org/10.1016/j.neucom.2013.11.020
☆ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-No Derivative Works License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.
* Corresponding author. Tel./fax: +86 10 68914582.
E-mail addresses: tanhc@bit.edu.cn, tanhc00@gmail.com (H. Tan), 13012991141@bit.edu.cn (B. Cheng), wangwuhong@bit.edu.cn (W. Wang), zhang-yj@mail.tsinghua.edu.cn (Y.-J. Zhang), bran@wisc.edu (B. Ran).
Neurocomputing 133 (2014) 161–169