Context-aware Sequential Recommendation
Qiang Liu[1,2], Shu Wu[1], Diyi Wang[3], Zhaokang Li[4], Liang Wang[1,2]
[1] Institute of Automation, Chinese Academy of Sciences
[2] University of Chinese Academy of Sciences
[3] Northeastern University
[4] Rice University
{qiang.liu, shu.wu}@nlpr.ia.ac.cn, wang.di@husky.neu.edu, zhaokang.li@rice.edu, wangliang@nlpr.ia.ac.cn
Abstract—Since sequential information plays an important
role in modeling user behaviors, various sequential recom-
mendation methods have been proposed. Methods based on
the Markov assumption are widely used, but they independently
combine only several of the most recent components. Recently, Recurrent
Neural Networks (RNN) based methods have been successfully
applied in several sequential modeling tasks. However, for real-
world applications, these methods have difficulty in modeling
contextual information, which has been proven to be very
important for behavior modeling. In this paper, we propose
a novel model, named Context-Aware Recurrent Neural Net-
works (CA-RNN). Instead of using the constant input matrix
and transition matrix in conventional RNN models, CA-RNN
employs adaptive context-specific input matrices and adap-
tive context-specific transition matrices. The adaptive context-
specific input matrices capture external situations where user
behaviors happen, such as time, location and weather.
The adaptive context-specific transition matrices capture
how the lengths of time intervals between adjacent behaviors in
historical sequences affect the transition of global sequential
features. Experimental results show that the proposed CA-
RNN model yields significant improvements over state-of-the-
art sequential recommendation methods and context-aware
recommendation methods on two public datasets, i.e., the
Taobao dataset and the Movielens-1M dataset.
I. INTRODUCTION
Nowadays, people are overwhelmed by a huge amount of
information. Recommender systems have become an important
tool for users to filter information and locate their preferences.
Since historical behaviors in different time periods have dif-
ferent effects on users’ behaviors, the importance of sequen-
tial information in recommender systems has been gradually
recognized by researchers. Methods based on Markov as-
sumption, including Factorizing Personalized Markov Chain
(FPMC) [6] and Hierarchical Representation Model (HRM)
[11], have been widely used for sequential prediction. However,
a major problem of these methods is that they independently
combine only several of the most recent components. To
resolve this deficiency, Recurrent Neural Networks (RNN)
have been employed to model the global sequential dependency
among all possible components. RNN-based methods achieve
state-of-the-art performance in different applications, e.g., sentence
modeling [4], click prediction [14], location prediction [3],
and next basket recommendation [13].
With enhanced information collection abilities, systems can
collect a great amount of contextual information, such as
location, time and weather. Moreover, contextual information
has been proven to be useful in determining users’ preferences
in recommender systems [1].
In real scenarios, as shown in Figure 1, applications usually
contain not only sequential information but also a large
amount of contextual information. Though RNN and other
sequential recommendation methods have achieved satisfac-
tory performance in sequential prediction, they still have
difficulty in modeling rich contextual information in real
scenarios. On the other hand, context-aware recommendation
has been extensively studied, and several methods have
been proposed recently, such as Factorization Machine (FM)
[7], Tensor Factorization for MAP maximization (TFMAP)
model [10], CARS2 [9] and Contextual Operating Tensor
(COT) [2][12]. However, these context-aware recommendation
methods cannot take sequential information into consideration.
To construct a model to capture the sequential information
and contextual information simultaneously, we first inves-
tigate the properties of sequential behavioral histories in
real scenarios. Here, we identify two types of contexts,
i.e., input contexts and transition contexts. Input contexts
denote external situations under which input elements occur
in behavioral sequences, that is to say, input contexts are
external contexts under which users conduct behaviors. Such
contexts usually include location (home or working place),
time (weekdays or weekends, morning or evening), weather
(sunny or rainy), etc. Transition contexts are contexts of
transitions between two adjacent input elements in historical
sequences. Specifically, transition contexts denote the time
intervals between adjacent behaviors. They capture context-adaptive
transition effects from past behaviors to future behaviors
under different time intervals. Usually, shorter time intervals
have more significant effects compared with longer ones.
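To make these two notions concrete, the following sketch (our illustration only; the field names and the hour-based interval unit are assumptions, not part of the paper's formulation) shows how a behavioral sequence might be annotated with input contexts and transition contexts:

# Illustrative sketch: annotating a behavioral sequence with input contexts
# (the external situation of each behavior) and transition contexts (the time
# interval to the previous behavior). Names and units are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Behavior:
    item_id: int
    timestamp: int  # Unix time at which the behavior happened
    input_context: dict = field(default_factory=dict)  # e.g., {"weekday": 6, "location": "home", "weather": "rainy"}
    transition_context: Optional[float] = None  # hours since the previous behavior; None for the first behavior

def attach_transition_contexts(sequence):
    """Fill transition contexts as time intervals (in hours) between adjacent behaviors."""
    for prev, curr in zip(sequence, sequence[1:]):
        curr.transition_context = (curr.timestamp - prev.timestamp) / 3600.0
    return sequence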
In this work, we propose a novel model, named Context-
Aware Recurrent Neural Networks (CA-RNN), to model
sequential information and contextual information in one
framework. Each layer of a conventional RNN contains an
input element and a recurrent transition from the previous
status, which are captured by an input matrix and a transition
matrix respectively. Different from conventional RNN models
that use constant input and transition matrices, CA-RNN employs
adaptive context-specific input matrices to model input contexts
and adaptive context-specific transition matrices to model
transition contexts.
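As a minimal sketch of this idea (assuming contexts have been discretized into integer indices; this is our own illustration in NumPy, not the authors' implementation), a conventional RNN step and a context-aware step can be contrasted as follows:

import numpy as np

def rnn_step(x_t, h_prev, W, M):
    # Conventional RNN layer: a constant input matrix W and a constant
    # transition matrix M are shared across all time steps.
    return np.tanh(W @ x_t + M @ h_prev)

def ca_rnn_step(x_t, h_prev, W_by_input_ctx, M_by_transition_ctx, input_ctx, transition_ctx):
    # Context-aware step: the input matrix is selected according to the input
    # context (e.g., time, location, weather), and the transition matrix is
    # selected according to the transition context (e.g., a bucketed time
    # interval), instead of using constant matrices.
    W_c = W_by_input_ctx[input_ctx]            # context-specific input matrix
    M_c = M_by_transition_ctx[transition_ctx]  # context-specific transition matrix
    return np.tanh(W_c @ x_t + M_c @ h_prev)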