A short version of this paper is accepted at an ICLR 2019 workshop.
RECURRENT EVENT NETWORK FOR REASONING OVER
TEMPORAL KNOWLEDGE GRAPHS
Woojeong Jin†, Changlin Zhang†, Pedro Szekely‡, Xiang Ren†‡
†Department of Computer Science, University of Southern California
‡Information Sciences Institute, University of Southern California
{woojeong.jin, changlin.zhang, xiangren}@usc.edu, pszekely@isi.edu
ABSTRACT
Recently, there has been a surge of interest in learning representations of dynamically evolving graph-structured data. However, current dynamic graph learning methods lack a principled way to model temporal, multi-relational, and concurrent interactions between nodes, a limitation that is especially problematic for temporal knowledge graph reasoning, where the goal is to predict unseen entity relationships (i.e., events) over time. Here we present Recurrent Event Network (RE-NET), an architecture for modeling complex event sequences, which consists of a recurrent event encoder and a neighborhood aggregator. The event encoder employs an RNN to capture (subject, relation)-specific patterns from historical entity interactions, while the neighborhood aggregator summarizes concurrent interactions within each time stamp. An output layer is designed for predicting forthcoming multi-relational events. Experiments1 on temporal link prediction over two knowledge graph datasets demonstrate the effectiveness of our method, especially on multi-step inference over time.
1 INTRODUCTION
Representation learning on dynamically evolving graph-structured data has emerged as an important machine learning task in a wide range of applications, such as social network analysis, question answering, and event forecasting. The task becomes particularly challenging for multi-relational graphs with complex interaction patterns between nodes, e.g., in reasoning over temporal knowledge graphs (TKGs). Despite recent studies on representation learning and reasoning over TKGs (Trivedi et al., 2017; García-Durán et al., 2018; Dasgupta et al., 2018; Leblay & Chekol, 2018), these methods either simply embed the associated time information into a low-dimensional space while ignoring the temporal dependencies between events (García-Durán et al., 2018; Dasgupta et al., 2018; Leblay & Chekol, 2018), or lack a principled way to consolidate concurrent events within the same time stamp (Trivedi et al., 2017).
In this paper, we propose a general neural architecture, called Recurrent Event Network (RE-NET), for modeling multi-relational event sequences. To address the above limitations, RE-NET introduces an event sequence encoder and a neighborhood aggregation module. The event sequence encoder captures temporal and multi-relational dynamics by utilizing past interactions between entities (i.e., events); it harnesses a recurrent neural network to encode these past entity interactions. The neighborhood aggregation module resolves multiple concurrent interactions at the same time stamp by consolidating neighborhood information in different ways. A classifier layer predicts unseen entity relationships for the current time stamp, given the prior encoder state, subject entity, and relation. We train RE-NET with a multi-class cross-entropy loss and perform multi-step inference to predict forthcoming events on the graph over time.
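The encoder-aggregator-classifier pipeline described above can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, the mean-pooling aggregator (one of several aggregation choices the paper considers), the plain tanh RNN cell, and all parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not from the paper: entity/relation vocab and embedding/hidden dims.
num_entities, num_relations, d, h = 50, 10, 16, 16

E = rng.normal(size=(num_entities, d))   # entity embeddings
R = rng.normal(size=(num_relations, d))  # relation embeddings

def aggregate(neighbor_ids):
    """Neighborhood aggregator: mean-pool the object entities that interact with
    the (subject, relation) pair within one time stamp (a simple aggregation choice)."""
    if len(neighbor_ids) == 0:
        return np.zeros(d)
    return E[neighbor_ids].mean(axis=0)

def rnn_step(hidden, x, W, U):
    """One recurrent update; a tanh cell stands in for the paper's RNN encoder."""
    return np.tanh(W @ x + U @ hidden)

def score_objects(subject, relation, history, W, U, Wo):
    """Encode the (subject, relation)-specific event history, then score all
    candidate object entities with a softmax classifier layer."""
    hidden = np.zeros(h)
    for neighbors_t in history:  # one set of concurrent events per past time stamp
        x = np.concatenate([E[subject], R[relation], aggregate(neighbors_t)])
        hidden = rnn_step(hidden, x, W, U)
    logits = Wo @ hidden
    logits -= logits.max()                        # numerical stability
    return np.exp(logits) / np.exp(logits).sum()  # multi-class prediction

# Randomly initialized (untrained) parameters of the sketch.
W = rng.normal(size=(h, 3 * d)) * 0.1
U = rng.normal(size=(h, h)) * 0.1
Wo = rng.normal(size=(num_entities, h)) * 0.1

# Predict the object of (subject=3, relation=2, ?) given two past time stamps,
# where entities 4 and 7 (then 9) interacted with the pair.
probs = score_objects(3, 2, [[4, 7], [9]], W, U, Wo)
print(probs.argmax(), probs.shape)
```

In training, the softmax output would be compared against the observed object entity with a cross-entropy loss; for multi-step inference, predicted events at one time stamp are fed back as history for the next.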
We evaluate the proposed method on temporal graph reasoning (i.e., link prediction) using two public temporal knowledge graph datasets, and test its performance on multi-step inference over time. Experimental results demonstrate the strengths of RE-NET in modeling temporal, multi-relational graph data with concurrent events, compared with state-of-the-art static and temporal graph reasoning methods.
1 Code and data are released at https://github.com/INK-USC/RENet.
arXiv:1904.05530v1 [cs.LG] 11 Apr 2019