Knowledge Graph Embedding via Dynamic Mapping Matrix
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu and Jun Zhao
National Laboratory of Pattern Recognition (NLPR)
Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
{guoliang.ji,shizhu.he,lhxu,kliu,jzhao}@nlpr.ia.ac.cn
Abstract
Knowledge graphs are useful resources for
numerous AI applications, but they are far
from complete. Previous work such as
TransE, TransH and TransR/CTransR re-
gards a relation as a translation from head
entity to tail entity, and CTransR achieves
state-of-the-art performance. In this pa-
per, we propose a more fine-grained model
named TransD, which improves upon
TransR/CTransR. In TransD, we use
two vectors to represent each named sym-
bol object (entity or relation). The first
vector represents the meaning of the entity
(relation); the other is used to construct a
mapping matrix dynamically. Compared
with TransR/CTransR, TransD considers
the diversity not only of relations but also
of entities. TransD has fewer parameters
and no matrix-vector multiplication opera-
tions, which makes it applicable to large-
scale graphs. In experiments, we evaluate
our model on two typical tasks: triplet
classification and link prediction. Evalua-
tion results show that our approach outper-
forms state-of-the-art methods.
1 Introduction
Knowledge graphs such as WordNet (Miller,
1995), Freebase (Bollacker et al., 2008) and Yago
(Suchanek et al., 2007) have been playing a piv-
otal role in many AI applications, such as relation
extraction (RE) and question answering (Q&A).
They usually contain huge amounts of structured
data in the form of triplets (head entity, relation,
tail entity), denoted as (h, r, t), where the relation
models the relationship between the two entities.
As most knowledge graphs have been built either
collaboratively or (partly) automatically, they of-
ten suffer from incompleteness. Knowledge graph
completion aims to predict relations between entities
based on existing triplets in a knowledge graph. In
the past decade, much work based on symbols and
logic has been done for knowledge graph comple-
tion, but such approaches are neither tractable enough
nor sufficiently scalable for large knowledge graphs. Re-
cently, a powerful approach for this task is to en-
code every element (entities and relations) of a
knowledge graph into a low-dimensional embed-
ding vector space. These methods perform reasoning
over knowledge graphs through algebraic opera-
tions (see the "Related Work" section).
Among these methods, TransE (Bordes et al.,
2013) is simple and effective, and also achieves
state-of-the-art prediction performance. It learns
low-dimensional embeddings for every entity and
relation in a knowledge graph. These vector em-
beddings are denoted by the same letters in bold-
face. The basic idea is that every relation is re-
garded as a translation in the embedding space. For
a golden triplet (h, r, t), the embedding $\mathbf{h}$ is close
to the embedding $\mathbf{t}$ after adding the embedding $\mathbf{r}$,
that is, $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$. TransE is suitable for 1-to-1
relations, but has flaws when dealing with 1-to-N,
N-to-1 and N-to-N relations.
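For illustration, the translation principle can be sketched as follows (a minimal sketch of ours, not the authors' code; it assumes the embeddings are plain NumPy vectors and uses the L2 norm as one common choice of dissimilarity measure):

```python
import numpy as np

def transe_score(h, r, t):
    """Dissimilarity of a triplet under TransE: ||h + r - t||_2.

    h, r, t: embedding vectors of the head entity, the relation,
    and the tail entity. A golden triplet should score near zero,
    since TransE requires h + r to be close to t.
    """
    return np.linalg.norm(h + r - t, ord=2)

# Toy usage with random (untrained) 50-dimensional embeddings.
h, r, t = np.random.randn(3, 50)
print(transe_score(h, r, t))
```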
TransH (Wang et al., 2014) is proposed to solve these issues.
TransH regards a relation as a translating oper-
ation on a relation-specific hyperplane, which is
characterized by a norm vector $\mathbf{w}_r$ and a trans-
lation vector $\mathbf{d}_r$. The embeddings $\mathbf{h}$ and $\mathbf{t}$ are
first projected onto the hyperplane of relation $r$ to
obtain the vectors $\mathbf{h}_{\perp} = \mathbf{h} - \mathbf{w}_r^{\top}\mathbf{h}\,\mathbf{w}_r$ and
$\mathbf{t}_{\perp} = \mathbf{t} - \mathbf{w}_r^{\top}\mathbf{t}\,\mathbf{w}_r$, and then $\mathbf{h}_{\perp} + \mathbf{d}_r \approx \mathbf{t}_{\perp}$.
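Concretely, the projection and score can be sketched as follows (an illustrative sketch of ours, not the reference implementation; it assumes $\mathbf{w}_r$ is normalized to unit length, as TransH requires):

```python
import numpy as np

def project_to_hyperplane(v, w_r):
    """Project embedding v onto the hyperplane of relation r,
    whose unit norm vector is w_r: v_perp = v - (w_r^T v) w_r."""
    return v - np.dot(w_r, v) * w_r

def transh_score(h, t, w_r, d_r):
    """TransH dissimilarity ||h_perp + d_r - t_perp||_2, where
    h_perp and t_perp are the hyperplane projections of h and t."""
    h_perp = project_to_hyperplane(h, w_r)
    t_perp = project_to_hyperplane(t, w_r)
    return np.linalg.norm(h_perp + d_r - t_perp, ord=2)
```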
In both TransE and TransH, the embeddings of entities
and relations lie in the same space. However, en-
tities and relations are different types of objects,
so it is insufficient to model them in the same space.
TransR/CTransR (Lin et al., 2015) sets a mapping
matrix $\mathbf{M}_r$ and a vector $\mathbf{r}$ for every relation $r$.
In TransR, $\mathbf{h}$ and $\mathbf{t}$ are projected to the aspects
that relation $r$ focuses on through the matrix $\mathbf{M}_r$.