Cogn Comput
historical check-in locations can reflect users’ daily behaviors. Moreover, users’ historical check-in locations and their social behavior are meaningfully connected [21]. Zheng et al. [3] proposed the HGSM-based recommendation model, which uses individual location history and social relationships to make friend and location recommendations. Li and Chen [22] proposed a three-layered friendship model to evaluate the similarity among users in LBSNs; the model makes use of social relationships, user behaviors, and mobility models to find the relationships among users. Sui et al. [4] proposed a location-sensitive friend recommendation model that combines users’ location sequences detected from their posts. Bagci and Karagoz [5] proposed context-aware friend recommendation in LBSNs based on the current context and location. However, these methods do not ensure both accuracy and efficiency of friend recommendation simultaneously. Our work differs from these existing studies. The FE-ELM model takes a new perspective on friend recommendation in LBSNs, regarding it as a binary classification problem. The FE-ELM model integrates spatial-temporal, social, and textual properties simultaneously to extract users’ features, and ELM, with its fast learning speed, is selected as the classifier. Therefore, the FE-ELM model outperforms the existing friend recommendation methods in both accuracy and efficiency.
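As a minimal sketch of this binary-classification framing (the feature names follow the STsim, STran, and TPer notation of Table 1, but the helper function and the numeric values are invented for illustration):

```python
# Hypothetical helper: turn one candidate user pair (u, v) into a training
# sample for the classifier. x is the concatenated feature vector of the
# pair; t is the binary label (1 = friends, 0 = not friends).
def make_sample(st_sim, st_ran, t_per, are_friends):
    x = [st_sim, st_ran, t_per]   # spatial-temporal, social, textual features
    t = 1 if are_friends else 0   # binary classification target
    return x, t

x, t = make_sample(0.8, 0.3, 0.5, True)
print(x, t)  # → [0.8, 0.3, 0.5] 1
```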
Preliminary and Overview of Our Model
In this section, we first give the preliminaries, which contain a brief introduction to ELM and some definitions used in this paper. Then, the overview of the FE-ELM model is introduced. Finally, Table 1 shows the notations frequently used in this paper.
Extreme Learning Machine
The training requires $N$ arbitrary samples $(\mathbf{x}_j, \mathbf{t}_j)$, where $\mathbf{x}_j = [x_{j1}, x_{j2}, \cdots, x_{jn}]^T \in \mathbb{R}^n$ and $\mathbf{t}_j = [t_{j1}, t_{j2}, \cdots, t_{jm}]^T \in \mathbb{R}^m$. $\mathbf{x}_j$ is the feature vector and $\mathbf{t}_j$ is the target result. Huang et al. [14] modeled an SLFN with $\tilde{N}$ hidden nodes and activation function $g(x)$. The output function of SLFNs can be given by

$$\sum_{i=1}^{\tilde{N}} \beta_i \, g_i(\mathbf{x}_j) = \sum_{i=1}^{\tilde{N}} \beta_i \, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i) = \mathbf{o}_j, \quad j = 1, \cdots, N \tag{1}$$
In Eq. (1), $\mathbf{w}_i = [w_{i1}, w_{i2}, \cdots, w_{in}]^T$ is the input weight vector of the $i$th hidden node and $\beta_i = [\beta_{i1}, \beta_{i2}, \cdots, \beta_{im}]^T$ is the output weight vector of the $i$th hidden node. Besides, $b_i$ represents the bias of the $i$th hidden node, and $\mathbf{o}_j$ is the predicted result.
The goal of SLFN learning is to minimize the error of the output, which can be expressed as

$$\sum_{j=1}^{N} \left\| \mathbf{o}_j - \mathbf{t}_j \right\| = 0 \tag{2}$$

i.e., there exist $\beta_i$, $\mathbf{w}_i$, and $b_i$ satisfying

$$\sum_{i=1}^{\tilde{N}} \beta_i \, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i) = \mathbf{t}_j, \quad j = 1, \cdots, N \tag{3}$$
Table 1 The frequently used notations

N                   The number of training samples
(x_j, t_j)          The training sample
Ñ                   The number of hidden nodes
w_i                 The input weight
β_i                 The output weight
b_i                 The bias of hidden node
o_j                 The predicted result
H                   The output matrix
T                   The target result matrix
H†                  The Moore-Penrose inverse of H
G                   A social network
C                   The set of check-ins
U                   The set of users
E                   The set of edges
U_u                 The vector of user u in the matrix
‖U_u‖               The bound norm of vector U_u
STsim(u, υ)         The spatial-temporal feature
|T(u, υ)|           The number of transition-users
|P(u)|              The number of all neighbor users
STran(u, υ)         The social feature
K                   The set of keywords
r_i                 The visited vertex
in(i)               The set of vertices that point to vertex i
|U_i ∩ U_j|         The number of common friends
V_u                 The vector of keyword preference
w_ij                The weight in the UUUK graph
out(j)              The set of vertices that vertex j points to
|U_i|               The number of a user's friends
|U_i| · |U_j|       The average number of friends
TPer(u, υ)          The textual feature