0018-9545 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/TVT.2014.2367029, IEEE Transactions on Vehicular Technology
Particularly in (4), the parameter $p_i$, $\forall i \in N$, exactly describes the property of each user, connecting the contributed information with its recognized value. At the same time, it also indicates the QoI difference among users. On the basis of (4), if we consider the decreasing marginal returns in the amount of information, another example can be:
$$u^a(l) = \vartheta^{\sum_{i=1}^{N} p_i u_i^a(l)/\varsigma} \cdot \sum_{i=1}^{N} p_i u_i^a(l), \qquad (6)$$

where $\vartheta \in (0, 1]$ is the decay coefficient and $\varsigma$ is the scaling factor.
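To make the diminishing-returns behavior of a fusion function of this form concrete, the following is a minimal sketch in Python. The function name `fused_qoi` and the numeric values of the decay coefficient and scaling factor are illustrative assumptions, not values from the paper; the structure follows (6): a weighted sum of per-user utilities multiplied by a decay term that shrinks as the aggregate grows.

```python
from typing import Sequence

def fused_qoi(u_i: Sequence[float], p: Sequence[float],
              theta: float = 0.5, sigma: float = 10.0) -> float:
    """Illustrative fusion in the spirit of (6): a p_i-weighted sum of
    per-user utilities u_i^a(l), discounted by a decay term to model
    diminishing marginal returns in the amount of information.
    theta and sigma are assumed example values, not the paper's."""
    assert 0.0 < theta <= 1.0, "decay coefficient must lie in (0, 1]"
    weighted_sum = sum(pi * ui for pi, ui in zip(p, u_i))
    # Since theta <= 1, the multiplier shrinks as weighted_sum grows,
    # so each extra unit of contributed information adds less fused utility.
    return (theta ** (weighted_sum / sigma)) * weighted_sum

# Adding a second user with identical utility less than doubles the result:
one = fused_qoi([4.0], [1.0])
two = fused_qoi([4.0, 4.0], [1.0, 1.0])
```

Here `one` is about 3.03 while `two` is about 4.60, i.e., strictly larger than `one` but smaller than `2 * one`, which is exactly the decreasing-marginal-returns property the decay coefficient introduces.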
As our proposal is based on the mathematical paradigm of the Gur Game, our solution is transparent to the specific form of the function used in the QoI model, whether it is discontinuous, multimodal, concave, etc. In this paper, we adopt the fusion functions in (4) and (6), and verify the adaptability of our solution in Section VII.
B. System Flow
The Gur Game [20], [21] was originally proposed for use in distributed systems in which a collection of agents is expected to cooperate on a task. Each agent is associated with a finite state automaton that independently guides the agent's action, while taking into account the collective feedback that eventually captures the composite effect of all agents' actions. Mapping this to our considered participatory crowdsourcing scenario, the participant's smart device acts as the "agent", and the associated automaton can be easily deployed as a piece of software in the mobile OS. The "task" translates exactly to our focused social studies crowdsourced from a co-located group of participants, and the "composite effect of all agents' actions" is then the result of the participants' actions upon returning answers to the querier. Therefore, we believe that the fundamentals of the Gur Game serve as an ideal algorithmic engine due to its robustness, simplicity, and decentralized nature. To make it particularly suitable for crowdsourcing, in this paper we substantially extend the existing Gur Game and use it as part of the overall system flow shown in Fig. 1.
The system flow consists of two stages. The first stage relates to the interaction between the information center of the network platform and the Gur Game engine programmed in the user's smart device. The inputs are the residual energy level of the user's smart device and the multiple QoI requirements of the request. At each iteration of the Gur Game, the smart device sends its preliminary action (as a result of our proposed Gur Game algorithm) back to the platform. The latter then calculates the pay-off value based on the collective pieces of information received from all participants, and propagates it back to the Gur Game engine of each user. Based on this feedback, the automaton changes its current state and generates a new action, i.e., the level of information contribution. Therefore, the Gur Game engine uses a trial-and-error method to produce the best result at each step and iteratively approaches the overall optimum that fulfills the QoI requirements in an energy-efficient manner.
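The first-stage loop described above can be sketched with a classic two-action Gur Game automaton. This is a minimal illustration of the underlying paradigm, not the paper's extended multi-level algorithm; the class name `GurAutomaton`, the state depth, the payoff curve, and the target fraction are all assumptions made for the example.

```python
import math
import random

class GurAutomaton:
    """Classic two-action Gur Game automaton (illustrative sketch).
    States -depth..-1 choose action 0 (withhold); states 1..depth
    choose action 1 (contribute)."""
    def __init__(self, depth: int = 3, seed: int = 0):
        self.depth = depth
        self.rng = random.Random(seed)
        self.state = self.rng.choice([-1, 1])

    @property
    def action(self) -> int:
        return 1 if self.state > 0 else 0

    def feedback(self, reward_prob: float) -> None:
        if self.rng.random() < reward_prob:
            # Rewarded: reinforce the current action, capped at +/- depth.
            self.state += 1 if self.state > 0 else -1
            self.state = max(-self.depth, min(self.depth, self.state))
        else:
            # Penalized: drift toward the boundary; crossing it flips the action.
            if self.state == 1:
                self.state = -1
            elif self.state == -1:
                self.state = 1
            else:
                self.state += -1 if self.state > 0 else 1

# One run mimicking the first stage: devices act, the platform computes a
# common payoff from the collective result, and every automaton updates.
agents = [GurAutomaton(seed=s) for s in range(10)]
for _ in range(200):
    f = sum(a.action for a in agents) / len(agents)  # composite effect
    r = 0.2 + 0.6 * math.exp(-40 * (f - 0.3) ** 2)   # payoff, peaks at f = 0.3
    for a in agents:
        a.feedback(r)
```

The per-agent rule is purely local: each automaton only ever sees the scalar payoff broadcast by the platform, which is what makes the scheme decentralized and robust to the form of the underlying QoI function.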
[Fig. 2 here: a timeline from Sept. 5, 2008 to June 29, 2009, divided into 3-day periods of phone usages, with questionnaire requests arriving within each period.]
Fig. 2. The illustration of quantizing the entire duration into 91 time periods, and of when the considered participatory crowdsourcing request arrives.
The second stage relates to the interaction between the credit center of the network platform and the bidding module of the user's smart device. After the first stage, the platform obtains each user's preliminary action. Then, the users send their bids to the platform, which represent the credits they expect to be paid per unit amount of information contribution. The network platform selects as the active users for questionnaire answering those users who can meet the QoI requirements of the request while helping to reduce the total paid credits, receives their final information contributions determined by the Gur Game engine in the first stage, and pays their credits according to the previous bids. We next describe a detailed implementation and solution of this framework.
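As a rough illustration of the second stage, the following sketches a greedy bid-based selection. This is not the paper's actual selection rule, which is developed later; the function `select_users` and all of its parameters are hypothetical, and a simple additive QoI aggregate stands in for the real requirement check.

```python
def select_users(bids, contributions, qoi_per_unit, qoi_target):
    """Greedy sketch of the second stage (illustrative only): pick users
    in order of ascending bid until an additive QoI aggregate meets the
    request's target. Returns (selected user indices, total credits paid).
    bids[i] is user i's expected payment per unit of contribution."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    selected, qoi, paid = [], 0.0, 0.0
    for i in order:
        if qoi >= qoi_target:
            break  # QoI requirement already met; stop paying more credits
        selected.append(i)
        qoi += qoi_per_unit[i] * contributions[i]
        paid += bids[i] * contributions[i]  # pay the bid per unit contributed
    return selected, paid

# Three users with equal contributions and QoI value but different bids:
sel, cost = select_users(bids=[2.0, 1.0, 3.0],
                         contributions=[5.0, 5.0, 5.0],
                         qoi_per_unit=[1.0, 1.0, 1.0],
                         qoi_target=8.0)
```

Here the cheapest user alone yields a QoI of 5, short of the target 8, so the next-cheapest is added and the most expensive user is never paid, which captures the intent of meeting the QoI requirements while reducing the total paid credits.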
IV. A CASE STUDY
This section first provides an overview of the Social Evolution data set gathered by the MIT Media Lab [29], and then illustrates how we use it to motivate our participatory crowdsourcing application. The data set was generated by an application running on the smart devices of 80 undergraduates moving around the campus. It collects the phone usages and student locations from October 2008 to June 2009. The phone usage data consist of 3.15 million records of Bluetooth scans, 3.63 million scans of WLAN access points, 61,100 call records, and 47,700 logged SMS events. Also, the students provided offline, self-reported answers related to their health habits, diet and exercise, weight changes, and political opinions during the presidential election campaign. In our simulation, we use the phone usage data and the self-reported answers on health condition to motivate and form the participatory crowdsourcing process, as described below.
A. Phone Usage Data
We extract the phone usage records from September 5, 2008 to June 29, 2009, a total of 273 consecutive days. The data include 49,906 voice call records with the calling time, duration, caller, and callee information, and 33,148 SMS events including the sending time, sender, and receiver information. Then, to facilitate our simulation, we quantize the entire 273 days into 91 independent time periods, each of which covers 3 days of usage, as shown in Fig. 2. We assume that phone usages in two consecutive periods do not interrelate with each other, and since the phone battery status is not provided in the data set, we simply assume that the device is fully charged at the beginning of any time period. As a result, we have 91 time periods, each of which is associated with a set of phone usages, and this is used to evaluate how different phone data impact the proposed algorithm. Besides, we also make the following two assumptions: