$$
v_t(x) = \bigl(1 - P_{D,t}(x)\bigr)\,v_{t|t-1}(x) + \sum_{z_t \in Z_t} \frac{P_{D,t}(x)\,h(z_t|x)\,v_{t|t-1}(x)}{K(z_t) + \int P_{D,t}(\xi)\,h(z_t|\xi)\,v_{t|t-1}(\xi)\,d\xi} \tag{9}
$$
where $P_{S,t}(\xi)$ is the surviving probability of the previous target state $\xi$, $\beta_{t|t-1}(x|\xi)$ denotes the intensity of the spawning RFS with a previous state $\xi$, $\gamma_t(x)$ is the intensity of the birth RFS, and $P_{D,t}(x)$ is the detection probability of a target with state $x$ at time $t$.
Eqs. (8) and (9) describe the prediction and update of the intensity function, respectively. Note that the PHD recursion is less costly to implement than the original multiple-target Bayesian recursion in Eqs. (5) and (6), since the former operates on the single-target state space X while the latter operates on the space F(X) of all finite subsets of X. Even so, the PHD recursion has no analytic solution in general, and various efforts have been made and many promising results achieved under a variety of assumptions [16,17].
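For illustration only, the following sketch evaluates the update of Eq. (9) on a crude one-dimensional grid, assuming a hypothetical Gaussian likelihood $h(z|x)$, a constant detection probability and a constant clutter intensity; none of these modeling choices come from the paper, which instead relies on the particle approximation recalled in Section 2.3.

```python
import numpy as np

# Hypothetical 1-D state grid and predicted intensity v_{t|t-1}(x) (assumed values).
x = np.linspace(0.0, 100.0, 501)                 # discretized single-target state space
dx = x[1] - x[0]
v_pred = np.exp(-0.5 * ((x - 40.0) / 3.0) ** 2)  # some predicted PHD surface

P_D = 0.9          # constant detection probability P_{D,t} (assumption)
clutter = 1e-3     # constant clutter intensity K(z) (assumption)
Z = [38.5, 70.2]   # measurement set Z_t (made up)

def likelihood(z, x, sigma=2.0):
    """Hypothetical Gaussian measurement likelihood h(z|x)."""
    return np.exp(-0.5 * ((z - x) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

# Eq. (9): missed-detection term plus one detection term per measurement.
v_upd = (1.0 - P_D) * v_pred
for z in Z:
    num = P_D * likelihood(z, x) * v_pred
    denom = clutter + np.sum(num) * dx           # K(z) + integral of P_D h(z|.) v_{t|t-1}
    v_upd = v_upd + num / denom

# Integrating the updated PHD gives the expected number of targets.
print("expected target number:", np.sum(v_upd) * dx)
```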
2.2. The ant algorithms
Swarm intelligence is a novel approach to problem solving that takes inspiration from the social behavior of insects and other animals, such as foraging and the clustering of dead corpses. In particular, ants have inspired a number of methods and techniques, among which the two most studied and most successful are the ant colony optimization algorithm and the ant-based clustering algorithm. Both are population-based, meta-heuristic approaches, but the former is based on optimization whilst the latter is essentially empirical. A brief introduction to the two ant algorithms is given below.
2.2.1. The ant colony optimization algorithm
The ant colony optimization (ACO) algorithm was first proposed by Dorigo et al. to solve several discrete optimization problems, and one of its most successful applications is solving the traveling salesman problem (TSP) by means of the ant system (AS) [21], where ants iteratively build solutions and deposit a given amount of pheromone on the paths that they have traveled.
Path selection is a stochastic procedure based on a product
of two parameters, i.e., the pheromone and heuristic values. For
instance, if ant k is now located at the decision point i, then it will
choose city j according to the following law:
$$
P_{i,j}^{k} = \frac{[\tau_{i,j}]^{\alpha}\,[\eta_{i,j}]^{\beta}}{\sum_{l \in N_i^k} [\tau_{i,l}]^{\alpha}\,[\eta_{i,l}]^{\beta}} \quad \text{if } j \in N_i^k \tag{10}
$$
where $\alpha$ and $\beta$ are two adjustment parameters which determine the relative importance of the pheromone trail $\tau_{i,j}$ and the heuristic information $\eta_{i,j}$, and $N_i^k$ is the set of candidate cities not yet visited by ant $k$. Observe that a large value of $\tau_{i,j}$ attracts more ants to path $(i,j)$; moreover, ants prefer short paths, which correspond to large values of $\eta_{i,j}$. Such a probabilistic decision rule therefore encourages ants to find the shortest path to the food source.
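A minimal sketch of the selection rule in Eq. (10) is given below, assuming illustrative values for the pheromone matrix tau, the heuristic matrix eta (taken here as the inverse distance, a common choice for the TSP), and the parameters alpha and beta; the function and variable names are hypothetical.

```python
import numpy as np

def choose_next_city(current, unvisited, tau, eta, alpha=1.0, beta=2.0, rng=None):
    """Sample the next city for one ant according to Eq. (10)."""
    rng = np.random.default_rng() if rng is None else rng
    unvisited = list(unvisited)                  # N_i^k: cities not yet visited by ant k
    weights = np.array([(tau[current, j] ** alpha) * (eta[current, j] ** beta)
                        for j in unvisited])
    probs = weights / weights.sum()              # normalized selection probabilities
    return rng.choice(unvisited, p=probs)

# Toy 4-city instance: eta is the inverse distance between cities.
dist = np.array([[0.0, 2.0, 9.0, 10.0],
                 [2.0, 0.0, 6.0, 4.0],
                 [9.0, 6.0, 0.0, 8.0],
                 [10.0, 4.0, 8.0, 0.0]])
eta = np.divide(1.0, dist, out=np.zeros_like(dist), where=dist > 0)
tau = np.ones_like(dist)                         # uniform initial pheromone
print(choose_next_city(current=0, unvisited=[1, 2, 3], tau=tau, eta=eta))
```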
Once the ant arrives at its destination, the solution correspond-
ing to the ant’s traveled path is evaluated and the corresponding
pheromone value on the path is updated according to the following
law:
$$
\tau_{i,j} \leftarrow \rho\,\tau_{i,j} + \sum_{k} \Delta\tau_{i,j}^{k}, \tag{11}
$$
where $\rho$ is the pheromone trail persistence, and $\Delta\tau_{i,j}^{k}$ is the pheromone amount that ant $k$ deposits on the trail $(i,j)$ it has traversed. This pheromone update process aims to exploit the best path through the term $\sum_{k}\Delta\tau_{i,j}^{k}$ and to reduce the chance that worse paths are visited by following ants. In addition, the term $\rho\,\tau_{i,j}$ causes the pheromone level of all trails to diminish gradually, and trails that are not reinforced gradually lose pheromone and in turn have a lower probability of being chosen by subsequent ants.
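Continuing the same toy setting, the sketch below applies the update of Eq. (11) after each ant has completed its tour. The deposit rule $\Delta\tau_{i,j}^{k} = Q/L_k$ (inverse tour length), the value of $\rho$, and the tour representation are illustrative assumptions rather than choices stated in the text above.

```python
import numpy as np

def update_pheromone(tau, tours, tour_lengths, rho=0.9, Q=1.0):
    """Eq. (11): tau <- rho * tau + sum_k delta_tau^k, applied edge by edge."""
    tau = rho * tau                                  # persistence (evaporation for rho < 1)
    for tour, length in zip(tours, tour_lengths):
        deposit = Q / length                         # assumed deposit: inverse tour length
        for i, j in zip(tour, tour[1:] + tour[:1]):  # closed tour: return to the start
            tau[i, j] += deposit
            tau[j, i] += deposit                     # symmetric TSP
    return tau

tau = np.ones((4, 4))
tours = [[0, 1, 3, 2]]                               # one ant's tour over four cities
tour_lengths = [2.0 + 4.0 + 8.0 + 9.0]               # its length under the toy distances
print(update_pheromone(tau, tours, tour_lengths))
```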
As a consequence, the path with the highest pheromone, corre-
sponding to the shortest distance, forms the solution to the TSP.
2.2.2. Ant-based clustering algorithm
Ant-based clustering was first introduced by Deneubourg et al. [26] to explain how ants gather corpses to clean up their nest and transport larvae to sort them by size. In the ant-based clustering algorithm, we assume that the behavior of each ant consists of picking and dropping, and that all data items are randomly scattered on a grid. Initially, each ant randomly picks up one data item and is then placed at a random position on the grid. For instance, ant $k$ first moves a step of a given step-size on the grid, and then decides whether to drop its data item at the new position according to the probability:
$$
P_{\mathrm{drop}}^{k}(i) = \left(\frac{f(i)}{k^{-} + f(i)}\right)^{2} \tag{12}
$$
where $f(i)$ denotes the neighborhood function (also called the similarity estimate), with the form $f(i) = \max\{0, (1/s^{2})\sum_{j}(1 - d_{i,j}/\alpha)\}$, $k^{-}$ is an empirical parameter set equal to 0.3, $s^{2}$ is the size of the local neighborhood, $\alpha$ is the scaling parameter, and $d_{i,j}$ is the dissimilarity function defined between points in data space.
If the “drop” decision is made, the ant drops the data item at its current grid position or in the immediate neighborhood of it. Then it will search for a new “free” data item and pick it up with the probability:
$$
P_{\mathrm{pick}}^{k}(i) = \left(\frac{k^{+}}{k^{+} + f(i)}\right)^{2} \tag{13}
$$
where $k^{+}$ is also an empirical parameter and takes the value of 0.1. This dropping-picking behavior is repeated for each ant until the iteration condition is satisfied, and the resulting layout of items constitutes a clustering of the data according to size or other properties.
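The sketch below puts Eqs. (12) and (13) together for a single ant, assuming a Euclidean dissimilarity $d_{i,j}$, a 3 x 3 local neighborhood of size $s^{2} = 9$, and the parameter values quoted above ($k^{-} = 0.3$, $k^{+} = 0.1$, $\alpha$ set to 1); the neighborhood handling and the data are illustrative.

```python
import numpy as np

K_MINUS, K_PLUS, ALPHA = 0.3, 0.1, 1.0   # k^-, k^+ and the scaling parameter alpha
S2 = 9                                    # size of the local neighborhood (3 x 3 patch)

def neighborhood_f(item, neighbors):
    """Similarity estimate f(i) = max{0, (1/s^2) * sum_j (1 - d_ij / alpha)}."""
    if len(neighbors) == 0:
        return 0.0
    d = np.linalg.norm(neighbors - item, axis=1)   # assumed Euclidean dissimilarity
    return max(0.0, np.sum(1.0 - d / ALPHA) / S2)

def p_drop(f):
    """Eq. (12): probability that a laden ant drops its item."""
    return (f / (K_MINUS + f)) ** 2

def p_pick(f):
    """Eq. (13): probability that an unladen ant picks up a free item."""
    return (K_PLUS / (K_PLUS + f)) ** 2

# Toy example: the carried item is very similar to its grid neighbors,
# so the drop probability is noticeably larger than the pick probability.
item = np.array([0.50, 0.50])
neighbors = np.array([[0.52, 0.49], [0.48, 0.51], [0.50, 0.53]])
f = neighborhood_f(item, neighbors)
print(f, p_drop(f), p_pick(f))
```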
2.3. Motivation for combining PHD with ant algorithms
As indicated in Eqs. (8) and (9), since the Bayesian recursion of the PHD operates on the single-target state space X, an intuitive approximate solution to this problem is to give the intensity function $v_{t-1}(\cdot)$ a particle description at time $t-1$, i.e.:
$$
v_{t-1}(x_{t-1}) \approx \sum_{i=1}^{L_{t-1}} w_{t-1}^{(i)}\,\delta_{x_{t-1}^{(i)}}(x_{t-1}) \tag{14}
$$
where $\delta_{x_{t-1}^{(i)}}(x_{t-1})$ is the Dirac delta function concentrated at $x_{t-1}^{(i)}$, $w_{t-1}^{(i)}$ is the corresponding weight, and $L_{t-1}$ is the number of particles.
Assume that $J_t$ particles are used to represent the intensity function $\gamma_t(x)$ at time $t$, and that two non-negative proposal densities, i.e., $p_t(\cdot|Z_t)$ and $q_t(\cdot|x_{t-1}^{(i)}, Z_t)$, are chosen; then the predicted PHD can be approximated and rewritten as [16]:
$$
v_{t|t-1}(x_t) = \sum_{i=1}^{L_{t-1}+J_t} w_{t|t-1}^{(i)}\,\delta_{x_t^{(i)}}(x_t) \tag{15}
$$
where

$$
x_t^{(i)} \sim
\begin{cases}
q_t(\cdot \mid x_{t-1}^{(i)}, Z_t), & i = 1, \ldots, L_{t-1} \\
p_t(\cdot \mid Z_t), & i = L_{t-1}+1, \ldots, L_{t-1}+J_t
\end{cases}
$$
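As a rough illustration of Eqs. (14) and (15), the sketch below propagates the first $L_{t-1}$ particles through a proposal $q_t$ built on a hypothetical random-walk motion model and appends $J_t$ birth particles drawn from a proposal $p_t$ centered on the current measurements. Both Gaussian proposals, the scalar state, and all numerical values are assumptions for illustration; the computation of the predicted weights $w_{t|t-1}^{(i)}$ follows the importance ratios detailed in [16] and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle description of v_{t-1}(x), as in Eq. (14) (made-up values).
L_prev = 500
x_prev = rng.normal(40.0, 3.0, size=L_prev)   # particles x_{t-1}^{(i)}
w_prev = np.full(L_prev, 1.0 / L_prev)        # weights  w_{t-1}^{(i)}

J_t = 100                                     # particles devoted to the birth intensity
Z_t = np.array([38.5, 70.2])                  # current measurement set (made up)

def sample_q(x_prev):
    """Proposal q_t(.|x_{t-1}^{(i)}, Z_t): here a hypothetical random-walk move."""
    return x_prev + rng.normal(0.0, 1.0, size=x_prev.shape)

def sample_p(J, Z):
    """Proposal p_t(.|Z_t): here birth particles scattered around the measurements."""
    centers = rng.choice(Z, size=J)
    return centers + rng.normal(0.0, 2.0, size=J)

# Predicted particle set of Eq. (15): L_{t-1} propagated particles plus J_t birth particles.
x_pred = np.concatenate([sample_q(x_prev), sample_p(J_t, Z_t)])
print(x_pred.shape)                           # (L_prev + J_t,)
```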