578 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 19, NO. 4, AUGUST 2015
Algorithm 1 One-Test-at-a-Time Strategy
1: Input: SUT(n, v_1, ..., v_n) and covering strength t
2: Output: covering array TS
3: TS = ∅
4: Construct S (all the t-way schemas to be covered) based on n, v_1, ..., v_n, and t
5: while S ≠ ∅ do
6:   Generate a test case p with the highest fitness value according to some heuristics
7:   Add p to the test suite TS
8:   Remove the t-way schemas covered by p from S
9: end while
10: return TS
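As an illustration, the greedy loop of Algorithm 1 can be sketched in Python. This is a minimal sketch, not the paper's implementation: the function names are our own, and line 6's heuristic is approximated here by sampling random candidate tests and keeping the best one (the paper uses PSO for that step).

```python
from itertools import combinations, product
import random

def t_way_schemas(test, t):
    """All t-way schemas covered by one test case, each schema
    represented as a frozenset of (factor index, value) pairs."""
    return {frozenset(zip(pos, (test[i] for i in pos)))
            for pos in combinations(range(len(test)), t)}

def one_test_at_a_time(values, t, candidates=50, seed=0):
    """Greedy covering-array construction in the shape of Algorithm 1.

    `values[i]` is the number of levels of factor i. The candidate-sampling
    heuristic below is a stand-in for the PSO-based generation of line 6.
    """
    rng = random.Random(seed)
    n = len(values)
    # Line 4: S holds every t-way schema that must be covered.
    S = {frozenset(zip(pos, vals))
         for pos in combinations(range(n), t)
         for vals in product(*(range(values[i]) for i in pos))}
    TS = []
    while S:  # line 5
        best, best_fit = None, 0
        for _ in range(candidates):
            p = tuple(rng.randrange(v) for v in values)
            fit = len(t_way_schemas(p, t) & S)  # fitness of Definition 4
            if fit > best_fit:
                best, best_fit = p, fit
        if best is None:
            # Fallback: materialize one still-uncovered schema directly,
            # guaranteeing progress and termination.
            schema = dict(next(iter(S)))
            best = tuple(schema.get(i, 0) for i in range(n))
        TS.append(best)           # line 7
        S -= t_way_schemas(best, t)  # line 8
    return TS
```

Each iteration shrinks S by the schemas the chosen test covers, so the loop terminates once every t-way schema is covered.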
Definition 4 (Fitness Function): Let TS be the generated test set, and p be a test case. Then fitness(p) is the number of t-way schemas covered by p that are not yet covered by TS. The fitness function can be formulated as

fitness(p) = |schema_t({p}) − schema_t(TS)|    (1)

where schema_t(TS) represents the set of all t-way schemas covered by test set TS, and |·| stands for cardinality. When none of the C_n^t t-way schemas covered by p is covered by TS, the fitness function reaches its maximum value fitness(p) = |schema_t({p})| = C_n^t.
For example, consider the 2-way covering array generation for the e-commerce system shown in Table II. Suppose that TS consists of test cases (0, 0, 0, 0, 0) and (0, 0, 1, 1, 1). The fitness function computes the fitness value for a candidate test case p = (1, 0, 1, 2, 2) as follows: p covers ten 2-way schemas, namely schema_2({p}) = {(1, 0, –, –, –), (1, –, 1, –, –), (1, –, –, 2, –), (1, –, –, –, 2), (–, 0, 1, –, –), (–, 0, –, 2, –), (–, 0, –, –, 2), (–, –, 1, 2, –), (–, –, 1, –, 2), (–, –, –, 2, 2)}, and TS already covers only (–, 0, 1, –, –) among them, so the function returns fitness(p) = 9.
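This computation can be reproduced with a short sketch of Eq. (1); the function names `schemas` and `fitness` are ours, chosen for illustration.

```python
from itertools import combinations

def schemas(test, t=2):
    # Set of t-way schemas covered by a single test case; a schema is
    # identified by its factor positions and their values.
    return {(pos, tuple(test[i] for i in pos))
            for pos in combinations(range(len(test)), t)}

def fitness(p, TS, t=2):
    """fitness(p) = |schema_t({p}) - schema_t(TS)|, per Definition 4 / Eq. (1)."""
    covered = set().union(*(schemas(q, t) for q in TS)) if TS else set()
    return len(schemas(p, t) - covered)

TS = [(0, 0, 0, 0, 0), (0, 0, 1, 1, 1)]
p = (1, 0, 1, 2, 2)
print(len(schemas(p)))  # 10 two-way schemas covered by p
print(fitness(p, TS))   # 9: only (-, 0, 1, -, -) is already covered by TS
```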
C. PSO
PSO is a swarm-based evolutionary computation technique. It was developed by Kennedy et al. [30], inspired by the social behavior of bird flocking and fish schooling. PSO utilizes a population of particles as a set of candidate solutions. Each of the particles represents a certain position in the problem hyperspace with a given velocity. A fitness function is used to evaluate the quality of each particle. Initially, particles are distributed uniformly in the hyperspace. Then each particle repeatedly updates its state according to the best position in its own history (pbest) and the current global best position (gbest). In this way, each particle tends to move toward the individual and global optima, eventually finding an optimal or near-optimal solution.
Suppose that the problem domain is a D-dimensional hyperspace. Then the position and velocity of particle i can be represented by x_i ∈ R^D and v_i ∈ R^D, respectively. CPSO uses the following equations to update a particle's velocity and position, where v_{i,j}(k) represents the jth component of the velocity of particle i at the kth iteration, and x_{i,j}(k) represents its corresponding position:

v_{i,j}(k + 1) = ω × v_{i,j}(k) + c_1 × r_{1,j} × (pbest_{i,j} − x_{i,j}(k)) + c_2 × r_{2,j} × (gbest_j − x_{i,j}(k))    (2)

x_{i,j}(k + 1) = x_{i,j}(k) + v_{i,j}(k + 1).    (3)
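A single particle's update per Eqs. (2) and (3) can be sketched as follows; the function name and parameter defaults (the commonly used ω = 0.9, c_1 = c_2 = 2 cited below) are illustrative assumptions, not the paper's code.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, rng=random):
    """One velocity/position update for a single particle, per Eqs. (2)-(3).

    x, v, pbest are length-D lists for this particle; gbest is the
    swarm-best position. Returns the new position and velocity.
    """
    D = len(x)
    new_v = []
    for j in range(D):
        r1, r2 = rng.random(), rng.random()  # fresh r_{1,j}, r_{2,j} per dimension
        vj = (w * v[j]                        # inertia term
              + c1 * r1 * (pbest[j] - x[j])   # memory (cognitive) term
              + c2 * r2 * (gbest[j] - x[j]))  # cooperation (social) term
        new_v.append(vj)
    new_x = [x[j] + new_v[j] for j in range(D)]  # Eq. (3)
    return new_x, new_v
```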
The best position of particle i in its history is pbest_i, and gbest is the best position among all particles. The velocity-update equation (2) captures the three basic concepts of PSO. The first is inertia, the tendency of the particle to continue in the direction it has been moving. The second is memory, the attraction toward the best position the particle itself has ever found. The third is cooperation, the attraction toward the best position found by other particles.
The parameter ω is the inertia weight. It controls the balance between exploration (global search) and exploitation (local search). Two positive real numbers c_1 and c_2 are acceleration constants that control the movement tendency toward the individual and global best positions. Most studies set ω = 0.9 and c_1 = c_2 = 2 to get the best balance [17], [31], [32]. In addition, r_{1,j} and r_{2,j} are two uniformly distributed random numbers in the range [0.0, 1.0], used to ensure the diversity of the population.
If the problem domain (the search space of the particles) is bounded, a bound handling strategy is adopted to keep the particles inside the search space. Many different strategies have been proposed [33]. In the reflecting strategy, when a particle exceeds the bound of the search space in any dimension, its direction in that dimension is reversed so that it returns to the search space. For example, for a dimension with values ranging from 0 to 2, if a particle moves to 3, its position is reflected back to 1.
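The reflecting strategy can be sketched in a few lines; the loop (an assumption of ours, not spelled out in the text) handles the case where the overshoot exceeds the range width.

```python
def reflect(pos, low, high):
    """Reflecting bound handling: mirror an out-of-range coordinate
    at the violated bound until it lies within [low, high]."""
    while pos < low or pos > high:
        if pos > high:
            pos = 2 * high - pos  # e.g. 3 in [0, 2] reflects to 1
        else:
            pos = 2 * low - pos
    return pos
```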
In addition, as the velocity can increase over time, a limit is set on the velocity to prevent an infinite velocity or an invalid position for the particle. Setting a maximum velocity, which bounds the distance a particle can move from its current position in one step, reduces the likelihood of the swarm's traveling distance exploding [31]. Generally, the maximum velocity for dimension i is selected as half the range of dimension i.
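Velocity clamping under this rule can be sketched as follows (the helper name is ours, chosen for illustration):

```python
def clamp_velocity(v, ranges):
    """Limit each velocity component to half the range of its dimension,
    i.e. |v_j| <= ranges[j] / 2, as suggested in the text."""
    return [max(-r / 2, min(r / 2, vj)) for vj, r in zip(v, ranges)]
```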
The pseudocode in Algorithm 2 presents the process of generating a test case by PSO. This algorithm can be invoked by line 6 of Algorithm 1 to generate a test case for t-way schemas. The n factors of the test case can be treated as an n-dimensional hyperspace. A particle p_i = (x_1, x_2, ..., x_n) can be regarded as a candidate test case. The fitness function is that of Definition 4, the number of uncovered t-way schemas in the generated test suite that are covered by particle p_i. PSO employs real numbers, but for covering array generation the valid values are integers, so each dimension of a particle's position can be rounded to an integer while the velocity is maintained as a real number. This method is used in all prior research applying PSO to covering array generation [13]–[16].
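The rounding step can be sketched as below; clipping to the factor's value range is our assumption for keeping rounded coordinates valid, and the function name is illustrative.

```python
def to_test_case(position, values):
    """Map a real-valued particle position to a valid integer test case by
    rounding each coordinate and clipping it to [0, values[i] - 1];
    the velocity itself stays real-valued."""
    return tuple(min(max(int(round(x)), 0), v - 1)
                 for x, v in zip(position, values))
```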
III. RELATED WORK
In this section, three different but related aspects are dis-
cussed. We first summarize search-based CT, especially the
current applications of PSO for covering array generation.
Then, we introduce prior research on discrete versions of PSO,