ordinal matrix describes job orders in the sequences. Let $\phi_{j,j'}$ represent the element of the dependency matrix, which denotes the number of times that job $j$ appears immediately after job $j'$. The dependency matrix describes the similar job blocks of the selected elite learners.
Up to now, several researchers [33–37] have proposed a number of probabilistic models based on the ordinal matrix and the dependency matrix to solve scheduling problems. Unfortunately, these models suffer from some limitations when applied to flow shop scheduling problems. For convenience of explanation, let $\rho_{j,i}$ denote the probability that job $j$ appears in position $i$, and let $\Omega(i)$ represent the set of jobs not yet scheduled at position $i$. Jarboui et al. [33] considered $\rho_{j,i} = \varphi_{j,i}\,\phi_{j,[i-1]} \big/ \sum_{l\in\Omega(i)} \varphi_{l,i}\,\phi_{l,[i-1]}$, where $[i-1]$ denotes the job in position $i-1$. According to this definition, no job can be arranged in the first position, since $\phi_{j,[i-1]} = 0$ when $i = 1$. Although Ref. [38] combined the above probabilistic model with PSO, the job in the first position remains undetermined. To overcome this drawback, Pan and Ruiz
[34] considered the probability that job $j$ appears in the first position to be $\rho_{j,i} = \varphi_{j,i} \big/ \sum_{l\in\Omega(i)} \varphi_{l,i}$, and let $\rho_{j,i} = \big( \varphi_{j,i} \big/ \sum_{l\in\Omega(i)} \varphi_{l,i} + \phi_{j,[i-1]} \big/ \sum_{l\in\Omega(i)} \phi_{l,[i-1]} \big) \big/ 2$ when $i = 2,\cdots,n$. However, the model proposed by Pan and Ruiz also has some limitations. Firstly, it is unreasonable to assume that each sequence has the same weight. Secondly, the probabilistic model lacks an effective update mechanism; as a result, it cannot retain the historical information of the population, and the population easily loses diversity. Chen et al. [35] suggested that $\rho_{j,i}$ equal a random value drawn from a uniform distribution when $i = 1$, and that the probabilistic model be updated with incremental learning, as in PBIL [39], to modify the search space and improve performance. In fact, such a random method can neither gather the genetic information of the first position nor reflect the current state of the superior population. In addition, Shen and Wang [40,41] adapted the probabilistic model of PBIL to gather the information of job permutations, but similar job blocks are not considered in their model.
After investigating the limitations of the previously mentioned models, this article presents an incremental learning probabilistic model. Let $\lambda_{j,j'}(t)$ represent the probability that job $j$ appears immediately after job $j'$ in the $t$-th iteration, let $\mu_{j,i}(t)$ denote the probability that job $j$ appears in or before position $i$ in the $t$-th iteration, and let $\varepsilon_{j,i}(t)$ denote the probability that job $j$ appears in position $i$ in the $t$-th iteration when $i = 1$. $\lambda_{j,j'}(t)$ and $\mu_{j,i}(t)$ are defined as Eq. (8).
$$\lambda_{j,j'}(t)=\frac{\phi_{j,j'}(t)}{\sum_{j=1}^{n}\phi_{j,j'}(t)},\qquad \mu_{j,i}(t)=\frac{\varphi_{j,i}(t)}{\sum_{j=1}^{n}\varphi_{j,i}(t)},\qquad i=2,3,\cdots,n,\ \ j=1,2,\cdots,n \tag{8}$$
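To make Eq. (8) concrete, the following is a minimal Python sketch of how the dependency and ordinal statistics could be accumulated from a set of elite permutations. The function and variable names are illustrative rather than taken from the paper; the cumulative ("in or before position $i$") convention follows the definition of $\mu_{j,i}(t)$ above.

```python
import numpy as np

def build_lambda_mu(elites, n_jobs):
    """Accumulate the dependency and ordinal statistics of Eq. (8).

    elites : list of elite permutations, each a list of 0-based job indices.
    Returns (lam, mu): lam[j, jp] estimates the probability that job j
    appears immediately after job jp; mu[j, i] estimates the probability
    that job j appears in or before position i.
    """
    dep = np.zeros((n_jobs, n_jobs))   # dep[j, jp]: times j follows jp
    phi = np.zeros((n_jobs, n_jobs))   # phi[j, i]: times j in or before position i
    for seq in elites:
        for i, j in enumerate(seq):
            phi[j, i:] += 1            # cumulative over positions i, i+1, ...
            if i > 0:
                dep[j, seq[i - 1]] += 1
    # Normalize over jobs, as in Eq. (8); guard against empty columns.
    lam = dep / np.maximum(dep.sum(axis=0, keepdims=True), 1)
    mu = phi / phi.sum(axis=0, keepdims=True)
    return lam, mu
```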
From the above design, it can be seen that the proposed probabilistic model considers the similar job blocks of the selected individuals. Therefore, the job in position $i$ is strongly related to the job in position $i-1$. This paper employs a weight mechanism to determine the job in the first position: the original weight of each individual is replaced by the rank of that individual in the superior set according to the makespan. Thus, $\varepsilon_{j,i}(t)$ can be defined as Eq. (9),
$$\varepsilon_{j,i}(t)=\frac{sumrank_{j}(t)}{\sum_{m=1}^{subsize} sumrank_{m}(t)},\qquad i=1 \tag{9}$$

where $sumrank_{j}(t)$ represents the total rank weight of job $j$ appearing in position $i$ ($i = 1$) in the $t$-th iteration, and $subsize$ is the size of the superior set.
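A small sketch of Eq. (9) follows. It assumes the elite learners are already sorted in ascending order of makespan and that better-ranked learners receive the larger rank weight, which is our reading of the weight mechanism and of the example in Fig. 2, not an explicit statement of the paper.

```python
import numpy as np

def build_epsilon(elites, n_jobs):
    """Rank-weighted first-position probabilities of Eq. (9)."""
    subsize = len(elites)
    sumrank = np.zeros(n_jobs)
    for rank, seq in enumerate(elites):       # rank 0 = best learner
        sumrank[seq[0]] += subsize - rank     # weights subsize, ..., 2, 1 (assumed)
    return sumrank / sumrank.sum()
```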
To enable the probabilistic model to balance the historical and current information of the superior population, an incremental learning mechanism based on the Hebb rule [39] is adopted to update $\lambda_{j,j'}(t)$, $\mu_{j,i}(t)$ and $\varepsilon_{j,i}(t)$, as described in Eqs. (10)–(12), where $\gamma \in (0, 1)$ is the learning rate. Indeed, this mechanism can slow down the convergence of the algorithm and adjust the search space.
$$\lambda_{j,j'}(t)=(1-\gamma)\cdot\lambda_{j,j'}(t-1)+\gamma\cdot\lambda_{j,j'}(t) \tag{10}$$

$$\mu_{j,i}(t)=(1-\gamma)\cdot\mu_{j,i}(t-1)+\gamma\cdot\mu_{j,i}(t) \tag{11}$$

$$\varepsilon_{j,i}(t)=(1-\gamma)\cdot\varepsilon_{j,i}(t-1)+\gamma\cdot\varepsilon_{j,i}(t) \tag{12}$$
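Eqs. (10)–(12) are the same exponential smoothing applied to the three arrays; a one-line sketch (illustrative, not the authors' code):

```python
def incremental_update(old, new, gamma):
    """Eqs. (10)-(12): blend historical (old) and current (new) statistics."""
    return (1.0 - gamma) * old + gamma * new

# Applied identically to all three arrays, e.g. with a hypothetical gamma:
# lam = incremental_update(lam_prev, lam_now, gamma=0.1)
```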
According to the above definitions, the probability $\rho_{j,i}(t)$ that job $j$ appears in position $i$ in the $t$-th iteration with the learning strategy can be established as Eq. (13).
$$\rho_{j,i}(t)=\begin{cases}\varepsilon_{j,i}(t), & i=1\\ \omega\cdot\mu_{j,i}(t)+(1-\omega)\cdot\lambda_{j,[i-1]}(t), & i=2,3,\cdots,n\end{cases} \tag{13}$$
where $\omega \in (0, 1)$ is a scaling factor used to control the proportion of $\mu_{j,i}(t)$ and $\lambda_{j,j'}(t)$.
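Eq. (13) can then be evaluated per candidate job; a minimal sketch with 0-based positions, where `prev_job` stands for the job $[i-1]$:

```python
def rho(j, i, prev_job, eps, mu, lam, omega):
    """Eq. (13), with 0-based positions; prev_job is the job [i-1]."""
    if i == 0:                      # first position: rank-weighted model
        return eps[j]
    return omega * mu[j, i] + (1.0 - omega) * lam[j, prev_job]
```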
In the canonical learning phase, learners evolve by learning from each other. In this paper, each learner instead learns from the probabilistic model to generate its offspring. $Cr \in (0, 1)$ is a crossover rate and $r$ is a random number sampled from a uniform distribution between 0 and 1. The procedure of generating an offspring can be described as follows: from the first position to the last position, if $r < Cr$, the first unscheduled job of the current learner $\pi$ is inserted into $\pi^{new}_{i}$; if $r \geq Cr$, the unscheduled job with the maximum probability $\rho_{j,i}(t)$ is put into $\pi^{new}_{i}$. Based on the above explanation, the pseudo code of the discrete probabilistic learning phase is given in Algorithm 2.
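A compact Python sketch consistent with the description above (Algorithm 2 itself is not reproduced in this excerpt); the names, 0-based indexing, and uniform random tie-breaking are assumptions consistent with the worked example below.

```python
import random

def generate_offspring(learner, eps, mu, lam, omega, cr):
    """Sketch of the discrete probabilistic learning phase (cf. Algorithm 2).

    With probability cr the next position copies the learner's first
    unscheduled job; otherwise the unscheduled job of maximum rho (Eq. (13))
    is chosen, breaking ties uniformly at random.
    """
    remaining = list(learner)          # unscheduled part of the current learner
    unscheduled = set(learner)         # the set Omega
    new_perm = []
    for i in range(len(learner)):
        if random.random() < cr:
            job = remaining[0]
        else:
            if i == 0:
                probs = {j: eps[j] for j in unscheduled}
            else:
                prev = new_perm[-1]
                probs = {j: omega * mu[j, i] + (1 - omega) * lam[j, prev]
                         for j in unscheduled}
            best = max(probs.values())
            job = random.choice([j for j, p in probs.items() if p == best])
        new_perm.append(job)
        remaining.remove(job)
        unscheduled.remove(job)
    return new_perm
```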
In order to clearly illustrate the procedure of the proposed discrete probabilistic learning phase, we give a simple example here. The manufacturing process has seven jobs and three processing operations. Suppose that five elite learners are selected from the population, as shown in Fig. 2, and that all elite learners have been sorted in ascending order of makespan. First, we compute the probability $\varepsilon$ of the jobs appearing in the first position; then $\mu$ and $\lambda$ are computed as in Fig. 2.

Next, suppose that $\omega = 0.2$, $Cr = 0.5$, the current learner is $\pi = \{2, 4, 7, 5, 3, 6, 1\}$, and the unscheduled set is $\Omega = \{1, 2, 3, 4, 5, 6, 7\}$. The procedure of generating a new learner is as follows.
(1) $i = 1$. If $r = 0.6 > Cr$, then the job in the first position is generated by $\varepsilon$. From $\varepsilon$ in Fig. 2, job2 and job4 have the maximum probability 0.33, so we randomly select one of them for the first position. Let $\pi^{new}(1) = 2$; job2 is deleted from $\pi$ and $\Omega$, so $\pi = \{4, 7, 5, 3, 6, 1\}$ and $\Omega = \{1, 3, 4, 5, 6, 7\}$.
(2) $i = 2$. If $r = 0.3 < Cr$, then job4, the first unscheduled job of $\pi$, is selected; let $\pi^{new}(2) = 4$. Job4 is removed from $\pi$ and $\Omega$, so $\pi = \{7, 5, 3, 6, 1\}$ and $\Omega = \{1, 3, 5, 6, 7\}$.
(3) $i = 3$. If $r = 0.9 > Cr$, then the probabilities of all unscheduled jobs are computed: $\rho_{1,3} = 0.2 \times 0.20 + 0.8 \times 0.25 = 0.24$, $\rho_{3,3} = 0.2 \times 0.13 + 0.8 \times 0.25 = 0.226$, $\rho_{5,3} = 0.2 \times 0.13 + 0.8 \times 0.00 = 0.026$, $\rho_{6,3} = 0.2 \times 0.13 + 0.8 \times 0.25 = 0.226$, $\rho_{7,3} = 0.2 \times 0.07 + 0.8 \times 0.25 = 0.214$. Thus, job1 has the maximum probability, and $\pi^{new}(3) = 1$, $\pi = \{7, 5, 3, 6\}$, $\Omega = \{3, 5, 6, 7\}$.
(4) $i = 4$. If $r = 0.7 > Cr$, then $\rho_{3,4} = 0.2 \times 0.10 + 0.8 \times 0.00 = 0.02$, $\rho_{5,4} = 0.2 \times 0.20 + 0.8 \times 0.25 = 0.24$, $\rho_{6,4} = 0.2 \times 0.10 + 0.8 \times 0.25 = 0.22$, $\rho_{7,4} = 0.2 \times 0.20 + 0.8 \times 0.25 = 0.24$. Job5 and job7 tie with the maximum probability 0.24, and job5 is randomly selected; thus $\pi^{new}(4) = 5$, $\pi = \{7, 3, 6\}$, $\Omega = \{3, 6, 7\}$.
(5) $i = 5$. If $r = 0.2 < Cr$, then $\pi^{new}(5) = 7$, $\pi = \{3, 6\}$, $\Omega = \{3, 6\}$.
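The arithmetic of step (3) can be checked directly against Eq. (13) with $\omega = 0.2$; the $\mu$ and $\lambda$ values below are read off the example (Fig. 2 itself is not reproduced here).

```python
omega = 0.2
# (job, mu_{j,3}, lambda_{j,[i-1]}, expected rho_{j,3}) as used in step (3)
checks = [
    (1, 0.20, 0.25, 0.240),
    (3, 0.13, 0.25, 0.226),
    (5, 0.13, 0.00, 0.026),
    (6, 0.13, 0.25, 0.226),
    (7, 0.07, 0.25, 0.214),
]
for job, mu_ji, lam_j, expected in checks:
    rho_ji = omega * mu_ji + (1 - omega) * lam_j   # Eq. (13), i > 1 branch
    assert abs(rho_ji - expected) < 1e-9, (job, rho_ji)
print("step (3) probabilities verified")
```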