$B_j^e$: The recovery time point of the machine breakdown disruption on machine $j$.
$w_1$: The penalty coefficient.
$M$: A sufficiently large constant.
Decision variables
$s_{i,j}$: The starting time of operation $O_{ij}$ in the new schedule.
$c_{i,j}$: The completion time of operation $O_{ij}$ in the new schedule.
$p_{i,j}$: The processing time of operation $O_{ij}$ in the new schedule.
$x_{ij}$: A 0/1 variable that is equal to one if and only if operation $O_{ij}$ has a different starting time in the on-going and new schedules.
$y_{i,j,1}$: A 0/1 variable that is equal to one if and only if $s_{i,j} + p_{i,j} \le B_j^s$.
$y_{i,j,2}$: A 0/1 variable that is equal to one if and only if $s_{i,j} < B_j^s$ and $s_{i,j} + p_{i,j} > B_j^s$.
$y_{i,j,3}$: A 0/1 variable that is equal to one if and only if $B_j^s \le s_{i,j}$.
2.2. Mathematical model
With the variables defined above, the mathematical model for
the flowshop rescheduling problem is given as follows:
$f = w_1 \cdot f_1 + (1 - w_1) \cdot f_2$  (1)
$f_1 = \min \max_{1 \le i \le n} c_{i,m}$  (2)
$f_2 = \min \left\{ \sum_{i=1}^{n} \sum_{j=1}^{m} x_{ij} \right\}$  (3)
s.t.
$c_{i,j} - s_{i,j} - p_{i,j} + (1 - y_{i,j,1})M \ge 0$  (4)
$c_{i,j} - s_{i,j} - p_{i,j} - (1 - y_{i,j,1})M \le 0$  (5)
$c_{i,j} - s_{i,j} - p_{i,j} - B_j^e + B_j^s + (1 - y_{i,j,2})M \ge 0$  (6)
$c_{i,j} - s_{i,j} - p_{i,j} - B_j^e + B_j^s - (1 - y_{i,j,2})M \le 0$  (7)
$c_{i,j} - \max\{s_{i,j}, B_j^e\} - p_{i,j} + (1 - y_{i,j,3})M \ge 0$  (8)
$c_{i,j} - \max\{s_{i,j}, B_j^e\} - p_{i,j} - (1 - y_{i,j,3})M \le 0$  (9)
$\sum_{k=1}^{3} y_{i,j,k} = 1,\; i \in \{1,2,\dots,n\},\; j \in \{1,2,\dots,m\},\; k \in \{1,2,3\}$  (10)
$c_{i_1,j} - c_{i_2,j} - p_{i_1,j} \ge 0$ or $c_{i_2,j} - c_{i_1,j} - p_{i_2,j} \ge 0,\; \forall i_1 \ne i_2$  (11)
$s_{i,j} \ge r_{i,j},\; i \in \{1,2,\dots,n\},\; j \in \{1,2,\dots,m\}$  (12)
$s_{i+1,j} \ge c_{i,j},\; i \in \{1,2,\dots,n-1\},\; j \in \{1,2,\dots,m\}$  (13)
$x_{ij} \in \{0,1\},\; i \in \{1,2,\dots,n\},\; j \in \{1,2,\dots,m\}$  (14)
$y_{ijk} \in \{0,1\},\; i \in \{1,2,\dots,n\},\; j \in \{1,2,\dots,m\},\; k \in \{1,2,3\}$  (15)
In this mathematical model, the total weighted objective and the makespan objective are given in Eqs. (1) and (2), respectively. Eq. (3) gives the second objective of the problem, i.e., the instability objective, which minimises the number of operations whose starting times have been altered in the new schedule. Three situations under the machine breakdown disruption are captured by Constraints (4) to (9). The first situation, enforced by Constraints (4) and (5), is for the operations that complete their tasks before the machine breakdown. Constraints (6) and (7) enforce the second situation, which is for the operations that overlap with the machine breakdown, while Constraints (8) and (9) enforce the third situation, in which the operations begin their work after the machine breakdown disruption. Constraint (10) ensures that exactly one of the three cases holds for each operation. Constraint (11) states that multiple jobs cannot be processed on the same machine at the same time. Constraint (12) implies that a job cannot be started until its release event occurs. Constraint (13) enforces the operation sequence for the same job; in other words, an operation cannot be started until the completion of the predecessor operation of the same job. Constraints (14) and (15) define the value ranges for the decision variables.
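To make the evaluation of this model concrete, the following Python sketch classifies each operation into one of the three breakdown cases behind Constraints (4) to (9) and then evaluates the weighted objective of Eq. (1) for a candidate schedule. This is only an illustrative sketch: the dictionary-based data layout and the function names are assumptions introduced here, not part of the original formulation.

```python
def breakdown_case(s, p, b_start):
    """Return 1, 2 or 3 according to the definitions of y_{i,j,1}, y_{i,j,2} and y_{i,j,3}."""
    if s + p <= b_start:      # operation finishes before the breakdown starts (y_{i,j,1} = 1)
        return 1
    if s < b_start:           # operation starts before and overlaps the breakdown (y_{i,j,2} = 1)
        return 2
    return 3                  # operation starts at or after the breakdown start (y_{i,j,3} = 1)


def weighted_objective(start, proc, old_start, breakdown, w1, n, m):
    """Evaluate f = w1*f1 + (1 - w1)*f2 of Eqs. (1)-(3) for a candidate schedule.

    start, proc, old_start: dicts keyed by (i, j) with starting and processing times;
    breakdown: dict mapping machine j to (B_j^s, B_j^e); machines without a breakdown are omitted.
    """
    completion = {}
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s, p = start[i, j], proc[i, j]
            if j in breakdown:
                b_s, b_e = breakdown[j]
                case = breakdown_case(s, p, b_s)
                if case == 1:                              # unaffected by the breakdown
                    completion[i, j] = s + p
                elif case == 2:                            # interrupted, resumes at the recovery point
                    completion[i, j] = s + p + (b_e - b_s)
                else:                                      # must wait until the machine recovers
                    completion[i, j] = max(s, b_e) + p
            else:
                completion[i, j] = s + p
    f1 = max(completion[i, m] for i in range(1, n + 1))    # makespan, Eq. (2)
    f2 = sum(start[i, j] != old_start[i, j]                # instability, Eq. (3)
             for i in range(1, n + 1) for j in range(1, m + 1))
    return w1 * f1 + (1 - w1) * f2                         # Eq. (1)
```

With $w_1$ close to one the rescheduling decision is driven mainly by the makespan, while values close to zero favour preserving the original starting times.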
3. The canonical TLBO
TLBO mimics the learning process of a teacher and a population of learners. In the canonical TLBO, a group of learners forms the population. The best learner of the current population is selected as the teacher. All of the learners then perform two evolving phases, i.e., the teaching phase and the learning phase.
3.1. Teaching phase
In the canonical TLBO, the first part of the algorithm is the teaching phase, where the learners learn from the teacher by using the difference between the teacher and the mean result of the current population. In a problem with $n$ dimensions, at any iteration $t$, let $X_i^t$ ($i = 1, 2, \dots, m$) be the $i$th learner in the population, let $M_{t,j}$ be the mean result of the learners in a particular subject $j$ ($j = 1, 2, \dots, n$), and let $X_b^t$ be the best learner in the current population, which is also selected as the current teacher. The detailed implementation of the teaching phase is given as follows:
Step 1. Compute the difference $D_{t,j}$.
Let $D_{t,j}$ be the difference between the teacher $X_b^t$ and the mean result $M_{t,j}$. Formula (16) gives the computation process of $D_{t,j}$:
$D_{t,j} = r_t (X_{b,j}^t - T_F \cdot M_{t,j})$  (16)
where $X_{b,j}^t$ is the result of the teacher in subject $j$, $r_t$ is a random number in the range [0, 1], and $T_F$ is the teaching factor that decides the value of the mean to be changed. The value of $T_F$ can be either 1 or 2, which is again a heuristic step and is decided randomly with equal probability (Rao et al., 2012).
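The following NumPy sketch shows how the mean result, the teaching factor and the difference vector of Eq. (16) can be computed for an array-based population; the function name and the array layout (one row per learner, one column per subject) are assumptions for illustration, and a minimisation problem is assumed when selecting the teacher.

```python
import numpy as np

def teaching_difference(population, fitness, rng):
    """Difference vector D_{t,j} of Eq. (16) over all subjects j (minimisation assumed)."""
    teacher = population[np.argmin(fitness)]   # best learner of the current population
    mean = population.mean(axis=0)             # M_{t,j}: mean result of the learners per subject
    t_f = rng.integers(1, 3)                   # teaching factor T_F, 1 or 2 with equal probability
    r_t = rng.random()                         # random number r_t in [0, 1)
    return r_t * (teacher - t_f * mean)        # D_{t,j}
```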
Step 2. Generate a new learner.
Learner $X_i^t$ updates its state in subject $j$ by combining the difference $D_{t,j}$ and its current state:
${X'}_{i,j}^{t} = X_{i,j}^{t} + D_{t,j}$  (17)
where ${X'}_{i,j}^{t}$ is the updated value of the current $i$th learner $X_i^t$ in subject $j$.
Step 3. If ${X'}_i^t$ is better than $X_i^t$, then replace the latter by the new value.
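A minimal sketch of Steps 2 and 3 is given below. It takes the difference vector of Eq. (16) as an input (for example, as produced by the teaching_difference sketch above) and assumes a NumPy array population and a user-supplied objective function to be minimised.

```python
def teaching_phase_update(population, fitness, diff, objective):
    """Apply Eq. (17) to every learner and keep the new value only if it improves (Step 3)."""
    for i in range(population.shape[0]):
        candidate = population[i] + diff       # Eq. (17), applied to all subjects at once
        cand_fit = objective(candidate)
        if cand_fit < fitness[i]:              # Step 3: greedy replacement
            population[i] = candidate
            fitness[i] = cand_fit
    return population, fitness
```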
3.2. Learning phase
In the learning phase, the learners evolve by learning from each other. The main steps of the learning phase are as follows:
Step 1. For the learner $X_i^t$, randomly select another learner $X_k^t$.
Step 2. If $X_i^t$ is better than $X_k^t$, then let
${X'}_i^t = X_i^t + r_i (X_i^t - X_k^t)$  (18)
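As with the teaching phase, a short Python sketch of the learner update may help. It assumes minimisation and the same array-based population layout as above; the complementary move used when $X_k^t$ is the better of the two learners and the greedy acceptance of the new value follow the canonical TLBO of Rao et al. (2012) and are not shown explicitly in the step above.

```python
def learning_phase(population, fitness, objective, rng):
    """One learning-phase sweep based on Eq. (18), with greedy acceptance (minimisation assumed)."""
    size = population.shape[0]
    for i in range(size):
        k = rng.choice([idx for idx in range(size) if idx != i])   # Step 1: pick a different learner
        r_i = rng.random()                                          # random number r_i in [0, 1)
        if fitness[i] < fitness[k]:                                 # Step 2: X_i is better, move away from X_k
            candidate = population[i] + r_i * (population[i] - population[k])
        else:                                                       # complementary move of the canonical TLBO
            candidate = population[i] + r_i * (population[k] - population[i])
        cand_fit = objective(candidate)
        if cand_fit < fitness[i]:                                   # keep the new learner only if it improves
            population[i] = candidate
            fitness[i] = cand_fit
    return population, fitness
```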