1768 D. Liu et al. / International Journal of Approximate Reasoning 55 (2014) 1764–1786
Definition 6. (See [43,44,60].) Given a decision table S = (U, C ∪ D, V, f), U/C = {X_1, X_2, ···, X_m} is a partition of the objects under the condition attributes C, where X_i (i = 1, 2, ···, m) is an equivalence class of the condition attributes; U/D = {D_1, D_2, ···, D_n} is a partition of the objects under the decision attributes D, where D_j (j = 1, 2, ···, n) is an equivalence class of the decision attributes. ∀X_i ∈ U/C, ∀D_j ∈ U/D, the support, accuracy and coverage of a rule X_i → D_j are defined respectively as follows.

Support of X_i → D_j: Supp(D_j|X_i) = |X_i ∩ D_j|;
Accuracy of X_i → D_j: Acc(D_j|X_i) = |X_i ∩ D_j| / |X_i|;
Coverage of X_i → D_j: Cov(D_j|X_i) = |X_i ∩ D_j| / |D_j|,

where |X_i| and |D_j| denote the cardinalities of the sets X_i and D_j, respectively. For a decision rule X_i → D_j, X_i is the predecessor and D_j is the successor of the decision rule, according to Definition 5.
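The three measures can be computed directly from the equivalence classes. A minimal sketch (not from the paper; the function names and the toy classes are illustrative), assuming each equivalence class is represented as a Python set of object identifiers:

```python
# Illustrative sketch: support, accuracy and coverage of a rule X_i -> D_j,
# with equivalence classes represented as Python sets of objects.

def supp(Xi, Dj):
    """Support: |X_i ∩ D_j|."""
    return len(Xi & Dj)

def acc(Xi, Dj):
    """Accuracy: |X_i ∩ D_j| / |X_i|."""
    return len(Xi & Dj) / len(Xi)

def cov(Xi, Dj):
    """Coverage: |X_i ∩ D_j| / |D_j|."""
    return len(Xi & Dj) / len(Dj)

# Toy classes: X1 = {1, 2, 3}, D1 = {1, 2, 4}
X1, D1 = {1, 2, 3}, {1, 2, 4}
print(supp(X1, D1))  # 2
print(acc(X1, D1))   # 2/3
print(cov(X1, D1))   # 2/3
```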
Considering the massive data in real-life applications, we adopt a matrix-based strategy to simplify the problem. The support matrix, the accuracy matrix and the coverage matrix, together with their propositions, are defined as follows [43,44].
$$
\operatorname{Supp}(D|X)=
\begin{pmatrix}
\operatorname{Supp}(D_1|X_1) & \operatorname{Supp}(D_2|X_1) & \cdots & \operatorname{Supp}(D_n|X_1)\\
\operatorname{Supp}(D_1|X_2) & \operatorname{Supp}(D_2|X_2) & \cdots & \operatorname{Supp}(D_n|X_2)\\
\vdots & \vdots & \ddots & \vdots\\
\operatorname{Supp}(D_1|X_m) & \operatorname{Supp}(D_2|X_m) & \cdots & \operatorname{Supp}(D_n|X_m)
\end{pmatrix}
\tag{1}
$$
$$
\operatorname{Acc}(D|X)=
\begin{pmatrix}
\operatorname{Acc}(D_1|X_1) & \operatorname{Acc}(D_2|X_1) & \cdots & \operatorname{Acc}(D_n|X_1)\\
\operatorname{Acc}(D_1|X_2) & \operatorname{Acc}(D_2|X_2) & \cdots & \operatorname{Acc}(D_n|X_2)\\
\vdots & \vdots & \ddots & \vdots\\
\operatorname{Acc}(D_1|X_m) & \operatorname{Acc}(D_2|X_m) & \cdots & \operatorname{Acc}(D_n|X_m)
\end{pmatrix}
\tag{2}
$$
$$
\operatorname{Cov}(D|X)=
\begin{pmatrix}
\operatorname{Cov}(D_1|X_1) & \operatorname{Cov}(D_2|X_1) & \cdots & \operatorname{Cov}(D_n|X_1)\\
\operatorname{Cov}(D_1|X_2) & \operatorname{Cov}(D_2|X_2) & \cdots & \operatorname{Cov}(D_n|X_2)\\
\vdots & \vdots & \ddots & \vdots\\
\operatorname{Cov}(D_1|X_m) & \operatorname{Cov}(D_2|X_m) & \cdots & \operatorname{Cov}(D_n|X_m)
\end{pmatrix}
\tag{3}
$$
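All three matrices can be assembled in one pass over the two partitions. A hypothetical sketch (the function name and toy partitions are illustrative), assuming U/C and U/D are lists of disjoint sets covering the same universe:

```python
# Illustrative sketch: building the support, accuracy and coverage matrices
# of (1)-(3) from the partitions U/C and U/D (equivalence classes as sets).

def matrices(UC, UD):
    # Supp(D|X): m x n matrix of intersection cardinalities.
    supp = [[len(Xi & Dj) for Dj in UD] for Xi in UC]
    # Acc(D|X): each entry of row i divided by |X_i|.
    acc = [[s / len(Xi) for s in row] for Xi, row in zip(UC, supp)]
    # Cov(D|X): each entry of column j divided by |D_j|.
    cov = [[s / len(Dj) for Dj, s in zip(UD, row)] for row in supp]
    return supp, acc, cov

UC = [{1, 2, 3}, {4, 5, 6}]   # U/C = {X1, X2}
UD = [{1, 2, 4}, {3, 5, 6}]   # U/D = {D1, D2}
supp, acc, cov = matrices(UC, UD)
# Propositions 2 and 3: each row of Acc and each column of Cov sums to 1.
assert all(abs(sum(row) - 1) < 1e-9 for row in acc)
assert all(abs(sum(col) - 1) < 1e-9 for col in zip(*cov))
```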
Proposition 1. (See [43,44].) Supp(D_j|X_i) ≥ 0, ∀X_i ∈ U/C, ∀D_j ∈ U/D, i = 1, 2, ···, m, j = 1, 2, ···, n.
Proposition 2. (See [43,44].) 0 ≤ Acc(D_j|X_i) ≤ 1 and ∑_{j=1}^{n} Acc(D_j|X_i) = 1, ∀X_i ∈ U/C, i = 1, 2, ···, m.
Proposition 3. (See [43,44].) 0 ≤ Cov(D_j|X_i) ≤ 1 and ∑_{i=1}^{m} Cov(D_j|X_i) = 1, ∀D_j ∈ U/D, j = 1, 2, ···, n.
According to Definition 6, the relations among the three matrices in (1)–(3) can be written as

Acc(D_j|X_i) = Supp(D_j|X_i) / ∑_{j=1}^{n} Supp(D_j|X_i)  and  Cov(D_j|X_i) = Supp(D_j|X_i) / ∑_{i=1}^{m} Supp(D_j|X_i),

namely, both the "accuracy" and the "coverage" can be expressed in terms of the "support".
Following Pawlak's and Tsumoto's ideas [60,72,73], we choose two factors, accuracy and coverage, to describe knowledge in this paper. The definition of knowledge is given as follows.
Definition 7. (See [43,44].) ∀X_i (i = 1, 2, ···, m), ∀D_j (j = 1, 2, ···, n), if Acc(D_j|X_i) ≥ α and Cov(D_j|X_i) ≥ β hold, we call the rule X_i → D_j a kind of knowledge, where α ∈ (0.5, 1) and β ∈ (0, 1).
The values of α and β depend on the problem itself. Generally, we choose the decision rules with high accuracy and high coverage [44].
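The thresholding in Definition 7 amounts to a simple filter over all candidate rules. A hedged sketch (the function name, the α and β values, and the toy partitions are illustrative assumptions):

```python
# Illustrative sketch: select the rules X_i -> D_j that count as knowledge
# under Definition 7, i.e. Acc(D_j|X_i) >= alpha and Cov(D_j|X_i) >= beta,
# with alpha in (0.5, 1) and beta in (0, 1).

def knowledge_rules(UC, UD, alpha=0.6, beta=0.5):
    rules = []
    for i, Xi in enumerate(UC):
        for j, Dj in enumerate(UD):
            inter = len(Xi & Dj)
            if inter == 0:
                continue
            acc = inter / len(Xi)
            cov = inter / len(Dj)
            if acc >= alpha and cov >= beta:
                rules.append((i + 1, j + 1, acc, cov))  # rule X_i -> D_j
    return rules

UC = [{1, 2, 3}, {4, 5, 6}]
UD = [{1, 2, 4}, {3, 5, 6}]
print(knowledge_rules(UC, UD))  # keeps X1 -> D1 and X2 -> D2
```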
3. Incremental approaches for knowledge discovery based on extended rough sets
In this section, we focus on the knowledge updating process in an IIS when the object set evolves over time while the attribute set remains unchanged. Following the incremental approaches in [43,44], we first investigate the incremental strategies for adding and deleting objects in an IIS. To illustrate our method clearly, we divide the work into three parts. We provide several basic assumptions of the incremental approach for inducing knowledge in Section 3.1. We introduce a matrix updating strategy to achieve the knowledge incremental learning process in Section 3.2. An incremental