Many other conditions on cost functions have been discussed in [42]. By using the thresholds, one can divide the universe U into three regions of a decision partition \pi_D based on (\alpha, \beta):

POS_{(\alpha,\beta)}(\pi_D \mid \pi_A) = \{ x \in U \mid p(D_{\max}([x]_A) \mid [x]_A) \geq \alpha \},
BND_{(\alpha,\beta)}(\pi_D \mid \pi_A) = \{ x \in U \mid \beta < p(D_{\max}([x]_A) \mid [x]_A) < \alpha \},
NEG_{(\alpha,\beta)}(\pi_D \mid \pi_A) = \{ x \in U \mid p(D_{\max}([x]_A) \mid [x]_A) \leq \beta \},    (7)

where D_{\max}([x]_A) = \arg\max_{D_i \in \pi_D} \left\{ \frac{|[x]_A \cap D_i|}{|[x]_A|} \right\}.
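As an illustration, the three regions of Eq. (7) can be computed directly from a decision table. The following is a minimal sketch, not the authors' implementation; the attribute names and the toy table are invented for the example:

```python
from collections import defaultdict

def three_regions(objects, A, D, alpha, beta):
    """Split U into POS/BND/NEG regions under thresholds (alpha, beta),
    following Eq. (7): compare p(D_max([x]_A) | [x]_A) with alpha and beta."""
    blocks = defaultdict(list)               # equivalence classes [x]_A
    for x in objects:
        blocks[tuple(x[a] for a in A)].append(x)

    pos, bnd, neg = [], [], []
    for block in blocks.values():
        counts = defaultdict(int)            # decision-class frequencies in the block
        for x in block:
            counts[x[D]] += 1
        p = max(counts.values()) / len(block)  # p(D_max([x]_A) | [x]_A)
        if p >= alpha:
            pos.extend(block)
        elif p <= beta:
            neg.extend(block)
        else:                                # beta < p < alpha
            bnd.extend(block)
    return pos, bnd, neg

# Toy table: condition attribute 'a', decision attribute 'd' (invented).
U = [{'a': 0, 'd': 1}, {'a': 0, 'd': 1}, {'a': 0, 'd': 1}, {'a': 0, 'd': 0},
     {'a': 1, 'd': 1}, {'a': 1, 'd': 0}]
pos, bnd, neg = three_regions(U, ['a'], 'd', alpha=0.7, beta=0.4)
```

Here the class [x]_{a=0} has majority probability 0.75 \geq \alpha and falls into the positive region, while [x]_{a=1} has probability 0.5, strictly between \beta and \alpha, and falls into the boundary region.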
Unlike rules in the classical rough set theory, all three types of rules obtained from the three regions may be uncertain.
They represent the levels of tolerance of errors in making incorrect decisions. Let p = p(D_{\max}([x]_A) \mid [x]_A); the Bayesian expected cost of each rule can then be expressed as follows:
Cost of positive rule: p \lambda_{PP} + (1 - p) \lambda_{PN},
Cost of boundary rule: p \lambda_{BP} + (1 - p) \lambda_{BN},
Cost of negative rule: p \lambda_{NP} + (1 - p) \lambda_{NN}.    (8)
Consider the special case where we assume zero cost for a correct classification, namely \lambda_{PP} = \lambda_{NN} = 0; the decision costs of all rules are then defined as [35]:
Cost of positive rule: (1 - p) \lambda_{PN},
Cost of boundary rule: p \lambda_{BP} + (1 - p) \lambda_{BN},
Cost of negative rule: p \lambda_{NP}.    (9)
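To make the cost formulas concrete, the following sketch evaluates Eq. (8) for a given p and a cost matrix; the numeric \lambda values are invented for illustration:

```python
def rule_costs(p, lam):
    """Bayesian expected cost of each rule type (Eq. (8)) for an
    equivalence class whose majority decision has probability p.
    lam maps 'PP', 'PN', 'BP', 'BN', 'NP', 'NN' to cost values."""
    return {
        'positive': p * lam['PP'] + (1 - p) * lam['PN'],
        'boundary': p * lam['BP'] + (1 - p) * lam['BN'],
        'negative': p * lam['NP'] + (1 - p) * lam['NN'],
    }

# With lam['PP'] = lam['NN'] = 0 this reduces to Eq. (9).
lam = {'PP': 0, 'PN': 4, 'BP': 1, 'BN': 1, 'NP': 4, 'NN': 0}
costs = rule_costs(0.8, lam)
```

For p = 0.8 under these invented costs, the positive rule is cheapest (0.8), the boundary rule costs 1.0, and the negative rule costs 3.2, which matches the intuition that a class with a strong majority should be accepted.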
For a given decision table, the decision cost of the table can be expressed as:
COST = \sum_{p_i \geq \alpha} (1 - p_i) \lambda_{PN} + \sum_{\beta < p_j < \alpha} \left( p_j \lambda_{BP} + (1 - p_j) \lambda_{BN} \right) + \sum_{p_k \leq \beta} p_k \lambda_{NP},    (10)
where p_i = p(D_{\max}([x_i]_A) \mid [x_i]_A). In this expression, the cost of the whole table is composed of three types of cost: the cost of the positive rules, the cost of the boundary rules, and the cost of the negative rules.
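The total cost of Eq. (10) can be sketched in code as follows, assuming \lambda_{PP} = \lambda_{NN} = 0 as in Eq. (9); the probabilities and cost values below are invented for the example:

```python
def table_cost(probs, alpha, beta, lam):
    """Decision cost of a table per Eq. (10): probs[i] is
    p_i = p(D_max([x_i]_A) | [x_i]_A) for object x_i."""
    cost = 0.0
    for p in probs:
        if p >= alpha:                        # positive rule
            cost += (1 - p) * lam['PN']
        elif p <= beta:                       # negative rule
            cost += p * lam['NP']
        else:                                 # boundary rule
            cost += p * lam['BP'] + (1 - p) * lam['BN']
    return cost

lam = {'PN': 4, 'NP': 4, 'BP': 1, 'BN': 1}
total = table_cost([0.9, 0.5, 0.2], alpha=0.7, beta=0.3, lam=lam)
```

Each object contributes through exactly one of the three sums, depending on which region its equivalence class falls into.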
3. Attribute reducts in decision-theoretic rough set models
In this section, we review existing definitions of attribute reducts in the Pawlak rough set model and in decision-theoretic rough set models. The purpose of attribute reduction is to find a minimal subset of attributes that satisfies or improves one or several criteria compared with the entire set of attributes. By reviewing the current definitions of attribute reducts, we find that the main difference among these definitions lies in their criteria. Accordingly, we classify these definitions into two categories: qualitative attribute reducts based on qualitative criteria and quantitative attribute reducts based on quantitative criteria.
Before giving a detailed explanation of the two kinds of reducts, we first review definitions of attribute reducts in the Pawlak rough set model.
3.1. Attribute reducts in Pawlak rough set model
A classical attribute reduct in the Pawlak rough set model is a relative reduct with respect to the decision attribute D, which is defined by requiring that the positive region of the decision table, i.e., of \pi_D, is unchanged.
Definition 2 [20]. Given a decision table S = (U, At = C \cup D, \{ V_a \mid a \in At \}, \{ I_a \mid a \in At \}), an attribute set R \subseteq C is a Pawlak reduct of C with respect to D if it satisfies the following two conditions:

(1) POS(\pi_D \mid \pi_R) = POS(\pi_D \mid \pi_C);
(2) for any attribute a \in R, POS(\pi_D \mid \pi_{R - \{a\}}) \neq POS(\pi_D \mid \pi_C).
In this definition, condition (1) is also called a jointly sufficient condition and condition (2) is called an individually necessary condition.
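Definition 2 can be checked mechanically: condition (1) compares the positive region of R with that of C, and condition (2) drops each attribute of R in turn. A minimal sketch under the Pawlak (qualitative) positive region; the helper names and toy table are invented:

```python
from collections import defaultdict

def positive_region(objects, A, D):
    """Pawlak positive region: objects whose A-equivalence class is
    consistent, i.e. contains a single decision value."""
    blocks = defaultdict(list)
    for x in objects:
        blocks[tuple(x[a] for a in A)].append(x)
    pos = []
    for block in blocks.values():
        if len({x[D] for x in block}) == 1:   # pure decision class
            pos.extend(block)
    return pos

def is_pawlak_reduct(objects, R, C, D):
    """Check conditions (1) and (2) of Definition 2 for R, a subset of C."""
    ids = lambda xs: {id(x) for x in xs}
    full = ids(positive_region(objects, C, D))
    if ids(positive_region(objects, R, D)) != full:
        return False                          # (1) jointly sufficient fails
    return all(ids(positive_region(objects, [b for b in R if b != a], D)) != full
               for a in R)                    # (2) individually necessary

# Toy table where attribute 'b' is redundant with respect to 'd'.
U = [{'a': 0, 'b': 0, 'd': 0}, {'a': 1, 'b': 0, 'd': 1}, {'a': 1, 'b': 1, 'd': 1}]
```

In this table {a} is a Pawlak reduct of {a, b}, while {a, b} itself is not, because dropping b leaves the positive region unchanged, violating condition (2).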
In the Pawlak rough set model, POS(\pi_D \mid \pi_C) \cap BND(\pi_D \mid \pi_C) = \emptyset and POS(\pi_D \mid \pi_C) \cup BND(\pi_D \mid \pi_C) = U. For a reduct R \subseteq C, the condition POS(\pi_D \mid \pi_R) = POS(\pi_D \mid \pi_C) is equivalent to BND(\pi_D \mid \pi_R) = BND(\pi_D \mid \pi_C). The requirement of keeping the positive region thus guarantees the same boundary region.
Many researchers use an equivalent definition of a Pawlak reduct. It is a quantitative definition, in contrast with the qualitative definition of a Pawlak reduct given above. The quantitative definition is based on a measure called the quality of classification, which is defined as:

\gamma(\pi_D \mid \pi_A) = \frac{|POS(\pi_D \mid \pi_A)|}{|U|}.    (11)
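The quality of classification in Eq. (11) can be computed directly from a table; a minimal sketch, with an invented toy table:

```python
from collections import defaultdict

def classification_quality(objects, A, D):
    """gamma(pi_D | pi_A) = |POS(pi_D | pi_A)| / |U|  (Eq. (11))."""
    blocks = defaultdict(list)
    for x in objects:
        blocks[tuple(x[a] for a in A)].append(x)
    # Only consistent (single-decision) classes contribute to POS.
    pos_size = sum(len(b) for b in blocks.values()
                   if len({x[D] for x in b}) == 1)
    return pos_size / len(objects)

U = [{'a': 0, 'd': 0}, {'a': 0, 'd': 1},   # inconsistent class [x]_{a=0}
     {'a': 1, 'd': 1}, {'a': 1, 'd': 1}]   # consistent class [x]_{a=1}
gamma = classification_quality(U, ['a'], 'd')
```

Here only the two objects of the consistent class [x]_{a=1} belong to the positive region, so \gamma = 2/4 = 0.5.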
154 X. Jia et al. / Information Sciences 219 (2013) 151–167