Novel algorithms of attribute reduction with variable precision rough
set model
Yanyan Yang a,*, Degang Chen b, Ze Dong a
a School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, PR China
b Department of Mathematics and Physics, North China Electric Power University, Beijing 102206, PR China
Article info
Article history:
Received 25 October 2013
Received in revised form 4 January 2014
Accepted 10 February 2014
Communicated by D. Wang
Available online 12 April 2014
Keywords: Variable precision rough sets; Discernibility matrix; Minimal element; Equivalence class pair relative to condition attributes
Abstract
The variable precision rough set model resists noise in data by introducing a parameter to relax the strict
inclusion in approximations of the classical rough set model, and attribute reduction with the variable
precision rough set model aims at deleting dispensable condition attributes from a decision system by
considering this relaxation. In the variable precision rough set model, the discernibility matrix approach is the theoretical foundation of attribute reduction. However, this approach always carries a heavy computational load, so improving its efficiency is clearly important for finding reducts faster.
In this paper, we observe that only the minimal elements in the discernibility matrix are sufficient to find reducts, and that each minimal element is determined by at least one equivalence class pair relative to condition attributes. With this motivation, the relative discernibility relation of a condition attribute is
defined to characterize minimal elements in the discernibility matrix, and the algorithm of finding all
minimal elements is developed by this characterization. Based on the algorithm of finding all minimal
elements, we develop two algorithms to find all reducts and one reduct in variable precision rough sets.
Several experiments demonstrate that the methods proposed in this paper effectively reduce the computational load.
© 2014 Elsevier B.V. All rights reserved.
1. Introduction
The variable precision rough set model (VPRS) [30,31], which is
one of the improvements of Pawlak's rough set model [7,10,24,30],
shows certain robustness to misclassification and noise in data.
It relaxes the strict inclusion in approximations of Pawlak's
rough set model [17,18] to the partial inclusion by considering
a parameter as an inclusion degree. Essentially, this parameter
represents a threshold that determines the portion of the objects
in an equivalence class which are classified to the same decision
class. Many researchers have focused on the selection of this parameter [1–5,8,21] and on developing extensions of VPRS [11,13,20,26,28]. On the other hand, much research has been devoted to attribute reduction with VPRS [3,9,12,14,25,29]. Similar to the purpose of attribute reduction
with Pawlak's rough set model, attribute reduction with VPRS aims at selecting a smaller set of condition attributes that provides the same classification information as the full set of condition attributes, once this parameter is introduced.
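The relaxed inclusion described above can be sketched in a few lines of Python. This is a minimal illustration under one common convention, in which β is a majority-inclusion threshold in (0.5, 1]; the function names and the toy data are our own, not taken from the paper:

```python
# Sketch of the beta-lower approximation in VPRS (illustrative, not the
# authors' implementation). An equivalence class E belongs to the
# beta-lower approximation of a decision class X when the inclusion
# degree |E ∩ X| / |E| is at least beta.

def inclusion_degree(eq_class, decision_class):
    """Fraction of objects in eq_class that also lie in decision_class."""
    eq_class, decision_class = set(eq_class), set(decision_class)
    return len(eq_class & decision_class) / len(eq_class)

def beta_lower_approximation(partition, decision_class, beta):
    """Union of equivalence classes whose inclusion degree is >= beta."""
    lower = set()
    for eq_class in partition:
        if inclusion_degree(eq_class, decision_class) >= beta:
            lower |= set(eq_class)
    return lower

# Toy example: six objects, two equivalence classes, one decision class.
partition = [{1, 2, 3, 4}, {5, 6}]
decision_class = {1, 2, 3, 5}
# {1,2,3,4} has inclusion degree 0.75 and {5,6} has 0.5, so with
# beta = 0.7 only the first class survives:
beta_lower_approximation(partition, decision_class, beta=0.7)  # the set {1, 2, 3, 4}
```

With β = 1 this collapses to the strict inclusion of Pawlak's lower approximation, which is exactly the relaxation the parameter controls.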
Attribute reduction with VPRS was originally proposed in [30] to preserve the total number of objects in the β-lower approximations of all decision classes, formalized through a dependency function. This kind of
reduct is called an approximate reduct or β-reduct, and it was noted that it does not necessarily preserve the β-lower approximation of each decision class in VPRS [3,9]. Therefore, the decision rules derived from a β-reduct may conflict with those derived from the original decision system [14]. Considering these drawbacks of the β-reduct, the concepts of
β-lower and upper distributed reducts based on VPRS were
presented in [14] to preserve β-lower and upper approximations,
respectively. Meanwhile, the decision rules derived from β-lower and upper distributed reducts are compatible with those derived from the original system. Furthermore, in [14] β-lower
and upper distributed discernibility matrices were also provided
to find β-lower and upper distributed reducts, respectively.
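To make the role of a discernibility matrix and its minimal elements concrete, the following sketch builds the classical (Pawlak) discernibility matrix and keeps only its inclusion-minimal non-empty entries. This is only an illustration of the general idea, not the β-lower or upper distributed matrices of [14]; the function names and toy data are hypothetical:

```python
# Illustrative sketch (classical Pawlak setting): the matrix entry for a
# pair of objects with different decisions is the set of condition
# attributes on which the two objects differ; the inclusion-minimal
# non-empty entries already carry all the information needed for reducts.

def discernibility_entries(objects, decisions):
    """objects: list of attribute-value tuples; decisions: parallel labels.
    Returns the non-empty discernibility sets as frozensets of attribute
    indices."""
    entries = []
    n = len(objects)
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                diff = {a for a in range(len(objects[i]))
                        if objects[i][a] != objects[j][a]}
                if diff:
                    entries.append(frozenset(diff))
    return entries

def minimal_elements(entries):
    """Keep only entries that have no proper subset among the entries."""
    return {e for e in entries
            if not any(f < e for f in entries)}

# Toy decision system: three objects, two condition attributes.
objects = [(0, 1), (0, 0), (1, 1)]
decisions = ['y', 'n', 'n']
minimal_elements(discernibility_entries(objects, decisions))
```

Because every non-minimal entry is a superset of some minimal one, it is absorbed when the discernibility function is simplified, which is the observation the paper exploits for VPRS.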
As noted in [29], the dependency function, the positive region and the β-lower approximation may not be monotonic in VPRS after a condition attribute is deleted. Consequently, the algorithm for finding a β-reduct may not converge when the dependency function is used as the measure. In [22], an attribute reduction algorithm was proposed by using β-variable precision rough entropy, where both the precision parameter and the correction coefficient
were given in advance. The VPRS-QuickReduct algorithm was proposed in [16] to improve the existing QuickReduct algorithm by selecting the attribute with the largest dependency at each iteration and using the VPRS dependency as a criterion, which admitted some small
http://dx.doi.org/10.1016/j.neucom.2014.02.023
0925-2312/© 2014 Elsevier B.V. All rights reserved.
* Corresponding author.
Neurocomputing 139 (2014) 336–344