Xu et al. / J Zhejiang Univ-Sci C (Comput & Electron) 2012 13(2):131-138
Optimizing radial basis function neural network based on rough sets and affinity propagation clustering algorithm*

Xin-zheng XU†1, Shi-fei DING1,2, Zhong-zhi SHI2, Hong ZHU1

(1 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China)
(2 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China)

† E-mail: xuxinzh@163.com
Received June 25, 2011; Revision accepted Oct. 25, 2011; Crosschecked Dec. 29, 2011
Abstract: A novel method based on rough sets (RS) and the affinity propagation (AP) clustering algorithm is developed to
optimize a radial basis function neural network (RBFNN). First, attribute reduction (AR) based on RS theory, as a preprocessor of
RBFNN, is presented to eliminate noise and redundant attributes of datasets while determining the number of neurons in the input
layer of RBFNN. Second, an AP clustering algorithm is proposed to search for the centers and their widths without a priori
knowledge about the number of clusters. These parameters are transferred to the RBF units of RBFNN as the centers and widths of
the RBF function. Then the weights connecting the hidden layer and output layer are evaluated and adjusted using the least squares method (LSM) according to the outputs of the RBF units and the desired outputs. Experimental results show that the proposed method has more powerful generalization capability than conventional RBFNN training methods.
Key words: Radial basis function neural network (RBFNN), Rough sets, Affinity propagation, Clustering
doi: 10.1631/jzus.C1100176    Document code: A    CLC number: TP183
1 Introduction
The radial basis function neural network (RBFNN), as a type of feedforward neural network (NN), has recently attracted extensive research interest because of its simple architecture, high approximation and regularization capability, and good local specialization and global generalization ability. RBFNN has proved to be able to approximate any reasonable continuous function mapping with a satisfactory level of accuracy (Zhang and Zhang, 2004). To date, RBFNN has been widely used in function approximation, pattern recognition, data classification, control, time series prediction, and nonlinear system identification (Guerra and Coelho, 2008; Jing et al., 2008; Lee and Ko, 2009; Beyhan and Alci, 2010; Jayasree et al., 2010; Hou and Han, 2011).
However, optimizing the structure of RBFNN remains challenging. The parameters of RBFNN involve the numbers of neurons in the input, hidden, and output layers, the RBF centers and widths of the neurons in the hidden layer, and the linear weights connecting the hidden layer and output layer. Each neuron in the hidden layer of RBFNN produces a radially symmetric response around a node parameter vector called a 'center' (Du et al., 2010). As is well known, the performance of RBFNN critically relies on the selection of RBF centers. Several different learning approaches have been presented to determine the RBF centers of neurons in the hidden layer. First, the conventional strategy is a clustering technique, including k-means clustering, fuzzy k-means clustering, and hierarchical clustering,
* Project supported by the National Natural Science Foundation of China (Nos. 41074003 and 60975039), the Opening Foundation of the Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (No. IIP2010-1), and the Youth Science Foundation of China University of Mining and Technology (Nos. 2008A045 and 2009A053)
© Zhejiang University and Springer-Verlag Berlin Heidelberg 2012