EXTREME LEARNING MACHINE AND APPLICATIONS
Extreme learning machine for interval neural networks
Dakun Yang
•
Zhengxue Li
•
Wei Wu
Received: 16 September 2013 / Accepted: 16 November 2013 / Published online: 5 December 2013
© Springer-Verlag London 2013
Abstract Interval data offer a valuable way of repre-
senting the available information in complex problems
where uncertainty, inaccuracy, or variability must be taken
into account. Considered in this paper is the learning of
interval neural networks, of which the input and output are
vectors with interval components, and the weights are real
numbers. The back-propagation (BP) learning algorithm is
very slow for interval neural networks, just as for usual
real-valued neural networks. Extreme learning machine
(ELM) has faster learning speed than the BP algorithm. In
this paper, ELM is applied for learning of interval neural
networks, resulting in an interval extreme learning machine
(IELM). There are two steps in the ELM for usual feed-
forward neural networks. The first step is to randomly
generate the weights connecting the input and the hidden
layers, and the second step is to use the Moore–Penrose
generalized inverse to determine the weights connecting
the hidden and output layers. The first step can be directly
applied for interval neural networks. But the second step
cannot, due to the involvement of nonlinear constraint
conditions for IELM. Instead, we use the same idea as that
of the BP algorithm to form a nonlinear optimization
problem to determine the weights connecting the hidden
and output layers of IELM. Numerical experiments show
that IELM is much faster than the usual BP algorithm,
and the generalization performance of IELM is much better
than that of BP, while the training error of IELM is slightly
worse than that of BP, suggesting that BP may be
over-fitting.
Keywords Interval computation · Interval neural network · Extreme learning machine · Interval extreme learning machine
1 Introduction
In real-life situations, available information is often
uncertain, imprecise, and incomplete. In many such
applications, it is natural to express and treat the infor-
mation in the form of intervals rather than real values
[1–4]. For instance, interval neural networks [5–11] have
been proposed for handling interval data. The back-propagation
(BP) algorithm [12, 13] was extended to the case of interval
input vectors [5]. But, as is well known, the BP algorithm is
usually very slow.
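As a minimal illustration of how interval inputs can propagate through a single neuron with real-valued weights, consider the standard interval-arithmetic rule: for a weight w ≥ 0, w·[l, u] = [w·l, w·u], while for w < 0 the endpoints swap; a monotone activation then preserves the interval ordering. This sketch is illustrative only and assumes a sigmoid activation; the exact formulation used in [5] may differ.

```python
import math


def interval_neuron(lowers, uppers, weights, bias):
    """Propagate a componentwise interval input [l_i, u_i] through one
    neuron with real weights and a monotone (sigmoid) activation.

    For w >= 0, w * [l, u] = [w*l, w*u]; for w < 0 the endpoints swap.
    """
    lo = bias + sum(w * (l if w >= 0 else u)
                    for w, l, u in zip(weights, lowers, uppers))
    hi = bias + sum(w * (u if w >= 0 else l)
                    for w, l, u in zip(weights, lowers, uppers))
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    # A monotone activation maps the pre-activation interval endpoints
    # directly to the output interval endpoints.
    return sigmoid(lo), sigmoid(hi)
```

For example, with inputs [0, 1] in both components, weights (1, -2), and bias 0.5, the pre-activation interval is [-1.5, 1.5], which the sigmoid maps to roughly [0.182, 0.818].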
Recently, the extreme learning machine (ELM) [14–18]
proposed by Huang [19–21] has become a popular learning
algorithm for single hidden layer feedforward networks
(SLFNs). There are two steps in the ELM for usual feed-
forward neural networks. The first step is to randomly
generate the weights connecting the input and the hidden
layers, and the second step is to use the Moore–Penrose
generalized inverse to determine the weights connecting
the hidden and output layers. The learning speed of ELM
can be thousands of times faster than that of the BP algorithm, since
the hidden layer of SLFNs need not be tuned.
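The two steps above can be sketched as follows for a generic real-valued SLFN. This is a minimal illustrative sketch, not the authors' implementation: the function names, sigmoid activation, and layer sizes are assumptions made for the example.

```python
import numpy as np


def elm_train(X, T, n_hidden, rng=None):
    """Train a single hidden layer feedforward network with ELM.

    X: (n_samples, n_inputs) inputs; T: (n_samples, n_outputs) targets.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_inputs = X.shape[1]
    # Step 1: randomly generate the input-to-hidden weights and biases;
    # these are never tuned afterwards.
    W = rng.standard_normal((n_inputs, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output (sigmoid)
    # Step 2: determine the hidden-to-output weights via the Moore-Penrose
    # generalized inverse, i.e. the least-squares solution beta = pinv(H) T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta


def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because step 2 is a single linear least-squares solve rather than an iterative gradient descent over all weights, training reduces to one matrix decomposition, which is the source of ELM's speed advantage over BP.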
The aim of this paper is to apply ELM for the learning
of single hidden layer interval feedforward networks
(SLIFNs), of which the input and output are vectors with
interval components, and the weights are real numbers.
The SLIFN with ELM learning is called an interval
extreme learning machine (IELM). The above-mentioned
D. Yang · Z. Li · W. Wu (✉)
School of Mathematical Sciences, Dalian University of Technology, Dalian, China
e-mail: wuweiw@dlut.edu.cn

D. Yang
e-mail: ydk1026@gmail.com

Neural Comput & Applic (2016) 27:3–8
DOI 10.1007/s00521-013-1519-3