Fig. 1. Single-hidden-layer feedforward network (SLFN)
With L hidden nodes in the hidden layer, the output function of an SLFN can be expressed as:

f(x) = \sum_{i=1}^{L} \beta_i h_i(x) = h(x)\beta . (1)

where \beta = [\beta_1, \beta_2, \ldots, \beta_L]^T is the vector of the output weights between the hidden layer of L neurons and the output neuron, and h(x) = [h_1(x), h_2(x), \ldots, h_L(x)] is the output vector of the hidden layer with respect to the input x, which maps the data from the input space to the ELM feature space [11].
Only in the special case L = N can the system be solved exactly; this condition is rare in practice, since L is far smaller than the number of training samples N in actual problems, so there is an error between the output value and the actual value. The key step is therefore to find the least-squares solution of the linear system H\beta = T:

\hat{\beta} = H^{\dagger} T . (2)
where H^{\dagger} is the Moore-Penrose generalized inverse of the matrix H [12, 13]: H^{\dagger} = (H'H)^{-1}H' or H^{\dagger} = H'(HH')^{-1}, depending on the singularity of H'H or HH'.
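As a minimal sketch of equation (2), the following NumPy snippet builds a random sigmoid hidden layer and solves for the output weights with the Moore-Penrose pseudoinverse. The data, network sizes, and sigmoid activation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N samples, d features (illustrative values).
N, d, L = 200, 3, 20                       # L hidden neurons, L << N
X = rng.normal(size=(N, d))
T = np.sin(X.sum(axis=1, keepdims=True))   # target vector

# Random hidden layer: h(x) = sigmoid(x W + b), fixed after random initialization.
W = rng.normal(size=(d, L))
b = rng.normal(size=(1, L))
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # N x L hidden-layer output matrix

# Least-squares output weights via the Moore-Penrose pseudoinverse (Eq. 2).
beta = np.linalg.pinv(H) @ T

# Network output f(x) = h(x) beta for the training inputs.
Y = H @ beta
print(Y.shape)
```

`np.linalg.pinv` computes H^{\dagger} via the SVD, which covers both cases of the singularity of H'H and HH' without branching.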
In the more recently developed kernel ELM, introducing a positive regularization coefficient into the learning system makes it more stable. If HH' is nonsingular, the coefficient 1/\lambda is added to the diagonal of HH' in the calculation of the output weights, giving \beta = H'(I/\lambda + HH')^{-1}T, and the corresponding output function of the regularized ELM is:

f(x) = h(x)\beta = h(x)H'(I/\lambda + HH')^{-1}T . (3)
[11] showed that ELM with a kernel matrix can be defined as follows. Let \Omega_{ELM} = HH', with \Omega_{ELM\,i,j} = h(x_i) \cdot h(x_j) = K(x_i, x_j). The output function can then be written as:

f(x) = h(x)H'(I/\lambda + HH')^{-1}T = [K(x, x_1), \ldots, K(x, x_N)] (I/\lambda + \Omega_{ELM})^{-1} T . (4)
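Equation (4) replaces the explicit feature map h(x) with kernel evaluations. A minimal sketch, assuming an RBF kernel K(a, b) = exp(-\gamma ||a - b||^2) and illustrative data and hyperparameters not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data (illustrative values).
N, d, lam, gamma = 100, 3, 100.0, 0.5
X = rng.normal(size=(N, d))
T = np.sin(X.sum(axis=1, keepdims=True))

def rbf_kernel(A, B, gamma):
    """K(a, b) = exp(-gamma * ||a - b||^2) for all pairs of rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# Kernel matrix Omega_ELM with Omega[i, j] = K(x_i, x_j); h(x) is never formed.
Omega = rbf_kernel(X, X, gamma)

# alpha = (I/lam + Omega_ELM)^{-1} T, so f(x) = [K(x, x_1), ..., K(x, x_N)] alpha.
alpha = np.linalg.solve(np.eye(N) / lam + Omega, T)

# Predict on new inputs using only kernel evaluations against the training set (Eq. 4).
X_new = rng.normal(size=(5, d))
Y_new = rbf_kernel(X_new, X, gamma) @ alpha
print(Y_new.shape)
```

Note that in this kernel form the number of hidden nodes L no longer appears: the model is parameterized entirely by the N training points and the kernel.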