Fig. 4. Abstract structure of the perception-prototype field limited neural networks. The number of prototypes in the prototype field is fixed. Lines in the prototype field represent the topological relationships between prototypes. In some algorithms, the topology is not predefined but is learned by the algorithm itself, as in the TRN.
Fig. 5. Abstract structure of the prototype field open-ended neural networks. The prototypes with dashed edges in the prototype field are added or pruned during learning. Some algorithms, such as the ART network, do not define a topological relationship between the prototypes.
To solve this problem, many growing or incremental neural networks have been designed. Growing SOM (GSOM) [19], growing cell structures [20], and growing NG (GNG) [21] insert new prototype(s) after every λ samples learned, where λ is a constant parameter. The life-long learning cell structure [22] introduces an insertion criterion to decide whether to insert a new prototype after every λ patterns learned; it also deletes prototypes to avoid overfitting. However, in these methods, during each λ period, the input sample is forced to merge with a prototype no matter how large the gap between them is. Considering the physical meaning, it is unreasonable to merge two patterns with a significant difference.
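As an illustration, the following minimal sketch shows this λ-periodic growth rule in the style of GNG. The variable names and the simplified insertion rule (splitting toward the current sample rather than toward the highest-error topological neighbor) are our own, not taken from [19]-[22].

```python
import numpy as np

# Illustrative lambda-periodic growth in the style of GNG (simplified).
# Real GNG inserts between the highest-error unit and its highest-error
# topological neighbor; here we split toward the current sample instead.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(2, 3))   # start with two prototypes in R^3
errors = np.zeros(2)                   # accumulated quantization error per prototype
lam = 100                              # insertion period (the constant lambda)

def learn(sample, step, eps=0.05):
    global prototypes, errors
    d = np.linalg.norm(prototypes - sample, axis=1)
    w = int(np.argmin(d))
    errors[w] += d[w] ** 2                           # accumulate error on the winner
    prototypes[w] += eps * (sample - prototypes[w])  # merge, however far the sample is
    if (step + 1) % lam == 0:                        # periodic insertion, regardless of fit
        q = int(np.argmax(errors))                   # unit with the largest error
        new = (prototypes[q] + sample) / 2.0         # simplified stand-in for GNG's rule
        prototypes = np.vstack([prototypes, new])
        errors = np.append(errors, 0.0)
        errors[q] *= 0.5                             # decay error after insertion

for t in range(1000):                                # example usage on random data
    learn(rng.normal(size=3), t)
```

Note that the sample is always merged into the winner, which is exactly the weakness discussed above: the merge happens even when the sample lies far from every prototype.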
There is another well-known problem, the stability-plasticity dilemma [9]: making the system quickly learn new knowledge (e.g., new objects) without just as quickly being forced to forget previously learned, but still useful, memories. The adaptive resonance theory (ART) network [9], fuzzy ART [23], evolving SOMs [24], and TopoART [25] (the ART family) create a new prototype when no match occurs between the current input sample and the current category set. The degree of matching is controlled by a parameter known as the vigilance parameter. This strategy makes the network add a new prototype when the input sample is not similar to the existing prototypes that the network has learned. However, the vigilance parameter must be predefined, which is difficult when we have little prior knowledge about the learning task, especially in unsupervised learning. Evolving vector quantization [26] introduces an online split-and-merge strategy to overcome a poor setting of the vigilance parameter.
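The vigilance test can be illustrated with a short sketch. Here cosine similarity is a stand-in for the ART match function and rho plays the role of the vigilance parameter; this is not the exact ART dynamics.

```python
import numpy as np

def art_like_step(sample, prototypes, rho, eps=0.1):
    """One ART-style step: resonate if the best match reaches the
    vigilance rho, otherwise create a new prototype (category).

    Illustrative only: cosine similarity stands in for the ART match
    function, and the update is a simple moving average.
    """
    if prototypes:
        sims = [float(np.dot(sample, p)
                      / (np.linalg.norm(sample) * np.linalg.norm(p) + 1e-12))
                for p in prototypes]
        best = int(np.argmax(sims))
        if sims[best] >= rho:                    # match: update the resonant prototype
            prototypes[best] += eps * (sample - prototypes[best])
            return best
    prototypes.append(sample.copy())             # no match: new category
    return len(prototypes) - 1
```

The sketch makes the difficulty visible: everything hinges on the fixed rho, which must be chosen before any data are seen.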
The self-organizing incremental neural network (SOINN) [27], enhanced-SOINN [28], adjusted-SOINN [29], and load-balancing-SOINN [30] decide whether to create a new prototype for the input sample according to the prototype distribution in the local region around the input sample. These methods overcome the disadvantages of GSOM, GNG, and the ART-family algorithms. Incremental learning vector quantization [31], [32] introduces the idea of the adjusted-SOINN into learning vector quantization and achieves very good results.
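The key idea, an adaptive similarity threshold derived from the local prototype distribution, can be sketched as follows. This is a simplified reading of the adjusted-SOINN rule; the function and variable names are ours.

```python
import numpy as np

def soinn_threshold(i, prototypes, neighbors):
    """Adaptive similarity threshold in the spirit of adjusted-SOINN.

    If prototype i has topological neighbors, its threshold is the distance
    to its farthest neighbor; otherwise it is the distance to its nearest
    other prototype. Simplified for illustration.
    """
    others = neighbors.get(i, set())
    if others:
        return max(np.linalg.norm(prototypes[i] - prototypes[j]) for j in others)
    d = [np.linalg.norm(prototypes[i] - p) for k, p in enumerate(prototypes) if k != i]
    return min(d)

def needs_new_prototype(sample, prototypes, neighbors):
    # A new prototype is created when the sample falls outside the winner's
    # (or the runner-up's) adaptive threshold, i.e., outside the locally
    # observed prototype spread. Requires at least two prototypes.
    d = [np.linalg.norm(sample - p) for p in prototypes]
    order = np.argsort(d)
    w1, w2 = int(order[0]), int(order[1])
    return (d[w1] > soinn_threshold(w1, prototypes, neighbors) or
            d[w2] > soinn_threshold(w2, prototypes, neighbors))
```

Because the threshold is recomputed from the current prototype layout, no global vigilance-like constant needs to be fixed in advance.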
We summarize these unsupervised incremental learning methods as prototype field open-ended models, as shown in Fig. 5. This type of network focuses on the prototype field; it makes the prototype field open-ended for new categories by adding new prototypes. In recent years, these methods have been applied to various domains, including reasoning systems [33], pattern recognition [34], and computer vision [35].
However, neither the perception-prototype field limited model nor the prototype field open-ended model can expand the perception field, which we consider a very important ability for unsupervised learning, as mentioned in Section I. For example, suppose we install new sensors on a robot as a new information channel and want the robot to use the new sensors effectively. The perception-prototype field limited and prototype field open-ended models cannot deal with such a task; the proposed PEN is designed to solve such problems.
III. PERCEPTION EVOLUTION NETWORK
A. Problem Formulation
Assume that the original neural network N has n neurons in the perception field, which receive n-dimensional external data $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$. After a period of learning, some prototypes are created in the prototype field of PEN. Then, m new sensory neurons emerge in the perception field of PEN, and the received data becomes $x = (x_1, x_2, \ldots, x_n, x_{n+1}, \ldots, x_{n+m}) \in \mathbb{R}^{n+m}$. The learned prototypes will be mapped to a higher-dimensional space that contains the dimensions of these m new sensory neurons. If the new sense brings some new distinguishable categories, PEN will create prototypes for such new categories.
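One possible reading of this mapping is sketched below: an n-dimensional prototype is lifted into $\mathbb{R}^{n+m}$ by keeping its learned coordinates and initializing the m new coordinates from the (n+m)-dimensional sample that matched it on its first n dimensions. This is our assumption for illustration, not the paper's exact mapping rule.

```python
import numpy as np

def expand_prototype(proto_n, sample_nm, n):
    """Lift a learned n-dim prototype into R^{n+m} (our illustrative rule,
    not the paper's exact mapping).

    The first n coordinates keep the learned values; the m new coordinates
    are initialized from the (n+m)-dim sample that matched the prototype
    on its first n dimensions.
    """
    expanded = np.asarray(sample_nm, dtype=float).copy()
    expanded[:n] = proto_n                 # preserve the learned low-dim part
    return expanded                        # new dims come from the sample
```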
The entire workflow of the PEN is as follows. The prototype field of PEN is empty at the beginning, and learning samples are fed into the network sequentially, i.e., in an online way. PEN creates two prototypes from the first two input samples. For each subsequent input sample, PEN first conducts prototype competition; then, prototype learning and prototype self-adaptive associating are conducted according to the result of the competition step, and the similarity thresholds of the activated prototypes are updated. When all these steps are done, PEN processes the next input sample. Prototype pruning is conducted after every λ samples learned.
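The control flow can be summarized in a compact, runnable sketch. The competition, learning, and threshold-update rules below are simplified placeholders (the threshold rule in particular is a toy), and prototype self-adaptive associating is omitted; only the overall loop mirrors the description above.

```python
import numpy as np

def pen_online(stream, lam=200, eps=0.05):
    """Skeleton of the PEN online workflow (simplified placeholders)."""
    protos, thresholds = [], []
    for t, x in enumerate(stream):
        x = np.asarray(x, dtype=float)
        if len(protos) < 2:                       # bootstrap: first two samples
            protos.append(x.copy())
            thresholds.append(np.inf)
            continue
        d = [np.linalg.norm(x - p) for p in protos]
        w = int(np.argmin(d))                     # prototype competition
        if d[w] <= thresholds[w]:
            protos[w] += eps * (x - protos[w])    # prototype learning
        else:
            protos.append(x.copy())               # sample too far: new prototype
            thresholds.append(np.inf)
        thresholds[w] = min(thresholds[w], 2.0 * d[w])  # threshold update (toy rule)
        if (t + 1) % lam == 0:
            pass  # prototype pruning would remove rarely activated prototypes here
    return protos
```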
When some new sensory neurons are introduced, PEN finds, for each input sample, a low-dimensional prototype to map into the high-dimensional space. Prototype self-adaptive associating and similarity threshold updating are then conducted similarly to the procedure before