Multi-Valued Neural Network Trained by
Differential Evolution for Synthesizing
Multiple-Valued Functions
Huiqin Chen
Jiangsu Agri-animal
Husbandry Vocational College,
Jiangsu, China
Sheng Li
College of Computer Science
and Technology,
Taizhou University,
Jiangsu, China
Qian Shi and Dongmei Shen
Department of Automation,
Donghua University,
Shanghai, China
Shangce Gao
Faculty of Engineering,
University of Toyama,
Toyama, Japan
Email: gaosc@eng.u-toyama.ac.jp
Abstract—We consider the problem of synthesizing multiple-
valued logic (MVL) functions by neural networks. A differential
evolution algorithm is proposed to train the learnable multiple-
valued logic network. The optimum window and biasing pa-
rameters to be chosen for convergence are derived. Experiments
performed on benchmark problems demonstrate the convergence
and robustness of the network. Preliminary results indicate that
differential evolution is suitable for training MVL networks to
synthesize MVL functions.
I. INTRODUCTION
Multiple-valued logic (MVL) has been of interest to engineers
involved in various aspects of computing for over forty years.
MVL implements digital operations using a set of logical values
with cardinality three or more instead of the binary set. The
synthesis (mainly the minimization) of MVL is an important
technique for reducing the area required by a programmable
logic array [1], and for accumulating knowledge about objects
and processes [2]. Exact minimization of MVL functions
[3] is prohibitively expensive. Heuristic algorithms involving
functional decomposition techniques [4], iterative functional
improvement methods [5], direct cover techniques [6] and evo-
lutionary optimization methods [7] for minimizing MVL func-
tions have been reported. Although these approaches provide
promising alternatives for minimizing MVL functions, they are
time consuming [1] and usually lack learning capacity [8].
Recently, MVL has been associated with neural networks [9].
An MVL network (MVLN) [8] consisting of layered arithmetic
piecewise linear units was proposed. The
MVLN possesses functional completeness properties due to its
construction method based on Allen-Givone Algebra and is
capable of making use of prior knowledge while constructing
the network [10]. Moreover, the hardware implementation of
the proposed MVL network is rather simple and straightfor-
ward since the arithmetic operations of the network are wired
sums and piecewise linear operations [11]. The problem of
minimizing MVL functions (i.e., finding a minimal expression
for arbitrary MVL functions) using MVLN is thus converted
into a network learning task. Local search algorithms [10],
[12], genetic algorithm [13] and clonal selection algorithm [14]
have been proposed to learn the MVL network.
In this paper, we propose a novel differential evolution
algorithm to train the MVLN. Differential evolution (DE) [15] is
a population-based stochastic meta-heuristic for global
optimization, related to both simplex methods and evolutionary
algorithms. Its advantages are a simple structure, ease of
use, speed, and robustness. Due to these advantages, DE has
been successfully applied in solving optimization problems
arising in different practical applications, including data min-
ing, parameter identification, digital filter design, scheduling,
etc. [16]. A differential evolution based neural network training
algorithm was first introduced in [17] where the method’s
characteristics as a global optimizer were compared to other
neural network training methods. Training the MVLN can also
be considered a difficult global optimization problem, even
though local optimizers are usually applied for training.
Applying global optimizers to training is well motivated,
since local optimizers have inherently limited capabilities
for global optimization. The novelties of this work
are therefore three-fold: (1) DE is for the first time used to
train multiple-valued logic networks whose transfer functions
do not satisfy the requirements concerning the availability of
gradient information. (2) The effectiveness of DE is verified
by comparing its performance with many other traditional
algorithms. (3) The topological structure of the MVLN trained
by DE is automatically pruned to synthesize an MVL function.
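As a concrete illustration, the sketch below shows the canonical DE/rand/1/bin scheme that such a training algorithm builds on, where the candidate vector encodes the trainable MVLN parameters. The function name de_train, the fitness callback, and the settings F, CR, pop_size, and generations are illustrative assumptions, not the exact configuration used in this work.

import numpy as np

def de_train(fitness, dim, pop_size=20, F=0.5, CR=0.9,
             bounds=(0.0, 1.0), generations=100, seed=0):
    """Minimal DE/rand/1/bin sketch: minimizes `fitness` over `dim` reals."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            # Mutation: v = x_a + F * (x_b - x_c).
            v = pop[a] + F * (pop[b] - pop[c])
            # Binomial crossover with one guaranteed mutant component.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.clip(np.where(mask, v, pop[i]), lo, hi)
            # Greedy one-to-one selection.
            t_cost = fitness(trial)
            if t_cost <= cost[i]:
                pop[i], cost[i] = trial, t_cost
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Usage sketch: here the fitness is a toy sphere function; for MVLN
# training it would be the network's error over the MVL truth table.
# x, err = de_train(lambda x: float(np.sum(x ** 2)), dim=5)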
II. MULTIPLE-VALUED LOGIC NETWORK
The multiple-valued logic network [8] is shown in Fig. 1.
The architecture of the MVL network including its topological
structure (i.e., connectivity) is constructed using a set of
operators {MAX, MIN, Literal} based on the
sum-of-products expression. Given the set R = {0, 1, ..., r − 1}
for an n-variable, r-valued system, the operators used in the
MVLN are defined as follows.
(1) MAX and MIN operators:

$x_1 + x_2 + \cdots + x_n = \mathrm{MAX}(x_1, x_2, \ldots, x_n)$ (1)

$x_1 \cdot x_2 \cdots x_n = \mathrm{MIN}(x_1, x_2, \ldots, x_n)$ (2)
(2) Literal operators:

$x(a, b) = \begin{cases} r - 1 & a \le x \le b \\ 0 & \text{otherwise} \end{cases}$ (3)
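To make the operator definitions concrete, the following minimal Python sketch realizes Eqs. (1)-(3); the function names mvl_max, mvl_min, and literal are our own hypothetical choices, not part of the MVLN formulation.

def mvl_max(*xs):
    # Eq. (1): the MVL sum is the MAX operator.
    return max(xs)

def mvl_min(*xs):
    # Eq. (2): the MVL product is the MIN operator.
    return min(xs)

def literal(x, a, b, r):
    # Eq. (3): window literal, r - 1 inside [a, b], 0 otherwise.
    return r - 1 if a <= x <= b else 0

# Example: one MIN-of-literals product term of a ternary (r = 3)
# function, MIN(x1(0, 1), x2(2, 2)), evaluated at x1 = 1, x2 = 2.
assert mvl_min(literal(1, 0, 1, 3), literal(2, 2, 2, 3)) == 2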