Chapter: Evolving Connectionist and Fuzzy Connectionist Systems: Theory and Applications for Adaptive, On-line Intelligent Systems



5.5. Sleep eco-training
This strategy was explained in section 2. The main idea is that different modules evolve quickly to capture the most important information concerning their specialised function (e.g., a class). During the active training mode, when examples are presented at the ECOS inputs, each module stores exemplars of the examples relevant to its function. After that, the modules exchange the exemplars stored in their W1 connections, using them as negative examples for the other modules in order to improve their performance (e.g., recognition rate). During sleep eco-training new rule nodes are created and the same evolving algorithm is applied, but to examples (exemplars) that are not presented externally; they are drawn from the already evolved modules. During sleep eco-training the ECOS parameters can take values different from those used in the active training phase, e.g., a different sensitivity threshold and different learning rates.
The following are the parameters of the EFuNNs evolved through sleep eco-training for the three Iris classes: SThr = 0.95; Errthr = 0.05; rn(setosa) = 9; rn(versicolor) = 18; rn(virginica) = 22. Overall classification: Setosa 50 (100%); Versicolor 50 (100%); Virginica 50 (100%). The results of sleep eco-training are better than the results after training with positive data only (see section 5.3); the significant difference is that here false-positive activation is strongly depressed, and in some EFuNNs it is completely eliminated.
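The exemplar-exchange mechanism above can be sketched as follows. This is a toy illustration, not the actual EFuNN learning rules: the similarity-based activation, the 0.1 matching radius, and all class and function names are assumptions made for the sketch.

```python
import numpy as np

class ToyModule:
    """Minimal stand-in for a one-class module; the W1 list plays the
    role of the exemplars stored in the W1 connections."""
    def __init__(self, name):
        self.name = name
        self.W1 = []          # positive exemplars captured in active training
        self.negatives = []   # negative exemplars received during sleep

    def train_active(self, x):
        # active mode: store the presented example as an exemplar
        self.W1.append(np.asarray(x, dtype=float))

    def activation(self, x):
        x = np.asarray(x, dtype=float)
        sim = max(1.0 - np.linalg.norm(x - w) for w in self.W1)
        # sleep-trained negatives depress false-positive activation
        if any(np.linalg.norm(x - n) < 0.1 for n in self.negatives):
            sim = 0.0
        return sim

def sleep_eco_train(modules):
    """Exemplar exchange: every module receives the exemplars stored in
    the other modules as negative examples."""
    for m in modules:
        for other in modules:
            if other is not m:
                m.negatives.extend(other.W1)

# demo: after sleep eco-training, the setosa module no longer fires
# on a virginica exemplar, while virginica still recognises it
a = ToyModule('setosa');    a.train_active([0.1, 0.2])
b = ToyModule('virginica'); b.train_active([0.8, 0.9])
sleep_eco_train([a, b])
```

The point of the sketch is only the flow of information: exemplars learned as positive examples in one module become negative examples in the others, which is what suppresses false-positive activation.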
5.6. Unsupervised and reinforcement learning
Unsupervised learning in ECOS is based on the same principles as supervised learning, but there is no desired output and hence no calculated output error.
There are two cases in the evolving procedure:
(a) An output node is activated by the current input vector x above a pre-set threshold Outhr. In this case the example x is accommodated in the connection weights of the most highly activated case neuron according to the learning rules of ECOS (e.g., as in the EFuNN algorithm).
(b) Otherwise, a new rule node and a new output neuron (or a new module) are created to accommodate this example. The new rule node is then connected to the fuzzy input nodes and to a new output node, as in the supervised evolving procedure (e.g., as in the EFuNN algorithm).
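The two cases can be sketched as a single decision step. This is a schematic illustration under assumed values for Outhr and the learning rate, with a toy similarity measure standing in for the actual node activation:

```python
import numpy as np

def unsupervised_evolve(x, rule_nodes, outhr=0.5, lr=0.5):
    """Sketch of the two unsupervised evolving cases; the activation
    measure and the outhr/lr values are assumptions, not the EFuNN ones."""
    x = np.asarray(x, dtype=float)
    if rule_nodes:
        acts = [1.0 - np.linalg.norm(x - w) for w in rule_nodes]
        best = int(np.argmax(acts))
        if acts[best] > outhr:
            # case (a): accommodate x in the most highly activated node
            rule_nodes[best] = rule_nodes[best] + lr * (x - rule_nodes[best])
            return 'accommodated', best
    # case (b): create a new rule node (and a new output neuron) for x
    rule_nodes.append(x)
    return 'created', len(rule_nodes) - 1

# demo: the first example creates a node; a near-duplicate is accommodated
nodes = []
status1, _ = unsupervised_evolve([0.1, 0.1], nodes)
status2, _ = unsupervised_evolve([0.12, 0.1], nodes)
```

A distant example (e.g., [0.9, 0.9]) would fall below the threshold and trigger case (b) again, creating a second rule node.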
Reinforcement learning uses procedures similar to cases (a) and (b) above: case (a) is applied only when the output from the evolving system is confirmed (approved) by the 'critique', and case (b) is applied otherwise.
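The reinforcement variant differs from the unsupervised one only in gating case (a) on the critique's approval. A minimal sketch, with an assumed critic interface (a caller-supplied predicate) and the same toy activation measure as above:

```python
import numpy as np

def reinforcement_evolve(x, rule_nodes, critic_approves, outhr=0.5, lr=0.5):
    """Case (a) is applied only when the critic approves the produced
    output; otherwise case (b) creates a new rule node. The critic
    interface and the activation measure are assumptions."""
    x = np.asarray(x, dtype=float)
    if rule_nodes:
        acts = [1.0 - np.linalg.norm(x - w) for w in rule_nodes]
        best = int(np.argmax(acts))
        if acts[best] > outhr and critic_approves(best):
            # case (a): output confirmed by the critic, accommodate x
            rule_nodes[best] = rule_nodes[best] + lr * (x - rule_nodes[best])
            return 'accommodated', best
    # case (b): output rejected (or no node fired), create a new rule node
    rule_nodes.append(x)
    return 'created', len(rule_nodes) - 1

# demo: the same example is handled by case (b) when the critic rejects
# the output, and by case (a) when it approves
nodes = [np.array([0.1, 0.1])]
res_reject = reinforcement_evolve([0.12, 0.1], nodes, lambda node: False)
res_approve = reinforcement_evolve([0.12, 0.1], nodes, lambda node: True)
```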
