C++ Neural Networks and Fuzzy Logic




     else
          cout<<"\n pattern= "<<patrn1[i]<<
          "  output = "<<h1.output[i]<<
          "  discrepancy occurred";
     }
cout<<"\n\n";
int patrn2[]= {0,1,0,1};
h1.activation(patrn2);
for(i=0;i<4;i++)
     {
     if (h1.output[i] == patrn2[i])
          cout<<"\n pattern= "<<patrn2[i]<<
          "  output = "<<h1.output[i]<<"  component matches";
     else
          cout<<"\n pattern= "<<patrn2[i]<<
          "  output = "<<h1.output[i]<<
          "  discrepancy occurred";
     }
}



Comments on the C++ Program for Hopfield Network

Note the use of the output stream operator cout<< to output text strings or numerical values. C++ has istream and ostream classes, from which the iostream class is derived. The standard input and output streams are cin and cout, used with the operators >> and <<, respectively. Using cout for the output stream is much simpler than using the C function printf. As you can see, no explicit formatting is required for the output; however, cout does provide facilities for formatting the output when you need them.
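For instance, here is a minimal sketch of such formatting, written in modern standard C++ (the book's listings use the older pre-standard dialect) with the <iomanip> manipulators:

#include <iostream>
#include <iomanip>

int main()
{
    double activation = 3.0;
    // setw fixes the field width; fixed and setprecision control
    // how many digits appear after the decimal point
    std::cout << std::setw(12) << "activation"
              << std::setw(10) << std::fixed << std::setprecision(2)
              << activation << "\n";
    return 0;
}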

Also note the way comments are introduced in the program. A comment line starts with a double slash //. Unlike a C comment, a C++ comment needs no closing delimiter; it simply extends to the end of the line. If the comments continue onto subsequent lines, each such line must start with a double slash. You can still use the pair /* at the beginning with */ at the end of lines of comments, as you do in C; if a comment runs through many lines, this C facility is the handier way to delimit it.
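For example:

// this C++ comment runs to the end of the line
/* this C-style comment
   can span several lines */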

The neurons in the network are members of the network class and are identified by the abbreviation nrn. The

two patterns, 1010 and 0101, are presented to the network one at a time in the program.

Output from the C++ Program for Hopfield Network

The output from this program is as follows and is self−explanatory. When you run this program, a lot of output whizzes by, so to examine it at leisure, use redirection: type Hop > filename, and your output will be stored in a file, which you can edit with any text editor or list by using the type filename | more command.
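For example, using an arbitrary file name:

Hop > hopout.txt
type hopout.txt | more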

THIS PROGRAM IS FOR A HOPFIELD NETWORK WITH A SINGLE LAYER OF 4 FULLY

INTERCONNECTED NEURONS. THE NETWORK SHOULD RECALL THE PATTERNS 1010 AND

0101 CORRECTLY.

 nrn[0].weightv[0] is  0

 nrn[0].weightv[1] is  −3

 nrn[0].weightv[2] is  3

 nrn[0].weightv[3] is  −3

activation is 3

output value is  1

 nrn[1].weightv[0] is  −3

 nrn[1].weightv[1] is  0

 nrn[1].weightv[2] is  −3

 nrn[1].weightv[3] is  3

activation is −6

output value is  0

 nrn[2].weightv[0] is  3

 nrn[2].weightv[1] is  −3

 nrn[2].weightv[2] is  0

 nrn[2].weightv[3] is  −3

activation is 3



output value is  1

 nrn[3].weightv[0] is  −3

 nrn[3].weightv[1] is  3

 nrn[3].weightv[2] is  −3

 nrn[3].weightv[3] is  0

activation is −6

output value is  0

 pattern= 1  output = 1  component matches

 pattern= 0  output = 0  component matches

 pattern= 1  output = 1  component matches

 pattern= 0  output = 0  component matches

 nrn[0].weightv[0] is  0

 nrn[0].weightv[1] is  −3

 nrn[0].weightv[2] is  3

 nrn[0].weightv[3] is  −3

activation is −6

output value is  0

 nrn[1].weightv[0] is  −3

 nrn[1].weightv[1] is  0

 nrn[1].weightv[2] is  −3

 nrn[1].weightv[3] is  3

activation is 3

output value is  1

 nrn[2].weightv[0] is  3

 nrn[2].weightv[1] is  −3

 nrn[2].weightv[2] is  0

 nrn[2].weightv[3] is  −3

activation is −6

output value is  0

 nrn[3].weightv[0] is  −3

 nrn[3].weightv[1] is  3

 nrn[3].weightv[2] is  −3

 nrn[3].weightv[3] is  0

activation is 3

output value is  1

 pattern= 0  output = 0  component matches

 pattern= 1  output = 1  component matches

 pattern= 0  output = 0  component matches

 pattern= 1  output = 1  component matches

Further Comments on the Program and Its Output

Let us recall our previous discussion of this example in Chapter 1. What does the network give as output if we

present a pattern different from both A and B? If C = (0, 1, 0, 0) is the input pattern, the activations (dot products) would be –3, 0, –3, 3, making the outputs (next state) of the neurons 0, 1, 0, 1, so that B would be recalled. This is quite interesting: if we intended to input B but made a slight error and ended up presenting C instead, the network would still recall B. You can check this by changing the input pattern in the program to 0, 1, 0, 0 and compiling again, to see that the B pattern is recalled.
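Here is a minimal standalone sketch of that check in modern standard C++ (independent of the Hop.h classes); it applies the +/−3 weight matrix to C and, following the convention that a zero activation fires, prints the next state 0 1 0 1:

#include <iostream>

int main()
{
    int w[4][4] = { { 0, -3,  3, -3},      // the +/-3 weight matrix
                    {-3,  0, -3,  3},
                    { 3, -3,  0, -3},
                    {-3,  3, -3,  0} };
    int c[4] = {0, 1, 0, 0};               // the input pattern C
    for (int i = 0; i < 4; i++)
    {
        int act = 0;                       // dot product of row i with C
        for (int j = 0; j < 4; j++)
            act += w[i][j] * c[j];
        // a zero activation is taken to fire, as in the text
        std::cout << "activation " << act
                  << "  output " << ((act >= 0) ? 1 : 0) << "\n";
    }
    return 0;
}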

Another element about the example in Chapter 1 is that the weight matrix W is not the only weight matrix that

would enable the network to recall the patterns A and B correctly. If we replace the 3 and –3 in the matrix

with 2 and –2, respectively, the resulting matrix would facilitate the same performance from the network. One



way for you to check this is to change the wt1, wt2, wt3, wt4 given in the program accordingly, and compile

and run the program again. The reason both weight matrices work is that they are closely related: one is a scalar (constant) multiple of the other. If you multiply each element of the original matrix by the same scalar, namely 2/3, you get the corresponding matrix in which 3 and –3 are replaced with 2 and –2, respectively.
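A standalone sketch of why the scaled matrix behaves identically: multiplying every entry by the positive constant 2/3 cannot change the sign of any activation, so the thresholded outputs agree:

#include <iostream>

int main()
{
    int w[4][4] = { { 0, -3,  3, -3},
                    {-3,  0, -3,  3},
                    { 3, -3,  0, -3},
                    {-3,  3, -3,  0} };
    int a[4] = {1, 0, 1, 0};               // exemplar A
    for (int i = 0; i < 4; i++)
    {
        int act3 = 0;                      // activation under the +/-3 matrix
        for (int j = 0; j < 4; j++)
            act3 += w[i][j] * a[j];
        int act2 = 2 * act3 / 3;           // same entries scaled by 2/3
        // both activations have the same sign, so both fire identically
        std::cout << "outputs: " << (act3 >= 0) << " and "
                  << (act2 >= 0) << "\n";
    }
    return 0;
}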




A New Weight Matrix to Recall More Patterns

Let’s continue to discuss this example. Suppose we are interested in having the patterns E = (1, 0, 0, 1) and F

= (0, 1, 1, 0) also recalled correctly, in addition to the patterns A and B. In this case we would need to train the

network and come up with a learning algorithm, which we will discuss in more detail later in the book. We

come up with the matrix W1, which follows.

             0    −5     4     4
     W1 =   −5     0     4     4
             4     4     0    −5
             4     4    −5     0

Try to use this modification of the weight matrix in the source program, and then compile and run the program

to see that the network successfully recalls all four patterns A, B, E, and F.
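If you are working from the main() listing given earlier in this chapter, which initializes the rows of the weight matrix through the arrays wt1 through wt4, the change amounts to replacing those initializers with the rows of W1 and presenting the two additional patterns, for example:

int wt1[]= {0,-5,4,4};       // rows of W1
int wt2[]= {-5,0,4,4};
int wt3[]= {4,4,0,-5};
int wt4[]= {4,4,-5,0};

int patrn3[]= {1,0,0,1};     // pattern E
int patrn4[]= {0,1,1,0};     // pattern F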



NOTE:  The C++ implementation shown does not include the asynchronous update feature

mentioned in Chapter 1, which is not necessary for the patterns presented. The coding of this

feature is left as an exercise for the reader.

Weight Determination

You may be wondering about how these weight matrices were developed in the previous example, since so far

we’ve only discussed how the network does its job, and how to implement the model. You have learned that

the choice of weight matrix is not necessarily unique. But you want to be assured that there is some established way, besides trial and error, of constructing a weight matrix. You can go about this in the following way.



Binary to Bipolar Mapping

Let’s look at the previous example. You have seen that by replacing each 0 in a binary string with a –1, while keeping each 1 unchanged, you get the corresponding bipolar string. This rule gives a formula for the mapping: apply the following function to each bit in the string:

     f(x) = 2x – 1

NOTE:  When you give the binary bit x, you get the corresponding bipolar character f(x).

For inverse mapping, which turns a bipolar string into a binary string, you use the following function:

     f(x) =  (x + 1) / 2



NOTE:  When you give the bipolar character x, you get the corresponding binary bit f(x).
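Both mappings fit in a couple of lines; a minimal standalone sketch in modern standard C++ (the function names to_bipolar and to_binary are illustrative):

#include <iostream>

// binary bit (0 or 1) to bipolar character (-1 or 1)
int to_bipolar(int x) { return 2 * x - 1; }

// bipolar character (-1 or 1) back to binary bit (0 or 1)
int to_binary(int x) { return (x + 1) / 2; }

int main()
{
    for (int bit = 0; bit <= 1; bit++)
        std::cout << bit << " -> " << to_bipolar(bit)
                  << " -> " << to_binary(to_bipolar(bit)) << "\n";
    return 0;
}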

Pattern’s Contribution to Weight

Next, we work with the bipolar versions of the input patterns. You take each pattern to be recalled, one at a

time, and determine its contribution to the weight matrix of the network. The contribution of each pattern is

itself a matrix. The size of such a matrix is the same as the weight matrix of the network. Then add these

contributions, in the way matrices are added, and you end up with the weight matrix for the network, which is

also referred to as the correlation matrix. Let us find the contribution of the pattern A = (1, 0, 1, 0):

First, we notice that the binary to bipolar mapping of A = (1, 0, 1, 0) gives the vector (1, –1, 1, –1).

Then we multiply the transpose of this vector (a column vector) by the vector itself (a row vector), the way matrices are multiplied, and we see the following:

      1  [1   −1   1   −1]       1   −1   1   −1
     −1                     =   −1    1  −1    1
      1                          1   −1   1   −1
     −1                         −1    1  −1    1

Now subtract 1 from each element in the main diagonal (that runs from top left to bottom right). This

operation gives the same result as subtracting the identity matrix from the given matrix, obtaining 0’s in the

main diagonal. The resulting matrix, which is given next, is the contribution of the pattern (1, 0, 1, 0) to the

weight matrix.

      0      −1      1     −1

     −1       0     −1      1

      1      −1      0     −1

     −1       1     −1      0

Similarly, we can calculate the contribution from the pattern B = (0, 1, 0, 1); you can verify that pattern B’s contribution is the same matrix as pattern A’s contribution. Therefore, the matrix of weights for this exercise is the matrix W shown here.

           0     −2      2      −2

  W  =    −2      0     −2       2

           2     −2      0      −2

          −2      2     −2       0

You can now optionally apply a scalar multiplier to all the entries of the matrix if you wish. This is how we had previously obtained the +/−3 values instead of the +/−2 values shown above.
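The whole construction fits in a short standalone sketch in modern standard C++ (the array and variable names are illustrative); it prints the +/−2 matrix W shown above:

#include <iostream>

int main()
{
    int patterns[2][4] = { {1, 0, 1, 0},   // exemplar A
                           {0, 1, 0, 1} }; // exemplar B
    int w[4][4] = {};                      // correlation (weight) matrix
    for (int p = 0; p < 2; p++)
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                if (i != j)                // keep the main diagonal at 0
                {
                    int bi = 2 * patterns[p][i] - 1;  // binary -> bipolar
                    int bj = 2 * patterns[p][j] - 1;
                    w[i][j] += bi * bj;    // add this pattern's contribution
                }
    for (int i = 0; i < 4; i++)
    {
        for (int j = 0; j < 4; j++)
            std::cout << w[i][j] << "\t";
        std::cout << "\n";
    }
    return 0;
}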




Autoassociative Network

The Hopfield network just shown has the feature that it associates an input pattern with itself in recall. This makes the network an autoassociative network. The patterns used for determining the proper weight matrix are also the ones that are autoassociatively recalled; these patterns are called the exemplars. A pattern other than an exemplar may or may not be recalled by the network. Of course, when you present the pattern 0 0 0 0, it is stable, even though it is not an exemplar pattern.



Orthogonal Bit Patterns

You may be wondering how many patterns the network with four nodes is able to recall. Let us first consider

how many different bit patterns are orthogonal to a given bit pattern. This question really refers to bit patterns

in which at least one bit is equal to 1. A little reflection tells us that if two bit patterns are to be orthogonal,

they cannot both have 1’s in the same position, since the dot product would need to be 0. In other words, a

bitwise logical AND operation of the two bit patterns has to result in a 0. This suggests the following. If a

pattern P has k, less than 4, bit positions with 0 (and so 4−k bit positions with 1), and if pattern Q is to be

orthogonal to P, then Q can have 0 or 1 in those k positions, but it must have only 0 in the rest 4−k positions.

Since there are two choices for each of the k positions, there are 2^k possible patterns orthogonal to P. This number 2^k includes the pattern with all zeroes, so there really are 2^k − 1 nonzero patterns orthogonal to P. Some of these 2^k − 1 patterns are not orthogonal to each other. As an example, P can be the pattern 0 1 0 0, which has k = 3 positions with 0. There are 2^3 − 1 = 7 nonzero patterns orthogonal to 0 1 0 0. Among these are the patterns 1 0 1 0 and 1 0 0 1, which are not orthogonal to each other, since their dot product is 1 and not 0.
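A standalone sketch that enumerates the nonzero four-bit patterns orthogonal to 0 1 0 0 confirms the count of 2^3 − 1 = 7:

#include <iostream>

int main()
{
    int p[4] = {0, 1, 0, 0};
    int count = 0;
    for (int m = 1; m < 16; m++)           // every nonzero 4-bit pattern
    {
        int q[4], dot = 0;
        for (int i = 0; i < 4; i++)
        {
            q[i] = (m >> i) & 1;           // extract bit i of m
            dot += p[i] * q[i];
        }
        if (dot == 0)                      // orthogonal to p
        {
            count++;
            std::cout << q[0] << " " << q[1] << " "
                      << q[2] << " " << q[3] << "\n";
        }
    }
    std::cout << "total: " << count << "\n";   // prints total: 7
    return 0;
}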



Network Nodes and Input Patterns

Since our network has four neurons, the directed graph that represents it also has four nodes. These nodes are laterally connected: connections are established from node to node, and they are lateral because the nodes are all in the same layer. We started with the patterns A = (1, 0, 1, 0) and B = (0, 1, 0, 1) as the exemplars. Any other nonzero pattern orthogonal to A must have a 1 in a position where B also has a 1, so it cannot be orthogonal to B. Therefore, an orthogonal set of patterns that contains both A and B can have only those two as its elements. If you remove B from the set, you can get (at most) two other patterns to join A in forming an orthogonal set: (0, 1, 0, 0) and (0, 0, 0, 1). If you follow the procedure described earlier to get the correlation matrix, you will get the following weight matrix:


             0     −1      3     −1
     W  =   −1      0     −1     −1
             3     −1      0     −1
            −1     −1     −1      0



With this matrix, pattern A is recalled, but the zero pattern (0, 0, 0, 0) is obtained for the two patterns (0, 1, 0,

0) and (0, 0, 0, 1). Once the zero pattern is obtained, its own recall will be stable.
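A standalone sketch of that behavior in modern standard C++; note that obtaining the zero pattern requires that a zero activation not fire, so the threshold used here is strictly positive:

#include <iostream>

int main()
{
    int w[4][4] = { { 0, -1,  3, -1},
                    {-1,  0, -1, -1},
                    { 3, -1,  0, -1},
                    {-1, -1, -1,  0} };
    int tests[3][4] = { {1, 0, 1, 0},      // A: recalled as itself
                        {0, 1, 0, 0},      // maps to the zero pattern
                        {0, 0, 0, 1} };    // maps to the zero pattern
    for (int t = 0; t < 3; t++)
    {
        std::cout << "input ";
        for (int i = 0; i < 4; i++) std::cout << tests[t][i];
        std::cout << " -> output ";
        for (int i = 0; i < 4; i++)
        {
            int act = 0;
            for (int j = 0; j < 4; j++)
                act += w[i][j] * tests[t][j];
            std::cout << ((act > 0) ? 1 : 0);  // fire only if strictly positive
        }
        std::cout << "\n";
    }
    return 0;
}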



Second Example for C++ Implementation

Recall the cash register game from the show The Price is Right, used as one of the examples in Chapter 1. This example led to the description of the Perceptron neural network. We will now resume our discussion of the Perceptron model and follow up with its C++ implementation. Keep the cash register game in mind as you read the following C++ implementation of the Perceptron model. Also note that the input signals in this example are not necessarily binary; they may be real numbers, because the prices of the items the contestant has to choose from are real numbers (dollars and cents). A Perceptron has one layer of input neurons and one layer of output neurons, with each input layer neuron connected to each neuron in the output layer.




C++ Implementation of Perceptron Network

In our C++ implementation of this network, we have separate classes for input neurons and output neurons. The ineuron class is for the input neurons; it has weight and activation as data members. The oneuron class, for the output neuron, is similar; it is declared a friend class in the ineuron class and also has a data member called output. There is a network class, which is a friend class in the oneuron class. An instance of the network class is created with four input neurons, all connected to one output neuron.

The member functions of the ineuron class are: (1) a default constructor, (2) a second constructor that takes a

real number as an argument, and (3) a function that calculates the output of the input neuron. The constructor

taking one argument uses that argument to set the value of the weight on the connection between the input

neuron and the output neuron. The functions that determine the neuron activations and the network output are

declared public. The activations of the neurons are calculated with functions defined in the neuron classes. A

threshold value is used by a member function of the output neuron to determine if the neuron’s activation is

large enough for it to fire, giving an output of 1.



Header File

Listing 4.3 contains percept.h, the header file for the C++ program for the Perceptron network. percept.h contains the declarations for three classes: one for input neurons, one for output neurons, and one for the network.

Listing 4.3 The percept.h header file.

//percept.h        V. Rao, H. Rao

// Perceptron model

#include <iostream.h>
#include <math.h>
#include <stdlib.h>

class ineuron

{

protected:



     float weight;

     float activation;

     friend class oneuron;

public:


     ineuron() {};

     ineuron(float j) ;

     float act(float x);

};

class oneuron



{

protected:

     int output;



     float activation;

     friend class network;

public:

     oneuron() { };



     void actvtion(float x[4], ineuron *nrn);

     int outvalue(float j) ;

};

class network



{

public:


     ineuron   nrn[4];

     oneuron   onrn;

     network(float,float,float,float);

};

Implementation of Functions

The network is designed to have four neurons in the input layer. Each of them is an object of class ineuron, and these are data members of the class network. There is one explicitly defined output neuron, of the class oneuron. The network constructor invokes the ineuron constructor for each input layer neuron in the network, providing it with the initial weight for its connection to the neuron in the output layer. The constructor for the output neuron is also invoked by the network constructor, at the same time initializing the output and activation data members of the output neuron each to zero. To make sure there is access to needed information and functions, oneuron is declared a friend class in the class ineuron, and network is declared a friend class in the class oneuron.



Source Code for Perceptron Network

Listing 4.4 contains the source code in percept.cpp for the C++ implementation of the Perceptron model

previously discussed.

Listing 4.4 Source code for Perceptron model.

//percept.cpp   V. Rao, H. Rao

//Perceptron model

#include "percept.h"

#include "stdio.h"

#include "stdlib.h"

ineuron::ineuron(float j)

{

weight= j;



}

float ineuron::act(float x)

{

float a;


a = x*weight;

return a;

}

void oneuron::actvtion(float *inputv, ineuron *nrn)



{



int i;

activation = 0;

for(i=0;i<4;i++)
     {
     cout<<"\nweight for neuron "<<i+1<<" is     "<<nrn[i].weight;
     nrn[i].activation = nrn[i].act(inputv[i]);
     cout<<"           activation is      "<<nrn[i].activation;
     activation += nrn[i].activation;
     }
cout<<"\n\nactivation is  "<<activation<<"\n";
}

int oneuron::outvalue(float j)

{

if(activation>=j)



     {

     cout<<"\nthe output neuron activation \

exceeds the threshold value of "<

     output = 1;

     }

else


     {

     cout<<"\nthe output neuron activation \

is smaller than the threshold value of "<

     output = 0;

     }

cout<<" output value is "<< output;



return (output);

}

network::network(float a,float b,float c,float d)



{

nrn[0] = ineuron(a) ;

nrn[1] = ineuron(b) ;

nrn[2] = ineuron(c) ;

nrn[3] = ineuron(d) ;

onrn = oneuron();

onrn.activation = 0;

onrn.output = 0;

}

void main (int argc, char * argv[])



{

float inputv1[]= {1.95,0.27,0.69,1.25};

float wtv1[]= {2,3,3,2}, wtv2[]= {3,0,6,2};

FILE * wfile, * infile;

int num=0, vecnum=0, i;

float threshold = 7.0;

if (argc < 2)

     {


     cerr << "Usage: percept Weightfile Inputfile";

     exit(1);

     }

// open  files



wfile= fopen(argv[1], "r");

infile= fopen(argv[2], "r");

if ((wfile == NULL) || (infile == NULL))



     {

     cout << " Can't open a file\n";

     exit(1);

     }


cout<<"\nTHIS PROGRAM IS FOR A PERCEPTRON NETWORK WITH AN INPUT LAYER OF";

cout<<"\n4 NEURONS, EACH CONNECTED TO THE OUTPUT NEURON.\n";

cout<<"\nTHIS EXAMPLE TAKES REAL NUMBERS AS INPUT SIGNALS\n";

//create the network by calling its constructor.

//the constructor calls neuron constructor as many times as the number of

//neurons in input layer of the network.

cout<<"please enter the number of weights/vectors \n";

cin >> vecnum;

for (i=1;i<=vecnum;i++)

     {


     fscanf(wfile,"%f %f %f %f\n", &wtv1[0],&wtv1[1],&wtv1[2],&wtv1[3]);

     network h1(wtv1[0],wtv1[1],wtv1[2],wtv1[3]);

     fscanf(infile,"%f %f %f %f \n",

     &inputv1[0],&inputv1[1],&inputv1[2],&inputv1[3]);

     cout<<"this is vector # " << i << "\n";

     cout << "please enter a threshold value, eg 7.0\n";

     cin >> threshold;

     h1.onrn.actvtion(inputv1, h1.nrn);

     h1.onrn.outvalue(threshold);

     cout<<"\n\n";

     }

fclose(wfile);



fclose(infile);

}
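A hypothetical session (the file names and their contents are illustrative only; each line in the two files holds four whitespace-separated floats, matching the fscanf calls above):

weights.dat contains:   2.0 3.0 3.0 2.0
inputs.dat contains:    1.95 0.27 0.69 1.25

percept weights.dat inputs.dat

The program then prompts for the number of weight vectors and a threshold value, such as 7.0, and reports the activation and output value of the output neuron.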


