C++ Neural Networks and Fuzzy Logic
Header File for C++ Program for Kohonen’s Approach
Listing 15.3 contains the header file for this program, and Listing 15.4 contains the corresponding source file.

Listing 15.3 Header file for C++ program for Kohonen's approach

//tsp_kohn.h   V.Rao, H.Rao
#include <iostream.h>
#include <math.h>

#define MXSIZ 10
#define pi 3.141592654

class city_neuron
{
protected:
      double x,y;
      int mark,order,count;
      double weight[2];
      friend class tspnetwork;
public:
      city_neuron(){};
      void get_neuron(double,double);
};

class tspnetwork
{
protected:
      int chosen_city,order[MXSIZ];
      double gain,input[MXSIZ][2];
      int citycount,index,d[MXSIZ][MXSIZ];
      double gain_factor,diffsq[MXSIZ];
      city_neuron (cnrn)[MXSIZ];
public:
      tspnetwork(int,double,double,double,double*,double*);
      void get_input(double*,double*);
      void get_d();
      void find_tour();
      void associate_city();
      void modify_weights(int,int);
      double wtchange(int,int,double,double);
      void print_d();
      void print_input();
      void print_weights();
      void print_tour();
};
The following is the source file listing for the Kohonen approach to the traveling salesperson problem.

Listing 15.4 Source file for C++ program for Kohonen's approach

//tsp_kohn.cpp   V.Rao, H.Rao
#include "tsp_kohn.h"

void city_neuron::get_neuron(double a,double b)
{
      x = a;
      y = b;
      mark = 0;
      count = 0;
      weight[0] = 0.0;
      weight[1] = 0.0;
};

tspnetwork::tspnetwork(int k,double f,double q,double h,
      double *ip0,double *ip1)
{
      int i;
      gain = h;
      gain_factor = f;
      citycount = k;

      // distances between neurons as integers between 0 and n-1
      get_d();
      print_d();
      cout<<"\n";

      // input vectors
      get_input(ip0,ip1);
      print_input();

      // neurons in the network
      for(i=0;i<citycount;++i)
            {
            order[i] = citycount+1;
            diffsq[i] = q;
            cnrn[i].get_neuron(ip0[i],ip1[i]);
            cnrn[i].order = citycount+1;
            }
}

void tspnetwork::associate_city()
{
      int i,k,j,u;
      double r,s;

      for(u=0;u<citycount;++u)
            {
            //start a new iteration with the input vectors
            for(j=0;j<citycount;++j)
                  {
                  //find the first unmarked neuron
                  for(i=0;i<citycount;++i)
                        {
                        if(cnrn[i].mark==0)
                              {
                              k = i;
                              i = citycount;
                              }
                        }

                  //find the closest neuron
                  for(i=0;i<citycount;++i)
                        {
                        r = input[j][0] - cnrn[i].weight[0];
                        s = input[j][1] - cnrn[i].weight[1];
                        diffsq[i] = r*r + s*s;
                        if(diffsq[i]<diffsq[k]) k = i;
                        }

                  chosen_city = k;
                  cnrn[k].count++;
                  if((cnrn[k].mark<1)&&(cnrn[k].count==2))
                        {
                        //associate a neuron with a position
                        cnrn[k].mark = 1;
                        cnrn[k].order = u;
                        order[u] = chosen_city;
                        index = j;
                        gain *= gain_factor;

                        //modify weights
                        modify_weights(k,index);
                        print_weights();
                        j = citycount;
                        }
                  }
            }
}

void tspnetwork::find_tour()
{
      int i;

      for(i=0;i<citycount;++i)
            {
            associate_city();
            }

      //associate the last neuron with the remaining position in the tour
      for(i=0;i<citycount;++i)
            {
            if(cnrn[i].mark==0)
                  {
                  cnrn[i].order = citycount-1;
                  order[citycount-1] = i;
                  cnrn[i].mark = 1;
                  }
            }

      //print out the tour: first the neurons in tour order,
      //then the cities in tour order with their x,y coordinates
      print_tour();
}

void tspnetwork::get_input(double *p,double *q)
{
      int i;

      for(i=0;i<citycount;++i)
            {
            input[i][0] = p[i];
            input[i][1] = q[i];
            }
}

//function to compute distances (between 0 and n-1) between neurons
void tspnetwork::get_d()
{
      int i,j;

      for(i=0;i<citycount;++i)
            {
            for(j=0;j<citycount;++j)
                  {
                  d[i][j] = (j-i);
                  if(d[i][j]<0) d[i][j] = d[j][i];
                  }
            }
}

//function to find the change in weight component
double tspnetwork::wtchange(int m,int l,double g,double h)
{
      double r;

      r = exp(-d[m][l]*d[m][l]/gain);
      r *= (g-h)/sqrt(2*pi);
      return r;
}

//function to determine new weights
void tspnetwork::modify_weights(int jj,int j)
{
      int i;
      double t;
      double w[2];

      for(i=0;i<citycount;++i)
            {
            w[0] = cnrn[i].weight[0];
            w[1] = cnrn[i].weight[1];

            //determine new first component of weight
            t = wtchange(jj,i,input[j][0],w[0]);
            w[0] = cnrn[i].weight[0] + t;
            cnrn[i].weight[0] = w[0];

            //determine new second component of weight
            t = wtchange(jj,i,input[j][1],w[1]);
            w[1] = cnrn[i].weight[1] + t;
            cnrn[i].weight[1] = w[1];
            }
}

//different print routines
void tspnetwork::print_d()
{
      int i,j;

      cout<<"\n";
      for(i=0;i<citycount;++i)
            {
            cout<<" d: ";
            for(j=0;j<citycount;++j)
                  {
                  cout<<d[i][j]<<"  ";
                  }
            cout<<"\n";
            }
}

void tspnetwork::print_input()
{
      int i,j;

      for(i=0;i<citycount;++i)
            {
            cout<<"input : ";
            for(j=0;j<2;j++)
                  {
                  cout<<input[i][j]<<"  ";
                  }
            cout<<"\n";
            }
}

void tspnetwork::print_weights()
{
      int i,j;

      cout<<"\n";
      for(i=0;i<citycount;++i)
            {
            cout<<" weight: ";
            for(j=0;j<2;j++)
                  {
                  cout<<cnrn[i].weight[j]<<"   ";
                  }
            cout<<"\n";
            }
}

void tspnetwork::print_tour()
{
      int i,j;

      cout<<"\n tour : ";
      for(i=0;i<citycount;++i)
            {
            cout<<order[i]<<" -> ";
            }
      cout<<order[0]<<"\n\n";

      for(i=0;i<citycount;++i)
            {
            j = order[i];
            cout<<"("<<cnrn[j].x<<", "<<cnrn[j].y<<") -> ";
            }
      j = order[0];
      cout<<"("<<cnrn[j].x<<", "<<cnrn[j].y<<")\n";
}

void main()
{
      int nc = 5;                  //nc = number of cities
      double q = 0.05, h = 1.0, p = 1000.0;
      double input2[][5] =
            {7.0,4.0,14.0,0.0,5.0,3.0,6.0,13.0,12.0,10.0};

      tspnetwork tspn2(nc,q,p,h,input2[0],input2[1]);
      tspn2.find_tour();
}
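For reference, the update applied by modify_weights, with wtchange computing the change for each weight component, amounts to

      delta_w[i] = (input[index] - w[i]) * exp(-d[k][i]*d[k][i]/gain) / sqrt(2*pi)

where k is the neuron just associated with a city, d[k][i] is the integer index distance between neurons produced by get_d, and input[index] is the input vector of that city. Because gain is multiplied by gain_factor (0.05 in main) after every association, the exponential falls off ever more sharply with d, so each successive update affects a narrower neighborhood of the winning neuron, in the usual Kohonen fashion.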
Output from a Sample Program Run

The program, as mentioned, implements the Kohonen approach to the traveling salesperson problem for five cities. There is no user input from the keyboard; all parameter values are set by statements in the function main. A scale factor of 0.05 is applied to the gain parameter, which starts at 1. The distance of each neuron weight vector from an input vector is initially set to 1000, to make it easy to find the closest neuron the first time. The cities have coordinates (7,3), (4,6), (14,13), (0,12), and (5,10). The tour found is not the one in natural order, namely 0 -> 1 -> 2 -> 3 -> 4 -> 0, which has a distance of 43.16. The tour found has the order 0 -> 3 -> 1 -> 4 -> 2 -> 0, covering a distance of 44.43, slightly longer, as shown in Figure 15.2. The best tour, 0 -> 2 -> 4 -> 3 -> 1 -> 0, has a total distance of 38.54.

Figure 15.2  City placement and tour found for TSP.

Table 15.2 gives, for the five-city example, the 12 (= 5!/10) distinct tour distances and corresponding representative tours. These were not generated by the program but by enumeration and hand calculation; the table is provided here so you can see the full range of solutions for this five-city instance of the traveling salesperson problem.

Table 15.2  Distances and Representative Tours for Five-City Example

Distance   Tour           Comment
49.05      0-3-2-1-4-0    worst case
47.59      0-3-1-2-4-0
45.33      0-2-1-4-3-0
44.86      0-2-3-1-4-0
44.43      0-3-1-4-2-0    tour given by the program
44.30      0-2-1-3-4-0
43.29      0-1-4-2-3-0
43.16      0-1-2-3-4-0
42.73      0-1-2-4-3-0
42.26      0-1-3-2-4-0
40.00      0-1-4-3-2-0
38.54      0-2-4-3-1-0    optimal tour

Of these 12 distinct distances, four are higher and seven are lower than the one found by the program. The worst-case tour (0 -> 3 -> 2 -> 1 -> 4 -> 0) has a distance of 49.05, and the best, as noted above, 38.54. The program's solution therefore lies roughly midway between the best and the worst in total distance traveled.
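The table was produced by hand, but a short throwaway program can do the same enumeration. The following sketch is not one of the book's listings; it is written against standard C++ headers rather than the book's pre-standard style, and it prints every directed five-city tour that starts and ends at city 0 (each undirected tour therefore appears twice, once per direction) together with its Euclidean length.

// enumerate_tours.cpp - throwaway sketch, not part of the book's listings
#include <iostream>
#include <cmath>
#include <algorithm>

int main()
{
    const int n = 5;
    // city coordinates from the sample run
    const double x[n] = {7.0, 4.0, 14.0, 0.0, 5.0};
    const double y[n] = {3.0, 6.0, 13.0, 12.0, 10.0};

    int perm[n - 1] = {1, 2, 3, 4};      // cities visited between the two 0s
    do {
        double len = 0.0;
        int prev = 0;
        for (int i = 0; i < n - 1; ++i) {
            int c = perm[i];
            len += std::sqrt((x[c] - x[prev]) * (x[c] - x[prev]) +
                             (y[c] - y[prev]) * (y[c] - y[prev]));
            prev = c;
        }
        // close the tour back at city 0
        len += std::sqrt((x[0] - x[prev]) * (x[0] - x[prev]) +
                         (y[0] - y[prev]) * (y[0] - y[prev]));

        std::cout << "0";
        for (int i = 0; i < n - 1; ++i) std::cout << "-" << perm[i];
        std::cout << "-0  length = " << len << "\n";
    } while (std::next_permutation(perm, perm + n - 1));

    return 0;
}

Sorting its output and discarding the reversed duplicates gives the 12 distinct lengths of Table 15.2 to within rounding (the table apparently sums leg lengths rounded to two decimals, so its figures can be about 0.01 higher).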
The output of the program, which is entirely computer generated, is as follows:

 d: 0 1 2 3 4
 d: 1 0 1 2 3
 d: 2 1 0 1 2
 d: 3 2 1 0 1
 d: 4 3 2 1 0

input : 7 3
input : 4 6
input : 14 13
input : 0 12
input : 5 10

 weight: 1.595769 2.393654
 weight: 3.289125e-09 4.933688e-09
 weight: 2.880126e-35 4.320189e-35
 weight: 1.071429e-78 1.607143e-78
 weight: 1.693308e-139 2.539961e-139

 weight: 1.595769 2.393654
 weight: 5.585192 5.18625
 weight: 2.880126e-35 4.320189e-35
 weight: 1.071429e-78 1.607143e-78
 weight: 1.693308e-139 2.539961e-139

 weight: 1.595769 2.393654
 weight: 5.585192 5.18625
 weight: 5.585192 5.18625
 weight: 1.071429e-78 1.607143e-78
 weight: 1.693308e-139 2.539961e-139

 weight: 1.595769 2.393654
 weight: 5.585192 5.18625
 weight: 5.585192 5.18625
 weight: 5.585192 5.18625
 weight: 1.693308e-139 2.539961e-139

 weight: 1.595769 2.393654
 weight: 5.585192 5.18625
 weight: 5.585192 5.18625
 weight: 5.585192 5.18625
 weight: 5.585192 5.18625

 tour : 0 -> 3 -> 1 -> 4 -> 2 -> 0

(7, 3) -> (0, 12) -> (4, 6) -> (5, 10) -> (14, 13) -> (7, 3)

Optimizing a Stock Portfolio

Developing a neural network approach to stock selection in securities trading is similar to applying neural networks to other nonlinear optimization problems. The seminal work of Markowitz, which gave a mathematical formulation of an objective function in the context of portfolio selection, forms the basis for such a development. There is risk to be minimized or capped, there are profits to be maximized, and investment capital is naturally a limited resource.
The objective function is formulated so that the optimal portfolio minimizes it. It contains a term for each pair of stocks, involving the product of the two stock prices weighted by the covariance of that pair of prices; these product terms make the objective function quadratic. There are also linear terms, one per stock, with the stock's average return as coefficient. You can already see that this optimization problem falls into the category of quadratic programming problems, which produce real-number values for the variables in the optimal solution. Additional terms are included in the objective function to ensure that the constraints of the problem are satisfied. A practical consideration is that a real-number value for the amount of a stock may be unrealistic, since fractional numbers of shares may not be purchasable. It makes more sense to require the variables to take only the values 0 or 1: either you buy a stock and include it in the portfolio, or you do not buy it at all. This is what is usually called a zero-one programming problem, and you can also identify it as a combinatorial problem. You already saw a combinatorial optimization problem in the traveling salesperson problem, where the constraints were incorporated as special terms in the objective function, so that the only function to be computed is the objective function itself.
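Purely as an illustration (none of this code or data is from the book; the covariance matrix, average returns, budget, and penalty weight are invented placeholders), a zero-one objective of the general shape just described, with the constraint folded in as a penalty term, might be coded as follows.

// portfolio_energy.cpp - illustrative sketch only; numbers are made up
#include <iostream>

const int N = 4;                              // candidate stocks
double cov[N][N] = {                          // pairwise covariances (placeholder)
    {0.10, 0.02, 0.01, 0.03},
    {0.02, 0.08, 0.02, 0.01},
    {0.01, 0.02, 0.12, 0.04},
    {0.03, 0.01, 0.04, 0.09}};
double avg_ret[N] = {0.11, 0.09, 0.15, 0.12}; // average returns (placeholder)
double budget  = 2.0;                         // number of stocks to hold
double lambda_ = 1.0;                         // weight on expected return
double penalty = 10.0;                        // weight on the budget constraint

// Energy-style objective for a zero-one portfolio x[i] in {0,1}:
// quadratic covariance (risk) terms, linear return terms, and a
// penalty term enforcing the budget constraint.
double energy(const int x[N])
{
    double risk = 0.0, ret = 0.0, held = 0.0;
    for (int i = 0; i < N; ++i) {
        ret  += avg_ret[i] * x[i];
        held += x[i];
        for (int j = 0; j < N; ++j)
            risk += cov[i][j] * x[i] * x[j];
    }
    double violation = held - budget;
    return risk - lambda_ * ret + penalty * violation * violation;
}

int main()
{
    // For a small pool, simply enumerate all 2^N zero-one portfolios.
    int best_mask = 0;
    double best_e = 1e30;
    for (int mask = 0; mask < (1 << N); ++mask) {
        int x[N];
        for (int i = 0; i < N; ++i) x[i] = (mask >> i) & 1;
        double e = energy(x);
        if (e < best_e) { best_e = e; best_mask = mask; }
    }
    std::cout << "best portfolio: ";
    for (int i = 0; i < N; ++i)
        if ((best_mask >> i) & 1) std::cout << "stock " << i << " ";
    std::cout << "(energy " << best_e << ")\n";
    return 0;
}

With only a handful of candidate stocks, main can afford to try every zero-one portfolio; the point of casting the objective as a network energy, discussed next, is to avoid such exhaustive search when the pool is large.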
Treating the objective function as the energy of a network in a given state, you can use the simulated annealing paradigm and the Hopfield network to solve the problem. You then have a neural network in which each neuron represents a stock, and the size of the layer is determined by the number of stocks in the pool from which you want to build your portfolio. The paradigm suggested here strives to minimize the energy of the machine, so the objective function must be stated as a minimization in order to get the best possible portfolio.
Tabu Neural Network

Tabu search, popularized by Fred Glover through his contributions, is a paradigm that has been used successfully on many optimization problems. It is a method that can steer a search procedure out of a limited domain into an extended one, so as to find a solution better than a local minimum or a local maximum. Tabu search (TS) suggests that an adaptive memory and a responsive exploration need to be part of the algorithm. Responsive exploration exploits the information derivable from a selected strategy. Such information can be more useful, even when the selected strategy is in some sense a bad one, than what you can get from a good strategy based on randomness, because it offers an opportunity to modify the strategy intelligently: it gives you clues as to how the strategy should change. Once you have a paradigm that incorporates adaptive memory, the relevance of associating a neural network becomes clear; a TANN is a Tabu neural network. Tabu search and Kohonen's self-organizing map share a common approach in that both work with "neighborhoods." As a new neighborhood is determined, TS prohibits some of the earlier solutions, classifying them as tabu; such solutions contain attributes that are identified as tabu-active. Tabu search has both short-term memory (STM) and long-term memory (LTM) components; the short-term memory is sometimes called recency-based memory, and the long-term memory frequency-based memory. This adaptive memory makes the search method much more potent, and it does not necessitate longer runs of the search process.
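To make the short-term memory idea concrete, here is a minimal sketch of a recency-based tabu list; the toy move set, the cost values, and the tenure of five iterations are invented for illustration and are not prescribed anywhere in this chapter.

// tabu_sketch.cpp - toy illustration of a recency-based tabu list
#include <iostream>
#include <vector>

const int TENURE = 5;               // iterations a used move stays tabu

struct TabuList {
    std::vector<int> expires;       // iteration at which each move stops being tabu
    TabuList(int moves) : expires(moves, 0) {}
    bool is_tabu(int move, int iter) const { return expires[move] > iter; }
    void forbid(int move, int iter)        { expires[move] = iter + TENURE; }
};

int main()
{
    const int MOVES = 10;           // toy move set with made-up costs
    int cost[MOVES] = {7, 3, 9, 3, 8, 2, 6, 2, 5, 4};
    TabuList tabu(MOVES);

    for (int iter = 0; iter < 8; ++iter) {
        int best = -1;
        for (int m = 0; m < MOVES; ++m) {
            if (tabu.is_tabu(m, iter)) continue;   // skip tabu-active moves
            if (best < 0 || cost[m] < cost[best]) best = m;
        }
        tabu.forbid(best, iter);    // the chosen move becomes tabu for a while
        std::cout << "iteration " << iter << ": move " << best
                  << " (cost " << cost[best] << ")\n";
    }
    return 0;
}

Because recently used moves are excluded, the search is pushed away from the neighborhood of a local optimum; aspiration rules (accepting a tabu move when it would improve the best solution found so far) and frequency-based long-term memory are the usual refinements of this basic scheme.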
Some examples of applications using Tabu search are:

  •  Training neural nets with the reactive Tabu search
  •  Tabu Learning: a neural network search method for solving nonconvex optimization problems
  •  Massively parallel Tabu search for the quadratic assignment problem
  •  Connection machine implementation of a Tabu search algorithm for the traveling salesman problem
  •  A Tabu search procedure for multicommodity location/allocation with balancing requirements

Summary

The traveling salesperson problem is presented in this chapter as an example of nonlinear optimization with neural networks. Details are given of the formulation of the energy function and its evaluation. Approaches to the solution of the traveling salesperson problem using a Hopfield network and using a Kohonen self-organizing map are presented, and C++ programs are included for both. The output of the C++ program for the Hopfield network covers examples of four- and five-city tours, and the output of the C++ program for the Kohonen approach is given for a five-city tour, for illustration.
The solution obtained is good, if not optimal. The difficulty with the Hopfield approach lies in selecting appropriate values for the parameters: Hopfield's choices are given for his ten-city tour problem, but the same values may not work for a different number of cities. The version of this approach given by Anzai is also discussed briefly. The use of neural networks for nonlinear optimization as applied to portfolio selection is also presented in this chapter, and you are introduced to Tabu search and its use in optimization with neural computing.
Chapter 16
Applications of Fuzzy Logic

Introduction

Up until now, we have discussed how fuzzy logic can be used in conjunction with neural networks: we looked at a fuzzifier in Chapter 3 that takes crisp input data and creates fuzzy outputs, which can then be used as inputs to a neural network, and in Chapter 9 we used fuzzy logic to create a special type of associative memory called a FAM (fuzzy associative memory). In this chapter, we focus on applications of fuzzy logic by itself. The chapter starts with an overview of the different types of application areas for fuzzy logic and then presents two application domains: fuzzy control systems, and fuzzy databases and quantification. In these sections we also introduce some further concepts of fuzzy logic theory.