The Failures of Mathematical Anti-Evolutionism
(Stearns 2020, 9)
Molecular evolution understood within a broadly Darwinian framework improves people’s lives in tangible ways. In contrast, intelligent design theory just sits there and does nothing.

6.7 Dawkins’ Weasel Experiments

The remainder of this chapter involves arguments drawn from a branch of mathematics known as “combinatorial search.” If you have never seen this subject before it can seem very abstract. It will be helpful to see some of its basic principles illustrated in the context of a concrete example, and that is what we shall pursue here.

The title of this section is drawn from a demonstration presented by biologist Richard Dawkins in his book The Blind Watchmaker (Dawkins 1986). The purpose of the book was to present fundamental ideas of evolutionary biology to a lay audience. The book’s early chapters were meant to clear up common misconceptions about the theory, and one of those misconceptions was that evolution has to build complex structures just by random chance.

To dramatize the actual manner in which evolution is said to build complex structures, Dawkins carried out two separate experiments. Both started the same way. He programmed a computer to recognize the target phrase “methinks it is like a weasel,” which is drawn from Hamlet. Note that this involves 28 characters (including spaces), drawn from an alphabet of 27 possibilities (26 letters and a space). In both experiments, he had the computer generate random strings of 28 characters in an attempt to produce the target phrase.

This is where the two experiments diverged. In the first experiment, he simply had the computer keep generating strings at random, one right after the other. This was meant to model the idea of evolution having to produce complex adaptations just by random chance. Of course, the computer never got close to the target phrase by this method. The space of possibilities is enormous, and the experiment was such that each sequence of 28 characters had the same probability as any other of appearing.

The second experiment proceeded differently. As before, the computer started by generating a small number of random sequences. This time, however, the computer now scanned the offerings for any strings which, just by chance, had a vague resemblance to the target phrase. In the run of the experiment described by Dawkins, the winning string was:

wdlmnlt dtjbkwirzrezlmqco p

This does not look very promising, but notice that the ‘e’ in the second grouping is actually in the correct place. The other strings were now discarded, and this string served as the starting point of the next generation. A new set of strings was generated from this one, but in such a way that each letter was given some chance of mutating to any of the other letters, with each letter having an equal probability of appearing. Then these new strings were surveyed to find the ones with the closest resemblance to the target phrase, and the process began anew. Dawkins reports that after ten generations the winning phrase was

mdldmnls itjiswhrzrez mecs p

A quick scan shows that several letters are now in the correct places. The winners after 20 and 30 generations were

meldinls it iswprke z wecsel
methings it iswlike b wecsel,

and the complete phrase was attained after 43 generations.
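To make the two procedures concrete, here is a minimal Python sketch of both experiments. Dawkins did not publish his code, so the population size (100 offspring per generation) and the per-character mutation probability (0.04) used below are assumptions chosen only to make the demonstration run quickly; the specific numbers are not his.

```python
import random
import string

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "  # 27 characters: 26 letters plus a space

def fitness(s):
    """Number of positions at which s matches the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def random_string():
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

def blind_search(max_tries=1_000_000):
    """Experiment 1: keep drawing whole strings at random and hope for the best."""
    for i in range(max_tries):
        if random_string() == TARGET:
            return i
    return None  # with 27^28 possibilities, this is what will happen

def cumulative_selection(pop_size=100, mutation_rate=0.04):
    """Experiment 2: keep the best string each generation and mutate copies of it."""
    best = random_string()
    generation = 0
    while best != TARGET:
        generation += 1
        offspring = [
            "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                    for c in best)
            for _ in range(pop_size)
        ]
        best = max(offspring, key=fitness)
    return generation

if __name__ == "__main__":
    print("Cumulative selection reached the target in",
          cumulative_selection(), "generations")
```

On a typical run the cumulative version finds the phrase within a hundred or so generations, while the blind version never gets anywhere near it, which is exactly the contrast Dawkins was illustrating.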
Dawkins’ point was that evolution by natural selection is sometimes misunderstood to be acting in the manner of the first experiment, but in reality it is far closer to the second experiment. Evolution builds complex structures by accumulating small improvements, with selection ensuring there is no backsliding while waiting for the next improvement to appear.

Dawkins intended this to clarify a conceptual point about evolution, but we can use it as a straightforward example of a combinatorial search problem. We started with a very large space of possibilities. Using the techniques from Section 5.4, we find that the total number of 28-character strings drawn from an alphabet with 27 possibilities is 27^28 (that is, 27 multiplied by itself 28 times), which is roughly 1.2 × 10^40. Within this space, we are searching for a small target, which in this case is the phrase “methinks it is like a weasel.” There is also a ranking on the strings that allows us to say that some are better than others, which in this context means that some strings are closer than others to the target phrase. In combinatorial search problems, this ranking is usually called a “fitness function” for the space.

To picture what is happening, imagine the target phrase sitting at the top of a large hill. Just a little bit below the top are all the strings that are one character away from the target. One notch below that are the strings that are two characters away, and lower still are the strings that are three characters away. There is a vast ocean of strings that have no letters in common with the target, and you can picture them as not even being on the hill at all. The picture we have is that of a single, gently sloping hill sitting in the middle of an otherwise flat plane. This picture is commonly referred to as the “fitness landscape” for the problem.

We now try to search the space for the target phrase. We cannot possibly try every possible string because the space is far too big, even for a computer. So we employ an algorithm, which is basically a strategy for deciding which specific strings to sample. In Dawkins’ first experiment, we used an algorithm known as “blind search.” In other words, we chose our strings at random and hoped for the best. This approach is extremely unlikely to be successful in a large space.

In Dawkins’ second experiment, we used what is known as a “hill-climbing” algorithm. We started by choosing strings at random, but just by chance there will inevitably be one that is just barely on the hill. We used that as our starting point for the next round, by throwing off random variations in all directions. Again, just by chance a small number of strings will end up slightly higher up the hill than the previous one, and they form the basis for the next generation. This is a much more strategic way of searching the space. After the first generation, most of the vast space can be ignored since it is so far from our starting point that it will never arise during the experiment. The point is that if you want to be successful at searching a large space, then you had better be clever about choosing your algorithm.

Dawkins’ demonstration was very effective at clearing up a common confusion about evolution. His simulation captured enough of the important aspects of evolution to show that cumulative selection will very quickly achieve what blind search will never achieve at all.
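The size of the search space, and the hopelessness of blind search within it, can be checked directly. The waiting-time figure below is just the standard expected value for repeated independent draws, and the rate of a billion guesses per second is an illustrative assumption, not a number from Dawkins or from the book.

```python
# Size of the space of 28-character strings over a 27-letter alphabet.
space_size = 27 ** 28
print(space_size)            # a 41-digit number, roughly 1.2 x 10^40
print(len(str(space_size)))  # 41

# Blind search samples uniformly, so the expected number of draws before
# hitting the single target string equals the size of the space itself.
# At a billion guesses per second that works out to about 3.8 x 10^23 years.
seconds = space_size / 1e9
print(seconds / (60 * 60 * 24 * 365.25), "years")
```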
However, there is also an important difference between the demonstration and the evolutionary process. This difference, noted by Dawkins in his presentation, is that evolution has no notion of a “target phrase.” Evolution is not searching for a pre-set goal. It just sort of meanders around the space. This is an important point, but it is not as important as anti-evolutionists think it is. We will revisit this issue in Section 6.9.

6.8 The No Free Lunch Theorems

Let us quickly review our progress. We have emphasized the extent to which anti-evolutionists rely on the metaphor of a search in discussing evolution. The arguments we have considered to this point have all related to the arrangement of the points within the search space. One line of attack asserted that functional structures represented points of such low probability that they could never be found by known evolutionary mechanisms. A different line of attack asserted that functional structures were so isolated within the space that evolution could never bridge the gaps between them. We found both lines of attack to be seriously wanting.

However, the search metaphor has two parts. One part is the space itself and the arrangement of points within it. The other part is the algorithm used to search the space, which in evolution is natural selection acting on chance genetic variations. In recent years, anti-evolutionists have focused much of their fire on the algorithm underlying the evolutionary process. In particular, they employ a collection of results known as the “No Free Lunch” (NFL) theorems, which were published by David Wolpert and William Macready (Wolpert and Macready 1997).

To understand the anti-evolutionary argument, we first need to generalize the examples of Section 6.7. Mathematicians and computer scientists frequently confront problems of the following sort: There is a large space of possibilities under consideration. Each point in the space has a fitness associated with it, meaning that there is a ranking that tells us that some points are better for our purposes than others. Our goal is to find a point that maximizes fitness.

If the space of possibilities is small, then we can just test each point individually until we find the one with maximum fitness. This is usually not feasible in practical problems because the space of possibilities is much too large. For such problems, only certain points can be sampled, and this means some algorithm must be employed for deciding which points to check. In other words, there must be some formal procedure for deciding which point to examine next, given some knowledge of what has already been searched. The blind search and hill-climbing approaches of Section 6.7 are two examples of possible algorithms.

As in our discussion of Dawkins’ weasel experiments, we now introduce a visual metaphor. We can imagine the space of possibilities arranged as the points of the xy-plane. The fitness of any point in the plane can then be viewed as a number on the z-axis above the plane. In this way, we get a three-dimensional surface known, again, as a “fitness landscape.” The success of an algorithm will depend on the shape of the landscape it confronts. If the landscape consists of a single hill with a clear maximum point at the top, as in the weasel experiment, then a simple hill-climbing algorithm will work quite well.
If instead the surface is very rugged, then the algorithm might get stuck at a local maximum, even though there are better points elsewhere on the surface. This is shown in Figure 6.1. There are many other search algorithms available, some of them very ingenious and complex. However, it always seems to be the case that any algorithm works well on some fitness landscapes and not so well on others. In practice, algorithms are typically devised with specific search problems in mind, so it is not surprising that they work better on some surfaces than on others. It is something of an art to know which sort of algorithm to use on which problem, and it can be an inconvenience trying to determine the appropriate algorithm for your particular problem.

What would be really nice is an algorithm that worked well on any fitness landscape. That would certainly simplify matters. According to the No Free Lunch (NFL) theorems, there is no such algorithm. If an algorithm works well on certain landscapes, then it must perform correspondingly poorly on others; averaged over all possible fitness landscapes, no search algorithm does better than any other.
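A one-dimensional toy example makes the “stuck at a local maximum” failure easy to see. The two landscapes below are invented purely for illustration, not taken from the book: the same greedy hill-climbing rule that reliably solves the smooth single-peak landscape stalls on the rugged one whenever it happens to start near the lower peak.

```python
import random

def hill_climb(fitness, start, lo=0, hi=100, steps=1000):
    """Greedy local search on the integers lo..hi: move to a neighbor
    only if it improves fitness; stop when neither neighbor does."""
    x = start
    for _ in range(steps):
        neighbors = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(x):
            return x  # local maximum reached
        x = best
    return x

def smooth(x):
    # One gentle hill with its global maximum at x = 50.
    return -(x - 50) ** 2

def rugged(x):
    # Two peaks: a low local maximum at x = 20 and the true maximum at x = 80.
    return max(30 - abs(x - 20), 60 - abs(x - 80))

start = random.randint(0, 100)
print("smooth landscape:", hill_climb(smooth, start))  # always ends at 50
print("rugged landscape:", hill_climb(rugged, start))  # ends at 20 or 80, depending on start
```

On the smooth landscape the greedy rule always finds the true peak; on the rugged one its success depends entirely on where it happens to start, which is the sense in which an algorithm’s performance is tied to the landscape it confronts.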