Ghost In The Machine – Richard Milton

This is chapter 15 (“The Ghost in the Machine”) from Richard Milton’s excellent book, “Shattering the Myths of Darwinism” (PDF), as well as a bit from near the end of the book regarding a “spellchecking” program that shores up his ideas from chapter fifteen. It is really a response to the idea that a computer program can be shown to “evolve” – that is, that novel information somehow arises in the program, adding to its specificity apart from the designers/software engineers.

I would be remiss not to link to an article that also goes through this example well:

Enjoy… mind you, I read this book in 1998, but the ideas here have held up through today.


Ghost in the Machine
pp. 167-176


[167>] Russell and Séguin’s 1982 picture of a human-looking “evolved” version of a dinosaur was an impressive feat combining science and imagination in a constructive and entertaining way. Yet few in 1982 foresaw that in little more than a decade, over 100 million people around the world would pay to be scared by the even more impressive feat of the computer-generated dinosaurs of “Jurassic Park”.

Nothing that has entered the evolution debate since Darwin’s time has promised to illuminate the subject so much as the modern computer and its apparently limitless ability to represent, on the monitor-screen, compelling visual solutions to the most abstruse mathematical questions.

The information handling capacity of electronic data processing, with its obvious analogy to DNA, has been enthusiastically enlisted by computer-literate Darwinists as offering powerful evidence for their theory; while genetic software systems, said to emulate the processes of genetic mutation and natural selection at speeds high enough to make the process visible, have become a feature of most up-to-date biology laboratories.

The computer has been put to many ingenious uses in the service of Darwinist theory. And it has changed the minds of not a few skeptics by its powerful visual imagery and uncanny ability to bring extinct creatures – or even creatures that never lived – to life in front [168>] of us. But, compelling though the visual images are, how much confidence should we put in the computer as a guide to the evolution of life?

In his book The Blind Watchmaker Richard Dawkins describes a computer program he wrote which randomly generates symmetrical figures from dots and lines. These figures, to a human eye, have a resemblance to a variety of objects. Dawkins gives some of them insect and animal names, such as bat, spider, fox or caddis fly. Others he gives names like lunar lander, precision balance, spitfire, lamp and crossed sabers.

Dawkins calls these creations “biomorphs”, meaning life shapes or living shapes, a term he borrows from fellow zoologist Desmond Morris. He also feels very strongly that in using a computer program to create them, he is in some way simulating evolution itself. His approach can be understood from this extract:

Nothing in my biologist’s intuition, nothing in my 20 years experience of programming computers, and nothing in my wildest dreams, prepared me for what actually emerged on the screen. I can’t remember exactly when in the sequence it first began to dawn on me that an evolved resemblance to something like an insect was possible. With a wild surmise, I began to breed generation after generation, from whichever child looked most like an insect. My incredulity grew in parallel with the evolving resemblance…. Admittedly they have eight legs like a spider, instead of six like an insect, but even so! I still cannot conceal from you my feeling of exultation as I first watched these exquisite creatures emerging before my eyes.[1]

Dawkins not only calls his computer drawings “biomorphs”, he gives some of them the names of living creatures. He also refers to them as “quasi-biological” forms and in a moment of excitement calls them “exquisite creatures”. He plainly believes that in some way they correspond to the real world of living animals and insects. But they do not correspond in any way at all with living things, except in the purely trivial way that he sees some resemblance in their shapes. The only thing about the “biomorphs” that is [169>] biological is Richard Dawkins, their creator. As far as the “spitfire” and the “lunar lander” are concerned there is not even a fancied biological resemblance.

The program he wrote and the computer he used have no analog at all in the real biological world. Indeed, if he set out to create an experiment that simulates evolution, he has only succeeded in making one that simulates special creation, with himself in the omnipotent role.

His program is not a true representation of random mutation coupled with natural selection. On the contrary it is dependent on artificial selection in which he controls the rate of occurrence of mutations. Despite Dawkins’s own imaginative interpretations, and even with the deck stacked in his favor, his biomorphs show no real novelty arising. There are no cases of bears turning into whales.

There is also no failure in his program: his biomorphs are not subject to fatal consequences of degenerate mutations like real living things. And, most important of all, he chooses which are the lucky individuals to receive the next mutation – it is not decided by fate – and of course he chooses the most promising ones (“I began to breed from whichever child looked most like an insect.”) That is why they have ended up looking like recognizable images from his memory. If his mutations really occurred randomly, as in the real world, Dawkins would still be sitting in front of his screen watching a small dot and waiting for it to do something.

Above all, his computer experiment falsifies the most important claim of mechanistic Darwinian thinking: that, through natural processes, living things could come into being without any precursor. What Dawkins has shown is that, if you want to start the evolutionary ball rolling, you need some form of design to take a hand in the proceedings, just as he himself had to sit down and program his computer.

In fact, his experiment shows very much the same sort of results that field work in biology and zoology has shown for the past hundred years: there is no evidence for beneficial spontaneous genetic mutation; there is no evidence for natural selection (except as an empty tautology); there is no evidence for either as significant evolutionary mechanisms. There is only evidence of an unquenchable optimism among Darwinists that given enough [170>] time, anything can happen – the argument from probability.

But although Dawkins’s program does not qualify as a simulation of random genetic mutation coupled with natural selection, it does highlight at least one very important way in which computer programs resemble genetic processes. Each instruction in a program must be carefully considered by the programmer as to both its immediate effect on the computer hardware and its effects on other parts of the program. The letters and numbers which the programmer uses to write the instructions have to be written down with absolute precision with regard to the vocabulary and syntax of the programming language he uses in order for the computer system to function at all. Even the most trivial error can lead to a complete malfunction. In 1977, for example, an attempt by NASA to launch a weather satellite from Cape Canaveral ended in disaster when the launch vehicle went off course shortly after takeoff and had to be destroyed. Subsequent investigation by NASA engineers found that the accident was caused by failure of the onboard computer guidance system – because a single comma had been misplaced in the guidance program.

Anyone who has programmed a computer to perform the simplest task in the simplest language – Basic, for instance – will understand the problem. If you make the slightest error in syntax, misplacing a letter, a punctuation mark or even a space, the program will not run at all.
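To make the point concrete – in Python rather than the Basic mentioned above, a substitution of my own – a single missing character is enough to stop a program from running at all:

```python
# A complete, working two-line program:
total = sum([1, 2, 3])
print(total)

# The same statement with a single character missing – the closing ']'
# – will not run at all; the interpreter halts with a SyntaxError
# before executing anything:
#
#   total = sum([1, 2, 3)
```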

In just the same way, each nucleotide has to be “written” in precisely the correct order and in precisely the correct location in the DNA molecule for the offspring to remain viable, and, as described earlier, major functional disorders in humans, animals and plants are caused by the loss or displacement of a single DNA molecule, or even a single nucleotide within that molecule.

In order to simulate neo-Darwinist evolution on his computer, it is not necessary for Dawkins to devise complex programs that seek to simulate insect life. All he has to do is to write a program containing a large number of instructions (3,000 million instructions if he wishes to simulate human DNA) that continually regenerates its own program code, but randomly interferes with the code in trivial ways, such as transposing, shifting or omitting characters. (The system must be set to restart itself after each fatal “birth”.)

[171>] The result of this experiment would be positive if the system ever develops a novel function that was not present in the original programming. One way of defining “novelty” would be to design the program so that, initially, its sole function was to replicate itself (a computer virus). A novel function would then be anything other than mere reproduction. In practice, however, I do not expect the difficulty of defining what constitutes a novelty to pose any problem. It is extremely improbable that Dawkins’s program will ever work again after the first generation, just as in real life, mutations cause genetic defects, not improvements.
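A minimal sketch of the experiment just proposed, assuming Python as the stand-in for program code; the self-copying “organism”, the mutation alphabet and the function names (mutate, is_viable) are illustrative inventions of mine, not anything specified in the book:

```python
import random

# The "organism" is a tiny program whose only function is to return a
# copy of its own source text (a crude stand-in for "copy me").
PARENT = "def replicate(src):\n    return src\n"

def mutate(src):
    """Randomly replace, transpose or drop a single character."""
    i = random.randrange(len(src))
    kind = random.choice(("replace", "swap", "delete"))
    if kind == "swap" and i < len(src) - 1:
        return src[:i] + src[i + 1] + src[i] + src[i + 2:]
    if kind == "replace":
        return src[:i] + random.choice("abcdefgh():=\n ") + src[i + 1:]
    return src[:i] + src[i + 1:]

def is_viable(src):
    """A 'birth' is viable only if the mutated text still compiles and
    still hands back a faithful copy of itself when executed."""
    try:
        namespace = {}
        exec(compile(src, "<mutant>", "exec"), namespace)
        return namespace["replicate"](src) == src
    except Exception:
        return False

parent, viable, fatal = PARENT, 0, 0
for generation in range(100_000):
    child = mutate(parent)
    if is_viable(child):
        parent, viable = child, viable + 1   # breed on from the mutant
    else:
        fatal = fatal + 1                    # fatal "birth": restart from the parent

print(f"viable births: {viable}, fatal births: {fatal}")
```

The expectation argued above is that almost every visible change will be fatal; running the sketch makes that easy to check, and any child that did something genuinely new – anything beyond copying itself – would count as the “novel function” the experiment looks for.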

Outside of the academic world there are a number of important commercial applications based on computer simulations that deserve to be seriously examined. A good example is the field of aircraft wing design, where computers have been used by aircraft engineers to develop the optimum airfoil profile. In the past, wing design has been based largely on repetitive trial-and-error methods: a hypothetical wing shape is drawn up; a physical model is made and is aerodynamically tested in the wind tunnel. Often the results of such an empirical design approach are predictable: lengthening the upper wing curve, in relation to the lower, generally increases the upward thrust obtained. But sometimes results are very unpredictable, as when complex patterns of turbulence combine at the trailing edge to produce drag, which lowers wing efficiency and causes destructive vibration.

Engineers at Boeing Aircraft tried a new approach. They created a computer model which was able to “mutate” a primitive wing shape at random – to stretch it here or shrink it there. They also fed into the model rules that would enable the computer to simulate testing the resulting design in a computerized version of the “wind tunnel” – the rules of aerodynamics.

The engineers say this process has resulted in obtaining wing designs offering maximum thrust and minimum drag and turbulence, more quickly than before and without any human intervention once the process has been set in motion.

Designers have made great savings in time compared with previous methods and the success of the computer in this field has given rise to a new breed of application dubbed “genetic software”. Indeed, on the face of it, the system is acting in a Darwinian manner. The [172>] computer (an inanimate object) has produced an original and intelligent design (comparable, say, with a natural structure such as a bird’s wing) by random mutation of shape combined with selection according to rules that come from the natural world – the laws of aerodynamics. If the computer can do this in the laboratory in a few hours or days, what could nature not achieve in millions of years?
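A toy sketch of the “genetic software” idea, assuming Python; it is emphatically not Boeing’s program – the three-number “wing” and the scoring function are invented stand-ins for a real airfoil model and the rules of aerodynamics:

```python
import random

def fitness(wing):
    """Stand-in for the computerised 'wind tunnel'.  The preferred
    values (0.4, 0.12, 0.3) are invented and written in by the
    programmer; real aerodynamic rules would take their place."""
    camber, thickness, sweep = wing
    return -((camber - 0.4) ** 2 +
             (thickness - 0.12) ** 2 +
             (sweep - 0.3) ** 2)

def mutate(wing, step=0.05):
    """Randomly stretch or shrink one dimension of the design."""
    child = list(wing)
    i = random.randrange(len(child))
    child[i] += random.uniform(-step, step)
    return child

design = [0.1, 0.5, 0.0]              # crude starting shape
for generation in range(5_000):
    candidate = mutate(design)
    if fitness(candidate) > fitness(design):
        design = candidate            # "selection": keep the better wing

print("final design:", [round(x, 3) for x in design])
```

Everything doing the work here – the representation of the wing, the scoring rules and their target values, the size of the mutation step – is supplied in advance by the programmer, which is precisely the objection developed below.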

The fallacies on which this case is constructed are not very profound but they do need to be nailed down. In a recently published popular primer on molecular biology, Andrew Scott’s Vital Principles, this very example is given under the heading “the creativity of evolution”. The process itself is called “computer generated evolution” as though it were analogous to an established natural process of mutation and selection.[2]

The most important fallacy in this argument is the idea that somehow a result has occurred which is independent of, or in some way beyond the engineers, who merely started the machine by pressing a button. Of course, the fact is that a human agency has designed and built the computer and programmed it to perform the task in question. As with the previous experiment, this begs the only important question in evolution theory: could complex structures have arisen spontaneously by random natural processes without any precursor? Like all other computer simulation experiments, this one actually makes a reasonable case for special creation – or some form of vitalist-directed design – because it specifically requires a creator to build the computer and devise and implement the program in the first place.

However, there are other important fallacies too. The only reason that the Boeing engineers are able to take the design produced on paper by their computer and translate that design into an aircraft that flies is that they are employing an immense body of knowledge – not possessed by the computer – regarding the properties of materials from which the aircraft will be made and the manufacturing processes that will be used to make it. The computer’s wing is merely an outline on paper, an idea: it is of no more significance to aviation than a wave outline on the beach or a wind outline in the desert. The real wing has to actually fly in the air with real passengers. The decisive events that make that idea into a reality are a long, complex sequence of human operations [173>] and judgments that involve not only the shaping and fastening of metal for wings but also the design and manufacture of airframes and jet engines. These additional complexities are beyond the capacity of the computer, not merely in practice but in principle, because computers cannot even make a cup of coffee, let alone an airliner, without being instructed every step of the way.

In order for a physical structure like an aircraft wing to evolve by spontaneous random means, it is necessary for natural selection to do far more than select an optimum shape. It must also select the correct materials, the correct manufacturing methods (to avoid failure in service) and the correct method of integrating the new structure into its host creature. These operations involve genetic engineering principles which are presently unknown. And because they are unknown by us, they cannot be programmed into a computer.

There is also an important practical reason why the computer simulation is not relevant to synthetic evolution: an aircraft wing differs from a natural wing in a fundamental way. The aircraft wing is passive, since the forward movement of the aircraft is derived from an engine. A natural wing like a bird’s, however, has to provide upthrust and the forward motion necessary to generate that lift, making it a complex, articulated, active mechanism. The engineering design problem of evolving a passive wing is merely a repetitive mechanical task – that is why it is suitable for computerization. So far, no-one has suggested programming a computer to design a bird’s wing by random mutation because the suggestion would be seen as ludicrous. Even if all of the world’s computers were harnessed together, they would be unable to take even the most elementary steps needed to design a bird’s wing unless they were told in advance what they were aiming at and how to get there.

If computers are no use to evolutionists as models of the hypothetical selection process, they are proving invaluable in another area of biology; one that seems to hold out much promise to Darwinists – the field of genetics. Since Watson and Crick elucidated the structure of the DNA molecule, and since geneticists began unraveling the meaning of the genetic code, the center of gravity of evolution theory has gradually shifted away from the earth sciences – geology and paleontology – toward molecular biology.

[174>] This shift in emphasis has occurred not only because of the attraction of the new biology as holding the answers to many puzzling questions, but also because the traditional sciences have proved ultimately sterile as a source of decisive evidence. The gaps in the fossil record, the incompleteness of the geological strata, and the ambiguity of the evidence from comparative anatomy, ultimately caused Darwinists to give up and look somewhere else for decisive evidence. Thanks to molecular biology and computer science they now have somewhere else to try.

Darwinists seem to have drawn immense comfort from their recent discoveries at the cellular level and beyond, behaving and speaking as though the new discoveries of biology represent a triumphant vindication of their long-held beliefs over the irrational ideas of vitalists. Yet the gulf between what Darwinists claim for molecular biological discoveries and what those discoveries actually show is only too apparent to any objective evaluation.

Consider these remarks by Francis Crick, justly famous as one of the biologists who cracked the genetic code, and equally well known as an ardent supporter of Darwinist evolution. In his 1966 book Of Molecules and Men, in which he set out to criticize vitalism, Crick asked which of the various molecular biological processes are likely to be the seat of the “vital principle”.[3] “It can hardly be the action of the enzymes,” he says, “because we can easily make this happen in a test tube. Moreover most enzymes act on rather simple organic molecules which we can easily synthesize.”

There is one slight difficulty, but Crick easily deals with it: “It is true that at the moment nobody has synthesized an actual enzyme chemically, but we can see no difficulty in doing this in principle, and in fact I would predict quite confidently that it will be done within the next five or ten years.”

A little later, Crick says of mitochondria (important objects in the cell that also contain DNA):

It may be some time before we could easily synthesise such an object, but eventually we feel that there should be no gross difficulty in putting a mitochondrion together from its component parts.

This reservation aside, it looks as if any system of [175>] enzymes could be made to act without invoking any special principles, or without involving material that we could not synthesize in the laboratory. [4]

There is no question that Crick and Watson’s decoding of the DNA molecule is a brilliant achievement and one of the high points of twentieth-century science. But this success seems to me to have led many scientists to expect too much as a result.

Crick’s early confidence that an enzyme would be produced synthetically within five or ten years has not been borne out and biologists are further than ever from achieving such a synthesis. Indeed, reading and rereading the words above with the benefit of hindsight I cannot help but interpret them as saying “we are unable to synthesize any significant part of a cell at present, but this reservation aside, we are able to synthesize any part of the cell.”

Certainly great strides have been made. William Shrive, writing in the McGraw-Hill Encyclopedia of Science and Technology, says, “The complete amino acid sequence of several enzymes has been determined by chemical methods. By X-ray crystallographic methods it has even been possible to deduce the exact three-dimensional molecular structure of a few enzymes.”[5] But despite these advances no-one has so far synthesized anything remotely as complex as an enzyme or any other protein molecule.

Such a synthesis was impossible when Crick wrote in 1966 and remains impossible today. This is probably because there is a world of difference between having a neat table that shows the genetic code for all twenty amino acids (Alanine = GCA, Proline = CCA and so on) and knowing how to manufacture a protein. These complex molecules do not simply assemble themselves from a mixture of ingredients like a cup of tea. Something else is needed. What the something else is remains conjectural. If it is chemical it has not been discovered; if it is a process it is an unknown process; if it is a “vital principle” it has not yet been recognized. Whatever the something is, it is presently impossible to build a case either for Darwinism or against vitalism out of what we have learned of the cell and the molecules of which it is composed.

It is easy to see why evolutionists should be so excited about cellular discoveries: the mechanisms they have found appear to [176>] be very simple. But however simple they may seem, as yet no-one has succeeded in synthesizing any significant original structure from raw materials. We know the code for the building blocks; we don’t know the instructions for building a house with them.

Indeed, the discoveries of biochemistry and molecular biology have raised some rather awkward questions for Darwinists, which they have yet to address satisfactorily. For example, the existence of genetically very simple biological entities, such as viruses, seems to support Darwinist ideas about the origin of life. One can imagine all sorts of primitive life forms and organisms coming into existence in the primeval ocean and it seems only natural that one should find entities that are part way between the living and the nonliving – stepping stones to life as it were. It is only to be expected, says Richard Dawkins, that the simplest form of self-replicating object would merely be that part of the DNA program which says only “copy me”, which is essentially what a virus is.

The problem here is that viruses lack the ability to replicate unless they inhabit a host cell – a fully functioning cell with its own genetic replication mechanisms. So the first virus must have come after the first cell, not before it, as a satisfyingly Darwinian process would require.

But despite minor unresolved problems of this kind Darwinists still have one remaining card to play in support of their theory. It is the strongest card in their hand and the most powerful and decisive evidence in favor of Darwinian evolutionary processes.

[….]


pp. 223-227


[223>] Earlier on I referred to computers and their programs as a fruitful source of comparison with genetic processes since both are concerned with the storage and reliable transmission of large quantities of information. Arguing from analogy is a dangerous practice, but there is one phenomenon connected with computer systems that could be of some importance in understanding biological information processing strategies.

The phenomenon has to do with the computer’s ability to refer to a master list or template and to highlight any exceptions to this master list that it encounters during processing. This “exception reporting” is profoundly important in information processing. For instance, this book was prepared using a word-processing program that has a spelling checker. When invoked, the spell checker reads the typescript of the book and compares each word with its built-in dictionary, highlighting as potential mistakes those it does not recognize. Of course, it will encounter words that are spelled correctly but are not found in a normal dictionary – such as “deoxyribonucleic acid”. But the program is clever enough to allow me to add the novel word to the dictionary, so that the next time it is encountered it will be accepted as correct instead of reported as an exception – as long as I spell it correctly.

In other words, the spelling checker isn’t really a spelling checker. It has no conception of correct spelling. It is merely a mechanism [224>] for reporting exceptions. Using these methods, programmers can get computers to behave in an apparently intelligent or purposeful way when they are really only obeying simple mechanical rules. Not unnaturally, this gives Darwinists much encouragement to believe that life processes may at root be just as simple and mechanical.
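A minimal sketch of such an exception reporter, assuming Python; the class name, word list and methods are my own illustrations, not the actual spelling checker described above:

```python
# Minimal exception reporter: it knows nothing about spelling, it only
# reports words that are absent from its word list.
class ExceptionReporter:
    def __init__(self, known_words):
        self.known = set(w.lower() for w in known_words)

    def check(self, text):
        """Return the words in `text` that are not in the word list."""
        return [w for w in text.split() if w.lower() not in self.known]

    def add_word(self, word):
        """'Teach' the reporter a new word; from now on it is silently
        accepted, whether or not it was really spelled correctly."""
        self.known.add(word.lower())

checker = ExceptionReporter(["the", "structure", "of", "acid", "is"])
print(checker.check("the structure of deoxyribonucleic acid"))
# -> ['deoxyribonucleic']   (reported as an exception, not as 'wrong')

checker.add_word("deoxyribonucleic")
print(checker.check("the structure of deoxyribonucleic acid"))
# -> []                     (now accepted without comment)
```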

In cell biology there are natural chemical properties of complex molecules that lend themselves to automatic checking and excepting of this kind. For example, many molecules are stereospecific – they will attach only to certain other specific molecules and only in special positions. There are also much more complex forms of exception reporting, for instance as part of the brain’s (or, if you prefer, the mind’s) cognitive processes: as when we see and recognize a single face in the crowd or hear our name mentioned at a noisy cocktail party.

In the case of the spelling checker, the behavior of the system can be made to look more and more intelligent through a process of learning if, every time it highlights a new word, I add that word to its internal dictionary. If I continue for a long enough time, then eventually, in principle, the system will have recorded every word in the English language and will highlight only words that are indeed misspelled. It will have achieved the near-miraculous levels of efficiency and repeatability that we are used to seeing in molecular biological processes. But something strange has also been happening at the same time – or, rather, two strange things.

The first is that as its vocabulary grows, the spelling checker becomes less efficient at drawing possible mistakes to my attention. This unexpected result comes about in the following way. Remember, the computer knows nothing of spelling; it merely reports exceptions to me. To begin with, it has only, say, 50,000 standard words in its dictionary. This size of dictionary really only covers the common everyday words plus a modest number of proper nouns (for capital cities, common surnames and the like) and doesn’t leave much room for unusual words. It would, for instance, include a word like “great” but not the less frequently used word “grate”.

The result is that if I accidentally type “grate” when I really mean “great”, the spell checker will draw it to my attention. If, however, I enlarge the dictionary and add the word “grate”, the spell [225>] checker will ignore it in future, even though the chances are that it will occur only as a typing mistake – except in the rare case where I am writing about coal fires or cookery.

One can generalize this case by saying that when the dictionary has an optimum size of vocabulary, I get the best of both worlds: it points out misspellings of the most common words and reports anything unusual, which in most cases will probably be an error. (Obviously, to work at optimum efficiency, the size of the dictionary should be matched to the vocabulary of the writer.) As the dictionary grows in volume it becomes more efficient in one way, highlighting only real spelling errors, but less efficient in another: it becomes more probable that my typing errors will spell a real word – one that will not be reported – but not the word I mean to use. Paradoxically, although the spelling checker is more efficient, the resulting book is full of contextual errors: “pubic” instead of “public”, “grate” instead of “great” and so on.
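Reusing the ExceptionReporter sketch above, the trade-off is easy to reproduce: with a small word list the typo “grate” (for “great”) is reported; once “grate” is added to the list, the same typo passes silently:

```python
checker = ExceptionReporter(["this", "is", "a", "idea"])
print(checker.check("this is a grate idea"))
# -> ['grate']   (the small dictionary catches the typo for 'great')

checker.add_word("grate")          # enlarge the dictionary
print(checker.check("this is a grate idea"))
# -> []          (the typo now spells a 'real' word and is not reported)
```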

It requires a human intelligence – a real spelling checker, not a mechanical exception reporter – to make sure that the intended result is produced.

I said two strange things have been happening while I have been adding words to the spelling checker. The second is the odd occasion when the system has highlighted a real spelling mistake to me – say, a misspelling of “problem” – and I have mistakenly told the computer to add the word to its dictionary. This, of course, has the very unfortunate result that in future it will cease to highlight a real spelling mistake and will pass it as correct. The error is no longer an exception; it is now a dictionary word.

Under what circumstances am I most likely to issue such a wrong instruction? It is most likely to happen with words that I type most frequently and that I habitually mistype. Anyone who uses a keyboard every day knows that there are many such ‘favorite’ misspelled words that get typed over and over. Once again, only a real spelling checker, a human brain, can spot the error and correct it.

The reason that the computer’s spellchecker breaks down under these circumstances is that the simple mechanisms put in place do not work from first principles. They do not work in what electronics engineers call ‘real time’ (they are not in touch with the real world) and do not employ any real intelligent understanding [226>] of the tasks they are being called on to perform. So although the computer continues to work perfectly as it was designed to, it becomes more and more corrupted from the standpoint of its original function.

I believe that this analogy may well have some relevance to Darwinists’ belief that biological processes can at root be as simple as the spelling checker. It is easy to think of any number of simple cell replication mechanisms that rely on exception reporting of this kind. I believe that if biological processes were so simple, they too would become functionally corrupt unless there is some underlying or overall design process to which the simple mechanisms answer globally, and which is capable of taking action to correct mistakes. This is the mechanism that we see in action in the case of the “eyeless fly”, Drosophila; in Driesch’s experiment with the sea urchin and Balinsky’s with the eyes of amphibians; the ‘field’ that governs the metamorphosis of the butterfly or the reconstitution of the cells of sponges and vertebrates.

Darwinists believe that the only overall control process is natural selection, but the natural selection mechanism could not account for the cases referred to above. Natural selection works on populations, not individuals. It is capable only of tending to make creatures with massively fatal genetic defects die in infancy, or to make populations that are geographically dispersed eventually produce sterile hybrid offspring. It is such a poor feedback mechanism in the sense of exercising an overall regulating effect that it has failed even to eliminate major congenital diseases. Natural selection offers only death or glory: there is no genetic engineering nor holistic supervision of the organism’s integrity. Yet we are asked to believe that a mechanism of such crudity can creatively supervise a program of gene mutation that will restore sight to the eyeless fly.

This is plainly wishful thinking. The key question remains: what is the location of the supervisory agency that oversees somatic development? How does it work? What is its connection with the cell structure of the body?


FOOTNOTES


  • Richard Milton, Shattering the Myths of Darwinism (Rochester, VT: Park Street Press, 1997), 167-176; 223-226.

(Editor’s note: the author did not footnote which pages he was quoting from; he cited only the works themselves.)

[1] Richard Dawkins, The Blind Watchmaker (London, England: Pearson Longman, 1986).

[2] Andrew Scott, Vital Principles: The Molecular Mechanisms of Life (Oxford, England: Blackwell Publishers, 1988).

[3] Francis Crick, Of Molecules and Men (Seattle, WA: University of Washington Press, 1966).

[4] Ibid.

[5] William Shrive, “Enzymes,” in the McGraw-Hill Encyclopedia of Science & Technology (New York, NY: McGraw-Hill Book Company, 1982).