Ghost In The Machine – Richard Milton

This is chapter 15 ("The Ghost in the Machine") from Richard Milton's excellent book entitled, "SHATTERING THE MYTHS OF DARWINISM," together with a short passage from near the end of the book in which a "spell-checking" program is used to shore up the ideas of chapter fifteen. Both passages respond to the claim that a computer program can be shown to "evolve" – that is, that novel information somehow arises in the program, adding to its specificity apart from the designers/software engineers.

I would be remiss not to link to an article that also works through this example well:

Enjoy… mind you, I read this book in 1998, but the ideas here have held up through today.


Ghost in the Machine
pp. 167-176


[167>] Russell and Séguin's 1982 picture of a human-looking "evolved" version of a dinosaur was an impressive feat combining science and imagination in a constructive and entertaining way. Yet few in 1982 foresaw that in little more than a decade, over 100 million people around the world would pay to be scared by the even more impressive feat of the computer-generated dinosaurs of "Jurassic Park".

Nothing that has entered the evolution debate since Darwin’s time has promised to illuminate the subject so much as the modern computer and its apparently limitless ability to represent, on the monitor-screen, compelling visual solutions to the most abstruse mathematical questions.

The information handling capacity of electronic data processing, with its obvious analogy to DNA, has been enthusiastically enlisted by computer-literate Darwinists as offering powerful evidence for their theory; while genetic software systems, said to emulate the processes of genetic mutation and natural selection at speeds high enough to make the process visible, have become a feature of most up-to-date biology laboratories.

The computer has been put to many ingenious uses in the service of Darwinist theory. And it has changed the minds of not a few skeptics by its powerful visual imagery and uncanny ability to bring extinct creatures – or even creatures that never lived – to life in front [168>] of us. But, compelling though the visual images are, how much confidence should we put in the computer as a guide to the evolution of life?

In his book The Blind Watchmaker Richard Dawkins describes a computer program he wrote which randomly generates symmetrical figures from dots and lines. These figures, to a human eye, have a resemblance to a variety of objects. Dawkins gives some of them insect and animal names, such as bat, spider, fox or caddis fly. Others he gives names like lunar lander, precision balance, spitfire, lamp and crossed sabers.

Dawkins calls these creations “biomorphs”, meaning life shapes or living shapes, a term he borrows from fellow zoologist Desmond Morris. He also feels very strongly that in using a computer program to create them, he is in some way simulating evolution itself. His approach can be understood from this extract:

Nothing in my biologist’s intuition, nothing in my 20 years experience of programming computers, and nothing in my wildest dreams, prepared me for what actually emerged on the screen. I can’t remember exactly when in the sequence it first began to dawn on me that an evolved resemblance to something like an insect was possible. With a wild surmise, I began to breed generation after generation, from whichever child looked most like an insect. My incredulity grew in parallel with the evolving resemblance…. Admittedly they have eight legs like a spider, instead of six like an insect, but even so! I still cannot conceal from you my feeling of exultation as I first watched these exquisite creatures emerging before my eyes.[1]

Dawkins not only calls his computer drawings “biomorphs”, he gives some of them the names of living creatures. He also refers to them as “quasi-biological” forms and in a moment of excitement calls them “exquisite creatures”. He plainly believes that in some way they correspond to the real world of living animals and insects. But they do not correspond in any way at all with living things, except in the purely trivial way that he sees some resemblance in their shapes. The only thing about the “biomorphs” that is [169>] biological is Richard Dawkins, their creator. As far as the “spitfire” and the “lunar lander” are concerned there is not even a fancied biological resemblance.

The program he wrote and the computer he used have no analog at all in the real biological world. Indeed, if he set out to create an experiment that simulates evolution, he has only succeeded in making one that simulates special creation, with himself in the omnipotent role.

His program is not a true representation of random mutation coupled with natural selection. On the contrary it is dependent on artificial selection in which he controls the rate of occurrence of mutations. Despite Dawkins’s own imaginative interpretations, and even with the deck stacked in his favor, his biomorphs show no real novelty arising. There are no cases of bears turning into whales.

There is also no failure in his program: his biomorphs are not subject to the fatal consequences of degenerate mutations like real living things. And, most important of all, he chooses which are the lucky individuals to receive the next mutation – it is not decided by fate – and of course he chooses the most promising ones ("I began to breed from whichever child looked most like an insect.") That is why they have ended up looking like recognizable images from his memory. If his mutations really occurred randomly, as in the real world, Dawkins would still be sitting in front of his screen watching a small dot and waiting for it to do something.
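The selection loop being objected to here can be sketched in a few lines of Python. This is a toy illustration in the spirit of the cumulative-selection demonstrations in The Blind Watchmaker, not Dawkins's actual biomorph code; the target string, alphabet, mutation rate and population size are all assumptions made for the sketch. The point is visible in the code: the criterion for "best child" – the target – is supplied by the selector in advance.

```python
import random

random.seed(0)

# Illustrative sketch (not Dawkins's biomorph program): cumulative
# selection in which the "best child" each generation is chosen by a
# criterion the selector already has in mind -- here, closeness to a
# pre-chosen target string.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    """Copy the parent, randomly altering each character at the given rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def resemblance(candidate):
    """The selector's eye: how closely the child matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while resemblance(parent) < len(TARGET):
    children = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(children, key=resemblance)  # the selector picks, not chance
    generations += 1

print(f"reached the target in {generations} generations")
```

Remove the selector – pick a random child each generation instead of the best-matching one – and the loop never converges in any realistic time, which is exactly the "small dot" objection above.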

Above all, his computer experiment falsifies the most important central claim of mechanistic Darwinian thinking; that, through natural processes, living things could come into being without any precursor. What Dawkins has shown is that, if you want to start the evolutionary ball rolling, you need some form of design to take a hand in the proceedings, just as he himself had to sit down and program his computer.

In fact, his experiment shows very much the same sort of results that field work in biology and zoology has shown for the past hundred years: there is no evidence for beneficial spontaneous genetic mutation; there is no evidence for natural selection (except as an empty tautology); there is no evidence for either as significant evolutionary mechanisms. There is only evidence of an unquenchable optimism among Darwinists that given enough [170>] time, anything can happen – the argument from probability.

But although Dawkins’s program does not qualify as a simulation of random genetic mutation coupled with natural selection, it does highlight at least one very important way in which computer programs resemble genetic processes. Each instruction in a program must be carefully considered by the programmer as to both its immediate effect on the computer hardware and its effects on other parts of the program. The letters and numbers which the programmer uses to write the instructions have to be written down with absolute precision with regard to the vocabulary and syntax of the programming language he uses in order for the computer system to function at all. Even the most trivial error can lead to a complete malfunction. In 1977, for example, an attempt by NASA to launch a weather satellite from Cape Canaveral ended in disaster when the launch vehicle went off course shortly after takeoff and had to be destroyed. Subsequent investigation by NASA engineers found that the accident was caused by failure of the onboard computer guidance system – because a single comma had been misplaced in the guidance program.

Anyone who has programmed a computer to perform the simplest task in the simplest language – Basic for instance – will understand the problem. If you make the simplest error in syntax, misplacing a letter, a punctuation mark or even a space, the program will not run at all.

In just the same way, each nucleotide has to be “written” in precisely the correct order and in precisely the correct location in the DNA molecule for the offspring to remain viable, and, as described earlier, major functional disorders in humans, animals and plants are caused by the loss or displacement of a single DNA molecule, or even a single nucleotide within that molecule.

In order to simulate neo-Darwinist evolution on his computer, it is not necessary for Dawkins to devise complex programs that seek to simulate insect life. All he has to do is to write a program containing a large number of instructions (3000 million instructions if he wishes to simulate human DNA) that continually regenerates its own program code, but randomly interferes with the code in trivial ways, such as transposing, shifting or dropping characters. (The system must be set to restart itself after each fatal "birth".)

[171>] The result of this experiment would be positive if the system ever developed a novel function that was not present in the original programming. One way of defining "novelty" would be to design the program so that, initially, its sole function was to replicate itself (a computer virus). A novel function would then be anything other than mere reproduction. In practice, however, I do not expect the difficulty of defining what constitutes a novelty to pose any problem. It is extremely improbable that Dawkins's program will ever work again after the first generation, just as in real life, mutations cause genetic defects, not improvements.
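The proposed experiment can be crudely approximated in Python. This is a sketch under stated assumptions: the "program" is a two-line source string, each "birth" is a copy with one randomly corrupted character, and a copy that fails even to parse counts as a fatal birth. The source text, character set and trial count are illustrative choices, not anything from the original book.

```python
import random

# Sketch of the proposed experiment: copy a program's own source text
# each "generation" while randomly corrupting characters, treating any
# copy that no longer even parses as a fatal "birth".
SOURCE = "def copy_me(code):\n    return code\n"
CHARS = "abcdefghijklmnopqrstuvwxyz():=_ \n"

def corrupt(code, n_errors=1):
    """Return a copy of the source with n_errors random characters replaced."""
    code = list(code)
    for _ in range(n_errors):
        i = random.randrange(len(code))
        code[i] = random.choice(CHARS)
    return "".join(code)

def viable(code):
    """A generous viability test: does the mutant copy still parse at all?"""
    try:
        compile(code, "<mutant>", "exec")
        return True
    except SyntaxError:
        return False

random.seed(0)
trials = 1000
survivors = sum(viable(corrupt(SOURCE)) for _ in range(trials))
print(f"{survivors}/{trials} single-character mutants still parse")
```

Note that this test asks only that the mutant still parse – not that it do anything useful, let alone anything novel – and even so, most single-character corruptions outside an identifier name fail it outright.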

Outside of the academic world there are a number of important commercial applications based on computer simulations that deserve to be seriously examined. A good example of this is in the field of aircraft wing design where computers have been used by aircraft engineers to develop the optimum airfoil profile. In the past wing design has been based largely on repetitive trial and error methods. A hypothetical wing shape is drawn up; a physical model is made and is aerodynamically tested in the wind tunnel. Often the results of such an empirical design approach are predictable: lengthening the upper wing curve, in relation to the lower, generally increases the upward thrust obtained. But sometimes results are very unpredictable, as when complex patterns of turbulence combine at the trailing edge to produce drag, which lowers wing efficiency, and causes destructive vibration.

Engineers at Boeing Aircraft tried a new approach. They created a computer model which was able to “mutate” a primitive wing shape at random – to stretch it here or shrink it there. They also fed into the model rules that would enable the computer to simulate testing the resulting design in a computerized version of the “wind tunnel”- the rules of aerodynamics.

The engineers say this process has resulted in obtaining wing designs offering maximum thrust and minimum drag and turbulence, more quickly than before and without any human intervention once the process has been set in motion.

Designers have made great savings in time compared with previous methods and the success of the computer in this field has given rise to a new breed of application dubbed “genetic software”. Indeed, on the face of it, the system is acting in a Darwinian manner. The [172>] computer (an inanimate object) has produced an original and intelligent design (comparable, say, with a natural structure such as a bird’s wing) by random mutation of shape combined with selection according to rules that come from the natural world – the laws of aerodynamics. If the computer can do this in the laboratory in a few hours or days, what could nature not achieve in millions of years?
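As a sketch of what such "genetic software" amounts to, here is a minimal mutate-test-select loop in Python. Everything specific is an assumption for illustration: the "wing" is just a (camber, thickness) pair, and the fitness function is a stand-in for the real aerodynamic rules, which the engineers – not the computer – had to supply. The representation, the rules and the loop are all designed in advance.

```python
import random

random.seed(1)

def fitness(shape):
    # Stand-in for the "rules of aerodynamics": a hypothetical scoring
    # function that rewards one particular camber/thickness combination
    # chosen here, in advance, by the designer.
    camber, thickness = shape
    return -((camber - 0.04) ** 2 + (thickness - 0.12) ** 2)

def mutate(shape, step=0.01):
    """Randomly stretch or shrink the shape a little in each dimension."""
    return tuple(x + random.gauss(0, step) for x in shape)

shape = (0.0, 0.0)  # a primitive starting "wing"
for generation in range(500):
    candidates = [shape] + [mutate(shape) for _ in range(20)]
    shape = max(candidates, key=fitness)  # keep whatever scores best

print("optimized shape:", shape)
```

The loop reliably climbs toward the optimum built into fitness(), but it can only ever find what the programmer's rules already reward.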

The fallacies on which this case is constructed are not very profound but they do need to be nailed down. In a recently published popular primer on molecular biology, Andrew Scott’s Vital Principles, this very example is given under the heading “the creativity of evolution”. The process itself is called “computer generated evolution” as though it were analogous to an established natural process of mutation and selection.[2]

The most important fallacy in this argument is the idea that somehow a result has occurred which is independent of, or in some way beyond the engineers, who merely started the machine by pressing a button. Of course, the fact is that a human agency has designed and built the computer and programmed it to perform the task in question. As with the previous experiment, this begs the only important question in evolution theory: could complex structures have arisen spontaneously by random natural processes without any precursor? Like all other computer simulation experiments, this one actually makes a reasonable case for special creation – or some form of vitalist-directed design – because it specifically requires a creator to build the computer and devise and implement the program in the first place.

However, there are other important fallacies too. The only reason that the Boeing engineers are able to take the design produced on paper by their computer and translate that design into an aircraft that flies is that they are employing an immense body of knowledge – not possessed by the computer – regarding the properties of materials from which the aircraft will be made and the manufacturing processes that will be used to make it. The computer's wing is merely an outline on paper, an idea: it is of no more significance to aviation than a wave outline on the beach or a wind outline in the desert. The real wing has to actually fly in the air with real passengers. The decisive events that make that idea into a reality are a long, complex sequence of human operations [173>] and judgments that involve not only the shaping and fastening of metal for wings but also the design and manufacture of airframes and jet engines. These additional complexities are beyond the capacity of the computer, not merely in practice but in principle, because computers cannot even make a cup of coffee, let alone an airliner, without being instructed every step of the way.

In order for a physical structure like an aircraft wing to evolve by spontaneous random means, it is necessary for natural selection to do far more than select an optimum shape. It must also select the correct materials, the correct manufacturing methods (to avoid failure in service) and the correct method of integrating the new structure into its host creature. These operations involve genetic engineering principles which are presently unknown. And because they are unknown by us, they cannot be programmed into a computer.

There is also an important practical reason why the computer simulation is not relevant to synthetic evolution: an aircraft wing differs from a natural wing in a fundamental way. The aircraft wing is passive, since the forward movement of the aircraft is derived from an engine. A natural wing like a bird's, however, has to provide both upthrust and the forward motion necessary to generate that lift, making it a complex, articulated, active mechanism. The engineering design problem of evolving a passive wing is merely a repetitive mechanical task – that is why it is suitable for computerization. So far, no-one has suggested programming a computer to design a bird's wing by random mutation because the suggestion would be seen as ludicrous. Even if all of the world's computers were harnessed together, they would be unable to take even the most elementary steps needed to design a bird's wing unless they were told in advance what they were aiming at and how to get there.

If computers are no use to evolutionists as models of the hypothetical selection process, they are proving invaluable in another area of biology; one that seems to hold out much promise to Darwinists – the field of genetics. Since Watson and Crick elucidated the structure of the DNA molecule, and since geneticists began unraveling the meaning of the genetic code, the center of gravity of evolution theory has gradually shifted away from the earth sciences – geology and paleontology – toward molecular biology.

[174>] This shift in emphasis has occurred not only because of the attraction of the new biology as holding the answers to many puzzling questions, but also because the traditional sciences have proved ultimately sterile as a source of decisive evidence. The gaps in the fossil record, the incompleteness of the geological strata, and the ambiguity of the evidence from comparative anatomy, ultimately caused Darwinists to give up and look somewhere else for decisive evidence. Thanks to molecular biology and computer science they now have somewhere else to try.

Darwinists seem to have drawn immense comfort from their recent discoveries at the cellular level and beyond, behaving and speaking as though the new discoveries of biology represent a triumphant vindication of their long-held beliefs over the irrational ideas of vitalists. Yet the gulf between what Darwinists claim for molecular biological discoveries and what those discoveries actually show is only too apparent to any objective evaluation.

Consider these remarks by Francis Crick, justly famous as one of the biologists who cracked the genetic code, and equally well known as an ardent supporter of Darwinist evolution. In his 1966 book Of Molecules and Men, in which he set out to criticize vitalism, Crick asked which of the various molecular biological processes are likely to be the seat of the "vital principle".[3] "It can hardly be the action of the enzymes," he says, "because we can easily make this happen in a test tube. Moreover most enzymes act on rather simple organic molecules which we can easily synthesize."

There is one slight difficulty, but Crick easily deals with it: "It is true that at the moment nobody has synthesized an actual enzyme chemically, but we can see no difficulty in doing this in principle, and in fact I would predict quite confidently that it will be done within the next five or ten years."

A little later, Crick says of mitochondria (important objects in the cell that also contain DNA):

It may be some time before we could easily synthesise such an object, but eventually we feel that there should be no gross difficulty in putting a mitochondrion together from its component parts.

This reservation aside, it looks as if any system of [175>] enzymes could be made to act without invoking any special principles, or without involving material that we could not synthesize in the laboratory. [4]

There is no question that Crick and Watson’s decoding of the DNA molecule is a brilliant achievement and one of the high points of twentieth-century science. But this success seems to me to have led many scientists to expect too much as a result.

Crick’s early confidence that an enzyme would be produced synthetically within five or ten years has not been borne out and biologists are further than ever from achieving such a synthesis. Indeed, reading and rereading the words above with the benefit of hindsight I cannot help but interpret them as saying “we are unable to synthesize any significant part of a cell at present, but this reservation aside, we are able to synthesize any part of the cell.”

Certainly great strides have been made. William Shrive, writing in the McGraw Hill Encyclopedia of Science and Technology, says, “The complete amino acid sequence of several enzymes has been determined by chemical methods. By X-ray crystallographic methods it has even been possible to deduce the exact three-dimensional molecular structure of a few enzymes.”[5] But despite these advances no-one has so far synthesized anything remotely as complex as an enzyme or any other protein molecule.

Such a synthesis was impossible when Crick wrote in 1966 and remains impossible today. This is probably because there is a world of difference between having a neat table that shows the genetic code for all twenty amino acids (Alanine = GCA, Proline = CCA and so on) and knowing how to manufacture a protein. These complex molecules do not simply assemble themselves from a mixture of ingredients like a cup of tea. Something else is needed. What the something else is remains conjectural. If it is chemical it has not been discovered; if it is a process it is an unknown process; if it is a "vital principle" it has not yet been recognized. Whatever the something is, it is presently impossible to build a case either for Darwinism or against vitalism out of what we have learned of the cell and the molecules of which it is composed.

It is easy to see why evolutionists should be so excited about cellular discoveries because the mechanisms they have found appear to [176>] be very simple. But however simple they may seem, no-one has yet succeeded in synthesizing any significant original structure from raw materials. We know the code for the building blocks; we don't know the instructions for building a house with them.

Indeed, the discoveries of biochemistry and molecular biology have raised some rather awkward questions for Darwinists, which they have yet to address satisfactorily. For example, the existence of genetically very simple biological entities, such as viruses, seems to support Darwinist ideas about the origin of life. One can imagine all sorts of primitive life forms and organisms coming into existence in the primeval ocean and it seems only natural that one should find entities that are part way between the living and the nonliving – stepping stones to life as it were. It is only to be expected, says Richard Dawkins, that the simplest form of self-replicating object would merely be that part of the DNA program which says only “copy me”, which is essentially what a virus is.

The problem here is that viruses lack the ability to replicate unless they inhabit a host cell – a fully functioning cell with its own genetic replication mechanisms. So the first virus must have come after the first cell, not before it, as a satisfyingly Darwinian process would require.

But despite minor unresolved problems of this kind Darwinists still have one remaining card to play in support of their theory. It is the strongest card in their hand and the most powerful and decisive evidence in favor of Darwinian evolutionary processes.

[….]


pp. 223-227


[223>] Earlier on I referred to computers and their programs as a fruitful source of comparison with genetic processes since both are concerned with the storage and reliable transmission of large quantities of information. Arguing from analogy is a dangerous practice, but there is one phenomenon connected with computer systems that could be of some importance in understanding biological information processing strategies.

The phenomenon has to do with the computer’s ability to refer to a master list or template and to highlight any exceptions to this master list that it encounters during processing. This “exception reporting” is profoundly important in information processing. For instance, this book was prepared using a word-processing program that has a spelling checker. When invoked, the spell checker reads the typescript of the book and compares each word with its built-in dictionary, highlighting as potential mistakes those it does not recognize. Of course, it will encounter words that are spelled correctly but are not found in a normal dictionary – such as “deoxyribonucleic acid”. But the program is clever enough to allow me to add the novel word to the dictionary, so that the next time it is encountered it will be accepted as correct instead of reported as an exception – as long as I spell it correctly.

In other words, the spelling checker isn’t really a spelling checker. It has no conception of correct spelling. It is merely a mechanism [224>] for reporting exceptions. Using these methods, programmers can get computers to behave in an apparently intelligent or purposeful way when they are really only obeying simple mechanical rules. Not unnaturally, this gives Darwinists much encouragement to believe that life processes may at root be just as simple and mechanical.

In cell biology there are natural chemical properties of complex molecules that lend themselves to automatic checking and excepting of this kind. For example many molecules are stereospecific – they will attach only to certain other specific molecules and only in special positions. There are also much more complex forms of exception reporting, for instance as part of the brain's (or if you prefer, the mind's) cognitive processes: as when we see and recognize a single face in the crowd or hear our name mentioned at a noisy cocktail party.

In the case of the spelling checker, the behavior of the system can be made to look more and more intelligent through a process of learning if, every time it highlights a new word, I add that word to its internal dictionary. If I continue for a long enough time, then eventually, in principle, the system will have recorded every word in the English language and will highlight only words that are indeed misspelled. It will have achieved the near-miraculous levels of efficiency and repeatability that we are used to seeing in molecular biological processes. But something strange has also been happening at the same time – or, rather, two strange things.

The first is that as its vocabulary grows, the spelling checker becomes less efficient at drawing to my attention possible mistakes. This unexpected result comes about in the following way. Remember, the computer knows nothing of spelling, it merely reports exceptions to me. To begin with, it has only, say, 50,000 standard words in its dictionary. This size of dictionary really only covers the common everyday words plus a modest number of proper nouns (for capital cities, common surnames and the like) and doesn't leave much room for unusual words. It would, for instance, include a word like "great" but not the less-frequently used word "grate".

The result is that if I accidentally type "grate" when I really mean "great", the spell checker will draw it to my attention. If, however, I enlarge the dictionary and add the word "grate", the spell [225>] checker will ignore it in future, even though the chances are that it will occur only as a typing mistake – except in the rare case where I am writing about coal fires or cookery.

One can generalize this case by saying that when the dictionary has an optimum size of vocabulary, I get the best of both worlds: it points out misspellings of the most common words and reports anything unusual, which in most cases probably will be an error. (Obviously, to work at optimum efficiency, the size of the dictionary should be matched to the vocabulary of the writer.) As the dictionary grows in volume it becomes more efficient in one way, highlighting only real spelling errors, but less efficient in another: it becomes more probable that my typing errors will spell a real word – one that will not be reported – but not the word I mean to use. Paradoxically, although the spelling checker is more efficient, the resulting book is full of contextual errors: "pubic" instead of "public", "grate" instead of "great" and so on.
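The whole spelling-checker argument can be condensed into a few lines of Python (the word list here is a toy stand-in for a 50,000-word dictionary):

```python
# A spelling "checker" as a pure exception reporter, as described above:
# it knows nothing of spelling, it only flags words absent from its list.
dictionary = {"the", "fire", "was", "great"}

def report_exceptions(text, dictionary):
    """Return the words in the text that are not in the dictionary."""
    return [w for w in text.lower().split() if w not in dictionary]

typo = "the fire was grate"  # "grate" typed where "great" was meant
print(report_exceptions(typo, dictionary))  # flagged: ['grate']

# Enlarging the dictionary makes the checker *less* able to catch this
# contextual error: once "grate" is a dictionary word, the typo passes.
dictionary.add("grate")
print(report_exceptions(typo, dictionary))  # flagged: nothing, []
```

The mechanism works perfectly as designed in both runs; it is only the intended function – catching the writer's mistakes – that degrades as the word list grows.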

It requires a human intelligence – a real spelling checker, not a mechanical exception reporter – to make sure that the intended result is produced.

I said two strange things have been happening while I have been adding words to the spelling checker. The second is the odd occasion when the system has highlighted a real spelling mistake to me – say, "probelm" instead of "problem" – and I have mistakenly told the computer to add the word to its dictionary. This, of course, has the very unfortunate result that in future it will cease to highlight a real spelling mistake and will pass it as correct. The error is no longer an exception; it is now a dictionary word.

Under what circumstances am I most likely to issue such a wrong instruction? It is most likely to happen with words that I type most frequently and that I habitually mistype. Anyone who uses a keyboard every day knows that there are many such ‘favorite’ misspelled words that get typed over and over. Once again, only a real spelling checker, a human brain, can spot the error and correct it.

The reason that the computer’s spellchecker breaks down under these circumstances is that the simple mechanisms put in place do not work from first principles. They do not work in what electronics engineers call ‘real time’ (they are not in touch with the real world) and do not employ any real intelligent understanding [226>] of the tasks they are being called on to perform. So although the computer continues to work perfectly as it was designed to, it becomes more and more corrupted from the standpoint of its original function.

I believe that this analogy may well have some relevance to Darwinists’ belief that biological processes can at root be as simple as the spelling checker. It is easy to think of any number of simple cell replication mechanisms that rely on exception reporting of this kind. I believe that if biological processes were so simple, they too would become functionally corrupt unless there is some underlying or overall design process to which the simple mechanisms answer globally, and which is capable of taking action to correct mistakes. This is the mechanism that we see in action in the case of the “eyeless fly”, Drosophila; in Driesch’s experiment with the sea urchin and Balinsky’s with the eyes of amphibians; the ‘field’ that governs the metamorphosis of the butterfly or the reconstitution of the cells of sponges and vertebrates.

Darwinists believe that the only overall control process is natural selection, but the natural selection mechanism could not account for the cases referred to above. Natural selection works on populations, not individuals. It is capable only of tending to make creatures with massively fatal genetic defects die in infancy, or to make populations that are geographically dispersed eventually produce sterile hybrid offspring. It is such a poor feedback mechanism in the sense of exercising an overall regulating effect that it has failed even to eliminate major congenital diseases. Natural selection offers only death or glory: there is no genetic engineering nor holistic supervision of the organism’s integrity. Yet we are asked to believe that a mechanism of such crudity can creatively supervise a program of gene mutation that will restore sight to the eyeless fly.

This is plainly wishful thinking. The key question remains: what is the location of the supervisory agency that oversees somatic development? How does it work? What is its connection with the cell structure of the body?


FOOTNOTES


  • Richard Milton, Shattering the Myths of Darwinism (Rochester, VT: Park Street Press, 1997), 167-176; 223-226.

(Editor's note: the author did not footnote the pages he was quoting from; he cited only the works themselves.)

[1] Richard Dawkins, The Blind Watchmaker (London, England: Pearson Longman, 1986).

[2] Andrew Scott, Vital Principles: The Molecular Mechanisms of Life (Oxford, England: Blackwell Publishers, 1988).

[3] Francis Crick, Of Molecules and Men (Seattle, WA: University of Washington Press, 1966).

[4] Ibid.

[5] William Shrive, “Enzymes,” in McGraw-Hill Encyclopedia of Science & Technology (New York, NY: McGraw-Hill Book Company, 1982).

A Misused MLK Quote (Plus! An RPT Rant)

Larry Elder corrects the record on a quote by Martin Luther King, Jr., often taken out of its larger context. On Thursday, May 28th, the quote, “A riot is the language of the unheard,” was the 11th most-searched item on Google.

THE AMERICAN SPECTATOR deals with the above misquoting of MLK (misunderstanding his intent of that statement) very well:

It was inevitable that George Floyd’s death would spark protests against police brutality and that mendacity would characterize the attendant media coverage. True to form, the press affected dismay when the demonstrations devolved into violence, yet reported the riots with obvious approbation. The most obscene example of this was the widespread use, in headlines and ledes, of an out-of-context Martin Luther King quote suggesting that the civil rights leader would have condoned the mayhem. USA Today, for example, ran a feature story bearing the following title: “ ‘A riot is the language of the unheard’: MLK’s powerful quote resonates amid George Floyd protests.”

This grotesque misrepresentation of Dr. King’s views is only possible by cynically cherry-picking eight words from a 1966 interview during which he repeatedly emphasized that violence was counterproductive to the progress of the civil rights movement. Mike Wallace interviewed him for “CBS Reports” on Sept. 27, 1966, and the primary topic of discussion involved divisions within the movement concerning overall strategy. The myth that King had somehow endorsed violence went mainstream in 2013, when “60 Minutes Rewind” posted a clip from the Wallace interview and irresponsibly titled it using the same out-of-context quote. The interview transcript begins with this unambiguous statement:

KING: I will never change in my basic idea that non-violence is the most potent weapon available to the Negro in his struggle for freedom and justice. I think for the Negro to turn to violence would be both impractical and immoral.

It’s pretty difficult to find anything resembling support for street violence or riots in this statement, but a subsequent question about the “Black Power” movement persuaded Dr. King to explain the impetus of the numerous 1966 riots. He cited the growing frustration caused by the absence of progress on basic civil rights for black people in general. King obviously understood that much of the community was growing very impatient. He also knew that most of the property burned and businesses ruined during riots were owned by black people. This is still true. Thus, he continued to denounce the riots as self-defeating and socially destructive and insisted that nonviolence was the best course to follow:

MIKE WALLACE: There’s an increasingly vocal minority who disagree totally with your tactics, Dr. King.

KING: There’s no doubt about that. I will agree that there is a group in the Negro community advocating violence now. I happen to feel that this group represents a numerical minority. Surveys have revealed this. The vast majority of Negroes still feel that the best way to deal with the dilemma that we face in this country is through non-violent resistance, and I don’t think this vocal group will be able to make a real dent in the Negro community in terms of swaying 22 million Negroes to this particular point of view. And I contend that the cry of “black power” is, at bottom, a reaction to the reluctance of white power to make the kind of changes necessary to make justice a reality for the Negro. I think that we’ve got to see that a riot is the language of the unheard. And, what is it that America has failed to hear? It has failed to hear that the economic plight of the Negro poor has worsened over the last few years. (Emphasis added.)

The media have dishonestly plucked the highlighted fragment from this 175-word answer to create the false impression that Dr. King somehow viewed violence as a legitimate weapon in the fight for justice. In reality, there is no honest way to arrive at this conclusion when those eight words are read in their proper context. Yet USA Today is by no means alone in its misuse of this fragment. CNN uses the same eight words for the title of a Fareed Zakaria segment that begins with a deceptively edited clip from King’s 1967 speech, “The Other America,” in which he discusses riots much as he did on CBS. In order to launch the segment with the magic words, however, CNN edited out most of the speech, including the following:

Let me say as I’ve always said, and I will always continue to say, that riots are socially destructive and self-defeating. I’m still convinced that nonviolence is the most potent weapon available to oppressed people in their struggle for freedom and justice. I feel that violence will only create more social problems than they will solve. That in a real sense it is impracticable for the Negro to even think of mounting a violent revolution in the United States. So I will continue to condemn riots, and continue to say to my brothers and sisters that this is not the way.

USA Today, CBS, and CNN have a lot of company. The Week, for example, ran yet another trite effusion titled “ ‘A riot is the language of the unheard,’ Martin Luther King Jr. explained 53 years ago.” This nonsense, like the rest, ignores the facts and includes standard fictions to once again conjure up an image of Dr. King as an advocate of violence in the cause of social justice. Among those offended by this mendacious exploitation of King’s words to validate violence is his niece, Alveda King. She writes, “I am saddened yet undaunted that a quote from my Uncle Martin is being taken out of context.… Some people are calling this an endorsement of violence, but nothing could be further from the truth.”…

MY RIOTESS THOUGHTS

I feel bad for the Floyd family. Not because of their loss (although my first emotion and care was for the loss of their son… even if his death was more heart-related, the officer in question could have saved his life had he not been kneeling on his neck), but because I no longer care about the incident all that much. I am more focused on the fruits of a culture that has been brewing since the gay author/professor Allan Bloom first fired a warning shot over the New Left’s bow (the beginning of the culture war):

  • There is one thing a professor can be absolutely certain of: almost every student entering the university believes, or says he believes, that truth is relative. If this belief is put to the test, one can count on the students’ reaction: they will be uncomprehending. That anyone should regard the proposition as not self-evident astonishes them…. The relativity of truth is… a moral postulate, the condition of a free society, or so they see it…. The danger they have been taught to fear is not error but intolerance. (Allan Bloom, The Closing of the American Mind [New York, NY: Simon and Schuster, 1987], 25.)

These riots have nothing to do with that officer’s actions. They have to do with how a large segment of society brands people for seeking categories for society to adhere to (SIXHIRB: sexist, islamophobic, xenophobic, homophobic, intolerant, racist, bigoted). Unless people (a) counter the histories found in horrible university texts like the one pictured to the right with actual histories that work in the real world when applied… not some fantasy Utopia; or (b) at least invigorate adults to challenge themselves to enter into real conversations about our body politic (which requires discussions about our nation’s history, past and current politics, and our nation’s roots in cities like Athens and Jerusalem), we will see more of this:

The Western world has produced some of the most prosperous and most free civilizations on earth. What makes the West exceptional? Ben Shapiro, editor-in-chief of the Daily Wire and author of “The Right Side of History,” explains that the twin pillars of revelation and reason — emanating from ancient Jerusalem and Athens — form the bedrock for Western civilization’s unprecedented success.

All culminating in America’s “Trinity”:

Nearly every country on Earth is defined by race or ethnicity. Not America. What makes the United States different? Dennis Prager outlines the values that have allowed the American people to flourish and, unlike immigrants almost everywhere else, transformed those who arrived from across the globe into full Americans—regardless of where they were born.

One also needs to confront the influence of cults like the Five Percenters (The Nation of Gods and Earths) and the Nation of Islam in some of these communities of color (an aside: if I had said “colored communities,” that would be racist, but “communities of color” is not). If MLK hated this radicalism, then why do people support it in the black community but rebuff it in the white?

King’s influence was tempered by the increasingly caustic tone of Black militancy of the period after 1965. Black radicals increasingly turned away from the Gandhian precepts of King toward the Black Nationalism of Malcolm X, whose posthumously published autobiography and speeches reached large audiences after his assassination in February 1965. King refused to abandon his firmly rooted beliefs about racial integration and nonviolence.

In his last book, Where Do We Go from Here: Chaos or Community?, King dismissed the claim of Black Power advocates “to be the most revolutionary wing of the social revolution taking place in the United States.” But he acknowledged that they responded to a psychological need among African Americans he had not previously addressed.

“Psychological freedom, a firm sense of self-esteem, is the most powerful weapon against the long night of physical slavery,” King wrote. “The Negro will only be free when he reaches down to the inner depths of his own being and signs with the pen and ink of assertive manhood his own emancipation proclamation.”

see more

People [read here: adults] need to challenge their beliefs by thinking outside their lifelong or university-taught Leftism. Pick a site from the following and visit it a couple of times a week [hint: Powerline will be the quickest reads]:

– just to name a few that have good writing and represent some counter-thinking to the CNNs and WaPos of the world. They offer an excellent introduction to how Conservatives view our political landscape. Stop feeding the lies about American history based on emotion rather than testing one’s own viewpoints. PICK UP A SINGLE BOOK AND READ. Preferably one you disagree with and would not otherwise read. If we don’t figure out how to do this, the cities that most need businesses and stability will lose them over and over. This is exactly what we can expect to happen:

Here is something I said in July of 2013:

  • A conservative think tank had to hold their yearly meeting in an undisclosed place due to threats of violence; Michael Steele had Oreo cookies thrown at him; conservative speakers like Ann Coulter need bodyguards when going onto a campus to speak (the reverse is not true of liberal speakers); eco-fascists (as this CBS story notes) put nails in trees so that when lumberjacks cut through them they are maimed. From the rapes, deaths, and blatantly anti-Semitic/anti-American statements and threats made at Occupy movements [endorsed by Obama], we are seeing Obama’s America divided and more violent; [NOT TO MENTION] forcing Christians to photograph, make cakes for, and put flower arrangements together for same-sex marriage ceremonies, to pro-choice protesters having jars of feces and urine taken from them after chanting “hail Satan” and “fuck the church.” A perfect storm is being created for a real culture war, all with thanks to people who laugh at terms like “eco-fascists” and “leftist thugs.” The irony is that these coal unions asked their members to vote for Obama. Well, the chickens have come home to roost.

The chickens indeed are coming home to roost (Obama’s pastor’s phrase after his “God damn America” sermon), but for the people who accept such a bad ethos. Consider the NYT’s 1619 Project; professors teaching a generation that America was and is the most oppressive, racist nation; media making things up about Republicans being racists since Goldwater; and the calling of a President who has religious Jewish kids and grandkids an anti-Semite/racist. Comedic newsers like Trevor Noah, Colbert, and the like confirm such lies to a millennial generation that gets its news from the “Jimmy Fallons” of the world (not to mention CNN, NPR, WaPo, MSNBC, NYT, etcetera).

THE AMERICAN MIND has a great article saying similar things:

The publication of my new book, America’s Revolutionary Mind: A Moral History of the American Revolution and the Declaration That Defined It, comes at a crucial moment in American history. Academic study of the American Revolution is dying on our college campuses, and the principles and institutions of the American Founding are now under assault from the nattering nabobs of both the progressive Left and the reactionary Right. These two ideological antipodes share little in common other than a mutually assured desire to purge 21st-century American life of the founders’ philosophy of classical liberalism.

On this point, the radical Left and Right have merged.

The philosophy of Americanism is, as I have argued in my book and elsewhere, synonymous with the founders’ ideas, actions, and institutions. Its core tenets can be summed up as: the moral laws and rights of nature, ethical individualism, self-interest rightly understood, self-rule, constitutionalism, rule of law, limited government, and laissez-faire capitalism.

The founders’ Americanism is most identifiably expressed in the leading political documents of the founding era: the Declaration of Independence, which Thomas Jefferson said was an “expression of the American mind,” and in the revolutionary state constitutions as well as the federal Constitution and the Bill of Rights. The classical liberalism of the founding era assumed that individual rights to life, liberty, property, and the pursuit of happiness are grounded in nature and that government’s primary responsibility is to protect those rights.

[….]

The anti-Americanism of the radical Left is well known and long established. Its most recent and most virulent incarnation comes in the form of the New York Times’s “1619 Project,” which claims that the founders’ principles and institutions were disingenuous in 1776 and immoral today.

Much more interesting than the ho-hum anti-Americanism of the progressive Left, though, is the rise in recent years of a rump faction of former Paleo or Tradcons, who have come out of their ideological closet and transitioned from pro- to anti-Americanism. The recent rise of the radical Right in America is distinguished from all previous forms of conservatism and libertarianism by its explicit rejection of the founders’ liberalism.

A new generation of neo-reactionary ideologues looks at contemporary America and sees nothing but moral, cultural, and political decay, which they blame on the soullessness of the founders’ Americanism. Remarkably, just like the radical Left, the radical Right condemns the philosophy of 18th-century liberalism as untrue and therefore immoral. It is the source, they claim, of all our present discontents.

Much has already been written on the 1619 Project, so I shall only briefly describe its arguments and goals in order to better focus on the aims and tactics of the reactionary Right.

[….]

Lastly, a word to the young—to those who have been let down or feel abandoned by the cowardice and unmanliness of Conservatism and Libertarianism, Inc.—know this: you have not been abandoned. There is a new generation of intellectuals willing to take up the cause of Americanism.

More to the point, you should know this as well: I will be, to quote William Lloyd Garrison, as “harsh as truth, and as uncompromising as justice” when it comes to defending the Declaration of Independence, the Constitution, and the Bill of Rights. The principles and institutions of the founders’ liberalism are worth defending because they are true. The reactionary Right is a dead end; it’s a dead end because it’s a lie. You should not let your despair turn you to the Dark Side. It’s time to come home.

(READ IT ALL)


“Redlining” | Thomas Sowell

BANK LOANS

Group Disparities

In the course of a long and heated campaign in politics and in the media during the early twenty-first century, claiming that there was rampant discrimination against black home mortgage loan applicants, data from various sources were cited repeatedly, showing that black applicants for the most desirable kind of mortgage were turned down substantially more often than white applicants for those same mortgages.

In the year 2000, for example, data from the U.S. Commission on Civil Rights showed that 44.6 percent of black applicants were turned down for those mortgages, while only 22.3 percent of white applicants were turned down.1 These and similar statistics from other sources set off widespread denunciations of mortgage lenders, and demands that the government “do something” to stop rampant racial discrimination in mortgage lending institutions.

The very same report by the U.S. Commission on Civil Rights, which showed that blacks were turned down for conventional mortgages at twice the rate for whites, contained other statistics showing that whites were turned down for those same mortgages at a rate nearly twice that for “Asian Americans and Native Hawaiians.”

While the rejection rate for white applicants was 22.3 percent, the rejection rate for Asian Americans and Native Hawaiians was 12.4 percent.2 But such data seldom, if ever, saw the light of day in most newspapers or on most television news programs, for which the black-white difference was enough to convince journalists that racial bias was the reason.

That conclusion fit existing preconceptions, apparently eliminating a need to check whether it also fit the facts. This one crucial omission enabled the prevailing preconception to dominate discussions in politics, in the media and in much of academia.

One of the very few media outlets to even consider alternative explanations for the black-white statistical differences was the Atlanta Journal-Constitution, which showed that 52 percent of blacks had credit scores so low that they would qualify only for the less desirable subprime mortgages, as did 16 percent of whites. Accordingly, 49 percent of blacks in the data cited by the Atlanta Journal-Constitution ended up with subprime mortgages, as did 13 percent of whites and 10 percent of Asians.3 In short, the three groups’ respective rankings in terms of the kinds of mortgage loans they could get was similar to their respective rankings in average credit ratings.

But such statistics, so damaging to the prevailing preconception that intergroup differences in outcomes showed racial bias, were almost never mentioned in most of the mass media. With credit ratings being what they were, the statistics were consistent with Discrimination IA (judging each applicant as an individual), but were reported in the media, in politics and in academia as proof of Discrimination II, arbitrary bias against whole groups.

While the omitted statistics would have undermined the prevailing preconception that white lenders were biased against black applicants, that preconception at least seemed plausible, even if it failed to stand up under closer scrutiny. But the idea that white lenders would also be discriminating against white applicants, and in favor of Asian applicants, lacked even plausibility. What was equally implausible was that black-owned banks were discriminating against black applicants. But in fact black-owned banks turned down black applicants for home mortgage loans at a higher rate than did white-owned banks.4

[1] United States Commission on Civil Rights, Civil Rights and the Mortgage Crisis (Washington: U.S. Commission on Civil Rights, 2009), p. 53.

[2] Ibid. See also page 61; Robert B. Avery and Glenn B. Canner, “New Information Reported under HMDA and Its Application in Fair Lending Enforcement,” Federal Reserve Bulletin, Summer 2005, p. 379; Wilhelmina A. Leigh and Danielle Huff, “African Americans and Homeownership: The Subprime Lending Experience, 1995 to 2007,” Joint Center for Political and Economic Studies, November 2007, p. 5.

[3] Jim Wooten, “Answers to Credit Woes are Not in Black and White,” Atlanta Journal-Constitution, November 6, 2007, p. 12A.

[4] Harold A. Black, M. Cary Collins and Ken B. Cyree, “Do Black-Owned Banks Discriminate Against Black Borrowers?” Journal of Financial Services Research, Vol. 11, Issue 1-2 (February 1997), pp. 189-204. Here, as elsewhere, it should not be assumed that two unexamined samples are equal in the relevant variables. In this case, there is no reason to assume that those blacks who applied to black banks were the same as those blacks who applied to white banks.

Thomas Sowell, Discrimination and Disparities, Revised and Enlarged Edition (New York, NY: Basic Books, 2019), 88-89 (added references).

With Michelle Obama recently railing on White Americans for “white flight” from her Chicago neighborhood as a kid, Larry explains his experience with the same phenomenon growing up in Los Angeles. However, he describes a very different experience with the issue of race relations, and it’s not as black and white as one would think.

BACKGROUND CHECKS

To take an extreme example of Discrimination IB, for the sake of illustration, if 40 percent of the people in Group X are alcoholics and 1 percent of the people in Group Y are alcoholics, an employer may well prefer to hire only people from Group Y for work where an alcoholic would be not only ineffective but dangerous. This would mean that a majority of the people in Group X — 60 percent in this case — would be denied employment, even though they are not alcoholics.

What matters, crucially, to the employer is the cost of determining which individual is or is not an alcoholic, when job applicants all show up sober on the day when they are seeking employment.

This also matters to the customers who buy the employer’s products and to society as a whole. If alcoholics produce a higher proportion of products that turn out to be defective, that is a cost to customers, and that cost may take different forms. For example, the customer could buy the product and then discover that it is defective. Alternatively, defects in the product might be discovered at the factory and discarded. In this case, the customers will be charged higher prices for the products that are sold, since the costs of defective products that are discovered and discarded at the factory must be covered by the prices charged for the reliable products that pass the screening test and are sold.

To the extent that alcoholics are not only less competent but dangerous, the costs of those dangers are paid by either fellow employees who face those dangers on the job or by customers who buy dangerously defective products, or both. In short, there are serious costs inherent in the situation, so that either 60 percent of the people in Group X or employers or customers — or all three groups — end up paying the costs of the alcoholism of 40 percent of the people in Group X.
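Sowell's trade-off here is plain arithmetic, and a minimal sketch (using his illustrative 40 percent and 1 percent figures; the code itself is my own, not his) shows why the sober majority of Group X bears the cost under group screening:

```python
# Sowell's illustrative figures: 40% of Group X and 1% of Group Y
# are alcoholics. Under group-based screening (Discrimination IB in
# the text's terms) the employer rejects everyone in Group X; under
# individual screening (Discrimination IA) each applicant is tested.

P_ALC_X = 0.40  # share of Group X who are alcoholics
P_ALC_Y = 0.01  # share of Group Y who are alcoholics

# Group screening: all of X is rejected, so the sober majority of X
# is denied work despite being qualified.
wrongly_excluded_x = 1.0 - P_ALC_X
print(f"Sober members of Group X denied work: {wrongly_excluded_x:.0%}")  # 60%

# Individual screening: only actual alcoholics are excluded, in either
# group, but the employer pays a per-applicant testing cost. Whether
# that cost is worth paying is the "cost of knowledge" question the
# text raises.
print(f"Group X rejected under individual screening: {P_ALC_X:.0%}")  # 40%
print(f"Group Y rejected under individual screening: {P_ALC_Y:.0%}")  # 1%
```

The same structure explains the background-check finding later in the excerpt: once the per-applicant cost of knowledge is paid, group membership stops doing the screening work.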

This is certainly not judging each job applicant as an individual, so it is not Discrimination I in the purest sense of Discrimination IA. On the other hand, it is also not Discrimination II, in the sense of decisions based on a personal bias or antipathy toward that group. The employer might well have personal friends from Group X, based on far more knowledge of those particular individuals than it is possible to get about job applicants, without prohibitive costs.

The point here is neither to justify nor condemn the employer but to classify different decision-making processes, so that their implications and consequences can be analyzed separately. If judging each person as an individual is Discrimination IA, we can classify as Discrimination IB basing decisions about groups on information that is correct for that group, though not necessarily correct for every individual in that group, nor necessarily even correct for a majority of the individuals in that group.

A real-life example of the effect of the cost of knowledge in this context is a study which showed that, despite the reluctance of many employers to hire young black males, because a significant proportion of them have criminal records (Discrimination IB), those particular employers who automatically did criminal background checks on all their employees (Discrimination IA) tended to hire more young black males than did other employers.1

In other words, where the nature of the work made criminal background checks worth the cost for all employees, it was no longer necessary to use group information to assess whether individual young black job applicants had a criminal background. This made young black job applicants without a criminal background more employable than before.

More is involved here than simply a question of nomenclature. It has implications for practical policies in the real world. Many observers, hoping to help young black males have more employment opportunities, have advocated prohibiting employers from asking job applicants questions about a criminal record. Moreover, the U.S. Equal Employment Opportunity Commission has sued employers who do criminal background checks on job applicants, on grounds that this was racial discrimination, even when it was applied to all job applicants, regardless of race.2 Empirically, however, criminal background checks provided more employment opportunities for young black males.

[1] Harry J. Holzer, Steven Raphael, and Michael A. Stoll, “Perceived Criminality, Criminal Background Checks, and the Racial Hiring Practices of Employers,” Journal of Law and Economics, Vol. 49, No. 2 (October 2006), pp. 452, 473.

[2] Jason L. Riley, “Jobless Blacks Should Cheer Background Checks,” Wall Street Journal, August 23, 2013, p. A11; Paul Sperry, “Background Checks Are Racist?” Investor’s Business Daily, March 28, 2014, p. A1.

Thomas Sowell, Discrimination and Disparities (New York, NY: Basic Books, 2018), 23-25 (added references).

Rich or poor, most people agree that wealth disparities exist. Thomas Sowell discusses the origins and impacts of those wealth disparities in his book, Discrimination and Disparities, in this episode of Uncommon Knowledge.

Sowell explains his issues with the relatively new legal standard of “disparate impact” and how it disregards the American legal principle of “burden of proof.” Sowell and Robinson discuss how economic outcomes vary greatly across individuals and groups and that concepts like “disparate impact” fail to take into account these variations.

They chat about the impact of nuclear families on the IQs of individuals, as studies have not only shown that children raised by two parents tend to have higher levels of intelligence but also that first-born and single children have even higher intelligence levels than those of younger siblings, indicating that the time and attention given by parents to their children greatly impacts the child’s future more than factors like race, environment, or genetics. Sowell talks about his book in which he wrote extensively about National Merit Scholarship finalists who more often than not were the first-born or only child in a family.

Sowell and Robinson go on to discuss historical instances of discrimination and how those instances affected economic and social issues within families, including discrimination created by housing laws in the Bay Area. They discuss unemployment rates, violence, the welfare state in regards to African American communities, and more.

Some “Orthodoxy” Quotes | G.K. Chesterton

I will add to this over time.

Dennis Prager always speaks to a whole generation being bored. This quote from Chesterton reminded me of that idea… that is, how one approaches/views life offers meaning or boredom:

…oddities only strike ordinary people. Oddities do not strike odd people. This is why ordinary people have a much more exciting time; while odd people are always complaining of the dulness of life. This is also why the new novels die so quickly, and why the old fairy tales endure for ever. The old fairy tale makes the hero a normal human boy; it is his adventures that are startling; they startle him because he is normal. But in the modern psychological novel the hero is abnormal; the centre is not central. Hence the fiercest adventures fail to affect him adequately, and the book is monotonous. You can make a story out of a hero among dragons; but not out of a dragon among dragons. The fairy tale discusses what a sane man will do in a mad world. The sober realistic novel of to-day discusses what an essential lunatic will do in a dull world.

G.K. Chesterton, Orthodoxy (San Francisco, CA: Ignatius Press, 1995), 20.

“But what we suffer from to-day is humility in the wrong place. Modesty has moved from the organ of ambition. Modesty has settled upon the organ of conviction; where it was never meant to be. A man was meant to be doubtful about himself, but undoubting about truth; this has been exactly reversed.”

G.K. Chesterton, Orthodoxy (San Francisco, CA: Ignatius Press, 1995), 36-37.

Take first the more obvious case of materialism. As an explanation of the world, materialism has a sort of insane simplicity. It has just the quality of the madman’s argument; we have at once the sense of it covering everything and the sense of it leaving everything out. Contemplate some able and sincere materialist, as, for instance, Mr. McCabe, and you will have exactly this unique sensation. He understands everything, and everything does not seem worth understanding. His cosmos may be complete in every rivet and cog-wheel, but still his cosmos is smaller than our world. Somehow his scheme, like the lucid scheme of the madman, seems unconscious of the alien energies and the large indifference of the earth; it is not thinking of the real things of the earth, of fighting peoples or proud mothers, or first love or fear upon the sea. The earth is so very large, and the cosmos is so very small. The cosmos is about the smallest hole that a man can hide his head in.

It must be understood that I am not now discussing the relation of these creeds to truth; but, for the present, solely their relation to health. Later in the argument I hope to attack the question of objective verity; here I speak only of a phenomenon of psychology. I do not for the present attempt to prove to Haeckel[13] that materialism is untrue, any more than I attempted to prove to the man who thought he was Christ that he was labouring under an error. I merely remark here on the fact that both cases have the same kind of completeness and the same kind of incompleteness. You can explain a man’s detention at Hanwell by an indifferent public by saying that it is the crucifixion of a god of whom the world is not worthy. The explanation does explain. Similarly you may explain the order in the universe by saying that all things, even the souls of men, are leaves inevitably unfolding on an utterly unconscious tree— the blind destiny of matter. The explanation does explain, though not, of course, so completely as the madman’s. But the point here is that the normal human mind not only objects to both, but feels to both the same objection. Its approximate statement is that if the man in Hanwell is the real God, he is not much of a god. And, similarly, if the cosmos of the materialist is the real cosmos, it is not much of a cosmos. The thing has shrunk. The deity is less divine than many men; and (according to Haeckel) the whole of life is something much more grey, narrow, and trivial than many separate aspects of it. The parts seem greater than the whole.

For we must remember that the materialist philosophy (whether true or not) is certainly much more limiting than any religion. In one sense, of course, all intelligent ideas are narrow. They cannot be broader than themselves. A Christian is only restricted in the same sense that an atheist is restricted. He cannot think Christianity false and continue to be a Christian; and the atheist cannot think atheism false and continue to be an atheist. But as it happens, there is a very special sense in which materialism has more restrictions than spiritualism. Mr. McCabe thinks me a slave because I am not allowed to believe in determinism. I think Mr. McCabe a slave because he is not allowed to believe in fairies. But if we examine the two vetoes we shall see that his is really much more of a pure veto than mine. The Christian is quite free to believe that there is a considerable amount of settled order and inevitable development in the universe. But the materialist is not allowed to admit into his spotless machine the slightest speck of spiritualism or miracle. Poor Mr. McCabe is not allowed to retain even the tiniest imp, though it might be hiding in a pimpernel. The Christian admits that the universe is manifold and even miscellaneous, just as a sane man knows that he is complex. The sane man knows that he has a touch of the beast, a touch of the devil, a touch of the saint, a touch of the citizen. Nay, the really sane man knows that he has a touch of the madman. But the materialist’s world is quite simple and solid, just as the madman is quite sure he is sane. The materialist is sure that history has been simply and solely a chain of causation, just as the interesting person before mentioned is quite sure that he is simply and solely a chicken. Materialists and madmen never have doubts.

Spiritual doctrines do not actually limit the mind as do materialistic denials. Even if I believe in immortality I need not think about it. But if I disbelieve in immortality I must not think about it. In the first case the road is open and I can go as far as I like; in the second the road is shut. But the case is even stronger, and the parallel with madness is yet more strange. For it was our case against the exhaustive and logical theory of the lunatic that, right or wrong, it gradually destroyed his humanity. Now it is the charge against the main deductions of the materialist that, right or wrong, they gradually destroy his humanity; I do not mean only kindness, I mean hope, courage, poetry, initiative, all that is human. For instance, when materialism leads men to complete fatalism (as it generally does), it is quite idle to pretend that it is in any sense a liberating force. It is absurd to say that you are especially advancing freedom when you only use free thought to destroy free will. The determinists come to bind, not to loose. They may well call their law the “chain” of causation. It is the worst chain that ever fettered a human being. You may use the language of liberty, if you like, about materialistic teaching, but it is obvious that this is just as inapplicable to it as a whole as the same language when applied to a man locked up in a mad-house…

[13] Ernst Haeckel (1834-1919), a German biologist, monistic philosopher and evolutionary theorist who believed that he had proved that there was no immortal soul, free will or personal God.

G.K. Chesterton, Orthodoxy (San Francisco, CA: Ignatius Press, 1995), 27-30.

H.P. Owen and Self-Referentially FALSE Views of Nature

One of the reasons I am a bibliophile who loves to follow up the references given in one book with the purchase of the referenced book is that the quotes shared by multiple authors on a subject often do not convey the full weight and gravity of the larger passage they come from. I will give you an example. In his 1987 work, “Scaling the Secular City: A Defense of Christianity,” J.P. Moreland quotes Huw Parri Owen’s work, Christian Theism. In a more voluminous work, Moreland and William Lane Craig use the same quote:

Determinism is self-stultifying.  If my mental processes are totally determined, I am totally determined either to accept or to reject determinism.  But if the sole reason for my believing or not believing X is that I am causally determined to believe it I have no ground for holding that my judgment is true or false.

J.P. Moreland and William Lane Craig, Philosophical Foundations for a Christian Worldview (Downers Grove, IL: IVP Academic, 2003), 241.

A great quote for sure.

I was finally able to get a good bound copy for a VERY reasonable price (previously when I looked for a copy, they were very expensive). While this book will enter my hopper to be read in full, I read the chapter the quote came from, and loved this larger quote from the section… it deals with the self-stultifying aspect of Marx and Freud. I will add another quote by an excellent author that does much the same, but first here is H.P. Owen’s larger reference:

  1. If determinism were true how could the illusion of free will arise? If we are wholly determined why are we not conscious of being so? These questions gain additional force from the fact that we feel ourselves able to resist those very forces by which according to determinism our actions are invariably caused. The sense of free will cannot be plausibly attributed to “wish-fulfilment”. Admittedly it may seem desirable in so far as it raises us above physical nature. Yet it also imposes on us an existential burden together with a burden of guilt on those occasions when we have misused our freedom of choice.
  2. Determinism is incompatible with a great deal of our moral language. In particular it is incompatible with the concepts of obligation and moral responsibility. I cannot be obliged to do X unless I am free to do it simply because it is my duty and not because I am determined by other factors. Of course obligation is itself a determining factor in so far as it is a form of constraint. However the constraint is a unique one; and a sign of its uniqueness is that it leaves a person free either to accept or to reject it. Equally I cannot be morally responsible for an action that I was compelled to perform even if the compulsion proceeds from my own nature and so is an act of self-determination. And if I am not responsible for an action I cannot be blamed for it.
  3. Chiefly, however, determinism is self-stultifying. If my mental processes are totally determined, I am totally determined either to accept or to reject determinism. But if the sole reason for my believing or not believing X is that I am causally determined to believe it I have no ground for holding that my judgment is true or false. J. R. Lucas has put the point cogently with reference to Marxist and Freudian forms of determinism thus. ‘The Marxist who says that all ideologies have no independent validity and merely reflect the class interests of those who hold them can be told that in that case his Marxist views merely express the economic interests of his class, and have no more claim to be judged true or valid than any other view. So too the Freudian, if he makes out that everybody else’s philosophy is merely the consequence of childhood experiences, is, by parity of reasoning, revealing merely his delayed response to what happened to him when he was a child.’ Lucas then makes the same point with regard to a person who maintains, more generally, that our behaviour is totally determined by heredity and environment. “If what he says is true, he says it merely as the result of his heredity and environment, and of nothing else. He does not hold his determinist views because they are true, but because he has such-and-such a genetic make-up, and has received such-and-such stimuli; that is, not because the structure of the universe is such-and-such but only because the configuration of only one part of the universe, together with the structure of the determinist’s brain, is such as to produce that result.”

The exact force of this criticism is sometimes missed. Certainly on deterministic premisses determinism may be true. But we should not have any grounds for affirming that it is true or therefore for knowing that it is so. In order to obtain these grounds we must be free from all determining factors in order to assess the evidence according to its own worth. This principle applies to the assessment of all truth-claims (including those of Christianity). Freedom from determining factors is therefore required in the cognitive as much as in the moral sphere.

Huw Parri Owen, Christian Theism: A Study in its Basic Principles (Edinburgh, London: T & T Clark, 1984), 118-119.

Here is a smaller section from Dr. Roy Clouser critiquing Freudian determinism as well as throwing a stone in Taoism’s shoe:

…As an example of the strong sense of this incoherency, take the claim sometimes made by Taoists that “Nothing can be said of the Tao.” Taken without qualification (which is not the way it is intended), this is self-referentially incoherent since to say “Nothing can be said of the Tao” is to say something of the Tao. Thus, when taken in reference to itself, the statement cancels its own truth. As an example of the weak version of self-referential incoherency, take the claim once made by Freud that every belief is a product of the believer’s unconscious emotional needs. If this claim were true, it would have to be true of itself since it is a belief of Freud’s. It therefore requires itself to be nothing more than the product of Freud’s unconscious emotional needs. This would not necessarily make the claim false, but it would mean that even if it were true neither Freud nor anyone else could ever know that it is. The most it would allow anyone to say is that he or she couldn’t help but believe it.  The next criterion says that a theory must not be incompatible with any belief we have to assume for the theory to be true. I will call a theory that violates this rule “self-assumptively incoherent.” As an example of this incoherence, consider the claim made by some philosophers that all things are exclusively physical [atheistic-naturalism]. This has been explained by its advocates to mean that nothing has any property or is governed by any law that is not a physical property or a physical law. But the very sentence expressing this claim, the sentence “All things are exclusively physical,” must be assumed to possess a linguistic meaning. 
This is not a physical property, but unless the sentence had it, it would not be a sentence; it would be nothing but physical sounds or marks that would not linguistically signify any meaning whatever and thus could not express any claim — just as a group of pebbles, or clouds, or leaves, fails to signify any meaning or express any claim. Moreover, to assert this exclusivist materialism is the same as claiming it is true, which is another nonphysical property; and the claim that it is true further assumes that its denial would have to be false, which is a relation guaranteed by logical, not physical, laws. (Indeed, any theory which denies the existence of logical laws is instantly and irredeemably self-assumptively incoherent since that very denial is proposed as true in a way that logically excludes its being false.) What this shows is that the claim “All things are exclusively physical” must itself be assumed to have nonphysical properties and be governed by nonphysical laws or it could neither be understood nor be true. Thus, no matter how clever the supporting arguments for this claim may seem, the claim itself is incompatible with assumptions that are required for it to be true. It is therefore self-assumptively incoherent in the strong sense…

Roy A. Clouser, The Myth of Religious Neutrality: An Essay on the Hidden Role of Religious Belief in Theories (Notre Dame, IN: Notre Dame Press, 2005), 84-85.

Another Jefferson Misquote

I kept getting this quote thrown at me in conversation on Twitter as proof of Jefferson’s “anti-war” stance. Here is one such use of it, followed by an ultimatum:

  • “I abhor war and view it as the greatest scourge of mankind.” ~Thomas Jefferson. Now you still want to argue your thinking?

The quote comes from a letter to Elbridge Gerry, and can be read here. Here is a larger section where this comes from… I will italicize the quote used already, and after the larger quote emphasize what follows that gives the sentence context:

I have been happy, however, in believing, from the stifling of this effort, that that dose was found too strong, & excited as much repugnance there as it did horror in other parts of our country, & that whatever follies we may be led into as to foreign nations, we shall never give up our Union, the last anchor of our hope, & that alone which is to prevent this heavenly country from becoming an arena of gladiators. Much as I abhor war, and view it as the greatest scourge of mankind, and anxiously as I wish to keep out of the broils of Europe, I would yet go with my brethren into these, rather than separate from them. But I hope we may still keep clear of them, notwithstanding our present thraldom, & that time may be given us to reflect on the awful crisis we have passed through, and to find some means of shielding ourselves in future from foreign influence, political, commercial, or in whatever other form it may be attempted. I can scarcely withhold myself from joining in the wish of Silas Deane, that there were an ocean of fire between us & the old world.

Here is the sentence in whole — again:

  • Much as I abhor war, and view it as the greatest scourge of mankind, and anxiously as I wish to keep out of the broils of Europe, I would yet go with my brethren into these, rather than separate from them.

There is a lot of qualifying language that the sentence, ripped from its context, does not allow a reader to see; the full sentence gives a much better understanding of Jefferson’s position. Also note that the letter included the history and knowledge of the Silas Deane affair, as well as what is missing from the letter… which we know because we have the rough draft:

“I shall never forget the prediction of the count de Vergennes that we shall exhibit the singular phenomenon of a fruit rotten before it is ripe, nor cease to join in the wish of Silas Deane that there were an ocean of fire between us & the old world. Indeed my dear friend I am so disgusted with this entire subjection to a foreign power that if it were in the end to appear to be the wish of the body of my countrymen to remain in that vassalege I should feel my unfitness to be an agent in their affairs, and seek in retirement that personal independence without which this world has nothing I value. I am confident you set the same store by it which I do: but perhaps your situation may not give you the same conviction of its existence.”


As an aside… Jefferson was willing to see the French Revolution become far bloodier rather than see it fail in its aims:

My own affections have been deeply wounded by some of the martyrs to this cause [the French Revolution], but rather than it should have failed, I would have seen half the earth desolated.

  • Thomas Jefferson, Letter of January 3, 1793, The Portable Thomas Jefferson, ed. Merrill D. Peterson (New York: Penguin Books, 1975), p. 465; quoted in Thomas Sowell, A Conflict of Visions: Ideological Origins of Political Struggles (New York, NY: Basic Books, 2007), 29.

Free Will is an Illusion If Atheism Is True

Below are examples of atheists and theists agreeing that if atheism is true, truth is no longer a category to be trusted (find many more or fuller quotes and videos HERE):

✦ Determinism is self-stultifying. If my mental processes are totally determined, I am totally determined either to accept or to reject determinism. But if the sole reason for my believing or not believing X is that I am causally determined to believe it I have no ground for holding that my judgment is true or false. (H.P. Owen)

✦ If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true…and hence I have no reason for supposing my brain to be composed of atoms. (J.B.S. Haldane)

✦ The principal chore of brains is to get the body parts where they should be in order that the organism may survive. Improvements in sensorimotor control confer an evolutionary advantage: a fancier style of representing [the world] is advantageous so long as it… enhances the organism’s chances for survival. Truth, whatever that is, takes the hindmost. (Patricia Churchland)

✦ He thus acknowledged the need for any theory to allow that humans have genuine freedom to recognize the truth. He (again, correctly) saw that if all thought, belief, feeling, and choice are determined (i.e., forced on humans by outside conditions) then so is the determinists’ acceptance of the theory of determinism forced on them by those same conditions. In that case they could never claim to know their theory is true since the theory making that claim would be self-referentially incoherent. In other words, the theory requires that no belief is ever a free judgment made on the basis of experience or reason, but is always a compulsion over which the believer has no control. (Roy A. Clouser)

✦ If what he says is true, he says it merely as the result of his heredity and environment, and nothing else. He does not hold his determinist views because they are true, but because he has such-and-such stimuli; that is, not because the structure of the universe is such-and-such but only because the configuration of only part of the universe, together with the structure of the determinist’s brain, is such as to produce that result…. They [determinists – I would posit any philosophical naturalist] want to be considered as rational agents arguing with other rational agents; they want their beliefs to be construed as beliefs, and subjected to rational assessment; and they want to secure the rational assent of those they argue with, not a brainwashed repetition of acquiescent pattern. Consistent determinists should regard it as all one whether they induce conformity to their doctrines by auditory stimuli or a suitable injection of hallucinogens: but in practice they show a welcome reluctance to get out their syringes, which does equal credit to their humanity and discredit to their views. Determinism, therefore, cannot be true, because if it was, we should not take the determinists’ arguments as being really arguments, but as being only conditioned reflexes. Their statements should not be regarded as really claiming to be true, but only as seeking to cause us to respond in some way desired by them. (J. R. Lucas)

✦ …a lecture he attended entitled “Determinism – Is Man a Slave or the Master of His Fate,” given by Stephen Hawking, who holds the Lucasian Professorship of Mathematics at Cambridge (Isaac Newton’s chair), included Hawking’s admission that we cannot say whether “we are the random products of chance, and hence, not free, or whether God had designed these laws within which we are free.” In other words, do we have the ability to make choices, or do we simply follow a chemical reaction induced by millions of mutational collisions of free atoms? Michael Polanyi mentions that this “reduction of the world to its atomic elements acting blindly in terms of equilibrations of forces,” a belief that has prevailed “since the birth of modern science, has made any sort of teleological view of the cosmos seem unscientific…. [to] the contemporary mind.”

✦ If we were free persons, with faculties which we might carelessly use or willfully misuse, the fact might be explained; but the pre-established harmony excludes this supposition. And since our faculties lead us into error, when shall we trust them? Which of the many opinions they have produced is really true? By hypothesis, they all ought to be true, but, as they contradict one another, all cannot be true. How, then, distinguish between the true and the false? By taking a vote? That cannot be, for, as determined, we have not the power to take a vote. Shall we reach the truth by reasoning? This we might do, if reasoning were a self-poised, self verifying process; but this it cannot be in a deterministic system. Reasoning implies the power to control one’s thoughts, to resist the processes of association, to suspend judgment until the transparent order of reason has been readied. It implies freedom, therefore. In a mind which is controlled by its states, instead of controlling them, there is no reasoning, but only a succession of one state upon another. There is no deduction from grounds, but only production by causes. No belief has any logical advantage over any other, for logic is no longer possible. (Borden P Bowne)

✦ What merit would attach to moral virtue if the acts that form such habitual tendencies and dispositions were not acts of free choice on the part of the individual who was in the process of acquiring moral virtue? Persons of vicious moral character would have their characters formed in a manner no different from the way in which the character of a morally virtuous person was formed—by acts entirely determined, and that could not have been otherwise by freedom of choice. (Mortimer J. Adler)

✦ Atheist Daniel Dennett’s assertion that consciousness is an illusion is not the result of an unbiased evaluation of the evidence. Indeed, there is no such thing as “unbiased evaluation” in a materialist world because the laws of physics determine everything anyone thinks, including everything Dennett thinks. Dennett is just assuming the ideology of materialism is true and applying its implications to consciousness. In doing so, he makes the same mistake we’ve seen so many other atheists make. He is exempting himself from his own theory. Dennett says consciousness is an illusion, but he treats his own consciousness as not an illusion. He certainly doesn’t think the ideas in his book are an illusion. He acts like he’s really telling the truth about reality. (Frank Turek on Dennett)

Even Darwin had some misgivings about the reliability of human beliefs. He wrote, “With me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?”

Given unguided evolution, “Darwin’s Doubt” is a reasonable one. Even given unguided or blind evolution, it’s difficult to say how probable it is that creatures—even creatures like us—would ever develop true beliefs. In other words, given the blindness of evolution, and that its ultimate “goal” is merely the survival of the organism (or simply the propagation of its genetic code), a good case can be made that atheists find themselves in a situation very similar to Hume’s.

The Nobel Laureate and physicist Eugene Wigner echoed this sentiment: “Certainly it is hard to believe that our reasoning power was brought, by Darwin’s process of natural selection, to the perfection which it seems to possess.” That is, atheists have a reason to doubt whether evolution would result in cognitive faculties that produce mostly true beliefs. And if so, then they have reason to withhold judgment on the reliability of their cognitive faculties. Like before, as in the case of Humean agnostics, this ignorance would, if atheists are consistent, spread to all of their other beliefs, including atheism and evolution. That is, because there’s no telling whether unguided evolution would fashion our cognitive faculties to produce mostly true beliefs, atheists who believe the standard evolutionary story must reserve judgment about whether any of their beliefs produced by these faculties are true. This includes the belief in the evolutionary story. Believing in unguided evolution comes built in with its very own reason not to believe it.

This will be an unwelcome surprise for atheists. To make things worse, this news comes after the heady intellectual satisfaction that Dawkins claims evolution provided for thoughtful unbelievers. The very story that promised to save atheists from Hume’s agnostic predicament has the same depressing ending.

It’s obviously difficult for us to imagine what the world would be like in such a case where we have the beliefs that we do and yet very few of them are true. This is, in part, because we strongly believe that our beliefs are true (presumably not all of them are, since to err is human—if we knew which of our beliefs were false, they would no longer be our beliefs).

Suppose you’re not convinced that we could survive without reliable belief-forming capabilities, without mostly true beliefs. Then, according to Plantinga, you have all the fixins for a nice argument in favor of God’s existence. For perhaps you also think that—given evolution plus atheism—the probability is pretty low that we’d have faculties that produced mostly true beliefs. In other words, your view isn’t “who knows?” On the contrary, you think it’s unlikely that blind evolution has the skill set for manufacturing reliable cognitive mechanisms. And perhaps, like most of us, you think that we actually have reliable cognitive faculties and so actually have mostly true beliefs. If so, then you would be reasonable to conclude that atheism is pretty unlikely. Your argument, then, would go something like this: if atheism is true, then it’s unlikely that most of our beliefs are true; but most of our beliefs are true, therefore atheism is probably false.

Notice something else. The atheist naturally thinks that our belief in God is false. That’s just what atheists do. Nevertheless, most human beings have believed in a god of some sort, or at least in a supernatural realm. But suppose, for argument’s sake, that this widespread belief really is false, and that it merely provides survival benefits for humans, a coping mechanism of sorts. If so, then we would have additional evidence—on the atheist’s own terms—that evolution is more interested in useful beliefs than in true ones. Or, alternatively, if evolution really is concerned with true beliefs, then maybe the widespread belief in God would be a kind of “evolutionary” evidence for his existence.

You’ve got to wonder.

Mitch Stokes, A Shot of Faith: To the Head (Nashville, TN: Thomas Nelson, 2012), 44-45.

…if evolution were true, then there would be selection only for survival advantage; and there would be no reason to suppose that this would necessarily include rationality. After a talk on the Christian roots of science in Canada, 2010, one atheopathic* philosophy professor argued that natural selection really would select for logic and rationality. I responded by pointing out that under his worldview, theistic religion is another thing that ‘evolved’, and this is something he regards as irrational. So under his own worldview he believes that natural selection can select powerfully for irrationality, after all. English doctor and insightful social commentator Theodore Dalrymple (who is a non-theist himself) shows up the problem in a refutation of New Atheist Daniel Dennett:

Dennett argues that religion is explicable in evolutionary terms—for example, by our inborn human propensity, at one time valuable for our survival on the African savannahs, to attribute animate agency to threatening events.

For Dennett, to prove the biological origin of belief in God is to show its irrationality, to break its spell. But of course it is a necessary part of the argument that all possible human beliefs, including belief in evolution, must be explicable in precisely the same way; or else why single out religion for this treatment? Either we test ideas according to arguments in their favour, independent of their origins, thus making the argument from evolution irrelevant, or all possible beliefs come under the same suspicion of being only evolutionary adaptations—and thus biologically contingent rather than true or false. We find ourselves facing a version of the paradox of the Cretan liar: all beliefs, including this one, are the products of evolution, and all beliefs that are products of evolution cannot be known to be true.

Jonathan D. Sarfati, The Genesis Account: A Theological, Historical, And Scientific Commentary On Genesis 1-11 (Powder Springs, GA: Creation Book Publishers, 2015), 259.

Einstein Was Not an Atheist OR Agnostic ~ Max Jammer

The following comes from the best biographical look at Einstein and religion:

[p. 90>] When the Northwestern Regional Conference of the American Association of Theological Schools convened at the Theological Seminary in Princeton in May 1939, one of the few nontheologians invited to address the meeting was Einstein. The mimeographed transcripts of his lecture carried the title “The Goal.”[34] Einstein began his talk by recalling that in the last century it was widely held that scientific knowledge and religious belief conflict with each other and that the prevailing trend “among advanced minds” was to replace belief with knowledge. The function of education was therefore confined to the development of rational thinking and knowing. Although “the aspiration toward such objective knowledge belongs to the highest of which man is capable… knowledge of what is does not open the door directly to what should be. One can have the clearest and most complete knowledge of what is, and yet not be able to deduct from that what should be the goal of our human aspirations.” Scientific thinking alone, Einstein continued, cannot lead us to the ultimate and fundamental purpose of our existence.

To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be [p. 91>] stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly. The highest principles for our aspirations and judgments are given to us in the Jewish-Christian religious tradition. It is a very high goal, which, with our weak powers, we can reach only very inadequately, but which gives us a sure foundation to our aspirations and valuations.

Compared with his 1930 essay, this talk had a much more reserved tone and its ideas were acceptable even to orthodox theologians. It should be noted, however, that the topic of Einstein’s 1930 essay differs distinctly from that of his 1939 talk; while the former dealt mainly with the origin and nature of religious beliefs, the latter deals almost exclusively with questions related to the purpose and goal of our life, a subject on which agreement is more easily attainable than on the nature of religion. In fact, Einstein’s 1939 talk was sympathetically received by almost all participants of the conference.

This was probably one of the reasons that Rabbi Louis Finkelstein, a prominent religious leader, president of the Jewish Theological Seminary in New York, and member of the organizing committee of the “Conference on Science, Philosophy and Religion,” scheduled to convene on [p. 92>] September 9-11, 1940, at the Union Theological Seminary in the City of New York, thought it appropriate to invite Einstein to address this conference as well. Einstein agreed to write an essay, “Science and Religion,” to be read at this conference.[35] Neither he nor Finkelstein anticipated the serious controversies and harsh acrimonies that this essay would evoke.

Einstein agreed, not only out of respect for a distinguished leader of liberal Judaism but also because of his well-known magnanimity to respond to all requests he thought to be ingenuous. Thus, in 1936 when Phyllis Wright, a sixth-grade student in the Sunday school of the Riverside Church in New York, asked whether scientists pray and, if they do, what they pray for, he gave a reply that can serve as an introduction to his essay for the 1940 conference.

“Scientific research is based on the assumption that all events, including the actions of mankind, are determined by the laws of nature. Therefore, a research scientist will hardly be inclined to believe that events could be influenced by a prayer, that is, by a wish addressed to a supernatural Being. However, we have to admit that our actual knowledge of these laws is only an incomplete piece of work (unvollkommenes Stückwerk), so that ultimately the belief in the existence of fundamental all-embracing laws also rests on a sort of faith. All the same, this faith has been largely justified [p. 93>] by the success of science. On the other hand, however, everyone who is seriously engaged in the pursuit of science becomes convinced that the laws of nature manifest the existence of a spirit vastly superior to that of men, and one in the face of which we with our modest powers must feel humble. The pursuit of science leads therefore to a religious feeling of a special kind, which differs essentially from the religiosity of more naive people. With friendly greetings, your Albert Einstein.”[36]

EINSTEIN’S CONTRIBUTION to the 1940 conference was presented to an audience of over five hundred participants. The article begins with the question of what, precisely, we understand by science and by religion. Science, says Einstein, can easily be defined as “the attempt at the posterior reconstruction of existence by the process of conceptualization”; but to define religion is a much more difficult task. We can reach this definition by inquiring first what characterizes the aspirations of a religious person. “A person who is religiously enlightened,” says Einstein, “appears to me to be one who has, to the best of his ability, liberated himself from the fetters of his selfish desires and is preoccupied with thoughts, feelings, and aspirations to which he clings because of their superpersonal value.” What is important, according to Einstein, is “the force of this superpersonal content … regardless of whether any attempt is made to unite this content with a divine Being.” From these presuppositions, Einstein then derived the definition [p. 94>] of religion as “the age-old endeavor of mankind to become clearly and completely conscious of these values and goals and constantly to strengthen and extend their effect.”

These definitions enabled Einstein to repeat what he had already said in his essay, “The Goal,” namely, that because science ascertains only what is, but not what should be, no conflict between the two can exist. Only intervention on the part of religion into the realm of science—if, for example, a religious community insists on the absolute truthfulness of all statements in the Bible—can give rise to conflict, as has been the case in the struggle of the Church against the doctrines of Galileo or Darwin. Even though the realms of religion and science are distinctly marked off from each other, strong reciprocal relations exist between the two. Though religion determines the goal, science, in its broadest sense, shows the means for attaining this goal. However, “science can only be created by those who are thoroughly imbued with the aspiration toward truth and understanding. This source of feeling, however, springs from the sphere of religion…. I cannot conceive of a genuine scientist without that profound faith. The situation may be expressed by an image: science without religion is lame, religion without science is blind.”

Had this statement been the final conclusion, the article probably would have been acclaimed by all the participants. But Einstein qualified his statements about the compatibility of religion and science “with reference to the actual content of historical religions.” “This qualification,” he continued, “has to do with the concept of God.” He then mentioned, though more briefly than in his 1930 essay, his theory of the three stages in the evolution of religion and the concept of God and declared that “the main source of [p. 95>] the present-day conflicts between the spheres of religion and of science lies in this concept of a personal God.” Although he conceded that the doctrine of a personal God could never be refuted, because such a doctrine could always take refuge where science has not yet been able to gain a foothold, he called such a procedure

not only unworthy but also fatal. For a doctrine which is able to maintain itself not in clear light but only in the dark, will of necessity lose its effect on mankind, with incalculable harm to human progress. In their struggle for the ethical good, teachers of religion must have the stature to give up that source of fear and hope which in the past placed such vast power in the hands of priests. The further the spiritual evolution of mankind advances, the more certain it seems to me that the path to genuine religiosity does not lie through the fear of life, and the fear of death, and blind faith, but through striving after rational knowledge. In this sense I believe that the priest must become a teacher if he wishes to do justice to his lofty educational mission.

Some background is necessary to assess correctly the reaction that this article—in particular, its denial of a personal God—evoked among the theologians attending the conference and the wider public. Einstein did not anticipate that the denial of a personal God would be misinterpreted as the denial of God. That such a misinterpretation was not uncommon can be gathered from a 1945 encyclopedia of religion that defined the term “atheism” as “the denial that there exists a being corresponding to some particular definition of god; frequently, but unfortunately, [p. 96>] used to denote the denial of God as personal.”[37] That Einstein was neither an atheist nor an agnostic—certainly not in the usual sense of the term coined in 1869 by Thomas Henry Huxley—follows not only from Einstein’s above-mentioned statements concerning his cosmic religion but also from statements made by all those with whom he had intimate discussions about his religious conviction. Thus, for example, his close friend Max Born once remarked, “he [Einstein] had no belief in the Church, but did not think that religious faith was a sign of stupidity, nor unbelief a sign of intelligence.”[38] David Ben-Gurion—who visited Einstein in Princeton a year before inviting him to become President of Israel—recalled that, when discussing religion, “even he [Einstein], with his great formula about energy and mass, agreed that there must be something behind the energy.”[39] With respect to religion, Ben-Gurion and Einstein had much in common. Like Einstein, Ben-Gurion was an ardent admirer of Spinoza. He also declared his belief “that there must be a being, intangible, indefinable, even unimaginable, but something infinitely superior to all we know and are capable of conceiving,”[40] a belief not much different from Einstein’s belief in the impersonal God of his cosmic religion.

At a charity dinner in New York, Einstein explicitly dissociated himself from atheism when he spoke with the German anti-Nazi diplomat and author Hubertus zu Lowenstein: [p. 97>] “In view of such harmony in the cosmos which I, with my limited human mind, am able to recognize, there are yet people who say there is no God. But what really makes me angry is that they quote me for support of such views.”[41]

Footnotes

[34] A. Einstein, “The Goal,” lecture delivered 19 May 1939, Ideas and Opinions, pp. 41-44; Out of My Later Years, pp. 25-28.

[35] A. Einstein, “Science and Religion,” Transactions of the First Conference on Science, Philosophy and Religion in Their Relation to the Democratic Way of Life (New York, 1941); Ideas and Opinions, pp. 44-49; Out of My Later Years, pp. 28-33; Nature 146 (1940): 605-607.

[36] Einstein to P. Wright, 24 January 1936. Einstein Archive, reel 52-337.

[37] V. Ferm, ed., An Encyclopedia of Religion (Philosophical Library, New York, 1945), p. 44.

[38] Born—Einstein Letters, p. 203.

[39] M. Pearlman, Ben Gurion Looks Back (Weidenfeld and Nicolson, London, 1965), p. 217.

[40] Ibid., p. 216.

[41] Prinz Hubertus zu Lowenstein, Towards the Further Shore (Victor Gollancz, London, 1968), p. 156.

Max Jammer, Einstein and Religion (Princeton, NJ: Princeton University Press, 1999), 90-97.

Vishal Mangalwadi Quotes George Orwell

This was a great quote from a book I am currently reading. The quote from Orwell comes from his letter/notes entitled “A Patriot After All: 1940-1941” (The Complete Works of George Orwell, Vol. 12):

India-born British author George Orwell (1903-50) was a socialist, inclined toward atheism. The horrors of Fascism, Nazism, Communism, and the two World Wars forced him to face the consequences of the “amputation of the soul.” In his “Notes on the Way,” Orwell wrote that the writers who sawed off the West’s soul included “Gibbon, Voltaire, Rousseau, Shelley, Byron, Dickens, Stendahl, Samuel Butler, Ibsen, Zola, Flaubert, Shaw, Joyce—in one way or another they are all of them destroyers, wreckers, saboteurs.” These “Enlightenment” writers led the West into its present darkness.

In his essay Orwell was reflecting on Malcolm Muggeridge’s book The Thirties, which describes the damage these writers had done to Europe. Muggeridge, then still an atheist, was astute enough to perceive that

we are living in a nightmare precisely because we have tried to set up an earthly paradise. We have believed in “progress.” Trusted to human leadership, rendered unto Caesar the things that are God’s. . . . There is no wisdom except in the fear of God; but no one fears God; therefore there is no wisdom. Man’s history reduces itself to the rise and fall of material civilizations, one Tower of Babel after another . . . downwards into abysses which are horrible to contemplate.

I first discovered the Bible as a student in India. It transformed me as an individual and I soon learned that, contrary to what my uni­versity taught, the Bible was the force that had created modern India. Let me, therefore, begin our study of the book that built our world by telling you my own story.

Vishal Mangalwadi, The Book that Made Your World: How the Bible Created the Soul of Western Civilization (Nashville, TN: Thomas Nelson, 2011), 22-23.

2-Quotes from An Early Salvo in the Culture War ~ Allan Bloom

I just wanted to catalog two quotes by a Jewish (non-religious), gay, anti-conservative professor, and then post some excerpts from a review of the book.

There is one thing a professor can be absolutely certain of: almost every student entering the university believes, or says he believes, that truth is relative. If this belief is put to the test, one can count on the students’ reaction: they will be uncomprehending. That anyone should regard the proposition as not self-evident astonishes them. … The relativity of truth is … a moral postulate, the condition of a free society, or so they see it. … The danger they have been taught to fear is not error but intolerance. Relativism is necessary to openness; and this is the virtue, the only virtue, which all primary education for more than fifty years has dedicated itself to inculcating. Openness — and the relativism that makes it plausible — is the great insight of our times. … The study of history and of culture teaches that all the world was mad in the past; men always thought they were right, and that led to wars, persecutions, slavery, xenophobia, racism, and chauvinism. The point is not to correct the mistakes and really be right; rather it is not to think you are right at all.

[….]

In the United States, practically speaking, the Bible was the only common culture, one that united the simple and the sophisticated, rich and poor, young and old, and—as the very model for a vision of the order of the whole of things, as well as the key to the rest of Western art, the greatest works of which were in one way or another responsive to the Bible—provided access to the seriousness of books. With its gradual and inevitable disappearance, the very idea of such a total book is disappearing. And fathers and mothers have lost the idea that the highest aspiration they might have for their children is for them to be wise—as priests, prophets or philosophers are wise. Specialized competence and success are all that they can imagine. Contrary to what is commonly thought, without the book even the idea of the whole is lost.

Allan Bloom, Closing of the American Mind: How Higher Education Has Failed Democracy and Impoverished the Souls of Today’s Students (New York, NY: Simon and Schuster, 1987), 25, 58 (respectively).

This review comes by way of The American Conservative. I would also recommend The Weekly Standard’s anniversary review of the book.

…While I continue to learn much from Bloom, over the years I have arrived at three main judgments about the book’s relevance, its prescience, and its failings. First, Bloom was right to be concerned about the specter of relativism—though perhaps even he didn’t realize how bad it would get, particularly when one considers the reaction to his book compared to its likely reception were it published today. Second, his alarm over the threat of “multiculturalism” was misplaced and constituted a bad misreading of the zeitgeist, in which he mistook the left’s tactical use of identity politics for the rise of a new kind of communalist and even traditionalist tribalism. And, lastly, most of his readers—even today—remain incorrect in considering him to be a representative of “conservatism,” a label that he eschewed and a worldview he rejected…

[….]

What should most astonish any reader of Bloom’s Closing after 25 years is the fact that this erudite treatise about the crisis of higher education not only sat atop the bestseller list for many weeks but was at the center of an intense, lengthy, and ferocious debate during the late 1980s over education, youth, culture, and politics. In many ways, it became the most visible and weightiest salvo in what came to be known as “the culture wars,” and people of a certain generation still hold strong opinions about Bloom and his remarkable, unlikely bestseller.

Today there are many books about the crisis of higher education—while the nature of the crisis may change, higher education never seems to be out of the woods—but none before or since Bloom’s book achieved its prominence or made its author as rich and famous as a rock star. It was a book that many people bought but few read, at least not beyond a few titillating passages condemning rock-and-roll and feminism. Yet it was a book about which almost everyone with some engagement in higher education held an opinion—indeed, it was obligatory to have considered views on Bloom’s book, whether one had read it or not.

Bloom’s book was at the center of a debate—one that had been percolating well before its publication in 1987—over the nature and content of a university education. That debate intensified with the growing numbers of “diverse” populations seeking recognition on college campuses—concomitant with the rise of departments of Women’s Studies, African-American Studies, and a host of other “Studies” studies—leading to demands that the curriculum increasingly reflect contributions by non-male, non-white, non-European and even non-dead authors.

The Closing of the American Mind spawned hundreds, perhaps even thousands of responses—most of them critiques—including an article entitled “The Philosopher Despot” in Harper’s by political theorist Benjamin Barber, and the inevitably titled The Opening of the American Mind by Lawrence Levine. Partly spurred by the firestorm initiated by Bloom’s book, perennial presidential candidate Jesse Jackson led a march through the campus of Stanford University shouting through a bullhorn, “Hey hey, ho ho, Western Civ has got to go!” Passions for campus reform ran high, and an avalanche of words, articles, denunciations, and ad hominem attacks greeted Bloom’s defense of the Western canon.

Yet the nuances of Bloom’s qualified defense of the Western canon were rarely appreciated by critics or supporters alike. While Bloom was often lumped together with E.D. Hirsch—whose Cultural Literacy was published the same year and rose to number two on the New York Times bestseller list, just behind Closing—Bloom’s argument was fundamentally different and far more philosophically challenging than Hirsch’s more mundane, if nevertheless accurate, point that educated people increasingly did not have knowledge about their own culture. Hirsch’s book spoke to anxiety about the loss of a shared literary and cultural inheritance, which today has been largely supplanted by references to a few popular television shows and sports televised on ESPN.

Bloom made an altogether different argument: American youth were increasingly raised to believe that nothing was True, that every belief was merely the expression of an opinion or preference. Americans were raised to be “cultural relativists,” with a default attitude of non-judgmentalism. Not only all other traditions but even one’s own (whatever that might be) were simply views that happened to be held by some people and could not be judged inferior or superior to any other. He bemoaned particularly the decline of household and community religious upbringing in which the worldviews of children were shaped by a comprehensive vision of the good and the true. In one arresting passage, he waxed nostalgic for the days when people cared: “It was not necessarily the best of times in America when Catholic and Protestants were suspicious of and hated one another; but at least they were taking their beliefs seriously…”

He lamented the decline of such true belief not because he personally held any religious or cultural tradition to be true—while Bloom was raised as a Jew, he was at least a skeptic, if not a committed atheist—but because he believed that such inherited belief was the source from which a deeper and more profound philosophic longing arose. It wasn’t “cultural literacy” he wanted, but rather the possibility of that liberating excitement among college-age youth that can come from realizing that one’s own inherited tradition might not be true. From that harrowing of belief can come the ultimate philosophic quest—the effort to replace mere prejudice with the quest for knowledge of the True.

Near the beginning of Closing, Bloom relates one telling story of a debate with a psychology professor during his time teaching at Cornell. Bloom’s adversary claimed, “it was his function to get rid of prejudices in his students.” Bloom compared that function to the activity of an older sibling who informs the kids that there is no Santa Claus—disillusionment and disappointment. Rather than inspiring students to replace “prejudice” with a curiosity for Truth, the mere shattering of illusion would simply leave students “passive, disconsolate, indifferent, and subject to authorities like himself.”

Bloom relates that “I found myself responding to the professor of psychology that I personally tried to teach my students prejudices, since nowadays—with the general success of his method—they had learned to doubt beliefs even before they believed in anything … One has to have the experience of really believing before one can have the thrill of liberation.” Bloom’s preferred original title—before being overruled by Simon and Schuster—was Souls Without Longing. He was above all concerned that students, in being deprived of the experience of living in their own version of Plato’s cave, would never know or experience the opportunity of philosophic ascent.

[….]

Today we live in a different age, one that so worried Bloom—an age of indifference. Institutions of higher learning have almost completely abandoned even a residual belief that there are some books and authors that an educated person should encounter. A rousing defense of a curriculum in which female, African-American, Latino, and other authors should be represented has given way to a nearly thoroughgoing indifference to the content of our students’ curricula. Academia is committed to teaching “critical thinking” and willing to allow nearly any avenue in the training of that amorphous activity, but eschews any belief that the content of what is taught will or ought to influence how a person lives.

Thus, not only is academia indifferent to whether our students become virtuous human beings (to use a word seldom to be found on today’s campuses), but it holds itself to be unconnected to their vices—thus there remains no self-examination over higher education’s role in producing the kinds of graduates who helped turn Wall Street into a high-stakes casino and our nation’s budget into a giant credit card. Today, in the name of choice, non-judgmentalism, and toleration, institutions prefer to offer the greatest possible expanse of options, in the implicit belief that every 18- to 22-year-old can responsibly fashion his or her own character unaided.

Bloom was so correct about the predictable rise of a society defined by indifference that one is entitled to conclude that were Closing published today, it would barely cause a ripple. This is not because most of academia would be inclined to agree with his arguments any more than they did in 1987. Rather, it is simply the case that hardly anyone in academe any longer thinks that curricula are worth fighting over….

[….]

Today’s academic leaders don’t believe the content of those choices has any fundamental influence on the souls of our students, most likely because it would be unfashionable to believe that they have souls. As long as everyone is tolerant of everyone else’s choices, no one can get hurt. What is today called “tolerance,” Bloom rightly understood to be more deeply a form of indifference, the extreme absence of care, leading to a society composed not only of “souls without longing” but humans treated as utilitarian bodies that are increasingly incapable of love.

(3-Part Interview)