Evolutionists w/Egg On Their Faces – Failed Predictions

  • the theory has been attacked on the grounds that many aspects of nature fail to show any evidence of intelligent design, such as “junk” DNA — R. T. Pennock (2002)

First, this is with a hat-tip to WINTERY KNIGHT! BTW, I posted on this many years back (jump below to my imports). Here is the video WK posted and then I will add more from him and my old posts.

VIDEO DESCRIPTION

Is the idea of junk DNA one of the biggest mistakes in science in our lifetime? Only about 1% of our DNA codes for proteins, so what is the other 99% doing? Many evolutionary scientists over the years insisted that the non-protein-coding DNA is largely junk, but intelligent design theorists predicted that function would be prevalent throughout our genome. Guess which prediction turned out to be right?

Learn how scientists have discovered that the vast majority of our genome has function in this installment of the “Codes of Life” mini-series produced as part of the “Long Story Short” show on YouTube.

Find out more about scientific challenges to evolution. Download a free copy of the mini-book “Top 10 Scientific Problems With Evolution” here: TOP TEN PROBLEMS w/EVOLUTION. This free digital mini-book reviews the scientific literature and shows there are powerful scientific challenges to core tenets of Darwinian theory.

The key idea here is this: evolutionists predicted one thing; Intelligent Design proponents, another (EVOLUTION NEWS):

  1. Evolutionists predicted that, in line with their premise of a randomly generated genome, DNA would turn out to be full of Darwinian debris, playing no functional role but merely parasitic (atheist Richard Dawkins’s term) on the small portion of functional DNA.
  2. Proponents of intelligent design said the opposite. William Dembski (1998) and Richard Sternberg (2002) predicted widespread function for the so-called “junk.” After all, as a product of care and intention, the genome ought to be comparable in a way with products of human genius, with every detail there for a reason.

For instance, here is an article that John Woodmorappe wrote in 2002:

More and more noncoding DNA, long considered ‘junk DNA’, has eventually been found to be functional. Hardly a few months pass by without another scientific paper demonstrating function for some form of junk DNA. As summarized in this article, there is also growing evidence that at least some pseudogenes are functional. It should be stressed that pseudogenes, unlike other so-called junk DNA, have long been burdened not only with the ingrained belief that they lack function, but also the additional onus of having supposedly lost a function. In addition, consider the following preconception relative to protein-coding genes in general:

‘Considerably less analysis of this type has been performed on coding regions, possibly because the bias present from the protein-encoding function represented as nucleotide triplets (codons) promotes the general assumption that secondary functionality is present infrequently in protein coding sequences.’

[….]

Against the backdrop of the customary negative opinion of pseudogenes, there have always been a few individuals who anticipated their functional potential. McCarrey et al. (1986) were probably the first to suggest that pseudogenes can be functional in terms of the regulation of the expression of their paralogous genes. They noted that the sense RNA transcribed by a gene could be effectively removed by hybridizing (forming a duplex) with the antisense RNA produced by the paralogous pseudogene. In addition, an otherwise nonfunctional peptide unit translated by the pseudogene could inhibit the peptide translated by the gene. They likened these processes to a buffered acid-base titration. As described below, their ideas proved prophetic.

Inouye apparently independently realized the same possibility for pseudogenes (1988)….

Of course, even I, being no scientist, could read the writing on the wall, just as with vestigial organs. Here are those old posts of mine:

Both posts are from Friday, June 15, 2007

“Junk” Science

Evolution Predicts “Junk”… Intelligent Design Predicts “Treasure”

The theory of evolution has left a long trail of refuse behind it: the many deaths that resulted because evolutionary theory taught that tonsils were vestigial, the stalled insight into the appendix, and now the years lost in the study of what was known as “junk DNA.” Many years ago I argued that this would be found not to be junk but would be shown to be useful, and after the first Scientific American article reporting that “junk DNA” was not junk after all, I was using it as an example to bolster the Intelligent Design argument:

For instance you can find a response here that I wrote in March of 2005:

This is originally from VOLCONVO, a debate forum I graced many years ago, now defunct, under:

philosophy-religion/4153-global-myths-evolution-spin-off-3

It is nice to see more and more INFORMATION (pun intended) come out on this, and to recall the prediction made by Intelligent Design leaders back in 1994! (Taken from an EVOLUTION NEWS article):

As far back as 1994, pro-ID scientist and Discovery Institute fellow Forrest Mims had warned in a letter to Science against assuming that ‘junk’ DNA was ‘useless.’ Science wouldn’t print Mims’ letter, but soon thereafter, in 1998, leading ID theorist William Dembski repeated this sentiment in First Things:

[Intelligent] design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term “junk DNA.” Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as “junk” merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how “non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development.” Design encourages scientists to look for function where evolution discourages it.

(William Dembski, “Intelligent Science and Design,” First Things, Vol. 86:21-27, October 1998)

If these scientists were coming from the perspective that everything was “designed” to begin with, they wouldn’t merely write off unknowns as “junk.”

10:13 PM 

Killing in the Name of Darwin

The Dangers of Darwinism

People used to have their tonsils pulled whenever they were slightly inflamed. In the 1930s over half of all children had their tonsils and adenoids removed. In 1969, 19.5 out of every 1,000 children under the age of nine had undergone a tonsillectomy. By 1971 the frequency had dropped to only 14.8 per 1,000, with the percentage continuing to decrease in subsequent years. Most medical authorities now actively discourage tonsillectomies. Many agree with Wooley, chairman of the department of pediatrics at Wayne State University, who was quoted in one study as saying: “If there are one million tonsillectomies done in the United States, there are 999,000 that don’t need doing.”

In Medical World News (N. J. Vianna, Peter Greenwald, and U. N. Davies, September 10, 1973, p. 10), a story stated that although removal of the tonsils at a young age obviously eliminates tonsillitis (the inflammation of the tonsils), it may significantly increase the incidence of strep throat and even Hodgkin’s disease. In fact, according to the New York Department of Cancer Control: “people who have had tonsillectomies are nearly three times as likely to develop Hodgkin’s Disease, a form of cancer that attacks the lymphoid tissue” (Lawrence Galton, “All Those Tonsil Operations: Useless? Dangerous?” Parade, May 2, 1976, pp. 26ff).

Ken Miller, 13 years ago, said, 

  • “the designer made serious errors, wasting millions of bases of DNA on a blueprint full of junk and scribbles. Evolution, in contrast, can easily explain them as nothing more than failed experiments in a random process.” (ARN and UNCOMMON DESCENT)

The SCIENCE DAILY article that the above ARN article links to has this to say:

“This impressive effort has uncovered many exciting surprises and blazed the way for future efforts to explore the functional landscape of the entire human genome,” said NHGRI Director Francis S. Collins, M.D., Ph.D. “Because of the hard work and keen insights of the ENCODE consortium, the scientific community will need to rethink some long-held views about what genes are and what they do, as well as how the genome’s functional elements have evolved. This could have significant implications for efforts to identify the DNA sequences involved in many human diseases.”….

….The ENCODE consortium’s major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active.

Thanks are due to the design theorists who predicted this outcome for Intelligent Design, and who showed how this revelation refutes (yet again) what evolution predicts we should see if it were true: useless genes and DNA.

10:34 PM 

SEE OTHER POSTS HERE ON MY .COM:

 

A “High-Brow” Defection from Darwinian Naturalism | Thomas Nagel

Originally Posted November of 2012

Here Dr. William Lane Craig demonstrates that atheism cannot give an account for reason, logic, and truth.

EVOLUTION NEWS AND VIEWS has this bitchin post:

About a decade ago I would muse on what it might take for intelligent design to win the day. Clearly, its intellectual and scientific project needed to move forward, and, happily, that has been happening. But I was also thinking in terms of a watershed event, something that could have the effect of a Berlin Wall coming down, so that nothing thereafter was the same. It struck me that an event like this could involve some notable atheists coming to reverse themselves on the evidence for design in the cosmos.

Shortly after these musings, Antony Flew, who had been the most notable intellectual atheist in the English-speaking world until Richard Dawkins supplanted him, announced that he had come to believe in God (a deistic deity and not the full-blooded deity of ethical monotheism) on account of intelligent design arguments. I wondered whether this could be the start of that Berlin Wall coming down, but was quickly disabused as the New York Times and other media outlets quickly dismissed Flew’s conversion as a sign of his dotage (he was in his eighties when he deconverted from atheism). Flew, though sound in mind despite what his critics were saying (I spoke with him on the phone in 2006), was quickly marginalized and his deconversion didn’t have nearly the impact that it might have had.

Still, I may have been on to something about defections of high profile intellectuals from Darwinian naturalism and the effect that this might have in creating conceptual space for intelligent design and ultimately winning the day for it. In 2011 we saw University of Chicago molecular biologist James Shapiro deconstruct Darwinian evolution with an incisiveness and vigor that even the ID community has found hard to match (for my review of his Evolution: A View from the 21st Century, go here; for my exchange with Shapiro on this forum, go here).

A Most Disconcerting Deconversion

Thomas Nagel, with his just published Mind & Cosmos, has now become another such defector from Darwinian naturalism. Appearing from Oxford University Press and subtitled Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False, this slender volume (it’s only 130 pages) represents the most disconcerting defection (disconcerting to Darwinists) from Darwinian naturalism to date. We’re still not talking the Berlin Wall coming down, but it’s not hard to see it as a realistic possibility, off in the distance, after reading this book.

Because intelligent design is still a minority position that is widely marginalized by the media and mainstream science, it’s easy for defenders of intelligent design to wax apocalyptic. Indeed, it’s a very natural impulse to want to throw off the shackles of an oppressive and powerful majority, especially when one views their authority as unwarranted and unjust. So I have to keep my own impulses in check when I make comments about the Berlin Wall coming down (by the way, I had an uncle, aunt, and cousins who lived in “West Berlin” at the time as well as relatives in Poland, so my interest in the Berlin Wall is not merely hypothetical). But Thomas Nagel is a very major intellectual on the American scene and his no-holds-barred deconstruction of Darwinian naturalism is just the sort of critique, coupled with others to be sure, that will, if anything, unravel Darwin’s legacy.

Nagel is a philosopher at New York University. Now in his 70s, he has been a towering figure in the field, and his essays were mandatory reading, certainly when I was a graduate student in philosophy in the early 1990s. His wildly popular essay “What Is It Like to Be a Bat?” takes on reductionist accounts of mind, and his books Mortal Questions (Cambridge, 1979) and The View from Nowhere (Oxford, 1986) seemed to be in many of my fellow graduate students’ backpacks.

Reading Nagel’s latest, I had the sense of watching Peter Finch in the film Network (1976), where he rants “I’m mad as hell and I’m not going to take this anymore” (in that famous monologue, Finch also says “I’m a human being, my life has value” — a remarkable point to make three years after Roe v. Wade; to see the monologue, go here). Now Nagel in Mind & Cosmos, unlike Finch in Network, is measured and calm, but he is no less adamant that the bullying by Darwinists needs to stop. Perhaps with Richard Dawkins in mind, who has remarked that dissenters from Darwin are either ignorant, stupid, wicked, insane, or brainwashed, Nagel writes,

I realize that such doubts [about Darwinian naturalism] will strike many people as outrageous, but that is because almost everyone in our secular culture has been browbeaten into regarding the reductive research program as sacrosanct, on the ground that anything else would not be science.

Nagel has nailed it here. The threat of being branded unscientific in the name of a patently ill-supported Darwinian evolutionary story is the thing that most keeps Darwinism alive (certainly not the evidence for it). We saw a similar phenomenon in the old communist Eastern bloc. Lots of people doubted Marxism-Leninism. But to express such doubt would get one branded as a reactionary. And so people kept silent. I recall David Berlinski, a well-known Darwin skeptic, telling me about a reading group at MIT among faculty there who studied his work but did so sub rosa lest they have to face the wrath of Darwinists.

In Mind & Cosmos, Nagel serves notice on Darwinists that their coercive tactics at ensuring conformity have not worked with him and, if his example inspires others, won’t work with them either. What a wonderful subtitle to his book: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False. It’s a dare. Go ahead, make my day, do your worst to bring the wrath of Darwin’s devoted disciples on me. Nagel regards the emperor as without clothes and says so:

For a long time I have found the materialist account of how we and our fellow organisms came to exist hard to believe, including the standard version of how the evolutionary process works. The more details we learn about the chemical basis of life and the intricacy of the genetic code, the more unbelievable the standard historical account becomes. This is just the opinion of a layman who reads widely in the literature that explains contemporary science to the nonspecialist. Perhaps that literature presents the situation with a simplicity and confidence that does not reflect the most sophisticated scientific thought in these areas. But it seems to me that, as it is usually presented, the current orthodoxy about the cosmic order is the product of governing assumptions that are unsupported, and that it flies in the face of common sense.

…Read More…

William Lane Craig shows how the naturalist cannot trust his own thinking. He mentions Alvin Plantinga, who argues that if evolution is true, that spells trouble for the atheist. Indeed, can the atheist (who calls himself a “free thinker”) be free if his brain is no more than matter and motion dictated by the laws of nature?

And this update from The Weekly Standard [DEFUNCT], now at the WASHINGTON EXAMINER:

Last fall, a few days before Halloween and about a month after the publication of Mind and Cosmos, the controversial new book by the philosopher Thomas Nagel, several of the world’s leading philosophers gathered with a group of cutting-edge scientists in the conference room of a charming inn in the Berkshires. They faced one another around a big table set with pitchers of iced water and trays of hard candies wrapped in cellophane and talked and talked, as public intellectuals do. PowerPoint was often brought into play. 

The title of the “interdisciplinary workshop” was “Moving Naturalism Forward.” For those of us who like to kill time sitting around pondering the nature of reality—personhood, God, moral judgment, free will, what have you—this was the Concert for Bangladesh. The biologist Richard Dawkins was there, author of The Blind Watchmaker, The Selfish Gene, and other bestselling books of popular science, and so was Daniel Dennett, a philosopher at Tufts and author of Consciousness Explained and Darwin’s Dangerous Idea: Evolution and the Meanings of Life. So were the authors of Why Evolution is True, The Really Hard Problem: Meaning in a Material World, Everything Must Go: Metaphysics Naturalized, and The Atheist’s Guide to Reality: Enjoying Life without Illusions—all of them books that to one degree or another bring to a larger audience the world as scientists have discovered it to be.

[….]

Daniel Dennett took a different view. While it is true that materialism tells us a human being is nothing more than a “moist robot”—a phrase Dennett took from a Dilbert comic—we run a risk when we let this cat, or robot, out of the bag. If we repeatedly tell folks that their sense of free will or belief in objective morality is essentially an illusion, such knowledge has the potential to undermine civilization itself, Dennett believes. Civil order requires the general acceptance of personal responsibility, which is closely linked to the notion of free will. Better, said Dennett, if the public were told that “for general purposes” the self and free will and objective morality do indeed exist—that colors and sounds exist, too—“just not in the way they think.” They “exist in a special way,” which is to say, ultimately, not at all.

[….]

 …How did we lose Tom….

Thomas Nagel may be the most famous philosopher in the United States—a bit like being the best power forward in the Lullaby League, but still. His paper “What Is It Like to Be a Bat?” was recognized as a classic when it was published in 1974. Today it is a staple of undergraduate philosophy classes. His books range with a light touch over ethics and politics and the philosophy of mind. His papers are admired not only for their philosophical provocations but also for their rare (among modern philosophers) simplicity and stylistic clarity, bordering sometimes on literary grace. 

Nagel occupies an endowed chair at NYU as a University Professor, a rare and exalted position that frees him to teach whatever course he wants. Before coming to NYU he taught at Princeton for 15 years. He dabbles in the higher journalism, contributing articles frequently to the New York Review of Books and now and then to the New Republic. A confirmed atheist, he lacks what he calls the sensus divinitatis that leads some people to embrace the numinous. But he does possess a finely tuned sensus socialistis; his most notable excursion into politics was a book-length plea for the confiscation of wealth and its radical redistribution—a view that places him safely in the narrow strip of respectable political opinion among successful American academics. 

For all this and more, Thomas Nagel is a prominent and heretofore respected member of the country’s intellectual elite. And such men are not supposed to write books with subtitles like the one he tacked onto Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.

Imagine if your local archbishop climbed into the pulpit and started reading from the Collected Works of Friedrich Nietzsche. “What has gotten into Thomas Nagel?” demanded the evolutionary psychologist Steven Pinker, on Twitter. (Yes, even Steven Pinker tweets.) Pinker inserted a link to a negative review of Nagel’s book, which he said “exposed the shoddy reasoning of a once-great thinker.” At the point where science, philosophy, and public discussion intersect—a dangerous intersection these days—it is simply taken for granted that by attacking naturalism Thomas Nagel has rendered himself an embarrassment to his colleagues and a traitor to his class. 

The Guardian awarded Mind and Cosmos its prize for the Most Despised Science Book of 2012. The reviews were numerous and overwhelmingly negative; one of the kindest, in the British magazine Prospect, carried the defensive headline “Thomas Nagel is not crazy.” (Really, he’s not!) Most other reviewers weren’t so sure about that. Almost before the ink was dry on Nagel’s book the UC Berkeley economist and prominent blogger Brad DeLong could be found gathering the straw and wood for the ritual burning. DeLong is a great believer in neo-Darwinism. He has coined the popular term “jumped-up monkeys” to describe our species. (Monkeys because we’re descended from primates; jumped-up because evolution has customized us with the ability to reason and the big brains that go with it.) 

DeLong was particularly offended by Nagel’s conviction that reason allows us to “grasp objective reality.” A good materialist doesn’t believe in objective reality, certainly not in the traditional sense. “Thomas Nagel is not smarter than we are,” he wrote, responding to a reviewer who praised Nagel’s intelligence. “In fact, he seems to me to be distinctly dumber than anybody who is running even an eight-bit virtual David Hume on his wetware.” (What he means is, anybody who’s read the work of David Hume, the father of modern materialism.) DeLong’s readers gathered to jeer as the faggots were placed around the stake. 

“Thomas Nagel is of absolutely no importance on this subject,” wrote one. “He’s a self-contradictory idiot,” opined another. Some made simple appeals to authority and left it at that: “Haven’t these guys ever heard of Richard Dawkins and Daniel Dennett?” The hearts of still others were broken at seeing a man of Nagel’s eminence sink so low. “It is sad that Nagel, whom my friends and I thought back in the 1960’s could leap over tall buildings with a single bound, has tripped over the Bible and fallen on his face. Very sad.”

Nagel doesn’t mention the Bible in his new book—or in any of his books, from what I can tell—but among materialists the mere association of a thinking person with the Bible is an insult meant to wound, as Bertie Wooster would say. Directed at Nagel, a self-declared atheist, it is more revealing of the accuser than the accused. The hysterical insults were accompanied by an insistence that the book was so bad it shouldn’t upset anyone. 

“Evolutionists,” one reviewer huffily wrote, “will feel they’ve been ravaged by a sheep.” Many reviewers attacked the book on cultural as well as philosophical or scientific grounds, wondering aloud how a distinguished house like Oxford University Press could allow such a book to be published. The Philosophers’ Magazine described it with the curious word “irresponsible.” How so? In Notre Dame Philosophical Reviews, the British philosopher John Dupré explained. Mind and Cosmos, he wrote, “will certainly lend comfort (and sell a lot of copies) to the religious enemies of Darwinism.” Simon Blackburn of Cambridge University made the same point: “I regret the appearance of this book. It will only bring comfort to creationists and fans of ‘intelligent design.’ ”

But what about fans of apostasy? You don’t have to be a biblical fundamentalist or a young-earth creationist or an intelligent design enthusiast—I’m none of the above, for what it’s worth—to find Mind and Cosmos exhilarating. “For a long time I have found the materialist account of how we and our fellow organisms came to exist hard to believe,” Nagel writes. “It is prima facie highly implausible that life as we know it is the result of a sequence of physical accidents together with the mechanism of natural selection.” The prima facie impression, reinforced by common sense, should carry more weight than the clerisy gives it. “I would like to defend the untutored reaction of incredulity to the reductionist neo-Darwinian account of the origin and evolution of life.” 

[….]

Nagel follows the materialist chain of reasoning all the way into the cul de sac where it inevitably winds up. Nagel’s touchier critics have accused him of launching an assault on science, when really it is an assault on the nonscientific uses to which materialism has been put. Though he does praise intelligent design advocates for having the nerve to annoy the secular establishment, he’s no creationist himself. He has no doubt that “we are products of the long history of the universe since the big bang, descended from bacteria through millions of years of natural selection.” And he assumes that the self and the body go together. “So far as we can tell,” he writes, “our mental lives, including our subjective experiences, and those of other creatures are strongly connected with and probably strictly dependent on physical events in our brains and on the physical interaction of our bodies with the rest of the physical world.” To believe otherwise is to believe, as the materialists derisively say, in “spooky stuff.” (Along with jumped-up monkeys and moist robots and countless other much-too-cute phrases, the use of spooky stuff proves that our popular science writers have spent a lot of time watching Scooby-Doo.) Nagel doesn’t believe in spooky stuff.

Materialism, then, is fine as far as it goes. It just doesn’t go as far as materialists want it to. It is a premise of science, not a finding. Scientists do their work by assuming that every phenomenon can be reduced to a material, mechanistic cause and by excluding any possibility of nonmaterial explanations. And the materialist assumption works really, really well—in detecting and quantifying things that have a material or mechanistic explanation. Materialism has allowed us to predict and control what happens in nature with astonishing success. The jaw-dropping edifice of modern science, from space probes to nanosurgery, is the result. 

But the success has gone to the materialists’ heads. From a fruitful method, materialism becomes an axiom: If science can’t quantify something, it doesn’t exist, and so the subjective, unquantifiable, immaterial “manifest image” of our mental life is proved to be an illusion.

Here materialism bumps up against itself. Nagel insists that we know some things to exist even if materialism omits or ignores or is oblivious to them. Reductive materialism doesn’t account for the “brute facts” of existence—it doesn’t explain, for example, why the world exists at all, or how life arose from nonlife. Closer to home, it doesn’t plausibly explain the fundamental beliefs we rely on as we go about our everyday business: the truth of our subjective experience, our ability to reason, our capacity to recognize that some acts are virtuous and others aren’t. These failures, Nagel says, aren’t just temporary gaps in our knowledge, waiting to be filled in by new discoveries in science. On its own terms, materialism cannot account for brute facts. Brute facts are irreducible, and materialism, which operates by breaking things down to their physical components, stands useless before them. “There is little or no possibility,” he writes, “that these facts depend on nothing but the laws of physics.” …

…read it all…

Aren’t atheists supposed to be “free thinkers”? They often call themselves that. But if atheism is true, there is no “free” and there is no “thinking” going on. We are all just molecular machines. Dr. Tim Stratton of Freethinking Ministries shares the stage with Frank to explain why.

And a recent addition by EVOLUTION NEWS AND VIEWS:

John West’s updated and expanded book, out this week, Darwin Day in America: How Our Politics and Culture Have Been Dehumanized in the Name of Science.

In an all new added chapter, West recounts among other recent developments the sensation that followed the publication of Thomas Nagel’s book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. The renowned atheist philosopher expressed admiration for advocates of intelligent design including Meyer, Behe, and Berlinski.

What was the nub of his critique of neo-Darwinism?

Nagel ultimately offered a simple but profound objection to Darwinism: “Evolutionary naturalism provides an account of our capacities that undermines their reliability, and in doing so undermines itself.” In other words, if our mind and morals are simply the accidental products of a blind material process like natural selection acting on random genetic mistakes, what confidence can we have in them as routes to truth?

The basic philosophical critique of Darwinian reductionism offered by Nagel had been made before, perhaps most notably by Sir Arthur Balfour, C.S. Lewis, and Alvin Plantinga. But around the same time as the publication of Nagel’s book came new scientific discoveries that undermined Darwinian materialism as well. In the fall of 2012, the Encyclopedia of DNA Elements (ENCODE) project released results showing that much of so-called junk DNA actually performs biological functions. The ENCODE results overturned long-repeated claims by leading Darwinian biologists that most of the human genome is genetic garbage produced by a blind evolutionary process. At the same time, the results confirmed predictions made during the previous decade by scholars who think nature displays evidence of intelligent design.

New scientific challenges to orthodox Darwinian theory have continued to proliferate. In 2013 Stephen Meyer published Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design, which threw down the gauntlet on the question of the origin of biological information required to build animal body plans in the history of life. The intriguing thing about Meyer’s book was not the criticism it unleashed from the usual suspects but the praise it attracted from impartial scientists. Harvard geneticist George Church lauded it as “an opportunity for bridge-building rather than dismissive polarization — bridges across cultural divides in great need of professional, respectful dialogue.” Paleontologist Mark McMenamin, coauthor of a major book from Columbia University Press on animal origins, called it “a game changer for the study of evolution” that “points us in the right direction as we seek a new theory for the origin of animals.”

Even critics of Darwin’s Doubt found themselves at a loss to come up with a convincing answer to Meyer’s query about biological information. University of California at Berkeley biologist Charles Marshall, one of the world’s leading paleontologists, attempted to answer Meyer in the pages of the journal Science and in an extended debate on British radio. But as Meyer and others pointed out, Marshall tried to explain the needed information by simply presupposing the prior existence of even more unaccounted-for genetic information. “That is not solving the problem,” said Meyer. “That’s just begging the question.”

C. S. Lewis perceptively observed in his final book that “nature gives most of her evidence in answer to the questions we ask her.” Lewis’s point was that old paradigms often persist because they blind us from asking certain questions. They begin to disintegrate once we start asking the right questions. Scientific materialism continues to surge, but perhaps the right questions are finally beginning to be asked.

It remains to be seen whether as a society we will be content to let those questions be begged or whether we will embrace the injunction of Socrates to “follow the argument . . . wherever it may lead.” The answer to that question may determine our culture’s future.

Go here and read the rest.

J. Warner Wallace responds to “we don’t have free will”?

Junk DNA: Evolutionary Arguments Helping Prove God

For the record, predicting that “junk DNA” would turn out to be functional was a positive position of the Intelligent Design movement:

[Intelligent] design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term “junk DNA.” Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as “junk” merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how “non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development.” Design encourages scientists to look for function where evolution discourages it.

William Dembski, “Intelligent Science and Design,” First Things, Vol. 86:21-27 (October 1998)

(EVOLUTION NEWS & SCIENCE TODAY)

I wanted to expand on my thinking in another recent post about the past mantra that chimpanzees and humans are genetically 99% alike. That claim has of course been disproved, but disproving this or that proposition is not my aim here; my main concern is rhetoric. I will clip my point from THAT POST and continue it with “Junk DNA” (pseudogenes) below. Enjoy:

Even a 2006 TIME article continues the mantra when it says, “Scientists figured out decades ago that chimps are our nearest evolutionary cousins, roughly 98% to 99% identical to humans at the genetic level.” So while science moves on and corrects itself, our culture remains stuck on what was said to be a proof, and rejects what is ACTUALLY evidence against the evolutionary proposition.

What do I mean by that? I mean that if something is said to be evidence and is used to promote [FOR] the evolutionary paradigm… and then it is shown not to be the case wouldn’t it then logically be an evidence AGAINST this said paradigm? I think so.

So my point is what was once thought to be an ARGUMENT AGAINST intelligent design or creationism ends up being an ARGUMENT FOR… if this is not the case, then the original position is no position at all but merely rhetoric. Here is a video highlighting past “Junk DNA” positions used to counter creationists — and — if it is a real position, it should be one to counter evolutionists today:

Here is where we get into the fun part, and really it is just common sense. Take Richard Dawkins’s position a bit more seriously, and we will find that the Christian apologist’s position gets stronger; and mind you, there are a myriad of examples like this in science, archaeology, history, philosophy, and the like. (A good post on this is here: Modern Science Refutes The Evolutionary Theory)

Dawkins in 2009:

“It stretches even their creative ingenuity to make a convincing reason why an intelligent designer should have created a pseudogene — a gene that does absolutely nothing and gives every appearance of being a superannuated version of a gene that used to do something — unless he was deliberately setting out to fool us… 

Leaving pseudogenes aside, it is a remarkable fact that the greater part (95 percent in the case of humans) of the genome might as well not be there, for all the difference it makes.”

The 2009 iteration of Richard Dawkins asserts confidently that most of the genome is junk, just as Darwinism predicts! What an embarrassment to Darwin doubters!

Dawkins in 2012:

“I have noticed that there are some creationists who are jumping on [the ENCODE results] because they think that’s awkward for Darwinism. Quite the contrary it’s exactly what a Darwinist would hope for, to find usefulness in the living world….

Whereas we thought that only a minority of the genome was doing something, namely that minority which actually codes for protein, and now we find that actually the majority of it is doing something. What it’s doing is calling into action the protein-coding genes. So you can think of the protein-coding genes as being sort of the toolbox of subroutines which is pretty much common to all mammals — mice and men have the same number, roughly speaking, of protein-coding genes and that’s always been a bit of a blow to self-esteem of humanity. But the point is that that was just the subroutines that are called into being; the program that’s calling them into action is the rest [of the genome] which had previously been written off as junk.”

The 2012 iteration of Richard Dawkins asserts confidently that most of the genome is not junk, just as Darwinism predicts! What an embarrassment to Darwin doubters!

(EGNORANCE)

What EGNORANCE leaves out from the first part of that quote, from Richard Dawkins’s 2009 book “The Greatest Show on Earth,” is this (via APOLOGETIC PRESS):

  • “What pseudogenes are useful for is embarrassing creationists. It stretches even their creative ingenuity to make up a reason why an intelligent designer should have created a pseudogene unless he was deliberately setting out to fool us” (p. 332).

To wit, if the argument from pseudogenes (Y) as a refutation of proposition “A” [Intelligent Design/creative acts] is to be considered valid support for the evolutionist/atheist position (B), then when “Y” is shown to be in fact the opposite of the stated position, would not “Y” be an argument against “B”? I think so.

Man With The Golden Arm ~ Eliminating Chance Statistically

REFORMATTED

This is a large excerpt from a book worth reading in total, The Design Inference, by William Dembski. It is a classic of I.D. literature, whether you are a skeptic of Intelligent Design or not:

1.2 THE MAN WITH THE GOLDEN ARM

[p. 9>] 

Even if we can’t ascertain the precise causal story underlying an event, we often have probabilistic information that enables us to rule out ways of explaining the event. This ruling out of explanatory options is what the design inference is all about. The design inference does not by itself deliver an intelligent agent. But as a logical apparatus for sifting our explanatory options, the design inference rules out explanations incompatible with intelligent agency (such as chance). The design inference appears widely, and is memorably illustrated in the following example (New York Times, 23 July 1985, p. B1):

TRENTON, July 22 — The New Jersey Supreme Court today caught up with the “man with the golden arm,” Nicholas Caputo, the Essex County Clerk and a Democrat who has conducted drawings for decades that have given Democrats the top ballot line in the county 40 out of 41 times.

[p. 10>]

Mary V. Mochary, the Republican Senate candidate, and county Republican officials filed a suit after Mr. Caputo pulled the Democrat’s name again last year.

The election is over — Mrs. Mochary lost — and the point is moot. But the court noted that the chances of picking the same name 40 out of 41 times were less than 1 in 50 billion. It said that “confronted with these odds, few persons of reason will accept the explanation of blind chance.”

And, while the court said it was not accusing Mr. Caputo of anything, it said it believed that election officials have a duty to strengthen public confidence in the election process after such a string of “coincidences.”

The court suggested — but did not order — changes in the way Mr. Caputo conducts the drawings to stem “further loss of public confidence in the integrity of the electoral process.”

Justice Robert L. Clifford, while concurring with the 6-to-0 ruling, said the guidelines should have been ordered instead of suggested.

Nicholas Caputo was brought before the New Jersey Supreme Court because the Republican party filed suit against him, claiming Caputo had consistently rigged the ballot lines in the New Jersey county where he was county clerk. It is common knowledge that first position on a ballot increases one’s chances of winning an election (other things being equal, voters are more likely to vote for the first person on a ballot than the rest). Since in every instance but one Caputo positioned the Democrats first on the ballot line, the Republicans argued that in selecting the order of ballots Caputo had intentionally favored his own Democratic party. In short, the Republicans claimed Caputo cheated.

The question, then, before the New Jersey Supreme Court was, Did Caputo actually rig the order of ballots, or was it without malice and forethought that Caputo assigned the Democrats first place forty out of forty-one times? Since Caputo denied wrongdoing, and since he conducted the drawing of ballots so that witnesses were unable to observe how he actually did draw the ballots (this was brought out in a portion of the article omitted in the preceding quote), determining whether Caputo did in fact rig the order of ballots becomes a matter of evaluating the circumstantial evidence connected with this case. How, then, is this evidence to be evaluated?

In trying to explain the remarkable coincidence of Nicholas Caputo selecting the Democrats forty out of forty-one times to head the ballot line, the court faced three explanatory options:

[p. 11>]

Regularity: Unknown to Caputo, he was not employing a reliable random process to determine ballot order. Caputo was like someone who thinks a fair coin is being flipped when in fact it’s a double-headed coin. Just as flipping a double-headed coin is going to yield a long string of heads, so Caputo, using his faulty method for ballot selection, generated a long string of Democrats coming out on top. An unknown regularity controlled Caputo’s ballot line selections.

Chance: In selecting the order of political parties on the state ballot, Caputo employed a reliable random process that did not favor one political party over another. The fact that the Democrats came out on top forty out of forty-one times was simply a fluke. It occurred by chance.

Agency: Caputo, acting as a fully conscious intelligent agent and intending to aid his own political party, purposely rigged the ballot line selections to keep the Democrats coming out on top. In short, Caputo cheated.

The first option — that Caputo chose poorly his procedure for selecting ballot lines, so that instead of genuinely randomizing the ballot order, it just kept putting the Democrats on top — was not taken seriously by the court. The court could dismiss this option outright because Caputo claimed to be using an urn model to select ballot lines. Thus, in a portion of the New York Times article not quoted, Caputo claimed to have placed capsules designating the various political parties running in New Jersey into a container, and then swished them around. Since urn models are among the most reliable randomization techniques available, there was no reason for the court to suspect that Caputo’s randomization procedure was at fault. The key question, therefore, was whether Caputo actually put this procedure into practice when he made the ballot line selections, or whether he purposely circumvented this procedure to keep the Democrats coming out on top. And since Caputo’s actual drawing of the capsules was obscured to witnesses, it was this question the court had to answer.

With the regularity explanation at least for the moment bracketed, the court next decided to dispense with the chance explanation. Having noted that the chance of picking the same political party 40 out of 41 times was less than 1 in 50 billion, the court concluded that…

[p. 12>]

“confronted with these odds, few persons of reason will accept the explanation of blind chance.” Now this certainly seems right. Nevertheless, a bit more needs to be said. As we saw in Section 1.1, exceeding improbability is by itself not enough to preclude an event from happening by chance. Whenever I am dealt a bridge hand, I participate in an exceedingly improbable event. Whenever I play darts, the precise position where the darts land represents an exceedingly improbable configuration. In fact, just about anything that happens is exceedingly improbable once we factor in all the other ways what actually happened might have happened. The problem, then, does not reside simply in an event being improbable.

All the same, in the absence of a causal story detailing what happened, improbability remains a crucial ingredient in eliminating chance. For suppose that Caputo actually was cheating right from the beginning of his career as Essex County clerk. Suppose further that the one exception in Caputo’s career as “the man with the golden arm” — that is, the one case where Caputo placed the Democrats second on the ballot line — did not occur till after his third time selecting ballot lines. Thus, for the first three ballot line selections of Caputo’s career the Democrats all came out on top, and they came out on top precisely because Caputo intended it that way. Simply on the basis of three ballot line selections, and without direct evidence of Caputo’s cheating, an outside observer would be in no position to decide whether Caputo was cheating or selecting the ballots honestly.

With only three ballot line selections, the probabilities are too large to reliably eliminate chance. The probability of randomly selecting the Democrats to come out on top given that their only competition is the Republicans is in this case 1 in 8 (here p equals 0.125; compare this with the p-value computed by the court, which equals 0.00000000002). Because three Democrats in a row could easily happen by chance, we would be acting in bad faith if we did not give Caputo the benefit of the doubt in the face of such large probabilities. Small probabilities are therefore a necessary condition for eliminating chance, even though they are not a sufficient condition.
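Both of Dembski’s figures are easy to reproduce. The snippet below (my own illustration, not from the book) computes the binomial tail probability of the Democrats heading the ballot at least k times in n fair 50/50 drawings: 3 of 3 gives the 1-in-8 figure, while 40 of 41 gives roughly the court’s 1 in 50 billion.

```python
from math import comb

def p_at_least(k: int, n: int) -> float:
    """P(at least k Democrat-first draws in n fair 50/50 drawings)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(p_at_least(3, 3))    # 0.125, the 1-in-8 figure for three straight Democrats
print(p_at_least(40, 41))  # ~1.9e-11, i.e. less than 1 in 50 billion
```

Note that “40 out of 41 or better” counts both the 41 single-exception sequences and the all-Democrat sequence, which is why the tail sum (not a single term) matches the court’s figure.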

What, then, besides small probabilities do we need for evidence that Caputo cheated? As we saw in Section 1.1, the event in question needs to conform to a pattern. Not just any pattern will do, however. Some patterns successfully eliminate chance while others do not.

[p. 13>]

Consider the case of an archer. Suppose an archer stands fifty meters from a large wall with bow and arrow in hand. The wall, let us say, is sufficiently large that the archer cannot help but hit it. Now suppose every time the archer shoots an arrow at the wall, she paints a target around the arrow, so that the arrow is positioned squarely in the bull’s-eye. What can be concluded from this scenario? Absolutely nothing about the archer’s ability as an archer. The fact that the archer is in each instance squarely hitting the bull’s-eye is utterly bogus. Yes, she is matching a pattern; but it is a pattern she fixes only after the arrow has been shot and its position located. The pattern is thus purely ad hoc.

But suppose instead that the archer paints a fixed target on the wall and then shoots at it. Suppose she shoots 100 arrows, and each time hits a perfect bull’s-eye. What can be concluded from this second scenario? In the words of the New Jersey Supreme Court, “confronted with these odds, few persons of reason will accept the explanation of blind chance.” Indeed, confronted with this second scenario we infer that here is a world-class archer.

The difference between the first and the second scenario is that the pattern in the first is purely ad hoc, whereas the pattern in the second is not. Thus, only in the second scenario are we warranted eliminating chance. Let me emphasize that for now I am only spotlighting a distinction without explicating it. I shall in due course explicate the distinction between “good” and “bad” patterns — those that respectively do and don’t permit us to eliminate chance (see Chapter 5). But for now I am simply trying to make the distinction between good and bad patterns appear plausible. In Section 1.1 we called the good patterns specifications and the bad patterns fabrications. Specifications are the non-ad hoc patterns that can legitimately be used to eliminate chance and warrant a design inference. Fabrications are the ad hoc patterns that cannot legitimately be used to eliminate chance.

Thus, when the archer first paints a fixed target and thereafter shoots at it, she specifies hitting a bull’s-eye. When in fact she repeatedly hits the bull’s-eye, we are warranted attributing her success not to beginner’s luck, but to her skill as an archer. On the other hand, when the archer paints a target around the arrow only after each shot, squarely positioning each arrow in the bull’s-eye, she fabricates hitting the bull’s-eye. Thus, even though she repeatedly hits the…

[p. 14>]

bull’s-eye, we are not warranted attributing her “success” in hitting the bull’s-eye to anything other than luck. In the latter scenario, her skill as an archer thus remains an open question.2 (jump)

How do these considerations apply to Nicholas Caputo? By selecting the Democrats to head the ballot forty out of forty-one times, Caputo appears to have participated in an event of probability less than 1 in 50 billion (p = 0.00000000002). Yet as we have noted, events of exceedingly small probability happen all the time. Hence by itself Caputo’s participation in an event of probability less than 1 in 50 billion is no cause for alarm. The crucial question is whether this event is also specified — does this event follow a non-ad hoc pattern so that we can legitimately eliminate chance?

Now there is a very simple way to avoid ad hoc patterns and generate specifications, and that is by designating an event prior to its occurrence — C. S. Peirce (1883 [1955], pp. 207–10) referred to this type of specification as a predesignation. In the archer example, by painting the bull’s-eye before taking aim, the archer specifies in advance where the arrows are to land. Because the pattern is set prior to the event, the objection of ad-hocness or fabrication is effectively blocked.

In the Caputo case, however, the pattern is discovered after the event: only after we witness an extended series of ballot line selections do we notice a suspicious pattern. Though discovered after the fact, this pattern is not a fabrication. Patterns given prior to an event, or Peirce’s predesignations, constitute but a proper subset of the patterns that legitimately eliminate chance. The important thing about a pattern is not when it was identified, but whether in a certain well-defined sense it is independent of an event. We refer to this relation of independence as detachability, and say that a pattern is detachable just in case it satisfies this relation.

[p. 15>]

Detachability distinguishes specifications from fabrications. Although a precise account of detachability will have to wait until Chapter 5, the basic intuition underlying detachability is this: Given an event, would we be able to formulate a pattern describing it if we had no knowledge which event occurred? Here is the idea. An event has occurred. A pattern describing the event is given. The event is one from a range of possible events. If all we knew was the range of possible events without any specifics about which event actually occurred, could we still formulate the pattern describing the event? If so, the pattern is detachable from the event.

To illustrate detachability in the Caputo case, consider two possible courses Nicholas Caputo’s career as Essex County clerk might have taken (for simplicity assume no third-party candidates were ever involved, so that all elections were between Democrats and Republicans). In the one case — and for the sake of argument let us suppose this is what actually happened — Caputo chose the Democrats over the Republicans forty out of forty-one times in the following order:

(A)  DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

Thus, the initial twenty-two times Caputo chose the Democrats to head the ballot line, then the twenty-third time he chose the Republicans, after which for the remaining times he chose the Democrats.

In the second possible course Caputo’s career as county clerk might have taken, suppose Caputo once again had forty-one occasions on which to select the order of ballots, but this time that he chose both Democrats and Republicans to head the ballot pretty evenly, let us say in the following order:

(B)   DRRDRDRRDDDRDRDDRDRRDRRDRRRDRRRDRDDDRDRDD

In this instance the Democrats came out on top only twenty times, and the Republicans twenty-one times.

Sequences (A) and (B) are both patterns and describe possible ways Caputo might have selected ballot orders in his years as Essex County clerk. (A) and (B) are therefore patterns describing possible events. Now the question detachability asks is whether (A) and (B) could have been formulated without our knowing which event occurred. For (A) the answer is yes, but for (B) the answer is no. (A) is therefore detachable whereas (B) is not.
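One way to make the verdict concrete (a sketch of my own, not Dembski’s notation): treat S’s pre-formulable “blatant cheating” family as “at most a couple of Republican-first draws out of 41,” and test each sequence for membership. Sequence (A) is built from the description above; (B) is the printed coin-toss sequence.

```python
# (A): twenty-two Democrats, one Republican, then eighteen Democrats
seq_a = "D" * 22 + "R" + "D" * 18
# (B): the coin-toss sequence printed above
seq_b = "DRRDRDRRDDDRDRDDRDRRDRRDRRRDRRRDRDDDRDRDD"

def in_cheat_family(seq: str, max_r: int = 2) -> bool:
    """True if the sequence lies in the pre-specifiable 'blatant cheat' family."""
    return len(seq) == 41 and seq.count("R") <= max_r

print(in_cheat_family(seq_a))  # True:  (A) matches a pattern S could write down in advance
print(in_cheat_family(seq_b))  # False: (B) matches no such pre-formulable pattern
```

The point of the membership test is exactly Dembski’s: the family of cheat patterns can be written down before seeing any data, whereas nothing about (B) belongs to any pattern S could have formulated in advance.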

[p. 16>]

How is this distinction justified? To formulate (B) I just one moment ago flipped a coin forty-one times, recording “D” for Democrat whenever I observed heads and “R” for Republican whenever I observed tails. On the other hand, to formulate (A) I simply recorded “D” forty times and then interspersed a single “R.” Now consider a human subject S confronted with sequences (A) and (B). S comes to these sequences with considerable background knowledge which, we may suppose, includes the following:

(1) Nicholas Caputo is a Democrat.
(2) Nicholas Caputo would like to see the Democrats appear first on the ballot since having the first place on the ballot line significantly boosts one’s chances of winning an election.

(3) Nicholas Caputo, as election commissioner of Essex County, has full control over who appears first on the ballots in Essex County.
(4) Election commissioners in the past have been guilty of all manner of fraud, including unfair assignments of ballot lines.
(5) If Caputo were assigning ballot lines fairly, then both Democrats and Republicans should receive priority roughly the same number of times.

Given this background knowledge S is in a position to formulate various “cheating patterns” by which Caputo might attempt to give the Democrats first place on the ballot. The most blatant cheat is of course to assign the Democrats first place all the time. Next most blatant is to assign the Republicans first place just once (as in (A) — there are 41 ways to assign the Republicans first place just once). Slightly less blatant — though still blatant — is to assign the Republicans first place exactly two times (there are 820 ways to assign the Republicans first place exactly two times). This line of reasoning can be extended by throwing the Republicans a few additional sops. The point is, given S’s background knowledge, S is easily able (possibly with the aid of a personal computer) to formulate ways Caputo could cheat, one of which would surely include (A).
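The counts Dembski cites fall out of simple combinatorics; this sketch (mine, for illustration) reproduces them with Python’s `math.comb`.

```python
from math import comb

# Ways to assign the Republicans first place exactly once in 41 drawings
print(comb(41, 1))  # 41
# Ways to assign the Republicans first place exactly twice
print(comb(41, 2))  # 820
```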

Contrast this now with (B). Since (B) was generated by a sequence of coin tosses, (B) represents one of two trillion or so possible ways Caputo might legitimately have chosen ballot orders. True, in this respect probabilities do not distinguish (A) from (B) since all such sequences of Ds and Rs of length 41 have the same small probability of occurring by chance, namely 1 in 2⁴¹, or approximately 1 in two…

[p. 17>]

trillion. But S is a finite agent whose background knowledge enables S to formulate only a tiny fraction of all the possible sequences of Ds and Rs of length 41. Unlike (A), (B) is not among them. Confronted with (B), S will scrutinize it, try to discover a pattern that isn’t ad hoc, and thus seek to uncover evidence that (B) resulted from something other than chance. But given S’s background knowledge, nothing about (B) suggests an explanation other than chance. Indeed, since the relative frequency of Democrats to Republicans actually favors Republicans (twenty-one Rs versus twenty Ds), the Nicholas Caputo responsible for (B) is hardly “the man with the golden arm.” Thus, while (A) is detachable, (B) is not.
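The “two trillion or so” figure, and the fact that (A) and (B) are equally improbable as individual sequences, can be checked directly; this is my own arithmetic sketch, not the book’s.

```python
total = 2 ** 41  # number of distinct length-41 D/R sequences
print(total)     # 2199023255552, about 2.2 trillion
print(1 / total) # ~4.5e-13: the chance of any particular sequence, (A) and (B) alike
```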

But can one be absolutely certain (B) is not detachable? No, one cannot. There is a fundamental asymmetry between detachability and its negation, call it nondetachability. In practice one can decisively demonstrate that a pattern is detachable from an event, but not that a pattern is incapable of being detached from an event. A failure to establish detachability always leaves open the possibility that detachability might still be demonstrated at some later date.

To illustrate this point, suppose I walk down a dirt road and find some stones lying about. The configuration of stones says nothing to me. Given my background knowledge I can discover no pattern in this configuration that I could have formulated on my own without actually seeing the stones lying about as they do. I cannot detach the pattern of stones from the configuration they assume. I therefore have no reason to attribute the configuration to anything other than chance. But suppose next an astronomer travels this same road and looks at the same stones only to find that the configuration precisely matches some highly complex constellation. Given the astronomer’s background knowledge, this pattern now becomes detachable. The astronomer will therefore have grounds for thinking that the stones were intentionally arranged to match the constellation.

Detachability must always be relativized to a subject and a subject’s background knowledge. Whether one can detach a pattern from an event depends on one’s background knowledge coming to the event. Often one’s background knowledge is insufficient to detach a pattern from an event. Consider, for instance, the case of cryptographers trying to break a cryptosystem. Until they break the cryptosystem, the strings of characters they record from listening to their enemy’s communications will seem random, and for all the cryptographers know…

[p. 18>]

might just be gibberish. Only after the cryptographers have broken the cryptosystem and discovered the key for decrypting their enemy’s communications will they discern the detachable pattern present in the communications they have been monitoring (cf. Section 1.6).

Is it, then, strictly because our background knowledge and abilities are limited that some patterns fail to be detachable? Would, for instance, an infinitely powerful computational device be capable of detaching any pattern whatsoever? Regardless whether some super-being possesses an unlimited capacity to detach patterns, as a practical matter we humans find ourselves with plenty of patterns we cannot detach. Whether all patterns are detachable in some grand metaphysical sense, therefore, has no bearing on the practical problem whether a certain pattern is detachable given certain limited background knowledge. Finite rational agents like ourselves can formulate only a very few detachable patterns. For instance, of all the possible ways we might flip a coin a thousand times, we can make explicit only a minuscule proportion. It follows that a human subject will be unable to specify any but a very tiny fraction of these possible coin flips. In general, the patterns we can know to be detachable are quite limited. (jump)

Let us now wrap up the Caputo example. Confronted with Nicholas Caputo assigning the Democrats the top ballot line forty out of forty-one times, the New Jersey Supreme Court first rejected the regularity explanation, and then rejected the chance explanation (“confronted with these odds, few persons of reason will accept the explanation of blind chance”). Left with no other option, the court therefore accepted the agency explanation, inferred Caputo was cheating, and threw him in prison.

Well, not quite. The court did refuse to attribute Caputo’s golden arm to either regularity or chance. Yet when it came to giving a positive explanation of Caputo’s golden arm, the court waffled. To be sure, the court knew something was amiss. For the Democrats to get the top ballot line in Caputo’s county forty out of forty-one times, especially…

[p. 19>]

with Caputo solely responsible for ballot line selections, something had to be fishy. Nevertheless, the New Jersey Supreme Court was unwilling explicitly to charge Caputo with corruption. Of the six judges, Justice Robert L. Clifford was the most suspicious of Caputo, wanting to order Caputo to institute new guidelines for selecting ballot lines. The actual ruling, however, simply suggested that Caputo institute new guidelines in the interest of “public confidence in the integrity of the electoral process.” The court therefore stopped short of charging Caputo with dishonesty.

Did Caputo cheat? Certainly this is the best explanation of Caputo’s golden arm. Nonetheless, the court stopped short of convicting Caputo. Why? The court had no clear mandate for dealing with highly improbable ballot line selections. Such mandates exist in other legal settings, as with discrimination laws that prevent employers from attributing to the luck of the draw their failure to hire sufficiently many women or minorities. But in the absence of such a mandate the court needed an exact causal story of how Caputo cheated if the suit against him was to succeed. And since Caputo managed to obscure how he selected the ballot lines, no such causal story was forthcoming. The court therefore went as far as it could.

Implicit throughout the court’s deliberations was the design inference. The court wanted to determine whether Caputo cheated. Lacking a causal story of how Caputo selected the ballot lines, the court was left with circumstantial evidence. Given this evidence, the court immediately ruled out regularity. What’s more, from the specified improbability of selecting the Democrats forty out of forty-one times, the court also ruled out chance.

These two moves — ruling out regularity, and then ruling out chance — constitute the design inference. The conception of design that emerges from the design inference is therefore eliminative, asserting of an event what it is not, not what it is. To attribute an event to design is to say that regularity and chance have been ruled out. Referring Caputo’s ballot line selections to design is therefore not identical with referring it to agency. To be sure, design renders agency plausible. But as the negation of regularity and chance, design is a mode of explanation logically preliminary to agency. Certainly agency (in this case cheating) best explains Caputo’s ballot line selections. But no one was privy to Caputo’s ballot line selections. In the absence of…

[p. 20>]

an exact causal story, the New Jersey Supreme Court therefore went as far as it could in the Caputo case.(jump)

[….]

[p. 22>]

1.4 FORENSIC SCIENCE AND DETECTION

Forensic scientists, detectives, lawyers, and insurance fraud investigators cannot do without the design inference. Something as common as a forensic scientist placing someone at the scene of a crime by matching fingerprints requires a design inference. Indeed, there is no logical or genetic impossibility preventing two individuals from sharing the same fingerprints. Rather, our best understanding of fingerprints and the way they are distributed in the human population is that they are, with very high probability, unique to individuals. And so, whenever the fingerprints of an individual match those found at the scene of a crime, we conclude that the individual was indeed at the scene of the crime.

The forensic scientist’s stock of design inferences is continually increasing. Consider the following headline: “DNA Tests Becoming Elementary in Solving Crimes.” The lead article went on to describe…

[p. 23>]

the type of reasoning employed by forensic scientists in DNA testing. As the following excerpt makes clear, all the key features of the design inference described in Sections 1.1 and 1.2 are present in DNA testing (The Times — Princeton-Metro, N.J., 23 May 1994, p. A1):

TRENTON — A state police DNA testing program is expected to be ready in the fall, and prosecutors and police are eagerly looking forward to taking full advantage of a technology that has dramatically boosted the success rate of rape prosecutions across the country.

Mercer County Prosecutor Maryann Bielamowicz called the effect of DNA testing on rape cases “definitely a revolution. It’s the most exciting development in my career in our ability to prosecute.”

She remembered a recent case of a young man arrested for a series of three sexual assaults. The suspect had little prior criminal history, but the crimes were brutal knifepoint attacks in which the assailant broke in through a window, then tied up and terrorized his victims.

“Based on a DNA test in one of those assaults he pleaded guilty to all three. He got 60 years. He’ll have to serve 27½ before parole. That’s pretty good evidence,” she said.

All three women identified the young man. But what really intimidated the suspect into agreeing to such a rotten deal were the enormous odds — one in several million — that someone other than he left semen containing the particular genetic markers found in the DNA test. Similar numbers are intimidating many others into forgoing trials, said the prosecutor.6 (jump)
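The “one in several million” figure in the excerpt comes from the product rule: the population frequency of each genetic marker is multiplied together to get a random match probability. A minimal sketch, using made-up marker frequencies (real casework uses measured population databases, not these numbers):

```python
# Hypothetical per-marker population frequencies (illustrative only)
marker_freqs = [0.10, 0.05, 0.08, 0.05, 0.10, 0.20]

rmp = 1.0  # random match probability
for f in marker_freqs:
    rmp *= f  # assume markers are statistically independent

print(f"random match probability ≈ 1 in {1 / rmp:,.0f}")
```

With these six invented frequencies the product is 4 × 10⁻⁷, i.e., about one in 2.5 million — the same order of magnitude the prosecutor cites.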

Not just forensic science, but the whole field of detection is inconceivable without the design inference. Indeed, the mystery genre would be dead without it.7 (jump) When in the movie Double Indemnity Edward G. Robinson (“the insurance claims man”) puts it together that Barbara Stanwyck’s husband did not die an accidental death by falling off a train, but instead was murdered by Stanwyck to…

[p. 24>]

collect on a life insurance policy, the design inference is decisive. Why hadn’t Stanwyck’s husband made use of his life insurance policy earlier to pay off on a previously sustained injury, for the policy did have such a provision? Why should he die just two weeks after taking out the policy? Why did he happen to die on a train, thereby requiring the insurance company to pay double the usual indemnity (hence the title of the movie)? How could he have broken his neck falling off a train when at the time of the fall, the train could not have been moving faster than 15 m.p.h.? And who would seriously consider committing suicide by jumping off a train moving only 15 m.p.h.? Too many pieces coalescing too neatly made the explanations of accidental death and suicide insupportable. Thus, at one point Edward G. Robinson exclaims, “The pieces all fit together like a watch!” Suffice it to say, in the movie Barbara Stanwyck and her accomplice/lover Fred MacMurray did indeed kill Stanwyck’s husband.

Whenever there is a mystery, it is the design inference that elicits the crucial insight needed to solve the mystery. The dawning recognition that a trusted companion has all along been deceiving you (cf. Notorious); the suspicion that someone is alive after all, even though the most obvious indicators point to the person having died (cf. The Third Man); and the realization that a string of seemingly accidental deaths were carefully planned (cf. Coma) all derive from design inferences. At the heart of these inferences is a convergence of small probabilities and specifications, a convergence that cannot properly be explained by appealing to chance.


Notes


2) The archer example introduces a tripartite distinction that will be implicit throughout our study of chance elimination arguments: a reference class of possible events (e.g., the arrow hitting the wall at some unspecified place); a pattern that restricts the reference class of possible events (e.g., a target on the wall); and the precise event that has occurred (e.g., the arrow hitting the wall at some precise location). In a chance elimination argument, the reference class, the pattern, and the event are always inseparably linked, with the pattern mediating between the event and the reference class, helping to decide whether the event really is due to chance. Throughout this monograph we shall refer to patterns and events as such, but refer to reference classes by way of the chance hypotheses that characterize them (cf. Section 5.2). (back)

3) This conclusion is consistent with algorithmic information theory, which regards a sequence of numbers as nonrandom to the degree that it is compressible. Since compressibility within algorithmic information theory constitutes but a special case of detachability, and since most sequences are incompressible, the detachable sequences are indeed quite limited. See Kolmogorov (1965), Chaitin (1966), and van Lambalgen (1989). See also Section 1.7. (back)
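Note 3’s compressibility criterion can be illustrated informally with an off-the-shelf compressor (zlib stands in here for Kolmogorov complexity, which is uncomputable): a patterned sequence compresses dramatically, while random bytes barely compress at all.

```python
import os
import zlib

patterned = b"01" * 500        # 1,000 bytes with an obvious repeating pattern
random_ish = os.urandom(1000)  # 1,000 bytes of OS-supplied randomness

print(len(zlib.compress(patterned)))   # far smaller than 1,000
print(len(zlib.compress(random_ish)))  # roughly 1,000, or slightly more
```

The patterned sequence shrinks to a few bytes; the random bytes, lacking any exploitable regularity, do not shrink, which is the algorithmic-information-theory sense in which they count as random.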

4) Legal scholars continue to debate the proper application of probabilistic reasoning to legal problems. Larry Tribe (1971), for instance, views the application of Bayes’s theorem within the context of a trial as fundamentally unsound. Michael Finkelstein takes the opposite view (see Finkelstein, 1978, p. 288 ff.). Still, there appears no getting rid of the design inference in the law. Cases of bid-rigging (Finkelstein and Levin, 1990, p. 64), price-fixing (Finkelstein and Levenbach, 1986, pp. 79-106), and collusion often cannot be detected save by means of a design inference. (back)

[….]

6) It’s worth mentioning that at the time of this writing, the accuracy and usefulness of DNA testing is still a matter for debate. As a New York Times (23 August 1994, p. A10) article concerned with the currently ongoing O. J. Simpson case remarks, “there is wide disagreement among scientific experts about the accuracy and usefulness of DNA testing and they emphasize that only those tests performed under the best of circumstances are valuable.” My interest, however, in this matter is not with the ultimate fate of DNA testing, but with the logic that underlies it, a logic that hinges on the design inference. (back)

7) Cf. David Lehman’s (1989, p. 20) notion of “retrospective prophecy” as applied to the detective-fiction genre: “If mind-reading, backward-reasoning investigators of crimes — sleuths like Dupin or Sherlock Holmes — resemble prophets, it’s in the visionary rather than the vatic sense. It’s not that they see into the future; on the contrary, they’re not even looking that way. But reflecting on the clues left behind by the past, they see patterns where the rest of us see only random signs. They reveal and make intelligible what otherwise would be dark.” The design inference is the key that unlocks the patterns that “the rest of us see only [as] random signs.” (back)

William A. Dembski, The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge, United Kingdom: Cambridge University Press, 1998), 9-20, 22-24.

God and Evolution: The Problem with Theistic Evolution (Serious Saturday)

http://www.faithandevolution.org – Although there are some theistic evolutionists in the Intelligent Design Movement (like Michael Behe and William Dembski) they are not radical like the Neo-Darwinian evolutionists. This video shows the many problems of theistic evolution. (Jay Richards, John G. West, Jonathan Wells, Richard Sternberg, Stephen Meyer)

What’s Okay with Big Gov and Not Okay With It (Nanny State Comparisons)

Libertarian Republican notes the latest nanny state move by the banning of cartoon characters for sugary cereals.

The Federal Government, pushed along by liberal pressure groups, is taking the first steps towards banning the sale of sugary cereals and salt-abundant foods to kids.

CBS News reports: “GOP decries “nanny state” push on junk food ads”:

To critics, it’s the latest example of “nanny state” overreach by the federal government that could cost money and jobs.

The issue? A proposed set of voluntary guidelines backed by the Obama administration designed to limit the marketing of junk food to children through mascots like “Tony the Tiger,” the smiling animated figure used for decades to sell Kellogg’s “Frosted Flakes” breakfast cereal. Under the guidelines, companies would only be able to advertise and promote healthy foods low in fat, sugar…

…(read more)…

Professor Walter Williams described these “Do Gooders” as lifestyle Nazis. C. S. Lewis aptly talked about his fear of such people:

Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. Their very kindness stings with intolerable insult. To be ‘cured’ against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.

C. S. Lewis, God in the Dock, p. 292.

The government will demand the following to be placed on cigarette packs (forcing private businesses to place gruesome photos on their product):

That, however, is not my main focus. What I wish to zero in on is which causes the secular left goes to the mat for. The above, at the federal level, is kosher… but the below, at the local level, is not: simple labels inserted into biology textbooks warning schoolchildren about the monolithic view taught in their science classes [in regards to origin science, not working science]:

  1. This textbook contains material on evolution. Evolution is a theory, not a fact, regarding the origin of living things. This material should be approached with an open mind, studied carefully, and critically considered. (Selman v. Cobb County School District)
  2. Intelligent Design is an explanation of the origin of life that differs from Darwin’s view. The reference book Of Pandas and People is available for students to see if they would like to explore this view in an effort to gain an understanding of what Intelligent Design actually involves. As is true with any theory, students are encouraged to keep an open mind. (Tammy Kitzmiller v. Dover Area School District)

How is this argument misconstrued with straw-men and non-sequiturs? Here is a great example that comes from an evolutionary website, first the person posts this graphic equating ID to the following:

Did you notice the lumping in of Neo-Darwinian THEORY with laws of science and effects that are repeatable and observable? The author continues down the non-sequitur road, creating a straw-man and then defeating it, not the real argument:

How long does this fight need to go on? Do we need to teach the “strengths and weaknesses” of the theory of gravity? That’s right. That’s all it is. A theory. But I don’t see any creationists defiantly jumping off cliffs.

One commentator also hops in and says something that is truly amazing and shows you the depths of non-thinking in regards to this topic:

The world is flat, the moon landing was a hoax, global warming is not real, and intelligent design is true. Amazing what some people will resort to, just to avoid facing the truth and questioning their beliefs or their lifestyles.

All I have to say is “WOW!” Which brings me to the God-centered vacuum that man tries to fill with himself. And this is the bottom line: do you want to give ultimate credence to the Designer, or the creature?

Romans 1:21-23 (ESV):

For although they knew God, they did not honor him as God or give thanks to him, but they became futile in their thinking, and their foolish hearts were darkened. Claiming to be wise, they became fools, and exchanged the glory of the immortal God for images resembling mortal man and birds and animals and creeping things.

I would hope that, to rightly understand what Intelligent Design theorists ARE saying, one would take the time to read the small portion entitled “The Golden Arm,” posted after an atheist’s point about ID:

If science really is permanently committed to methodological naturalism – the philosophical position that restricts all explanations in science to naturalistic explanations – it follows that the aim of science is not generating true theories. Instead, the aim of science would be something like: generating the best theories that can be formulated subject to the restriction that the theories are naturalistic. More and more evidence could come in suggesting that a supernatural being exists, but scientific theories wouldn’t be allowed to acknowledge that possibility.

Bradley Monton, author of Seeking God in Science: An Atheist Defends Intelligent Design ~ Apologetics315 h/t

Enjoy the following read:


William Dembski & Christopher Hitchens Debate God’s Existence

Dembski-Hitchens Debate — The Real “Universal Acid”

As Dembski points out, Hitchens hitches much of his atheistic wagon to Darwinism (the creation myth of atheistic materialism, which is dissolving rapidly in the universal acid* of genuine scientific rigor).

Very revealing is Hitchens’ reference to cave-dwelling creatures that lose their eyes. He thinks that this is evidence of “evolution.” In fact, this is evidence of devolution — the loss of information, not the origin or creation of it. It is evidence for informational entropy. Decay happens all by itself.

Consider computer code like mine that simulates human intelligence in the game of checkers.

It is approximately 65,000 lines of highly optimized and refined computer code in the C/C++ language. (This does not include several tens of thousands of additional lines of code that compute, compress, store, and provide real-time execution access and decompression to the endgame databases.)

Introduce random errors into that code and some of its functions will be disabled (or, the program will die upon conception when compiled and executed). Try to improve the same program by the introduction of random errors and there is no chance of success, even given an infinite amount of time, since random degradation will always outrun any possible random improvement.
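The point about random errors can be sketched as a toy experiment. The code below is my own hypothetical illustration, not Dodgen’s checkers engine: a tiny Python function stands in for the program, one random character of its source is flipped per trial, and we count how often the mutant still compiles and computes correctly.

```python
import random

SRC = "def add(a, b):\n    return a + b\n"  # stand-in for a real program

def mutate(src: str) -> str:
    """Flip one randomly chosen character to a random printable character."""
    i = random.randrange(len(src))
    return src[:i] + chr(random.randrange(32, 127)) + src[i + 1:]

random.seed(0)  # make the experiment repeatable
trials = 1000
working = 0
for _ in range(trials):
    mutant = mutate(SRC)
    try:
        ns = {}
        exec(compile(mutant, "<mutant>", "exec"), ns)
        if ns["add"](2, 3) == 5:  # does the mutant still behave correctly?
            working += 1
    except Exception:
        pass  # syntax error, renamed function, runtime error, etc.

print(working, "of", trials, "single-character mutants still work")
```

Most single-character mutations break the function outright; the few survivors are overwhelmingly neutral (e.g., a character replaced by itself), which is the degradation-outruns-improvement pattern the passage describes.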

As it turns out, virtually all examples of the creative powers of Darwinian evolution (random variation/errors and natural selection) rest either on the mixing and matching of existing information, or what Michael Behe refers to as trench warfare (destroying information for a temporary advantage in a pathological environment, such as a bacterium in the presence of an antibiotic) as opposed to an arms race.

The bottom line is that the infinitely creative, information-producing powers of the Darwinian mechanism are nonexistent, and only exist in the infinitely creative imaginations of Darwinists, who have an infinite capacity for spinning fanciful stories that are completely out of contact with modern scientific evidence and reasoning.

* Daniel Dennett calls Darwinism a universal acid that essentially destroys all traditional theistic belief.

Link in Picture: