1.2 THE MAN WITH THE GOLDEN ARM
Even if we can’t ascertain the precise causal story underlying an event, we often have probabilistic information that enables us to rule out ways of explaining the event. This ruling out of explanatory options is what the design inference is all about. The design inference does not by itself deliver an intelligent agent. But as a logical apparatus for sifting our explanatory options, the design inference rules out explanations incompatible with intelligent agency (such as chance). The design inference appears widely, and is memorably illustrated in the following example (New York Times, 23 July 1985, p. B1):
TRENTON, July 22 — The New Jersey Supreme Court today caught up with the “man with the golden arm,” Nicholas Caputo, the Essex County Clerk and a Democrat who has conducted drawings for decades that have given Democrats the top ballot line in the county 40 out of 41 times.
Mary V. Mochary, the Republican Senate candidate, and county Republican officials filed a suit after Mr. Caputo pulled the Democrat’s name again last year.
The election is over — Mrs. Mochary lost — and the point is moot. But the court noted that the chances of picking the same name 40 out of 41 times were less than 1 in 50 billion. It said that “confronted with these odds, few persons of reason will accept the explanation of blind chance.”
And, while the court said it was not accusing Mr. Caputo of anything, it said it believed that election officials have a duty to strengthen public confidence in the election process after such a string of “coincidences.”
The court suggested — but did not order — changes in the way Mr. Caputo conducts the drawings to stem “further loss of public confidence in the integrity of the electoral process.”
Justice Robert L. Clifford, while concurring with the 6-to-0 ruling, said the guidelines should have been ordered instead of suggested.
Nicholas Caputo was brought before the New Jersey Supreme Court because the Republican party filed suit against him, claiming Caputo had consistently rigged the ballot lines in the New Jersey county where he was county clerk. It is common knowledge that first position on a ballot increases one’s chances of winning an election (other things being equal, voters are more likely to vote for the first person on a ballot than the rest). Since in every instance but one Caputo positioned the Democrats first on the ballot line, the Republicans argued that in selecting the order of ballots Caputo had intentionally favored his own Democratic party. In short, the Republicans claimed Caputo cheated.
The question, then, before the New Jersey Supreme Court was, Did Caputo actually rig the order of ballots, or was it without malice and forethought that Caputo assigned the Democrats first place forty out of forty-one times? Since Caputo denied wrongdoing, and since he conducted the drawing of ballots so that witnesses were unable to observe how he actually did draw the ballots (this was brought out in a portion of the article omitted in the preceding quote), determining whether Caputo did in fact rig the order of ballots becomes a matter of evaluating the circumstantial evidence connected with this case. How, then, is this evidence to be evaluated?
In trying to explain the remarkable coincidence of Nicholas Caputo selecting the Democrats forty out of forty-one times to head the ballot line, the court faced three explanatory options:
Regularity: Unknown to Caputo, he was not employing a reliable random process to determine ballot order. Caputo was like someone who thinks a fair coin is being flipped when in fact it’s a double-headed coin. Just as flipping a double-headed coin is going to yield a long string of heads, so Caputo, using his faulty method for ballot selection, generated a long string of Democrats coming out on top. An unknown regularity controlled Caputo’s ballot line selections.
Chance: In selecting the order of political parties on the state ballot, Caputo employed a reliable random process that did not favor one political party over another. The fact that the Democrats came out on top forty out of forty-one times was simply a fluke. It occurred by chance.
Agency: Caputo, acting as a fully conscious intelligent agent and intending to aid his own political party, purposely rigged the ballot line selections to keep the Democrats coming out on top. In short, Caputo cheated.
The first option — that Caputo chose his procedure for selecting ballot lines poorly, so that instead of genuinely randomizing the ballot order, it just kept putting the Democrats on top — was not taken seriously by the court. The court could dismiss this option outright because Caputo claimed to be using an urn model to select ballot lines. Thus, in a portion of the New York Times article not quoted, Caputo claimed to have placed capsules designating the various political parties running in New Jersey into a container, and then swished them around. Since urn models are among the most reliable randomization techniques available, there was no reason for the court to suspect that Caputo’s randomization procedure was at fault. The key question, therefore, was whether Caputo actually put this procedure into practice when he made the ballot line selections, or whether he purposely circumvented this procedure to keep the Democrats coming out on top. And since Caputo’s actual drawing of the capsules was obscured to witnesses, it was this question the court had to answer.
With the regularity explanation at least for the moment bracketed, the court next decided to dispense with the chance explanation. Having noted that the chance of picking the same political party 40 out of 41 times was less than 1 in 50 billion, the court concluded that “confronted with these odds, few persons of reason will accept the explanation of blind chance.” Now this certainly seems right. Nevertheless, a bit more needs to be said. As we saw in Section 1.1, exceedingly small probability is by itself not enough to preclude an event from happening by chance. Whenever I am dealt a bridge hand, I participate in an exceedingly improbable event. Whenever I play darts, the precise position where the darts land represents an exceedingly improbable configuration. In fact, just about anything that happens is exceedingly improbable once we factor in all the other ways what actually happened might have happened. The problem, then, does not reside simply in an event being improbable.
All the same, in the absence of a causal story detailing what happened, improbability remains a crucial ingredient in eliminating chance. For suppose that Caputo actually was cheating right from the beginning of his career as Essex County clerk. Suppose further that the one exception in Caputo’s career as “the man with the golden arm” —that is, the one case where Caputo placed the Democrats second on the ballot line — did not occur till after his third time selecting ballot lines. Thus, for the first three ballot line selections of Caputo’s career the Democrats all came out on top, and they came out on top precisely because Caputo intended it that way. Simply on the basis of three ballot line selections, and without direct evidence of Caputo’s cheating, an outside observer would be in no position to decide whether Caputo was cheating or selecting the ballots honestly.
With only three ballot line selections, the probabilities are too large to reliably eliminate chance. The probability of randomly selecting the Democrats to come out on top given that their only competition is the Republicans is in this case 1 in 8 (here p equals 0.125; compare this with the p-value computed by the court, which equals 0.00000000002). Because three Democrats in a row could easily happen by chance, we would be acting in bad faith if we did not give Caputo the benefit of the doubt in the face of such large probabilities. Small probabilities are therefore a necessary condition for eliminating chance, even though they are not a sufficient condition.
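Both probabilities mentioned here are easy to verify directly. The following is a minimal sketch in Python (the figures come from the court and the text; the code is merely a check on the arithmetic):

```python
from math import comb

# Court's scenario: a fair draw puts the Democrats first in exactly
# 40 of 41 selections. P = C(41, 40) * (1/2)^41.
p_court = comb(41, 40) * 0.5 ** 41
print(p_court)      # about 1.9e-11
print(1 / p_court)  # about 53.6 billion, i.e. less than 1 in 50 billion

# Democrats on top in all of the first three selections: (1/2)^3.
p_three = 0.5 ** 3
print(p_three)      # 0.125, i.e. 1 in 8
```

The contrast between the two numbers is the point: 1 in 8 is well within the reach of chance, while 1 in 50 billion is not plausibly attributed to it.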
What, then, besides small probabilities do we need for evidence that Caputo cheated? As we saw in Section 1.1, the event in question needs to conform to a pattern. Not just any pattern will do, however. Some patterns successfully eliminate chance while others do not.
Consider the case of an archer. Suppose an archer stands fifty meters from a large wall with bow and arrow in hand. The wall, let us say, is sufficiently large that the archer cannot help but hit it. Now suppose every time the archer shoots an arrow at the wall, she paints a target around the arrow, so that the arrow is positioned squarely in the bull’s-eye. What can be concluded from this scenario? Absolutely nothing about the archer’s ability as an archer. The fact that the archer is in each instance squarely hitting the bull’s-eye is utterly bogus. Yes, she is matching a pattern; but it is a pattern she fixes only after the arrow has been shot and its position located. The pattern is thus purely ad hoc.
But suppose instead that the archer paints a fixed target on the wall and then shoots at it. Suppose she shoots 100 arrows, and each time hits a perfect bull’s-eye. What can be concluded from this second scenario? In the words of the New Jersey Supreme Court, “confronted with these odds, few persons of reason will accept the explanation of blind chance.” Indeed, confronted with this second scenario we infer that here is a world-class archer.
The difference between the first and the second scenario is that the pattern in the first is purely ad hoc, whereas the pattern in the second is not. Thus, only in the second scenario are we warranted eliminating chance. Let me emphasize that for now I am only spotlighting a distinction without explicating it. I shall in due course explicate the distinction between “good” and “bad” patterns — those that respectively do and don’t permit us to eliminate chance (see Chapter 5). But for now I am simply trying to make the distinction between good and bad patterns appear plausible. In Section 1.1 we called the good patterns specifications and the bad patterns fabrications. Specifications are the non-ad hoc patterns that can legitimately be used to eliminate chance and warrant a design inference. Fabrications are the ad hoc patterns that cannot legitimately be used to eliminate chance.
Thus, when the archer first paints a fixed target and thereafter shoots at it, she specifies hitting a bull’s-eye. When in fact she repeatedly hits the bull’s-eye, we are warranted attributing her success not to beginner’s luck, but to her skill as an archer. On the other hand, when the archer paints a target around the arrow only after each shot, squarely positioning each arrow in the bull’s-eye, she fabricates hitting the bull’s-eye. Thus, even though she repeatedly hits the bull’s-eye, we are not warranted attributing her “success” in hitting the bull’s-eye to anything other than luck. In the latter scenario, her skill as an archer thus remains an open question.2
How do these considerations apply to Nicholas Caputo? By selecting the Democrats to head the ballot forty out of forty-one times, Caputo appears to have participated in an event of probability less than 1 in 50 billion (p = 0.00000000002). Yet as we have noted, events of exceedingly small probability happen all the time. Hence by itself Caputo’s participation in an event of probability less than 1 in 50 billion is no cause for alarm. The crucial question is whether this event is also specified — does this event follow a non-ad hoc pattern so that we can legitimately eliminate chance?
Now there is a very simple way to avoid ad hoc patterns and generate specifications, and that is by designating an event prior to its occurrence — C. S. Peirce (1883, pp. 207–10) referred to this type of specification as a predesignation. In the archer example, by painting the bull’s-eye before taking aim, the archer specifies in advance where the arrows are to land. Because the pattern is set prior to the event, the objection of ad-hocness or fabrication is effectively blocked.
In the Caputo case, however, the pattern is discovered after the event: only after we witness an extended series of ballot line selections do we notice a suspicious pattern. Though discovered after the fact, this pattern is not a fabrication. Patterns given prior to an event, or Peirce’s predesignations, constitute but a proper subset of the patterns that legitimately eliminate chance. The important thing about a pattern is not when it was identified, but whether in a certain well-defined sense it is independent of an event. We refer to this relation of independence as detachability, and say that a pattern is detachable just in case it satisfies this relation.
Detachability distinguishes specifications from fabrications. Although a precise account of detachability will have to wait until Chapter 5, the basic intuition underlying detachability is this: Given an event, would we be able to formulate a pattern describing it if we had no knowledge which event occurred? Here is the idea. An event has occurred. A pattern describing the event is given. The event is one from a range of possible events. If all we knew was the range of possible events without any specifics about which event actually occurred, could we still formulate the pattern describing the event? If so, the pattern is detachable from the event.
To illustrate detachability in the Caputo case, consider two possible courses Nicholas Caputo’s career as Essex County clerk might have taken (for simplicity assume no third-party candidates were ever involved, so that all elections were between Democrats and Republicans). In the one case — and for the sake of argument let us suppose this is what actually happened — Caputo chose the Democrats over the Republicans forty out of forty-one times in the following order:
(A) DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD
Thus, the initial twenty-two times Caputo chose the Democrats to head the ballot line, then the twenty-third time he chose the Republicans, after which for the remaining times he chose the Democrats.
In the second possible course Caputo’s career as county clerk might have taken, suppose Caputo once again had forty-one occasions on which to select the order of ballots, but this time that he chose both Democrats and Republicans to head the ballot pretty evenly, let us say in the following order:
In this instance the Democrats came out on top only twenty times, and the Republicans twenty-one times.
Sequences (A) and (B) are both patterns and describe possible ways Caputo might have selected ballot orders in his years as Essex County clerk. (A) and (B) are therefore patterns describing possible events. Now the question detachability asks is whether (A) and (B) could have been formulated without our knowing which event occurred. For (A) the answer is yes, but for (B) the answer is no. (A) is therefore detachable whereas (B) is not.
How is this distinction justified? To formulate (B), I just now flipped a coin forty-one times, recording “D” for Democrat whenever I observed heads and “R” for Republican whenever I observed tails. On the other hand, to formulate (A) I simply recorded “D” forty times and then interspersed a single “R.” Now consider a human subject S confronted with sequences (A) and (B). S comes to these sequences with considerable background knowledge which, we may suppose, includes the following:
(1) Nicholas Caputo is a Democrat.
(2) Nicholas Caputo would like to see the Democrats appear first on the ballot since having the first place on the ballot line significantly boosts one’s chances of winning an election.
(3) Nicholas Caputo, as election commissioner of Essex County, has full control over who appears first on the ballots in Essex County.
(4) Election commissioners in the past have been guilty of all manner of fraud, including unfair assignments of ballot lines.
(5) If Caputo were assigning ballot lines fairly, then both Democrats and Republicans should receive priority roughly the same number of times.
Given this background knowledge S is in a position to formulate various “cheating patterns” by which Caputo might attempt to give the Democrats first place on the ballot. The most blatant cheat is of course to assign the Democrats first place all the time. Next most blatant is to assign the Republicans first place just once (as in (A) — there are 41 ways to assign the Republicans first place just once). Slightly less blatant — though still blatant — is to assign the Republicans first place exactly two times (there are 820 ways to assign the Republicans first place exactly two times). This line of reasoning can be extended by throwing the Republicans a few additional sops. The point is, given S’s background knowledge, S is easily able (possibly with the aid of a personal computer) to formulate ways Caputo could cheat, one of which would surely include (A).
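The counts of cheating patterns cited above are binomial coefficients, and a quick sketch in Python confirms them:

```python
from math import comb

# Number of length-41 selection sequences with exactly k Republican firsts:
print(comb(41, 0))  # 1: the Democrats head the ballot every time
print(comb(41, 1))  # 41: Republicans first exactly once, as in (A)
print(comb(41, 2))  # 820: Republicans first exactly twice
```

Note how slowly these counts grow relative to the full space of sequences: the blatant cheating patterns form a tiny, easily enumerable family.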
Contrast this now with (B). Since (B) was generated by a sequence of coin tosses, (B) represents one of two trillion or so possible ways Caputo might legitimately have chosen ballot orders. True, in this respect probabilities do not distinguish (A) from (B), since all such sequences of Ds and Rs of length 41 have the same small probability of occurring by chance, namely 1 in 2⁴¹, or approximately 1 in two trillion. But S is a finite agent whose background knowledge enables S to formulate only a tiny fraction of all the possible sequences of Ds and Rs of length 41. Unlike (A), (B) is not among them. Confronted with (B), S will scrutinize it, try to discover a pattern that isn’t ad hoc, and thus seek to uncover evidence that (B) resulted from something other than chance. But given S’s background knowledge, nothing about (B) suggests an explanation other than chance. Indeed, since the relative frequency of Democrats to Republicans actually favors Republicans (twenty-one Rs versus twenty Ds), the Nicholas Caputo responsible for (B) is hardly “the man with the golden arm.” Thus, while (A) is detachable, (B) is not.
But can one be absolutely certain (B) is not detachable? No, one cannot. There is a fundamental asymmetry between detachability and its negation, call it nondetachability. In practice one can decisively demonstrate that a pattern is detachable from an event, but not that a pattern is incapable of being detached from an event. A failure to establish detachability always leaves open the possibility that detachability might still be demonstrated at some later date.
To illustrate this point, suppose I walk down a dirt road and find some stones lying about. The configuration of stones says nothing to me. Given my background knowledge I can discover no pattern in this configuration that I could have formulated on my own without actually seeing the stones lying about as they do. I cannot detach the pattern of stones from the configuration they assume. I therefore have no reason to attribute the configuration to anything other than chance. But suppose next an astronomer travels this same road and looks at the same stones only to find that the configuration precisely matches some highly complex constellation. Given the astronomer’s background knowledge, this pattern now becomes detachable. The astronomer will therefore have grounds for thinking that the stones were intentionally arranged to match the constellation.
Detachability must always be relativized to a subject and a subject’s background knowledge. Whether one can detach a pattern from an event depends on one’s background knowledge coming to the event. Often one’s background knowledge is insufficient to detach a pattern from an event. Consider, for instance, the case of cryptographers trying to break a cryptosystem. Until they break the cryptosystem, the strings of characters they record from listening to their enemy’s communications will seem random, and for all the cryptographers know might just be gibberish. Only after the cryptographers have broken the cryptosystem and discovered the key for decrypting their enemy’s communications will they discern the detachable pattern present in the communications they have been monitoring (cf. Section 1.6).
Is it, then, strictly because our background knowledge and abilities are limited that some patterns fail to be detachable? Would, for instance, an infinitely powerful computational device be capable of detaching any pattern whatsoever? Regardless of whether some super-being possesses an unlimited capacity to detach patterns, as a practical matter we humans find ourselves with plenty of patterns we cannot detach. Whether all patterns are detachable in some grand metaphysical sense, therefore, has no bearing on the practical problem whether a certain pattern is detachable given certain limited background knowledge. Finite rational agents like ourselves can formulate only a very few detachable patterns. For instance, of all the possible ways we might flip a coin a thousand times, we can make explicit only a minuscule proportion. It follows that a human subject will be unable to specify any but a very tiny fraction of these possible coin flips. In general, the patterns we can know to be detachable are quite limited.3
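The thousand-flip example makes this vivid with a little arithmetic. In the sketch below, the bound of 10^50 explicitly formulable patterns is an arbitrary, deliberately generous assumption introduced for illustration, not a figure from the text:

```python
# Total outcomes of a thousand coin flips: 2^1000, a 302-digit number.
total = 2 ** 1000
print(len(str(total)))  # 302

# Even granting a wildly generous 10^50 explicitly formulable patterns
# (an assumed bound, chosen only to make the point), the fraction of
# outcomes they could ever single out is negligible.
describable = 10 ** 50
print(describable / total)  # on the order of 1e-251
```

However generously we estimate what finite agents can describe, the describable sequences remain a vanishing fraction of the possible ones.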
Let us now wrap up the Caputo example. Confronted with Nicholas Caputo assigning the Democrats the top ballot line forty out of forty-one times, the New Jersey Supreme Court first rejected the regularity explanation, and then rejected the chance explanation (“confronted with these odds, few persons of reason will accept the explanation of blind chance”). Left with no other option, the court therefore accepted the agency explanation, inferred Caputo was cheating, and threw him in prison.
Well, not quite. The court did refuse to attribute Caputo’s golden arm to either regularity or chance. Yet when it came to giving a positive explanation of Caputo’s golden arm, the court waffled. To be sure, the court knew something was amiss. For the Democrats to get the top ballot line in Caputo’s county forty out of forty-one times, especially with Caputo solely responsible for ballot line selections, something had to be fishy. Nevertheless, the New Jersey Supreme Court was unwilling explicitly to charge Caputo with corruption. Of the six judges, Justice Robert L. Clifford was the most suspicious of Caputo, wanting to order Caputo to institute new guidelines for selecting ballot lines. The actual ruling, however, simply suggested that Caputo institute new guidelines in the interest of “public confidence in the integrity of the electoral process.” The court therefore stopped short of charging Caputo with dishonesty.
Did Caputo cheat? Certainly this is the best explanation of Caputo’s golden arm. Nonetheless, the court stopped short of convicting Caputo. Why? The court had no clear mandate for dealing with highly improbable ballot line selections. Such mandates exist in other legal settings, as with discrimination laws that prevent employers from attributing to the luck of the draw their failure to hire sufficiently many women or minorities. But in the absence of such a mandate the court needed an exact causal story of how Caputo cheated if the suit against him was to succeed. And since Caputo managed to obscure how he selected the ballot lines, no such causal story was forthcoming. The court therefore went as far as it could.
Implicit throughout the court’s deliberations was the design inference. The court wanted to determine whether Caputo cheated. Lacking a causal story of how Caputo selected the ballot lines, the court was left with circumstantial evidence. Given this evidence, the court immediately ruled out regularity. What’s more, from the specified improbability of selecting the Democrats forty out of forty-one times, the court also ruled out chance.
These two moves — ruling out regularity, and then ruling out chance — constitute the design inference. The conception of design that emerges from the design inference is therefore eliminative, asserting of an event what it is not, not what it is. To attribute an event to design is to say that regularity and chance have been ruled out. Referring Caputo’s ballot line selections to design is therefore not identical with referring them to agency. To be sure, design renders agency plausible. But as the negation of regularity and chance, design is a mode of explanation logically preliminary to agency. Certainly agency (in this case cheating) best explains Caputo’s ballot line selections. But no one was privy to Caputo’s ballot line selections. In the absence of an exact causal story, the New Jersey Supreme Court therefore went as far as it could in the Caputo case.4
1.4 FORENSIC SCIENCE AND DETECTION
Forensic scientists, detectives, lawyers, and insurance fraud investigators cannot do without the design inference. Something as common as a forensic scientist placing someone at the scene of a crime by matching fingerprints requires a design inference. Indeed, there is no logical or genetic impossibility preventing two individuals from sharing the same fingerprints. Rather, our best understanding of fingerprints and the way they are distributed in the human population is that they are, with very high probability, unique to individuals. And so, whenever the fingerprints of an individual match those found at the scene of a crime, we conclude that the individual was indeed at the scene of the crime.
The forensic scientist’s stock of design inferences is continually increasing. Consider the following headline: “DNA Tests Becoming Elementary in Solving Crimes.” The lead article went on to describe the type of reasoning employed by forensic scientists in DNA testing. As the following excerpt makes clear, all the key features of the design inference described in Sections 1.1 and 1.2 are present in DNA testing (The Times — Princeton-Metro, N.J., 23 May 1994, p. A1):
TRENTON — A state police DNA testing program is expected to be ready in the fall, and prosecutors and police are eagerly looking forward to taking full advantage of a technology that has dramatically boosted the success rate of rape prosecutions across the country.
Mercer County Prosecutor Maryann Bielamowicz called the effect of DNA testing on rape cases “definitely a revolution. It’s the most exciting development in my career in our ability to prosecute.”
She remembered a recent case of a young man arrested for a series of three sexual assaults. The suspect had little prior criminal history, but the crimes were brutal knifepoint attacks in which the assailant broke in through a window, then tied up and terrorized his victims.
“Based on a DNA test in one of those assaults he pleaded guilty to all three. He got 60 years. He’ll have to serve 27½ before parole. That’s pretty good evidence,” she said.
All three women identified the young man. But what really intimidated the suspect into agreeing to such a rotten deal were the enormous odds — one in several million — that someone other than he left semen containing the particular genetic markers found in the DNA test. Similar numbers are intimidating many others into forgoing trials, said the prosecutor.6
Not just forensic science, but the whole field of detection is inconceivable without the design inference. Indeed, the mystery genre would be dead without it.7 When in the movie Double Indemnity Edward G. Robinson (“the insurance claims man”) puts it together that Barbara Stanwyck’s husband did not die an accidental death by falling off a train, but instead was murdered by Stanwyck to collect on a life insurance policy, the design inference is decisive. Why hadn’t Stanwyck’s husband made use of his life insurance policy earlier to pay off on a previously sustained injury, for the policy did have such a provision? Why should he die just two weeks after taking out the policy? Why did he happen to die on a train, thereby requiring the insurance company to pay double the usual indemnity (hence the title of the movie)? How could he have broken his neck falling off a train when at the time of the fall, the train could not have been moving faster than 15 m.p.h.? And who would seriously consider committing suicide by jumping off a train moving only 15 m.p.h.? Too many pieces coalescing too neatly made the explanations of accidental death and suicide insupportable. Thus, at one point Edward G. Robinson exclaims, “The pieces all fit together like a watch!” Suffice it to say, in the movie Barbara Stanwyck and her accomplice/lover Fred MacMurray did indeed kill Stanwyck’s husband.
Whenever there is a mystery, it is the design inference that elicits the crucial insight needed to solve the mystery. The dawning recognition that a trusted companion has all along been deceiving you (cf. Notorious); the suspicion that someone is alive after all, even though the most obvious indicators point to the person having died (cf. The Third Man); and the realization that a string of seemingly accidental deaths were carefully planned (cf. Coma) all derive from design inferences. At the heart of these inferences is a convergence of small probabilities and specifications, a convergence that cannot properly be explained by appealing to chance.
2) The archer example introduces a tripartite distinction that will be implicit throughout our study of chance elimination arguments: a reference class of possible events (e.g., the arrow hitting the wall at some unspecified place); a pattern that restricts the reference class of possible events (e.g., a target on the wall); and the precise event that has occurred (e.g., the arrow hitting the wall at some precise location). In a chance elimination argument, the reference class, the pattern, and the event are always inseparably linked, with the pattern mediating between the event and the reference class, helping to decide whether the event really is due to chance. Throughout this monograph we shall refer to patterns and events as such, but refer to reference classes by way of the chance hypotheses that characterize them (cf. Section 5.2).
3) This conclusion is consistent with algorithmic information theory, which regards a sequence of numbers as nonrandom to the degree that it is compressible. Since compressibility within algorithmic information theory constitutes but a special case of detachability, and since most sequences are incompressible, the detachable sequences are indeed quite limited. See Kolmogorov (1965), Chaitin (1966), and van Lambalgen (1989). See also Section 1.7.
4) Legal scholars continue to debate the proper application of probabilistic reasoning to legal problems. Laurence Tribe (1971), for instance, views the application of Bayes’s theorem within the context of a trial as fundamentally unsound. Michael Finkelstein takes the opposite view (see Finkelstein, 1978, p. 288 ff.). Still, there appears no getting rid of the design inference in the law. Cases of bid-rigging (Finkelstein and Levin, 1990, p. 64), price-fixing (Finkelstein and Levenbach, 1986, pp. 79–106), and collusion often cannot be detected save by means of a design inference.
6) It’s worth mentioning that at the time of this writing, the accuracy and usefulness of DNA testing is still a matter for debate. As a New York Times (23 August 1994, p. A10) article concerned with the currently ongoing O. J. Simpson case remarks, “there is wide disagreement among scientific experts about the accuracy and usefulness of DNA testing and they emphasize that only those tests performed under the best of circumstances are valuable.” My interest, however, in this matter is not with the ultimate fate of DNA testing, but with the logic that underlies it, a logic that hinges on the design inference.
7) Cf. David Lehman’s (1989, p. 20) notion of “retrospective prophecy” as applied to the detective-fiction genre: “If mind-reading, backward-reasoning investigators of crimes — sleuths like Dupin or Sherlock Holmes — resemble prophets, it’s in the visionary rather than the vatic sense. It’s not that they see into the future; on the contrary, they’re not even looking that way. But reflecting on the clues left behind by the past, they see patterns where the rest of us see only random signs. They reveal and make intelligible what otherwise would be dark.” The design inference is the key that unlocks the patterns that “the rest of us see only [as] random signs.”
William A. Dembski, The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge: Cambridge University Press, 1998), 9–20, 22–24.