Dembski's "Explantory Filter"

by Lenny Flank

(c) copyright 2006

Perhaps the most celebrated of the Intelligent Design "theorists" is William Dembski, a mathematician and theologian. A prolific author, Dembski has written a number of books defending Intelligent Design.

The best-known of his arguments is the "Explanatory Filter", which is, he claims, a mathematical method of detecting whether or not a particular thing is the product of design. As Dembski himself describes it:

"The key step in formulating Intelligent Design as a scientific theory is to delineate a method for detecting design. Such a method exists, and in fact, we use it implicitly all the time. The method takes the form of a three-stage Explanatory Filter. Given something we think might be designed, we refer it to the filter. If it successfully passes all three stages of the filter, then we are warranted asserting it is designed. Roughly speaking the filter asks three questions and in the following order: (1) Does a law explain it? (2) Does chance explain it? (3) Does design explain it? . . . . . . . . I argue that the explanatory filter is a reliable criterion for detecting design. Alternatively, I argue that the Explanatory Filter successfully avoids false positives. Thus whenever the Explanatory Filter attributes design, it does so correctly." (http://www.arn.org/docs/dembski/wd_explfilter.htm)

The most detailed presentation of the Explanatory Filter is in Dembski's book No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. In the course of 380 pages, heavily loaded with complex-looking mathematics, Dembski spells out his "explanatory filter", along with such concepts as "complex specified information" and "the law of conservation of information". ID enthusiasts lauded Dembski for his "groundbreaking" work; one reviewer hailed Dembski as "The Isaac Newton of Information Theory", another declared Dembski to be "God's Mathematician".

Stripped of all its mathematical gloss, though, Dembski’s “filter” boils down to: “If not law, if not chance, then design.” Unfortunately for IDers, every one of these three steps presents insurmountable problems for the "explanatory filter" and "design theory".
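The structure of the filter is easiest to see when it is written out as the decision procedure Dembski describes. The sketch below is my own illustration, not Dembski's: the three inputs and the probability cutoff are placeholders, since (as the rest of this article argues) Dembski never supplies an objective way to evaluate any of them.

    # A minimal sketch of the Explanatory Filter's three-stage logic. Every
    # input here is a placeholder, since Dembski supplies no objective way to
    # evaluate any of them.

    SMALL_PROBABILITY = 1e-150   # illustrative cutoff (Dembski elsewhere proposes a
                                 # "universal probability bound" of about 1 in 10^150)

    def explanatory_filter(explained_by_law, chance_probability, is_specified):
        """Return the 'explanation' the filter assigns to an event.

        explained_by_law   -- does some known natural law account for the event?
        chance_probability -- probability of the event under a posited chance hypothesis
        is_specified       -- does the event match an independently given pattern?
        """
        # Stage 1: if a law explains it, attribute it to law and stop.
        if explained_by_law:
            return "law"
        # Stage 2: if chance makes it reasonably probable, attribute it to chance.
        if chance_probability >= SMALL_PROBABILITY:
            return "chance"
        # Stage 3: vastly improbable AND "specified" yields design; otherwise chance.
        return "design" if is_specified else "chance"

Written this way, the filter's central weakness is already visible: whoever calls it must supply the answers to all three questions, and Dembski gives no objective procedure for determining any of them.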

According to Dembski, the first step of applying his "filter" is:

"At the first stage, the filter determines whether a law can explain the thing in question. Law thrives on replicability, yielding the same result whenever the same antecedent conditions are fulfilled. Clearly, if something can be explained by a law, it better not be attributed to design. Things explainable by a law are therefore eliminated at the first stage of the Explanatory Filter." (http://www.arn.org/docs/dembski/wd_explfilter.htm)

Right away, the filter runs into problems. When Dembski refers to laws that explain the thing in question, does he mean all current explanations that refer to natural laws, or does he mean all possible explanations using natural law? If he means all current explanations, and if ruling out all current explanations therefore means that Intelligent Design is a possibility, then Dembski is simply invoking the centuries-old "god of the gaps" argument --- "if we can't currently explain it, then the designer diddit".

On the other hand, if Dembski's filter requires that we rule out all possible explanations that refer to natural laws, then it is difficult to see how anyone could ever get beyond the first step of the filter. How exactly does Dembski propose we rule out not only all current scientific explanations, but all of the possible ones that might be found in the future? How does he propose to rule out scientific explanations that no one has even thought of yet -- ones that can't be made until more data and evidence are discovered at some time in the future?

Science, of course, is perfectly content to say “we don't know, we don’t currently have an explanation for this”. Science then moves on to find possible ways to answer the question and uncover an explanation for it. ID, on the other hand, simply declares “Aha!! you don’t know, therefore my hypothesis must be correct! Praise God! -- uh, I mean The Unknown Intelligent Designer!” ID then does nothing -- nothing at all whatsoever in any way shape or form -- to go on and find a way to answer the question and find an explanation for it.

Let’s assume that there is something, call it X, that science can’t currently explain using natural law. Suppose, ten years later, we do find an explanation. Does this mean (1) the Intelligent Designer was producing X up until the time we discovered a natural mechanism for it, then stopped at that point? Or (2) the Intelligent Designer was doing it all along using the very mechanism we later discovered? Or (3) the newly discovered natural mechanism was doing X all along, and the Intelligent Designer was never actually doing anything at all?

Dembski's filter, however, completely sidesteps the whole matter of possible explanations that we don't yet know about, and simply asserts that if we can't give an explanation now, then we must go on to the second step of the filter:

"Suppose, however, that something we think might be designed cannot be explained by any law. We then proceed to the second stage of the filter. At this stage the filter determines whether the thing in question might not reasonably be expected to occur by chance. What we do is posit a probability distribution, and then find that our observations can reasonably be expected on the basis of that probability distribution. Accordingly, we are warranted attributing the thing in question to chance. And clearly, if something can be explained by reference to chance, it better not be attributed to design. Things explainable by chance are therefore eliminated at the second stage of the Explanatory Filter." (http://www.arn.org/docs/dembski/wd_explfilter.htm)

This is, of course, nothing more than the standard creationist "X is too improbable to have evolved" argument, and it falls victim to the same weaknesses. But, Dembski concludes, if we rule out law and then rule out chance, then we must go to the third step of the "filter":

"Suppose finally that no law is able to account for the thing in question, and that any plausible probability distribution that might account for it does not render it very likely. Indeed, suppose that any plausible probability distribution that might account for it renders it exceedingly unlikely. In this case we bypass the first two stages of the Explanatory Filter and arrive at the third and final stage. It needs to be stressed that this third and final stage does not automatically yield design -- there is still some work to do. Vast improbability only purchases design if, in addition, the thing we are trying to explain is specified. The third stage of the Explanatory Filter therefore presents us with a binary choice: attribute the thing we are trying to explain to design if it is specified; otherwise, attribute it to chance. In the first case, the thing we are trying to explain not only has small probability, but is also specified. In the other, it has small probability, but is unspecified. It is this category of specified things having small probability that reliably signals design. Unspecified things having small probability, on the other hand, are properly attributed to chance." (http://www.arn.org/docs/dembski/wd_explfilter.htm)

In No Free Lunch, Dembski describes what a designer does:

(1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (Dembski, No Free Lunch, p xi)

But Dembski and the rest of the IDers are completely unable (or unwilling) to give us any objective way to measure "complex specified information", or any way to differentiate "specified" things from unspecified ones. He is also unable to tell us who specifies it, when it is specified, where this specified information is stored before it is embodied in a thing, or how the specified design information is turned into an actual thing.

Dembski's inability to give any sort of objective method of measuring Complex Specified Information does not prevent him, however, from declaring a grand "Law of Conservation of Information", which states that no natural or chance process can increase the amount of Complex Specified Information in a system. It can only be produced, Dembski says, by an intelligence. Once again, this is just a rehashed version of the decades-old creationist "genetic information can't increase" argument.

With the Explanatory Filter, Dembski and other IDers are using a tactic that I like to call “The Texas Marksman”. The Texas Marksman walks over to the side of the barn, blasts away randomly, then draws a bullseye around each bullet hole and declares how wonderful it is that he was able to hit every single bullseye. Of course, if his shots had fallen in different places, he would then be declaring how wonderful it is that he hit those marks, instead.

Dembski's filter does the same thing. It draws a bullseye around the bullet hole after it has already appeared, and then declares how remarkable it is that “the designer” hit the target. If the bullseye had been somewhere else, though, Dembski would be declaring with equal intensity how remarkably improbable it was that that bullseye was hit. If ID "theory" really wanted to impress me, it would predict where the bullet will hit before the shot is fired. But ID does not make testable predictions of any sort.
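A deliberately mundane coin-tossing example (mine, not Dembski's) shows how the post-hoc bullseye works:

    import random

    # Toss a fair coin 500 times and record whatever exact sequence turns up.
    sequence = "".join(random.choice("HT") for _ in range(500))

    # Computed after the fact, the probability of that exact sequence is 2^-500,
    # an astronomically small number (on the order of 10^-151).
    print(0.5 ** 500)

Nothing remarkable has happened: a sequence with exactly that minuscule probability turns up every single time the experiment is run. The improbability only looks significant if the bullseye is drawn around the outcome after the fact; it is specifying the target in advance that would be impressive.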

Dembski, it seems, simply wants to assume his conclusion. His “filter” is nothing more than “god of the gaps” (if we can't explain it, then the Designer must have done it), dressed up in fancy, impressive-looking mathematical formulas. That suspicion is strengthened when we consider the carefully chosen order of the three steps in Dembski's filter. Why is the sequence “rule out law, rule out chance, therefore design”? Why isn’t it “rule out design, rule out law, therefore chance”? Or “rule out law, rule out design, therefore chance”? If Dembski has an objective way to detect or rule out "design", then why doesn't he just apply it from the outset? The answer is simple -- Dembski has no more way to calculate the “probability” of design than he does the “probability” of law, and therefore has no way, none at all whatsoever, to tell what is “designed” and what isn’t. So he wants to dump the burden onto others. Since he can't demonstrate that anything was designed, he wants to relieve himself of that responsibility by simply declaring, with suitably impressive mathematics, that the rest of us should assume that something is designed unless someone can show otherwise. Dembski has conveniently adopted the one sequence of steps in his "filter", out of all the possible ones, that relieves “design theory” of any need to propose anything, test anything, or demonstrate anything.

I suspect that isn’t a coincidence.
