Here’s why I love Linda. But first, a quick review.
The Linda problem starts with a dataset describing Linda.
You can say anything you want about her. It doesn’t even have to be true.
Then you try to decide which of two statements is more
likely to be true:
1. Linda is X.
2. Linda is X and Y.
There is no relationship between X and your dataset about
Linda. There is also no relationship between Y and your dataset, or between Y
and X. So you can’t prove that either X or Y is true about Linda based on the
dataset.
To turn this into math, assign a probability of truth to each of X and Y. Each probability is strictly between zero and one, because you can’t prove anything about either one, and you can’t rule either one out. Because statement #2 is a conjunction of two unrelated claims, the math is multiplication. When you multiply two fractions smaller than 1, the product is always smaller than either of them, so the probability of #2 is smaller than the probability of #1, and #1 is more likely to be true. But most people get that wrong the first time they meet a Linda problem, and even after you explain it, some people still get it wrong the next time they meet one.
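If you want to see the arithmetic, here is a minimal sketch in Python; the 0.75 and 0.5 are made-up numbers for illustration, not anything you could derive from a real Linda dataset:

```python
# Made-up probabilities, chosen only for illustration.
p_x = 0.75          # probability that "Linda is X" is true
p_y = 0.5           # probability that "Linda is Y" is true

p_statement_1 = p_x          # "Linda is X"
p_statement_2 = p_x * p_y    # "Linda is X and Y": a conjunction, so multiply

print(p_statement_1)   # 0.75
print(p_statement_2)   # 0.375, smaller than p_x whenever p_y is below 1
```

Whatever numbers you pick, as long as both are below 1, statement #2 comes out less likely than statement #1.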
Linda’s formal name is the Conjunction Fallacy.
Here’s why I love Linda. The Documentary Hypothesis has a
dataset describing at least four putative documents. DH claims that my Torah is
made up of these four documents. You may have heard of JEDP; that’s them.
DH denies that “Linda is X” is possible. It rejects any description of “Linda” that allows the simple statement.
DH insists that “Linda is Y and Z and…”; that is, that Torah is made up of four sources. The probability of that claim is the product of the probabilities that each individual assignment to one of the four putative documents is correct. There is no hard evidence that any of those documents existed, so the probability of every assignment is between zero and one.
To get the answer, you must multiply, because DH says you can’t assign the whole thing to just one document, and you need enough assignments to cover the whole Torah. So the answer is some fractions between zero and one multiplied together, with at least four terms contributing to the product.
Now, it would be one thing if DH took each book in Torah and assigned the whole book to one of the four sources. So your terms might be J for Genesis, E for Exodus, P for Leviticus, D for Deuteronomy, and then one of them again for Numbers.
But the dataset doesn’t support that, because the descriptions of the documents don’t match any one book. P comes closest to Leviticus, and assignments to P would have the highest probability, if they were all restricted to Leviticus.
But they aren’t. DH splits all five books. Numbers is split up the most, with parts from each of the four documents. Leviticus is split up the least, but some of its putative sources are not even JEDP. At any rate, the number of terms is larger than four, and you might as well make it ten for starters. That would be the outcome if Leviticus were all P and Deuteronomy were all D, and each of the other three books were split between two or three of the documents.
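To see how quickly the product shrinks as the number of assignments grows, here is a sketch; the 0.5 per-term probability is my own illustrative assumption, not a figure DH supplies:

```python
# Assume, purely for illustration, that each assignment has a 0.5 chance of being correct.
p_per_term = 0.5

for n_terms in (4, 10):
    print(n_terms, p_per_term ** n_terms)
# 4 terms:  0.0625
# 10 terms: 0.0009765625
```

Four terms already puts you under seven percent; ten terms puts you under a tenth of a percent.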
But that’s not how DH actually splits the books. All right, then, it would be nice if DH took each narrative in Torah and said it came from one of the four sources. That means you’re looking at each narrative as one term and assigning it a probability of coming from one of those four documents. I’ve counted some 80 narratives in Torah, stories with plots and characters and action (and this feeds into something I will talk about later). So there are at least 80 terms in the calculation, all of them fractions between zero and one. And then you have to consider the non-narrative portions, which fall between the narratives. So the number of terms is larger than 80; we could set it at 100 for starters. A fraction between zero and one raised to the 100th power is infinitesimal; if each term were, say, a quarter, the product would be down around 10 to the minus 60. (For comparison, the Planck length is 10 to the minus 35 meters.)
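You can check that order of magnitude with a quick log-space calculation; the quarter per term is the same illustrative assumption, nothing more:

```python
import math

# Order-of-magnitude check, using a quarter per term as an illustrative assumption.
p_per_term = 0.25
n_terms = 100

log10_product = n_terms * math.log10(p_per_term)
print(log10_product)   # about -60.2, so the product is around 10**-60
```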
But that’s not what DH does. DH splits some narratives up
and assigns part to one document and part to another. So you can’t count on two sequential verses being assigned to the same document. Actually, it’s
worse than that, because DH splits some verses up, assigning part of the words
to one document and part to another. But let’s ignore that last bit, because
what I’m saying is that DH has a probability that is the product of some fractions between zero and
one. You have the same number of terms as the number
of pieces in the DH assignment, something more than 100 terms.
Since you can’t count on a chunk of verses to all be assigned to the same document, you have to consider every verse a separate term.
Torah has 5845 verses.
DH’s probability calculation has at least 5845 terms. Each
term has a value between zero and one. If every term had the same value, the
answer would be that value raised to the 5845th power. The answer is
infinitesimal. The probability of DH being true is vanishingly small.
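Here is the same log-space calculation with 5845 terms; the per-term values are illustrative assumptions, and I’ve included one that is absurdly generous to DH:

```python
import math

n_terms = 5845   # one term per verse of Torah

# Illustrative per-term probabilities; even 0.99 per verse, which is absurdly
# generous, leaves the overall product vanishingly small.
for p_per_term in (0.99, 0.9, 0.5):
    log10_product = n_terms * math.log10(p_per_term)
    print(p_per_term, log10_product)
# 0.99: about -25.5   (product around 10**-25)
# 0.9:  about -267    (product around 10**-267)
# 0.5:  about -1760   (product around 10**-1760)
```

Even granting 99 percent confidence on every single verse assignment, the overall probability comes out on the order of 10 to the minus 25.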
Now, DH will say that this is not a Linda problem, that there IS a connection between the dataset and the assignments: the dataset describes the four documents to which the assignments are being made. But as I said, we don’t have hard evidence (yet) that those documents ever existed. What’s worse, I show on my blog that the descriptions themselves have a basis in fallacies, two of which I will get to later. Worst of all, I show that, from the start, DH relied on false data. This is what I wrote about last time: even if you don’t have a real conjunction fallacy, if your dataset contains falsehoods, you’re wrong. The whole concept has a zero probability of being true.
My science author was a trained biochemist but, like I said
last week, that doesn’t mean he had training in logic. And even if he did, it's
obvious that he didn't subject DH to a probability calculation. He had no
training in the Bible; he admits that. It was one of a number of instances
where a scientist writes about something they’ve never researched, and the work
gets attention because of who they are, not because they know what they’re
talking about. I won’t go into that rant here.