Imagine you were a juror in a trial in which a woman was accused of murdering her two infant sons. The sons died about a year apart, each of crib death. The prosecuting attorney tells you that 1 in 8500 affluent, nonsmoking families with a mother over the age of 26 has a child die of crib death. The attorney also tells you that if you simply multiply that probability by itself, you’ll find that the probability of two children in such a family dying of crib death is about 1 in 72 million. Would you find the defendant guilty?
At first glance, you might say yes. But implied in the multiplication is the idea that the two events are independent. Research suggests that they’re not. According to Ray Hill, a British professor of mathematics, having a sibling who died of crib death makes a child 10-22 times more likely to die of crib death.
This is really a Bayesian problem. What you actually need is the conditional probability Pr(A child dies of crib death | The child’s sibling died of crib death). Looked at this way, the probability that two children in the same family die of crib death becomes Pr(A child dies of crib death) * Pr(A child dies of crib death | The child’s sibling died of crib death). The answer, Hill suggests, is about 1 in 130,000. Still unlikely, but much more likely than 1 in 72 million.
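The arithmetic can be sketched in a few lines of Python. This is only illustrative: it uses the article’s 1-in-8500 baseline and the low end (10x) of the 10-22x dependence range as assumptions, so it reproduces the prosecution’s 1-in-72-million figure but not Hill’s 1-in-130,000 estimate, which rests on a fuller analysis with different baseline rates.

```python
# Baseline from the article: 1 in 8500 such families has a child die of crib death.
p_first = 1 / 8500

# The prosecution's (incorrect) step: treat the two deaths as independent
# and multiply the baseline probability by itself.
p_naive = p_first * p_first  # about 1 in 72 million

# Chain rule: Pr(both) = Pr(first) * Pr(second | first).
# Research cited in the article suggests a sibling death makes a child
# 10-22x more likely to die of crib death; 10x is an assumed low-end value.
p_second_given_first = 10 * p_first
p_joint = p_first * p_second_given_first

print(f"Assuming independence: 1 in {1 / p_naive:,.0f}")
print(f"Accounting for dependence (10x): 1 in {1 / p_joint:,.0f}")
```

Even with the conservative 10x factor, dropping the independence assumption makes the joint event an order of magnitude more likely than the figure the jury heard.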
This may seem like a morbid example, but I chose it because it actually happened to a woman in England. Sally Clark was convicted of two counts of murder after her two sons died of crib death. She was released from jail after about four years, but died only a few years later.
It’s possible that if the jurors or the prosecutors had considered Bayes’ Theorem, this might not have happened. Strangely, in 2011 a British judge ruled against using this approach when assessing the probability of a defendant’s guilt given the evidence against them.