The Bayesian Doomsday Argument

After spending a couple of days wrapping our heads around Bayes’ theorem, I (and I’m sure many other students) wondered what this could really be applied to. We went over a couple of examples, spam filtering and the marble problems, which are good instances of making a decision from very limited information, but it’s not like Bayes has anything to offer on the extinction of mankind. BUT WAIT! This week’s NPR blog post by Adam Frank tackles exactly that question using probability theory.

They call it “the Doomsday Argument”. It was first explored in 1983 by astrophysicist Brandon Carter, and has been critiqued and refined by many others since. The argument is detached from any economic, political, cultural, or religious context; the logic is strictly probabilistic.

Put roughly, Bayesian reasoning says you should treat your one observation as a typical sample rather than an extraordinary one. As Frank notes, this matters for doomsday because we observe humanity’s place in history exactly once: our own birth rank. If we assume that rank is typical, then the total number of humans that will ever be born (N) probably isn’t vastly greater than the number of humans born so far (n). As Frank says, “If you are the lucky 1 billionth human to be born, then probabilistically, there won’t be a trillion more after you.” By this logic, THE END IS NEAR!

I hope this sounds fishy to you, so that I’m not alone. If you had been born in 1400, you would have concluded that doomsday was only a couple of centuries away, and you would have been wrong. But maybe that’s the real point of this Bayesian interpretation. It helps explain why, throughout human existence, there have ALWAYS been people saying the end was near: the Book of Revelation, the Mayan calendar, Y2K, the Doritos Locos Tacos. We think we’re the center of the universe, and at the end of time. We think this because right here, up to now, is all we know.