Is Bayesian Statistics the Best?

The article I chose this week is essentially a glorified book review of “The Signal and the Noise” by Nate Silver, a “god of predictions.” What I’m focusing on, however, is the author’s argument about Bayesian statistics. Silver, in his book, is pro-Bayesian, using the approach for all kinds of predictions. What the author is trying to convey, though, is that Bayesian statistics has a specific niche within probability: it excels at “predicting outcome probabilities in cases where one has strong prior knowledge of a situation.” The example he uses is women over 40 getting a mammogram, where both prior probabilities and conditional probabilities are in play. It is only when both types of probabilities are known that Bayesian statistics not only works, but is the best model to use for predictions.

However, not every situation in which predictions could be useful fits this mold. For instance, the author referenced the Stanley Milgram experiment, in which subjects would torture a victim if an authority figure told them it was for science. How does one predict the outcome of such an event? The author suggests Ronald Fisher’s approach, which holds that “a hypothesis is considered validated by data only if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly.” This approach eliminates the main limitation of Bayesian statistics: the need for prior information.
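The mammogram example can be made concrete with Bayes’ theorem. The numbers below (a 1% prevalence, an 80% test sensitivity, and a 9.6% false-positive rate) are illustrative assumptions I’m supplying, not figures from the article; the point is just how a prior probability and conditional probabilities combine into a posterior:

```python
# Bayes' theorem sketch for the mammogram example.
# All rates below are illustrative assumptions, not figures from the article.
prior = 0.01             # P(cancer): assumed prevalence among women over 40
sensitivity = 0.80       # P(positive test | cancer)
false_positive = 0.096   # P(positive test | no cancer)

# P(positive) via the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# P(cancer | positive) by Bayes' theorem
posterior = sensitivity * prior / p_positive
print(f"P(cancer | positive test) = {posterior:.3f}")  # roughly 0.078
```

Even with a positive test, the posterior stays small because the prior is small, which is exactly why knowing the prior matters so much for this kind of prediction.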
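Fisher’s approach, as the author describes it, can be sketched as a simulation: generate many random “null” datasets and accept a hypothesis only if chance alone would produce a result this extreme less than five per cent of the time. The numbers below (26 of 40 hypothetical subjects complying, a 50/50 chance model) are made up for illustration, not data from the Milgram study:

```python
import random

# Fisher-style significance test, sketched by simulation.
# Hypothetical data: 26 of 40 subjects complied. Would a count this high
# arise by chance if each subject were just a 50/50 coin flip?
observed = 26
n_subjects = 40
trials = 100_000

random.seed(0)
extreme = 0
for _ in range(trials):
    # One random null dataset: each subject complies with probability 0.5
    complied = sum(random.random() < 0.5 for _ in range(n_subjects))
    if complied >= observed:
        extreme += 1

p_value = extreme / trials
# The hypothesis passes Fisher's test only if random data would rarely
# (less than 5% of the time) look at least this extreme.
print(f"p = {p_value:.4f}, passes the 95% test: {p_value < 0.05}")
```

Notice that no prior probability appears anywhere: the test only asks how often random data would mimic the observation, which is what makes it usable when strong prior knowledge is missing.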
Both approaches have their advantages and disadvantages. The author’s point, which I found interesting, is that predicting outcome probabilities should not be limited to a single approach.