A very real problem in public policy decisions is the role of the "prior," or the probability assignment before one has been presented with any evidence. Two politicians with very different priors, when presented with the same evidence, will come to very different conclusions. The beauty of subjective Bayes is that it gives us an analysis of how this phenomenon is a byproduct of rationality.
Consider, for example, the debacle over WMDs in Iraq. Many critics attribute irrationality to the policy makers who judged the probability that Iraq possessed WMDs to be high enough to justify an invasion. A disadvantage of this attribution is that it rules out the prospect of dealing strategically with these policy makers (via debates, speeches, compromises, etc.), since all theories of strategic interaction presume the rationality of one's opponent. The subjective Bayes approach allows us to characterize the conclusions of these policy makers as rational given an appropriate assignment of priors.
Before discussing more realistic scenarios, let's examine a toy example. A coin has been tossed 4 times, with outcomes THTT (tails, heads, tails, tails). Now, consider 3 politicians:
Politician Q is a frequentist
Politician R is a Bayesian who believes strongly in hypothesis A, namely that P(H) = 1/2
Politician S is a Bayesian who believes strongly in hypothesis B, namely that P(H) = 1/100
When presented with the same data set, Q, R, and S will each come to a different conclusion:
Politician Q will believe hypothesis C, namely that P(H) = 1/4
Politician R will continue to believe (at roughly the same strength) hypothesis A, namely that P(H) = 1/2
Politician S will continue to believe hypothesis B, namely that P(H) = 1/100, though at greatly reduced (but still better-than-even) strength
[For the calculations and relevant simplifying assumptions, please see the appendix.]
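As a sketch of how those calculations can come out, here is one set of numbers consistent with the claims above. The specific prior weights are my assumption, not the appendix's: each Bayesian is taken to put prior weight 0.9 on their favored hypothesis and 0.1 on the rival one.

```python
# Toy Bayesian update for the THTT example.
# Assumed (not stated in the text): each Bayesian assigns prior
# weight 0.9 to their favored hypothesis, 0.1 to the rival one.

def likelihood(p_heads, heads, tails):
    """Probability of the observed sequence given a fixed P(H)."""
    return p_heads ** heads * (1 - p_heads) ** tails

heads, tails = 1, 3  # the data: T H T T

# Hypothesis A: P(H) = 1/2.  Hypothesis B: P(H) = 1/100.
lik_A = likelihood(0.5, heads, tails)    # 0.0625
lik_B = likelihood(0.01, heads, tails)   # ~0.0097

def posterior(prior_favored, lik_favored, lik_rival):
    """Posterior weight on the favored hypothesis, by Bayes' rule."""
    num = prior_favored * lik_favored
    return num / (num + (1 - prior_favored) * lik_rival)

# Politician Q (frequentist): maximum-likelihood estimate from the data.
print(f"Q's estimate of P(H): {heads / (heads + tails):.2f}")  # 0.25

# Politician R: strong prior on A; barely moves.
print(f"R's posterior on A: {posterior(0.9, lik_A, lik_B):.3f}")  # 0.983

# Politician S: strong prior on B; greatly reduced, yet still above 1/2.
print(f"S's posterior on B: {posterior(0.9, lik_B, lik_A):.3f}")  # 0.583
```

With these (assumed) priors, the qualitative picture in the list above falls out directly: R's confidence in A stays high, while S's confidence in B drops sharply but remains above 50%.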
Many simplifying assumptions were made here, but the essential point still stands: given suitably strong priors and suitably ambiguous evidence, rational policy makers can disagree.
What of our frequentist here? In this example, perhaps, he seems better off. However, we should not forget the conceptual problems associated with frequentism, especially the problem of one-time probabilities. For example, consider a situation like "climate change": the prospects for running the relevant "experiment" repeatedly (letting industrial society evolve on earth an infinite number of times?) are nil, yet the need for some kind of conclusion is unavoidable. Returning to the case of WMDs, the situation is similar: the relevant evidence does not allow the probability to be "read off" as cleanly as it can be from successive coin tosses.
Obviously, all relevant positions have been greatly simplified. The essential point is that neither "science" nor "rationality" dictates the correct policy response in the face of uncertainty. Furthermore, failing to acknowledge this point weakens one's position in the ensuing debate: one cannot advocate strategically for one's own position if one cannot model one's opponent as rational.
We turn next to some more realistic policy issues and the specific complications which arise in dealing with the relevant probabilities.