*part one of this series here*

The *subjective* view of probability leaves probability entirely in our heads; it is merely a reflection of our uncertainty about the relevant details. An *objective* view of probability places probability "out in the world" somehow. The first such theory we will examine is *frequentism*.

The frequentist believes probability attributions make a hypothetical claim about an infinite number of trials. Consider again our coin toss: for a frequentist, the claim that the probability of heads on a given toss is 1/2 means that *if* the coin were tossed an infinite number of times, *then* 1/2 of those tosses would come out heads. This approach treats probability as a hypothetical property of a physical system, but it becomes extremely problematic when we examine the details.

Suppose the underlying physical mechanism of the toss is indeed fair, *i.e.* the probability "really" is 1/2: what guarantees that exactly half of the trials will come out heads? The relationship between the underlying physical mechanism and the limiting frequency of heads as the number of tosses goes to infinity is theoretically guaranteed by the law of large numbers.
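To make the law of large numbers claim concrete, here is a minimal simulation sketch (not from the original post; a hypothetical `heads_frequency` helper using only Python's standard library). It shows the observed frequency of heads drifting toward 1/2 as the number of tosses grows, without ever being guaranteed to hit exactly 1/2.

```python
import random

def heads_frequency(num_tosses, seed=0):
    """Toss a fair coin num_tosses times; return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# The frequency is rarely *exactly* 1/2, but it settles near 1/2 as trials grow.
for n in (100, 10_000, 1_000_000):
    print(n, heads_frequency(n))
```

Note that even at a million tosses the frequency is only *close* to 1/2; the guarantee is about the limit, not any finite run.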

In order to further clarify this distinction between the frequency and the physical mechanism, consider another classic example, this time from Bernoulli. Suppose one fills an urn with 50 red balls and 50 green balls. Then one uses some procedure (say, shaking the urn and then reaching in with one's eyes closed) to select a ball from the urn at random. After each selection of a ball from the urn (a "trial"), the color of the ball is noted *and it is returned to the urn*. Here there is a physical fact about the ratio of red balls to total balls in the urn (50/100 = 1/2); there is also a physical process for selecting balls at random. What the law of large numbers tells us is that *if the process for selecting balls from the urn is indeed "random," then the ratio of red balls to total balls examined will approach 1/2 as the number of trials grows*.
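Bernoulli's urn can be sketched the same way (a hypothetical illustration; only the 50-red/50-green setup and sampling-with-replacement come from the text). The physical fact is the composition of the urn; the observed ratio of red balls converges toward it over many trials.

```python
import random

def red_ratio(trials, seed=42):
    """Sample with replacement from a 50-red / 50-green urn;
    return the fraction of red balls observed."""
    rng = random.Random(seed)
    urn = ["red"] * 50 + ["green"] * 50   # the physical fact: 50/100 = 1/2
    # rng.choice models the "shake and reach in blindly" procedure;
    # choosing from the full urn each time models replacement.
    reds = sum(rng.choice(urn) == "red" for _ in range(trials))
    return reds / trials

for n in (10, 1_000, 100_000):
    print(n, red_ratio(n))   # the observed ratio approaches 1/2 as trials grow
```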

There are two related remarks to make here. First, the physical systems associated with most classic probability setups will deteriorate over the course of such an extended number of trials. For example, if one rolls a die several hundred thousand times, it begins to turn spherical as the corners chip away from friction. The point is that if the procedure of rolling the die were actually carried out an enormous number of times, the fraction of rolls returning five would only approach 1/6 for a while, until the die had deteriorated enough that the physical properties of the system changed, at which point it is no longer clear that an unambiguous outcome of five would even be possible.

This brings us to our second remark. The assumption behind the law of large numbers and the urn, coin, and die examples is that the underlying physical processes produce stable probabilities. Logically, however, nothing rules out the possibility of a coin which comes up heads 1/2 the time on the first 100 tosses, but 1/4 the time on the next 100 tosses. In fact, the die example shows that there are strong empirical reasons for suspecting that many systems do not in fact exhibit this *long-term stability*.
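The logical possibility raised here is easy to exhibit in simulation (a hypothetical sketch, not anything the post itself provides): a coin whose bias shifts partway through the sequence produces block frequencies that no single limiting value describes.

```python
import random

def unstable_frequencies(block=100, seed=7):
    """A coin that lands heads with probability 1/2 on the first `block`
    tosses, then 1/4 on the next `block`. Return the two block frequencies."""
    rng = random.Random(seed)
    first = sum(rng.random() < 0.5 for _ in range(block)) / block
    second = sum(rng.random() < 0.25 for _ in range(block)) / block
    return first, second

first, second = unstable_frequencies()
print(first, second)  # no single limiting frequency fits both blocks
```

A frequentist analysis that pools all the tosses would report some intermediate frequency that matches neither regime, which is exactly the instability the text worries about.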

So, frequentism succeeds in putting probability "in the world," but at the cost of plausibility. Still, there are many situations for which frequentism is an especially useful view, and it is one of the prime competitors against subjective Bayes in the realm of statistics.

next: *propensity theories*
