### Bayesian analysis, part III

Let's continue the discussion of Bayes rule as described in Dennis Lindley's book, *Understanding Uncertainty*. As he explains, there is a simple method for updating probabilities which uses the likelihood odds ratio. Remember Bayes rule as applied to the two urns containing red and white balls, where U1 has 2/3 red balls and U2 has 1/3 red balls:

P(U1 | r) = P(r | U1) P(U1) / P(r)

where P(r) is the total or marginal probability of r under all models:

P(r) = P(r | U1) P(U1) + P(r | U2) P(U2)

With more than two models, the term P(r) can become hard to calculate.
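The direct calculation for the two-urn case can be sketched in Python (a small illustration using the numbers from this example, not code from the book):

```python
# Direct application of Bayes rule for the two-urn example.
p_u1, p_u2 = 0.5, 0.5   # uniform prior over the two urns
p_r_u1 = 2 / 3          # P(r | U1): U1 has 2/3 red balls
p_r_u2 = 1 / 3          # P(r | U2): U2 has 1/3 red balls

# Marginal probability of drawing a red ball
p_r = p_r_u1 * p_u1 + p_r_u2 * p_u2

# Posterior probability of U1 given a red draw
p_u1_r = p_r_u1 * p_u1 / p_r
print(p_u1_r)  # 2/3
```

With more models, the sum for `p_r` simply gains one term per model, which is exactly the bookkeeping the odds-ratio approach avoids.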

The odds ratio approach simplifies the analysis by consolidating all alternatives to U1 into a single model. So we write U for the model (formerly U1) and ~U for the negation of U (here ~U is just U2, since that is the only alternative):

P(U | r) = P(r | U) P(U) / P(r)

P(~U | r) = P(r | ~U) P(~U) / P(r)

P(r) = P(r | U) P(U) + P(r | ~U) P(~U)

We next calculate the ratio of these posterior probabilities for the two cases U and its complement ~U (noticing that the P(r) term cancels):

P(U | r) / P(~U | r) = [P(r | U) P(U)] / [P(r | ~U) P(~U)]

We call this the odds of U | r:

*o*(U | r) = [P(r | U) P(U)] / [P(r | ~U) P(~U)]

Rearranging:

*o*(U | r) = [P(r | U) / P(r | ~U)] * [P(U) / P(~U)]
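As a quick numeric check of this decomposition (a sketch using the urn numbers from this example):

```python
# Posterior odds = likelihood ratio * prior odds, for the urn example.
likelihood_ratio = (2 / 3) / (1 / 3)   # P(r | U) / P(r | ~U)
prior_odds = 0.5 / 0.5                 # P(U) / P(~U), uniform prior
posterior_odds = likelihood_ratio * prior_odds
print(posterior_odds)  # 2.0
```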

The first term is the ratio of likelihoods; the second term is the prior odds on U. If we start with a uniform prior:

P(U) = 1/2

P(~U) = 1/2

then the initial *odds on* U are:

*o*(U) = 1

We have these likelihoods:

P(r | U) = 2/3

P(r | U2) = P(r | ~U) = 1/3

We can calculate the likelihood ratio:

P(r | U) / P(r | ~U) = (2/3) / (1/3)

= 2

Now, having observed a red ball, we update as follows:

*o*(U | r) = ( P(r | U) / P(r | ~U) ) * ( P(U) / P(~U) )

= P(r | U) / P(r | ~U) * 1

= 2 * 1

= 2

Upon observing a second red ball, we update again in a simple way:

*o*(U | r, r) = current odds *o*(U | r) * likelihood ratio

= 2 * 2

= 4
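On the odds scale, sequential updating is just repeated multiplication. Here is a minimal sketch (the function name `update_odds` is my own):

```python
def update_odds(odds, likelihood_ratio):
    """One Bayesian update on the odds scale: multiply by the likelihood ratio."""
    return odds * likelihood_ratio

odds = 1.0                     # prior odds o(U), uniform prior
odds = update_odds(odds, 2.0)  # first red ball:  odds -> 2
odds = update_odds(odds, 2.0)  # second red ball: odds -> 4
print(odds)  # 4.0
```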

We convert from odds to probabilities (and back) by remembering that:

*o* = p / (1 - p)

So, after observing a single red ball drawn from the urn, we had

*o* = 2 = p / (1 - p)

2 - 2p = p

2 = 3p

p = P(U | r) = 2/3

And after observing a second red ball drawn from the urn,

*o* = 4 = p / (1 - p)

4 - 4p = p

4 = 5p

p = P(U | r, r) = 4/5
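The conversions between odds and probability can be wrapped in two small helper functions (hypothetical names, a sketch of the arithmetic above):

```python
def prob_to_odds(p):
    """o = p / (1 - p)"""
    return p / (1 - p)

def odds_to_prob(o):
    """The inverse: p = o / (1 + o)"""
    return o / (1 + o)

print(odds_to_prob(2))  # 2/3, the posterior after one red ball
print(odds_to_prob(4))  # 0.8, the posterior after two red balls
```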

These match the results seen with the straightforward application of Bayes rule, but are obtained more easily.

Notice that if we draw a white ball, the likelihood ratio is

P(w | U) / P(w | ~U) = (1/3) / (2/3)

= 1/2

So, if we drew a red and a white ball in sequence, the odds would be:

*o*(U | r, w) = 2 * 1/2 * 1 = 1

And in any sequence of draws, all that really matters is the excess draws of one type of ball, not the order in which they appear.
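One way to check the order-invariance claim: multiply out the likelihood ratios for every ordering of the same draws and confirm they all agree (a small sketch; the names are my own):

```python
from itertools import permutations

LR = {'r': 2.0, 'w': 0.5}   # likelihood ratios for red and white draws

def final_odds(draws, prior_odds=1.0):
    """Posterior odds after a sequence of draws."""
    o = prior_odds
    for d in draws:
        o *= LR[d]
    return o

# Every ordering of two reds and one white gives the same odds:
results = {final_odds(seq) for seq in set(permutations('rrw'))}
print(results)  # {2.0}: only the one-red excess matters
```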