I'm going to continue with the discussion of phylogenetic methods. I want to try to understand MCMC, and I'll post more about the actual application of it later. For today, we have a demonstration using a Python simulation, showing that the samples from a simple MCMC are appropriately distributed.
In the first part of the code we generate a set of 9 models with different "likelihoods": 0.1, 0.2, ..., 0.9. These are assigned names from the letters 'A' through 'I'. For interest, we make the assignments randomly, and then sort the letters by the likelihood of the model each represents.
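The setup might be sketched like this (a reconstruction, not the post's actual code; the seed and variable names are my own):

```python
import random
import string

random.seed(42)  # for reproducibility; the post does not fix a seed

# nine "likelihoods" 0.1 .. 0.9, shuffled and assigned to letters 'A'..'I'
likelihoods = [round(0.1 * i, 1) for i in range(1, 10)]
letters = list(string.ascii_uppercase[:9])  # 'A' .. 'I'
random.shuffle(likelihoods)
models = dict(zip(letters, likelihoods))

# sort the letters by the likelihood each one's model was assigned
ordered = sorted(models, key=models.get)
print(models)
print(ordered)
```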
In the second part, we implement the logic of MCMC. It's a Markov chain, starting at model 'A'. At each step we generate a proposal (here by random choice), and compare the likelihood q of the proposal to the likelihood p of the current model. If q > p, we always move; if q < p, we move only with probability q/p, and otherwise we stay where we are, recording the current model again. This is the Metropolis rule: accept the proposal with probability min(1, q/p).
[ UPDATE: I left something important out of the first version of this simulation. The proposals are generated by random choice, but only among "nearby" models. In this version, a given model proposes only one of its immediate neighbours: e.g. 'C' can return either 'B' or 'D'. It makes no difference to the result, so I've kept the same graphic.]
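A minimal sketch of the chain as described, with the nearest-neighbour proposal (names and details are my own; in particular, the post doesn't say what happens at the ends of the range, so here a proposal that falls off either end just keeps the current model, which keeps the proposal symmetric):

```python
import random

letters = [chr(ord('A') + i) for i in range(9)]          # 'A' .. 'I'
like = {m: 0.1 * (i + 1) for i, m in enumerate(letters)}  # 0.1 .. 0.9

def propose(current):
    """Propose an immediate neighbour, e.g. 'C' -> 'B' or 'D'."""
    j = letters.index(current) + random.choice([-1, 1])
    # off either end: propose the current model (assumption, see above)
    return letters[j] if 0 <= j < len(letters) else current

def step(current):
    proposal = propose(current)
    p, q = like[current], like[proposal]
    if q >= p or random.random() < q / p:
        return proposal   # accept the move
    return current        # reject: record the current model again

random.seed(0)
chain = ['A']
for _ in range(100_000):
    chain.append(step(chain[-1]))
```

After enough steps, each model should appear with frequency proportional to its likelihood (e.g. 'I' about 0.9/4.5 = 20% of the time).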
The Counter class from Python's collections module is used to tally the results, and a histogram is plotted with matplotlib. The models are sorted by their likelihoods to make it easy to see what has happened.
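The tallying step might look like this (the short chain here is toy data purely for illustration; in the simulation it would be the list of sampled model names):

```python
from collections import Counter

chain = ['A', 'B', 'B', 'C', 'C', 'C']  # toy stand-in for the sampled chain
counts = Counter(chain)
total = len(chain)

# print each model's sampling frequency, sorted by model name
for model, n in sorted(counts.items()):
    print(f"{model}: {n / total:.2f}")

# to plot instead: import matplotlib.pyplot as plt and use plt.bar(...)
```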
The models had these likelihoods:
They match the proportions in the histogram. It's that simple.