I want to spend some time working on concepts in phylogenetics, and I'm going to start with models of sequence evolution. But before I do that, I set up a simulation to visualize the broad outlines of what's going on. So, here is a basic simulation of sequence change. The script sets up a sequence of length 1000 (50% GC), and the sequence then undergoes repeated rounds of mutation at a rate of 1% per round. At each step, we calculate the Hamming distance (the number of changes required to turn the evolved sequence back into the original) and convert it to a fraction of the total sequence length.
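(The full script is at the end of the post; what follows here is just my own minimal sketch of the procedure described above, with made-up names, to make the setup concrete.)

import random

LENGTH = 1000   # sequence length
MU = 0.01       # per-site mutation probability per round
ROUNDS = 100    # rounds of mutagenesis

bases = 'ACGT'
original = [random.choice(bases) for _ in range(LENGTH)]   # ~50% GC on average
seq = original[:]

for n in range(1, ROUNDS + 1):
    for i in range(LENGTH):
        if random.random() < MU:
            # a mutated site becomes one of the other three bases, chosen uniformly
            seq[i] = random.choice(bases.replace(seq[i], ''))
    # Hamming distance as a fraction of the sequence length
    d = sum(a != b for a, b in zip(seq, original)) / LENGTH
    print(n, round(d, 3))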
I plotted the results from 3 independent runs above. (They have been offset slightly for clarity).
The general shape of the curves shows saturation, as expected. Each curve is approximately linear at low levels of mutagenesis: ten cycles of mutagenesis at 1% per cycle give a Hamming distance of ≈ 0.1. But by 50 cycles the distance is only about 0.35, and by 100 cycles it is about 0.55. This is simply because some mutation events hit sites that have already been mutated, so it takes a long time before every site has been hit. Eventually the Hamming distance plateaus at 75% of the sequence length: in this example the model of sequence evolution changes a nucleotide into one of the other three at equal rates, so at equilibrium each site has only a 1-in-4 chance of matching the original base.
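(An aside that is not part of the original script: under this model the expected curve can be written down directly, and it reproduces the numbers above. Each site mutates with probability mu per round and, when it does, becomes one of the other three bases with equal probability, so the chance that a site still matches the original after n rounds is 1/4 + (3/4)(1 - 4*mu/3)^n, and the expected Hamming fraction is the complement of that.)

MU = 0.01  # per-site mutation probability per round

def expected_distance(n, mu=MU):
    """Expected Hamming fraction after n rounds of mutagenesis."""
    return 0.75 * (1 - (1 - 4 * mu / 3) ** n)

for n in (10, 50, 100, 1000):
    print(n, round(expected_distance(n), 3))

# prints: 10 0.094, 50 0.367, 100 0.554, 1000 0.75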
The script can be modified to alter the initial GC content or the mutation rates, although it hasn't been made particularly easy to do this. I did some runs under modified conditions and observed that: (i) a GC content far from 50% drifts back toward 50% quite quickly, and (ii) even massively unbalanced rates (e.g.
rD['T'] = 'ACG' + 'G' * 100
) have modest effects on the nucleotide composition. That conflicts with what I think I remember of the theory, so I'll be interested to return to the issue after some reading.

Here's the code: