It uses the MarkovHelper module from here, modified a bit. Because the likelihood of a long chain is a product of many small probabilities and quickly underflows toward zero, there is a function to convert the probabilities to logs:
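The modified module isn't reproduced here, but a minimal sketch of such a conversion, assuming the probabilities live in plain dicts, looks like this:

```python
import math

def to_log(probs):
    """Convert a dict of probabilities to natural logs.
    Zeros map to -inf so impossible transitions stay impossible."""
    return {k: (math.log(v) if v > 0 else float('-inf'))
            for k, v in probs.items()}

# In log space the products in the Viterbi recursion become sums,
# so long chains no longer underflow:
#   p1 * p2 * ... * pn  ->  log(p1) + log(p2) + ... + log(pn)
```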
There are two points in the code where you can make edits to get more verbose output. Here is an example of output from a model with both transition probabilities set to 0.2. The top line in each group is the true sequence of states, the second is the observed data, and the third is the state sequence recovered by the Viterbi algorithm.
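For reference, a self-contained sketch that produces the same three-line display might look like the following. This is not the MarkovHelper code: the state names and emission probabilities are invented for illustration; only the switch probability of 0.2 comes from the text.

```python
import math
import random

random.seed(42)

# Hypothetical two-state model; emissions are assumed, e.g. a fair
# coin ('F') vs. a biased one ('L').
states = ('F', 'L')
trans = {'F': {'F': 0.8, 'L': 0.2},   # both switch probabilities = 0.2
         'L': {'F': 0.2, 'L': 0.8}}
emit = {'F': {'H': 0.5, 'T': 0.5},
        'L': {'H': 0.7, 'T': 0.3}}

def log(p):
    return math.log(p) if p > 0 else float('-inf')

def simulate(n):
    """Sample a hidden state path and the observations it emits."""
    s = random.choice(states)
    path, obs = [], []
    for _ in range(n):
        path.append(s)
        obs.append('H' if random.random() < emit[s]['H'] else 'T')
        other = 'L' if s == 'F' else 'F'
        if random.random() < trans[s][other]:
            s = other
    return path, obs

def viterbi(obs):
    """Most likely state path, computed in log space (uniform start)."""
    V = [{s: log(0.5) + log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p_: V[-1][p_] + log(trans[p_][s]))
            col[s] = V[-1][prev] + log(trans[prev][s]) + log(emit[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    s = max(states, key=lambda x: V[-1][x])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]

path, obs = simulate(60)
guess = viterbi(obs)
for row in (path, obs, guess):   # states, observations, Viterbi result
    print(''.join(row))
```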
At first, this looks promising. But remember, we tell the model what all the probabilities are, so the fair comparison is against a guesser that knows them too. When both transition probabilities are 0.2, the two states are equally likely, and random guessing gives 50% correct. For transition probabilities of 0.2 and 0.1, the chain spends two-thirds of its time in one state, so always guessing that state scores 0.667; the HMM scores 0.72, about 6 points higher. With transition probabilities of 0.2 and 0.01, always guessing the dominant state scores 0.952, and the model also scores 0.952 correct. At least that is no worse than guessing!
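Those baselines follow from the chain's stationary distribution: with switch probabilities p and q, the chain spends fractions q/(p+q) and p/(p+q) of its time in the two states. A quick check (a sketch; the function name is mine):

```python
def baseline(p, q):
    """Accuracy of always guessing the more common of two states,
    for switch probabilities p (out of state 1) and q (out of state 2)."""
    pi1 = q / (p + q)          # stationary probability of state 1
    return max(pi1, 1.0 - pi1)

print(baseline(0.2, 0.2))    # -> 0.5     (symmetric case: coin-flip guessing)
print(baseline(0.2, 0.1))    # -> ~0.667  (the HMM's 0.72 is ~6 points better)
print(baseline(0.2, 0.01))   # -> ~0.952  (exactly the HMM's score)
```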