`primes.py`

I wrote a script the other day (here) to generate primes; now I'm looking at the patterns within each decade: how many decades have primes ending in 1,3,7 or 7,9, and so on? I modified the script to keep track of this information in a global variable L. I still kept pL, because we need the primes in pL to test new candidates. I grabbed the first one million primes and looked at the patterns. The results are amazing, at least I think so:
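The modified script might look something like this. This is my reconstruction, not the original: only the names `pL` and `L` come from the post, and here I've made `L` a `Counter` keyed by the tuple of last digits seen in each decade.

```python
from collections import Counter

def decade_patterns(n_primes):
    """Find primes by trial division against the growing list pL, and
    tally in L the last-digit pattern of each decade of 10 integers.
    (A sketch of the script described above, not the original code.)"""
    pL = [2, 3, 5, 7]          # primes so far; these fill the first decade
    L = Counter()
    L[(2, 3, 5, 7)] = 1        # pattern of the decade 0-9
    current = []               # last digits of primes in the open decade
    n = 11
    while len(pL) < n_primes + 1:   # one extra prime closes the last decade
        if n % 10 == 1 and n > 11:  # entering a new decade: record the old one
            L[tuple(current)] += 1
            current = []
        if all(n % p for p in pL if p * p <= n):
            pL.append(n)
            current.append(n % 10)
        n += 2                      # even candidates can't be prime
    return pL, L
```

For example, `decade_patterns(25)` walks the ten decades below 100 and tallies `(3, 9)` three times (the 20s, 50s, and 80s) and `(1, 7)` twice (the 30s and 60s).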

We look through 1548587 decades to get the first 10^6 primes (actually one more than that, in order to count the pattern in the last decade correctly). Slightly less than half of these contain no primes; of the rest, about 80% have a single prime. (Also of interest, the overall density of primes is reduced by about 1/6 between 10^5 and 10^6 --- not shown.)

For the pairs and triples we observe an amazingly consistent distribution of primes considering only their last digits. Each of the four digits is generally equivalent, *except* that the pairs 1,7 and 3,9 are about 2.6 times more likely than the other pairs. What property of integers can account for the amazing constancies, and also for this systematic difference?

Before getting too excited, I should probably head over to the Prime Pages and check out their lists. But this just seems incredible to me. Something deep is going on here (or not... maybe it's something to do with the way a "decade" is defined. I'll have to think about that).

BTW, this algorithm isn't scaling well: 10^5 is really easy, but 10^6 is agonizingly slow. Use something better for big problems.

[UPDATE: It turns out the patterns depend on the definition of a decade. For example, to have the decades run from 15 to 25, and so on, I changed these four lines:

and now the pattern is different.

I'm not saying I *understand* it, but the fact that the pattern changes argues pretty strongly that it's an artifact.]
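One way to see the artifact directly is to make the block boundary a parameter. This is my reconstruction of the idea (the post's actual four changed lines aren't shown), taking any list of primes and an offset for where the blocks of ten begin:

```python
from collections import Counter

def shifted_patterns(primes, offset):
    """Tally last-digit patterns when the 'decades' are the blocks
    [offset, offset+10), [offset+10, offset+20), ...  With offset=0 this
    reproduces ordinary decades; offset=5 gives 5-14, 15-24, and so on.
    Assumes `primes` is a non-empty sorted list of primes."""
    blocks = {}
    for p in primes:
        if p < offset:
            continue                      # skip primes below the first block
        blocks.setdefault((p - offset) // 10, []).append(p % 10)
    L = Counter()
    for b in range(max(blocks) + 1):
        L[tuple(blocks.get(b, ()))] += 1  # empty blocks tally as ()
    return L
```

With the primes below 100, `offset=0` gives the familiar tallies (the pair `(1, 7)` twice, `(3, 9)` three times), while `offset=5` produces pairs like `(9, 1)` instead; the "pattern" follows the boundary, not the primes.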