Friday, July 16, 2021

Pentagons, again

It's been some years, but I took another look at pentagons.  It is fun to see the connection between φ, the "golden ratio", and the internal triangles in a pentagon.

First of all, the figure has five-fold rotational symmetry.

The marked angles in the panel on the left are equal by the isosceles triangle theorem, so the angles on either side of the central angle at each vertex are equal.  

Then we do some addition:  3 black + 2 magenta = 180 = 4 magenta + 1 black.  We conclude that black = magenta.  So the vertex angle is divided into 3 equal parts.



The small triangle at the top is isosceles, because the triangles on either side are congruent to each other (by rotational symmetry, or ASA), so both sides are equal. Now, we see that the horizontal chord is parallel to the bottom side, by alternate interior angles.  We have a bunch of rhombi!

There are two types of similar triangles:  one tall and skinny, the other short and squat.

In the right-hand panel, a triangle of the second type is shown in light red.  It has equal sides of length x and a base of length x + 1.  The red one has sides of length 1 and a base of length x.  We form the equal ratios of base to side:  (x + 1)/x = x/1, so x^2 = x + 1.

One can also use the tall skinny triangles.

This allows the base to have length 1.  Then, the sides of the tall skinny triangle in the middle are x.  The small, tall skinny triangle at the top has sides x - 1 and base x - 2(x - 1) = 2 - x, so the ratio of similar triangles gives x = (x - 1)/(2 - x), which simplifies to the same equation as before.



The apex angle of a tall skinny triangle is equal to 1/3 of the interior angle of the pentagon, which is 108/3 = 36.

We will find an expression for the sine of 36°, and this will give us another way to calculate the side length for the pentagon in terms of the chords.

We start by working with 18°, because 90/18 = 5 exactly.  The trick is that (using θ for 18°):  5θ = 90° so 2θ = 90 - 3θ.  Taking the sine of both sides and then converting to the complementary angle on the right, we have sin 2θ = cos 3θ.
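Filling in the algebra (this is presumably what the figure below shows; for the record, here is one way through it):

sin 2θ = cos 3θ
2 sin θ cos θ = 4 cos^3 θ - 3 cos θ

Divide both sides by cos θ (which is not zero):

2 sin θ = 4 cos^2 θ - 3 = 4(1 - sin^2 θ) - 3 = 1 - 4 sin^2 θ

so 4 sin^2 θ + 2 sin θ - 1 = 0, and the quadratic formula (taking the positive root) gives

sin 18° = (sqrt(5) - 1)/4 = (φ - 1)/2

Then cos 36° = 1 - 2 sin^2 18° = (1 + sqrt(5))/4 = φ/2, and since φ^2 = φ + 1,

sin^2 36° = 1 - φ^2/4 = (3 - φ)/4,  so  sin 36° = sqrt(3 - φ)/2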

One can also use de Moivre's formula and take the real part.

[Since φ - 1 = 1/φ, the above result can be written more simply, but it makes the calculation below more complicated].






There is something else.  If one draws the central angles of the pentagon, the isosceles triangle formed by a central angle of 72° and one side of the pentagon as its base can be split into two right triangles, each with an angle of 36° at the center.

So one-half the side length equals r sin 36°, using the sine we found above.  The circle containing the five vertices of a pentagon with side length 1 therefore has a radius of r = 1/(2 sin 36°) = 1/sqrt(3 - φ).
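A quick numerical check of that last claim (a minimal sketch in Python):

from math import sin, radians, sqrt

phi = (1 + sqrt(5)) / 2
r = 1 / (2 * sin(radians(36)))   # circumradius of a pentagon with side 1
print(r, 1 / sqrt(3 - phi))      # both print 0.8506508...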

The pdf of the write-up is here (Latex one level down).


Wednesday, May 26, 2021

Origins of SARS-CoV-2

Here are some links to helpful articles about the "lab-leak" hypothesis:

Nicholas Wade (link), Yuri Deigin (link), Rossana Segreto and Yuri Deigin (link)

Andersen et al. (link), Yangyang Cheng (link), Glenn Kessler (link), Matt Yglesias (link)

Daniel Engber (link)

My favored hypothesis has been that the Covid-19 pandemic began in the same way the original SARS pandemic did, in a "spillover" of SARS-CoV-2 from a natural reservoir of the virus in bats, by way of an intermediate host.

Along with much of the scientific and political world, this week I've been reading, and updating my estimate of the likelihood of the alternative hypothesis:  that the pandemic began when SARS-CoV-2 escaped, by accident, from a lab at the Wuhan Institute of Virology (WIV).

The problem with the spillover hypothesis is that, unlike with the original SARS, there is very little supporting evidence.  Rather, what we know about SARS-CoV-2 indicates that the virus appeared in humans in a single event. There is very little diversity in sequences of early virus samples.  It was not strongly associated with a "wet market" and there is no known intermediate host.

When the virus appeared in humans, it was already well-adapted to growth in them, with an S protein that fits its human receptor (the ACE2 receptor protein) very well, as well as a furin-cleavage site that is known to facilitate viral entry.  The furin site is essential for infectivity in humans.

With respect to the escape hypothesis, it did not help that Donald Trump supported this idea, suggesting the sinister possibility that release not only explains the pandemic, but that it was deliberate.  Since Trump has lied repeatedly about nearly everything, his statements actually decreased (at least for me) the likelihood that this hypothesis is correct.  Add to that the fact that so many right-wing politicians are looking to pick a fight with China.

I do not put much weight on vague reports that WIV staff were hospitalized with flu-like illness in the fall of 2019, nor on other reports that samples from them were tested for SARS-CoV-2 and came back negative.  The former reports come from "intelligence sources" who were motivated to support Trump's contention, and the latter from official Chinese government sources equally motivated to keep things under wraps.  They are possibly true, but it is impossible to evaluate their accuracy now.  

I also do not think much of the idea that, as David Baltimore reportedly said, the furin site is a "smoking gun".  Acquisition of such a sequence could easily have happened naturally, by recombination.  The site's codon usage does not provide convincing evidence that it was engineered.  Indeed, similar sequences that might have been recombined to give the furin site would perhaps also have been out-of-frame originally, and recent acquisition hasn't given enough time for codon optimization to the human host.  Also, if you were engineering a site, you would not likely have chosen this sub-optimal sequence over known examples that work better.

Modern methods do not leave signs in the sequence (such as convenient restriction sites for assembling a whole genome from DNA clones of parts of it) that would be unmistakable "signs of engineering" for older techniques.  So the frequent claim that the sequence doesn't show such signs isn't dispositive either.

What I find most troubling is the reporting that WIV (as well as labs elsewhere) were doing manipulations with parts of the S gene (the RBD).  They built hybrid viruses with S genes whose product was known to bind well to human cells.  These were done not just with pseudo-typed lentiviruses (which would be safe), but with bat coronavirus backbones (which is definitely unsafe).  Incredibly, some labs even added furin cleavage sites.  Multiple labs did such experiments, in China, in the US and Japan, and in other places.

Adaptation experiments were also carried out, by serial culture in human cells and in "humanized" mice bearing Hu ACE2 receptor.  Richard Ebright is correct to insist that all these experiments were exceedingly irresponsible.

A second problematic aspect is lab safety.  Shi Zheng-Li, the PI on these projects at WIV, stated that their work was carried out at BSL-2 and BSL-3, and Wade seizes on this to claim repeatedly that dangerous experiments were carried out at BSL-2.  If this is true, then that would also be very very bad, and even in the absence of any other confirming evidence, would weight the scale heavily in favor of the escape hypothesis.

A third problematic feature is the publication of a viral sequence closely related to SARS-CoV-2 (the one most closely related) just after the beginning of the pandemic, by Shi and collaborators.  This sample was clearly from around 2013.  It is reported that the sequence file naming scheme indicates it was sequenced in 2017 and 2018.  If that is true, their attempt to hide this and pretend that it was only sequenced recently is also troubling.

Even if the relationship between the US and China were less antagonistic, I do not imagine that the Chinese government would ever allow investigators to access the original notebooks and sequence databases of the WIV investigators.  So at the end of the day, we shall probably never know the truth.  All we are left with is likelihoods that are heavily weighted by our "priors" about the two hypotheses.  

The real bottom line here is that even if the pandemic started by spillover, it certainly could have started by lab escape.  We should act on that, to ensure it doesn't happen again.

Update:  

Just to be clear though, the post above is my analysis of the likelihoods for two different hypotheses of what might have happened.  

Florian Krammer advises patience.  It took more than one year to find the relevant evidence for SARS.  For some other viruses, we have still not found the source of the spillover.  I think his priors for lab escape are too low, but still, this thing is unresolved.  It might change in the future if evidence supporting spillover becomes available.

However, if the leak theory is what actually happened, the evidence is going to be buried very deep.  Plus, the Chinese will never allow such an investigation.  All that pressing for a scientific investigation will do is lead to China-bashing, which is not helpful.  And the idea that the American intelligence community will sort this out is laughable.

What matters most is for us to accept that it could have been a leak, and change our attitude about whether research of that type should continue.

Saturday, May 8, 2021

Sums from a uniform random distribution

Recently, I came across some pages about a problem that I simulated in Python years ago, but never solved analytically, although a reader left a hint in the Comments.

Here we start by taking a look at what this author calls "random number blackjack" or the problem of the sum of uniformly distributed random numbers.

For the rest of the post, all the random numbers that we will talk about are given by a random number generator which draws from uniformly distributed real numbers in the range [0,1), i.e. greater than or equal to zero, and less than one.  

We also consider the sum of one or more independent values from such a distribution, which forms a different distribution.

the game

On each turn of this unusual game, you draw a random number.  You receive $100 for each turn you play, including the last.  Suppose you drew 0.45, 0.35, and 0.25; on the third turn you would go bust, since the sum of the numbers drawn would be 1.05 > 1.0.  The payout would be $300.

And the question is, what are your expected winnings if you play the game many times?  Is it worth it to play (accepting some risk of bad luck) if the initial buy-in requires $250?

initial thoughts

First of all, since the generator always gives a number less than 1.0, you always play at least two turns.  The expected value (mean) of the first number is 0.5.  For a continuous distribution the mean has a technical definition (an integral), but for a symmetric distribution it lies on the axis of symmetry, which is obviously 0.5 here.

The expected value of the sum of several random numbers is the sum of their expectations.

So, for example, the expected value of the sum of three draws is 3(0.5) = 1.5.  Most of the time we will quit after two or three turns, but there will occasionally be an extended run of smaller numbers and a corresponding increase in the number of turns.

[spoiler alert, you may want to look at random number blackjack to try to work out the answer]

What famous number do we know that lies between two and three?  This is so trivial to simulate in Python I won't even post an example.
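For anyone who wants it anyway, a minimal sketch:

import random

def play_one_game():
    # draw until the running total reaches 1.0, counting the turns
    total, turns = 0.0, 0
    while total < 1.0:
        total += random.random()
        turns += 1
    return turns

N = 10**6
print(sum(play_one_game() for _ in range(N)) / N)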

On one run I got 2.718531.

So it looks like the result is equal to e.  (The internet says that adding more rounds doesn't help the accuracy because of limitations in the random number generator).

I came across this problem in a slightly different form in the terrific introductory calculus book, Calculus Made Easy, which was originally written by Silvanus P. Thompson (and is available as a Project Gutenberg download).  

It was added by Martin Gardner when he edited the classic work (1998 ed., p. 153) and is simply a note about how e shows up everywhere.

But the problem is at least as old as Feller's classic text Probability (which I don't have, unfortunately).

related problem

Tim Black solves a related problem (here).  

Recall that for a standard (fair) die or dice, the expected value of a throw is the sum of each value times the probability that it will occur.  For a single die, the average is (1 + 2 + ... + 6)(1/6) = 21/6 = 3.5.  

For two dice, we have

[1(2) + 2(3) + ... + 6(7) + ... + 2(11) + 1(12)] / 36 = 7

The distribution is no longer uniform, but it is still symmetric around the value of 7, which is the mean.  In the same way, the expected value of the sum of two draws from the uniform distribution is the sum of the individual expected values, but the resulting distribution is no longer uniform. 

Suppose we know a probability distribution for the sum of n random numbers, for some value of n, and then calculate the probability that the sum is greater than or equal to 1.  We can then obtain the expected value over a number of trials as that value n times the probability we calculated.

The probability distribution for the sum of two random numbers from the generator has a mean of 1.0, so the probability of exceeding 1.0 is 0.5.  That event has a weight of 2, so the contribution to the total expected value for a number of trials is 2(0.5) = 1.  So, in the same way as we did for the dice, we have that P(2) = 0.5.  

We also have that P(1) = 1.  That's another 1 to add to the expected value overall.

So now, what is the probability distribution for the sum of three random numbers?  That gets a little trickier.  The difficulty is that the probability distribution changes as n changes.  Eventually, it becomes normal, but how different is it for small n like 3, 4, or 5?

Here is where our analyst has a great idea.

Imagine that we change the game slightly.  We still have $100 as the payout at each stage.

From a stream of random numbers, as the numbers are drawn we write into another stream the sum at each stage.  So in the example above we would get 0, 0.45, 0.80,  and then at the third draw the sum is 1.05.  Rather than write the last value, subtract 1 first, then write that down and keep going.  

Notice that over this stretch we have a valid game, a sequence of increasing values followed by one that must be smaller than the last.

The values are

0 0.45 0.80 0.05

The values are in ascending order until the last one, which might be anything smaller than 0.80.  This must be true for a single round from the game according to the rules we have set up.

Since there are n! ways of arranging n values, and only one of those arrangements has the numbers in strictly ascending order, the probability of the event (for a random uniform distribution) is 1/n!.  In other words, starting at the beginning of a stream of random numbers

the probability of obtaining a result of 1 is 1.

the probability of obtaining a result of 2 is 1/2!.

the probability of obtaining a result of 3 is 1/3!.

Multiplying the value of each result by its probability we end up with 1 + 1 + 1/2! + 1/3! + ...  The expected value of a series of games is the sum of all the possibilities.  

E = sum 1/n!

This is just the infinite series for e.
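A quick numerical check of the 1/n! claim (a minimal sketch; it just measures how often n draws come out in strictly ascending order):

import random
from math import factorial

def ascending(n):
    vals = [random.random() for _ in range(n)]
    return all(a < b for a, b in zip(vals, vals[1:]))

trials = 10**5
for n in (2, 3, 4, 5):
    p = sum(ascending(n) for _ in range(trials)) / trials
    print(n, round(p, 4), round(1 / factorial(n), 4))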

simulations

I wrote two simulations to show results relevant to this problem.  The first one shows the distribution of sums of n = 1, 2, 3 or 4 random numbers.  As you can see from the figure



even 3 at a time, the sums look pretty close to a normal distribution.  The Central Limit Theorem says that they will tend to normal, and there is a bunch of theory that I don't understand that says if the draws are from a uniform distribution then the convergence is very rapid.

I got curious about this alternate game, so I wrote a simulation which shows that the running sum of random numbers, when computed as above by discarding the whole-number part, appears to still be uniformly random.  (gist here).  The original data and the summed series are plotted in the same histogram with transparency 0.5.  The new data is random uniform, or close to it.


I don't know what the theoretical explanation for this is.  However, if it's true, then rather than do the sums, we can just draw from the random uniform distribution and tally up the runs where all the values are increasing, until the last one.  If we do the bookkeeping correctly, we get e as the result.

That means the original problem has the same answer as the alternative one.

serious analysis

I have reworked what is on the Mathworld page as follows:



That's where I am so far.  There's plenty more to investigate.  

The sum of two random numbers from a uniform distribution has a distribution that is given by convolution of the individual distributions.  But then each distribution for n > 2 is formed by another convolution.  Ultimately, the distributions tend to the normal.  

I don't see how you get to something as simple as 1 - 1/n! from that, although Tim Black gave us a different path above, which is why I wrote this post.

[Update:  I've been dense.  The "different path" is in fact the means by which the integral is evaluated.  It is not done by writing some complex expression and then seeking the antiderivative and evaluating it.  Instead, we know that the value for the cumulative distribution function at the upper bound must be 1, and at the lower bound it must be 1/n!. ]

There is a suggestion that this sort of thing is done more easily with generating or characteristic functions.  

Probably the first, simple thing would be to run the simulation using random numbers and not bother with the sum part, as we also looked at here.  [Update:  the result is as I suspected.  See gist.  If we simply find the length of runs in a stream of random numbers from a uniform distribution, where they are in increasing order, and then find the mean of those lengths, the result is e.]
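The bookkeeping matters here:  each run is counted through the draw that breaks the ascending pattern, and the next run starts fresh with the following draw.  A minimal sketch of that counting (the author's gist may differ in its details):

import random

def mean_run_length(num_runs=10**6):
    total = 0
    for _ in range(num_runs):
        prev, length = 0.0, 0
        while True:
            x = random.random()
            length += 1
            if x < prev:      # this draw breaks the run, but it still counts
                break
            prev = x
        total += length
    return total / num_runs

print(mean_run_length())      # ~2.718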


Wednesday, April 21, 2021

Pythagorean triples

A Pythagorean triple is a set of integers a, b, and c such that

a^2 + b^2 = c^2

We want primitive triples, those where a, b and c do not have a common factor.

Euclid presents a formula for that:  take any two integers n > m > 0, then

b = 2mn
a = n^2 - m^2
c = n^2 + m^2

One more restriction:  m and n must not have any common factor (they should be co-prime), and they must not both be odd; otherwise the triple will not be primitive.

It is easy to show that when a, b, and c are found in this way, they satisfy the Pythagorean equation.
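Indeed, it is one line of algebra:

a^2 + b^2 = (n^2 - m^2)^2 + (2mn)^2 = n^4 - 2m^2n^2 + m^4 + 4m^2n^2 = n^4 + 2m^2n^2 + m^4 = (n^2 + m^2)^2 = c^2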

The trick is to see where this comes from, and more importantly, to show that it describes all such triples.

Derivation

I found a very nice derivation in Maor (The Pythagorean theorem).  We start by investigating the properties of a, b and c.  

For example a and b cannot both be even, for in that case, c will also be even, and then there is a common factor.

Recall that any even number can be written as 2k, and any odd number as 2k + 1, for a non-negative integer k.  So it is easy to see that if n is even, so is n^2, and if n is odd then so is n^2.  Therefore an even perfect square implies an even integer square root, and an odd square implies an odd root.

In the case where a and b are both odd, c must be even, since odd plus odd equals even.  To see a problem with this re-write the equation 

(2i + 1)^2 + (2j + 1)^2 = (2k)^2

The left-hand side is not a multiple of 4, but the right-hand side is.  This is impossible.  Hence one of a or b must be even and the other odd.  Let a be odd.  Then b can be written as b = 2t for some integer t.  We have

(2t)^2 = c^2 - a^2 = (c + a)(c - a)

Every a, b and c must satisfy this equation.  Furthermore, we see that there is a factor of 4 on the left-hand side, hence we can get a divisor of 2 for each factor on the right:

t^2 = (c + a)/2 . (c - a)/2

a and c are both odd:

a = 2i + 1
c = 2k + 1

so their sum and difference are even, since

2k + 1 + 2i + 1 = 2(k + i) + 2
2k + 1 - 2i - 1 = 2(k - i)

After canceling the factor of 2 we get

(c + a)/2 = k + i + 1
(c - a)/2 = k - i

Thus both of these factors on the right-hand side are integers, while the left-hand side (t^2) is a perfect square.

Key step

Here's the last step.  We claim that these two terms do not have a common factor.  

Whenever x and y have a common factor f, so do their sum and difference, since x + y = fp + fq = f(p + q), and likewise x - y = f(p - q).  That is the direction we need here.  

But the sum and difference for these terms are:

(c + a)/2 + (c - a)/2 = c
(c + a)/2 - (c - a)/2 = a

On the supposition that (c+a)/2 and (c-a)/2 did have a common factor, then they would share that common factor with both c and a.   Since we know that a and c (at least the particular ones we're interested in) do not have a common factor, neither do these two terms.

So the two terms have no common factor and yet multiply together to give a perfect square.

Because the two terms are coprime, every prime in the factorization of t^2 appears entirely in one term or the other, and it appears there with its full, even exponent.  So each of the two terms is itself a perfect square (one of them may be 1, which is 1^2).  Call them n^2 and m^2 (m and n need not be prime).  Write:

n^2 = (c + a)/2
m^2 = (c - a)/2

n^2 + m^2 = c
n^2 - m^2 = a

And

m^2n^2 = t^2
mn = t

Since b^2 = 4t^2, b = 2t = 2mn.

These properties of m and n follow from the standard rules about factors, applied to a, b and c.  Therefore, any primitive triple a, b, and c can be written in terms of a pair of integers m and n by this method.

I checked this by using a Python script to search the entire space of squares below an arbitrary limit, for those which sum to give another square, keeping only those triples of squares with no common factors.

I tested each triple by taking the even member, dividing by 2, and then finding its factors.  This gives candidate pairs m and n with 2mn = b.  Then, just check for whether n^2 - m^2 = a.

The gist is here.
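A minimal sketch along those lines (the gist linked above is the author's actual script; this version is mine and differs in the details):

from math import gcd, isqrt

limit = 100
for c in range(5, limit):
    for a in range(3, c, 2):                 # the odd member
        b2 = c * c - a * a
        b = isqrt(b2)
        if b * b != b2:
            continue                         # not a perfect square
        if gcd(a, gcd(b, c)) != 1:
            continue                         # not primitive
        t = b // 2                           # recover m and n from b = 2mn
        for m in range(1, isqrt(t) + 1):
            if t % m == 0:
                n = t // m
                if n * n - m * m == a:
                    print(f"a={a} b={b} c={c}  from  m={m} n={n}")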

It is interesting to see which patterns among the triples come from which choices of m and n.


Sunday, April 18, 2021

Pi again

 I was working on a post about Archimedes calculation for pi, and I saw a reference that said Ptolemy used 377/120 as a rational approximation to the value.  

And then I also read that 355/113 is an even better approximation, and it is credited to the Chinese mathematician Zu Chongzhi (5th century CE).

There is a whole article in wikipedia about this subject.  Recall that Archimedes established upper and lower bounds on pi of 22/7 (3.14285714) and 3-10/71 (3.14084507)

In decimal form, Ptolemy's value is 

377/120 = 3.14166...

This is correct to the third digit after the decimal point.  The error is 7401 parts in a hundred million.  And really, it is accurate enough for any practical work.

I found it curious that the other value has a smaller denominator, but was not found by Ptolemy, and yet it is better, much better.

355/113 = 3.14159292

This is correct to the sixth digit after the decimal point.  The error is 27 parts in a hundred million.  That's about 275 times better than Ptolemy's value.

There is some text from Pi: A Source Book, by Berggren et al online.  In a chapter by Tam Lay-Young and And Tian-Se, I found a calculation ascribed to Liu Hui (3rd century CE).

The calculation does not seem to be any kind of a technical breakthrough, it is just a basic usage of the Pythagorean theorem.  (However, there are some values given that I don't see where they came from, so maybe I'm missing something).

Start with a circle of radius 10, and inscribe a hexagon.  One of the six equilateral triangles is shown.

DC = 5.17638, which is the side of the inscribed dodecagon.

Repeat until you're satisfied.  Keep track of n, the total number of sides.

At the end, compute the area of the polygon with twice as many sides as 1/2 times the radius times n times the chord (the 2n-gon decomposes into n kites whose diagonals are the radius and the chord).  Since r = 10, the area of the circle is pi times 100.

The estimate based on the dodecagon's chord (really the area of the 24-gon) is 6 x 10 x 5.17638 divided by 100 = 3.10582854.

In the end, they obtain 314-64/625 and 314-169/625 as bounds on the area of the circle, that is, on 100 pi (it's not clear to me how the upper bound was obtained).

In decimal, this means that 3.141024 < pi < 3.142704, which is slightly better than Archimedes.  

Apparently, Zu Chongzhi (5th century CE) went farther, far enough to recognize that 355/113 matches an improved estimate very well.  

The wikipedia article on Zu Chongzhi claims that he is "most notable for calculating pi as between 3.1415926 and 3.1415927".  It claims that his works show he calculated pi to 6 digits using a "12,288-gon".  

That's a hexagon doubled 11 times (6 x 2^11 = 12,288).  Using Python the other day, we already obtained the sixth digit (after the decimal point) on round 9 of hexagon doubling. 

The answer as to how Ptolemy missed it seems to be that he didn't know the true value of pi well enough to believe that 355/113 was any better than 377/120.  The other possibility is that he was led to his value by some logic, and didn't check nearby values systematically.

I wrote a Python program to find rational approximations to (irrational) numbers.  The gist is here.
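A minimal sketch of the idea (the gist is the author's actual program; this differs in the details):

from math import pi

best = 1.0
for q in range(1, 20000):
    p = round(pi * q)                  # best numerator for this denominator
    err = abs(pi - p / q)
    if err < best:                     # report only when the record improves
        best = err
        print(f"{p}/{q} = {p/q:.8f}  error {err:.2e}")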

355/113 is a spectacular success.  

No other value comes close for a long time.  Aside from 355/113, every other fraction that becomes a better approximation is a very gradual improvement on what came before.  The first fraction to be a better approximation for pi than 355/113 is 52163/16604.

A Dedekind cut has to land somewhere.  I suppose 355/113 just lands particularly close to pi by accident.

Here is a plot of the errors.  The x-axis is the denominator.  The log of the absolute value of the difference from pi is plotted for all fractions that are closest to pi for a given denominator and also in lowest terms.  (gist).  You can see how special 355/113 is, and it stays that way (not shown).


I've written up the math for the area and perimeter methods for estimating pi and compared it to Archimedes (here).

Saturday, April 17, 2021

Archimedes and pi

In this post we take a look at Archimedes' approximation of the value of pi.  I'll try to make things as simple as I possibly can, but no simpler.

Here is the fundamental idea.  We will approximate the distance around a circle, called its circumference, by the distance around a regular polygon that just fits inside the circle, called its perimeter.  The points or vertices of the polygon will lie on the circle.

                                                 

It doesn't actually matter what kind of a polygon it is (i.e. how many sides there are), but it has to be regular, with sides all of the same length.  

For an inscribed polygon, the perimeter will be smaller than the circumference of the circle.

But the trick is, there is a way to compute the new perimeter for a polygon with twice the number of sides, 2n, given the original perimeter for n sides.  

Let's look at what that means physically.  Actually, I'm going to show below just a small arc of the circle.  One side of our initial polygon will be colored red, and the chord it makes is the red line in the figure below.  The white area shows that it doesn't track very closely with the circle, and its length will be smaller than the length of this arc of the circle.


But then, when we double the number of sides, we get a polygon of 2n sides, and its perimeter follows the two blue lines.  This is clearly more like the circumference; there is less white space.  One side has become two.

Double the number of sides again, to 4n.  The result is the magenta perimeter.  I hope you can see that as the number of sides gets larger, the perimeter of the polygon becomes a better and better approximation to the circumference of the circle.

This method is great, if you can find a way to express the new perimeter in terms of the old one each time. This is called the method of exhaustion and it was probably invented by a Greek mathematician named Eudoxus who lived before Archimedes.

To make things easier, we will only compute the lower limit.  That is, we will use inscribed polygons and compute their perimeters as we double the number of sides again and again.  

Python will do the actual computation.  Archimedes has this estimate in the end as 3-10/71 or about 3.140845.   Let's see how we compare.

Remember that pi is the ratio of the circumference of a circle, C, to its diameter.  If we use a circle whose diameter is equal to 1, then the circumference C will be equal to pi.  

The procedure we follow here is to inscribe a hexagon into a circle of diameter 1, and then calculate the length of the perimeter of the hexagon.  We do this because a hexagon's perimeter is particularly easy to find.

We will then derive a simple method to calculate the length of the perimeter after the number of sides is doubled (to give a 12-sided polygon).  This method will be applied repeatedly.  Archimedes reached 96 sides.  That is 4 doublings (6 - 12 - 24 - 48 - 96).

What he achieved is not just an estimate for the value of pi, it is an algorithm to compute the value of pi to any desired accuracy.

Begin with the hexagon.  A hexagon can be viewed as a group of six equilateral triangles.  Since the diameter of the circle will be 1, we choose the length of each triangle's side as 1/2.  This matches the diameter, made from two sides, and it results in a total perimeter of 6 times 1/2 or just 3.  

3 is our first lower bound for pi.

I introduced the side length of 1/2 above in order to see, even before we really get started, what the perimeter of the hexagon is and give a first estimate for the value of pi.  I must ask you to set that value aside for a minute.

We will need two ratios of sides in a right triangle, and it will make the math slightly easier if we scale the next equilateral triangle up to have a side length of 2 (we'll scale it back later, all we need are the ratios).  

Obtain a right triangle by cutting an equilateral triangle of side length 2 in half, so the base of the halved triangle is 1.  Pythagoras gives us the other side as the square root of 3, since 1^2 + 3 = 2^2.

Since the original triangle was equilateral, with angles of 60 degrees each, the halved triangle has one angle of 30 degrees.  Archimedes would call that measure one-third of a right angle, and the essential ratios for him would be 2 to 1 (hypotenuse to short side), and the square root of 3 to 1.

We have the angle that we're going to start with, 30 degrees, and we have two essential ratios.

The next graphic is a proof without words of Thales' famous circle theorem:  any angle inscribed in a semicircle is a right angle.  Since PQ is a diameter, the angle PRQ is a right angle.  

Proof:  the two smaller triangles formed by the radius OR are both isosceles, so by Euclid I.5 they have equal base angles.  Since the total of all the angles in a triangle is equal to two right angles and angle PRQ is one-half that, the result follows.



We're going to choose the angle P to have the particular value of 30 degrees.  Then the complementary angle Q is 60 degrees, and RQ can be viewed as one side of an inscribed hexagon.  (We won't prove it, but the peripheral angle P is one-half the measure of the central angle ROQ which cuts off the same arc of the circle).  That makes ROQ equilateral.

The ratios that we had above will apply, because P is a 30 degree angle in a right triangle.  So the ratio of PQ to RQ is 2, and the ratio of PR to RQ is the square root of 3.

We said that we will inscribe a hexagon into the circle.  I don't actually have a picture of that, but I do have a picture of an octagon, so I'll show that, and ask you to use your imagination.

Let us just focus on one of the six sides.  Suppose ROQ is an equilateral triangle, as we had before, so that RQ is one of the six sides of an inscribed hexagon.  We approximated the value of pi by finding the length of RQ (in diameter units) and then multiplying by n = 6.

Our procedure is going to be as follows:  at each step we will halve the angle at P, then compute the new length SQ.  After one cycle, SQ is one side of a 12-sided polygon, so we multiply by 12 to get an approximation for pi.

To carry out this program, we need one more preliminary result.  It is called the angle bisector theorem.  This theorem provides a way to start with the special ratios for any angle, and find the same ratios for one-half that angle.

The line segment b bisects the angle at the left, so the two half-angles labeled theta are equal.  In the upper triangle, we draw the altitude to side c and label it d.  We can do this because the larger of the two triangles formed by the altitude is congruent to the triangle in red with sides a,b,d.  

Next, the small triangle with sides e and d at the top right is similar to the whole starting triangle with sides a,f,c (both are right triangles with one smaller angle gamma).  Similar triangles gives us this ratio:

e/d = c/a

Now add 1 to both sides

e/d + d/d = c/a + a/a

f/d = c/a + a/a     (since e + d = f)

Cross-multiplying, a f = d (c + a), and dividing by d f:

a/d = c/f + a/f

On the right-hand side we have what are called in trigonometry the cosecant and cotangent of the angle 2 theta.  For the case of 2 theta equal to 30 degrees, these are the ratios from above:  c/f = 2 and a/f = square root of 3.  The left-hand side is the cotangent of the half-angle, theta, which is obtained by simple addition.  So the cotangent of 15 degrees is 2 + square root of 3.

Let's just check:
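In Python, for instance (the original check here may have been done differently):

from math import tan, radians, sqrt

print(1 / tan(radians(15)))   # 3.7320508...
print(2 + sqrt(3))            # 3.7320508...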


To get the other ratio for the half-angle (b/d), recall that our new triangle has sides a,b,d, and we know the ratio a/d.  But we also know from the Pythagorean theorem that

a^2 + d^2 = b^2 

so

b/d = sqrt(1 + a^2/d^2)

And that's it!  

Going back to this figure

We have that PR/RQ = square root of 3 and PQ/RQ = 2.  We compute the ratios for the half-angle, PS/SQ and PQ/SQ using the method laid out above. 

To get an estimate for pi, invert the last ratio to get SQ/PQ, recall that PQ is equal to 1, and multiply by the number of sides.  We've doubled the number of sides so n = 12. 

The essential code  is:

I set it up as a generator, which is initialized with the values of sine and cosine for 30 degrees.  A gist for the whole program is here.
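The gist linked above is the author's actual code; here is a minimal sketch of a generator built on the half-angle step derived above, tracking cot and csc of the current angle for a circle of diameter 1:

from math import sqrt

def archimedes(cos_t=sqrt(3)/2, sin_t=0.5, n=6):
    # start at 30 degrees with a hexagon; each step halves the angle and doubles n
    cot, csc = cos_t / sin_t, 1.0 / sin_t
    while True:
        yield n / csc                    # n sides, each of length sin(angle) = 1/csc
        cot = cot + csc                  # cot(theta/2) = cot(theta) + csc(theta)
        csc = sqrt(1.0 + cot * cot)      # csc^2 = 1 + cot^2
        n *= 2

g = archimedes()
for _ in range(5):
    print(next(g))                       # 3.0, 3.1058..., ..., 3.14103...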

Here's a screenshot of the results:


The fifth entry gives the result from four doublings as 3.141032.  Archimedes has this estimate in the end as 3-10/71 or about 3.140845.  His lower bound is smaller than ours, because we've used a powerful computer to do very accurate calculations.

One interesting aspect of the whole story is to think about how he may have obtained a rational approximation to the square root of three (all of his calculations were with rational numbers).  That's for another day.

Many thanks to the author of this page and to Neil Carothers.  (His pages have since been disappeared by his university, which hosted them; you may be able to find them using the Wayback Machine.)  The post you're reading does things the way Archimedes did them, but there is a simpler formula using the values for the perimeter itself (see my pdf).

Monday, April 12, 2021

Ptolemy again

 My last post was about Ptolemy's theorem.  In the meantime, I've come across a cool "proof without words" for this.

credit

I can see how, knowing that we want ac + bd could lead you to work on scaling triangles into the parallelogram, which would then give all the angles for the central triangle (although note the switch in order for gamma and delta).

It's simply beautiful.

Monday, April 5, 2021

Ptolemy's theorem

 Ptolemy's theorem says that for a cyclic quadrilateral, the product of opposing sides, summed, is equal to the product of the diagonals:

AB CD + BC AD = AC BD




Proof. adapted from wikipedia (here).

Let the angle s (red dot) subtend arc AB and the angle t (black dot) subtend arc CD.  Then the central angle DPC = s + t, and its sine is sin(s + t).  The other central angle, APD, has the same sine, as it is supplementary to s + t.

Let the components of the diagonals be AC = q + s and BD = p + r.  

Twice the areas of the four small triangles will then be equal to

(pq + qr + rs + sp) sin(s + t)

Simple algebra will show that 

(pq + qr + rs + sp) = (p + r)(q + s) = AC BD

The product of the diagonals times the sine of either central angle is equal to twice the area of the quadrilateral.  We're on to something.

Now, the great idea.  Move D to D', so that AD' = CD and CD' = AD.  

The triangles ACD and ACD' are congruent, by SSS, so they have the same area.

Therefore the area of ABCD is equal to the area of ABCD'.

Some of the angles switch with the arcs.  In particular, angle t (black dot) now subtends arc AD'.  As a result s + t is the measure of the whole angle at vertex C.  The whole angle at vertex A is supplementary, and the sine of the whole angle at vertex A is equal to that at C.

So twice the area of triangle ABD' is AB AD' sin(s + t), and twice that of BCD' is BC CD' sin(s + t).  Add these two areas, equate them with the previous result, and factor out the common term sin(s + t):

AC BD = AB AD' + BC CD' 

But AD' = CD and CD' = AD so

AC BD = AB CD + BC AD

This is Ptolemy's theorem. 
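A quick numerical sanity check (a sketch; any four points in order around a circle will do):

from math import cos, sin, dist, pi
import random

# four points in order around the unit circle form a cyclic quadrilateral ABCD
angles = sorted(random.uniform(0, 2 * pi) for _ in range(4))
A, B, C, D = [(cos(t), sin(t)) for t in angles]

lhs = dist(A, B) * dist(C, D) + dist(B, C) * dist(A, D)
rhs = dist(A, C) * dist(B, D)
print(lhs, rhs)    # equal, up to floating-point error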

Here is one result from the theorem.  Draw an equilateral triangle and its circumcircle.  Pick any other point P on the circle and connect it to the vertices as shown.

Ptolemy says that (with s the side of the equilateral triangle, p the distance from P to the opposite vertex, and q and r the distances to the two nearer vertices)

ps = qs + rs

p = q + r

That is, the distance from P to the far vertex is the sum of the distances to the other two.

Monday, March 29, 2021

Image conversion for FB



I got an iPhone 11 Pro about 18 months ago.  I love the quality of the images.  Low-light is still more amazing.  

But the image file type is .HEIC.  I have a recollection that Facebook didn't accept .HEIC at the time, when trying again now I find that it does.  In any event, my workflow since then has included opening all the images up in Preview and manually exporting them to relatively low quality .jpg files, one by one.

I finally got tired of this.

I found a page where the guy shows how to set up the macOS Automator app to tell Preview to do the conversion.  Some day I might try that.  But it got me thinking.  

I used trusty old Homebrew to install imagemagick.  (It took a while, there are *a lot* of dependencies).  Anyhow, so then I just cd to the Downloads directory and do this in Terminal:

magick convert -quality 10 IMG_9836.HEIC IMG_9836.jpg

It works great.  I get a 10-fold compressed jpg compared to the original.  So even if FB can now handle .HEIC I'm saving all that bandwidth on the upload.

The next problem was how to specify the new filenames for batch mode.  One could use sed, but it's so awkward to read.  Or I could always go to Python to bundle up the processes, but eventually I found a one-liner that works in Terminal:

for f in *.HEIC;  do magick convert -quality 10 $f ${f:0:9}jpg;  done

The first filename is the source, like IMG_9836.HEIC.  The second filename is ${f:0:9}jpg; the expansion extracts the first nine characters of the string variable (IMG_9836 plus the dot), and then jpg is appended.  I didn't know you could do this in the shell.

It's a hack, but it works in this case because the variable names are all exactly the same length.  Anyway, it's great.  As long as you're not afraid of Terminal.


Thursday, January 14, 2021

Senator Hawley, provocateur

Sen. Josh Hawley of Missouri voted to object to the certification of the ballots of PA electors.  A statement from him that was printed in a Missouri newspaper is also available on Twitter.  Hawley claims that when the PA legislature authorized mail-in ballots what they did was unconstitutional. 

The "Pennsylvania Supreme Court .. changed the rules for when mail-in ballots could be returned ... the Pennsylvania Supreme Court threw out the case on on procedural grounds, in violation of its own precedent.  To this day, no court has found the mail-in voting scheme to be constitutional, or even heard the merits of the case."

It is true that they tried to change the rules for when mail-in ballots could be returned, but that is a different case.  Hawley is deliberately conflating two different issues.  The question about timing of return was addressed by the U.S. Supreme Court. This case is about whether absentee voting is constitutional.  (Hawley is being dishonest).

So then, here's a bit of history about absentee voting (voting by mail, or mail-in ballots) in PA.  You can read what the PA Constitution says about absentee voting here.

"The Legislature shall provide a manner in which .. qualified electors .. [who] are unable to attend at their proper polling places .. may vote, and for the return and canvass of their votes".

And it gives examples of reasons:  "because of illness or physical disability or who will not attend a polling place because of the observance of a religious holiday or who cannot vote because of election day duties, in the case of a county employee".

Sen. Hawley relies upon a restrictive reading of this text where he claims that what the PA legislature did was unconstitutional (that only the listed reasons are valid).  There has been a lot of discussion about why that is not the case (speech by Sen. Casey, good newspaper review).  It is true that the PA Supreme Court has not actually ruled on this issue.

 In October, 2019 (not "last year"), the PA legislature passed, and the governor signed into law Act 77 of 2019 in Pennsylvania (P.L. 552, No. 77).  You can read the text here, but like most legislation it is frankly, unreadable.  It provides for universal mail-in ballots.

It is noteworthy that the bill was introduced by Republicans and passed with majority support by Republicans.  Several previous elections have been run using these procedures.

On Nov. 21 (18 days after the 2020 presidential election) a suit was filed (by U.S. Representative Mike Kelly and others) making the claim of unconstitutionality.  The PA Supreme Court ruled against them on Nov. 28, on the grounds that their claim was untimely.  You can read the Order here.

"As a remedy, Petitioners sought to invalidate the ballots of the millions of Pennsylvania voters who utilized the mail-in voting procedures established by Act 77 and count only those ballots that Petitioners deem to be “legal votes."

In a concurrence, Justice Wecht wrote

"Unsatisfied with the results of that wager, they would now flip over the table, scattering to the shadows the votes of millions of Pennsylvanians,"

In summary then, Republicans voted to allow mail-in ballots in PA, ran several elections this way without challenging (their own) law, and then filed suit to challenge the process, more than two weeks after their preferred candidate for President lost the election in PA.

Not only did the PA Supreme Court dismiss this case, but it is a fundamental tenet of election law (as promulgated by the U.S. Supreme Court), that authorities may not change the rules after the election.  Voters are entitled to rely on the rules established beforehand.

You can read a good post about election law here, and an NPR article about the recent case here.

Sen. Hawley is dishonest, but you knew that.  Was this done deliberately by the GOP with the plan to try to overturn the election?  Who knows?  

What seems clear is that the modern GOP is the party of voter suppression, the party that (arguably) wants to return to Jim Crow, by disenfranchising Black voters.

Thursday, December 24, 2020

Archimedes and pi

 I've written a lot about Archimedes and the approximation of pi over the years (appropriated from the web, actually).

Most of that material is in my Analytic Geometry book.  I am in the process of re-doing the Github repos for the books, so they're not currently up-to-date, but that will change over the next week or so.  You can find me on Github here.

In the last couple of days, I wrote a summary of the math of Archimedes with respect to pi, including how his formulas connect with others based on perimeter and area that are due to Gregory.  There is also extensive discussion of methods to approximate the square root of 3, essential to Archimedes' work.

That pdf is on Dropbox here.  Enjoy, and as always, comments would be welcome.

Monday, December 14, 2020

Pain with mac OS

I've run Macs ever since 1984.  Never had anything else except one Raspberry Pi running Linux.  

But I'm alarmed by clear signs of decay in QA for Mac system software.  Of course, they say you get what you pay for.  :)  But Apple is a hardware company and I've gladly paid a premium for a beautiful OS and well-designed hardware.

My current issue (though not the first) is the latest mac OS 11 (Big Sur).

So there I was, minding my own business, running Catalina and streaming video through the browser, Safari, when out of nowhere I got the spinning beachball.  Machine is unresponsive, unable to kill the app, transitioning to a black screen of death, which was new to me.  Bizarrely, it had a cursor which responded to the trackpad.

Luckily, there is this thing called googling.  Turns out you can press the power button for 10 seconds and the laptop will shut down.  Reboot and it seems fine.  So what happened?  The internet says that it has to do with system software updating, some conflict that happens with sleep.

I go to System Prefs > Software Update and it looks like this:


I never set this.  My 1 yr old laptop came with this as the default.  So the auto-updater crashes the machine.  Apparently it is downloading/installing Big Sur.

I uncheck everything.  Then, some demon tempted me to download and install Big Sur myself, which I did manually from the main screen for this preference.  Repeatedly (3x), I download 12 GB, the install fails, and then the 12 GB is deleted (or at least, neither I nor Update knows where it is).  You gotta start over.  The third time, I am on it when the download finishes, and I hit the button, so it goes.  

Not crazy about the updated UI, but that's life.

Now, one day later, there's an update.

Since auto-update can crash, I have only the first checkbox selected in the first window, above.  

I manually download the 11.1 update (Update Now).  Three times through, I download 3.x GB, then it stalls.  Right here.

 

I don't have the patience to do anything more.  I'm just angry.  How hard could it be to set up a fake install and test whether it works properly?  And then to throw it away each time.  They just don't frickin' care anymore.

Saturday, July 25, 2020

Rosalind Franklin's centenary

Today is the 100 year anniversary of the birth of Rosalind Franklin, and I have something to say about Nobel Prizes and scientific reputations.

I think we can agree, from the start, that prizes for science suck.  In particular, the Nobel Prize sucks.  Some prizes have been given to yahoos.  Other prizes have been given to mentors but not to the juniors who actually did the work, and in some of those cases the juniors had the great ideas.  See Enders, John for a counter-example.

Some prizes were limited because more people deserved them than just the maximum of 3.

Bob Gallo should have been awarded the Nobel Prize; he invented every tool for studying HIV that anyone ever used.  But he was an asshole, and the Swedes decided to make a point.

Rosalind Franklin arguably deserved the Nobel Prize.  Her major problem was that she died of cancer several years before the prize for DNA was awarded, and there is a rule against posthumous awards.

One often sees (now more frequently) the claim that Watson and Crick "stole" Franklin's data and received recognition for discovery of the double helix when it was rightfully hers.  It's feminist dogma.

It's also revisionist nonsense.

It is false that Franklin discovered the double-helix.  What she did was to take a beautiful X-ray picture of DNA that Crick knew immediately implied the double helix.  Franklin herself was unsure.  She wrote "if a helix..."  Furthermore, this would be her only contribution, seminal to be sure, but her only one.

Franklin was in some respects in competition with Watson and Crick, and she was definitely unhappy when it became known that her boss Wilkins (she disputed his authority), and Max Perutz, had made her picture available to Watson and Crick.  It had already been shown in local seminars.  Perutz, as straight an arrow as ever lived, was not bothered about how things went down.

Beyond that, the discovery of the double helix relied on deep crystallographic and structural knowledge of Crick, as well as the enormous luck of Watson.  He had no idea what the picture meant, but he knew about Chargaff's ratios, and he knew the correct structures for the bases in solution.  This was crucial.  The ultimate aha moment was Watson's, much as I would wish otherwise.  (Because he is also an asshole).

Franklin was, in years shortly following the events, friendly enough with Crick and his wife that they vacationed together.  There was no animosity between them about DNA.  She died 5 years later, in 1958.

If you dispute any of this, go re-read Judson's book and Perutz's writing about this and get back to me.

The idea that Watson and Crick stole the double helix from Franklin is complete nonsense.


Sum of angles, revisited

I've run into a number of different derivations for the "sum of angles" formulas over the years.  These are 

cos(s + t) = cos s cos t - sin s sin t
sin(s + t) = sin s cos t + cos s sin t

For example, here is one using Euler's formula, and here is a derivation of the difference version of the cosine that I encountered in Gil Strang's Calculus textbook.  As I've said, I find it only necessary to memorize one:

cos(s - t) = cos s cos t + sin s sin t

which is easily checked for the case where s = t.

Recently I ran into a couple more.  Probably the simplest is this one:


Two stacked right triangles with a surrounding rectangle.  The upper triangle is sized so that the hypotenuse is 1 and the sine and cosine are obvious.  For the triangle with angle phi, we need an extra term in the sine and cosine so that when dividing, say, opposite/hypotenuse, the result is correct.

The angle theta + phi is known by the alternate interior angles theorem, and the small triangle with angle phi is known by a combination of the complementary and supplementary angles theorems.

Now, just read the result.  One diagram gives both formulas!  

There is also a simple derivation for the sine formula based on area calculations.  We calculate the area of this triangle in two ways:

On the left, we have that

A = (1/2) a b sin(theta + phi)

On the right, the altitude is h = a cos phi = b cos theta, and

A = (1/2) h a sin phi + 1/2 h b sin theta 
  = (1/2) (b cos theta a sin phi + a cos phi b sin theta)

Equating the two and factoring out (1/2)ab, we obtain the addition formula for sine.  (Unfortunately, I've lost the links where I saw these).

Here is a link to a rather comprehensive chapter on the subject from my geometry book (Github).






Thursday, June 25, 2020

Shapefiles

I've been working with maps using Python, primarily maps for the United States.  The standard format for much geographic data is GeoJSON.  

But there is another format that is even more official, and is maintained by the US Census Bureau.  That is a collection of Shapefiles.

To recap, here is a site where GeoJSON files for the US are available in multiple sizes (small, medium, large), as well as their .kml and Shapefile .shp equivalents.  The sizes are 500k, 5m, 20m, from largest to smallest (the labels are the scales, 500k being the most detailed).
 
The files were obtained from links on this page.  It includes data for the US, for the states, and for counties.  And it contains Congressional districts, which would be useful to remember.

A Shapefile is binary-encoded geographic data in a particular format.  A good discussion is here.

The specification was developed at least partly by ESRI, which develops geographic information software.  The encoding was undoubtedly designed to save space, back when space (storage, transmission bandwidth) was much more expensive.  Now, the opaque data is a liability.

Shapefiles

There is actually not just one file, but always a minimum of three including .shp, .shx and .dbf inside a zip container.  The Shapefiles from the US Census also have .prj and .xml.

I can't tell much by looking at one with hexdump, except that most of it is aligned (in sections), on 16-byte boundaries.  The format is described at this link, but I haven't worked with that.

One way to open and read a Shapefile is to use geopandas.  I grab that with pip3 install geopandas

The example is

>>> d = 'gz_2010_us_040_00_20m'
>>> fn = d + '/gz_2010_us_040_00_20m.shp'
>>> df = gpd.read_file(fn) 

The columns are:
  • GEO_ID
  • STATE
  • NAME
  • LSAD
  • CENSUSAREA
  • geometry
>>> df.NAME.head()
0        Arizona
1       Arkansas
2     California
3       Colorado
4    Connecticut
Name: NAME, dtype: object

For some reason the order is several joined partial lists of states, each one alphabetized.

We need to extract the coordinates for a particular state:

import geopandas as gpd
from shapely.geometry import mapping

fn = 'ex.shp'
df = gpd.read_file(fn)
sel = df['NAME'] == 'Maine'
g = df.loc[sel].geometry

D = mapping(g)
for f in D['features']:
    print(f['id'])
    L = f['geometry']['coordinates']

for m in L:
    print(len(m[0]))
    print(m[0][0])
    print(m[0][1])
    print()

> python3 script.py
39
11
(-69.307908, 43.773767)
(-69.30675099999999, 43.775095)
11
(-69.42792, 43.928798)
(-69.423323, 43.922871)
...

Pythagorean theorem redux

Here is a cool proof I saw on Twitter, it was an RT by @StevenStrogatz from this guy:



Take a generic right triangle.  Flip, rotate and scale it by multiplying each side by a factor of b.  I did this by imagining that a = 1 and then b = ab.  The complementary angles are marked with circles on the right.


So then construct a rotated triangle from the same input, but scaled by a factor of a, and attach it to the other one.  We know that the sides marked ab are parallel, and that the angle between bc and ac is a right angle, by the properties of right, complementary and supplementary angles.


The four outside vertices form a rectangle, from the angles and also since ab = ab.

Finally, rotate and scale again, by a factor of c:


Slide them together


Albers projection

The standard projection used in converting latitudes and longitudes on the (roughly) spherical earth to a planar map depends on what you’re projecting.

Most people know about the Mercator projection.

The one used extensively for maps of the United States is called the Albers Equal-Area Conic projection.



The wikipedia article gives some formulas:


You must first choose two reference latitudes.  Latitudes are referred to by the letter phi and longitudes by the letter lambda.  So the two references are phi_1 and phi_2. 

You also choose a center for the map, at phi_0, lambda_0.

From these four values you calculate n, C and then rho_0, which are each the same for every transformation in this projection.  

Finally, for each coordinate phi, lambda one calculates rho and theta and then finally

x = rho sin theta
y = rho_0 - rho cos theta
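For the spherical case those formulas amount to just a few lines.  Here is a sketch (angles in degrees on input; x and y come out in units of the sphere's radius; the default reference latitudes of 29.5° and 45.5° are the ones commonly used for the conterminous US, and are my choice here, not necessarily what the author's scripts use):

from math import sin, cos, sqrt, radians

def albers_sphere(lat, lon, lat0=23.0, lon0=-96.0, lat1=29.5, lat2=45.5):
    p, l = radians(lat), radians(lon)
    p0, l0 = radians(lat0), radians(lon0)
    p1, p2 = radians(lat1), radians(lat2)
    n = (sin(p1) + sin(p2)) / 2
    C = cos(p1) ** 2 + 2 * n * sin(p1)
    rho = sqrt(C - 2 * n * sin(p)) / n
    rho0 = sqrt(C - 2 * n * sin(p0)) / n
    theta = n * (l - l0)
    return rho * sin(theta), rho0 - rho * cos(theta)

print(albers_sphere(35, -75))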

This, however, is for the assumption of a spherical earth.  The equations for an ellipsoid are a bit harder.

The wikipedia article gives this url for a pdf and I found the same report referenced in this answer to a question on Stack Exchange.

So that was lucky, because it gave me not only the equations but also a worked numerical example for each, which helped greatly in finding my mistakes in the code.

The math is explained in a write-up done in LaTeX, as a Dropbox link to a pdf.  The math has been encoded as Python scripts sphere.py and ellipsoid.py here.

Here are screenshots from the manual:



The test is a latitude of 35°N and a longitude of 75°W (with the center at 23°N, 96°W), which should give a result (for the ellipsoid) of 



> python ellipsoid.py 
test1
p0: 23.0
l0: -96.0
p:  35.0
l:  -75.0
x:  1885472.7
y:  1535925.0

The first part of the output is the center we chose.  The result for x and y matches the source.

So then, for a script in the same directory as ellipsoid.py, we can do import project and do 

> python3 map_counties.py counties.geo.txt AL



No longer squashed.

Naturally, there is software out there that will do this.  In particular, the Proj transformation software.

I obtained it with Homebrew

> brew install proj

> echo 55.2 12.2 | proj +proj=merc +lat_ts=56.5 +ellps=GRS80
3399483.80 752085.60

which matches their example.

Of course, we really want the Albers equal area conical projection based on the ellipsoid.
> echo -75 35 | proj +proj=aea +lat_1=29.5 +lat_2=44.5 +lat_0=23 +lon_0=-96 +ellps=clrk66
1887211.95 1533994.75
These are close but don't quite match.  I will have to explore Proj more to know why.

Put the input data into a file and do:
> cat data.txt | proj +proj=aea +lat_1=29.5 +lat_2=44.5 +lat_0=23 +lon_0=-96 +ellps=clrk66
1887211.95	1533994.75
>