Wednesday, September 15, 2021

Sum of angles: Ptolemy

 The "sum of angles" theorems are incredibly useful in calculus.  These are formulas for the sine (or cosine) of the sum (or difference) of two angles.  There are a number of derivations, of which my favorite is a proof-without-words (here).

Recently, I got interested in Ptolemy's theorem, which is illustrated in this graphic:



There are proofs based on similar triangles (a bit complicated), and one based on areas with a wonderful trick in the middle.  Another terrific proof-without-words for that is here.

It turns out that there are easy proofs of the trigonometric theorems starting from Ptolemy's theorem, and I want to look at them here.  The first one is for sin(s + t):

The proof depends on the following details.  First, the black line in the middle is a diameter of the circle, scaled to have length 1.  As a result, the two triangles are right triangles, justifying the labels on the sides.  Ptolemy says to multiply the two pairs of opposite sides and add the products.

In addition, the dotted line is the chord of the sum of the two angles, and its length is L = 2R sin(s + t).  But 2R is scaled to be 1, so finally we have the formula for the sine of s + t.  It basically writes itself.
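Spelled out, with my lettering for the figure:  call the quadrilateral ABCD, with AC the unit diameter, angle BAC = s and angle CAD = t.  The right triangles give AB = cos s, BC = sin s, CD = sin t, AD = cos t, and the dotted diagonal BD is the chord of s + t.  Ptolemy then reads:

AB · CD + BC · AD = AC · BD

cos s sin t + sin s cos t = 1 · sin(s + t)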

The second one is for the difference.  This is extra, since the easiest way to get from the sum to the difference is to use the fact that cosine is an even function and sine is odd:  cos(-t) = cos(t) and sin(-t) = -sin(t), so substituting -t for t in the sum formula does it.

If you're trying to set this up yourself, a big clue is that since the formula has a minus sign in it (sin s cos t minus sin t cos s), the sin(s - t) term is going to be one of the sides, and naturally, it is the side opposite the diameter.
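With my lettering:  take the diameter AD = 1 as a side, with angle BAD = s and angle CAD = t (s > t).  The right triangles give AB = cos s, BD = sin s, AC = cos t, CD = sin t, and the side BC, opposite the diameter, subtends the angle s - t at A, so BC = sin(s - t).  Ptolemy then reads:

AC · BD = AB · CD + BC · AD

sin s cos t = cos s sin t + sin(s - t) · 1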

As for the cosine formulas, there is yet another trick.  We will still have that the dotted line is the sine of some angle θ, but it is also the cosine of the angle complementary to θ!  And again, cos(s + t) is one of the sides.
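In equation form, the bookkeeping comes out as

cos s cos t = sin s sin t + cos(s + t) · 1

with cos(s + t) the side opposite the diameter, so that cos(s + t) = cos s cos t - sin s sin t.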


The last one isn't quite as pretty (I don't know if there are others; I came up with this one myself).  The hard part is writing an expression for the angle marked with the black dot, but its complement is just s - t.
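One consistent way to read it:  the diameter and the cos(s - t) chord are the diagonals, and Ptolemy produces

1 · cos(s - t) = cos s cos t + sin s sin t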


Wednesday, August 25, 2021

Calculating continued fractions

 I've been playing with continued fractions.  The golden ratio is one of the simplest examples.


Substituting the whole right-hand side for the denominator on the right:
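That is, starting from φ = 1 + 1/φ:

φ = 1 + 1/(1 + 1/(1 + 1/(1 + ...)))

and the coefficients are all 1s:  [1; 1, 1, 1, ...].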

And here is sqrt(3) as another example.  I find the square root symbol to be a bit of a distraction, so I substituted x for sqrt(3).

The standard notation for this is [1;(1,2)].  Here is the derivation for this example:



At this point, the trick is to add and subtract 1 in the denominator on the right-hand side:

But (x - 1)/2 = 1/(x + 1), and after another round of add and subtract we have

We are done because x - 1 appears on both sides.
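In compact form, collecting the steps above:  since (x - 1)(x + 1) = 2,

x = 1 + (x - 1) = 1 + 2/(x + 1)

and (x + 1)/2 = 1 + (x - 1)/2 = 1 + 1/(x + 1), so

x = 1 + 1/(1 + 1/(x + 1)) = 1 + 1/(1 + 1/(2 + (x - 1)))

The x - 1 recurs, and the coefficients repeat:  [1; 1, 2, 1, 2, ...].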

Then to evaluate it we must chop off the continuation somewhere.  For this example, there are two different chains of rational approximations to sqrt(3).  Here is one
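(Reconstructing:  one chain, 1, 5/3, 19/11, 71/41, 265/153, ..., approaches sqrt(3) = 1.7320508... from below; the other, 2, 7/4, 26/15, 97/56, ..., approaches it from above.)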


I think I finally understand how the derivation works.  I've done a number of examples with as many as 8 terms in the continued part.  This resource was very helpful, since it has the correct answers.

I also wrote a simple Python script to evaluate a continued fraction based on the coefficients, which does the somewhat tedious calculations.
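The heart of it is just a loop from the back of the list.  A minimal sketch, not the gist itself, using exact fractions from the standard library:

from fractions import Fraction

def evaluate(coeffs):
    # evaluate a finite continued fraction [a0; a1, a2, ...] exactly
    x = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        x = a + 1 / x
    return x

# truncations of [1;(1,2)] approach sqrt(3) = 1.7320508...
for k in range(1, 6):
    v = evaluate([1] + [1, 2] * k)
    print(v, float(v))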

I wrote this up as a pdf on GitHub, and the script is a gist here.

And finally, just note that the existence of a continued fraction that continues forever is a proof of irrationality.  The square roots of all the integers that are not perfect squares are irrational, and they all have continued fraction representations that repeat forever.

[Sorry for the large image sizes.  They are screenshots from my pdf, which blogger insists on making giant size even when "small".  I know, I know, MathJax, but ... ].

Friday, July 16, 2021

Pentagons, again

 It's been some years, but I took another look at pentagons.  It is fun to see the connection between φ,  the "golden ratio", and the internal triangles in a pentagon.

First of all, the figure has five-fold rotational symmetry.

The marked angles in the panel on the left are equal by the isosceles triangle theorem, so the angles on either side of the central angle at each vertex are equal.  

Then we do some addition:  3 black + 2 magenta = 180 = 4 magenta + 1 black.  We conclude that black = magenta.  So the vertex angle is divided into 3 equal parts.



The small triangle at the top is isosceles, because the triangles on either side are congruent to each other (by rotational symmetry, or ASA), so both sides are equal. Now, we see that the horizontal chord is parallel to the bottom side, by alternate interior angles.  We have a bunch of rhombi!

There are two types of similar triangles:  one tall and skinny, the other short and squat.

In the right-hand panel, a triangle of the second type is shown in light red.  It has equal sides of length x and a base of length x + 1.  The red one has sides of length 1 and a base of length x.  We form the equal ratios of base to side:  (x + 1)/x = x/1.
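Cross-multiplying:  x^2 = x + 1, so x^2 - x - 1 = 0, and the positive root is

x = (1 + sqrt(5))/2 = φ ≈ 1.618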

One can also use the tall skinny triangles.

This allows the base to have length 1.  Then, the sides of the tall skinny triangle in the middle are x.  The small tall skinny triangle at the top has sides x - 1 and base x - 2(x - 1) = 2 - x, so the ratio of similar triangles is x/1 = (x - 1)/(2 - x), which simplifies to the same expression as before.



The apex angle of a tall skinny triangle is equal to 1/3 of the interior angle of the pentagon, that is, 108/3 = 36 degrees.

We will find an expression for the sine of 36°, and this will give us another way to calculate the side length for the pentagon in terms of the chords.

We start by working with 18°, because 90/18 = 5 exactly.  The trick is that (using θ for 18°):  5θ = 90° so 2θ = 90 - 3θ.  Taking the sine of both sides and then converting to the complementary angle on the right, we have sin 2θ = cos 3θ.
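Carrying it through (θ = 18°):

sin 2θ = cos 3θ
2 sin θ cos θ = 4 cos^3 θ - 3 cos θ
2 sin θ = 4 cos^2 θ - 3 = 1 - 4 sin^2 θ
4 sin^2 θ + 2 sin θ - 1 = 0

sin 18° = (-1 + sqrt(5))/4 = (φ - 1)/2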

One can also use de Moivre's formula and take the real part.

[Since φ - 1 = 1/φ, the above result can be written more simply, but it makes the calculation below more complicated].






There is something else.  If one draws the central angles of the pentagon, then the triangle containing a central angle of 72° and a base of one side length has a half angle of 36°.

So one-half the side length is the sine of 36°, which we found above.  The circle containing the five vertices of a pentagon with side length 1 has a radius of r = 1/(2 sin 36°) = 1/sqrt(3 - φ).
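As a check:  cos 36° = φ/2, and sin^2 36° = 1 - φ^2/4 = (3 - φ)/4, using φ^2 = φ + 1.  So sin 36° = sqrt(3 - φ)/2 and r = 1/(2 sin 36°) = 1/sqrt(3 - φ) ≈ 0.8507.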

The pdf of the write-up is here (Latex one level down).

Wednesday, May 26, 2021

Origins of SARS-CoV-2

Here are some links to helpful articles about the "lab-leak" hypothesis:

Nicholas Wade (link), Yuri Deigin (link), Rossana Segreto and Yuri Deigin (link)

Andersen et al. (link), Yangyang Cheng (link), Glenn Kessler (link), Matt Yglesias (link)

Daniel Engber (link)

My favored hypothesis has been that the Covid-19 pandemic began in the same way the original SARS pandemic did, in a "spillover" of SARS-CoV-2 from a natural reservoir of the virus in bats, by way of an intermediate host.

Along with much of the scientific and political world, this week I've been reading, and updating my estimate of the likelihood of the alternative hypothesis:  that the pandemic began when SARS-CoV-2 escaped, in an accident, from a lab at the Wuhan Institute of Virology (WIV).

The problem with the spillover hypothesis is that, unlike with the original SARS, there is very little supporting evidence.  Rather, what we know about SARS-CoV-2 indicates that the virus appeared in humans in a single event. There is very little diversity in sequences of early virus samples.  It was not strongly associated with a "wet market" and there is no known intermediate host.

When the virus appeared in humans, it was already well-adapted to growth in them, with an S protein that fits its human receptor (the ACE2 receptor protein) very well, as well as a furin-cleavage site that is known to facilitate viral entry.  The furin site is essential for infectivity in humans.

With respect to the escape hypothesis, it did not help that Donald Trump supported this idea, suggesting the sinister possibility that release not only explains the pandemic, but that it was deliberate.  Since Trump has lied repeatedly about nearly everything, his statements actually decreased (at least for me) the likelihood that this hypothesis is correct.  Add to that the fact that so many right-wing politicians are looking to pick a fight with China.

I do not put much weight on vague reports that WIV staff were hospitalized with flu-like illness in the fall of 2019, nor on the counter-reports that samples from them were tested for SARS-CoV-2 and came back negative.  The former come from "intelligence sources" motivated to support Trump's contention, and the latter from official Chinese government sources equally motivated to keep things under wraps.  Either might be true, but it is impossible to evaluate their accuracy now.

I also do not think much of the idea that, as David Baltimore reportedly said, the furin site is a "smoking gun".  Acquisition of such a sequence could easily have happened naturally, by recombination.  The site's codon usage does not provide convincing evidence that it was engineered.  Indeed, similar sequences that might have been recombined to give the furin site would perhaps also have been out-of-frame originally, and recent acquisition hasn't given enough time for codon optimization to the human host.  Also, if you were engineering a site, you would not likely have chosen this sub-optimal sequence over known examples that work better.

Modern methods do not leave signs in the sequence (such as convenient restriction sites for assembling a whole genome from DNA clones of parts of it) that would be unmistakable "signs of engineering" for older techniques.  So the frequent claim that the sequence doesn't show such signs isn't dispositive either.

What I find most troubling is the reporting that WIV (as well as labs elsewhere) were doing manipulations with parts of the S gene (the RBD).  They built hybrid viruses with S genes whose product was known to bind well to human cells.  These were done not just with pseudo-typed lentiviruses (which would be safe), but with bat coronavirus backbones (which is definitely unsafe).  Incredibly, some labs even added furin cleavage sites.  Multiple labs did such experiments, in China, in the US and Japan, and in other places.

Adaptation experiments were also carried out, by serial culture in human cells and in "humanized" mice bearing Hu ACE2 receptor.  Richard Ebright is correct to insist that all these experiments were exceedingly irresponsible.

A second problematic aspect is lab safety.  Shi Zheng-Li, the PI on these projects at WIV, stated that their work was carried out at BSL-2 and BSL-3, and Wade seizes on this to claim repeatedly that dangerous experiments were carried out at BSL-2.  If this is true, then that would also be very very bad, and even in the absence of any other confirming evidence, would weight the scale heavily in favor of the escape hypothesis.

A third problematic feature is the publication, by Shi and collaborators just after the beginning of the pandemic, of the viral sequence most closely related to SARS-CoV-2.  The sample was clearly from around 2013, and it is reported that the sequence file naming scheme indicates it was sequenced in 2017 and 2018.  If true, their attempt to hide this and pretend that it was only sequenced recently is also troubling.

Even if the relationship between the US and China were less antagonistic, I do not imagine that the Chinese government would ever allow investigators to access the original notebooks and sequence databases of the WIV investigators.  So at the end of the day, we shall probably never know the truth.  All we are left with is likelihoods that are heavily weighted by our "priors" about the two hypotheses.  

The real bottom line here is that even if the pandemic started by spillover, it certainly could have started by lab escape.  We should act on that, to ensure it doesn't happen again.

Update:  

Just to be clear though, the post above is my analysis of the likelihoods for two different hypotheses of what might have happened.  

Florian Krammer advises patience.  It took more than a year to find the relevant evidence for SARS.  For some other viruses, we have still not found the source of the spillover.  I think his priors for lab escape are too low, but still, this thing is unresolved.  That might change in the future if evidence supporting spillover becomes available.

However, if the leak theory is what actually happened, the evidence is going to be buried very deep.  Plus, the Chinese will never allow such an investigation.  All that pressing for a scientific investigation will do is lead to China-bashing, which is not helpful.  And the idea that the American intelligence community will sort this out is laughable.

What matters most is for us to accept that it could have been a leak, and change our attitude about whether research of that type should continue.

Saturday, May 8, 2021

Sums from a uniform random distribution

Recently, I came across some pages about a problem that I simulated in Python years ago, but never solved analytically, although a reader left a hint in the Comments.

Here we start by taking a look at what this author calls "random number blackjack" or the problem of the sum of uniformly distributed random numbers.

For the rest of the post, all the random numbers that we will talk about are given by a random number generator which draws from uniformly distributed real numbers in the range [0,1), i.e. greater than or equal to zero, and less than one.  

We also consider the sum of one or more independent values from such a distribution, which forms a different distribution.

the game

On each turn of this unusual game, you draw a random number.  You receive $100 for each turn you play, including the last.  Suppose you drew 0.45, 0.35, 0.25:  on the third turn you would go bust, since the sum of the numbers drawn would be 1.05 > 1.0.  The payout would be $300.

And the question is:  what is your expected payout if you play the game many times?  Is it worth playing (accepting some risk of bad luck) if the initial buy-in is $250?

initial thoughts

First of all, since the generator always gives a number less than 1.0, you always play at least two turns.  The expected value (mean) of the first number is 0.5.  For a continuous distribution the mean has a technical definition, but for a symmetric distribution it lies on the axis of symmetry, which here is obviously at 0.5.

The expected value of the sum of several random numbers is the sum of their expectations.

So, for example, the expected value of the sum of three draws is 1.5.  Most of the time we will quit after two or three turns, but there will occasionally be an extended run of smaller numbers and a corresponding increase in the number of turns.

[spoiler alert, you may want to look at random number blackjack to try to work out the answer]

What famous number do we know that lies between two and three?  This is so trivial to simulate in Python I won't even post an example.

On one run I got 2.718531.

So it looks like the result is equal to e.  (The internet says that adding more rounds doesn't help the accuracy because of limitations in the random number generator).
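OK, fine, here is a minimal sketch anyway (the names are mine):

import random

def average_turns(games=10**6):
    # play many rounds; count draws until the running sum exceeds 1
    total = 0
    for _ in range(games):
        s, turns = 0.0, 0
        while s <= 1.0:
            s += random.random()
            turns += 1
        total += turns
    return total / games

print(average_turns())   # one run gave 2.718531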

I came across this problem in a slightly different form in the terrific introductory calculus book, Calculus Made Easy, which was originally written by Silvanus P. Thompson (and available as a Project Gutenberg download).

It was added by Martin Gardner when he edited the classic work (1998 ed., p. 153) and is simply a note about how e shows up everywhere.

But the problem is at least as old as Feller's classic text Probability (which I don't have, unfortunately).

related problem

Tim Black solves a related problem (here).  

Recall that for a standard (fair) die, the expected value of a throw is the sum of each value times the probability that it will occur.  For a single die, the average is (1 + 2 + ... + 6) × 1/6 = 21/6 = 3.5.

For two dice, we have

[1(2) + 2(3) + ... + 6(7) + ... + 2(11) + 1(12)] × 1/36 = 7

The distribution is no longer uniform, but it is still symmetric around the value of 7, which is the mean.  The expected values of two random draws from a uniform distribution likewise add, but the resulting distribution is no longer uniform.

Suppose we know the probability distribution for the sum of n random numbers, for each value of n, and calculate the probability that the sum is still less than 1, so that the game survives past turn n.  A standard identity gives the expected number of turns as the sum of these survival probabilities:  E = sum over n >= 0 of P(sum of n draws < 1).

The probability distribution for the sum of two random numbers from the generator is symmetric about its mean of 1.0, so the probability that the sum is still below 1.0 is 0.5.  In this notation, P(2) = 0.5, contributing 0.5 to the expected value.

We also have that P(1) = 1, since a single draw is always less than 1.  That's another 1 to add to the expected value overall (as is P(0) = 1, for the empty sum).

So now, what is the probability distribution for the sum of three random numbers?  That gets a little trickier.  The difficulty is that the probability distribution changes as n changes.  Eventually, it becomes normal, but how different is it for small n like 3, 4, or 5?

Here is where our analyst has a great idea.

Imagine that we change the game slightly.  We still have $100 as the payout at each stage.

From a stream of random numbers, as the numbers are drawn we write into another stream the sum at each stage.  So in the example above we would get 0, 0.45, 0.80,  and then at the third draw the sum is 1.05.  Rather than write the last value, subtract 1 first, then write that down and keep going.  

Notice that over this stretch we have a valid game, a sequence of increasing values followed by one that must be smaller than the last.

The values are

0 0.45 0.80 0.05

The values are in ascending order until the last one, which might be anything smaller than 0.80.  This must be true for a single round from the game according to the rules we have set up.

Since there are n! ways of arranging n values, and only one of those arrangements has the numbers in strictly ascending order, the probability of the event (for a random uniform distribution) is 1/n!.  In other words, starting at the beginning of a stream of random numbers

the probability of obtaining a result of 1 is 1.

the probability of obtaining a result of 2 is 1/2!.

the probability of obtaining a result of 3 is 1/3!.

Each turn you actually get to play contributes one payout to the expectation, and the probability that the game survives past turn n is the probability of the ascending arrangement, 1/n!.  Adding up over all n:

E = 1 + 1 + 1/2! + 1/3! + ... = sum over n >= 0 of 1/n!

This is just the infinite series for e.

simulations

I wrote two simulations to show results relevant to this problem.  The first one shows the distribution of sums of n = 1, 2, 3 or 4 random numbers.  As you can see from the figure



even taking only 3 at a time, the sums look pretty close to a normal distribution.  The Central Limit Theorem says that they will tend to normal, and there is a bunch of theory that I don't understand which says that for draws from a uniform distribution the convergence is very rapid.

I got curious about this alternate game, so I wrote a simulation which shows that the sum of random numbers, when computed as above, by discarding the whole-number part of the result, appears to still be uniformly random.  (gist here).  The original data and the summed series are plotted in the same histogram with transparency 0.5.  The new data is random uniform, or close to it.
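Stripped of the plotting, the core of that simulation looks something like this (a sketch; the gist differs in the details):

import random

n = 10**6
data = [random.random() for _ in range(n)]

# running sums, discarding the whole-number part as we go
sums, s = [], 0.0
for x in data:
    s += x
    if s >= 1.0:
        s -= 1.0
    sums.append(s)

# crude histogram comparison: ten equal bins for each series
for name, seq in (("original", data), ("mod-1 sums", sums)):
    counts = [0] * 10
    for v in seq:
        counts[int(v * 10)] += 1
    print(name, counts)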


I don't know what the theoretical explanation for this is (though it seems related to the fact that adding a uniform draw to anything and reducing mod 1 gives back a uniform value).  However, if it's true, then rather than do the sums, we can just draw from the random uniform distribution, and tally up the runs where all the values are increasing, until the last one.  If we do the bookkeeping correctly, we get e as the result.

That means the original problem has the same answer as the alternative one.

serious analysis

I have reworked what is on the Mathworld page as follows:



That's where I am so far.  There's plenty more to investigate.  

The sum of two random numbers from a uniform distribution has a distribution that is given by convolution of the individual distributions.  But then each distribution for n > 2 is formed by another convolution.  Ultimately, the distributions tend to the normal.  

I don't see how you get to something as simple as 1 - 1/n! from that, although Tim Black gave us a different path above, which is why I wrote this post.

[Update:  I've been dense.  The "different path" is in fact the means by which the integral is evaluated.  It is not done by writing some complex expression and then seeking the antiderivative and evaluating it.  Instead, we know that the value for the cumulative distribution function at the upper bound must be 1, and at the lower bound it must be 1/n!. ]

There is a suggestion that this sort of thing is done more easily with generating or characteristic functions.  

Probably the first, simple thing would be to run the simulation using random numbers directly and not bother with the sum part, as we also looked at here.  [Update:  the result is as I suspected.  See gist.  If we find the lengths of runs in a stream of random numbers from a uniform distribution, where the values are in increasing order (counting the value that breaks each run, and starting fresh after each one), and then find the mean of those lengths, the result is e.]
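In sketch form (my bookkeeping; the gist may differ):

import random

def mean_run_length(trials=10**5):
    # length of a run of increasing values, counting the draw that
    # breaks the run; start completely fresh after each run
    total = 0
    for _ in range(trials):
        count, prev = 1, random.random()
        while True:
            x = random.random()
            count += 1
            if x < prev:
                break        # this draw ends the run and is counted
            prev = x
        total += count
    return total / trials

print(mean_run_length())     # ~2.718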


Wednesday, April 21, 2021

Pythagorean triples

A Pythagorean triple is a set of integers a, b, and c such that

a^2 + b^2 = c^2

We want primitive triples, those where a, b and c do not have a common factor.

Euclid presents a formula for that:  take any two integers n > m > 0, then

b = 2mn
a = n^2 - m^2
c = n^2 + m^2

One more restriction:  m and n must not have any common factor (they should be co-prime), and they must not both be odd; otherwise the triple will not be primitive.

It is easy to show that when a, b, and c are found in this way, they satisfy the Pythagorean equation.
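Indeed:

a^2 + b^2 = (n^2 - m^2)^2 + (2mn)^2
          = n^4 - 2n^2m^2 + m^4 + 4n^2m^2
          = n^4 + 2n^2m^2 + m^4
          = (n^2 + m^2)^2 = c^2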

The trick is to see where this comes from, and more importantly, to show that it describes all such triples.

Derivation

I found a very nice derivation in Maor (The Pythagorean theorem).  We start by investigating the properties of a, b and c.  

For example a and b cannot both be even, for in that case, c will also be even, and then there is a common factor.

Recall that any even number can be written as 2k, and any odd number as 2k + 1, for a non-negative integer k.  It is easy to see that if n is even, so is n^2, and if n is odd then so is n^2.  Therefore an even perfect square has an even square root, and an odd perfect square has an odd square root.

In the case where a and b are both odd, c must be even, since odd plus odd equals even.  To see a problem with this re-write the equation 

(2i + 1)^2 + (2j + 1)^2 = (2k)^2

The left-hand side is not a multiple of 4, but the right-hand side is.  This is impossible.  Hence one of a or b must be even and the other odd.  Let a be odd.  Then b can be written as b = 2t for some integer t.  We have

(2t)^2 = c^2 - a^2 = (c + a)(c - a)

Every a, b and c must satisfy this equation.  Furthermore, we see that there is a factor of 4 on the left-hand side, hence we can get a divisor of 2 for each factor on the right:

t^2 = [(c + a)/2] · [(c - a)/2]

a and c are both odd:

a = 2i + 1
c = 2k + 1

so their sum and difference are even, since

2k + 1 + 2i + 1 = 2(k + i) + 2
2k + 1 - 2i - 1 = 2(k - i)

After canceling the factor of 2 we get

(c + a)/2 = k + i + 1
(c - a)/2 = k - i

Thus both of these factors on the right-hand side are integers, while the left-hand side (t^2) is a perfect square.

Key step

Here's the last step.  We claim that these two terms do not have a common factor.  

Whenever x and y have a common factor f, so do their sum and difference:  if x = fp and y = fq, then x + y = f(p + q), and likewise x - y = f(p - q).  The converse is also true.

But the sum and difference for these terms are:

(c + a)/2 + (c - a)/2 = c
(c + a)/2 - (c - a)/2 = a

On the supposition that (c+a)/2 and (c-a)/2 did have a common factor, then they would share that common factor with both c and a.   Since we know that a and c (at least the particular ones we're interested in) do not have a common factor, neither do these two terms.

So the two terms have no common factor and yet multiply together to give a perfect square.

Look at the prime factorization of t^2:  every prime appears to an even power.  Since the two terms share no prime factor, the full (even) power of each prime must lie entirely within one term or the other.

So each term is itself a perfect square!  There are integers m and n (not necessarily prime) such that:

n^2 = (c + a)/2
m^2 = (c - a)/2

n^2 + m^2 = c
n^2 - m^2 = a

And

m^2n^2 = t^2
mn = t

Since b^2 = 4t^2, b = 2t = 2mn.

These properties of m and n follow from the standard rules about factors, applied to a, b and c.  Therefore, any primitive triple a, b, c can be written in terms of integers m and n by this method.

I checked this by using a Python script to search the entire space of squares below an arbitrary limit, for those which sum to give another square, keeping only those triples of squares with no common factors.

I tested each triple by taking the even member, dividing by 2, and then finding its factors.  This gives candidate pairs m and n with 2mn = b.  Then, just check for whether n^2 - m^2 = a.

The gist is here.
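Going the other way, generating triples directly from the formula is also only a few lines (a sketch, not the gist itself); printing m and n alongside each triple shows which patterns come from which choices:

from math import gcd, isqrt

def primitive_triples(limit):
    # Euclid's formula, with the two restrictions: co-prime, not both odd
    for n in range(2, isqrt(limit) + 1):
        for m in range(1, n):
            if (n - m) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = n*n - m*m, 2*m*n, n*n + m*m
                if c <= limit:
                    yield (m, n), (a, b, c)

for mn, abc in sorted(primitive_triples(50)):
    print(mn, abc)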

It is interesting to see which patterns among the triples come from which choices of m and n.


Sunday, April 18, 2021

Pi again

 I was working on a post about Archimedes' calculation of pi, and I saw a reference that said Ptolemy used 377/120 as a rational approximation to the value.

And then I also read that 355/113 is an even better approximation, and it is credited to the Chinese mathematician Zu Chongzhi (5th century CE).

There is a whole article in Wikipedia about this subject.  Recall that Archimedes established upper and lower bounds on pi of 22/7 (3.14285714) and 3 + 10/71 (3.14084507).

In decimal form, Ptolemy's value is 

377/120 = 3.14166...

This is correct to the third digit after the decimal point.  The error is 7401 parts in a hundred million.  And really, it is accurate enough for any practical work.

I found it curious that the other value has a smaller denominator, but was not found by Ptolemy, and yet it is better, much better.

355/113 = 3.14159292

This is correct to the sixth digit after the decimal point.  The error is 27 parts in a hundred million.  That's about 275 times better than Ptolemy's value.

There is some text from Pi: A Source Book, by Berggren et al., online.  In a chapter by Lam Lay-Yong and Ang Tian-Se, I found a calculation ascribed to Liu Hui (3rd century CE).

The calculation does not seem to be any kind of a technical breakthrough, it is just a basic usage of the Pythagorean theorem.  (However, there are some values given that I don't see where they came from, so maybe I'm missing something).

Start with a circle of radius 10, and inscribe a hexagon.  One of the six equilateral triangles is shown.

Bisecting its arc gives DC, the side of the inscribed dodecagon:  DC = 2 × 10 × sin 15° = 5.17638.

Repeat until you're satisfied.  Keep track of n, the total number of sides.

At the end, compute the area:  1/2 times the radius times n times the chord of the n-gon gives the area of the polygon with 2n sides.  Since r = 10, the area of the circle is pi times 100.

The estimate based on the dodecagon chord is then 1/2 × 10 × 12 × 5.17638, and dividing by 100 gives 3.10582854.

In the end, they obtain 314 + 64/625 and 314 + 169/625 as bounds on the area of the circle, that is, on 100 pi (it's not clear to me how the upper bound was obtained).

In decimal, this means that 3.141024 < pi < 3.142704, which is slightly better than Archimedes.  

Apparently, Zu Chongzhi (5th century CE) went farther, far enough to recognize that 355/113 matches an improved estimate very well.  

The wikipedia article on Zu Chongzhi claims that he is "most notable for calculating pi as between 3.1415926 and 3.1415927".  It claims that his works show he calculated pi to 6 digits using a "12,288-gon".  

That's obtained by doubling a hexagon 11 times (6 × 2^11 = 12,288).  Using Python the other day, we obtained the sixth digit (after the decimal point) by round 9 of hexagon doubling as well.

The answer as to how Ptolemy missed it seems to be that he didn't know the true value of pi well enough to believe that 355/113 was any better than 377/120.  The other possibility is that he was led to his value by some logic, and didn't check nearby values systematically.

I wrote a Python program to find rational approximations to (irrational) numbers.  The gist is here.

355/113 is a spectacular success.  

No other value comes close for a long time.  Aside from 355/113, every other fraction that becomes a better approximation is a very gradual improvement on what came before.  The first fraction to be a better approximation for pi than 355/113 is 52163/16604.
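The search itself can be a few lines (a sketch of the idea, not the gist):

from math import pi

# print each fraction that beats every smaller denominator
best = float('inf')
for q in range(1, 20001):
    p = round(pi * q)
    err = abs(pi - p / q)
    if err < best:
        best = err
        print(f"{p}/{q}  error {err:.3g}")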

A Dedekind cut has to land somewhere.  I suppose 355/113 just lands particularly close to pi by accident.

Here is a plot of the errors.  The x-axis is the denominator.  The log of the absolute value of the difference from pi is plotted for all fractions that are closest to pi for a given denominator and also in lowest terms.  (gist).  You can see how special 355/113 is, and it stays that way (not shown).


I've written up the math for the area and perimeter methods for estimating pi and compared it to Archimedes (here).

Saturday, April 17, 2021

Archimedes and pi

In this post we take a look at Archimedes' approximation of the value of pi.  I'll try to make things as simple as I possibly can, but no simpler.

Here is the fundamental idea.  We will approximate the distance around a circle, called its circumference, by the distance around a regular polygon that just fits inside the circle, called its perimeter.  The points or vertices of the polygon will lie on the circle.

                                                 

It doesn't actually matter what kind of a polygon it is (i.e. how many sides there are), but it has to be regular, with sides all of the same length.  

For an inscribed polygon, the perimeter will be smaller than the circumference of the circle.

But the trick is, there is a way to compute the new perimeter for a polygon with twice the number of sides, 2n, given the original perimeter for n sides.  

Let's look at what that means physically.  Actually, I'm going to show below just a small arc of the circle.  One side of our initial polygon will be colored red, and the chord it makes is the red line in the figure below.  The white area shows that it doesn't track very closely with the circle, and its length will be smaller than the length of this arc of the circle.


But then, when we double the number of sides, we get a polygon of 2n sides, and its perimeter follows the two blue lines.  This is clearly more like the circumference; there is less white space.  The one side has become two.

Double the number of sides again, to 4n.  The result is the magenta perimeter.  I hope you can see that as the number of sides gets larger, the perimeter of the polygon becomes a better and better approximation to the circumference of the circle.

This method is great, if you can find a way to express the new perimeter in terms of the old one each time. This is called the method of exhaustion and it was probably invented by a Greek mathematician named Eudoxus who lived before Archimedes.

To make things easier, we will only compute the lower limit.  That is, we will use inscribed polygons and compute their perimeters as we double the number of sides again and again.  

Python will do the actual computation.  Archimedes has this estimate in the end as 3 + 10/71, or about 3.140845.  Let's see how we compare.

Remember that pi is the ratio of the circumference of a circle, C, to its diameter.  If we use a circle whose diameter is equal to 1, then the circumference C will be equal to pi.  

The procedure we follow here is to inscribe a hexagon into a circle of diameter 1, and then calculate the length of the perimeter of the hexagon.  We do this because a hexagon's perimeter is particularly easy to find.

We will then derive a simple method to calculate the length of the perimeter after the number of sides is doubled (to give a 12-sided polygon).  This method will be applied repeatedly.  Archimedes reached 96 sides.  That is 4 doublings (6 - 12 - 24 - 48 - 96).

What he achieved is not just an estimate for the value of pi, it is an algorithm to compute the value of pi to any desired accuracy.

Begin with the hexagon.  A hexagon can be viewed as a group of six equilateral triangles.  Since the diameter of the circle will be 1, we choose the length of each triangle's side as 1/2.  This matches the diameter, made from two sides, and it results in a total perimeter of 6 times 1/2 or just 3.  

3 is our first lower bound for pi.

I introduced the side length of 1/2 above in order to see, even before we really get started, what the perimeter of the hexagon is and give a first estimate for the value of pi.  I must ask you to set that value aside for a minute.

We will need two ratios of sides in a right triangle, and it will make the math slightly easier if we scale the next equilateral triangle up to have a side length of 2 (we'll scale it back later, all we need are the ratios).  

Obtain a right triangle by cutting an equilateral triangle of side length 2 in half, so the base of the halved triangle is 1.  Pythagoras gives us the other side as the square root of 3, since 1^2 + (sqrt 3)^2 = 2^2.

Since the original triangle was equilateral, with angles of 60 degrees each, the halved triangle has one angle of 30 degrees.  Archimedes would call that measure one-third of a right angle, and the essential ratios for him would be 2 to 1 (hypotenuse to short side), and the square root of 3 to 1.

We have the angle that we're going to start with, 30 degrees, and we have two essential ratios.

The next graphic is a proof without words of Thales' famous circle theorem:  any angle inscribed in a semicircle is a right angle.  Since PQ is a diameter, the angle PRQ is a right angle.  

Proof.  The two smaller triangles formed by the radius OR are both isosceles, so by Euclid I.5 they have equal base angles.  Since the total of all the angles in a triangle is equal to two right angles, and angle PRQ is one-half of that total, the result follows.



We're going to choose the angle P to have the particular value of 30 degrees.  Then the complementary angle Q is 60 degrees, and RQ can be viewed as one side of an inscribed hexagon.  (We won't prove it, but the peripheral angle P is one-half the measure of the central angle ROQ which cuts off the same arc of the circle).  That makes ROQ equilateral.

The ratios that we had above will apply, because P is a 30 degree angle in a right triangle.  So the ratio of PQ to RQ is 2, and the ratio of PR to RQ is the square root of 3.

We said that we will inscribe a hexagon into the circle.  I don't actually have a picture of that, but I do have a picture of an octagon, so I'll show that, and ask you to use your imagination.

Let us just focus on one of the six sides.  Suppose ROQ is an equilateral triangle, as we had before, so that RQ is one of the six sides of an inscribed hexagon.  We approximated the value of pi by finding the length of RQ (in diameter units) and then multiplying by n = 6.

Our procedure is going to be as follows:  at each step we will halve the angle at P, then compute the new length SQ.  After one cycle, SQ is one side of a 12-sided polygon, so we multiply by 12 to get an approximation for pi.

To carry out this program, we need one more preliminary result.  It is called the angle bisector theorem.  This theorem provides a way to start with the special ratios for any angle, and find the same ratios for one-half that angle.

The line segment b bisects the angle at the left, so the two half-angles labeled theta are equal.  In the upper triangle, we draw the altitude to side c and label it d.  We can do this because the larger of the two triangles formed by the altitude is congruent to the triangle in red with sides a,b,d.  

Next, the small triangle with sides e and d at the top right is similar to the whole starting triangle with sides a,f,c (both are right triangles with one smaller angle gamma).  Similar triangles give us this ratio:

e/d = c/a

Now add 1 to both sides

e/d + d/d = c/a + a/a

f/d = (c + a)/a

Cross-multiplying gives f · a = d · (c + a); now divide both sides by f:

a/d = c/f + a/f

On the right-hand side we have what are called in trigonometry the cosecant and cotangent of the angle 2 theta.  For the case of 2 theta equal to 30 degrees, these are the ratios from above:  c/f = 2 and a/f = square root of 3.  The left-hand side is the cotangent of the half-angle, theta, which is obtained by simple addition.  So the cotangent of 15 degrees is 2 + square root of 3.

Let's just check:
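cot 15° = cos 15° / sin 15° = 0.96593 / 0.25882 = 3.73205 = 2 + sqrt(3)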


To get the other ratio for the half-angle (b/d), recall that our new triangle has sides a,b,d, and we know the ratio a/d.  But we also know from the Pythagorean theorem that

a^2 + d^2 = b^2 

so

b/d = sqrt(1 + a^2/d^2)

And that's it!  

Going back to this figure

We have that PR/RQ = square root of 3 and PQ/RQ = 2.  We compute the ratios for the half-angle, PS/SQ and PQ/SQ using the method laid out above. 

To get an estimate for pi, invert the last ratio to get SQ/PQ, recall that PQ is equal to 1, and multiply by the number of sides.  We've doubled the number of sides so n = 12. 

The essential code is:
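In sketch form (reconstructed from the description below; the gist has the full program):

import math

def archimedes():
    # begin with the inscribed hexagon: theta = 30 degrees, n = 6 sides
    s, c = 0.5, math.sqrt(3) / 2         # sin 30, cos 30
    n = 6
    while True:
        yield n * s                      # perimeter of the n-gon, diameter 1
        cot = (1 + c) / s                # cot(theta/2) = csc(theta) + cot(theta)
        s = 1 / math.sqrt(1 + cot * cot) # sin(theta/2), since 1 + cot^2 = csc^2
        c = cot * s                      # cos(theta/2)
        n *= 2

g = archimedes()
for _ in range(5):
    print(next(g))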

I set it up as a generator, which is initialized with the values of sine and cosine for 30 degrees.  A gist for the whole program is here.

Here's a screenshot of the results:


The fifth entry gives the result from four doublings as 3.141032.  Archimedes has this estimate in the end as 3 + 10/71, or about 3.140845.  His lower bound is smaller than ours, because we've used a powerful computer to do very accurate calculations.

One interesting aspect of the whole story is to think about how he may have obtained a rational approximation to the square root of three (all of his calculations were with rational numbers).  That's for another day.

Many thanks to the author of this page and to Neil Carothers.  (His pages have since been disappeared by his university, which hosted them; you may be able to find them using the Wayback Machine.)  The post you're reading does things the way Archimedes did them, but there is a simpler formula using the values of the perimeter itself (see my pdf).

Monday, April 12, 2021

Ptolemy again

 My last post was about Ptolemy's theorem.  In the meantime, I've come across a cool "proof without words" for this.

credit

I can see how, knowing that we want ac + bd, you might be led to work on scaling triangles into the parallelogram, which would then give all the angles for the central triangle (although note the switch in order for gamma and delta).

It's simply beautiful.

Monday, April 5, 2021

Ptolemy's theorem

 Ptolemy's theorem says that for a cyclic quadrilateral, the sum of the products of opposite sides is equal to the product of the diagonals:

AB CD + BC AD = AC BD




Proof. adapted from wikipedia (here).

Let the angle s (red dot) be the inscribed angle subtending arc AB, and the angle t (black dot) the inscribed angle subtending arc CD.  Then the angle DPC, at the point P where the diagonals cross, measures s + t, and its sine is sin(s + t).  The other angle at P, APD, has the same sine, as it is supplementary to s + t.

Let the diagonals be divided at P into segments:  AC = q + s and BD = p + r (this s is a segment length, not the angle; the letters come from the figure).

Twice the areas of the four small triangles will then be equal to

(pq + qr + rs + sp) sin(s + t)

Simple algebra will show that 

(pq + qr + rs + sp) = (p + r)(q + s) = AC BD

The product of the diagonals, times the sine of either angle at P, is equal to twice the area of the quadrilateral.  We're on to something.

Now, the great idea.  Move D to D', so that AD' = CD and CD' = AD.  

The triangles ACD and ACD' are congruent, by SSS, so they have the same area.

Therefore the area of ABCD is equal to the area of ABCD'.

Some of the angles switch with the arcs.  In particular, angle t (black dot) now subtends arc AD'.  As a result s + t is the measure of the whole angle at vertex C.  The whole angle at vertex A is supplementary, and the sine of the whole angle at vertex A is equal to that at C.

So twice the area of triangle ABD' is AB · AD' · sin(s + t), and twice that of BCD' is BC · CD' · sin(s + t).  Add these two areas, equate them with the previous result, and factor out the common term sin(s + t):

AC BD = AB AD' + BC CD' 

But AD' = CD and CD' = AD so

AC BD = AB CD + BC AD

This is Ptolemy's theorem. 
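A quick numerical sanity check, using complex numbers for points on the unit circle (my own sketch):

import cmath, math, random

# four random points on the unit circle, taken in circular order
ts = sorted(random.uniform(0, 2 * math.pi) for _ in range(4))
A, B, C, D = (cmath.exp(1j * t) for t in ts)

lhs = abs(A - B) * abs(C - D) + abs(B - C) * abs(A - D)
rhs = abs(A - C) * abs(B - D)
print(lhs, rhs)   # equal, up to rounding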

Here is one result from the theorem.  Draw an equilateral triangle and its circumcircle.  Pick any other point P on the circle and connect it to the vertices as shown.

Ptolemy, applied to the cyclic quadrilateral formed by the triangle's vertices and P, says that

p s = q s + r s

where s is the side of the triangle and p is the distance from P to the far vertex.  Dividing by s:

p = q + r

Monday, March 29, 2021

Image conversion for FB



I got an iPhone 11 Pro about 18 months ago.  I love the quality of the images.  Low-light performance is even more amazing.

But the image file type is .HEIC.  I have a recollection that Facebook didn't accept .HEIC at the time; trying again now, I find that it does.  In any event, my workflow since then has included opening all the images in Preview and manually exporting them to relatively low quality .jpg files, one by one.

I finally got tired of this.

I found a page where the guy shows how to set up the macOS Automator app to tell Preview to do the conversion.  Some day I might try that.  But it got me thinking.  

I used trusty old Homebrew to install imagemagick.  (It took a while, there are *a lot* of dependencies).  Anyhow, so then I just cd to the Downloads directory and do this in Terminal:

magick convert -quality 10 IMG_9836.HEIC IMG_9836.jpg

It works great.  I get a 10-fold compressed jpg compared to the original.  So even if FB can now handle .HEIC I'm saving all that bandwidth on the upload.

The next problem was how to specify the new filenames for batch mode.  One could use sed, but it's so awkward to read.  Or I could always go to Python to bundle up the processes, but eventually I found a one-liner that works in Terminal:

for f in *.HEIC; do magick convert -quality 10 "$f" "${f:0:9}jpg"; done

The first filename is the source, like IMG_9836.HEIC.  The second is ${f:0:9}jpg:  the expansion ${f:0:9} extracts the first nine characters of the string variable, "IMG_9836." including the dot, and then jpg is appended.  I didn't know you could do this in the shell.

It's a hack, but it works in this case because the filenames are all exactly the same length.  (The expansion ${f%.HEIC}.jpg would work regardless of length.)  Anyway, it's great.  As long as you're not afraid of Terminal.


Thursday, January 14, 2021

Senator Hawley, provocateur

Sen. Josh Hawley of Missouri voted to object to the certification of the ballots of PA electors.  A statement from him that was printed in a Missouri newspaper is also available on Twitter.  Hawley claims that when the PA legislature authorized mail-in ballots what they did was unconstitutional. 

The "Pennsylvania Supreme Court .. changed the rules for when mail-in ballots could be returned ... the Pennsylvania Supreme Court threw out the case on on procedural grounds, in violation of its own precedent.  To this day, no court has found the mail-in voting scheme to be constitutional, or even heard the merits of the case."

It is true that they tried to change the rules for when mail-in ballots could be returned, but that is a different case.  Hawley is deliberately conflating two different issues.  The question about timing of return was addressed by the U.S. Supreme Court. This case is about whether absentee voting is constitutional.  (Hawley is being dishonest).

So then, here's a bit of history about absentee voting (voting by mail, or mail-in ballots) in PA.  You can read what the PA Constitution says about absentee voting here.

"The Legislature shall provide a manner in which .. qualified electors .. [who] are unable to attend at their proper polling places .. may vote, and for the return and canvass of their votes".

And it gives examples of reasons:  "because of illness or physical disability or who will not attend a polling place because of the observance of a religious holiday or who cannot vote because of election day duties, in the case of a county employee".

Sen. Hawley relies upon a restrictive reading of this text where he claims that what the PA legislature did was unconstitutional (that only the listed reasons are valid).  There has been a lot of discussion about why that is not the case (speech by Sen. Casey, good newspaper review).  It is true that the PA Supreme Court has not actually ruled on this issue.

 In October, 2019 (not "last year"), the PA legislature passed, and the governor signed into law Act 77 of 2019 in Pennsylvania (P.L. 552, No. 77).  You can read the text here, but like most legislation it is frankly, unreadable.  It provides for universal mail-in ballots.

It is noteworthy that the bill was introduced by Republicans and passed with majority support by Republicans.  Several previous elections have been run using these procedures.

On Nov. 21 (18 days after the 2020 presidential election) a suit was filed (by U.S. Representative Mike Kelly and others) making the claim of unconstitutionality.  The PA Supreme Court ruled against them on Nov. 28, on the grounds that their claim was untimely.  You can read the Order here.

"As a remedy, Petitioners sought to invalidate the ballots of the millions of Pennsylvania voters who utilized the mail-in voting procedures established by Act 77 and count only those ballots that Petitioners deem to be “legal votes."

In a concurrence, Justice Wecht wrote

"Unsatisfied with the results of that wager, they would now flip over the table, scattering to the shadows the votes of millions of Pennsylvanians,"

In summary then, Republicans voted to allow mail-in ballots in PA, ran several elections this way without challenging (their own) law, and then filed suit to challenge the process, more than two weeks after their preferred candidate for President lost the election in PA.

Not only did the PA Supreme Court dismiss this case, but it is a fundamental tenet of election law (as promulgated by the U.S. Supreme Court), that authorities may not change the rules after the election.  Voters are entitled to rely on the rules established beforehand.

You can read a good post about election law here, and an NPR article about the recent case here.

Sen. Hawley is dishonest, but you knew that.  Was this done deliberately by the GOP with the plan to try to overturn the election?  Who knows?  

What seems clear is that the modern GOP is the party of voter suppression, the party that (arguably) wants to return to Jim Crow, by disenfranchising Black voters.