Previously, a function was defined that generates a probability density for any value of a random variable x, given a probability density function (pdf) with a specified mean and standard deviation. Recall that for a continuous distribution, the actual probability at any particular x is zero, since the number of possible x's is infinite. Technically, the pdf is defined as the derivative of the cdf, the cumulative *distribution* function, which I misremembered as the cumulative *density* function. So cdf(x) is the area under the pdf from negative infinity to x.
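For reference, here is a minimal sketch of the idea (function and variable names are mine, not from the original post): the normal pdf written directly, and the cdf approximated by accumulating area under it.

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of a normal distribution at x."""
    coeff = 1.0 / (sigma * np.sqrt(2 * np.pi))
    return coeff * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# The cdf is the area under the pdf from -inf to x; approximate it
# numerically by accumulating pdf values times the grid spacing.
xs = np.linspace(-5, 5, 1001)
dx = xs[1] - xs[0]
cdf = np.cumsum(normal_pdf(xs)) * dx
# cdf[-1] is close to 1, the total probability
```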

One nice thing about this approach is it is then easy to define a sum of weighted distributions.
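As a sketch of that point (again with names of my own choosing): a mixture is just a weighted sum of component pdfs, and as long as the weights sum to 1, the result is itself a valid pdf.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    coeff = 1.0 / (sigma * np.sqrt(2 * np.pi))
    return coeff * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def mixture_pdf(x, params):
    """Weighted sum of normal pdfs.

    params is a list of (weight, mu, sigma) tuples; the weights
    should sum to 1 so the mixture is still a valid pdf."""
    return sum(w * normal_pdf(x, mu, sigma) for w, mu, sigma in params)

xs = np.linspace(-10, 10, 2001)
dx = xs[1] - xs[0]
ys = mixture_pdf(xs, [(0.6, -2.0, 1.0), (0.4, 3.0, 0.5)])
# the mixture still integrates to (approximately) 1
```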

We get what looks like a smooth curve by plotting a (relatively) large number of points (in this example, 3502). The plotting uses matplotlib (see this post on setting it up on the Mac). The cdf is computed simply by accumulating values from the pdf. Normalization is usually done by dividing by the total, but the method I showed was slightly more subtle:

Unfortunately, it is also wrong! (Sorry.) I won't try to explain what I was thinking, but the fact that it gave an accurate result (as shown by the maximum of the cdf being equal to 1) is an accident. Instead, you should simply divide by the sum of the values, and moreover, do the operation on the pdf before constructing the cdf:
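In sketch form (variable names mine, and any pdf shape will do), the fix amounts to normalizing the discretized pdf first and only then accumulating:

```python
import numpy as np

xs = np.linspace(-5, 5, 1001)
pdf = np.exp(-0.5 * xs**2)      # unnormalized shape is enough here

# Normalize the discretized pdf first, then accumulate: the cdf
# then ends at exactly 1 by construction.
pdf = pdf / np.sum(pdf)
cdf = np.cumsum(pdf)
# cdf[-1] == 1.0 (up to floating point)
```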

This change to the pdf means we need to magnify it before plotting; really, we should provide a second y-axis on the right-hand side showing the true values. The left-hand y-axis is accurate only for the cdf.
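One way to get that second y-axis in matplotlib (a sketch, not the original plotting code) is `twinx`, which adds a second axes sharing the same x-axis:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # non-interactive backend, so this runs anywhere
import matplotlib.pyplot as plt

xs = np.linspace(-5, 5, 1001)
pdf = np.exp(-0.5 * xs**2)
pdf /= pdf.sum()
cdf = np.cumsum(pdf)

fig, ax_left = plt.subplots()
ax_left.plot(xs, cdf, "r")        # left axis: cdf, runs 0..1
ax_left.set_ylabel("cdf")
ax_right = ax_left.twinx()        # right axis shares the x-axis
ax_right.plot(xs, pdf, "b")       # right axis: true pdf values
ax_right.set_ylabel("pdf")
fig.savefig("plot.png")
```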

The idea for sampling is to generate a random float using `np.random.random()`, and then ask which of the values in the cdf (which by definition are ordered) first exceeds this value. The indexes resulting from repeating this procedure are concentrated in the steep parts of the cdf (the peaks of the pdf), because the probability that a given position in our "discretized" form of the cdf satisfies this relationship is proportional to the slope of the cdf curve (the added vertical distance between a given index and the one previous).

As a reader suggested, an improvement to the code is to recognize that the list we're searching (the cdf) is

*ordered* and so can be searched more efficiently using a binary search. The code is a little tricky to write, so I skipped it last time. Luckily, Python comes with "batteries included", and for this application what we want is the `bisect` module from the standard library. The example `find_le` function's docstring says: 'Find rightmost value less than or equal to x'. I just modified this code (which calls `bisect.bisect_right`) to return the index rather than the value. We're interested in intervals where the slope is steepest. The original `find_first` function returned the index of the right-hand value, while this function returns the index of the left-hand value. I suppose either one is fine, but perhaps it would be better to use the midpoint.

There are a few more steps in the code that are a little obscure, including bins with fractional width and the use of the `Counter` class to organize the data for the histogram. But this post is getting a bit long, so I'll skip them for now.

Modified code:
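The original listing isn't reproduced here, but a minimal sketch of the bisect-based sampler described above might look like this (the `sample` function, grid, and pdf shape are my own choices):

```python
import bisect
import numpy as np

def sample(cdf_values, xs, n):
    """Draw n samples from the distribution described by a
    discretized cdf.  For each random float r in [0, 1), binary
    search finds the first index whose cdf value exceeds r."""
    out = []
    for r in np.random.random(n):
        i = bisect.bisect_right(cdf_values, r)
        i = min(i, len(xs) - 1)   # guard: r may land past the last bin
        out.append(xs[i])
    return out

xs = np.linspace(-5, 5, 1001)
pdf = np.exp(-0.5 * xs**2)
pdf /= pdf.sum()
cdf = list(np.cumsum(pdf))
samples = sample(cdf, xs, 10000)
# the samples cluster where the cdf is steepest, i.e. near x = 0
```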

## 1 comment:

Are you saying that the code in your previous post is incorrect? If so, you should probably update that post to say it's wrong. As it stands now, most people visiting the incorrect code may never know.
