> Nobody who cares about performance enough to look at a benchmark like this would do large loops in pure python.
Because they already know it would be very slow, and putting specific numbers on that is helpful for them and for non-veterans too, even with some wiggle room for "microbenchmarks are weird".
I already know that Python is generally slower than C. This microbenchmark tells me that Python is 100x slower than C, but only under specific circumstances that wouldn't happen in the real world. Those circumstances may or may not be close enough to the real world to infer that C could be 10^(2±1) times faster, and even then only for a small portion of someone's program, and only if they don't use one of the libraries that they probably do use.
Personally, I find it misleading for the results of a benchmark to be posted when it's not made clear up front what the benchmark is measuring. At minimum I'd want a disclaimer covering at least one of the first two points raised in the top comment: https://news.ycombinator.com/item?id=42250205
You keep acting like this code is super far from the real world, but it isn't. It's perfectly representative of basic number crunching.
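For concreteness, here's a minimal sketch of the kind of pure-Python number crunching I mean (my own illustrative loop, not the benchmark's actual code):

    # Illustrative pure-Python number crunching, not the benchmark's actual code:
    # a sum of squares computed one element at a time in the interpreter.
    def sum_of_squares(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    print(sum_of_squares(1_000_000))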
The one serious caveat is that someone who already knows CPython is godawful at this will switch to numpy, but that only reinforces what the benchmark says.
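The numpy rewrite that person would reach for might look roughly like this (again my own sketch), with the per-element work pushed down into numpy's compiled code instead of the interpreter loop:

    # Hypothetical numpy equivalent of the loop above; the heavy lifting
    # happens in vectorized C inside numpy rather than in Python bytecode.
    import numpy as np

    def sum_of_squares_np(n):
        i = np.arange(n, dtype=np.int64)
        return int(np.sum(i * i))

    print(sum_of_squares_np(1_000_000))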
None of the things in the post are going to change the order of magnitude here.