lazide 15 hours ago

I think it has shaken out the way it has because compile-time optimizations to this extent require knowing runtime constraints/data at compile time. Which for non-trivial situations is impossible, as the code will be run with too many different types of input data, too many different cache sizes, etc.

The CPU has better visibility into the actual runtime situation, so it can do runtime optimization better.

In some ways, it’s like a bytecode/JVM type situation.

PinkSheep 10 hours ago

If we can write code to dispatch different code paths (as has been done for decades for SSE, and later AVX, support within one binary), then we can write code to parallelize large array execution based on heuristics. Not much different from busy spins falling back to sleep/other mechanisms when the fast path fails after ca. 100-1000 attempts to secure a lock.

For the trivial example of 2+2 like above, of course, this is a moot discussion. The commenter should've led with a better example.

lazide 10 hours ago

Sure, but it’s a rare situation (by code path) where it will beat the CPU’s auto optimization, eh?

And when that happens, almost always the developer knows it is that type of situation and will want to tune things themselves anyway.