duped 3 days ago

All numerical methods define "correct" as being within a range or to some precision. Very few algorithms require FTZ mode to be "correct"; the linked article and the one it links to don't even give an example. (There are good examples of where, say, -ffinite-math-only is super dangerous, because infs and NaNs are way more common than arithmetic on subnormal numbers.)

And yeah, the fact that crt1.o linked into shared libraries could fuck up the precision of some computations depending on library dependencies (and the order they're loaded!) was bad... but it lingered in the entire Linux ecosystem for over a decade. So how bad was it, really, if it took that long to notice?

If you have a numerical algorithm that requires subnormal arithmetic to converge: a) don't, that's super shaky; b) save/restore MXCSR at the top/bottom of your function and make sure you never unwind the stack without resetting it. It's preserved across context switches, so the OS scheduler isn't going to blow it away.

This isn't Practical Numerical Methods in C 101, but it's at least 201. In practice you don't trust floats for bit-exact math; use different types for that.

dapperdrake 3 days ago

IEEE 754 defaults are for people who don't get deeply into numerical analysis and Cauchy sequences. Like, ostensibly, most FOSS maintainers. Or most people who write software in general.

There are people who do; HPC and the demoscene have numerous examples. Most of the people I met there are capable of reading gcc's manual and picking the optimizations they actually need. And they know how to debug this stuff.

If it's not obvious who gcc's defaults should cater to, then redefine human-friendly until it becomes obvious.