I do wonder how much smaller the STL source code would be if it were pre-processed or written with only a single C++ standard in mind: only C++20, or only C++23, etc. In that case, how much faster would things compile when the compiler doesn't need to filter through hundreds of preprocessor options?
From what I've read on mailing lists and whatnot, a lot of the complexity comes from explicit design choices, like the guarantee that iterators into maps and such are unaffected by insertions[1], or the time-complexity guarantees that force the implementation into certain corners.
[1]: https://kera.name/articles/2011/06/iterator-invalidation-rul...
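To make the first point concrete: std::map promises that insertions never invalidate existing iterators or references, which rules out contiguous or open-addressed storage layouts. A minimal sketch of what the guarantee means in practice (illustrative only):

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<int, std::string> m{{1, "one"}, {3, "three"}};
        auto it = m.find(1);      // iterator into the map

        m.insert({2, "two"});     // inserting into std::map does not invalidate
                                  // existing iterators or references

        std::cout << it->second << '\n';  // still valid, prints "one"
    }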
> In that case, how much faster would things compile when the compiler doesn't need to filter through hundreds of preprocessor options?
I think most of the time isn't spent running the preprocessor, but parsing the declarations and definitions.
Regardless, the way to speed up pulling in definitions in modern C++ is to use module imports (import) instead of #include.
https://news.ycombinator.com/item?id=38904758 reports importing the entire std namespace in under a second (that's a long time if you want to run C++ as a scripting language, but not when you compile large programs).
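As a rough sketch of what that looks like in C++23 (assuming a toolchain that ships the std module, e.g. recent MSVC or Clang/libc++; the exact build flags are toolchain-specific):

    // Pull in the whole standard library as a module instead of textually
    // including headers; the compiler reuses a prebuilt binary representation.
    import std;

    int main() {
        std::vector<int> v{1, 2, 3};
        std::cout << v.size() << '\n';
    }

The win is that the standard library is parsed once into a binary module interface and reused across translation units, instead of re-parsing hundreds of thousands of preprocessed lines every time.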