I really, really like this article. I think the two camps the author describes very much reflect my experience over the past couple of decades at a dotcom startup, then a game developer, and now at Google.
However, I think the author is a little off on the root cause. They emphasize tooling: the ability to build reliably and cleanly from source. That's a piece of it, but a relatively small piece.
I think the real distinguishing factor between the two camps is automated testing. The author mentions testing a couple of times, but I want to emphasize how critical that is.
If you don't have a comprehensive set of test suites that you are willing to rely on when making code changes, then your source code is a black box. It doesn't matter if you have the world's greatest automated refactoring tools that output the most beautiful looking code changes. If you don't have automated tests to validate that the change doesn't break an app and cost the company money, you won't be able to land it.
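To make that concrete, here's a minimal sketch of the kind of test that lets you rely on the suite when changing code (my illustration, not from the article; damage_after_armor is a made-up function, and I'm assuming GoogleTest for the harness):

    // Pins down existing behavior so a later refactor can be validated
    // against it. Build and link against GoogleTest (gtest_main).
    #include <gtest/gtest.h>

    // Hypothetical function under test.
    int damage_after_armor(int raw_damage, int armor) {
        return raw_damage > armor ? raw_damage - armor : 0;
    }

    TEST(DamageTest, ArmorReducesDamage) {
        EXPECT_EQ(damage_after_armor(10, 3), 7);
    }

    TEST(DamageTest, ArmorNeverHealsTheTarget) {
        EXPECT_EQ(damage_after_armor(3, 10), 0);  // Clamped at zero, never negative.
    }

With tests like these in place, a refactoring tool's output stops being a leap of faith: run the suite, and a regression shows up before the change lands.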
Working on a "legacy C++ app" (Madden NFL back when I was at EA, for example) meant working on a giant black box. You could fairly confidently add new features and new code onto the side. But if you wanted to touch existing code, you needed a very compelling reason, one that outweighed the risk of breaking something unexpectedly. Without automated tests, there was simply no reliable way to determine whether a change caused a regression.
And, because C++ is C++, even entirely harmless-seeming code changes can cause regressions. Once you've got things like reinterpret_cast<> in play, damn near any change can break damn near anything else.
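A contrived sketch of what I mean (my example; PlayerState and send_over_network are made-up names):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Some distant subsystem serializes this struct by reinterpreting its bytes.
    struct PlayerState {
        std::uint32_t id;
        float stamina;
        // Adding a field here looks harmless, but it silently changes
        // sizeof(PlayerState) and the byte layout the code below depends on.
    };

    void send_over_network(const PlayerState& p) {
        // Far from the struct definition, someone depends on the exact layout.
        const auto* raw = reinterpret_cast<const unsigned char*>(&p);
        for (std::size_t i = 0; i < sizeof(PlayerState); ++i) {
            std::printf("%02x ", raw[i]);
        }
        std::printf("\n");
    }

    int main() {
        PlayerState p{42, 0.75f};
        send_over_network(p);  // The receiver decodes by byte offset; any
                               // layout change is a silent wire-format break.
    }

The compiler happily accepts both the before and after versions; nothing flags that a one-line struct edit changed what goes over the wire.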
So people working in these codebases behave sort of like surgeons with a "do no harm" philosophy. They touch as little as possible, as non-invasively as possible. Otherwise, the risk of harming the patient is too high.
It's a miserable way to program long-term. But it's really hard to get out of that mess once you're in it. It takes a monumental amount of political capital from engineering leadership to build a strong testing culture, re-architect a codebase to be testable, and write all the tests.
A lot of C++ committee changes aimed at legacy C++ developers are about "how can we help these people that are already in a mess survive?" That's a very different problem than asking, "Given a healthy, tested codebase, how can we make developers working in it go faster?"
> A lot of C++ committee changes aimed at legacy C++ developers are about "how can we help these people that are already in a mess survive?" That's a very different problem than asking, "Given a healthy, tested codebase, how can we make developers working in it go faster?"
Having also worked at a few gamedev studios, IME there isn't a real distinction between the two, since it's always just a matter of time before the former becomes the latter.
Sometimes it doesn't even take that long. All it takes is a single innocuous vertical slice with a pointlessly immovable deadline to inject enough harm into a codebase that you spend the next year fighting bugs that shouldn't have existed in the first place, while also having to do everything else at the same time (and all planned timeframes made with only the "everything else" in mind, of course).
IMO even if it doesn't sound good, it's much more practical to learn how to deal with the mud than to assume pigs do not exist :-P
> Having also worked at a few gamedev studios, IME there isn't a real distinction between the two, since it's always just a matter of time before the former becomes the latter.
That was very much my experience at EA, but has definitely not been my experience at Google. While everyone struggles with tech debt, at Google I've worked in many codebases that have been continuously well-maintained with good test coverage for over a decade.
Really, once you build a culture that says, "People not on your team may edit your code without asking and will rely on your tests to make sure they don't break things," teams get highly incentivized to write tests.
Agreed. As much as I want it to be simpler to build C++ programs from source, it's pretty much always _possible_ in my experience; it's just frequently a PITA.
I think that tests are a sure-fire way to improve the quality of your code, but I'd throw another piece into the mix: sanitizers [0]. Projects that have good tests and run them regularly under TSan/ASan/UBSan are, in my experience, much better to work on, because it's much less likely that deep-seated issues are lurking. It gives you increased confidence that you're not introducing hard-to-detect issues as you go.
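For anyone who hasn't tried them, a minimal sketch of the workflow (my example; the overflow bug is contrived):

    // Build the test binary with sanitizers enabled, e.g. with clang:
    //   clang++ -fsanitize=address,undefined -g -O1 test.cpp -o test
    // (TSan needs a separate build: -fsanitize=thread can't be combined
    // with ASan.)
    #include <climits>
    #include <cstdio>

    int next_frame_id(int id) {
        return id + 1;  // Signed overflow when id == INT_MAX: undefined behavior.
    }

    int main() {
        std::printf("%d\n", next_frame_id(INT_MAX));
        // Without UBSan this usually "works" and silently wraps; with
        // -fsanitize=undefined the runtime reports the overflow the moment
        // a test exercises it, instead of it lurking as a deep-seated issue.
    }

Same idea for TSan: run the suite under a thread-sanitized build in CI, and data races surface as reports instead of heisenbugs.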
These tools aren't exclusive to C++, either. I've said the same thing about C, Go, Zig, Odin, etc. Projects that use them (and have good automated tests) tend to be in good shape, and projects that don't tend to take a long time to make any progress on.