“We must minimize the need to change existing code. For adoption in existing code, decades of experience has consistently shown that most customers with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules, not even for safety reasons unless regulatory requirements compel them to do so.” – Herb Sutter
> with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules
Do people really say this? Do they voice this in committee? I have been at a few companies, including one fairly large one, and all were happy to upgrade to newer standards, were looking forward to it, and already spent a lot of time updating their build systems. Changing 1% of the code on top of that is probably not that much in comparison.
Quite a few companies have millions and millions of lines of code. Changing 1% of that would mean changing more than 10K lines, perhaps even more than 100K. And in such large code bases, changing anything risks breaking something: not just because you might make a mistake, but because the program is full of Undefined Behaviour, and any change might make latent bugs manifest.
Given that, I'm not surprised people repeat that Sutter quote with a straight face.
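A contrived sketch of the kind of latent bug meant above; the struct and values are invented purely for illustration:

```cpp
#include <cstdio>

struct Config {
    int values[4];
    int sentinel;  // happens to sit immediately after the array in memory
};

// Off-by-one: i == 4 reads past the end of values[], which is Undefined
// Behaviour. In practice it often reads `sentinel`, so the bug stays latent.
int sum(const Config& c) {
    int total = 0;
    for (int i = 0; i <= 4; ++i)
        total += c.values[i];
    return total;
}

int main() {
    Config c{{1, 2, 3, 4}, 0};
    // Prints 10 today because the stray read happens to hit the zero sentinel.
    // Reorder the members, change the sentinel, or upgrade the optimizer, and
    // the "working" program changes behaviour.
    std::printf("%d\n", sum(c));
}
```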
Many of my customers are in an industry with a huge C++ code base, all of it under active development. Safety certification requirements are onerous and development lead times are long: many are now experimenting with C++17, and C++20 is on the long-term horizon but not yet a requirement. Because of the safety certification requirements, and because the expected lifecycle of the software is on the order of decades after their products ship, changing any lines of their code for any reason is always risky. Lives can be at stake.
But this is a multi-billion-dollar industry. If you're scripting a little browser "app" for a phone, things may be different.
Is there a lot of manual work involved in getting the new certificate? E.g. is a human reviewing the code? If not, someone should build a CI pipeline for the certification process.
Hundreds of hours of manual testing. I don't have to do safety certification, but my code gets 500 hours of manual testing (I'm not allowed to give real numbers; these are close enough). Testers find enough critical can't-ship issues, where the fix is risky enough to force starting all over, that we typically end up doing 2500 hours of manual testing on every release.
We have a large automated test suite that runs on every build and takes hours. The problem with automated tests is that they only verify that the situations you thought of work the way you think they should, while human testers find slight variations of setup that you wouldn't think matter until they do. Human testers also find cases where the way you expect things to work doesn't make sense in the real world.
Wait until you find out about the cat test. It found a failure mode no human had thought of. No amount of the developer protesting that such a test wasn't fair was enough to invalidate the results. No actual cats were harmed, but treats may have been given.
Do you have more context? I'm having trouble googling what you're referencing.
Simulate a cat walking on the keyboard to handle weird inputs?
Isn't that just fuzzing? I thought maybe there was a specific thing called the cat test.
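For what it's worth, basic fuzzing in the "cat on the keyboard" sense looks something like this; `parse_input` is a hypothetical stand-in for whatever code is under test:

```cpp
#include <random>
#include <string>

// Hypothetical stand-in for the code under test; a real harness would call
// the actual input-handling routine.
bool parse_input(const std::string& input) {
    return !input.empty() && static_cast<unsigned char>(input[0]) >= 0x20;
}

int main() {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> byte(0, 255);

    // Hammer the parser with random byte strings; any crash, hang, or failed
    // assertion is a finding, much like the cat's paw across the keyboard.
    for (int run = 0; run < 100000; ++run) {
        std::string input;
        for (int i = 0; i < 64; ++i)
            input.push_back(static_cast<char>(byte(rng)));
        parse_input(input);
    }
    return 0;
}
```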
People just don't make mass changes to existing working code. Mostly they cannot. Even if the tooling were available, which it isn't, it's also about re-educating their developers, who don't want to change or can't. Plus the code would have to be recertified. It's all cost with no benefit.
Except, allegedly, at Google. But is there any evidence they actually do this, e.g. in public code bases? Or is it just hype?
Google does this to its internal monorepo.
This is one of the reasons they are bad at open-sourcing: their internal code almost never matches what is released.
Could be selection bias. Companies (or departments within companies) that are still actively developing their C++ code probably tend to hire more developers and consultants than companies doing minimal maintenance on their code base, and that might correlate well with the “two factions of C++” discussed here.
“Our code is an asset” ⇒ code kept up-to-date
“Our code is a burden, but we need it” ⇒ change averse
> Changing 1% of the code on top of that is probably not that much in comparison
Changing 1% across all modules is a nightmare. Changing one module that makes up 1% of the code is nothing.
A company I worked at went through a few very large C++-related migrations, and they were all very, very expensive.
The first was removing `long` from the code, since a lot of code assumed its size (is it like `int` or like `long long int`?), and as machines were upgraded this caused problems; see the sketch at the end of this comment.
The second was moving to C++11/14/17. Most of the difficulty was toolchains on unixen that did not support the new versions of the language, or for which support was incomplete, or for which upgrading to a version with support broke existing builds.
The third was moving to Linux from big iron unixen. As far as I understand, this initiative is still underway. It was already underway in 2011 when I joined the company.
This is a rich company with a large, healthy engineering department. I imagine that most other companies would not or could not bother.
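The `long` trap mentioned above, in a minimal sketch; the values here are made up for illustration:

```cpp
#include <climits>
#include <cstdint>
#include <cstdio>

int main() {
    // LP64 platforms (Linux, macOS): long is 64 bits.
    // LLP64 (64-bit Windows) and most 32-bit targets: long is 32 bits.
    std::printf("long: %zu bits\n", sizeof(long) * CHAR_BIT);

    // Code that stuffed a 64-bit quantity into `long` worked on the old
    // machines and typically truncates where long is 32 bits.
    long long offset = 0x123456789LL;
    long assumed = static_cast<long>(offset);
    std::printf("offset=%lld assumed=%ld\n", offset, assumed);

    // The migration fix: fixed-width types say what you mean everywhere.
    std::int64_t safe = offset;
    std::printf("safe=%lld\n", static_cast<long long>(safe));
}
```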
That old joke about Stroustrup inventing C++ to keep developers perpetually employed keeps ringing true.
Are you referring to his book written 20 years ago, or 25? As for "customers with large [C++] code bases": there aren't that many of these. Vendors, government. With code bases that have stewards, not programmers.