bagxrvxpepzn 3 days ago

To the people who work on C++ standards: I approve of the current C++ trajectory, so please ignore all of the online noise about "the future of C++." To anyone who disagrees severely with the C++ trajectory as stated, please just consider another language, e.g. Rust. I don't want static lifetime checking in C++; if you want static lifetime checking, please use Rust. I am not a government contractor; if you are a government contractor who must meet bureaucratic, risk-averse government requirements, please use Rust. I have an existing development process that works for me and my customers, and I have no significant demand for lifetime checking. If your development process is shiny and new and necessitates lifetime checking, then please use Rust. To Rust advocates: you can have the US government and big tech. You can even have Linux. Just leave my existing C++ process alone. It works, and the trade-offs we have chosen efficiently accomplish our goals.

aiono 2 days ago

You frame it as if "Rust advocates" were trying to infiltrate C++ language decision-making and inject safety features into it. That's not the case at all. For years the C++ committee simply ignored the need for safety and didn't take Rust and lifetime analysis seriously. But now they want it themselves.

AlotOfReading 3 days ago

C++ has lifetime rules just like Rust. They're simply implicit in the code and not enforced by the compiler. Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?

You're already using a language with a strong type system, so it's confusing to me why you would choose to draw the line here.
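
To make "implicit" concrete, here is a minimal made-up C++17 sketch (my own, not from any real codebase): the rule "don't keep a view into an object that has already been destroyed" very much exists, but nothing enforces it, and this typically compiles without a single diagnostic:

    #include <iostream>
    #include <string>
    #include <string_view>

    // Returns a view into the caller's string; the implicit rule is that the
    // string must outlive the returned view. No compiler enforces that rule.
    std::string_view first_word(const std::string& s) {
        std::string_view view = s;               // borrows s's buffer, owns nothing
        return view.substr(0, s.find(' '));
    }

    int main() {
        // The temporary std::string dies at the end of this statement, so `v`
        // silently dangles; reading it below is undefined behavior.
        std::string_view v = first_word(std::string{"hello world"});
        std::cout << v << '\n';
    }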

bagxrvxpepzn 3 days ago

> Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?

Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true. Your error is assuming static lifetime checking is free. As an engineer, I use judgement to make context-dependent trade offs.

If you like playing the compiler olympics, or your employer forces you to, please use Rust.

zozbot234 2 days ago

"Trivially known to be true" until the code evolves making your unstated assumptions not hold and everything breaks, often in complex and unintuitive ways involving interactions across modules. This is why these automated soundness checks are valuable.

restalis 2 days ago

"until the code evolves [...]"

That is already a desirable place to be, where you have managed to get a working implementation that is ready to evolve. My issue with opinionated languages like Rust is that they make development more expensive. I can then afford the necessary work-effort for fewer projects than I otherwise could if I were free to focus on the problem(s) at hand instead of on that and the other mandatory constraints the compiler forces upon me. I very much want my development tools to limit themselves to being tools: to assist me with the part of the problem I chose to focus on, at little to no cost for their usage. I want to be able to focus on prototyping a working solution first, and only then, if the project's needs really warrant it, to switch to paying the development cost for other aspects, be it safety or whatnot.

wiseowise 3 days ago

> Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true.

And that’s exactly the reason why we need more safety in C++.

I’m terrified by the amount of code in the real world written with this mindset.

virgilp 2 days ago

At the same time, you should recognize that not all real code in the world is used to run planes & thermonuclear power plants. For a lot of the business software, it's actually fine if it's not perfectly safe. So if it's cheaper/faster to develop it without paying the price of static safety checks, who is to say that this was a bad tradeoff?

I actually love the ideas that Rust brought forth. It definitely has a place in the ecosystem, and I'm glad to hear critical software is being rewritten in Rust! But that doesn't mean that C++ should copy it.

AlotOfReading 2 days ago

C++ doesn't permit you to write code that's not perfectly safe. By using a C++ compiler, you're promising that you will write safe code even if the compiler can't verify that, lest nasal demons and other misfortunes fall upon you. If your code isn't safe and you expect that to be fine, you're not writing C++. This is a discussion about C++, so the default assumption is that you'll pay the costs of safe code instead of inventing an ill-specified dialect that happens to do what you want when it's shoved into a C++ compiler.

If you think we should instead evolve C++ so that safety isn't mandatory I'm right there with you, but it's not where the language is today and that discussion has also been shut down by the evolution working group. Moreover, Bjarne's policies mean that telling the critical software people to go fuck off to a different language fundamentally isn't part of the plan either.

virgilp 2 days ago

It is kind of an interesting point you bring up here. However, it's also true that languages (and software in general) are what people make of them, not necessarily what their creators intended. I do believe that the stuff that happens to do what you want when shoved into a C++ compiler _is_, for all intents and purposes, C++. And I kinda think/feel that that is also what the committee is saying: "we want C++ to keep doing that, rather than evolve into a safer thing that is no longer C++".

AlotOfReading 2 days ago

I get the argument, but the community argument doesn't actually change anything. No compiler will guarantee any particular behavior in the presence of memory safety issues. Very few programs happily tolerate random memory corruption or race conditions, etc.

andrewprock 2 days ago

Run valgrind on any large successful code base and you will find tons of memory corruption. It just happens to occur in places where it does not matter.

lmm 2 days ago

> For a lot of the business software, it's actually fine if it's not perfectly safe.

Is it fine if it silently gives the wrong answer? If so, why are you bothering with the software at all?

In my experience all nontrivial C++ codebases have silent memory corruption bugs (at least when built with popular compilers).

virgilp 2 days ago

Well, let's put it like this:

- Webkit, GCC, and a few others are non-trivial C++ codebases that are (I argue) useful.

- In your experience, since they are non-trivial, they have silent memory corruption bugs (i.e. they are not "perfectly safe").

Does this answer the "why bother with software at all" question?

lmm 1 day ago

Webkit, as I understand it, is not really a C++ codebase built with a popular compiler; it's a codebase that follows its own significantly stricter standards and has a lot of additional tooling to avoid bugs.

And I'd say that even with all that additional effort, it has a level of bugs that's not "fine". Indeed, per the article, I suspect that the maintainers of Webkit are some of the people pushing to make C++ more Rust-like.

virgilp 1 day ago

Webkit TBH wasn't a great example, since it's arguably a piece of software that would benefit from being developed in Rust. That said, the point is that we don't need "one language to rule them all". C++ has made some tradeoffs that will not be ideal in all circumstances/for all projects. Trying to change those tradeoffs because a handful of projects (like Webkit) would be better suited to new tradeoffs is not necessarily the right choice for the language itself, or for its community of users. Things are not as simple as "there are 2 factions of C++, those that agree with me and are right, and those that disagree and are wrong".

lmm 1 day ago

I think silent memory corruption is almost never a good tradeoff. (The one possible exception is something like a single-player videogame, where unknown corruption might be less bad than crashing - but even then, avoiding having the situation come up in the first place is better). An argument used to be made (if not in so many words) that accepting a certain amount of occasional memory corruption was a necessary tradeoff for performance; it's an argument that I was always dubious about, and now Rust has proven it completely false.

Fundamentally I don't think this is a case where C++ makes a deliberate design tradeoff that makes sense for some projects. I think it's just a bad design choice (not even a choice as such - it wasn't a question that was considered at all when C++ was first designed) that should be corrected. Sometimes there is a right answer.

virgilp 23 hours ago

> Sometimes there is a right answer.

Indeed. And when that "right answer" comes along, it tends to sweep away everything else. If it's universally better, why wouldn't it?

Except that Rust does not do that. Which is a hint that it's not a "universally right answer", but a right answer for a subdomain of problems. That's basically what I was trying to say: that it does come with its own tradeoffs/downsides.

(Maybe I'm wrong and it's only a matter of time until that happens, but I don't think so... it's been a while; there was time for it to make the impact. Lifetime annotations are not yet adopted by any other mainstream language, AFAIK.)

lmm 15 hours ago

> Indeed. And when that "right answer" comes along, it tends to sweep away everything else. If it's universally better, why wouldn't it?

> Except that Rust does not do that. Which is a hint that it's not a "universally right answer", but a right answer for a subdomain of problems. That's basically what I was trying to say: that it does come with its own tradeoffs/downsides.

Rust may not be the only right answer, but memory unsafety is the wrong one. New projects overwhelmingly pick memory-safe languages, governments and organisations are banning memory-unsafe languages at least for new projects. I don't think anyone is picking C++ at this point if they don't already have a big sunk cost invested in it (even if that cost is just their personal programming experience).

> Lifetime annotations are not yet adopted by any other mainstream language, AFAIK

Linear Haskell is getting there, but most languages aren't flexible enough to retrofit lifetimes (or at best it would be a multi-year effort, like adding types to Python) - as we're seeing in this whole C++ discussion. Also non-GC languages are niche in the first place, and the problem lifetimes solve is a lot less urgent in a GC language. I don't think any post-Rust language has hit "mainstream" yet (we only really get a couple of new mainstream languages a decade), so we'll see what happens in the future.

SpaceNugget 2 days ago

Most C++ developers care greatly about the quality of their code, and suggesting that code can be buggy with no repercussions just because it isn't in a life-threatening situation like a flight controller or medical device is pretty silly.

Your examples of GCC and Webkit are both projects that have spent enormous amounts of effort to be as memory safe as they can be, and have both had many memory safety related CVEs in the past. As was already pointed out, you still have to pay the cost of engineering memory safe code, even when your compiler/static analysis doesn't have your back.

virgilp 1 day ago

I was not saying anywhere that people don't or shouldn't care about the quality of their code. I was just pointing out that, whether we like it or not, "quality" is just one of the factors that goes into the mix of things to optimize for. Other factors like "time" and "effort" and "efficiency" and "compatibility" and even trivial stuff like "familiarity" play a role - or else you'd have formal proofs written in TLA+ or Alloy or the like before writing any system, and you'd have people immediately switching to safer languages like Rust (which is obviously not happening at scale).

The GCC/Webkit examples were not the best examples, but were nevertheless easily available examples that made one particular point: OP's comment was self-contradictory.

wiseowise 1 day ago

> Most C++ developers care greatly about the quality of their code

Not at our org. Though I know a couple of die hard fans that will eat you for lunch if you do something stupid or ugly.

roland35 3 days ago

I've found that often when I am writing esoteric spaghetti Rust code... I need to start thinking about what I am trying to do! Most of the time it's a bad idea :)

HelloNurse 3 days ago

If one needs to "prove something to the compiler", it is usually something both complex and against the grain; on the other hand, lifetime annotations are usually just a "promise to the compiler", allowing it to do a better job.

rramadass 3 days ago

> As an engineer, I use judgement to make context-dependent trade offs.

Well said.

This is why I am firmly in the Stroustrup camp of the backward compatibility/zero overhead/better-C/etc. goodness of "old C++". I need to extend/maintain/rewrite tons of old C++ code, and that needs to be as painless as possible. The current standards trajectory needs to be maintained.

The OP article is a rather poor one with no insights but mere hoopla over nothing.

munchler 2 days ago

If it's hoopla over nothing, why do you firmly identify with one of the factions defined by the article?

rramadass 2 days ago

What a silly question! There is no major schism in the C++ community as the article implies; merely a strong difference of opinion on certain proposals. This is normal in any committee. But since people are strongly wedded to their own proposals it might seem more severe than it actually is.

adastra22 2 days ago

> to prove something to the compiler that is trivially known to be true

I don't think you've ever done any serious work with lifetimes. I've been a Rust developer for a number of years, and I have never once encountered a situation where the Rust compiler forces me to add annotations for something which is trivially true. Never.

What actually happens is that 95% of the time I never have to add lifetime annotations anyway, because the compiler infers the correct annotation from the lifetime elision rules. The remaining 1 in 20 instances are when the borrow checker yells at me, and literally every single time it is due to a latent logic bug in my code. For example, accessing memory after it's been freed, or using a container after it has been consumed. Stuff that C++ would call "undefined behavior" and that C++ developers generally consider Very Bad Things as well.

It boggles my mind that you don't want the compiler to tell you that “you have a logic error here.”
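
As a made-up C++-side illustration of that last class of bug (my sketch, not anyone's real code): the snippet below compiles cleanly as C++, while the direct Rust translation - holding a borrow into the vector across the push - is rejected by the borrow checker at compile time:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int& first = v[0];      // reference into v's current buffer
        v.push_back(4);         // may reallocate and free that buffer
        return first;           // potential use-after-free: undefined behavior
    }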

th2oi34234234 3 days ago

LOL; someone has definitely played with type-systems here.

lelanthran 3 days ago

> C++ has lifetime rules just like Rust. They're simply implicit in the code and not enforced by the compiler.

The problem is that what Rust enforces is not restricted to lifetime rules; it is a much, much larger set of restrictions, and it ends up rejecting quite a lot of safe, legitimate, and valid code.

AlotOfReading 3 days ago

Sure, but that's not a design philosophy C++ adheres to. Look at the modern C++ guidelines or profiles. The entire point is to rule out large swathes of safe, legitimate, and valid code in an optional and interoperable way.

C++ isn't beholden to Rust's trade-offs either. There's a whole spectrum of possibilities that don't require broken backwards compatibility. Hence: "Why draw the line specifically at lifetime annotations?"

PittleyDunkin 3 days ago

That's what the unsafe keyword is for.

guappa 3 days ago

> You're already using a language with a strong type system

I'll have you know I made a variable void* just yesterday, to make my compiler shut up about the incorrect type :D

GrantMoyer 3 days ago

While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++". I've plenty of times thought the reverse while programming in C++ though.

Edit: except when interfacing with C APIs.

throwawayffffas 2 days ago

I have had the exact opposite experience.

bowsamic 3 days ago

Then you must be avoiding situations that traditionally use OOP

zozbot234 3 days ago

Most kinds of OOP can be expressed idiomatically in Rust. The big exception is implementation inheritance, which is highly discouraged in modern code anyway due to its complex and unintuitive semantics. (Specifically, its reliance on "open recursion", and the related "fragile base class" problem)

galangalalgol 3 days ago

People often say that modern C++ doesn't have the problems needing a solution like Rust. Ironically, that means people who write modern C++ haven't needed any ramp-up time when joining our Rust projects. They were already doing things the right way, at least mostly. But now they don't have to worry about that one person who seems to be trying to trick the static analysis tools on purpose.

int_19h 2 days ago

Anything that involves object graphs (as opposed to trees) is a pain in Rust.

zozbot234 2 days ago

True, but not in a way that wouldn't be just as painful in C++.

int_19h 2 days ago

In Rust, the de facto standard advice for such cases seems to be, "just use indices into an array instead of references".

While this is sometimes done in C++ as well for various reasons, it's certainly not the default pattern there. If you have two things that need to point to each other, you just do that.
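
For instance, a minimal made-up sketch of what "you just do that" looks like in ordinary C++ - one owning pointer, one raw back-pointer, with the lifetime discipline left entirely to the programmer:

    #include <memory>
    #include <string>

    struct Parent;                       // forward declaration for the back-pointer

    struct Child {
        Parent* parent = nullptr;        // non-owning back-pointer
    };

    struct Parent {
        std::string name;
        std::unique_ptr<Child> child;    // owning pointer downward
    };

    int main() {
        Parent p{"root", std::make_unique<Child>()};
        p.child->parent = &p;            // the cycle is simply written down
        return p.child->parent->name.empty() ? 1 : 0;
    }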

empath75 2 days ago

> While this is sometimes done in C++ as well for various reasons, it's certainly not the default pattern there. If you have two things that need to point to each other, you just do that.

And then you have to handle all the subtle memory bugs that you've introduced by doing that.

int_19h 2 days ago

I'm not arguing that there isn't a gain here, but GP's original assertion was that

> While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++".

This is a concrete example of something that is much easier to express in C++. And, sure, you do pay the tax for that (although I will also dispute the notion that it is impossible to write C++ without memory bugs; it's just hard).

LinXitoW 2 days ago

I guess this is a semantics argument, but I assume they mean expressing the same thing with the same (or reasonably similar) security guarantees. After all, the security and "bug-freeness" are part of what they are expressing. If you attempt to create something reasonably similar to Rust, you suddenly need a lot of complex checking code, and maybe tests for things that were trivial in Rust (because the compiler does the tests for you).

simonask 2 days ago

Is it really easy to express if the straightforward way is buggy and error-prone?

People think C++ is expressive because they think they are allowed to do a lot of things that they aren't, in fact, allowed to do in C++.

kkert 3 days ago

This is interesting because I'm writing quite a bit of embedded Rust, and I always run into the limitations of its very barebones const generics. I always wish they had half the expressiveness of C++ constexpr and templates.

Win some, lose some though, as the overall development workflow is lightyears ahead of C++, mostly due to tooling

badmintonbaseba 3 days ago

The expressiveness of const generics (NTTPs) in C++ wouldn't go away if it adopted lifetime annotations and "safe" scopes. It's entirely orthogonal.

Rust decided to have more restrictive generic programming, with the benefit of early diagnostic of mistakes in generic code. C++ defers that detection to instantiation, which allows the generics to be more expressive, but it's a tradeoff. But this is an entirely different design decision to lifetime tracking.
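
A tiny made-up sketch of that tradeoff: the template body states no requirements on T, so a mistake only surfaces once a concrete T is plugged in:

    #include <string>

    template <typename T>
    auto halve(const T& x) {
        return x / 2;                     // assumes T supports operator/ -- nothing checks this yet
    }

    int main() {
        halve(10);                        // fine: int has operator/
        // halve(std::string{"ten"});     // error, but only at this instantiation,
        //                                // reported from inside the template body
        return 0;
    }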

zozbot234 3 days ago

Rust generics are not intended as a one-to-one replacement for C++ templates. Most complex cases of template-level programming would be addressed with macros (possibly proc macros) in Rust.

galangalalgol 3 days ago

Const generic expressions are still being worked on. They are what is blocking portable SIMD. They are also a much cleaner way to implement things like matrix operations, or really anything where a function with two or more arguments of one or more types returns something whose type is a combination or modification of the input types.

zozbot234 3 days ago

The problem AIUI is that "const generic expressions" in full generality are as powerful as dependent types. It's not clear to me that the Rust folks will want to open that particular can of worms.

galangalalgol 3 days ago

I thought dependent types were types that depended on a value? What they are proposing are types that depend on types or compile time constants.

zozbot234 3 days ago

The problem is combining the "const generic" and "expression" part. If your "compile time constants" can actually be complex expressions, you arguably end up with the same kind of generality as dependent types.

This is true even for expressions that are only evaluated in a compile-time context, since dependently-typed languages do "everything" at compile time anyway, they don't have a phase distinction where you can talk about "runtime" being separate.

galangalalgol 3 days ago

Ah, yeah! I get it now. So C++ is a dependently typed language. That is hilarious. I want Lisp syntax in C++29. That said, too many features are blocked on const generic expressions, so I think they are going to have to bite that off. There is already talk about migrating procedural macros to be something more like normal Rust; this might fit in with that.

Rusky 3 days ago

C++ is not a dependently typed language, for the same reason that templates do not emit errors until after they are instantiated. All non-type template parameters get fully evaluated at instantiation time so they can be checked concretely.

A truly dependently typed language performs these checks before instantiation time, by evaluating those expressions abstractly. Code that is polymorphic over values is checked for all possible instantiations, and thus its types can actually depend on values that will not be known until runtime.

The classic example is a dynamic array whose type includes its size- you can write something like `concat(vector<int, N>, vector<int, M>) -> vector<int, N + M>` and call this on e.g. arrays you have read from a file or over the network. The compiler doesn't care what N and M are, exactly- it only cares that `concat` always produces a result with the length `N + M`.
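
For comparison, the closest plain-C++ analogue of that signature is a sketch with std::array and non-type template parameters (mine, not from any proposal): N and M must be compile-time constants, and each instantiation is checked concretely rather than symbolically:

    #include <array>
    #include <cstddef>

    template <std::size_t N, std::size_t M>
    std::array<int, N + M> concat(const std::array<int, N>& a,
                                  const std::array<int, M>& b) {
        std::array<int, N + M> out{};
        for (std::size_t i = 0; i < N; ++i) out[i] = a[i];
        for (std::size_t i = 0; i < M; ++i) out[N + i] = b[i];
        return out;
    }

    int main() {
        std::array<int, 2> a{1, 2};
        std::array<int, 3> b{3, 4, 5};
        auto c = concat(a, b);            // fine: 2 and 3 are compile-time constants
        // Sizes read from a file or a socket could not be used here -- that is
        // exactly the gap dependent types close.
        return static_cast<int>(c.size()) - 5;   // 0
    }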

groos 2 days ago

I'm not sure what "dependently typed" means but in C++20 and beyond, concepts allow templates to constrain their parameters and issue errors for the templates when they're specialized, before the actual instantiation happens. E.g., a function template with constraints can issue errors if the template arguments (either explicit or deduced from the call-site) don't satisfy the constraints, before the template body is compiled. This was not the case before C++20, where some errors could be issued only upon instantiation. With C++20, in theory, no template needs to be instantiated to validate the template arguments if constraints are provided to check them at specialization-time.
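
A small made-up illustration of that: the constraint is checked at the call site, so a bad argument is rejected before the template body is ever compiled for it:

    #include <concepts>
    #include <string>

    template <std::integral T>
    T twice(T x) {
        return x + x;
    }

    int main() {
        twice(21);                        // OK: int satisfies std::integral
        // twice(std::string{"21"});      // rejected because the constraint is not satisfied,
        //                                // without compiling the body for std::string
        return 0;
    }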

Rusky 2 days ago

This is the wrong side of the API to make C++20 dependently typed. Concepts let the compiler report errors at the instantiation site of a template, but they don't do anything to let the compiler report errors with the template definition itself (again before instantiation time).

To be clear this distinction is not unique to dependent types, either. Most languages with some form of generics or polymorphism check the definition of the generic function/type/etc against the constraints, so the compiler can report errors before it ever sees any instantiations. This just also happens to be a prerequisite to consider something "dependently typed."

zozbot234 2 days ago

> performs these checks before instantiation time

Notably Rust type-based generics do this, a key difference wrt. C++ templates. (You can use macros if you want checks after instantiation, of course.)

galangalalgol 3 days ago

In C++ it does care what N and M are at compile time; at least the optimizer does, for autovectorization and unrolling. Would that not be the case with const generic expressions?

Rusky 2 days ago

The question of whether a language is dependently typed only has to do with how type checking is done. The optimizer doesn't come into play until later, so whether it uses the information is unrelated to whether the language is dependently typed.

galangalalgol 2 days ago

Ok, I think I understand now, but is it really dependently typed just because it symbolically verified it can work with any N and M? Because it will only generate code for the instantiations that get used at compile time.

Rusky 1 day ago

Is what really dependently typed? I'm saying C++ is not dependently typed, because it doesn't do any symbolic verification of N and M.

galangalalgol 1 day ago

If rust did add const generic expressions I mean. It still would only generate code for the used instantiations.

Rusky 1 day ago

Ah, I wasn't really talking about Rust.

Rust already does have some level of const generic expressions, but they are indeed only possible to instantiate with values known at compile time, like C++.

The difficulty of type checking them symbolically still applies regardless of how they're instantiated, but OTOH it doesn't look like Rust is really trying to go that direction.

jcelerier 2 days ago

The only thing needed here is to be able to lift N & M from run-time into the type system (which in C++, as it stands, exists only at compile time). For "small" values of N & M that's doable with switches and instantiations, for instance.
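
A made-up sketch of that trick (function names are invented for illustration): one switch, one instantiation per supported value.

    #include <cstddef>
    #include <stdexcept>

    template <std::size_t N>
    int work() {
        return static_cast<int>(N) * 10;  // stand-in for size-specialized code
    }

    // n is a runtime value; each case hands the compiler a compile-time constant.
    int dispatch(std::size_t n) {
        switch (n) {
            case 1: return work<1>();
            case 2: return work<2>();
            case 3: return work<3>();
            default: throw std::out_of_range{"unsupported size"};
        }
    }

    int main() {
        return dispatch(2) == 20 ? 0 : 1;
    }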

Rusky 1 day ago

The point of dependent types is to check these uses of N and M at compile time symbolically, for all possible values, without having to "lift" their actual concrete values to compile time.

Typical implementations of dependent types do not generate a separate copy of a function for every instantiation, the way C++ does, so they simply do not need the concrete values in the same way.

smilekzs 2 days ago

> as the overall development workflow is lightyears ahead of C++, mostly due to tooling

My experience has been the other way around. Eclipse-based IDEs from NXP, TI, ST all have out-of-the-box usable tooling integration:

- MCU pinout and configuration codegen

- no need to manually fiddle with linker scripts

- static stack and code size analyzers (very helpful for fitting stuff in low-cost MCUs)

- stable JTAG-based debugging with:

  - peripheral registers view (with bitfield definitions)

  - RTOS threads view (run status, blocked on which resources, ...)

And yes, these are important enough for me to put up with Eclipse and pre-modern C/C++. I really want to write Rust for embedded but struggling with the tooling all the time didn't help.

afdbcreid 3 days ago

That's actually quite interesting, because this is not an inherent limitation of Rust, and it is definitely planned to be improved. And AFAIK, today (as opposed to previous years) it is even being actively worked on!

natemcintosh 3 days ago

And what about, for example, those government contractors who are in the same position as you: they have a large C++ codebase that currently works and is too big to rewrite in Rust? Now they're being asked to make it safer. How will they do that with the "existing C++ process"?

jart 3 days ago

Didn't Project Zero publish a blog post a few months ago, saying that old code isn't your security problem? They said it's new code you have to worry about. Zero also had copious amounts of data to demonstrate their point. In any case, if you really want to rewrite C++ in Rust, LLMs are fantastic at doing that. They're not really good yet at writing a new giant codebase from first principles. But if you give them something that already exists and ask them to translate it into a different language, oftentimes the result works for me on the first try. Even if it's hundreds of lines long.

fulafel 2 days ago

A link would be helpful, but at face value: of course old code vulnerabilities are still a problem. Vulnerabilities in old code make the headlines all the time.

jart 2 days ago

It was difficult to dig up, but I found it for you. https://security.googleblog.com/2024/09/eliminating-memory-s... Also headlines do not accurately model reality. The news only reports on things that are newsworthy. It's comparatively rare that we'll discover new vulnerabilities in old code that's commonly used. That's what makes it newsworthy.

fulafel 2 days ago

Thanks. It's an interesting analysis around the "vulnerabilities decay exponentially" model, discussing how there are more vulnerabilities to be found in new code than old code given equal attention.

SkiFire13 2 days ago

The issue is that newer code often needs to communicate with older code, and interfacing C++ and Rust is not trivial.

jesse__ 3 days ago

Yeah I remember reading that post about bugs over time. IIRC 5 years was the time it takes for most bugs to get ferreted out.

moregrist 3 days ago

The funny thing about government funding is that it may be easier to secure capital for a Rust rewrite than for ongoing maintenance to add static lifetimes and other safety features to an existing C++ codebase.

Legislatures seem a lot more able to allocate large pots of money for major discrete projects than to guarantee an ongoing stream of revenue to a continuing project.

pizlonator 2 days ago

They can use Fil-C++ and then they get memory safety without any rewrites.

bluGill 3 days ago

C++ is on a trajectory to create a future with more safety. Whether we should do profiles or static lifetime checking (or something else) is still an open question (and both may be valid). However, I'm glad C++ is thinking about it. We have real problems around safety in the real world, and people are writing unsafe code even when modern safe code would be easier to write.

Of course, it remains to be seen how this all plays out. Static lifetimes can be done well or badly. Profiles can be done well or badly. And even if whatever we come up with is done well, that doesn't mean people will use it well (I know Rust programmers who just put unsafe everywhere).

zozbot234 3 days ago

Profiles are vaporware. The C++ folks are pushing a fantasy of "full memory safety with no changes to existing code, not even annotations to enable sound static analysis." That's just a non-starter, there is no way to get to full memory safety from there unless you count very silly things like making "delete" and "free()" a no-op - and also running everything in a single thread for "concurrency safety".

bluGill 3 days ago

The only way to get anywhere is to provide a path forward. I have a lot of C++98 code that has been working just fine for 14+ years (that is, since before C++11). It isn't worth changing that unless we discover a bug in the code (after 14+ years, unlikely) or we need to add new features (if we haven't in 14+ years, we probably won't need a new feature there anytime soon). Code I write today is the latest C++. What I really want is a way to say "don't write the bad things" today, but still allow that old code to work. That is what profiles promise me. Sure, we will never get full memory safety that way, but that isn't my goal; I just want to make my new code better, and when I come back to old code, improve that too.

zozbot234 3 days ago

The case for "100% Safe C++" is that you might be able to annotate that old C++98 code in ways that don't otherwise alter its semantics, but still ensure safety. That would be a one-time cost that might be well-worth paying if the cost is low enough - Where "cost" depends on developer experience as opposed to mere volume of annotations. A "viral" compiler feature that auto-surfaces all the places that will need annotation for a given level of safety has the potential to be quite easy to learn and use effectively. It's not clear why the C++ folks are rejecting that approach, seemingly out-of-hand.

bluGill 3 days ago

I have > 10 million lines of C++ that is not annotated. There are many projects much larger than mine. If you cannot automatically annotate the code there is no point in trying as you can't do it manually. If you can automate it why not just build that into the compiler and skip the syntax?

zozbot234 2 days ago

> If you cannot automatically annotate the code there is no point in trying as you can't do it manually.

How can you know this without a "viral" analysis that tells you how much annotation is needed, and where? Perhaps the code factors out all the low-level, "memory unsafe" hacks to its own module, and that can be feasibly annotated. It's just not something we can know in advance.

usefulcat 2 days ago

> Perhaps the code factors out all the low-level, "memory unsafe" hacks to its own module, and that can be feasibly annotated.

While it is theoretically not impossible for that scenario to occur, I'd say it sounds wildly unlikely for anything that can be described as 'old' code.

tialaramex 2 days ago

I suspect the best case scenario is a "Stone soup". https://en.wikipedia.org/wiki/Stone_Soup

The fantasy is enough to get engagement and once you have engagement you can persuade people to do a "little" extra work to get the full benefits. My mother won't buy the product for $5, but if you tell her that it costs $10 but they're 2-for-1 today, she's going to buy that and feel like she got a bargain.

In terms of actually solving the problem well, it's not even captured in these hypothetical regulatory requirements. What you actually want is a safety culture, Rust has one, C++ does not, and no technology will change that. From what I can tell nobody at WG21 wants that to change anyway.

zozbot234 2 days ago

> What you actually want is a safety culture, Rust has one

Rust has a safety culture because it involves requirements for Safe Rust that preserve safety while also playing well with modularity and iterative development. If "Safe C++" can enforce similar requirements, we can expect that a safety culture can be sustained there as well.

tialaramex 2 days ago

The technology does not gift you associated culture, and it's worth knowing that even far outside this business because it applies everywhere.

Yes a technology can be enabling, but, it isn't enough to inculcate the desired culture, that has to come from somewhere else. You can't "sustain" something which does not exist.

Actually WG21 ("The C++ Language Committee") illustrates this well in another way. When WG21 was created it was after the Mother Of All Demos, and so after video conferencing exists as an idea, but to be fair to them it was not really practical at the scale needed for WG21 processes at that time. When C++ 98 shipped it was just about practical, although most ordinary people would have needed to travel to some place with appropriate equipment. By this point the IETF is routinely but not yet universally using such technology.

By the time C++ 11 shipped, I have an ordinary job where I worked full time from home, travelling to a physical location only once or twice per month because video conferencing is now such a mudane and ordinary capability as to go unremarked.

Only since the COVID-19 pandemic has WG21 finally adopted the option for attendance without flying around the world several times per year. The technology to do this had existed for decades, but the culture did not exist.

pjmlp 2 days ago

If you have access to the WG21 meeting minutes, it appears the safety discussions of the last meeting were quite entertaining.

suby 2 days ago

I assume they aren't freely available online? How does one gain access to these meeting minutes?

pjmlp 2 days ago

One becomes a WG21 member.

titzer 2 days ago

Look, we need more than just promises. C++ is charting a future to the past in the most torturously slow process possible, primarily because of an absolutely intransigent performance obsession that won't even admit the possibility of a 1% performance overhead for bounds checks. The C++ steering committee are the real extremists that are holding back the entire software industry because of a sacred cow and a free pass to externalize that cost onto the rest of us in terms of significantly less secure software.

bagxrvxpepzn 2 days ago

> The C++ steering committee are the real extremists that are holding back the entire software industry because of a sacred cow and a free pass to externalize that cost onto the rest of us in terms of significantly less secure software.

The C++ leadership serves the C++ community, not the entire software industry. You and everyone who disagrees with them are free to use and write software based on other languages, e.g. Java and Rust.

pjmlp 2 days ago

Many in the C++ community wouldn't acknowledge that.

Which is why disabling RTTI, disabling exceptions, creating their own standard library replacements, and static analysers forbidding specific language constructs are such a big deal in some C++ circles.

humanrebar 2 days ago

You can even add nonstandard features to existing compilers!

The neat thing is that once the standard committee learns about this use case, it could get de facto support as existing use!

feelamee 3 days ago

Ok. Please, just keep using your current C++ standard. But we will go on using the new one, with all the features we want to use.

blub 2 days ago

Who’s “we”? The C++ developers that like the “Safe C++” proposal which is tacking Rust on top of C++ are a tiny minority.

It seems very fair to tell them to just use Rust and leave C++ alone.

pjmlp 2 days ago

Indeed, that is exactly what many FAANG companies are doing. Have you noticed the slowdown in velocity of the major compilers regarding ISO C++ compliance?

bobnamob 2 days ago

See Apple’s slowdown on clang development and subsequent advances in Swift<->C++ interop (even going as far as merging Swift code into FoundationDB)

And ofc Google’s investment in Carbon

pjmlp 2 days ago

Or MSVC slow pace with C++23, after being the first to reach full C++20 support.

Everyone else outside the big three is somewhere between C++14 and C++17.

blub 2 days ago

Nope, still using C++17 and not bothered by any slowdown. C++ has been moving too fast lately.

pjmlp 2 days ago

It is currently an open debate what will be the very last ISO version the world will care about, C++17 might be the one, or C++26, bets are open.

feelamee 2 days ago

obviously... "we" is

> Relatively modern, capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)

and @bagxrvxpepzn is ofc

> Every ancient corporation where people are still fighting over how to indent their code, and some young engineer is begging management to allow him to set up a linter.

:)

sumanthvepa 3 days ago

Thank you for this. C++ should NOT try to be Rust. I find modern C++ really nice to program in for the work I'm doing - 3D graphics. The combination of very powerful abstractions and excellent performance is what I'm looking for. I'm more than willing to endure the perceived lack of safety in the language.

tsimionescu 3 days ago

The lack of safety is perceived because it is there. There is no proof that anyone can write a C++ program larger than, say, 100k lines of code that doesn't have memory safety issues.

logicchains 3 days ago

And that memory safety is completely not an issue if you're writing something like a game, trading system, simulation, internal application or scientific calculation where there are no potentially hostile users who could do real harm by hacking your code. It's just a class of bug that in modern C++ is generally far outnumbered by logic bugs.

tsimionescu 2 days ago

Games absolutely are a problem for lack of memory safety, because the majority of games played today are explicitly connected to the internet. For trading systems I don't even know what you mean; I can't think of a trading system where you wouldn't care about security.

For simulations and scientific calculations, I do agree, to a vast extent. But in a world that is moving more and more towards zero-trust networking, even many of those will start being looked at as potential attack vectors into other systems.

PaulDavisThe1st 2 days ago

As a DAW developer, I find myself chuckling over security concerns in other kinds of apps.

You see, it is absolutely expected and required that our applications will load and run arbitrary 3rd party code, generally with the expectation that it lives in the same address space as our application (though this is not formally required).

No sockets, no network, no backdoor hacks. You write code, call it a VST plugin, make it sound desirable ... we are expected to load and run it.

Yes, several DAWs have made the move toward out-of-process execution of plugins, but that doesn't begin to address the myriad problems caused by loosely-written plugin APIs not adequately pinning down threading, thread priority, memory access and more.

Filesystem access? Of course! That code runs as you! Because you want it to!

lmm 2 days ago

And when someone creates a project file that sends them the personal information of anyone who opens it, is that an issue? Yes, pervasive arbitrary code plugins are game over if you can get anyone to use your plugin, but there's at least some awareness that you need to be careful opening a plugin you don't trust.

PaulDavisThe1st 2 days ago

Not sure that's true for the majority of DAW users.

Plugins are not associated with attack vectors, even though they are literally just that.

PLG88 2 days ago

I may be off base, but as the world moves to zero-trust networking, we can always embed a zero-trust network into our C++ app so that it can be distributed across the network while having no listening ports on the underlay network - i.e., my memory-safety exploit cannot be exploited by just anyone on the WAN, LAN, or host OS network. My C++ app is unattackable via conventional IP-based tooling, and all conventional network threats are immediately useless.

This capability exists in completely open source, such as OpenZiti - https://openziti.io/.

AlotOfReading 2 days ago

The way C and C++ are standardized, you can't rely on the correct functioning of anything in the presence of undefined behavior, including memory unsafety. For what it's worth, I also opened a random file in the OpenZiti C SDK and immediately found safety issues like this: https://github.com/openziti/ziti-tunnel-sdk-c/blob/9993f61e6...

That's why this topic is such a big deal. Even people who really should know better like the OpenZiti authors aren't able to reliably write safe code.

drivebyhooting 2 days ago

Why is that a safety issue?

AlotOfReading 2 days ago

Malloc/Calloc can fail even if they typically don't on most Linux systems. You should always check for null pointers before accessing the resulting buffer, which doesn't happen here. The connections() block is also never explicitly freed anywhere I was able to find in a quick search. That's allowed, but definitely bad practice.
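
A minimal sketch of the pattern being described - not the OpenZiti code, just the general shape: check the allocation before touching the buffer, and pair it with a free:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        std::size_t count = 1024;
        int* buf = static_cast<int*>(std::calloc(count, sizeof(int)));
        if (buf == nullptr) {             // calloc can fail; don't touch buf if it did
            std::fprintf(stderr, "allocation of %zu ints failed\n", count);
            return 1;
        }
        buf[0] = 42;                      // safe only after the null check
        std::free(buf);                   // every successful allocation gets a matching free
        return 0;
    }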

SkiFire13 2 days ago

You'll still have to e.g. parse and interpret data from the internet if you want to communicate with anyone else, and that's a potential vector for an exploit. This has commonly been the way exploits work in games.

PLG88 2 days ago

The edge SDKs do not parse and interpret data from the internet, they provide ingress/egress off the overlay. They authenticate and authorise to the controller and make outbound connections to the overlay network. This is why any app embedded with Ziti has no listening ports to host OS network, LAN, or WAN; they only listen to specific application calls across the overlay.

Now, you may say, "well, you have merely moved the listening port from the app to the overlay". Yes, true, but it's not that simple. Firstly, the overlay is written in Golang (thus memory safe). Secondly, suppose a vulnerability exists in the overlay network that would allow an attacker to bypass the security of the zero-trust network - what does that mean in practice? Well, to do this they would need to:

- bypass the mTLS requirement necessary to connect to the data plane (note, each hop uses its own mTLS with its own, separate key)

- have a strong identity that authorizes them to connect to the remote service in question (or bypass the authentication layer the controller provides through exploits... note again, each app uses separate and distinct E2E encryption, routing, and keys)

- know what the remote service name is, allowing the data to target the correct service (not easy, as OpenZiti provides its own private DNS that does not need to comply with TLDs, so it could literally be 'madeup.service.123')

- bypass whatever "application layer" security is also applied at the service (ssh, https, oauth, whatever)

- know how to negotiate the end-to-end encrypted tunnel to the 'far' identity

So yes, if they can do all that, then they'd definitely be able to attack that remote service. But I said "remote service", not "remote services". All that work and all those compromises, and they only have access to 1 single service among hundreds, thousands, or potentially millions of services. Lateral movement is almost impossible. So the attacker would have to repeat each of the 5 steps for every service possible. Also, they don't know which company sits behind which OpenZiti fabric, so it's pot luck whether it's even against the target they want to exploit.

Finally, we have developed a stateful firewall called 'ZitiFW' - https://github.com/netfoundry/zfw - which uses eBPF to look at the IP information of any incoming connections/packets to an Edge Router (Ziti's Policy Enforcement Point); if a connection/packet is received from an IP address which is not correlated to a known, bootstrapped endpoint of the overlay, the packet can be blackholed.

zozbot234 2 days ago

The issue of memory safety goes well beyond adversaries "hacking your code". Without memory safety, your code doesn't even have any kind of well-defined semantics so it's not feasible to defend against even "logic" bugs by automated means.

If you care about program correctness in any real sense, memory safety is table stakes.

uecker 2 days ago

No, this is not how it works. Even without memory safety, the code has well-defined semantics for correct input, i.e. input that does not trigger undefined behavior. And if you prove your program correct for all inputs, this then implies that it does not have undefined behavior for any input. Memory safety is not a prerequisite for applying formal methods to show correctness.

diath 3 days ago

On the contrary, why would I not want these things in C++ if I'm developing every project with -fsanitize=address,undefined to catch these types of errors anyway?
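
For what it's worth, a made-up sketch of what those flags buy you at runtime (assuming a GCC/Clang-style invocation such as g++ -g -fsanitize=address,undefined example.cpp):

    #include <cstdlib>

    int main() {
        int* p = static_cast<int*>(std::malloc(4 * sizeof(int)));
        if (p == nullptr) return 1;
        std::free(p);
        p[0] = 42;        // AddressSanitizer reports a heap-use-after-free here at runtime
        return 0;
    }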

Attrecomet 3 days ago

What I don't understand is why you demand that C++ evolution be halted in a clearly suboptimal position so you don't need to change your processes. Just use the version of C++ that meets your needs; you clearly don't want nor need new developments. You are fine with being locked into bad designs for hash maps and unique_ptr due to the (newly invented, in 2011/13) ABI stability being made inviolable; you clearly need no new developments in usability and security.

So why not be honest and just use C++03, or 11, or whatever it is that works for you, and let the rest of the ecosystem actually evolve and keep the language we invested so much effort into as a viable alternative? There's zero benefit, except to MS, who want to sell this year's Visual Studio to all the companies with 80's-era C++...

liontwist 3 days ago

> evolution be halted in a clearly suboptimal position

It’s clear it’s imperfect. But it’s not clear there is an obvious path to a nearby local maximum.

Design choices have tradeoffs.

And even if that were true, who would take advantage of that “better” language in a purely abstract sense? New language standards primarily exist to benefit existing C++ code bases, and the cohort of engineers who work on them. You have to consider that social reality.

bagxrvxpepzn 2 days ago

> What I don't understand is why you demand that C++ evolution be halted in a clearly suboptimal position so you don't need to change your processes.

I don't demand that C++ evolution be halted. I support the current trajectory of not adding viral annotations for the sake of implementing static lifetime checking. I want C++ to evolve into a better version of itself, I don't want it to become something it's not. If you want static lifetime checking, please use Rust. It already exists and it's great for people who need static lifetime checking.

chlorion 3 days ago

Imagine an engineer in any other field acting like this.

"I don't want to install air bags and these shiny safety gadgets into my cars. We have been shipping cars without them for years and it works for us and our customers."

The problem is that it doesn't actually work as well as you think, and you are putting people at risk without realizing it.

andrewflnr 3 days ago

You're trying to install airbags on a motorcycle, though. The design of the vehicle/language is incompatible with airbags/lifetimes. So if you want airbags... don't use C++.

(Yes, I know about airbag vests. Let's analogize those with external static checkers.)

bookspace 2 days ago

What if bagxrv is a Rust fan, just playing ya? Everyone knows Rust fans are the most vigorous developers on the internet. Just take a look at https://izzys.casa/2024/11/on-safe-cxx/

downut 3 days ago

You are making a general statement about the distribution of general consumers of computer languages, complete with a long tail, and the commenter is explaining that he is an expert car driver, way out there on the long tail. This tyranny of the less capable mode is really grating, especially on a site named "Hacker News".

As usual, the answer is quite simple: "please use rust". We promise to never mention when we break out nasm.

Driver anecdote: I have antilock brakes on my Tundra, but they are annoyingly counterproductive in 4WD descending 6" or larger sandy rocky steps. Do antilock brakes work overall best for the less capable mode? Of course! Do they work best for me? No.

ModernMech 3 days ago

We learned a long time ago as an industry that the expert car drivers are not immune to causing pile ups, which makes it all our problem to solve.

Safety by default with escape hatches when absolutely necessary is the better way to go for all, even if it means some power users have to change their ways.

lubesGordi 3 days ago

I don't know enough about what it would take to implement static lifetime checking. Is that fundamentally impossible to do in a backwards compatible way?

steveklabnik 2 days ago

It depends on what you mean by "backwards compatible," and what you mean by "static lifetime checking" :)

The profiles proposal suggests adding static lifetime checking, "without viral annotations." I use quotations because I don't really agree with this framing, but whatever. The paper is here if you'd like to read it yourself: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p30...

The core idea here is that you add annotations to opt in or out of certain checks. And opting in may be a compiler flag, requiring no changes to source code. So that would be "backwards compatibility" in that sense. Of course, code may fail these checks, so you'll have to add annotations to opt out, or re-write the code. We will see in practice how much change is required once implementations exist and are tried out.

But the other part is, these profiles do not attempt to cover all valid cases. And what I mean by that is, there are some lifetime issues that this proposal does not attempt to analyze. And, where the analysis is similar, they offer a subset of what other proposals do. These decisions were made because the authors believe that they'll reduce a significant number of issues, and are easier to adopt. And that's worth it instead of going for more checks.

The competing proposal, Safe C++, has you opt into safety checks on a file-by-file basis. So in that sense, it is also backwards compatible: all existing code compiles as-is. When you opt in to those checks, it adds new syntax, similar to Rust, to do the safety analysis. So you gain this benefit only for new code, but you also get much more power. This syntax is necessary to communicate programmer intent to the checks, but it is the "viral annotations" that the proponents of profiles don't like.

So, basically, that's the thing: both are backwards compatible, but offer very different tradeoffs in the design space.

aiono 2 days ago

If you want alias tracking and lifetime checking, then yes, they are backwards incompatible. They need "viral annotations", to use the C++ committee's terminology.

jandrewrogers 3 days ago

The parts of the government that think everything should be written in a memory-safe language (like Rust) are the same parts that already write everything in Java. Most of the high-end systems work is in C++, and that is the type of software where lifetimes and ownership are frequently unknowable at compile-time, obviating Rust's main selling point.

AlotOfReading 3 days ago

It's not a hard dichotomy. Almost all of the rules Rust imposes are also present in C++, enforcement is simply left up to the fallible human programmer. Frankly though, is it that big a deal whether we call it unique_ptr/shared_ptr or Box/Arc if a lifetime is truly unknowable?

Rust shines in the other 95% of code. I spend some time every morning cleaning up the sorts of issues Rust prevents that my coworkers have managed to commit despite tooling safeguards. I try for 3 a day, the list is growing, and I don't have to dig deep to find them. My coworkers aren't stupid people, they're intelligent people making simple mistakes because they aren't computers. It won't matter how often I tell them "you made X mistake on Y line, which violates Z rule" because the issue is not their knowledge, it's the inherent inability of humans to follow onerous technical rules without mistakes.

galangalalgol 3 days ago

Yeah, I don't end up fighting Rust very often, and when I do, it is right. And when I run into a case where it isn't, I have unsafe and the Rustonomicon to help me. You can do anything in Rust that you can do in C++; it is just that Rust defaults to safe instead of unsafe, and there is no single keyword to let you know that the C++ you are looking at is safe.

mempko 2 days ago

This! The hardest part of making software is making something that works for people. What I love about C++ is multi-paradigm programming: I can express my ideas directly using the appropriate paradigms. Regarding safety, with modern C++ programming it's not hard to write software that's correct. Safety is rarely the first thing I worry about.

If having strict safety means I can't express my mental models in code, I don't want it. It will slow me down. It will make it harder to write software that's useful.

Remember people, we are here to make things that are useful to people. If safety gets in the way of that, then it's not worth it.