Posted by charles_irl 9/13/2025
> Profiles seem less radical and more adoptable, a safer-by-default C++ without forcing the Rust model that aims to tackle the most common C++ pitfalls.
I don't want to play with a plastic sword, just put it in a sheath.
For example, their tooling prevents code like this:
if (m_weakMember) { m_weakMember->doThing(); }
from compiling, forcing you to explicitly create an owning local reference, like so:
if (RefPtr strongLocal = m_weakMember.get()) { strongLocal->doThing(); }
unless it's a trivial inlined function, like a simple getter.
My interpretation of Geoff's presentation is that some version of profiles might work, at least in the sense of making it possible to write C++ code that is substantially safer than what we have today.
The most hopeful thing I saw in Geoff's talk was cultural. It sounds like Geoff's team wanted to get to safer code. Things went faster than expected, people landed "me too" unsolicited patches, that sort of thing. Of course this is self-reported, but assuming Geoff wasn't showing us a very flattering portrait of a grim reality (and I can't see any incentive for that), it sounds like this team would still get value from Rust but is delivering many of the same security benefits in C++ anyway.
Bureaucrats don't like culture because it's hard to measure. "Make sure you hire programmers with a good culture" is hard to chart. You're probably going to end up running some awful quiz your team hates ("Answer D, A, B, B, E, B to get 100%"). Whereas "Use Rust not C++" is measurable: team A has 93% of its code in Rust, but team B scored 94.5%, so that's more Rust, they win.
That's not true at all.
- The bounds safety part of it prevents those C operations that Fil-C or something like it would dynamically check. You have to use hardened APIs instead.
- The cast safety part of it prevents C casts except if they're obviously safe.
- The lifetime safety part of it forces you to use WebKit's smart pointers except when you have an overlooking root.
Those are type safety rules. It's disingenuous to call them heuristics.
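For a concrete sense of the flavor of those rules, here is a sketch in standard C++ (illustrative stand-ins only, not WebKit's actual hardened APIs or checker): indexing goes through a bounds-checked accessor, and downcasts have to be explicit checked casts rather than C-style casts.

// Illustrative sketch only -- standard C++ stand-ins, not WebKit's hardened APIs.
#include <vector>

struct Node { virtual ~Node() = default; };
struct Element : Node { void layout() {} };

void example(std::vector<int>& values, Node& node) {
    // Bounds safety: raw values[i] / pointer arithmetic is the kind of access the
    // rule bans; at() performs a bounds check and throws instead of corrupting memory.
    int first = values.at(0);
    (void)first;

    // Cast safety: a C-style cast like (Element&)node would be rejected; the
    // downcast has to be an explicit, checked step.
    if (auto* element = dynamic_cast<Element*>(&node))
        element->layout();
}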
It is true, however, that Geoff's rules don't go to 100% because:
- There are nasty corners of C that aren't covered by any of those rules.
- WebKit still has somewhere under 10% of its code that isn't opted into those rules.
- WebKit has JITs.
r/cpp is full of people with such heuristics, ways that they personally have fewer safety bugs in their software. That's how C++ got its "core guidelines", and it is clearly the foundation of Herb's profiles. You can't get to safety this way, but you can get closer than you were in a typical C++ codebase, and for Geoff that was important.
“Prevent something unless obviously safe” is a core pattern of rules in type systems. For example, variable assignment in Java: if it's possibly unsafe (the RHS is not a subtype of the LHS), then it's prevented.
Are you saying Java and all of classic type theory is just heuristics?
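The same "allowed only when obviously safe" shape shows up in C++'s own conversions; a minimal sketch (my own illustration, not from the thread):

struct Animal { virtual ~Animal() = default; };
struct Dog : Animal {};

void assignments(Dog* dog, Animal* animal) {
    Animal* a = dog;                      // allowed: Dog is a subtype of Animal
    // Dog* d = animal;                   // rejected: possibly unsafe
    Dog* d = dynamic_cast<Dog*>(animal);  // permitted only as an explicit, checked step
    (void)a; (void)d;
}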
You could compile your program with AddressSanitizer; then it at least crashes in a defined way at runtime when memory corruption would happen. TCC (the Tiny C Compiler, initially written by Fabrice Bellard) also has such a feature, I think.
This of course makes it significantly slower.
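A minimal sketch of that workflow, assuming Clang or GCC with the standard -fsanitize=address flag:

// overflow.cpp -- a heap buffer overflow that AddressSanitizer turns into a
// diagnosed crash at the faulting access instead of silent corruption.
int main() {
    int* p = new int[4];
    int x = p[4];   // one past the end: ASan reports heap-buffer-overflow here
    delete[] p;
    return x;
}

Built with clang++ -fsanitize=address -g overflow.cpp, the run aborts with a heap-buffer-overflow report pointing at that line instead of silently reading past the allocation.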
You’d possibly just be trading one problem for another though - ask anyone who’s had to debug a shared ownership issue.
What infuriates me about the C++ safety situation is that C++ is by and large a better, more expressive language than Rust is, particularly with respect to compile-time, type-level metaprogramming. And I am being walked, hands handcuffed behind my back, alongside everyone else, into the Rust world with its comparatively anemic proc macro shit because the C++ committee can't be bothered to care about memory safety.
Because of the C++ standards committee's misfeasance, I'm going to have to live in a world where I don't get to use some of my favorite programming techniques.
I don't see how that follows at all. What makes Python Python (and slow!) is dynamic dispatch everywhere, down to the most primitive things. Refcounted smart pointers are a very minor thing in the big picture, which is why we've seen Python implementations without them (Jython, IronPython). Performance-wise, yes, refcounting certainly isn't cheap, but if you just do that and keep everything else C++-like, the overall performance profile of such a language is still much closer to C++ than to something like Python.
You can also have refcounting + something like `ref` types in modern C# (which are essentially restricted-lifetime zero-overhead pointers with inferred or very simplistic lifetimes):
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
It doesn't cover all the cases that a full-fledged borrow checker with explicit lifetime annotations can, but it does cover quite a few; perhaps enough to adopt the position that refcounting is "good enough" for the rest.
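That link is about C#'s ref types specifically, but the division of labor it gestures at can be sketched in plain C++ too (my illustration, with made-up names): refcounting for ownership, plain references for short, clearly scoped borrows.

#include <iostream>
#include <memory>
#include <string>

struct Document { std::string title; };

// The reference parameter is the "borrow": valid for the duration of the call,
// with no refcount traffic on the hot path.
void print_title(const Document& doc) { std::cout << doc.title << '\n'; }

int main() {
    auto doc = std::make_shared<Document>(Document{"notes"});  // owning, refcounted
    print_title(*doc);                                         // borrowed just for the call
}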
Both of which have modern, concurrent, parallel, and generational garbage collectors.
To be more precise, it's old Python. Recent versions of Python use a GC.
> And I am being walked, hands handcuffed behind my back, alongside everyone else, into the Rust world with its comparatively anemic proc macro shit because the C++ committee can't be bothered to care about memory safety.
Out of curiosity (as someone working on static analysis), what properties would you like your compiler to check?
Have you worked much with SAL and MIDL from Microsoft? Using SAL (an aesthetically hideous but conceptually beautiful macro-based gradual typing system for C and C++) you can overlay guarantees about not only reference safety but also sign-comparison restrictions, maximum buffer sizes, and so on.
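For readers who haven't seen it, a minimal sketch of what SAL looks like (annotations from Microsoft's sal.h, checked by MSVC's /analyze; exact diagnostics depend on the analyzer version):

#include <sal.h>
#include <cstddef>

// The annotations say: `values` must point at at least `count` readable ints,
// and `out` must be a valid location the function writes before returning.
void sum(_In_reads_(count) const int* values, std::size_t count, _Out_ int* out)
{
    int total = 0;
    for (std::size_t i = 0; i < count; ++i)
        total += values[i];
    *out = total;
}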
But first we need to take step zero and introduce a type "r64": an "f64" that is never NaN or inf.
Rust has its uint-that's-not-zero; why not the same for floating-point numbers?
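There's no such built-in type today, but here is a sketch of what an r64-style type could look like as a plain library wrapper (assumed name, the invariant checked at construction):

#include <cmath>
#include <stdexcept>

class r64 {
public:
    explicit r64(double value) : m_value(value) {
        if (!std::isfinite(value))                 // rejects NaN and +/-inf
            throw std::invalid_argument("r64 must be finite");
    }
    double get() const { return m_value; }
private:
    double m_value;
};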
Why do we need to single out a specific value? It would be way better if we could also use uint-without-5-and-42. What I would wish for is type attributes that really belong to the type.
typedef unsigned int __attribute__ ((constraint (X != 5 && X != 42))) my_type;
While I think it's a great idea, this also sounds like it would require fairly major rewrites (and possibly specialized libraries?), which suggests that it would be hard to get much buy-in.
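As a rough sketch of what such a "specialized library" type might look like today (hypothetical names; the predicate is checked when a value enters the type, rather than by a compiler attribute):

#include <stdexcept>

constexpr bool not_5_or_42(unsigned int x) { return x != 5 && x != 42; }

template <auto Predicate>
class constrained_uint {
public:
    explicit constrained_uint(unsigned int value) : m_value(value) {
        if (!Predicate(value))
            throw std::invalid_argument("value violates constraint");
    }
    unsigned int get() const { return m_value; }
private:
    unsigned int m_value;
};

using my_type = constrained_uint<&not_5_or_42>;  // rough analogue of the typedef above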
> Reference counting is the primary mechanism of garbage collection, however, it doesn’t work when the references have cycles between them and for handling such cases it employs the GC.
- temporal safety (e.g. no use after free)
- initialization safety (no read of uninitialized memory)
- thread safety (no data races)
- type safety (accessing memory with the correct type)
I have a little post that explains this using a few more words, if interested: https://burakemir.ch/post/memory-safety-the-missing-def/
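To make the categories concrete, here are minimal C++ illustrations of my own (each violating line is left commented out so the snippet itself stays well-defined; the data-race case is omitted since it needs a second thread):

int main() {
    // Temporal safety: use after free.
    int* p = new int(1);
    delete p;
    // int dangling = *p;                      // read through a dangling pointer

    // Initialization safety: read of uninitialized memory.
    int uninitialized;
    // int copy = uninitialized;               // indeterminate value

    // Type safety: accessing memory with the wrong type.
    float f = 1.0f;
    // int bits = *reinterpret_cast<int*>(&f); // invalid type pun

    (void)f; (void)uninitialized;
    return 0;
}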
Lifetime parameters aren't necessary; lifetime contracts may be implemented in a different and much easier way. They could be expressed in the form of a function attribute, which may be calculated via constexpr code.
Special operators for borrowing just add more complexity. C++ already achieves the same behavior with normal references (which may be mutable or non-mutable).
Introducing immutability by default isn't strictly necessary for achieving safety. C++ developers are already mostly fine writing "const" almost everywhere.
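One existing instance of the attribute approach (prior art, not the commenter's exact proposal) is Clang's [[clang::lifetimebound]]: it marks parameters that the return value's lifetime depends on, so the compiler can warn when the result would dangle. A minimal sketch:

#include <string>

// The returned reference is tied to the annotated parameters, so binding it to
// a temporary triggers a dangling-reference warning in Clang.
const std::string& shorter(const std::string& a [[clang::lifetimebound]],
                           const std::string& b [[clang::lifetimebound]]) {
    return a.size() < b.size() ? a : b;
}

void caller() {
    const std::string& r = shorter(std::string("temp"), std::string("orary"));
    // Clang warns here: r is bound to a temporary destroyed at the end of the
    // full expression above.
    (void)r;
}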
Wouldn't that just be slightly different syntax for the same idea? If you want to represent relationships like "the pointer in the field Struct::field_name within this argument is linked with the lifetime of the returned pointer" and you want them to be reusable across functions and contexts, isn't the most obvious way to write that a variant of
struct Struct<'a> {
char *'a field_name;
};
char *'a f<'a>(Struct<'a>& s);
?
Immutability by default is good, but it's not strictly necessary. I repeat: you can just write "const" everywhere manually, except in places where mutation is needed.