Posted by theanonymousone 18 hours ago
Fast forward a few decades, and we're still very much on this journey of finding the right abstractions/interfaces/libraries/languages. I feel like there must be a complexity equivalent to Parkinson's law: complexity expands to fill the space left in between abstractions.
Imagine if this was a new language that the dev community was seeing for the first time. It's hard to imagine it gaining much traction.
There is no “modern” alternative. If you read Reddit threads, C++ programmers actually believe that it’s a reasonable file reading API.
Most companies that I’ve worked at have just implemented their own file-reading layer on top of the OS syscalls, which is annoying because it requires at least a Windows and a UNIX variant.
Look, I like C++. I’ve been programming in it for years. But some of the stereotypes around C++ programmers are true. I still occasionally run into design decisions so untethered from reality that it still shocks me after all these years.
But it's not a new language. It's backwards compatible with C.
So "iterators" behave the same as pointers, since that's how you'd iterate through an array. You can add and subtract, then pass them to other functions.
You can't just have a function that returns a vector of strings, because that function would do an allocation. When is it deallocated? Before unique_ptr (the guide predates it), it'd be the caller's responsibility to do so manually.
Meaning you have to assign the output of that function to a variable every single time and manually remember to deallocate it, or you get a memory leak.
C avoids this with `strtok` by destructively modifying the string in place. This is arguably worse.
If you were designing a new, non-GC, language, you'd have good ownership semantics and not allow pointer arithmetic. That'd be Rust.
The reason it works is because D has actual array types.
If you choose to use automatic memory management with D, you are memory safe.
So while a much older date is probably appropriate, maybe 20-30 years ago, we can at least mark this (2022) until somebody justifies a particular previous date.
I have some very smart friends who think it's the perfect language, but I kind of prefer almost every language that has come out after C++. I feel like the language adds some very strange semantics in some very strange places that can be hard to reason about until you've spent a lot of time with the language. This wouldn't necessarily be so bad if not for the fact that most people who write C++ have not spent sufficient time to understand it (and I consider myself in that group, though I don't write C++).
I have mixed feelings on D, but I'm very grateful that Rust came along. Rust is arguably even more complicated than C++, but the good thing is that getting a lot of these complications wrong will simply not allow your code to compile. It's a huge pain in the ass at first but I've become grateful for it.
I still write C very occasionally, but Rust has supplanted like 95% of jobs that were formerly C. I still really need to play with Zig.
Personally I use C or something, anything other than C++ really, if I need something more ergonomic or provably correct. Many excuse C++’s design history with “they didn’t know better”, but the oft forgotten history explained in that video shows otherwise.
I should write an Ffmpeg codec with it or something this weekend to try it out.
Rust is complex. It's solving complex problems. It's not complicated, though. There's not much you could remove without creating leaky abstractions.
In my opinion, C++ is equally complex. It's solving the same kinds of problems as Rust (although it'd be fairer to say "Rust is solving the same problems as C++"). However, it's hella complicated. There's a vast number of twists and turns to keep in mind if you want to use it, and most of them are things you could not have anticipated by reasoning about it from first principles.
If you took the design goals of Rust, and reinvented it from scratch, it'd probably end up looking a lot like Rust. If you were to reinvent C++ from scratch, I bet it wouldn't remotely resemble modern C++. If anything, I bet it would also end up looking like Rust, and the fractal of powerful footguns would be left on the cutting room floor because "that's insane, there's no way anyone would want that".
Gauging how "complicated" a language is is somewhat subjective, so it's kind of hard for me to give a straightforward answer. I think it's certainly easier to be (some definition of) productive with C++ than with Rust. I feel like to do anything even remotely non-trivial with Rust, you kind of have to understand everything, because if you don't do it in the "Rust way", it often won't compile. I think this is a good thing, but it does make it harder to get started.
C++ has a lot less consistency and (kind of) more features, and lots of strange semantics to go with those features, and so if people actually use them it can get confusing and hard to read pretty quickly.
My knowledge is a bit out of date, to be clear; previously, whenever I needed something in the C++ domain, I could fairly easily just reach for C instead. Now Rust is available and I think overall better (though I do sometimes miss how utterly simple and dumb C is).
To be fair, the alternative to having to worry about Send/Sync/Pin is not "not worrying about Send/Sync/Pin". It's having to worry about correctly enforcing the constraints they describe on your own, without any kind of mechanical help. E.g., not moving data to another thread that shouldn't be and not accessing data from multiple threads that shouldn't be. This stuff is intrinsic.
In this sense the Rust mental model is simpler, because failing to uphold these constraints is no longer "your fault", it's Rust's fault.
And relying on people to check them. Versus a compiler.
I confess I haven't dug into it much yet, but this reminds me of how Haskell was. By the time you got a program to compile your project was more or less done.
With Rust, I had to get used to single ownership or explicit cloning. There's an argument that this is "better", but I found it a bit harder to learn.
You can make an argument that K&R C "is" an utterly simple and dumb language, but if it ever was one, that language is long gone, and it's irrelevant for modern hardware anyway.
Today because C only has a single kind of reference, the raw pointer, that means if you want references at all (which you do) you need pointers, and to get decent performance from this sort of language you need pointer provenance, and so now all your reference types involve understanding compiler internals minutiae. Bad luck though, those aren't specified in the C language standard, that's a TODO item from the turn of the century. The committee agrees that C pointers do have provenance but declines to explain how that could possibly work.
I haven't seen places where they wanted this, but they definitely can exist. In the cases I'm thinking of any valid pointers are definitely unique (so no aliasing), or they're definitely pointing at something immutable (so aliasing is fine) or both and so there's no problem as I understand it.
There is an outstanding issue with LLVM - for any language including Rust - that it has unsound optimizations for pointers and this has implications for provenance tricks, but as I said that's not Rust specific and I think worse there are signs the same illness afflicts the GCC backend so maybe it's worse than "LLVM is buggy" and is a wider problem in how compiler developers have thought about this vague unspecified problem.
Circa 2004 I was in college and took a C++ class. I spent an entire week trying to get my final project working and couldn't for the life of me figure out what was wrong. It was only a few hundred lines and was like an employee record type demo. I spent about 3 hours one on one with the professor trying to figure it out. I remember that removing the auto_ptr stuff and using regular pointers would make it work (because the problem has to be with the pointer stuff right?), but part of the requirements was that I had to use auto_ptr because it was safer or whatever.
We tried compiling it on different systems and nothing would get it to work. He ended up giving me a C on the project admitting "it should work, but that doesn't cut it in the business world" or something to that effect which really pissed me off.
I just had a chat with GPT about this and that was almost certainly what was causing my program to segfault.
std::auto_ptr<int> a(new int(5));
std::auto_ptr<int> b = a; // the "copy" transfers ownership; a becomes null
*a;                       // dereferencing a now blows up
Wild.
He's using auto_ptr to demonstrate RAII, which is fine. I would assume that the use of auto_ptr indicates that the example was written some time ago.
Garbage collectors don't guarantee the absence of memory leaks. GCs remove one important source of memory leaks but it's still very possible in GC languages to use up all available memory unintentionally simply by holding onto things in a big data structure that you've forgotten about (often it's a cache). Weak pointers in conjunction with GC help a great deal with that problem but even so GC and weakness are not going to guarantee leak-prevention in all cases.
I still strongly prefer GC languages to the alternative.
Love it!
The real trick, in my experience, is to design your software with things like bounded queues or ring buffers, and to avoid manual memory management (new/delete). This works in C++ just as well as GC'ed languages.
One of my favorite consequences of LLM-heavy workflows (vibe-coding "make me a CRUD app"-style prompts aside) is that prompting the LLM forces the user to put at least a modicum of thought into how the software is actually architected.
The main reason I saw around me for memory leaks in GC'ed languages is devs thinking only about the 'add' part, not the 'when-to-remove' part. I always think of both, and the only leaks I got were from slowdowns causing events to pile up in scheduler queues (deliberately not bounded).
For example, the 1987 edition of "The C++ Programming Language" (only 328 pages, including the index!) explains how the user can handle `new` failures with `set_new_handler` to "plug in" a garbage collection function that frees up memory and handles the failure.
And section 10.7 of "The Design and Evolution of C++" is titled "Automatic Garbage Collection", and covers in depth his reasons for not including a garbage collector, explaining a bit about how a plugin automatic collector might work. The TL;DR is that the hardware of the time was too limited and the performance overhead would have killed C++'s chances in its target market. He also posits that memory leaks "are quite acceptable" in many applications because most don't have to run forever and aren't "foundation libraries", but he's probably changed his mind on that by now.
https://repo.autonoma.ca/repo/mandelbrot/blob/HEAD/main.c
When writing:
fractal.image = image_open( fractal.width, fractal.height );
I will immediately write below it: image_close( fractal.image );
This hides memory allocations altogether. As long as the open/close functions are paired up, it gives me confidence that there are no inadvertent memory leaks, and using small functions makes it easy to eyeball the pairings.

For C++, developing a unit test framework based on Catch2 and ASAN that tracks new/delete invocations is rather powerful. You can even set it up to discount false positives from static allocations. When the unit tests exercise the classes, you get memory leak detection for free.
(I don't mind down votes, but at least reply with what you don't like about this approach, and perhaps suggest a newer approach that we can learn from; contribute to the conversation, please.)
Let me stop you right there. I did not downvote you, but I bet that's why others did. If humans were capable of correctly pairing open/close, new/delete, malloc/free, then we could've called C's memory management "good enough" and stopped there. Decades of experience show that humans are completely incapable of doing this at any scale. Small teams can do it for small projects for a small period of time. Large teams on large projects over long eras just can't.
If the advice for avoiding resource errors includes "all the programmer has to remember is...", then forget it. It's not happening. Thus the appeal of GC languages that do this for the programmer, and newer compiled languages like Rust that handle resource cleanup by default unless you deliberately go out of your way to confuse them.
cat += *p+"+";
Feels very cheap because it was so few keystrokes, but what it's actually doing is:

1. Making a brand new std::string with the same text inside it as `p` but one longer so as to contain an extra plus symbol. Let's call this temporary string `tmp`
2. Paste that whole string on the end of the string named `cat`
3. Destroy `tmp` freeing the associated allocation if there is one
Now, C++ isn't a complete trash fire, the `cat` std::string is† an amortized constant time allocated growable array under the hood. Not to the same extent as Rust's String (which literally is Vec<u8> inside) but morally that's what is going on, so the appends to `cat` aren't a big performance disaster. But we are making a new string, which means potentially allocating, each time around the loop, and that's the exact sort of costly perf leak that a Zig or Odin programmer would notice here.
† All modern C++ std::string implementations use a short string optimisation. The most important thing this does (the big win for C++) is that they can store their empty value, a single zero byte, without allocating, and they can all store a few bytes of actual text before allocating too. This might matter for your input strings if they are fairly short, like "Bjarne", "Stroustrup" and "Fool", but it can't do "Disestablishmentarianism".
And it's still possible to improve performance here without returning to manual memory management. Just replace it with something like this:
cat += *p;
cat += "+";
Now no temporary string is created and thrown away; only cat performs memory allocations under the hood.

But do you really need arenas? Does doing allocations in the traditional way create a bottleneck in your specific use case? Or do you just want to justify broad manual memory management (with its bugs and security vulnerabilities) in the hope of gaining (or not) a tiny amount of extra performance?
Also, with allocators you can replace new and delete, but you can't implement anything that doesn't fit into the new/delete paradigm, like any allocation strategy that requires moving objects.
As far as I know, newer C++ standards have something that allows working around this issue, or at least there are proposals for it.
C++ added polymorphic memory allocators in C++17 along with polymorphic versions of all standard library containers under the std::pmr namespace, so that you have std::pmr::vector, std::pmr::map, etc... all of which fully abstract out the details of memory allocation.
But arenas can't be used in every case. They are suitable only if large amounts of allocations take place at once and need to be deallocated all at once. If reallocation or freeing of individual memory chunks is needed, arenas can't be used; each allocation has to be managed individually, and for that containers are a better choice than manual memory management.
It's fairly straightforward to compose memory arenas with a pool allocator in these circumstances.
1) An allocated memory chunk cannot outlive its arena (leaks are impossible). You probably mean a stale reference? The arena is put at such a level in the memory hierarchy that this bug becomes impossible. The bug here would be that the allocation was done in the wrong arena. In C this would be avoided by putting temporary arenas in a local function scope by passing them as parameters. Foolproof references require C++ smart pointers. This is one example of you mixing concepts: smart pointers/containers can still be used with arenas.
2) You mix up arenas and bump allocators. An arena can also use a pool allocator for example. Arena refers to the concept of scoping blocks of allocations.
3) Individual deallocations and arenas are not exclusive, for example using pools. But even with bump allocators free lists are a thing (and linked lists are more attractive in bump allocators because of locality).
You probably don't want to have an Arena in main, and you do all of your allocations from there, for example. That "just" leaks everything.
Here's a classic Arena-with-rewind bug:
{
    Arena a;
    avec<int> v(a);
    {
        RewindMark rm(a);
        v.push(1); v.push(2); v.push(3); // resize allocates inside the rewind scope
    } // rewind reclaims everything allocated since the mark
    v[2]; // Oh no! The underlying data array has been deallocated
}

Generally, it's best to avoid the rewind trick, IMO; it makes it difficult to create composable programs.
Since C++11 it is permissible to write stateful memory allocators including arena based memory allocators. You can even write memory allocators that are tied to a specific object, so called sticky allocators.