UFCS is such an underrated language feature. When you have UFCS you can toss out 90% of the uses of methods in favor of just plain ole functions. Add generic functions and concepts and you rarely end up needing OO support.
expect(6).toBe(even)
E.g. any function call can be converted to a method call on the function's first parameter:
let mylist = [3, 2, 1]
" prints "1" as these two are equivalent
echo sort(mylist) == mylist->sort()
Helps a lot with chaining.
In addition, without uniform call syntax, adding a new method can only break subclasses, whereas with uniform call syntax it can break other client code.
int addOne(int v){
return v+1;
}
You can now write code like this:
int foo = 3;
writeln(foo.addOne);
There is absolutely no reason that typing "foo." would not suggest "addOne" as a possibility.
Furthermore, I don’t think it necessarily makes sense for all functions that happen to take, say, a string as their first argument, to be listed in the code completion for method invocation on a string variable.
If you merely want to define auxiliary methods outside of a class, which is the thing the GP seems to like, that’s what’s usually called “extension methods”. It doesn’t require uniform call syntax.
hmm, yeah fair enough I suppose. I don't think I've found a good use-case for that yet. I guess having the symmetry there makes the feature easier to explain at least? I dunno.
> Furthermore, I don’t think it necessarily makes sense for all functions that happen to take, say, a string as their first argument, to be listed in the code completion for method invocation on a string variable.
All functions in scope that happen to take a string as their first argument. If this turns into an actual problem in practice it's quite doable to refactor things such that it's not an issue.
I find that when I use autocomplete I'll be typing the first bit of the method name in any case. I never "browse" autocomplete to look for the thing I need.
Extension methods are another way to do the same thing yes, but that feels like special-casing behavior, where UFCS is more general. With extension methods you need to think of how to implement it wrt things that don't usually have methods attached. With UFCS that just works the way you'd expect it to.
Can those go away if you use inheritance or polymorphism, and you need your functions to access protected or private members? I mean, OO is as much about methods as functional programming is about functions.
Not really. Its value proposition is code reuse. It's not a misfeature just because it breaks a simple explanation.
Implementation inheritance "in the large" remains a total misfeature and should not be used, other than for very tightly defined, extensible "plug in" architectures (where the point of extensibility can itself be treated as a single module). But these are really quite rare in practice.
Not really. Interfaces play no role in invoking member functions, even if they are defined in a base class. Inheritance allows member functions declared in a parent class to be called from a derived class without requiring boilerplate code. Instead of having to duplicate code, inheritance provides a concise way to specify a) this is what I want to reuse, and b) this little thing is what I want to change in the implementation.
> The genuine value proposition for implementation inheritance is, quite unsurprisingly, the same as for typestate and generic typestate, namely better type checking within a single self-contained program module.
No. The value proposition is not requiring any boilerplate code to extend or change any detail in a base class. With inheritance, you just declare the thing you want to add or change, and you do not need to touch anything else. The alternatives being floated fail to meet very basic requirements such as visibility, encapsulation, and access control.
> Implementation inheritance "in the large" remains a total misfeature and should not be used (...)
Not true. This is a personal belief based on specious reasoning. You have to go way out of your way to ignore the problems that inheritance solves while overlooking the negative impact of the alternatives being floated.
The defining characteristic of implementation inheritance is open recursion. When you call a "member function declared in a parent class" there's no telling what code will actually be run, because that member function may have been overridden at any point in the class hierarchy. There's no way of knowing whether the caller and callee code will agree on the required semantics and invariants involved, or even what these should be for any given case. That's why this is a misfeature for a "programming in the large" scenario.
By contrast, these issues can be managed when using implementation inheritance within a self-contained, smaller-scale program module, and then its open recursion behavior, properly managed, matches what we expect from the use of the typestate pattern.
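To make the open recursion point concrete, here is a minimal D sketch (class and method names are invented for illustration):
class Base
{
    // greet() calls name() through the implicit 'this', so it may
    // dispatch to an override anywhere further down the hierarchy.
    void greet() { import std.stdio; writeln("hello, ", name()); }
    string name() { return "base"; }
}
class Derived : Base
{
    override string name() { return "derived"; }
}
void main()
{
    Base b = new Derived;
    b.greet(); // prints "hello, derived": Base.greet runs, but name() resolves here
}
The base class code and the override have to agree on what name() is allowed to assume, and nothing in the language enforces that agreement.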
Recursion does not register as a concern in inheritance or polymorphism.
> When you call a "member function declared in a parent class" there's no telling what code will actually be run, because that member function may have been overridden at any point in the class hierarchy.
This is a feature, and a valuable one. You don't care what your parent class is doing. Your concern is that you want to extend it, and that's it.
> By contrast, these issues can be managed when (...)
You didn't point out any issues. Also, composition addresses any concern you might have.
You're searching for problems that fit a solution, and so far you have pointed out no problem.
Agreed about this, of course. My approach is based on trying to describe what exactly it is that implementation inheritance adds beyond pure composition, and "open recursion", meaning late-bound dispatching through the 'this' or 'self' implied parameter (which is what potentially allows method calls anywhere in the hierarchy to dispatch to overridden methods in derived classes) is the pithy answer to that question.
The generic typestate pattern turns out to be a close analog in that it also involves function calls that go through a genericized "Self" object.
I do think that this leads to quite severe problems wrt. getting caller and callee code to agree on expected semantics in a large, multi-modular and potentially a quickly evolving codebase (which is what "programming in the large" means as a term of art) but then I've made that point above already and I'm not going to belabor it.
Though I grant that having an object hierarchy does make it a bit more explicit what's being inherited or needs implementing. However, an OO hierarchy also tends to obscure the actual parent implementations. Just having a set of functions from a module generally lowers the number of indirections.
In general I find a non-OO code base easier to grok and understand, especially compared with the OO culture of Java- or C#-style code, even though I'm generally good at understanding OO patterns.
There's far more to inheritance than code reuse: for example, encapsulation, access control, and static type checking.
There's no mystery. It's a jack of all trades, master of none. E.g., the virality of the GC makes it a non-starter for its primary audience (C/C++ developers). The need to compile and the extra verbosity makes it a bad substitute for scripting like Python. Etc.
Basically, name any large niche you'd expect it to fill and you'll probably find there's a tool already better suited for that niche.
No. If you were to say you need the GC to use all features of the language and standard library, that's true, since the GC does important things. But to claim a C developer wouldn't be comfortable with the language because of the GC is nonsense. Just don't allocate with the GC and use the same mechanisms you'd use in C (and then build on top of them with things like @safe, reference counting, and unique pointers).
> Just don't allocate with the GC
"virality" is not just a word you can ignore.
What you're suggesting is the moral equivalent of "it's easy to avoid diseases, just avoid contact with those infected", or "it's easy to avoid allergens, just avoid foods you're allergic to", or "it's easy to avoid contamination, just set up a cleanroom", or "it's easy to write deterministic code, just avoid randomness", etc.
Yes, there are things that are easy to achieve in the vacuum of outer space, but that's not where most people are interested in living.
Thank you. Yes, exactly. The problems aren't even subtle; it's impossible to miss them if you actually try. I don't recall even finding a reasonable way to concatenate or split strings on the heap and return them without a GC, let alone anything more complicated. It boggles my mind that people repeat the talking point that it's somehow practical to program with @nogc when the most basic operations are so painful. Nobody is going to drool at the idea of spending days/weeks of their lives reinventing nonstandard second-class-citizen counterparts to basic types like strings just to use a new language.
> They want to make the situation better but there’s just not enough people to tackle the huge amount of work that requires (I.e. changing most of the stdlib)
I don't agree that it's lack of manpower that's the problem -- at least, not yet. I think it's primarily the unwillingness to even admit this is a problem (an existential problem for the language, I think) and confront it instead of denying the reality, and secondarily the inertia and ecosystem around the existing language outside the standard library. It's not like the problem is subtle (like you said, a few minutes of coding makes it painfully obvious) or novel. The language has been out there for over a decade and a half, and people have been asking for a no-GC version nearly that long. Yet, at least to the extent I've had the energy to follow it, the response has always been the canned you-can-totally-program-D-without-a-GC denials you see repeated for the millionth time here, or (at best) silence. If this sentiment has changed and I'm unaware of it, that's already significant progress.
Maybe the unwillingness to confront reality is due to the lack of manpower and how daunting the problem looks; I'm not sure. But it seems as bright as daylight that D is not going to be successful without solving this problem.
https://github.com/dlang/dmd/blob/master/compiler/src/dmd/ba...
It's pretty minimalist on purpose. I don't much care for kitchen sink types.
BetterC is the no-GC version. Use the -betterC switch on the compiler.
Or, if you want a string result,
import core.stdc.stdlib;
string concat(string s1, string s2)
{
const length = s1.length + s2.length;
char* p = cast(char*)malloc(length);
assert(p);
p[0 .. s1.length] = s1;
p[s1.length .. s1.length + s2.length] = s2;
return cast(string)p[0 .. length];
}
I tend to not use this sort of function because it doesn't manage its own memory. I use barray instead because it manages its memory using RAII. D provides enormous flexibility in managing memory. Or, you can just leave it to the gc to do it for you.
I feel you're demonstrating exactly the problems I highlighted through your example here -- including the very lack of acknowledgment of the overall problem.
The very simplest and straightforward way is to use the gc to manage the memory. It works very very well. All the other schemes have serious compromises.
That's why you can use the method most appropriate in D for the particular usage. I routinely use several different methods.
Funnily enough, C++ has been copying many features that D has had for a long time, beginning with type inference (i.e. using "auto" to declare variables), and now contract programming and static reflection.
I really loved the language, but it's very sad that it never managed to take off and become more popular and better maintained.
It's a prototype to gauge the interest in having a borrow checker in the language. I did not continue with it because the interest is not there.
And while some features in other languages might have seen an implementation first in D, claiming first to the finish line as it usually comes up, even on this thread, hardly does anything for language adoption.
On the contrary, it is one reason fewer to leave those languages, as they eventually get the features and already have the ecosystem.
Most of the stuff that interested me back in 2007, is now available across in C#, F#, JVM languages, Swift, OCaml, Haskell, with a better ecosystem and IDE tooling.
And then there is C++26 and Rust 2024.
You achieved a lot as a language designer, but it is still not clear how the community would keep the language going forward in, let's say, 10 years from now.
GCC (and possibly other compilers) had typeof in the '90s well before D was first released. Macros in the form:
#define TYPEOF(name, expr) typeof(expr) name = expr
were widely in use since then.
I'm sure that C++ borrowed concepts from D, but type deduction (not inference, BTW) is not one of them.
I've been happily using D for a better experience with C code for more than a decade. First, because it's extremely rare to need to completely avoid the GC for everything in your program. Second, because everything you want and need from C is available, but you can use it from a more convenient language. Sure, exceptions won't work if you're avoiding the GC (which doesn't have anything to do with C), but so what. It's not like C programmers are currently using exceptions. You can continue to use whatever mechanism you're using now.
That works if your main use case for D is as a top-level wrapper program that is basically calling a bunch of C libraries. If you want to use D libraries you will run into ones that need the GC pretty quickly.
I don't think what hinders their adoption is their direction, everything they say they accomplished/plan to accomplish is ideal IMO.
Having strictly more features (if we even assume that, which I don't think is accurate) does not imply better.
Javascript is just JSON with more features too. Is it a mystery that people don't ship Javascript code everywhere instead of JSON?
Another example: D enables nested functions. Yes, there are ways in C to do the equivalent, but they are clumsy and indirect.
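For the curious, a minimal sketch of what that looks like; a nested function can read and modify the enclosing function's locals directly:
import std.stdio;

void main()
{
    int total = 0;

    // nested function with direct access to the enclosing local 'total'
    void add(int x) { total += x; }

    add(3);
    add(4);
    writeln(total); // prints 7
}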
(D's ability to use C code is to make use of existing C code. There's not much point to writing C code to use with D.)
That's kind of why I said it's the "jack of all trades". It's not a bad language, it just doesn't beat any existing languages I know of in their own niches (hence "master of none"), so few people are going to find it useful to drop any existing languages and use D in lieu of them.
D also attracts expert programmers who are very comfortable using the GC when appropriate, stack allocation when appropriate, malloc/free, even ref counting. These are just tools in the toolbox, like I use socket wrenches, end wrenches, box wrenches, crow foot wrenches, pipe wrenches, monkey wrenches, etc. I don't try to use socket wrenches for everything.
BTW, the GC makes managing memory in compile-time function execution trivial, something that non-GC languages struggle with.
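A small illustration of what's meant (a sketch; `repeat` is a made-up helper, and the appends inside it are ordinary GC allocations that the CTFE engine handles when the call is forced to compile time with `enum`):
import std.stdio;

string repeat(string s, int n)
{
    string r;
    foreach (i; 0 .. n)
        r ~= s;          // GC-backed append, no manual memory management
    return r;
}

enum banner = repeat("-=", 10); // evaluated entirely at compile time

void main()
{
    writeln(banner);
}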
I guess what I've been trying to say is that you would find yourself pleased much, much more often (and D would be much more successful) if you recognized and addressed these high-level issues that people have been pointing out for decades, instead of denying them and going on forums telling customers why their expectations are wrong or unnecessary. I'm saying this because D really is a great piece of technology that got a lot of things right, except a few crucial details for some of its most important users. And it has had so much potential - potential that has been gradually lost largely because you haven't even recognized the flaws and hurdles that come with it.
It reminds me of the infamous Dropbox comment. It's as if you invented FTP, but then whenever people told you it's hard to store & share files, you kept insisting that it's trivial with just a few simple steps on Linux, completely missing the massive market opportunity and the barriers you're telling people to walk through. https://news.ycombinator.com/item?id=9224
And I'm not saying all this out of hate for D, but out of (at least past) love for it. I desperately wanted to see it succeed, but I gave up because I realized you simply did not see the Achilles heel that frustrates many of its users and that has held back its potential.
There is a great irony that a replacement to C++ should have lots of features in it. (Not necessarily the same too-many features.) One of the key requirements of a real C++ alternative would be fewer language features.
Because at the scale many companies use C++, the additions into ISO C++, for how bad WG21 process currently might be, don't land there because a group of academics found a cool feature, rather some company or individual has seen it as a must have feature for their industry.
Sadly also a similar reason on how you end up with extension spaghetti on Khronos APIs, CSS levels or what have you.
Naturally any wannabe C++ replacement to be taken seriously by such industries, has to offer similar flexibility in featuritis.
Maybe it's just me but, sorry, I cannot parse this sentence.
As bad as the WG21 process might be, the additions into ISO C++ don't land there because a group of academics found a cool feature; they land there because some company or individual has seen it as a must-have feature for their industry.
You 100% cannot do that. I mean, you can if you're writing some toy project just for your own use. But as soon as you start interacting with other programmers, it's inevitable that some will use some other subset of language features.
> Can you briefly explain which features can be thrown out and language will not miss a lot without them?
I don't think that it's controversial that C++ is a huge language with many features, and I doubt I'm the best person to rehash that. One often quoted example is the multitude of ways to initialise a variable (Foo x = y; Foo x(y); Foo x = {y}; Foo x{y} and for default initialisation Foo x; Foo x = {}; Foo x{}; Foo x() (not really - that's the most vexing parse); Foo x = Foo()). There's multiple syntaxes to define a function including auto and decltype(auto) return types. There are const, consteval and constexpr - you may know the difference but I've forgotten. There are so many templating features that I wouldn't know where to start. Concepts are layered on top of that - which are useful and a good idea but no denying that it's layering extra complexity on top (which can be said for many C++ features). I've really just scratched the surface.
The thing is, I learned C++ over 20 years ago, when the latest standard was C++03 (which was essentially the same as C++98). Even at the time, C++ seemed like a bit of a chunky language (e.g., compared to C or Object Pascal - languages tended to be simpler back then), but it was achievable to mostly understand it all. But each revision that passed has added a huge volume of features. So I really feel how big C++ is because it's even big compared to (an older version of) itself. I've mostly kept up over the years but I can't imagine how I would properly learn the language today from scratch - I feel like you don't really stand a chance unless you've also been closely following it for decades.
I disagree. Such a problem exists in any language, so any project should have a code style that describes how you can do things and how you should not do them.
> I feel like you don't really stand a chance unless you've also been closely following it for decades.
From my experience, 2 years is enough. I started from zero and now I'm pretty good at C++. Although this can look like a really long time, you should consider that to understand a lot of parts of C++ you need knowledge of computer science and program design, which you would have to learn with any other language too.
Re: your first sentence: I neither understand the logic nor do I understand how insulting the developer is going to help D succeed here even if the logic was sound.
That's not data science or AI; "more famous" -- ridiculous distinction.
Fun fact: Ansible is orchestrated Python. Half your Linux distribution of choice is a pile of Python scripts. It's everywhere.
D does escape analysis from an alternative direction. If a pointer is qualified with `scope`, the compiler guarantees it does not escape the stack frame.
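A rough sketch of the idea (illustrative only; depending on compiler version, full enforcement may require @safe code and the -preview=dip1000 switch):
int* global;

// Nothing stops this callee from stashing the pointer somewhere long-lived.
void keep(int* p) { global = p; }

// 'scope' promises p will not outlive the call, so the compiler rejects the escape.
void inspect(scope int* p) { global = p; } // error when scope checking is enabled

void caller()
{
    int x;
    inspect(&x); // callers can rely on &x not escaping this stack frame
}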
BTW:
I am not fond of stuff like:
// Sort lines
import std.stdio;
import std.array;
import std.algorithm;
void main()
{
stdin
.byLine(KeepTerminator.yes)
.uniq
.map!(a => a.idup)
.array
.sort
.copy(stdout.lockingTextWriter());
}
Are there any ways to do this that do not involve a bunch of "."s? I do not understand "map!" and "a.idup" either, FWIW.
I really want to like D, but it tries to do too many things all at once, in my opinion.
Perhaps I will give C3 a fair try.
`map` is an operation on a data structure that replaces one element with another; in this case `a` gets replaced with `idup(a)`. The `idup` makes a copy of its argument in memory and marks the data as immutable.
* https://learn.microsoft.com/en-gb/dotnet/api/system.linq.enu...
* https://learn.microsoft.com/en-gb/dotnet/api/system.linq.enu...
How would it look like with this particular code? Just for comparison.
> The `idup` makes a copy of its argument in memory, and marks the data as immutable.
How is one supposed to know this? Reading the documentation? I really want to look at the code and be able to know straight away what it does, or have a rough idea.
As for idup... The first several search results for "dlang idup" are all useful.
> I really want to look at the code and be able to know straight away what it does, or have a rough idea.
I presume you really don't like Perl, ML-based languages (OCaml, F#, Rust), Haskell, or K.
[1] https://news.ycombinator.com/item?id=44359539
> As for idup... The first several search results for "dlang idup" are all useful.
Yes, I am sure it was, and I am sure an LLM would have helped too, but I think that is beside the point here.
$string =~ s/\d+/NUM/g;
I don't have a clue what is going on. Sure, I see the regex, but what is =~ doing?
There's only so far you can stretch most languages before you need to actually put in effort to learn them.
FWIW, I knew this as a kid, too, despite knowing absolutely nothing about the language at the time.
Anyways, you should read https://news.ycombinator.com/item?id=44463391 if you care about why I dislike the way D does it.
Yes?
> I really want to look at the code and be able to know straight away what it does, or have a rough idea.
You said elsewhere that you love Perl. Would you say your sentence above applies to Perl?
Funny though, because most of the things these people accuse me of are dead wrong, and my comment history is proof of that. In fact, I have been down-voted to oblivion for telling people to read the documentation. I guess we may have come full circle.
Glad we had this utterly pointless chat.
Then again, it is entirely subjective, and we should not argue about taste.
To each their own.
> How would it look like with this particular code? Just for comparison.
I do not know how to write D, so the following might not compile, but it's not hard to give it a go:
copy(sort(array(map!(a => idup(a))(uniq(byLine(stdin, KeepTerminator.yes))))), stdout.lockingTextWriter())
> > The `idup` makes a copy of its argument in memory, and marks the data as immutable.
> How is one supposed to know this? Reading the documentation? I really want to look at the code and be able to know straight away what it does, or have a rough idea.
Are you serious? You are offended by the idea of reading documentation?
This is not helping the credibility of your argument. Again, I'm not a D user, but this is just silly.
If you knew me, and you read my comment history, you would have NEVER said that. It is not even a matter of reading the documentation or not. "idup" seems arbitrary, sorry, I meant the whole line sounds arbitrary. Why "a"? Why "a.idup"? Why "map!"? I was asking genuine questions. You do not have to bash me and see ghosts. I was curious as to why it was implemented the way it was.
I am an active speaker against people who hate reading the documentation.
And FYI, I love Perl[1] and OCaml[2].
[1] https://news.ycombinator.com/item?id=44359539
[2] You would have to check the comment history.
`a` is a parameter in the lambda function `a => a.idup`.
> Why "map!"
This is definitely something that can trip up new users or casual users. D does not use <> for template/generic instantiation, we use !. So `map!(a => a.idup)` means, instantiate the map template with this lambda.
What map is doing is transforming each element of a range into something else using a transformation function (this should be familiar I think?)
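A small sketch of the equivalence, in case it helps (the part after `!` is the compile-time template argument, the parenthesized range is the runtime argument, and UFCS lets you write either form):
import std.algorithm : map;
import std.array : array;

void main()
{
    char[][] lines = ["foo".dup, "bar".dup];

    auto r1 = map!(a => a.idup)(lines);   // lambda passed as a template argument
    auto r2 = lines.map!(a => a.idup);    // same call written with UFCS

    string[] copies = r2.array; // each element is now an immutable copy
}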
FWIW, I've been using D for nearly 20 years, and the template instantiation syntax is one of those things that is so much better, but you have to experience it to understand.
> "idup" seems arbitrary
Yes, but a lot of things are arbitrary in any language.
This name is a product of legacy. The original D incarnation (called D1) did not have immutable data as a language feature. To duplicate an array, you used the property `dup`, which I think is pretty well understood.
So when D2 came along, and you might want to duplicate an array into an immutable array, we got `idup`.
Yes, you have to read some documentation, not everything can be immediately obvious. There are a lot of obvious parts of D, and I think the learning curve is low.
> Yes, but a lot of things are arbitrary in any language.
I disagree, but to each their own.
> Yes, you have to read some documentation, not everything can be immediately obvious.
I do not disagree, but I wanted to know the rationale behind it ("map!(a => a.idup)")!
How is one supposed to know this?
> How is one supposed to know this? Reading[…]?
People are too quick to use the "down-vote" button, and are too quick to judge. I love documentation, I write them. I am an active speaker against people who hate reading the documentation. This was not a case against reading documentation, yet people - wrongfully - believed so. People always glance past things like: "not fond of", and "in my opinion". It is tiresome.
This thread could have been educational, but instead it was a thread meant to bash me. It is my fault.
stdin
.byLine(KeepTerminator.yes)
.uniq
.map!(a => a.idup)
.array
.sort
.copy(stdout.lockingTextWriter());
I would like to emphasize that this is a personal preference. No need to continue to bash me over it.
I prefer Elixir's |> operator, if you want an example of something I prefer.
stdin
|> byLine(yes)
|> uniq
|> map(a => aidup)
|> array
|> sort
|> copy(stdout)
I’m sorry if you took it as bashing. It’s mere curiosity as I’ve never seen that preference before.
IO.stream(:stdio, :line)
|> Stream.map(&String.trim_trailing/1)
|> Enum.uniq()
|> Enum.map(&String.duplicate(&1, 1))
|> Enum.sort()
|> Enum.each(&IO.puts/1)
This is not equivalent in style or presentation to:
stdin
.byLine(KeepTerminator.yes)
.uniq
.map!(a => a.idup)
.array
.sort
.copy(stdout.lockingTextWriter());
Personally, I find the D version visually unappealing (and confusing), especially the way "stdin" sits alone on its own line, followed by a sequence of indented method calls. The excessive use of dots combined with the indentation structure makes it look, to me, rather awkward.
That is just my own opinion.
"map(a => aidup)" caught me by surprise, too. Would Elixir do such a thing?
print join '',
sort { $a cmp $b }
grep { !$seen{$_}++ } # = uniq
<STDIN>;
Go spread the way it did because it was dead simple to make websites and services, the syntax is insanely simple, but the language allows you to scale without going through a ton of hoops. Look at how much of our IT infrastructure is powered by Go now.
D has a lot of potential, but someone has to sit down and build frameworks and tooling for D.
I think if D had officially supported libraries by core maintainers that take full advantage of the best of D it might be a different landscape.
Another area where D could shine is GUIs. Everything is Electron these days; it feels like nobody builds usable GUI stacks. If D had an official solution to this, and it worked nicely and gave you enough power to customize your UI, we might see a shift there too. I welcome an Electron-free future.
Look at the Zed editor (ignore the AI buzz) and how insanely fast it is. It's coded in Rust and IIRC uses WGPU to just render everything, kind of like a video game. It is my new favorite text editor.
Sadly despite my deep love of D, Go is where I'm leaning more towards, due to industry pull there.
I've often thought that, to the extent that I spent a while looking for some active projects I could contribute to and came up blank. If I do have some new GUI-based program of my own I want to write, I will at least consider D for it, though OCaml is another great language in the same space and I already have some experience with OCaml/GTK. My hope was that D would have more mature GUI toolkit bindings and more of a community of people writing apps, which would have been some incentive to switch over from OCaml; I was disappointed to find that wasn't the case.
For some reason, and mostly that being Mozilla, Rust got quite an initial kick to overcome that initial hurdle in haste. We're not going to mention a lot of those libs are stale in Rust world, but at least they're there and that kind of gives you momentum to go forward. Whatever you're trying to do, there's a non-zero chance there's a library or something out there for you in Rust.. and we got there real quick which then encouraged people to proceed.
That's just like my opinion, man.. but I think a key part is that first lib bindings hurdle which Rust somehow went over real quick for a critical mass of it; D hasn't.
Love the D though lol, and Walter is a 10000x programmer if you ever saw one, but it might be time to hang up the hat. I can only imagine how a community like Rust's, or, I don't know, Zig's, among the up-and-coming ones, would benefit from his help and insights. He'd probably single-handedly make Rust compile 100x faster. One can hope.
That is basically table stakes for a new language now.
For me any alternative to Rust implies having automatic resource management, eventually coupled with an improved type system, in a mix of affine types, linear types, effects, or dependent types.
Something that in regards to safety is already available today by using GCC's Modula-2 frontend, FreePascal and similar, is not bringing too much to the table, comptime notwithstanding.
I know it's petty, but I still can't get past how idiotic and frustrating it is that Zig treats unused variables as a compiler error. It's the worst of all worlds:
- It's inconvenient (I have to explicitly suppress them in my code with _ = foo)
- Once I've suppressed them, I don't get any compiler warnings any more - so ironically, it takes more effort to find and fix them before committing. I end up accidentally committing code with unused variables more than in Rust or C.
- And it totally breaks my flow. I like to explore with my hands and run my code as I go. I clean up my code after my tests pass so I can be sure I don't introduce new bugs while refactoring.
Zig's handling of unused variables seems like an unforgivably bad design choice to me. It's been raised by the community plenty of times. Why hasn't it been fixed? I can understand if Andrew Kelley doesn't program the same way I do. We all have our idiosyncrasies. But does he seriously not have any smart people around him who he trusts who can talk him out of this design?
It seems like a huge pity to me. It otherwise seems like a lovely language.
Because yours is just an opinion. It's perfectly legitimate to not like unused variable errors, but its factually wrong to say that no one wants it. You're just yucking somebody else's yum.
> But does he seriously not have any smart people around him who he trusts who can talk him out of this design?
He does, most of them also like unused variable errors. For what it's worth, I do too.
However, I've written a decent amount of Zig code and that's probably my biggest complaint. Zig has put a ton of effort into making an ultra-fast developer experience with very low iteration times, and it's amazing. But then when I'm refactoring some code or trying to figure stuff out by, for example, commenting out some lines of code, I might get a bunch of unused variable errors. And so I spend more time fixing those than I do even compiling the code itself!
One thing I've seen suggested is using the linter to automatically insert `_ = foo` for unused variables, but I don't love that either because then what even is the point of the error in the first place?
But like I said that's all downstream of the no-warnings policy. And I totally understand the failure mode of warnings - I've worked on plenty of large projects that had 4,000 warnings and everyone ignored them and the actually useful ones would be invisible. Is there some middle ground where, I don't know, Debug builds can have warnings, but Release builds don't?
Zig is a language that demands a lot of rigour of the programmer. It offers a lot of trust. Far more so than Go or even Rust. It’s in light of that philosophy that it seems so weird. The compiler trusts me to manually manage my memory, but it’ll scold me like a naughty child if I ignore an unused variable for 5 minutes? Pick a lane.
I’d love to hear some arguments in support of this choice. The closest I’ve heard is “it doesn’t bother me, personally” - which isn’t a very strong argument.
I’m a little tempted to fork the compiler just to fix this. Can’t be that hard, right?
The more time I spend thinking about it, the more convinced I am that it's strictly worse. What am I missing? Why do you like it?
Automatic constructors - You only have to write the 'make me a box of two apples' code and not 'this is how two apples go into a box'! This is as revolutionary as 'automatic function calls', where you don't have to manually push the instruction pointer to pop it back off later.
Parenthesis omission!
If I were to parody this I'd talk about how good Scala is - in addition to classes, you can also declare objects, saving you the effort of typing out the static keyword on each member.
Sell me something nice! Millions of threads on a node. Structured concurrency. Hygienic macros. Homoiconicity. Higher-kinded types. Decent type inference. Pure functions. Transactions.
https://dlang.org/spec/function.html#pure-functions
D's pure functions are quite strict. It can be a challenge to write a function that passes strict purity guarantees - but the result is worth it!
int x = f(); // f() is run at run time
enum y = f(); // f() is run at compile time
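A minimal, self-contained sketch of both uses (assuming the two lines above sit inside a function body; `square` is a made-up example of a strongly pure function):
int square(int n) pure
{
    return n * n; // touches no global state: strongly pure
}

void main()
{
    int  x = square(7); // evaluated at run time
    enum y = square(7); // forced to run at compile time (CTFE)
    static assert(y == 49);
}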
I did not like DUB at all. Its default behavior was to not segregate artifacts by configuration, and trying to change that was a headache.
It’s too bad, though. It’s a nice language, but I can’t see it making any inroads at this point.
Plus the chicken-and-egg problem. This is mostly from the AerynOS experience: it seems like if you want to write some moderately complicated code then you're becoming the upstream of many libraries. Especially now with Rust's popularity and ecosystem maturity on the rise, it's super hard to convince people (e.g. your boss) that you'd be better off with D compared to e.g. Rust.
Then came the D2 re-write which broke backwards compatibility and took another few years.
In the meantime everyone moved on
No such thing happened. D has always been built on the same codebase, and the labels "D1" and "D2" are just arbitrary points on a mostly linear evolution (in fact, the tags D 1.0 and D 2.0 came only 6 months apart; 1.0 was just meant to be a long term support branch, not a different language). It was the addition of `const` that broke most code around release 2.6, but if you update those usages, old and new compilers generally work.
I'd say where D failed was its insistence on chasing every half-baked trend that someone comments on Hacker News. Seriously, look at this very thread: Walter is replying to thing after thing saying "D has this too!!", never mind whether it's actually valuable IRL or not.
I am not sure that supporting every paradigm is definitely a good thing, though I feel like I would love the freedom to do so. I am a tinkerer at heart first and a coder second, and my favourite language is probably Go, then Nim/Julia/Elixir/TypeScript/<it depends on the project>.
What D or C++26 can do, is a subset of Eiffel capabilities, or more modern approaches like theorem proving in tools like Ada/SPARK, Dafny, FStar,...
Anyway, I used the DM C++ compiler originally because it was the only one I could download to the high school computers without filling out a form, and pimply-face youth me saw "DESIGN BY CONTRACT" at the top of the website and got kinda excited thinking it was a way to make some easy money coding online.
Imagine my disappointment when I saw it was just in/out/invariant/assert features. (I'm pretty sure D had just come out when I saw that, but I saw `import` instead of `#include` and dismissed it as a weenie language. Came back a couple years later and cursed my younger self for being a fool! lol)
`import` is so cool we extended it to be able to import .c files! The D compiler internally translates them to D so they can be used. When this was initially proposed, the reaction was "what's that good for?" It turned out to be incredibly useful and a huge time saver.
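Roughly, the usage looks like this (a sketch; assume a file hello.c next to the D source containing `int add(int a, int b) { return a + b; }`, with both files given to the compiler, e.g. dmd app.d hello.c):
// app.d
import hello;      // the compiler translates hello.c (ImportC) and imports it
import std.stdio;

void main()
{
    writeln(add(2, 3)); // calls the C function directly
}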
The concept is sort of like C++ being a superset of C and so being able to incorporate C code, except unlike C++, the C syntax can be left behind. After all, don't we get tired of:
struct Tag { ... } Tag;
?
What's the thing with the syntax? If you don't intend to use the type elsewhere, don't give it a tag; if you do, you have to give it a name. (Assuming you are annoyed by the duplicate Tag.)
struct Tag { ... };
or:
typedef struct Tag { ... } Tag;
?
It's just simpler and easier to write code in D than in C/C++. For another example, in C/C++:
int foo();
int bar() { return foo(); }
int foo() { return 3; }
The D equivalent:
int bar() { return foo(); }
int foo() { return 3; }
struct Tag { ... };
if it needs to rely on the internals, but the user shouldn't care:
typedef struct { ... } Tag;
if it can be opaque (what I would default to):
typedef struct {} Tag;
I also think that is a good feature to separate specification from implementation; I like being forced to declare first what I want to implement. Funnily, in your special case you wouldn't need the declaration. (But of course it's a bad idea to rely on this "feature".)
Modern C++ is slowly adopting D features, many of which came from extensions I added to my C++ compiler.
Checking input parameters is easy: just write asserts at the start of the function.
Checking the result requires a "destructor" block and some kind of accessible result variable, so you can write asserts in this destructor block, which you can place at the start of the function as well.
Checking class invariants requires a way to specify that some function should be called at the end of every public function. I think it's called aspect-oriented programming in Java, and it's actually useful for more things than just invariant checking: declarative transaction management, logging.
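For comparison, a rough sketch of D's built-in contract blocks (the in/out/invariant features mentioned upthread); the invariant is checked around every public member function call, and contracts are compiled out with -release:
class Account
{
    int balance;

    invariant { assert(balance >= 0); } // class invariant

    void withdraw(int amount)
    in  { assert(amount > 0 && amount <= balance); } // precondition
    out { assert(balance >= 0); }                    // postcondition
    do
    {
        balance -= amount;
    }
}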
There are probably two schools of programming language designs. Some put a lot of features into language and other trying to put a minimal number of features into language which are enough to express other features.
Same reason function calls are better than arbitrary jumps.
But what I love the most is: https://news.ycombinator.com/item?id=43936007
Instead of:
const MIN_U32 = 0;
const MAX_U32 = 2 ** 32 - 1;
function u32(v) {
if (v < MIN_U32 || v > MAX_U32) {
throw Error(`Value out of range for u32: ${v}`);
}
return leb128(v);
}
You can do this, in Ada:
subtype U32 is Interfaces.Unsigned_64 range 0 .. 2 ** 32 - 1;
or alternatively:
type U32 is mod 2 ** 32;
and then you can use attributes such as:
First : constant U32 := U32'First; -- = 0
Last : constant U32 := U32'Last; -- = 2 ** 32 - 1
Range_ : constant U32 := U32'Range; -- Range 0 .. 2**32 - 1
Does D have anything like this? Or do any other languages?
Is it not the same as the one in Eiffel?
these are just runtime assertions
EDIT: how am I getting downvoted for copy-pasting literally what the article verifies?
See also the scope(exit) feature.
https://en.cppreference.com/w/cpp/experimental/scope_exit.ht...
He demonstrated it with C++ templates, but the D one is far more straightforward.
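For reference, a small sketch of how it reads in D (scope(success) and scope(failure) variants also exist):
import std.stdio;

void process()
{
    auto f = File("data.txt", "w");
    scope(exit) f.close(); // runs when this scope ends, normally or via exception

    f.writeln("hello");
    // ... code that might throw ...
}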
Like: software programs can't be that difficult to create properly because they are just 1s and 0s.
int foo(int a) {
assert(a > 5);
int b = a * 10;
assert(b > 50);
return b;
}
do you think those asserts don't "run automatically"?
For example, function arguments can be "in", "out", "inout", "ref", "scope", "return ref" - and combinations.
Another example is conditional compilation. Great when used sparingly, but it can otherwise make it very difficult to understand how the code flows.
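For readers who haven't seen it, D's conditional compilation looks roughly like this (a sketch):
import std.stdio;

void main()
{
    version (Windows)
        writeln("compiled for Windows");
    else version (Posix)
        writeln("compiled for a POSIX system");

    debug writeln("only present in builds compiled with -debug");

    static if (size_t.sizeof == 8)
        writeln("64-bit target");
}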
In the end, reading the source code of the standard library convinced me against it.
(The source code for the C++ standard library is much worse, of course).
None of them are required. What they do is provide enforcement of semantics that otherwise would have to be put in the documentation. And, as we all know, function documentation is always either missing, outdated or just plain wrong.
For a (trivial) example:
void foo(int* p) { *p = 3; }
vs:
void foo(out int i) { i = 3; }
In the latter, you know that `i` is being initialized by the function. In the former, you'll have to rely on the non-existent documentation, or will have to read/understand foo()'s internals.
`out` definitely is a win. Let's look at `scope`:
void foo(int* p) { static int* pg; pg = p; }
void bar() {
    int x;
    foo(&x); // oops
}
vs:
void foo(scope int* p) { static int* pg; pg = p; } // compiler flags error
This is most definitely a memory safety feature.
Reminds me of "In case you forgot, Swift has 217 keywords now" https://x.com/jacobtechtavern/status/1841251621004538183
In general, the design of the standard library is much less alien and baroque than the STL, and is more batteries-included, so you spend much less time puzzling over incantations and more time writing code. The code you have at the end is also much more concise and readable.
Likewise, because D is in a lot of ways "C++ with fewer problems and papercuts", I spend way less time figuring out totally inscrutable C++ compilation errors.
Consequently, I can spend more of time writing code and thinking about how to use all D's nice features to better effect. Plus, given how fungible and malleable the language is, it doesn't take a lot of effort to rework things if I want to change them in the future.
Personally, I think this is the main reason D hasn't caught on. Its selling point is that it's pragmatic and doesn't shove a lot of dogma or ideology down your throat. This isn't sexy and there's nothing to latch onto. There are many styles you can write D code in... MANY more than C++: Python-style, C#-style, C++-style, C-style... hell, bash style, MATLAB-style, R style, whatever you want. But for some of these styles, you have to build the tools! The fact that all of this is possible is the result of combining one very practical and ergonomic programming language, with a thousand different QOL improvements and handy tools... plus top tier metaprogramming.
IMO, the major thing holding D back right now is also along the same lines. It offers pragmatism and practicality, but the tooling is still weak. Languages like C++, Rust, and Python totally outclass D when it comes to tooling... but you have to sacrifice flexibility and ergonomics for baroque madness (C++) or BDSM (Rust) or slow and impossible to maintain code (Python). The choice is yours, I guess!
Application code often only needs a subset of the language and can be much more readable.
The standard library needs to use every trick available to make itself expressive, reusable, backwards-compatible, and easy-to-use for callers. But for your application code you certainly want to make different tradeoffs.
But you prefer C++?
(The D standard library is in the process of being re-engineered for clarity, but still, it is far more comprehensible than C++'s.)
One good memory I had is a couple of years ago when I built a little forum using D. Man the site was blazing fast, like the interaction was instant. Good times.
I want a meta list of all these interesting features across languages.
EDIT: I found one! “Micro features I’d like to see in more languages” https://buttondown.com/hillelwayne/archive/microfeatures-id-...
The current LSP is _that_ bad: it doesn't even recognize notable D features such as templates and named arguments.
This should be their #1 priority; as the language is starting to get steam again, they should not miss that opportunity.
I know Walter does not use that kind of tooling, but it's becoming a requirement nowadays for young developers.
Please, invest into tooling!
There is no excuse for D: if they can make a great compiler, they surely can make great tooling too.
I suggest you give the D LSP a try. I have mentioned templates, but it's very frail for everything else: using `auto`, having a chain of identifiers, or even using references/pointers is enough to confuse it most of the time.
I think it stems from the fact that both Zig and Odin provide a parser/lexer as part of their std, making it easier to just focus on building the tools; the community-built D parser is not good and lags behind.
https://dlang.org/articles/exception-safe.html
Concretely, it looks to me to be the only language that covers the major ways to do it.
(Concretely, the `scope` way is the one I found inspiring. I think exceptions can go and be replaced by it in languages where exceptions are removed.)
In Rust land, it really needs integration of something like flux into the language or as a gradually-compatible layer.
Can't have safe software without invariant checking, and not just stopping at bounds checking.