Posted by dmalcolm 2 days ago
Honestly, uBlock Origin shouldn't be an extension to begin with; it should literally be a built-in feature of all browsers. The only reason it's not is that we can't trust ad companies to maintain an ad blocker.
For me the back button wasn't hijacked.
But I am for disallowing the use of `history.go()` or any kind of navigation inside of `onpopstate`, `onhashchange`, `onbeforeunload` or similar or from a timer started from one of those.
Setting it to what?
They aren't going overboard on it, they just put a warning emoji in front of the error message.
If you use an OS other than Windows, I'm sure there are similar flows available if you search for them. And since it's just Unicode, I'm sure there are numpad-based keybinds available too.
how does it work over ssh? :)
grep is a bit more iffy. UNIX command-line tools seem to be a bit of a crapshoot in how, or if, they support Unicode, especially if you switch between different systems like Linux, BSD, Cygwin etc. You might need a bit of experimenting with the LANG variable to get it to work (e.g. Git Bash on Windows needs LANG=C.UTF16 to match an emoji). I've also had cases where grep or sed works but awk doesn't, or vice versa. On the whole it works a lot better nowadays than it used to, though, and that's a win for non-English users of the command line as well as emoji fans.
I am non-English. I use ăâîșț when writing a text document for humans.
I also use :) but not .
I'm still of the opinion that anything not 7-bit ASCII doesn't belong in something that may be machine-processed. Which includes compiler output.
Edit: hey, HN erased my emoji. The dot was supposed to be the :) emoji style.
Yes but it was for humans originally? Not for machine processing.
> Not to mention the folks who deliberately set LANG so that their compiler and everything else will give them localized error messages.
The horror! Who even works on translations for compiler error messages?! It makes absolutely no sense!
Next they'll want to localize programming language keywords. I wonder how well that would work on this current project of mine, which has people native to 3 countries, none English-speaking ...
Older than that?
I wanted something I could grep for easily. Which doesn't seem to be the emoji, since it needs extra software to get on the grep command line...
> How do you grep for it
And then how badly or well this works will depend on your build of grep and your environment variables, as the other user noted. I did not consider this, because I'd expect grep to just work with Unicode symbols like this when my stdin is set to UTF-8, which I'd further expect to always be the case in 2025, but it appears that's not an expectation one can reasonably have in the *nix world.
It was and continues to be unclear to me why you'd want to grep for the warning emoji though, since according to the article these are inserted somewhere deep in the console-visual explanations. They do not replace the slug denoting the compiler message type at the start of these, which as you said, can (still) be found by just grepping for "warning".
Oh but in the real world you vpn into a server that privately tailscales to some boxes that are hard to reach inside a factory and no one has physically touched them since 2018 at best ...
What's this 2025 you speak of? Not in production.
I was investigating C++-style templates for a hobby language of mine, and SFINAE is an important property to make them work in realistic codebases, but it leads to exactly this problem. When a compile error occurs, there isn't a single cause, or even a linear chain of causes, but a potentially arbitrarily large tree of them.
For example, it sees a call to foo() and there is a template foo(). It tries to instantiate that but the body of foo() calls bar(). It tries to resolve that and finds a template bar() which it tries to instantiate, and so on.
The compiler is basically searching the entire tree of possible instantiations/overloads and backtracking when it hits dead ends.
Showing that tree as a tree makes a lot of sense.
Past that point SFINAE should be left for existing code, while new code should make use of concepts and compile time execution.
All template requirements should be verified at the function definition, not at every call site.
There are concepts. But they are so unwieldy.
Yes, it's definitely nice to be able to typecheck generic code before instantiation. But supporting that ends up adding a lot of complexity to the typesystem.
C++-style templates are sort of like "compile-time dynamic types" where the type system is much simpler because you can just write templates that try to do stuff and if the instantiation works, it works.
C++ templates are more powerful than generics in most other languages, while not having to deal with covariance/contravariance, bounded quantification, F-bounded quantification, traits, and all sorts of other complex machinery that Java, C#, etc. have.
I still generally prefer languages that do the type-checking before instantiation, but I think C++ picks a really interesting point in the design space.
(But Rust traits work like that)
While not perfect, concepts lite alongside compile-time evaluation does the job.
Plus I heard COBOL was merged in with the compiler collection, nice!
Why? I don't personally use GCC except to compile other people's projects in a way that's mostly invisible to me, but it seems like it's still widely used and constantly improving, thanks in part to competition with LLVM/Clang. Is the situation really so dire?
I for one don't think so. From my perspective, there's at least as much momentum in GCC as clang/LLVM, especially on the static analysis and diagnostics front over the past 5 or so years. That was originally one of the selling points for clang, and GCC really took it to heart. It's been 10 years since GCC adopted ASAN, and after playing catchup GCC never stopped upping their game.
Perhaps the image problem is that LLVM seems to be preferred more often for interesting research projects, drawing more eyeballs. But by and large these are ephemeral; the activity is somewhat illusory, at least when comparing the liveliness of the LLVM and GCC communities.
For example, both clang/LLVM and GCC have seen significant work on addressing array semantics in the language, as part of the effort to address buffer overflows and improve static analysis. But GCC is arguably farther along in terms of comprehensive integration, with a clearer path forward, including for ISO standardization.
More importantly, the "competition" between GCC and clang/LLVM is mutually beneficial. GCC losing prominence would not be good for LLVM long-term, just as GCC arguably languished in the period after the egcs merger.
You're right to note that "competition" here is more like inspiration than a deathmatch. But I vaguely remember two things that seem similar to motivation via competitive pressure to me: (1) when GCC 5 came out, it had way nicer error messages, and I immediately thought "Oh, they wanted to make GCC nice like Clang" and (2) IIRC the availability of a more modular compiler stack like LLVM/Clang essentially neutralized Stallman's old strategic argument against a more pluggable design, right?
- has about the same quality of error messages as GCC now
- is now almost exactly as slow (/fast) as GCC at compiling now
- sometimes produces faster code than GCC, sometimes slower, about the same overall
I see no reason why the default would change.
How much do you think they contribute back upstream regarding ISO compliance outside LLVM backend for their hardware and OS?
They contribute some things, sure. But they also don't contribute other things. It is hard to know how much, because it's kept secret from all of us, even their own customers.
> "It's not like the Rust compiler is proprietary software with their own closed-source fork of LLVM..."
Rust, no, but there are a lot of semi-incompatible proprietary forks out there.
One of the reasons that LLVM has been able to evolve so quickly is because of all the corporate contribution it gets.
GCC users want Clang/LLVM users to know how dumb they are for taking advantage of all the voluntary corporate investment in Clang/LLVM because, if you just used GCC instead, corporate contributions would be involuntary.
The GPL teaches us that we are not really free unless we have taken choice away from the developers and contributors who provide the code we use. This is the “fifth freedom”.
The “four freedoms” that the Free Software Foundation talks about are all provided by MIT and BSD. Those only represent “partial freedom”.
Only the GPL makes you “fully free” by providing the “fifth freedom”: the freedom to claim ownership over code other people will write in the future.
Sure, the “other people” are less free. But that is the price that needs to be paid for our freedom. Proper freedom (“full freedom”) is always rooted in the subjugation of others.
I think that's the core difference.
But hey, if you try hard to be nice to your master and do not demand anything, for sure they will always treat you well!
GNU promotes Unix, but also promotes Emacs on top. NT can run Win32 on top of it, but there's far more than Win32 with NT systems. Just get ReactOS and open the NT object explorer under explorer.exe.
Far more advanced than Windows 95/98.
It almost sounds like you think UNIX is an API like Win32, and that GNU is an operating system which "implements UNIX" like NT is an operating system which "implements Win32"? Are you confusing UNIX with POSIX?
GNU was made to replace UNIX, not to promote it.
Even OpenBSD is not as 'pure Unix' as Unix V7.
I have not tried OpenIndiana, however.
Clang has a very nice specific page for ISO C version compliance: https://clang.llvm.org/c_status.html#c2x
I could not find the same for GCC, but I found an old one for C99: https://gcc.gnu.org/c99status.html
CppRef has a joint page, but honestly, I am more likely to believe a page directly owned/controlled/authored by the project itself: https://en.cppreference.com/w/c/compiler_support/23
Finally, is there a specific feature of C23 that you need that Clang does not support?
GCC's support for C23 is essentially complete. Clang is mostly catching up; the features I need that are still missing are storage-class specifiers in compound literals and tag compatibility. It is also sad that Clang does not implement strict aliasing correctly (it applies C++'s rules in C as well).
The one vendor who forks LLVM and doesn't contribute their biggest patches back is Apple, and if you want bleeding edge or compliance you're not using Apple Clang at all.
If you say "isn't it great vendor toolchains have to contribute back to upstream?" I'm going to say "no, it sucks that vendor toolchains have to exist"
If a company makes a new MCU with some exciting new instruction set, they need to make a compiler available which supports that instruction set and make that compiler available to their customers.
With LLVM as the base, the vendor could make their toolchain proprietary, making it impossible to integrate it back into LLVM, which means the vendor toolchain will exist until the ISA gets wide-spread enough for volunteers to invest the time required to make a separate production-quality LLVM back-end from scratch.
With GCC as the base, the vendor must at least make their GCC fork available to their customers under the GPL. This, in theory, allows the GCC community to "just" integrate the back-end developed by the vendor into GCC rather than starting from scratch.
Now I don't know how effective this is, or how much it happens in practice that the GCC project integrates back-ends from vendor toolchains. But in principle, it's great that vendors have to make their toolchains FOSS because it reduces the need for vendor toolchains.
Previous reality: companies write fully proprietary code to avoid GCC
Current reality: companies choose Clang over GCC because of the license and then contribute many of their changes back.
Code getting open-sourced is not impossible. Companies do it all the time because it's expensive to rebase.
Because Apple certainly isn't alone.
I am still waiting for the counter examples regarding clang contributions.
The main thing I like about clang is it compiles byzantine c++ code much faster.
Some projects like llama.cpp it's like eating nails if you're not using clang.
So with projects like llamafile I usually end up using both compilers.
That's why my cosmocc toolchain comes with -mgcc and -mclang.
At least with GCC they can sometimes merge the patches.
GCC on xtensa adds extra instructions in several places.
GCC on Arm Cortex-M0 has awful register allocation, and M0 has half as many registers as most ARMs...
I do remember reading about LTO not working properly, you're either unable to link the kernel with LTO, or get a buggy binary which crashes at runtime. Doesn't look like much effort has been put into solving it, maybe it's just too large a task.
There is an alternative backend to rustc that relies on it.
One big problem with libgccjit, besides its fairly poor compile-time performance, is that it's GPL-licensed and thereby makes the entire application GPL, which makes it impossible to use not just in proprietary use cases but also in cases where incompatible licenses are involved.
Most of the decisions he made over the past 25 years have been self-defeating and led directly to the decline of the influence of his own movement. It's not that "the GCC project" avoided that for ideological reason, Stallman was personally a veto on that issue for years, and his personal objection led to several people quitting the project for LLVM, with a couple saying as much directly to him.
https://gcc.gnu.org/legacy-ml/gcc/2014-01/msg00247.html
https://lists.gnu.org/archive/html/emacs-devel/2015-01/msg00...
(both threads are interesting reading in their entirety, not just those specific emails)
Expecting Stallman to make life easier for commercial vendors is like expecting PETA to recommend a good foie gras farm. That's not what they do.
He threw open-source developers under the bus in the process. As a result approximately nobody writes GCC plugins, open source or otherwise.
> It is not our goal to “help Windows users” by making text editing on Windows more convenient. We aim to replace proprietary software, not to enhance it. So why support GNU Emacs on Windows?
> We hope that the experience of using GNU Emacs on Windows will give programmers a taste of freedom, and that this will later inspire them to move to a free operating system such as GNU/Linux. That is the main valid reason to support free applications on nonfree operating systems.
RMS has been exceedingly clear about his views for decades. At this point it's hard to be surprised that he’ll make a pro-Free Software decision every time, without fail. That doesn't mean you have to agree with his decisions, of course! But to be shocked or disappointed by them is a sign of not understanding his platform.
RMS: Here’s how they'll get ya!
Me: Nice, but that'd never happen.
Vendor: Here’s how we got ya!
Me: Dammit.
Seriously, he must have a working crystal ball.
Now, my agreement with him starts and ends on that subject. He says plenty of other things I wholly disagree with. But his warnings about proprietary software lock-in? Every. Single. Time.
Also, if you want Windows to die you need to work with OEMs: I assume that most users simply use whatever OS is pre-installed.
No, this is giving him too much credit. His stance on gcc wasn't just purity over pragmatism, it was antithetical to Free Software. The entire point of Free Software is to let users modify the software to make it more useful to them, there is no point to Free Software if that freedom doesn't exist - I might as well use proprietary software then, it makes no difference.
Stallman fought tooth and nail to make gcc harder for the end user to modify; he directly opposed letting users make their tools better for themselves. He claims it was for the greater good, but in practice he was undermining the whole reason for free software to exist. And for what? It was all for nothing anyway; proprietary software hasn't relied on the compiler as the lynchpin of its strategy for decades.
With the benefit of hindsight, I'm glad that that didn't happen, even though I have mixed feelings about LLVM being permissively licensed.
There's an impedance mismatch between people who think gcc should have maximized user utility vs. the actual GNU philosophy. The actions of the gcc project make a lot of sense if you consider the FSF/GNU are monomaniacal about maximizing users freedoms, and not popularity, momentum or other ego-stroking metric.
GCC today has a very interesting license term, the GCC Runtime Library Exception, that makes the use of runtime libraries like libgcc free if-and-only-if you use an entirely Free Software toolchain to compile your code with; otherwise, you're subject to the terms of the GPL on libgcc and similar. That is a sensible pragmatic term, and if they'd come up with that term many years ago, they could have shipped libgccjit and other ways to plug into GCC years ago, and the programming language renaissance that arose due to LLVM might have been built atop GCC instead.
That would have been a net win for user freedoms. Instead, because they were so afraid of someone using an intermediate representation to work around GCC's license, and didn't do anything to solve that problem, LLVM is now the primary infrastructure people build new languages around, and GCC lost a huge amount of its relevance, and people now have less software freedom as a result.
You're right:
https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg00888.html
> If people are seriously in favor of LLVM being a long-term part of GCC, I personally believe that the LLVM community would agree to assign the copyright of LLVM itself to the FSF and we can work through these details.
It was a great start, but you need to adapt or you perish.
That gave vendors more freedom and flexibility (to lock their software away from their customers.)
As usual, customers got less freedom and flexibility.
> They got software that’s open under a different license that otherwise would have just been purely proprietary
This is not a given, even outside of compilers. It's a heck of a cope there.
I was responding to this, which is more widespread than GCC (although it was one of the first wins of the GNU).
There were various companies who wanted to add on backends and other bits to GCC, but wouldn’t due to the license. That’s one of the reasons LLVM is so popular.
But when this realization comes, it will be too late.
Especially when proprietary dependencies kill thousands of projects at once.
I, for one, am happy that there are still a couple of people here and there that you can really trust on this stuff.
None of the permissive licenses have this problem.
GPLv2 and Apache.
The "or later" has been used in creative ways, like relicensing all the Wikipedia content, or the Affero to AGPL transition. Nothing shady, but unexpected.
Do you trust RMS to avoid doing shady things in the later GPL licence? I do, but he is no longer in the FSF.
Do you trust the current members of the FSF to avoid doing shady things in the later GPL licence? I don't know them.
Do you trust the future members of the FSF to avoid doing shady things in the later GPL licence???
Yes he is: https://www.fsf.org/about/staff-and-board
If you're worried about the other direction (i.e. a hypothetical GPLv4 that had some bizarre restriction like "all users must donate to the FSF"), the "or any later version" means that as long as you don't decide to update the license you use yourself, people can continue to use it under the GPLv2 or v3 indefinitely.
Was a little surprised to learn that the warning sign is generally considered an Emoji; I guess I don't think of it that way. Was even more surprised to learn that there is no great definition for what constitutes an Emoji. The term doesn't seem to have much meaning in Unicode. The warning sign - U+26A0 - goes all the way back to Unicode version 4.0 and is in the BMP, whereas most Emoji are in the SMP.
The definition is messy, but the list of Unicode emojis is defined. Starting points: https://www.unicode.org/reports/tr51/, https://unicode.org/emoji/charts/full-emoji-list.html
Somewhat related fun fact, anyone can submit an emoji to the Unicode Consortium annually, submissions are actually open right now: https://unicode.org/emoji/proposals.html.
I'm not aware of any emoji fonts like Symbola which provide a monospace typeface, though. That would be a great option.
You can do it on macOS as well, but you have to disable SIP and modify/replace the files for the Apple Color Emoji font, because some widely used GUI libs are hardcoded to use it.
I don't recall the situation on Windows, except that emoji glyphs are inherited from your other font choices, if your chosen font includes emoji. But on Linux it's generally easy to configure certain font substitutions only for some groups of characters, like emoji.
At least in the blog there are two spaces after the emoji, so it can freely draw past its boundaries rightwards without colliding with anything for a good bit; and nothing to its right relies on monospace alignment. So at worst you just get a half-emoji.
gitk broke! Can't parse emojis
I want emojis in my code. They are a superior form of communication (non-linear)
Then you will enjoy swift [1]!
[1] Emoji driven development: https://www.swiftbysundell.com/special/emoji-driven-developm...
I literally do not need ASCII art to point at my error; just tell me line:col and a unique-looking error message so I can spend no more than 1 second understanding what went wrong.
Also, allow me to extend requires with my own error messages. I know it'll be non-standard, but it would be very nice tyvm.
We are now entering a Rococo period of unnecessarily ornate compiler diagnostics.
Getting these elaborate things to work is a nice puzzle, like Leetcode or Advent of Code --- but does it have to be merged?
Can't use Clang where I'm at, but I do get to use fairly cutting-edge GCC, at least for Windows development. So I may get to see these improvements once they drop into MSYS.
You C standard authors have it bass-ackwards. The version of the code must accompany the code, not the compiler invocation.