> *zgc_alloc*
> Allocate count bytes of zero-initialized memory. May allocate slightly more than count, based on the runtime's minalign (which is currently 16).
> This is a GC allocation, so freeing it is optional. Also, if you free it and then use it, your program is guaranteed to panic.
> libc's malloc just forwards to this. There is no difference between calling malloc and zgc_alloc.
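To make those semantics concrete, here is a minimal sketch (mine, not from the docs), assuming Fil-C's stdfil.h header and a zgc_free counterpart matching what the quote describes:

  #include <stdfil.h>   /* Fil-C runtime API header (assumed name) */
  #include <string.h>

  int main(void) {
      /* 100 bytes, zero-initialized; the runtime may round the
         allocation up to minalign (currently 16). */
      char *buf = zgc_alloc(100);
      strcpy(buf, "hello");   /* in-bounds use: fine */

      zgc_free(buf);          /* freeing is optional under GC */
      /* buf[0] = 'x';        // any use after the free is guaranteed
                              // to panic, not corrupt memory */
      return 0;
  }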
Previously there was that discussion about Rust in APT. A lot of this middle-aged Linux infrastructure is considered feature-complete and "done". Not many young people are coming in, so you either attract them with "heyy, rewrite it in Rust", or maybe the best thing is to bottle it up and run it in a VM.
AFAIK, djb hasn't been just "some 3letter guy" for about thirty years now, but perhaps that's just an age-related issue with those who have been around less.
https://web.archive.org/web/20250513185456/https://mailarchi...
The IETF has now accepted the required new moderation guidelines, which will essentially be a CoC troika that can ban selectively:
https://mailarchive.ietf.org/arch/msg/mod-discuss/s4y2j3Dv6D...
It is very sad that all open source and Internet institutions are being subverted by bureaucrats.
(Obviously he was never one to back down from a just fight, but it's important to find the right hill to die on. And allies! And his not following RFC 2026 [from 1996, hardly the peak of Internet bureaucracy] is not a CoC thing anyway.)
The IETF is a global standards-setting organization, intentionally created without a membership structure so that anyone with the technical competency can participate in an individual capacity. This lack of membership ensures its position as the primary neutral standards body because participants cannot exert influence as they could in a pay-to-play organization where members, companies, or governments pay fees to set the direction. IETF standards are reached by rough consensus, allowing the ideas with the strongest technical merit to rise to the surface.
Further, these standards that advance technology, increase security, and further connect individuals on a global scale are freely available, ensuring small-to-midsize companies and entrepreneurs anywhere in the world are on equal footing with the large technology companies.
With a community from around the world, and an increased focus on diversity in all its forms, IETF seeks to ensure that the global Internet has input from the global community, and represents the realities of all who use it.
There is only one IETF, and telling dissenters to leave is like telling a dissenting citizen to go to another country. I don't think that people (apart from real spammers) were banned in 1996. The CoC discussion and power grab has reached the IETF around 2020 and it continues.
"Posting too many messages" has been deemed a CoC violation by for example the PSF and its henchmen, and functionally the IETF is using the same selective enforcement no matter what the official rationale is. They won't go after the "director" Wouters, even though his message was threatening and rude.
Because the game is rigged apparently?
If not, then let the WG work. If no one except djb feels this strongly about hybrid vs. pure post-quantum stuff, then it's okay.
(And I haven't read the threads, but this is a clear security trade-off, involving complexity, processing power, bandwidth, RAM, and so on, right? And the best and brightest cryptographers have checked the PQ algorithms, and the closer they get to being standardized in a pure form, the more scrutiny they'll receive.
And someone being an NSA lackey is not a technical merit argument. Especially if it's true, because in this case the obvious thing is to start coalition building to have some more independent org up and running, because arguing with a bad faith actor is not going to end well.)
That is one of the contentious issues. See the last paragraphs of:
https://blog.cr.yp.to/20251004-weakened.html
Starting with "Remember that the actual tallies were 20 supporters, 2 conditional supporters, and 7 opponents".
My point here is that we're not famous, we're just old enough to have a TLA from the time before HR demanded everyone get given.surname.
Every Unix system used to ship with a dmr account. It doesn't mean we all knew Dennis Ritchie, it means the account was in the release tape.
There are 17,000-odd of us. Ekr, Kre, and Djb are famous, but the other 17,573 of us exist.
Reading it, it felt a bit like s/some/random/g would apply, whether the writer intended that or not. That's what made me go on and write my comment. There are many 3-letter user accounts, some of which are more famous than others; to my generation, not because they were early users, but because of the great things they have done. I'm an early user too, and I've done things that are still quite widely used in many distributions, but I wouldn't compare my achievements to those who became famous and widely known by their account, short or long.
Anyhow, I thought "djb" would ring a bell for anyone who has been around for a while. Not just for those who were around in the early 90s or so, when he was held a renegade for the opinions he expressed, his programming style (qmail, djbdns, etc.), being dragged to court over ITAR issues, etc.
But also because of his later work in cryptography and running the cr.yp.to site for quite a long time.
I was just wondering; I did not intend to start an argument.
There were many Unix instances, and likely multiple djb logins around the world, but there's only one considered to be the djb, and it's due to fame.
https://medium.com/@ewindisch/curl-bash-a-victimless-crime-d...
If that happens, it's game over. As the article I linked noted, the attackers can change the installation instructions to anything they want - even for packages that are available in Linux distros.
Be careful who you trust when installing software is a fine thing to teach. But that doesn't mean the only people you can trust are Linux distro packagers.
But yes, that it runs arbitrary scripts is also a known issue, but this is not the main point, as most code you download will be run at some point (and ideally this needs application sandboxing to fix).
Not what I meant. Getting software into 5 different distros and waiting years for it to be available to users is not really viable for most software authors.
(And when I say fine, I haven't actually used it successfully yet.)
I think distros don't want this, though. They all want everyone to use their format and spend time uploading software into their repo. Which just means that people don't.
Has anyone really tried building PG or MySQL, or a similarly complex system that heavily relies on IO operations and multithreading capabilities?
It does (or did, at some point) pass the thorough SQLite test suite, so at least it's probably correct! The famous SQLite test coverage and general proven quality might make SQLite itself less interesting to harden, but in order to run less comprehensively verified software that links with SQLite, we have to build SQLite with Fil-C too.
It's low hanging fruit, and a great way to further differentiate their Linux distribution.
Rust would be about what language to use for new code.
Now that I have been programming in Rust for a couple of years, I don't want to go back to C (except for some hobby projects).
For new code, I would not use Fil-C. For kernel and low-level tools, other languages seem better. Right now, Rust is the only popular language in this space that doesn't have these disadvantages. But in my view, Rust also has issues, especially the borrow checker and code verbosity. Maybe in the future there will be a language that resolves these issues as well (as a hobby, I'm trying to build such a language). But right now, Rust seems to be the best choice for the kernel (for code that needs to be fast and secure).
And size. About a 10x increase, both on disk and in memory:
  $  stat -c '%s %n' {/opt/fil,}/bin/bash
  15299472 /opt/fil/bin/bash
   1446024 /bin/bash
  $ ps -eo rss,cmd | grep /bash
  34772 /opt/fil/bin/bash
   4256 /bin/bash
Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".
Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".
There is no C or C++ memory safe compiler with acceptable performance for kernels, rendering, games, etc. For that you need Rust.
The future includes Fil-C for legacy code that isn’t performance sensitive and Rust for new code that is.
Also, Fil-C's overheads are the lowest for programs that are pushing primitive bits around.
Fil-C's overheads are the highest for programs that chase pointers.
I'm guessing the CPU-bound bits of apt (if there are any) are more of the former.
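To make that distinction concrete, a toy contrast (my sketch, not from the thread): Fil-C checks each pointer access against its capability, so a flat-array loop amortizes the checks across mostly-primitive arithmetic, while a linked-list walk pays one on every hop.

  #include <stddef.h>

  /* Primitive-pushing: one cheap in-bounds check per element; the
     rest of the loop is plain integer arithmetic. */
  long sum_array(const long *a, size_t n) {
      long s = 0;
      for (size_t i = 0; i < n; i++)
          s += a[i];
      return s;
  }

  struct node { long v; struct node *next; };

  /* Pointer-chasing: every p->next load goes through a capability
     check, so the relative overhead is much higher here. */
  long sum_list(const struct node *p) {
      long s = 0;
      for (; p; p = p->next)
          s += p->v;
      return s;
  }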
Fil-C is useful for the long tail of C/C++ that no one will bother to rewrite and is still usable if slow.
https://salsa.debian.org/apt-team/apt/-/blob/main/apt-pkg/co...
Still, it's all LLVM, so perhaps unsafe Rust for Fil-space can be a thing, a useful one for catching (what would be) UBs even [Fil-C defines everything, so no UBs, but I'm assuming you want to eventually run it outside of Fil-space].
Now I actually wonder if Fil-C has an escape hatch somewhere for syscalls that it does not understand, etc. Well, it doesn't do inline assembly, so I shouldn't expect much... I wonder how far one would need to extend the asm clobber syntax for it to come remotely close to working.
> libyoloc.so. This is a mostly unmodified [musl/glibc] libc, compiled with Yolo-C. The only changes are to expose some libc internal functionality that is useful for implementing libpizlo.so. Note that libpizlo.so only relies on this library for system calls and a few low level functions. In the future, it's possible that the Fil-C runtime would not have a libc in Yolo Land, but instead libpizlo.so would make syscalls directly.
But mostly you are using a Fil-C-compiled libc:
> libc.so. This is a modified musl libc compiled with Fil-C. Most of the modifications are about replacing inline assembly for system calls with calls to libpizlo.so's syscall API.
That links here: https://github.com/pizlonator/fil-c/blob/deluge/filc/include...
Quotes from: https://fil-c.org/runtime
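As a rough before/after illustration of that musl change (my sketch; "zsys_write" is a made-up name standing in for libpizlo.so's actual syscall API, which lives in the header linked above):

  /* Stock musl (x86-64): raw syscall via inline assembly, which
     Fil-C cannot compile. */
  static long raw_write(int fd, const void *buf, unsigned long n) {
      long ret;
      __asm__ volatile ("syscall"
                        : "=a"(ret)
                        : "a"(1 /* SYS_write */), "D"(fd), "S"(buf), "d"(n)
                        : "rcx", "r11", "memory");
      return ret;
  }

  /* Fil-C musl: the assembly is replaced with a call into the
     runtime, which can validate buf's capability before crossing
     into Yolo Land. Hypothetical entry point: */
  long zsys_write(int fd, const void *buf, unsigned long n);

  static long filc_write(int fd, const void *buf, unsigned long n) {
      return zsys_write(fd, buf, n);
  }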
1) Rewrite X in Rust
2) Recompile X using Fil-C
3) Recompile X for WASM
4) Safety is for babies
There are a lot of half-baked Rust rewrites whose existence was justified on safety grounds, and whose rationale is threatened now that HN has heard of Fil-C.
It seems sensible not to write new software in plain C. Rust is certainly a valid choice for a safer language, but in many cases it's overkill, given how painful the rewrite is versus the benefits gained over a higher-level memory-safe language like OCaml.
At the same time, "let's just rewrite everything!" is also madness. We have many battle-tested libraries written in C already. Something like Fil-C is badly needed to keep them working while improving safety.
And as for wasm, it's sort of orthogonal - whether you're writing in C or in Rust, the software may be bug-free, but sandboxing it may still be desirable e.g. as a matter of trust (or lack thereof). Also, cross-platform binaries would be nice to have in general.
Wouldn't the only cause of mistrust be bugs, or am I missing something? If the program is malicious, sandboxing isn't the pertinent action.