
Posted by transpute 2 days ago

Notes by djb on using Fil-C (cr.yp.to)
352 points | 239 comments | page 2
gkfasdfasdf 1 day ago|
Does Fil-C catch uninitialized memory reads?
jitl 1 day ago|
malloc'd memory is zeroed in Fil-C:

> *zgc_alloc*

> Allocate count bytes of zero-initialized memory. May allocate slightly more than count, based on the runtime's minalign (which is currently 16).

> This is a GC allocation, so freeing it is optional. Also, if you free it and then use it, your program is guaranteed to panic.

> libc's malloc just forwards to this. There is no difference between calling malloc and zgc_alloc.

from https://fil-c.org/stdfil

Slothrop99 2 days ago||
Great to see some 3letter guy into this. This might be one of those rando things which gets posted on HN (and which doesn't involve me in the slightest), but a decade later is taking over the world. Rust and Go were like that.

Previously there was that Rust in APT discussion. A lot of this middle-aged linux infrastructure stuff is considered feature-complete and "done". Not many young people are coming in, so you either attract them with "heyy rewrite in rust" or maybe the best thing is to bottle it up and run in a VM.

mesrik 2 days ago||
>Great to see some 3letter guy into this

AFAIK, djb hasn't been just "some 3letter guy" for about thirty years now; perhaps it's an age-related thing for those who have been around less.

https://en.wikipedia.org/wiki/Daniel_J._Bernstein

Slothrop99 2 days ago|||
Just to be clear, I mean to venerate Bernstein for earning his 3letters, not to trivialize him.
jabwd 1 day ago||||
Despite the cool shit the guy has done, keep in mind that "venerate" is not the word to use here. djb is very much not a shorthand used in any positive messaging pretty much ever by any cryptographer. He did it to himself, sadly.
pas 1 day ago||
Sorry, can you explain what he did to himself?
bgwalter 1 day ago||
I would like to know as well. All that is public is that a couple of IETF apparatchiks want to ban him for criticizing corporate and NSA influence:

https://web.archive.org/web/20250513185456/https://mailarchi...

The IETF has now accepted the required new moderation guidelines, which will essentially be a CoC troika that can ban selectively:

https://mailarchive.ietf.org/arch/msg/mod-discuss/s4y2j3Dv6D...

It is very sad that all open source and Internet institutions are being subverted by bureaucrats.

pas 1 day ago||
... if he thinks some WG is making a mistake and he's not welcome there (everyone else seems to be okay with what's happening, based on the quoted email in the first link), then - CoC or not - he should leave, and publicly distance himself from the outcome.

(Obviously he was never the one to back down from a just fight, but it's important to find the right hill to die on. And allies! And him not following RFC 2026 [from 1996, hardly the peak of Internet bureaucracy] is not a CoC thing anyway.)

bgwalter 1 day ago||
Why should he leave? The IETF pretends on its sponsor page (https://www.ietf.org/support-us/endowment/):

The IETF is a global standards-setting organization, intentionally created without a membership structure so that anyone with the technical competency can participate in an individual capacity. This lack of membership ensures its position as the primary neutral standards body because participants cannot exert influence as they could in a pay-to-play organization where members, companies, or governments pay fees to set the direction. IETF standards are reached by rough consensus, allowing the ideas with the strongest technical merit to rise to the surface.

Further, these standards that advance technology, increase security, and further connect individuals on a global scale are freely available, ensuring small-to-midsize companies and entrepreneurs anywhere in the world are on equal footing with the large technology companies.

With a community from around the world, and an increased focus on diversity in all its forms, IETF seeks to ensure that the global Internet has input from the global community, and represents the realities of all who use it.

There is only one IETF, and telling dissenters to leave is like telling a dissenting citizen to go to another country. I don't think that people (apart from real spammers) were banned in 1996. The CoC discussion and power grab has reached the IETF around 2020 and it continues.

"Posting too many messages" has been deemed a CoC violation by for example the PSF and its henchmen, and functionally the IETF is using the same selective enforcement no matter what the official rationale is. They won't go after the "director" Wouters, even though his message was threatening and rude.

pas 1 day ago||
> Why should he leave?

Because the game is rigged apparently?

If not then let the WG work. If no one except djb feels this strongly about hybrid vs. pure post-quantum stuff then it's okay.

(And I haven't read the threads but this is a clear security trade-off. Involving complexity, processing power and bandwidth and RAM and so on, right? And the best and brightest cryptographers checked the PQ algorithms, and the closer we get to them getting anywhere near standardized in a pure form the more scrutiny they'll receive.

And someone being an NSA lackey is not a technical merit argument. Especially if it's true, because in this case the obvious thing is to start coalition building to have some more independent org up and running, because arguing with a bad faith actor is not going to end well.)

bgwalter 1 day ago||
> If no one except djb feels this strongly about hybrid vs. pure post-quantum stuff then it's okay.

That is one of the contentious issues. See the last paragraphs of:

https://blog.cr.yp.to/20251004-weakened.html

Starting with "Remember that the actual tallies were 20 supporters, 2 conditional supporters, and 7 opponents".

ggm-at-algebras 2 days ago|||
Not to trivialise but being a 3 letter guy means being old. So, it's at best a celebration of achieving longevity and at worst a celebration of creaky joints and a short temper.
vkazanov 2 days ago|||
Most of us will have a problematic joint or two by a certain age. Almost none of us will be recognised by any name by that time.
ggm-at-algebras 1 day ago||
Mate, we're not talking about the future, but about 3 letter guys now. I'm one, I've carried it with me for 40+ years as have the ten or twenty peers of mine I know by their tla. I got it at pobox.com when the door opened, the guy at the desk next door got a one letter name. I set up campus email for the entire uni in 1989 and gave myself the tla with my superuser rights before that. I'd done the same at ucl-cs in 85, and before that in Leeds and York.

My point here is we're not famous we're just old enough to have a tla from the time before HR demanded everyone get given.surname.

Every Unix system used to ship with a dmr account. It doesn't mean we all knew Dennis Ritchie, it means the account was in the release tape.

There are 17,000 odd of us. Ekr, Kre and Djb are famous but the other 17,573 of us exist.

Valodim 1 day ago||
I'm not sure what your point is here. OP was clearly using "three letter guy" in the sense of "so famous people know them by their initials". This is hardly unheard of, e.g. https://wiki.c2.com/?ThreeLetterPerson
mesrik 1 day ago|||
It was the "some" in "Great to see _some_ 3letter guy into this" that prompted it.

It felt a bit like s/some/random/ would apply when reading it, intentional or not. That's what made me write my comment. There are many 3letter user accounts, some more famous than others. To my generation, not because they were early users, but because of the great things they have done. I'm an early user too, and have done things that are still quite widely used in many distributions, but I wouldn't compare my achievements to those who became famous and widely known by their account name, short or long.

Anyhow, I thought "djb" rings a bell for anyone who has been around for a while. Not just those who were around in the early 90s or so, when he held renegade opinions on programming style (qmail, djbdns, etc.) and was dragged to court over ITAR issues.

But also because of his later work in cryptography and running the cr.yp.to site for quite a long time.

https://cr.yp.to/

I was just wondering; I didn't intend to start an argument.

debugnik 1 day ago||||
Is this because they're that famous though or simply because there weren't as many people in the scene back then? We just don't do the initials thing anymore.
overfeed 1 day ago||
Yes: the fame is the subtext. It's akin to mononyms; they'd be referring to famous people like Shakira, Madonna, or Beyoncé. A lot of us have first names, but the point isn't that one's family calls them "Dave" without ambiguity.

There were many unix instances, and likely multiple djb logins around the world, but there's only one considered to be the djb, and that's due to fame.

pixelpoet 1 day ago|||
It's wild how much he looks like ryg, another 3 letter genius
fjfaase 1 day ago||
I am a bit surprised that the build_all_fast_glibc.sh script requires 31Gbyte of memory to run. Can somebody explain? I would like to try out Fil-C.
ComputerGuru 1 day ago|
Building and linking llvm sucks.
scandox 2 days ago||
Interesting to see some curl | bash being used by a renowned cryptologist...
IshKebab 2 days ago|
Almost like it's actually fine.

https://medium.com/@ewindisch/curl-bash-a-victimless-crime-d...

uecker 1 day ago|||
It is definitely not fine. The argument seems to be that since you need to trust somebody, curl | bash is fine because you just trust whoever controls the webserver. I think this is missing the point.
oddmiral 1 day ago|||
s/webserver/DNS/
arthur2e5 1 day ago||
HTTPS is there, so you go down to that level only if you want to distrust any element of the public key infrastructure. Which, to be fair, there are plenty of reasons if you are paranoid -- they do tell you who's doing what in a shady way as they revoke, so there's a huge list of transgressions.
zzo38computer 1 day ago||
It is not only that directly; the domain name might be reassigned to someone else, resulting in a valid certificate which is different from the one you wanted. If you have a hash of the file which you have verified independently, then it is more secure (assuming the hash algorithm is strong enough). HTTPS is not needed in that case, although it can still be used if you wish to avoid spies knowing which file you accessed. You can also use the server's public key if you know what it should be, although this has different issues, such as someone compromising the server (or the key) and modifying the script. There is also the question of whether the script is what you intended (or whether something unexpected happens due to the configuration on your computer); if that is your concern, you can read it (and perhaps verify the character encoding) before executing it, whether or not you trust the server operator and the author of the script.
IshKebab 1 day ago||
> the domain name might be reassigned to someone else

If that happens, it's game over. As the article I linked noted, the attackers can change the installation instructions to anything they want - even for packages that are available in Linux distros.

whyever 1 day ago||||
It's missing which point?
uecker 1 day ago||
That you should be very careful about what you install. Cut&pasting some line from a website is the exact opposite of that. This is mostly about psychology, not technology. But there are also other issues, e.g. many independent failure points at different levels, no transparency, no audit chain, etc. The counter-model we tried to teach people in the past is that people select a Linux distribution, independently verify the fingerprints of the installation media, and then only install packages from a curated list of packages. A lot of effort went into making this safe and closing the remaining issues.
IshKebab 1 day ago||
None of that has anything to do with curl|bash.

Be careful who you trust when installing software is a fine thing to teach. But that doesn't mean the only people you can trust are Linux distro packagers.

uecker 1 day ago||
I think it has a lot to do with "curl|bash". Cut&pasting a curl|bash command line disables all the inherent mechanisms and stumbling blocks that would properly establish trust. It was basically invented to make it easy to install software by circumventing all the protection a Linux distribution would traditionally provide. It also eliminates any possibility of independently verifying what was installed or done on the machine.
IshKebab 1 day ago||
Downloading and installing a `.deb` or `.rpm` is going to be no more secure. They can run arbitrary scripts too.
uecker 1 day ago||
Downloading a deb via a package manager is more secure. Downloading a deb and comparing the hash (or at least noting down the hash) would also already be more secure.

But yes, that they run arbitrary scripts is also a known issue. It is not the main point, though, as most code you download will be run at some point (and fixing that properly needs sandboxing of applications).

IshKebab 1 day ago|||
> Downloading a deb via a package manager is more secure.

Not what I meant. Getting software into 5 different distros and waiting years for it to be available to users is not really viable for most software authors.

uecker 1 day ago||
I think it would be quite viable if there were any willingness to work with the distributions in the interest of security.
IshKebab 1 day ago||
Well, distros haven't really put any effort into making it viable as far as I know. They really should! Why isn't there a standard Linux package format that all distros support? Flatpak is fine for user GUI apps but I don't think it would be feasible to e.g. distribute Rust via a Flatpak.

(And when I say fine, I haven't actually used it successfully yet.)

I think distros don't want this though. They all want everyone to use their format, and spend time uploading software into their repo. Which just means that people don't.

tonetheman 1 day ago|||
[dead]
oguz-ismail 1 day ago|||
[flagged]
nitinreddy88 1 day ago||
Building tools is one thing, building a system like Postgres or Databases is going to be another thing.

Has anyone really tried building PG or MySQL, or a similarly complex system that heavily relies on IO operations and multithreading capabilities?

mbrock 1 day ago|
Look at how fanatical the compatibility actually is. Building Postgres or MySQL is conceivable but will probably require some changes. (SQLite compiles and runs with zero changes right now.)
SQLite 22 hours ago|||
SQLite runs about 5 times faster compiled with GCC (13.3.0) than it does when compiled with FIL-C. And the resulting compiled binary from GCC is 13 times smaller.
mbrock 21 hours ago||
Interesting! I guess that's from your standard benchmark setup. Please note that Fil-C makes no secret of having a performance penalty. It's definitely a pre-1.0 toolchain that has only recently started to pick up momentum. The author is eager to keep improving it, and seems to think there's still plenty of low-hanging and medium-hanging fruit to pick.

It does (or did, at some point) pass the thorough SQLite test suite, so at least it's probably correct! The famous SQLite test coverage and general proven quality might make SQLite itself less interesting to harden, but in order to run less comprehensively verified software that links with SQLite, we have to build SQLite with Fil-C too.

kragen 1 day ago|||
Thanks for checking! I was wondering.
mbrock 1 day ago||
If you run Nix (whether on NixOS or elsewhere) you can do `cachix use filc` and `nix run github:mbrock/filnix#sqlite` and it should drop you into a Fil-C SQLite after downloading the runtime dependencies from my binary cache (no warranty)!
kragen 1 day ago||
Thanks!
stevefan1999 1 day ago||
djb uses a surprisingly low amount of RAM (12GB), considering my laptop already has 64G, which it's possible to expand to 128G in the future.
erichocean 1 day ago||
I would really like to see Omarchy go this direction. A fully memory-safe userland for Omarchy is possible with existing technology.
timeon 1 day ago|
Can you elaborate why Omarchy? I'm asking, in context of recompiling with Fil-C, because that seems to be just Arch + configurations.
erichocean 1 day ago||
I would like Omarchy, as a cultural matter, to adopt straightforward security as one of its goals, in addition to usability and beauty.

It's low hanging fruit, and a great way to further differentiate their Linux distribution.

jeffrallen 2 days ago||
Wish we were talking about making Fil-C required for apt, not Rust...
phicoh 2 days ago||
Those seem to be independent issues. Fil-C is about the best way to compile/run C code.

Rust would be about what language to use for new code.

Now that I have been programming in Rust for a couple of years, I don't want to go back to C (except for some hobby projects).

thomasmg 1 day ago||
I agree. The main advantage of Fil-C is compatibility with C, in a secure way. The disadvantages are speed and garbage collection. (Though I read that garbage collection might not be needed in some cases; I would be very interested in knowing more details.)

For new code, I would not use Fil-C. For kernel and low-level tools, other languages seem better. Right now, Rust is the only popular language in this space that doesn't have these disadvantages. But in my view, Rust also has issues, especially the borrow checker and code verbosity. Maybe in the future there will be a language that resolves these issues as well (as a hobby, I'm trying to build such a language). But right now, Rust seems to be the best choice for the kernel (for code that needs to be fast and secure).

kees99 1 day ago||
> disadvantages are speed, and garbage collection.

And size. About 10x increase both on disk and in memory

  $  stat -c '%s %n' {/opt/fil,}/bin/bash
  15299472 /opt/fil/bin/bash
   1446024 /bin/bash

  $ ps -eo rss,cmd | grep /bash
  34772 /opt/fil/bin/bash
   4256 /bin/bash
nialse 1 day ago||
How does that compare with Rust? You don't happen to have an example of a binary currently moving to Rust in Ubuntu-land as well? Curious to see, as I honestly don't know whether Rust is nimble like C or not.
kees99 1 day ago|||
My impression is - rust fares a bit better on RAM footprint, and about as badly on disk binary size. It's darn hard to compare apples-to-apples, though - given it's a different language, so everything is a rewrite. One example:

Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".

Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".

vacuity 1 day ago|||
I don't have numbers, but Rust is also terrible for binary size. Large Rust binaries can be improved with various efforts, but it's not friendly by default. Rust focuses on runtime performance, high-level programming, and compile-time guarantees, but compile times and binary sizes are the drawback. Notably, Rust prefers static linking.
dontlaugh 1 day ago|||
Fil-C is slow.

There is no C or C++ memory safe compiler with acceptable performance for kernels, rendering, games, etc. For that you need Rust.

The future includes Fil-C for legacy code that isn’t performance sensitive and Rust for new code that is.

drnick1 1 day ago|||
No, Rust is awful for game development. It's not really what it was intended for. For one, all the graphics APIs are in C, so you would have to use unsafe FFI basically everywhere.
sibellavia 1 day ago||||
How slow? In some contexts, the trade-off might be acceptable. From what I've seen in pizlonator's tweets, in some cases the difference in speed didn't seem drastic to me.
kevincox 1 day ago|||
Yeah, I would happily run a bunch of my network services in this. I have loads of services that are public-facing doing a lot of complex parsing and rule evaluation and are mostly idle. For example my whole mailserver stack could probably benefit from this. My few messages an hour can run 2x slower. Maybe I would leave dovecot native since the attack surface before authentication is much lower and the performance difference would be more noticeable (mostly for things like searches).
kragen 1 day ago||
You may be aware that one of the things Bernstein is famous for is revolutionizing mailserver security.
Rebelgecko 1 day ago||||
I imagine Apt is usually IO constrained?
pizlonator 1 day ago||
That's my guess, yeah

Also, Fil-C's overheads are the lowest for programs that are pushing primitive bits around.

Fil-C's overheads are the highest for programs that chase pointers.

I'm guessing the CPU bound bits of apt (if there are any) are more of the former

mbrock 1 day ago|||
What does that have to do with apt?
dontlaugh 1 day ago||
Enough of it is performance sensitive that Fil-C is not an option.

Fil-C is useful for the long tail of C/C++ that no one will bother to rewrite and is still usable if slow.

procaryote 1 day ago||
How is apt performance sensitive?
kragen 1 day ago|||
Apt has been painfully slow since I started using Debian last millennium, but I suspect it's not because it uses a lot of CPU, or it would be snappy by now.
dontlaugh 1 day ago|||
It parses formats and does TLS, I’m assuming it’d be quite bad. I don’t think you can mix and match.
jitl 1 day ago|||
stuff that talks to "the internet" and runs as "root" seems like a good thing to build with filc.
loeg 1 day ago||
It probably uses OS sandboxing primitives already.
kragen 1 day ago||
In normal operation, apt has to be able to upgrade the kernel, the bootloader, and libc, so it can't usefully be sandboxed except for testing or chroots.
loeg 1 day ago||
No, that doesn't follow. That only means the networking and parsing functions can't be sandboxed in the same process that drops new root-owned files. C and C++ services have been using subprocesses for sandboxing risky functionality for a long time now. It appears Apt has some version of this:

https://salsa.debian.org/apt-team/apt/-/blob/main/apt-pkg/co...

kragen 1 day ago||
That's true; you can't usefully sandbox apt as a whole, but, because it verifies the signatures of the packages it downloads, you could usefully sandbox the downloading process, and you could avoid doing any parsing on the package file until you've validated its signature. It's a pleasant surprise to hear that it already does something like this!
lucyjojo 1 day ago|||
doesn't it only work on x86_64?
oddmiral 1 day ago||
I wish we had something like Fil-C as an option for unsafe Rust.
arthur2e5 1 day ago|||
Fil-C works because you recompile the whole C userspace. Unsafe Rust doesn't do that... and for many practical purposes you probably want to touch the non-safe-version of the C userspace.

Still, it's all LLVM, so perhaps unsafe Rust for Fil-space can be a thing, a useful one for catching (what would be) UBs even [Fil-C defines everything, so no UBs, but I'm assuming you want to eventually run it outside of Fil-space].

Now I actually wonder if Fil-C has an escape hatch somewhere for syscalls that it does not understand etc. Well it doesn't do inline assembly, so I shouldn't expect much... I wonder how far one needs to extend the asm clobber syntax for it to remotely come close to working.

jitl 1 day ago||
at the bottom of the turtle stack, there's a yolo-c libc that does some syscall stuff:

> libyoloc.so. This is a mostly unmodified [musl/glibc] libc, compiled with Yolo-C. The only changes are to expose some libc internal functionality that is useful for implementing libpizlo.so. Note that libpizlo.so only relies on this library for system calls and a few low level functions. In the future, it's possible that the Fil-C runtime would not have a libc in Yolo Land, but instead libpizlo.so would make syscalls directly.

but mostly you are using a fil-c compiled libc:

> libc.so. This is a modified musl libc compiled with Fil-C. Most of the modifications are about replacing inline assembly for system calls with calls to libpizlo.so's syscall API.

That links here: https://github.com/pizlonator/fil-c/blob/deluge/filc/include...

Quotes from: https://fil-c.org/runtime

simonask 1 day ago|||
Unsafe Rust actually has a great runtime analyzer: Miri. It's very easy to just run `cargo +nightly miri test` in your project to get some confidence in the more questionable choices along the way.
quotemstr 1 day ago|
I can't wait for all the delicious four-way flamewars. Choose your fighter!

1) Rewrite X in Rust

2) Recompile X using Fil-C

3) Recompile X for WASM

4) Safety is for babies

There are a lot of half baked Rust rewrites whose existence was justified on safety grounds and whose rationale is threatened now that HN has heard of Fil-C

Klonoar 1 day ago||
Fil-C has come up on HN plenty of times before. If it was going to make much of a dent in the discussions, it would have by now.
jitl 1 day ago|||
odd fallacy. things grow in popularity / awareness over time
quotemstr 1 day ago|||
It's strange how ideas seem to explode at random into the discourse despite being known for a long time. It's as if some critical mass stumbles on a thing and it becomes "the current thing" everyone talks about until the next current thing.
ddalex 1 day ago|||
I'm on camp 2.
dev_l1x_be 1 day ago|||
We have a saying that jam is made of fruit that gave up the fight to become brandy.
Rebelgecko 1 day ago|||
Obviously someone needs to rewrite Rust in Fil-C
pizlonator 1 day ago||
Yeah since Fil-C is just an LLVM transform we could make Rust memory safe with it
int_19h 1 day ago||
It's not an either-or (well, except for this last item).

It seems sensible not to write new software in plain C. Rust is certainly a valid choice for a safer language, but in many cases it's overkill: the rewrite is painful compared to the benefits gained over a higher-level memory-safe language like OCaml.

At the same time, "let's just rewrite everything!" is also madness. We have many battle-tested libraries written in C already. Something like Fil-C is badly needed to keep them working while improving safety.

And as for wasm, it's sort of orthogonal - whether you're writing in C or in Rust, the software may be bug-free, but sandboxing it may still be desirable e.g. as a matter of trust (or lack thereof). Also, cross-platform binaries would be nice to have in general.

vacuity 1 day ago||
> the software may be bug-free, but sandboxing it may still be desirable e.g. as a matter of trust (or lack thereof)

Wouldn't the only cause of mistrust be bugs, or am I missing something? If the program is malicious, sandboxing isn't the pertinent action.

int_19h 19 hours ago||
If any program can potentially be malicious (which is effectively the case today with any downloaded software), then sandboxing is exactly the pertinent action - provided that the sandbox is tight enough.
vacuity 17 hours ago||
I should have elaborated. If a program is known to be malicious, or should be treated as malicious, then it should probably be terminated. Given a potentially malicious program and no easy way to determine (lack of) malice, sandboxing is a reasonable measure.