Posted by todsacerdoti 10/29/2025
With the cleanup attribute (a cheap "defer" for C), the sanitizers, static analysis tools, the memory tagging extension (MTE) for memory safety at the hardware level, etc., and a Zig 1.0 still probably years away, what's the strong selling point that would make me spend time with Zig these days? Asking because I'm unsure if I should re-try it.
It also fixes a shitton of tiny design warts that we've become accustomed to in C (also very slowly happening in the C standard, but that will take decades while Zig has those fixes now).
Also, probably the best integration with C code you can get outside of C++. E.g. hybrid C/Zig projects are a regular use case and have close to no friction.
C won't go away for me, but tinkering with Zig is just more fun :)
I think this is because many parts of the stdlib are too object-oriented; for instance, the containers are more or less C++-style objects, but this style of programming really needs RAII and restricted visibility rules like public/private (now, Zig shouldn't get those, but IMHO the stdlib shouldn't pretend that those language features exist).
As a sister comment says, Zig is a great programming language, but the stdlib needs some sort of basic and consistent design philosophy which matches the language capabilities.
Tbf though, C gets around this problem by simply not providing a useful stdlib and delegating all the tricky design questions to library authors ;)
const Bla = struct {
    // public access intended
    bla: i32,
    blub: i32,
    // here be dragons
    _private: struct {
        x: i32,
        y: i32,
    },
};
...that way you can still access and mess up those 'private' items, but at least it's clear now which of the struct items are part of the 'public API contract' and which are considered internal implementation details that may change on a whim.

It could just be a compiler error/warning that has to be explicitly opted into in order to touch those fields. This allows you to say "I know this is normally a footgun to modify these fields, and I might be violating an invariant condition, but I know what I'm doing".
As such, I'm happy to not have visibility modifiers at all.
I do absolutely agree that "std" needs a good design pass to make it consistent--groveling in ".len" fields instead of ".len()" functions is definitely a bad idea. However, the nice part about Zig is that replacement doesn't need extra compiler support. Anyone can do that pass and everyone can then import and use it.
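To make that concrete, here's a tiny sketch of the kind of inconsistency I mean (written against the 0.14-era std; the container APIs are still being reworked, so details may differ): ArrayList exposes its length as a plain field via `.items.len`, while StringHashMap exposes a `count()` method.

const std = @import("std");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // Length is a field reachable through `.items`...
    var list = std.ArrayList(u8).init(allocator);
    defer list.deinit();
    try list.append(42);

    // ...while here it's a method.
    var map = std.StringHashMap(u32).init(allocator);
    defer map.deinit();
    try map.put("answer", 42);

    std.debug.print("{d} {d}\n", .{ list.items.len, map.count() });
}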
> This allows you to say "I know this is normally a footgun to modify these fields, and I might be violating an invariant condition, but I know what I'm doing".
Welcome to "Zig will not have warnings." That's why it's all or nothing.
It's the single thing that absolutely grinds my gears about Zig. However, it's also probably the single thing that can be relaxed at a later date and not completely change the language. Consequently, I'm willing to put up with it given the rest of the goodness I get.
Nobody is asking for Pizza * Weather && (Lizard + Sleep); that strawman argument for justifying ints and floats as the only algebraic types is infuriating :(
I'd love to read a blog post about your Zig setup and workflow BTW.
fn computeVsParams(rx: f32, ry: f32) shd.VsParams {
    const rxm = mat4.rotate(rx, .{ .x = 1.0, .y = 0.0, .z = 0.0 });
    const rym = mat4.rotate(ry, .{ .x = 0.0, .y = 1.0, .z = 0.0 });
    const model = mat4.mul(rxm, rym);
    const aspect = sapp.widthf() / sapp.heightf();
    const proj = mat4.persp(60.0, aspect, 0.01, 10.0);
    return shd.VsParams{ .mvp = mat4.mul(mat4.mul(proj, state.view), model) };
}
Zig also has a `@Vector` type for SIMD-style vector math, but I haven't been using it because sometimes I want to add methods to the vector types. What I would like to see though is a Ziggified version of the Clang extended vector and matrix extensions.
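For reference, a minimal sketch of what `@Vector` gives you today: element-wise operators work out of the box, but since it's a primitive type you can't hang methods off it.

const std = @import("std");

pub fn main() void {
    // Built-in SIMD-style vectors: element-wise operators just work...
    const a = @Vector(4, f32){ 1.0, 2.0, 3.0, 4.0 };
    const b = @Vector(4, f32){ 0.5, 0.5, 0.5, 0.5 };
    const sum = a + b;

    // ...but you can't declare methods on @Vector; something like a dot
    // product has to be a free function or written inline, e.g.:
    const dot = @reduce(.Add, a * b);

    std.debug.print("sum[0]={d} dot={d}\n", .{ sum[0], dot });
}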
I just find it intellectually offensive that this extremely short-sighted line is drawn after ints and floats, while any other algebraic/number types aren't similarly dignified.
If people had to write add(2, mul(3, 4)) etc for ints and floats the language would be used by exactly nobody! But just because particular language designers aren't using complex numbers and vectors all day, they interpret the request as wanting stupid abstract Monkey + Banana * Time or whatever. I really wish more Language People appreciated that there's only really one way to do complex numbers, it's worth doing right once and giving proper operators, too. Sure, use dot(a, b) and cross(a, b) etc, that's fine.
The word "number" is literally half of "complex number", and it's not like there are 1024 ways to implement 2D, 3D, 4D vector and complex number addition, subtraction, multiplication, maybe even division. There are many languages one can look to for guidance here, e.g. OpenCL[0] and Odin[1].
[0] OpenCL Quick Reference Card, masterpiece IMO: https://www.khronos.org/files/opencl-1-2-quick-reference-car...
[1] Odin language, specifically the operators: https://odin-lang.org/docs/overview/#operators
But if you're open to learning languages for tinkering/education purposes, I would say that Zig has several significant "intrinsic" advantages compared to C.
* It's much more expressive (it's at least as expressive as C++), while still being a very simple language (you can learn it fully in a few days).
* Its cross-compilation tooling is something of a marvel.
* It offers not only spatial memory safety, but protection from other kinds of undefined behaviour, in the form of things like tagged unions.
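A minimal sketch of the tagged-union point: the active tag is tracked for you, reading the wrong field is a checked panic in safe builds, and `switch` has to cover every case.

const std = @import("std");

// Unlike a bare C union, the active tag is tracked: accessing the wrong
// field is a checked panic in safe builds, and switch must be exhaustive.
const Value = union(enum) {
    int: i64,
    float: f64,
    none,
};

fn describe(v: Value) void {
    switch (v) {
        .int => |i| std.debug.print("int: {d}\n", .{i}),
        .float => |f| std.debug.print("float: {d}\n", .{f}),
        .none => std.debug.print("none\n", .{}),
    }
}

pub fn main() void {
    describe(.{ .int = 42 });
    describe(.{ .float = 3.14 });
    describe(.none);
}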
If the language didn't have a std implementation of async/await, would it affect your decision to use Zig? Or are you comfortable with traditional multithreaded facilities to the extent that you require concurrency (and parallelism)?
Before io_uring (which is disabled on most servers until it matures), there was no good way to do async file I/O on Linux.
e.g. Rust async may be backed by a threadpool, but it could also run single threaded.
Async clearly works for many people, and I do fully understand people who can't get their heads around threads and prefer async. It's wonderful that there's a pattern people can use to be productive!
For whatever reason, async just doesn't work for me. I don't feel comfortable using it and at this point I've been trying on and off for probably 10+ years now. Maybe it's never going to happen. I'm much more comfortable with threads, mutex locks, channels, Erlang style concurrency, nurseries -- literally ANYTHING but async. All of those are very understandable to me and I've built production systems with all of those.
I hope when Zig reaches 1.0 I'll be able to use it. I started learning it earlier this month and it's been really enjoyable to use.
Those are independent of each other. You can have async with and without threads. You can have threads with and without async.
The example code shown in the first few minutes of the video is actually using regular OS threads for running the async code ;)
The whole thing is quite similar to the Zig allocator philosophy. Just like an application already picks a root allocator to pass down into libraries, it now also picks an IO implementation and passes it down. A library in turn doesn't care about how async is implemented by the IO system, it just calls into the IO implementation it got handed from the application.
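The allocator half of that pattern, as a minimal sketch (an `io` parameter would be threaded through in exactly the same way):

const std = @import("std");

// A library function never picks its own allocator; the application decides
// and passes one in. The proposed Io parameter follows the same shape.
fn joinWords(allocator: std.mem.Allocator, words: []const []const u8) ![]u8 {
    return std.mem.join(allocator, " ", words);
}

pub fn main() !void {
    // The application picks the root allocator (or Io implementation)...
    const allocator = std.heap.page_allocator;

    // ...and hands it down; the callee doesn't care which one it got.
    const s = try joinWords(allocator, &[_][]const u8{ "hello", "zig" });
    defer allocator.free(s);
    std.debug.print("{s}\n", .{s});
}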
If you don’t want to use async/await just don’t call functions through io.async.
Wow. Do you expect anyone to continue reading after a comment like that?
If I had a web service using threads, would I map each request to one thread in a thread pool? It seems like a waste of OS resources when the IO multiplexing can be done without OS threads.
> last time I checked a lot of crates.io is filled with async functions for stuff that doesn't actually block.
Like what? Even file I/O blocks for large files on slow devices, so something like async tarball handling has a use case.
It's best to write in the sans-IO style and then your threading or async can be a thin layer on top that drives a dumb state machine. But in practice I find that passable sans-IO code is harder to write than passable async. It makes a lot of sense for a deep indirect dependency like an HTTP library, but less sense for an app.
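As a toy illustration of the shape (a hypothetical line-splitting "protocol"; all names are made up): the state machine only ever sees byte slices, and whoever drives it decides whether those bytes come from blocking reads, a thread pool, or an event loop.

const std = @import("std");

// The "protocol" core: no reads, no writes, no awaits, just bytes in and
// callbacks out. Threading/async lives entirely in the driver.
const LineSplitter = struct {
    buf: [256]u8 = undefined,
    len: usize = 0,

    fn feed(self: *LineSplitter, bytes: []const u8, onLine: *const fn ([]const u8) void) void {
        for (bytes) |b| {
            if (b == '\n') {
                onLine(self.buf[0..self.len]);
                self.len = 0;
            } else if (self.len < self.buf.len) {
                self.buf[self.len] = b;
                self.len += 1;
            }
        }
    }
};

fn printLine(line: []const u8) void {
    std.debug.print("line: {s}\n", .{line});
}

pub fn main() void {
    var splitter = LineSplitter{};
    // A trivially "blocking" driver; an async driver would call feed() the same way.
    splitter.feed("hello\nwor", &printLine);
    splitter.feed("ld\n", &printLine);
}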
This is a bizarre remark
Async/await isn't "for when you can't get your head around threads", it's a completely orthogonal concept
Case in point: JavaScript has async/await, but everything is single-threaded; there is no parallelism
Async/await is basically just coroutines/generators underneath.
Phrasing async as 'for people who can't get their heads around threads' makes it sound like you're just insecure that you never learned how async works yet, and instead of just sitting down + learning it you would rather compensate
Async is probably a more complex model than threads/fibers for expressing concurrency. It's fine to say that, it's fine to not have learned it if that works for you, but it's silly to put one above the other as if understanding threads makes async/await irrelevant
> The stdlib isn't too bad but last time I checked a lot of crates.io is filled with async functions for stuff that doesn't actually block
Can you provide an example? I didn't find that to be the case the last time I used Rust, but I don't use Rust a great deal anymore
> makes it sound like you're just insecure
> instead of just sitting down + learning it you would rather compensate
Can you please edit out swipes like these from your HN posts? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
Your comment would be fine without those bits.
Maybe I just wish Zig didn't call it async and used a different name.
Async-await in JS is sometimes used to swallow exceptions. It's very often used to do 1 thing at a time when N things could be done instead. It serializes the execution a lot when it could be concurrent.
if (await is_something_true()) {
    // here is_something_true() can be false
}
And above, the most common mistake. Similar side effects happen in other languages that have async-await sugar.
It smells as bad as the Zig file interface with intermediate buffers reading/writing to OS buffers until everything is a buffer 10 steps below.
It's fun for small programs but you really have to be very strict to not have it go wrong (performance, correctness).
That being said, I don't understand your `is_something_true` example.
> It's very often used to do 1 thing at a time when N things could be done instead
That's true, but I don't think e.g. fibres fare any better here. I would say that expressing that type of parallel execution is much more convenient with async/await and Promise.all() or whatever alternative, compared to e.g. raw promises or fibres.
`is_something_true` is very simple: the condition is true, and then inside the block, if you were to check again, it can be false, something that can't happen in synchronous code. Yet now, with async-await, it's very easy to get yourself into situations like these, even though the code seems to yell at you that you're in the true branch. The solution is adding a lock, but with how easy async-await is to write, it's rarely caught
My comment was responding only to the person who equated threads and async. My comment only said that async and threading are completely orthogonal, even though they are often conflated
> `is_something_true` is very simple: the condition is true, and then inside the block, if you were to check again, it can be false, something that can't happen in synchronous code
It can happen in synchronous code, but even if it couldn't, why is async/await the problem here? What is your alternative to async/await for expressing concurrency?
Here are the ways it can happen:
1. it can happen with fibers, coroutines, threads, callbacks, promises, any other expression of concurrency (parallel or not!). I don't understand why async/await specifically is to blame here.
2. Even without concurrency, you can mutate state to make the value of is_something_true() change.
3. is_something_true might be a blocking call to some OS resource, file, etc - e.g. the classic `if (file_exists(f)) open(f)` bug.
I am neutral on async/await, but your example isn't a very good argument against it
Seemingly nobody ever has any good arguments against it
> async-await pollutes the code completely if you're not strict about its usage
This is a good thing: if a function is async, then it does something that won't have completed by the time the call returns. I don't understand this argument about 'coloured functions' polluting code. If a function at the bottom of your call stack needs to do something and wait on it, then you need to wait on it in all the functions above.
If the alternative is just 'spin up an OS thread' or 'spin up a fiber' so that the function at the bottom of the callstack can block - that's exactly the same as before, you're just lying to yourself about your code. Guess what - you can achieve the same thing by putting 'await' before every function call
Perhaps you have convinced me that async/await is great after all!
`if (file_exists(f))` is misuse of the interface, a lesson in interface design, and a faulty pattern that's easy to repeat with async-await.
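The lesson being: don't ask first, just attempt the operation and handle the failure. A quick Zig sketch of that pattern (with a made-up `example.txt` path):

const std = @import("std");

pub fn main() !void {
    // Racy: the file can appear or disappear between the check and the open.
    // if (fileExists("example.txt")) { ... open it ... }

    // Race-free: just try to open it and handle the failure case.
    const file = std.fs.cwd().openFile("example.txt", .{}) catch |err| {
        if (err == error.FileNotFound) {
            std.debug.print("no such file\n", .{});
            return;
        }
        return err;
    };
    defer file.close();
    std.debug.print("opened it\n", .{});
}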
> I don't understand this argument about 'coloured functions' polluting code.
Let's say you have some state that needs to be available to the program. What happens in the end game is that you've completely unrolled the loads and computation, because any previous interface sucks: loading the whole state at the beginning serializes your program and makes it slower, while defining `async loadPartial` causes massive async-await pollution at every place where you want to read a part of the state, even if it is already in cache.
Think about it, you can't `await` on part of the state only once and then know it's available in other parts of code to avoid async pollution. When you solve this problem, you realize `await` was just in the way and is completely useless and code looks exactly like a callback or any other more primitive mechanism.
A different example is writing a handler for 1 HTTP request. People do it all the time with async-await, but what do you do when you receive N HTTP requests? Making the code perform well is impossible with the serial code that deals with 1 request, so async-await just allowed you to build something very simple in an easy way, and then it falls apart completely when you go from 1 to N. A pipelining system won't really care about async-await at all, even though it pipelines IO in addition to compute.
I think "how to express concurrency" is a question I'm not even trying to answer, although I could point to approaches that completely eliminate pollution and force you to write code in that "unrolled" way from start, something like Rx or FRP where time is exactly the unit they're dealing with.
It's even easier to repeat without async-await, where you don't need to tag the function call with `await`!
> Think about it, you can't `await` on part of the state only once and then know it's available in other parts of code to avoid async pollution. When you solve this problem, you realize `await` was just in the way and is completely useless and code looks exactly like a callback or any other more primitive mechanism.
I don't understand why you can't do this by just bypassing the async/await mechanism when you're sure that the data is already loaded
```
let data = null;

async function getDataOrWait() {
    await data_is_non_null(); // however you do this
    return data;
}

function getData() {
    if (data == null) {
        throw new Error('data not available yet');
    }
    return data;
}
```
You aren't forced into using async/await everywhere all the time. This sounds like 'a misuse of the interface, a lesson in interface design', etc
> I think "how to express concurrency" is a question I'm not even trying to answer
You can't criticise async/await, which is explicitly a way to express concurrency, if you don't even care to answer the question - you're just complaining about a solution that solves a problem that you clearly don't have (if you don't need to express concurrency, then you don't need async/await, correct!)
> point to approaches that completely eliminate pollution and force you to write code in that "unrolled" way from start, something like Rx or FRP where time is exactly the unit they're dealing with.
So they don't 'eliminate pollution', they just pollute everything by default all the time (???)
Exactly, async-await does not allow you to create correct interfaces. You cannot write code that partially loads and then has sync access without silly error throws sprinkled all over the place, even when you're 100% sure there's no way the error will be raised, or when you want to write code that is 100% correct and will only access that part of the state once it is available.
Your example is obvious code pollution. For the sake of "correctness" I need to handle your throw even though it should never happen, or at least handle the null, to satisfy a type checker.
> So they don't 'eliminate pollution', they just pollute everything by default all the time (???)
That's not the case at all. They just push you immediately in the direction where you'll land anyway once you stop using async-await, to enforce correctness and performance. Edit: you stop modelling control flow and start thinking in terms of your data dependency/computation graph, where it's very easy and correct to just switch computations/loads to batch mode. The `is_something_true` example is immediately reframed to be correct all of the time (as `file_exists` is now a signal and will fire on both true and false)
> You can't criticise async/await, which is explicitly a way to express concurrency, if you don't even care to answer the question - you're just complaining about a solution that solves a problem that you clearly don't have (if you don't need to express concurrency, then you don't need async/await, correct!)
I'm critical of async-await, I'm not comparing it to something else, I can do that but I don't think it is necessary. I've pointed to FRP as a complete solution to the problem, where you're forced to deal with the dataflow from the start, for correctness sake, and can immediately batch, for pipelining/performance sake.
IMO, just like file_exists is a gimmick, or read_bytes(buffer)/write_bytes(buffer) is a leaky abstraction where your program is now required to manage unnecessary wasteful buffers, async-await pushes you into completely useless coding rituals, and your throw is a great example. The way you achieved middle ground is with code pollution, because either a full async load at the beginning doesn't perform well, or async-await interleaved with computation pollutes everything to be async and makes the timeline difficult to reason about.
This late in the game, any concurrency solution should avoid pointless rituals.
The idea of generalizing threads for use in parallel computing/SMP didn't come until at least a decade after the introduction of threads for use as a concurrency tool.
Wasn't this only true when CPUs were single-core? Only when multi-core CPUs arrived could true parallelism happen (outside of using multiple CPUs)
I guess they didn't get a release from the question asker and so they edited it out?
The problem here though is that the presenter didn't repeat the question for the audience which is a rookie mistake.
Is the reason that the queue has a zero buffer size (always "full"), so that the consumer must consume the `flavor_text` before the producer can return?
Queue source:
> /// When buffer is full, producers suspend and are resumed by consumers.
https://github.com/ziglang/zig/blob/master/lib/std/Io.zig#L1...
Text version.
The desynced video makes it a bit painful to watch.
Performing what we normally call "pure" computation requires an allocator but not the `io` object. Code which accepts neither is also allocation-free - something which is a bit of a challenge to enforce in many languages, but it just falls out here (from the lack of closures or globally scoped allocating/effectful functions - though I'm not sure whether `io` or an allocator is now required to call arbitrary extern (i.e. C) functions or not, which you'd need for a 100% "sandbox").
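Roughly, in signature form (a sketch; the commented-out `Io` parameter assumes the in-progress `std.Io` interface, so the exact name and shape may change):

const std = @import("std");

// Sketch of the "capabilities show up in the signature" idea.

// Takes neither: can't allocate, can't do I/O.
fn add(a: i32, b: i32) i32 {
    return a + b;
}

// Takes an allocator but no io: may allocate, performs no I/O itself.
fn repeat(gpa: std.mem.Allocator, s: []const u8, n: usize) ![]u8 {
    const out = try gpa.alloc(u8, s.len * n);
    for (0..n) |i| @memcpy(out[i * s.len ..][0..s.len], s);
    return out;
}

// Takes io (and probably an allocator): may block, read, write, spawn async work.
// fn fetch(io: std.Io, gpa: std.mem.Allocator, url: []const u8) ![]u8 { ... }

pub fn main() !void {
    const gpa = std.heap.page_allocator;
    const s = try repeat(gpa, "ab", 3);
    defer gpa.free(s);
    std.debug.print("{d} {s}\n", .{ add(1, 2), s });
}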