Posted by lumpa 12 hours ago
I think this is a great policy by the Zig team.
However, I wanted to give Zig a try in an agentic coding scenario. For tasks that would take a few seconds when choosing Python, Java, or JavaScript as a target language, it would take tens of minutes and waste millions of tokens before producing anything.
Almost every model gets stuck trying to figure out the correct syntax and libraries for a specific Zig version, fighting the compiler and guessing at function call parameters, frequently getting them wrong and going on side quests for things that should just work.
I guess the relative scarcity of learning resources and the language's instability work against models trying to generate Zig code. Dedicated tools like zig-mcp help only a bit.
Until LLM support for Zig improves (which will require significant resources), LLM-generated Zig code won't be good enough for either Zig programmers or Zig contributors.
I was also blocked from the Zig GitHub repository, after being a frequent contributor to issue discussions, for reasons unknown. I was never informed; I just found out when I could no longer leave a thumbs-up on a comment.
(Ok ok, I think we lost the fight already. I've seen soooooo many people using AI tools on GitHub in the last ~2 weeks alone; Claude in particular has literally infiltrated everything there.)
For those who are pissed because a large OSS project isn't accepting LLM-generated slop: Fuck off!
What were you trying to imply by "very convenient"?
https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
>There’s the 4x speedup claimed by the Bun team, already available on Zig 0.16.0!
>Each [incremental] update is taking less than 0.4s, compared to the 120+ seconds taken to rebuild with LLVM. In other words, incremental updates are over 300 times faster on this codebase than fresh LLVM builds are. In comparison, an enhancement capped at a 4x improvement is pretty abysmal. [..] Again, this feature is available in Zig 0.16.0—you can use it!