Posted by antirez 2 days ago
As HN likes to say, only an amateur vibe-coder could believe this.
> Address bits for pixel (x, y):
> * 010 Y7 Y6 Y2 Y1 Y0 | Y5 Y4 Y3 X7 X6 X5 X4 X3
Which is wrong: it's X4-X0, since the function takes a byte column, not a pixel x. The comment does not match the code below.
> static inline uint16_t zx_pixel_addr(int y, int col) {
It computes a pixel address with 0x4000 added to it, only to always subtract 0x4000 from it later. The ZX has its ROM at 0x0000..0x3fff, which necessitates the offset in general, but not in this particular case.
This and the other inline function next to it for attributes are only ever used once.
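For reference, the standard Spectrum bitmap address layout being argued about can be written out like this. This is a sketch assuming the same `zx_pixel_addr(y, col)` signature as the quoted code; the point is that only five bits come from the column argument:

```c
#include <stdint.h>

/* ZX Spectrum bitmap address for pixel row y (0..191) and byte
 * column col (0..31). Bit layout of the resulting address:
 *   010 Y7 Y6 Y2 Y1 Y0 | Y5 Y4 Y3 C4 C3 C2 C1 C0
 * where C4..C0 are the five bits of the byte column (pixel x >> 3). */
static inline uint16_t zx_pixel_addr(int y, int col) {
    return (uint16_t)(0x4000
        | ((y & 0xC0) << 5)   /* Y7 Y6  -> address bits 12..11 */
        | ((y & 0x07) << 8)   /* Y2..Y0 -> address bits 10..8  */
        | ((y & 0x38) << 2)   /* Y5..Y3 -> address bits 7..5   */
        | (col & 0x1F));      /* column -> address bits 4..0   */
}
```

The interleaved Y bits are why consecutive scanlines within a character row are 256 bytes apart while the bitmap still ends at 0x57FF.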
> During the 192 display scanlines, the ULA fetches screen data for 128 T-states per line.
Yep.. but..
> Instead of a 69,888-byte lookup table
How does that follow? The description completely forgets to mention that it's (192 display scanlines + 64 + 56 border lines) * 224 T-states = 69,888.
I'm bored. This is a pretty muddy implementation. It reminds me of the way children play with Duplo blocks.
IMHO zx_pixel_addr() is not bad; it makes sense in this case. I'm a lot more unhappy with the actual implementation of the screen -> RGB conversion that uses this function, which is not as fast as it could be. For instance, my own zx2040 emulator's video RAM to ST77xx display conversion (written by hand, also on GitHub) is more optimized. But providing the absolute address in video memory, instead of the offset, is fine. Just a design choice.
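For readers unfamiliar with the format: a screen-to-RGB conversion of this kind decodes one bitmap byte plus one attribute byte (FLASH, BRIGHT, 3-bit paper, 3-bit ink) into eight pixels. A minimal sketch, not taken from either emulator; the RGB565 palette values and function names are illustrative:

```c
#include <stdint.h>

/* Approximate Spectrum palette in RGB565: indices 0..7 are the normal
 * colors (black, blue, red, magenta, green, cyan, yellow, white),
 * 8..15 the BRIGHT variants. Values are illustrative approximations. */
static const uint16_t zx_pal[16] = {
    0x0000, 0x001B, 0xD800, 0xD81B, 0x06C0, 0x06DB, 0xDEC0, 0xDEDB,
    0x0000, 0x001F, 0xF800, 0xF81F, 0x07E0, 0x07FF, 0xFFE0, 0xFFFF,
};

/* Expand one bitmap byte + its attribute byte into eight RGB565 pixels.
 * Attribute layout: F B P2 P1 P0 I2 I1 I0 (flash, bright, paper, ink). */
static void zx_byte_to_rgb565(uint8_t bits, uint8_t attr, uint16_t out[8]) {
    int bright = (attr >> 6) & 1;
    uint16_t ink   = zx_pal[(attr & 0x07) | (bright << 3)];
    uint16_t paper = zx_pal[((attr >> 3) & 0x07) | (bright << 3)];
    for (int i = 0; i < 8; i++)
        out[i] = (bits & (0x80 >> i)) ? ink : paper;  /* MSB = leftmost pixel */
}
```

A fast converter avoids recomputing ink/paper per pixel, e.g. by caching the two colors per attribute cell or using a 256-entry expansion table per attribute.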
> This and the other inline function next to it for attributes are only ever used once.
I agree with that, but honestly 90% of developers work this way, and LLMs have this style for the same reason. A style I dislike as well...
About the lookup table: the code it uses in the end, in zx_contend_delay(), came from a hint I provided. The old code was correct but extremely memory-wasteful (some emulators really do take the path of the huge lookup table, perhaps to avoid the division for maximum speed), and the full comment about the T-states referred to that version. After the code was changed, this half-comment is indeed bad and totally useless. In the Spectrum emulator I provided a few hints. In the Z80, no hint at all.
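For context, the division-based approach being described can be sketched roughly like this. The constants and the `zx_contend_delay()` signature are assumptions based on standard 48K Spectrum timing, not the repo's actual code:

```c
#include <stdint.h>

#define TSTATES_PER_LINE   224
#define FIRST_DISPLAY_LINE 64                    /* border/retrace lines before the bitmap */
#define DISPLAY_LINES      192
#define FRAME_TSTATES      (312 * TSTATES_PER_LINE)  /* 69,888 T-states per frame */

/* Given a T-state counter within the frame, return the ULA memory
 * contention delay. The 48K pattern repeats every 8 T-states as
 * 6,5,4,3,2,1,0,0 during the 128 T-states per line in which the ULA
 * fetches screen data. A division and modulo replace a per-T-state
 * lookup table of FRAME_TSTATES entries. */
static int zx_contend_delay(uint32_t ts) {
    uint32_t line = ts / TSTATES_PER_LINE;
    uint32_t hpos = ts % TSTATES_PER_LINE;
    if (line < FIRST_DISPLAY_LINE || line >= FIRST_DISPLAY_LINE + DISPLAY_LINES)
        return 0;                 /* border line: no contention */
    if (hpos >= 128)
        return 0;                 /* non-fetching portion of a display line */
    static const int pattern[8] = { 6, 5, 4, 3, 2, 1, 0, 0 };
    return pattern[hpos & 7];
}
```

The huge-table variant simply precomputes this function for all 69,888 frame positions; both give the same answers, one trades memory for a division.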
If you check the code in general (the Z80 implementation, for instance), it is solid work on average. Normally, after using automatic programming in this way, I would ask the agent (and likely Codex as well) to check that the comments match the documentation. Here, since it is an experiment, I did zero refinements, to show the actual raw output you get. And it is not bad, I believe.
P.S. I see your comment greyed out, I didn't downvote you.
WTF? I appreciate your technical expertise but you can't be aggressive like this on HN, and we've had to ask you this before: https://news.ycombinator.com/item?id=45663563.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
I disagree that this is "aggressive." It's certainly opinionated. I think the AI does a bad job here and I'm attempting to express that in a humorous and qualified way.
> WTF?
You don't consider this to be "aggressive?"
> stick to the rules when posting here
Do you genuinely think I'm trying to be disruptive?
> - Based on classic Z80 architecture by Zilog
> - Inspired by modern RISC designs (ARM, RISC-V, MIPS)
Funny enough, there is a 32-bit version of Z80 called Z380.
I wish people would stop using this phrase altogether for LLM-assisted coding. It has a specific legal and cultural meaning, and the giant amount of proprietary IP that has been (illegally?) fed to the model during training completely disqualifies any LLM output from claiming this status.
Essentially they can't do clean room anything!
You might as well hire the entire former mid-level of a business's programming team and claim it's clean-room work.
https://www.itprotoday.com/server-virtualization/windows-nt-...
In any case, an interesting experiment.
In fact, this would make for an interesting benchmark: writing entire non-trivial apps based on the same prompt. Each model might be expected to write and use its own test cases, but then all could be judged against a common set of test cases provided as part of the benchmark suite.
Because that is exceptionally unlikely.