Posted by antirez 17 hours ago
He is not "your avg dev", and it took him 4 months with an LLM.
This is not a seal of approval for you to go and command all your developers to move fully to Claude Code/Codex/any other AI coding tool.
I'm looking at you - any avg CEO of a startup.
This is arguably a key quote: "Then, it was time to read all the code, line by line. ... I found many small inefficiencies or design errors ... so I started a process of manual and AI-assisted rewrite of many modules." We should not underestimate that step: reading code line by line might easily require more time than writing it from scratch.
I remain unconvinced by the "faster to write it by hand than read it" arguments though. My experience throughout my career is that most people, myself included, top out at a couple of hundred lines of tested, production-ready code per day. I can productively review a couple of thousand.
Let the LLM cook by working through the issues one by one; in the meantime I could start reviewing them: checkout, running, reading. It was definitely faster, since it also correctly linked everything, etc. Of course, once a change goes beyond that scale it probably doesn't work. However, I really thought a good idea would be to have it pick up that work, implement it according to the issue description, and update the MR whenever the description changes, at least as long as the MR is 1-3 lines. And even if it doesn't work, I can just discard it.
(A lot of these problems are often just typos that don't even need a checkout; they come in through bigger MRs that shouldn't be blocked because of them.)
In particular, doing direct comparisons between metrics like that doesn't work. "Lines of code" isn't a good way to measure complexity of the code, and the amount of time it takes to review the code will vary quite a bit based on the use case.
There's a lot of diversity in what kind of code people write and just because it worked for someone else doesn't mean it will work for the kinds of problems you solve. It's anecdotal evidence that someone else found it useful, your mileage may vary.
When antirez says 'I ventured to a level of complexity that I would have otherwise skipped,' I don't think you can call that a minor gain. The alternative is likely something 'good enough' that leaves the community dissatisfied for months, and then after initial design mistakes become load-bearing the ideal implementation can never be realized.
To clarify, from TFA:
> even before LLMs the implementation was likely something I could do in four months. What changed is that in the same time span, I was able to do a lot more
The initial timeframe was 4 months, he was able to do more work within the same timeframe with LLMs.
I've been working on a database adapter for a couple of months using an LLM... I've got a couple of minor refactors to do still, then getting the "publish" to JSR/npm working... I've mostly held off as I haven't actually done a full review of the code... I've reviewed the tests and confirmed they're working, though. The hard part is that there are some features I really want when connecting from Windows to a Windows SQL Server instance that aren't available in Linux/containers. I don't think I'll ever choose SQL again, but at least I can use/access a good API with Windows direct auth and FILESTREAM access in Deno/Bun/Node.
FWIW: my final implementation landed on ODBC via Rust+FFI, so after I get the MSSQL driver out, I'll strip a few bits in a fork and publish a more generic ODBC client adapter, with using/dispose and async iterators as first-class features in the driver.
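If it helps make that concrete: a minimal caller-side sketch of the "using/dispose and async iterators as first-class features" idea. The names here (`connect`, `OdbcConnection`, the query shape) are hypothetical, not the actual driver's API:

```typescript
// Hypothetical caller-side sketch, not the actual driver described above.
// `connect` and `OdbcConnection` are made-up names that only illustrate
// the `await using` + async-iterator ergonomics.

interface Row {
  [column: string]: unknown;
}

interface OdbcConnection extends AsyncDisposable {
  // Rows stream back as an async iterable, so callers can `for await` them.
  query(sql: string, params?: unknown[]): AsyncIterable<Row>;
}

declare function connect(dsn: string): Promise<OdbcConnection>;

async function listRecentOrders(dsn: string): Promise<void> {
  // `await using` (TS 5.2+, recent Deno/Bun/Node) calls
  // [Symbol.asyncDispose]() on scope exit, even if iteration throws.
  await using conn = await connect(dsn);

  for await (const row of conn.query(
    "SELECT id, total FROM orders WHERE created_at > ?",
    [new Date(Date.now() - 86_400_000)],
  )) {
    console.log(row.id, row.total);
  }
}
```

The nice property is that cleanup is tied to scope rather than to remembering a try/finally close.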
He's not, but his work is obviously not average.
Average dev work is plumbing and CRUDs.
I start with a high level design md doc which an AI helps write. Then I ask another AI - whether the same model without the context, or another model - to critique it and spot bugs, gaps and omissions. It always finds obvious in hindsight stuff. So I ask it to summarize its findings and I paste that into the first AI and ask its opinions. We form an agreed change and make it and carry on this adversarial round robin until no model can suggest anything that seems weighty.
I then ask the AI to make a plan. And I round robin that through a bunch of AIs adversarially as well. In the end, the plan looks solid.
Then the end to end test cases plan and so on.
By the end of the first day or week or month - depending on the scale of the system - we are ready to code.
And as code gets made I paste that into other AIs with the spec and plan and ask them to spot bugs, omissions and gaps too and so on. Continually using other AI to check on the main one implementing.
And of course you have to go read the code, because I have found that the AI misses polish.
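If anyone wants the shape of that round-robin written down, here's a minimal sketch; `askModel` is a stand-in for however you reach each model (API, CLI, copy/paste), not a real library call, and the stopping condition is simply "no critic has anything weighty left":

```typescript
// Minimal sketch of the adversarial round-robin described above.
// `askModel` is a placeholder for however you reach each model
// (API call, CLI, copy/paste); it is not a real library function.
type ModelId = string;

declare function askModel(model: ModelId, prompt: string): Promise<string>;

async function refineDesign(
  author: ModelId,
  critics: ModelId[],
  initialDraft: string,
  maxRounds = 5,
): Promise<string> {
  let draft = initialDraft;

  for (let round = 0; round < maxRounds; round++) {
    let anyFindings = false;

    for (const critic of critics) {
      // Fresh-context critique: the critic only ever sees the current draft.
      const critique = await askModel(
        critic,
        `Critique this design doc. List bugs, gaps and omissions only:\n\n${draft}`,
      );

      // Crude stop signal; in practice you judge "weightiness" yourself.
      if (/no major issues|nothing significant/i.test(critique)) continue;
      anyFindings = true;

      // Feed the findings back to the authoring model for a revised draft.
      draft = await askModel(
        author,
        `Here is the current design doc:\n\n${draft}\n\n` +
          `A reviewer raised these points:\n\n${critique}\n\n` +
          `Revise the doc, addressing or explicitly rejecting each point.`,
      );
    }

    // Stop once no critic has anything weighty left to say.
    if (!anyFindings) break;
  }

  return draft;
}
```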
And I’m not saying that to poke fun at you (my workflow is essentially identical to yours), or at Google, but rather to say that there’s nothing new :)
AI is a fantastic accelerator of effective and ineffective workflows alike. It’s showing us which are effective and ineffective on way shorter timescales / in realtime!
> And of course you have to go read the code, because I have found that the AI misses polish
Since you mentioned using other agents, do you get mileage out of code reviews with another agent polishing the unpolished bits? My colleagues swear by it, though I personally remain skeptical about its value without a human reviewer.
> Then I ask another AI
Maybe thesis-antithesis-synthesis works better in applied computer science... https://en.wikipedia.org/wiki/Dialectic#Criticisms
Because spotting holes in specs has never been one of my strengths. And working without technical colleagues much of the time, it's a boon to be able to "rubber-duck" my ideas with something that is at least more intelligent than plastic.
Grabbing multipliers from thin air, the coding bit may only be 2x faster with a poorer-quality outcome, but working out what's needed is a good 5x faster.
And yes, I'm using the same adversarial AI MO as @wood_spirit, combined with Matt Pocock's excellent /grill-me and /grill-with-docs skills [1] and Plannotator [2] to review the plans.
Like:
[0] https://csci1710.github.io/2026/ and https://forge-fm.github.io/book/2026/
I haven't been using multiple AIs adversarially like OP, but I might consider giving it a try with Codex and Opus. That said, my AI workflow has been pretty similar... lots of iterations on just design, then iterations on documentation, testing, etc... then iterations on implementation, testing, validation and human review in the mix.
My analogy is that it's really close to working with a foreign dev team, but your turnaround is in minutes instead of days, where it's much more interactive.
I feel strange making "dev" documentation though, since it seems a bit redundant/superfluous. I fully suspect nobody is going to read it at this point.
* I can work in code I'm not familiar with much more easily.
* LLMs often identify confusion or uncertainty upfront, so I can address it earlier.
* I'm much less mentally taxed so I can go for longer at my top end.
* Meetings, disruptions, and end of day are WAY less critical since I can lean on the LLM to get back into things.
* I can do something else productive while the LLM is running. Bug fixes, documentation, PR reviews, etc.
To get a quality, lasting result, you ultimately have to carefully study everything; otherwise you quickly accumulate cognitive debt, and the speedup soon shrinks as you're constantly revisiting the initial approaches.
* 2000 lines for the sparse array.
* 2000 lines for the t_array commands and upper-layer implementation.
* ~500 lines of AOF / RDB code.

All the other stuff is tests, JSON command descriptions, and the TRE library under "deps".
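For readers who haven't looked at the feature: purely as a conceptual sketch of what "sparse array" means here (explicit index -> value entries, so large holes are cheap), not the actual C implementation:

```typescript
// Conceptual sketch only (not the actual Redis implementation, which is C
// and far more involved): a sparse array stores explicit index -> value
// entries, so huge gaps between populated slots cost nothing.
class SparseArray<T> {
  private entries = new Map<number, T>();
  private len = 0; // highest set index + 1

  set(index: number, value: T): void {
    this.entries.set(index, value);
    if (index >= this.len) this.len = index + 1;
  }

  get(index: number): T | undefined {
    return this.entries.get(index);
  }

  get length(): number {
    return this.len;
  }

  // Iterate only the populated slots, in ascending index order.
  *populated(): IterableIterator<[number, T]> {
    const indexes = [...this.entries.keys()].sort((a, b) => a - b);
    for (const i of indexes) yield [i, this.entries.get(i)!];
  }
}

const a = new SparseArray<string>();
a.set(3, "foo");
a.set(1_000_000, "bar");
console.log(a.length, [...a.populated()]); // 1000001 [[3, "foo"], [1000000, "bar"]]
```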
If the initial development bar is relatively high, it's far, far easier to identify flaws and gaps when you have the whole thing in front of you all at once.
cf. Valkey and others
I was confused because the last time I checked on things, it was still about fostering community input and advancement but not necessarily consensus. Things have tipped back in the original direction since then. I don't think "Redis was completely built in this way since the start" is completely accurate, but also the community effort under the new governance model never got very deeply entrenched while you were away.
... just speaking as someone who sometimes has to review very long PRs, though, I feel like 25% is a roughly normal level of "signal to noise." 5,000 lines of core logic is a LOT, and the tests and dependencies do still need to be read.
EDIT: I feel like the problem, as a reviewer, is processing 4 months of intensive research/development and providing useful feedback. At that point, there's probably not much major input you can have into the core architecture or strategy, so you're probably not providing much more than a bugbot.
Sure you can? In this concrete case, Redis is very "flat" — there's the data structure implementations, and there's the commands that use them. 1+N. You could have feedback about the data structure (i.e. whether it's optimal for the use-cases); or about any of the commands (i.e. not just their impls, but also whether they're the best core API surface to lock in long-term, or even whether they're worth including at all.)
Any given feedback would necessitate fairly limited rework to address, as you're either modifying the data structure (and its tests) or a command (and its tests and docs.)
Virtually all major Redis features are a solo job of the post author.
By the way, reviewers are paid good money for this and know the setup.
Now I just need a way to protect my chats from any potential discovery, and <pew pew> business’ll be easy.
Then it quickly lost its original meaning as people started using it for virtually all forms of AI-assisted coding.
@antirez: Introducing a regex feature that late into the project, for a seemingly unrelated feature, feels a bit weird? Can you explain your rationale on that a bit more? Thanks!
The RE component is interesting, but as commentary here has noted it seems orthogonal to the array data structure (i.e., usable on others as well). Does this not make more sense to accomplish with Lua scripting? Or if performance of Lua is an issue perhaps abstracting OP to be composable on top of any command that returns a range of values.
I say this with reverence for Antirez as the expert in this space, but some of this new feature set feels like the sort of solution that I tend to see arise from LLM-driven development; namely creation of new functionality instead of enhancement of existing, plus overcomplicating features when composition with others might be more effective.
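To make the composition idea concrete, here's a rough sketch of the Lua route; ARRAY.RANGE is a made-up placeholder command (nothing that actually exists), and `client` stands in for whatever Redis client library you use. The obvious catch is that Lua gives you Lua patterns rather than full regexes, which may be exactly the gap a TRE-backed feature is meant to fill:

```typescript
// Sketch of the "compose with Lua scripting" alternative. ARRAY.RANGE is a
// placeholder command name (not real); only redis.call/string.match are
// genuine Lua-scripting facilities here, and `client` is a stand-in for
// whichever Redis client you actually use.
const FILTER_RANGE_LUA = `
  local items = redis.call('ARRAY.RANGE', KEYS[1], ARGV[1], ARGV[2])
  local out = {}
  for _, v in ipairs(items) do
    -- Lua patterns, not full regexes: a real limitation of this approach.
    if string.match(v, ARGV[3]) then out[#out + 1] = v end
  end
  return out
`;

declare const client: {
  eval(script: string, keys: string[], args: string[]): Promise<string[]>;
};

async function matchingItems(
  key: string,
  from: number,
  to: number,
  pattern: string,
): Promise<string[]> {
  return client.eval(FILTER_RANGE_LUA, [key], [String(from), String(to), pattern]);
}
```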