Posted by antirez 2 days ago

Redis array: short story of a long development process (antirez.com)
270 points | 89 comments
thallavajhula 2 days ago|
Salvatore really wants to popularize the term Automatic Programming/Coding it seems. (https://antirez.com/news/159)
beyti 2 days ago||
I keep trying to minimize the words needed to describe the same thing as well, since we find ourselves doing "that" operation more and more over time.

maybe shortening the term to "auto-code" would help tho.

zozbot234 2 days ago||
https://en.wikipedia.org/wiki/Automatic_programming It's an acknowledged term in computer science, describing any mechanism whatsoever for auto-generating code from a description at a higher level of abstraction. Of course LLMs are highly unusual in being non-deterministic and having a surprisingly broad scope, but this does not make the term inapplicable.
jdw64 2 days ago||
It feels like Redis is becoming a small database, which seems to make it more convenient to use. Could you add more examples that clarify where the boundary should be?
antirez 2 days ago|
Well, Redis is a data structures server, and has very complicated and edgy data structures like the HyperLogLog, so I have very little doubt that a fundamental data type like the Array will fit :) Also the actual complexity added is mostly two C files that are well commented and understandable.

    wc -l t_array.c sparsearray.c
        2012 t_array.c
        2063 sparsearray.c
        4075 total (including comments)
Sure, there is also the AOF/RDB glue, the tests, the vendored TRE library for ARGREP. But all in all it's self-contained complexity with few interactions with the rest of the server.

A quick note: if we focus only on that part of the implementation, skipping tests and persistence code (which is not huge), 4075 lines in 4 months average out to about 33 lines per day, which is quite low.

jdw64 2 days ago||
I’m a big fan of your work, and I honestly didn’t expect to receive a reply from you. Thank you. Also, thank you for pointing out exactly where I was misunderstanding the issue. In the past, I used Redis for temperature measurements in a smart farm project. I used Hashes back then, but it seems like Array would fit that use case much better.

This looks like a very useful feature. Thank you again for the reply.

antirez 2 days ago||
I appreciate your kind reply as well :)
shay_ker 2 days ago||
antirez: i'm curious, with the final code, have you experimented with effectively one-shotting the final result? i wonder if we can get there with GEPA, and maybe there's something we can learn in how to elicit/prompt these models to get what we want.

or maybe the conclusion is that model providers need to clean up their training data!

srinikhilr 2 days ago||
Anyone know how to get the specification mentioned in the blog post? Don't see one in the linked PR.
nitwit005 2 days ago||
The use of C stdlib localization functions (toupper, mbrtowc, etc.) makes me wonder whether there will be some regex behavior differences between systems or locales.
antirez 2 days ago|
Redis sets the locale at startup to avoid issues, so it should be OK, but we will document that, for instance, è will not match È when nocase is used.
ok123456 2 days ago||
Is this an apologia since the PR is +22,212 -34?
antirez 2 days ago|
Haha, ~5000 LOC with comments. The rest is tests + TRE code + TRE tests.
gbalduzzi 2 days ago||
Is it possible to see the specification file you created and used for AI assisted development?

Very cool anyway! Can I expect a youtube video about this soon?

antirez 2 days ago|
Yep, I will release it. It is a bit out of sync at this point, but I will do an updating pass and then release it.
nojvek 2 days ago||
It’s always a great HN thread when an author of a widely used lib/app engages on a technical level.

antirez - you inspire a generation of devs. Thanks for all you do.

dsecurity49 2 days ago||
AI is a fantastic co-pilot, but you still need to know how to fly the plane when the edge cases start hitting the fan.
leetrout 2 days ago||
On Safari mobile it's a page with the title header and a footer. There's no content rendering.
antirez 2 days ago||
Checking, thanks. EDIT: works very well on my iPhone, so without being able to reproduce it, this is not easy to fix.
tobr 2 days ago||
Same here, I need to turn off content blockers for the article content to load.
antirez 2 days ago||
I should probably remove the Adsense JS which I don't use anyway...
leetrout 2 days ago||
Oh shoot. Sorry I didn't even think about having a content blocker running on my phone. Sorry for the distraction.
epolanski 2 days ago|
Got a few questions:

- the project essentially spans almost 3 different (albeit minor) generations of LLMs. Have you noticed major differences in their personas, behavior, or output for this specific use case?

- when using AI for feedback, have you ever considered giving it different "personalities"? I have a few skills that role-play as very different reviewers with their own different (by design conflicting) personalities. I found this improves the output, but it is also extremely tiring and often has a high noise ratio.

- when did you, if ever, feel that AI was slowing you down massively compared to just doing it yourself (e.g. some specific bug, performance, or design fix)? Are there recurring patterns?

- conversely, how often did AI have moments where it genuinely gave you feedback or ideas that would not have come to you?

- last: do you have specific prompts, skills, setups, etc to work on specific repositories?

antirez 2 days ago|
1. The huge jump was from Opus to GPT 5.3. Game changer. GPT 5.4 and 5.5 were better, but only incrementally better.

2. Nope, I don't give it personalities much, but I use subtle prompt differences to maximize certain responses I want, to make the model focus on a given detail or act with a specific kind of engineering mindset.

3. It never happened that the AI was slowing me down, since I always had the full context and code details of what was happening in mind. I believe this happens more when you don't have a clear idea. Also, GPT >= 5.3/4 is not like the past generation of models; it is very hard to trap it into a situation where it seems unable to understand what you mean.

4. A few times the AI provided fresh insights that I really liked. Most of the time it was the other way around. Certain implementations were written by the AI at a very impressive level of quality.

5. I don't use general skills; I build skills with deep search when needed for specific projects, and build an AGENT.md that works as a knowledge base as I work with the AI. One thing I use a lot, when there is a very complex problem, is to tell GPT that I have a friend called Machiavelli who is an incredible computer scientist, and to write him an email in /tmp/letter.md with the problem we are facing, and that I'll try to get a reply. Then I ask GPT 5.5 Pro on the web with extensive reasoning set on. It will sometimes take 30 minutes or more to reply. Often, after I feed back the reply, the agent is able to see things a lot more clearly.

epolanski 2 days ago||
Thanks a lot for the insights. I like the Machiavelli thing.

> Then I ask GPT 5.5 Pro on the web with extensive reasoning set on. It will take sometimes 30 minutes or more to reply.

Any reason why Codex can't do that?

antirez 2 days ago||
If Pro is the same model (hard to tell, I'm not sure), it has a token budget for thinking (test-time scaling) which is huge compared to the Codex endpoint.