
Posted by zknill 2 days ago

All your agents are going async(zknill.io)
68 points | 44 comments
edg5000 2 hours ago|
There is nothing wrong with the HTTP layer, it's just a way to get a string into the model.

The problem is the industry's obsession with concatenating messages into a conversation stream. There is no reason to do it this way. Every time you run inference on the model, the client gets to compose the context in any way they want; there are more options than just concatenating prompts and LLM outputs. (A drawback is that caching won't help much if most of the context window is composed dynamically.)

Coding CLIs as well as web chat work well because the agent can pull information into the session at will (read a file, web search). The pain point is that if you're appending messages to a stream, you're just slowly filling up the context.

The fix is to keep the message stream concept for informal communication with the prompter, but have an external, persistent message system that the agent can interact with (a bit like email). The agent can decide which messages they want to pull into the context, and which ones are no longer relevant.

The key is to give the agent not just the ability to pull things into context, but also remove from it. That gives you the eternal context needed for permanent, daemonized agents.
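A minimal sketch of that email-like external store, with hypothetical names for the pull/drop tools (nothing here is a real agent API):

```python
# Hypothetical sketch: messages persist outside the context window, and the
# agent has tools to pull them in and drop them out. The context is composed
# fresh each turn instead of being an ever-growing concatenation.
class MessageStore:
    def __init__(self):
        self.messages = {}       # id -> message body, persisted externally
        self.in_context = set()  # ids currently included in the prompt

    def deliver(self, msg_id, body):
        """A new message arrives; it is stored, but NOT yet in context."""
        self.messages[msg_id] = body

    def pull(self, msg_id):
        """Agent tool: bring a stored message into the context window."""
        self.in_context.add(msg_id)

    def drop(self, msg_id):
        """Agent tool: remove a message from context (it stays stored)."""
        self.in_context.discard(msg_id)

    def compose_context(self):
        """Build the prompt fresh each turn from whatever is pulled in."""
        return [self.messages[i] for i in sorted(self.in_context)]
```

The point is that `drop` only removes a message from the composed context; the message stays in the store, so the agent can pull it back later if it becomes relevant again.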

vanviegen 1 hour ago||
I've been working on a coding agent that does this on and off for about a year. Here's my latest attempt: https://github.com/vanviegen/maca#maca - This one allows agents to request (and later on drop) 'views' on functions and other logical pieces of code, and always get to see the latest version of it. (With some heuristics to not destroy kv-caches at every turn.)

The problem is that the models are not trained for this, nor for any other non-standard agentic approach. It's like fighting their 'instincts' at every step, and the results I've been getting were not great.

zknill 2 hours ago|||
> "and which ones are no longer relevant."

This is absolutely the hardest bit.

I guess the short-cut is to include all the chat conversation history, and then if the history contains "do X" followed by "no actually do Y instead", then the LLM can figure that out. But isn't it fairly tricky for the agent harness to figure that out, to work out relevancy, and to work out what context to keep? Perhaps this is why the industry defaults to concatenating messages into a conversation stream?

vdelpuerto 19 minutes ago|||
Shortcut works sometimes. But if X is common in training and Y is rare, the model regresses on the next turn even with 'do Y, not X' right there in history. That's @vanviegen's 'fighting instincts' point: you can't trust the model to read the correction. Gate it before the model runs instead of inferring it from context.
asixicle 1 hour ago|||
That's what the embedding model is for. It's like a tack-on LLM that works out the relevancy and context to grab.
nprateem 57 minutes ago||
God knows why you think this is possible. If I don't even know what might be relevant to the conversation in several turns, there's no way an agent could either.
asixicle 49 minutes ago||
One of us is confusing prediction with retrieval. The embedding model doesn't predict what is going to be relevant in several turns, just on the turn at hand. Each turn gets a fresh semantic search against the full body of memory/agent comms. If the conversation or prompt changes the next query surfaces different context automatically.

As you build up a "body of work" it gets better at handling massive, disparate tasks in my admittedly short experience. Been running this for two weeks. Trying to improve it.
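A toy version of that per-turn retrieval loop. The bag-of-words `embed` here is only a stand-in for a real embedding model; what matters is the shape of the loop, where every turn re-queries the full memory so a changed prompt surfaces different context:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, memory, k=2):
    """Fresh semantic search against the full body of memory, every turn."""
    q = embed(query)
    ranked = sorted(memory, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]
```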

raincole 46 minutes ago|||
> The key is to give the agent not just the ability to pull things into context, but also remove from it

Of course Anthropic/OpenAI can do it. And the next day everyone will be complaining how much Claude/Codex has been dumbed down. They don't even comply with the context anymore!

sourcecodeplz 1 hour ago|||
Yeah, opencode was/is like this and they never got caching right. Caching is a BIG DEAL to get right.
asixicle 1 hour ago|||
To be utterly shameless, this is what I've been building: https://github.com/ASIXicle/persMEM

Three persistent Claude instances share AMQ with an additional Memory Index to query with an embedding model (that I'm literally upgrading to Voyage 4 nano as I type). It's working well so far, I have an instance Wren "alive" and functioning very well for 12 days going, swapping in-and-out of context from the MCP without relying on any of Anthropic's tools.

And it's on a cheap LXC, 8GB of RAM, N97.

handfuloflight 5 minutes ago||
Why is shame a factor at all in sharing your work?
asixicle 1 minute ago||
Good point. I guess because I'm new here I'm not positive on the decorum-policy for self-promotion.

I just make stuff to share with others, so yeah, good point.

ElFitz 2 hours ago||
Hmm.

Maybe there’s a way to play around with this idea in pi. I’ll dig into it.

_pdp_ 3 hours ago||
Here is an interesting find.

Let's say that you have two agents running concurrently: A & B. Agent A decides to push a message into the context of agent B. It does that, and the message ends up somewhere in the list of messages, near the bottom of the conversation.

The question is, will agent B register that a new message was inserted and will it act on it?

If you do this experiment, you will find that this architecture does not work very well. New messages that are recent but not the latest have little effect in an interactive session. In other words, Agent B will not respond and say, "and btw, this and that happened", unless instructed very rigidly, or unless there is some other instrumentation in place.

Your mileage may vary depending on the model.

A better architecture is pull-based. In other words, the agent has tools to query any pending messages. That way whatever needs to be communicated is immediately visible as those are right at the bottom of the context so agents can pay attention to them.

An agent in that case is slightly more rigid, in the sense that the loop needs to orchestrate and surface information, and there is certainly no one-size-fits-all solution here.

I hope this helps. We've learned this the hard way.
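The pull-based design described above can be sketched as an inbox plus one agent tool; names are illustrative:

```python
import queue

# Sketch of the pull-based architecture: instead of pushing a message into
# the middle of another agent's context, messages wait in an inbox, and the
# recipient has a tool that drains it. Whatever is pending then lands at the
# bottom of the context, where the model actually attends.
class Inbox:
    def __init__(self):
        self._q = queue.Queue()

    def push(self, sender, body):
        """Called by other agents; never touches the recipient's context."""
        self._q.put((sender, body))

    def check_pending(self):
        """Agent tool: drain pending messages so they appear as the most
        recent turn in the conversation."""
        msgs = []
        while not self._q.empty():
            msgs.append(self._q.get())
        return msgs
```

The agent loop calls `check_pending` at the top of each turn, so inter-agent messages always arrive as the freshest content in the window.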

sudosteph 1 hour ago|
Yep, I didn't want to have to think about concurrency, so my solution was a global lock file on my VM that gets checked by a pre-start hook in claude code. Each of my "agents" is its own linux user with their own CLAUDE.md, and there is a changelog file that gets injected into that each time they launch. They can update the changelog themselves, and one agent in particular runs more frequently to give updates to all of them. Most of it is just initiated by cron jobs. This doesn't scale infinitely, but if you stick to two-pizza teams per VM it will still be able to do a lot.

So hooks are your friends. I also use one as a pre flight status check so it doesn't waste time spinning forever when the API has issues.
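A rough Python equivalent of that global-lock check, with an illustrative path; the actual hook wiring into claude code is not shown here:

```python
import os

# Sketch of the lock-file idea: a pre-start check that bails out if another
# agent on the VM already holds the lock. O_CREAT | O_EXCL makes the
# create-if-absent step atomic, so two agents can't both win the race.
LOCK = "/tmp/agents.lock"  # illustrative path

def try_acquire(lock_path=LOCK):
    """Atomically create the lock file; return False if someone holds it."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())  # record the holder's pid
    os.close(fd)
    return True

def release(lock_path=LOCK):
    """Remove the lock so the next agent's pre-start check succeeds."""
    try:
        os.remove(lock_path)
    except FileNotFoundError:
        pass
```

A pre-start hook would exit nonzero when `try_acquire` returns False, and a post-session step would call `release`.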

aledevv 2 hours ago||
> All of these features are about breaking the coupling between a human sitting at a terminal or chat window and interacting turn-by-turn with the agent.

This means:

- less and less "man-in-the-loop"

- less and less interaction between LLMs and humans

- more and more automation

- more and more decision-making autonomy for agents

- more and more risk (i.e., LLMs' responsibility)

- less and less human responsibility

Problem:

Tasks that require continuous iteration and shared decision-making with humans have two possible options:

- either they stall until human input

- or they decide autonomously at our risk

Unfortunately, automation comes at a cost: RISK.

dist-epoch 2 hours ago|
AI driven cars have better risk profiles than humans.

Why do you think the same will not also be true for AI steerers/managers/CEO?

In a year or two, having a human in the loop, with all of their biases and inconsistencies, will be considered risky and irresponsible.

khafra 1 hour ago|||
"Did the vehicle just crash" has a short feedback loop, very amenable to RL. "Did this product strategy tank our earnings/reputation/compliance/etc" can have a much longer, harder to RL feedback loop.

But maybe not that much longer; METR task length improvement is still straight lines on log graphs.

dist-epoch 1 hour ago||
The AI has read all the business books, blogs and stories.

Unless your CEO is Steve Jobs, it's hard to imagine it being much worse than your average pointy haired boss.

rapind 58 minutes ago|||
> The AI has read all the business books, blogs and stories.

This seems like a liability as most business books, blogs, and stories are either marketing BS or gloss over luck and timing.

> Unless your CEO is Steve Jobs, it's hard to imagine it being much worse than your average pointy haired boss.

As someone using AI agents daily, this is actually incredibly easy to imagine. It's hard to imagine it NOT being horrible! Maybe that'll change though... if gains don't plateau.

nprateem 54 minutes ago|||
But they are shit. Over the last 2 days I've got bored of the predictable cycle of it first getting excited about a new idea, then backpedaling once I shoot it to pieces.

They can't write and think critically at the same time. Then subsequent messages are tainted by their earlier nonsensical statements.

Opus 3.7 BTW, not some toy open source model.

jddj 2 hours ago||||
Getting to that point is likely going to involve a lot of (the business and personal equivalent of) Teslas electing to drive through white semitrailers.
philipwhiuk 27 minutes ago||||
Or autonomous weapons?
oblio 1 hour ago|||
> AI driven cars have better risk profiles than humans.

From which company? I hope you say "Waymo", because Tesla is lying through its teeth and hiding crash statistics from regulators.

artisin 2 hours ago||
So reinventing terminal multiplexing, except over proprietary chat/realtime transports instead of PTYs?
oblio 1 hour ago|
Yeah, but one is free and the other one might make you a billionaire.

If you think about it, about 30% of the biggest businesses out there are based on this exact business idea. IRC - Slack, XMPP & co - the many proprietary messengers out there, etc.

scotty79 5 minutes ago||
It seems that people have started spontaneously using chat apps (telegram and such) as a durable channel between them and their async agents.

Somebody had better standardize that, because otherwise we'll end up with agents sending rich payloads between themselves via telegram.

Yokohiii 2 days ago||
this is a commercial sales pitch for something that doesn't exist
zknill 3 hours ago|
I don't think this is quite right. I do work for a pub/sub company that's involved in this space, but this article isn't a commercial sales pitch and we do have a product that exists.

The article is about how agents are getting more and more async features, because that's what makes them useful and interesting. And how the standard HTTP based SSE streaming of response tokens is hard to make work when agents are async.

philipwhiuk 26 minutes ago||
> but this article isn't a commercial sales pitch

Yes it is. But it's nice you've convinced yourself I guess.

What is this, if not a product pitch:

> Because we’re building on our existing realtime messaging platform, we’re approaching the same problem that Cloudflare and Anthropic are approaching, but we’ve already got a bi-directional, durable, realtime messaging transport, which already supports multi-device and multi-user. We’re building session state and conversation history onto that existing platform to solve both halves of the problem; durable transport and durable state.

Havoc 2 hours ago||
Struggling with this at the moment too - the second you have a task that is a blend of a CI-style pipeline, LLM processing, and openclaw handing that data back and forth, maintaining state and triggering the next step gets tricky. They're essentially different paradigms of processing data, and where they meet there are impedance mismatches.

Even if I can string it together it's pretty fragile.

That said I don't really want to solve this with a SaaS. Trying really hard to keep external reliance to a minimum (mostly the llm endpoint)

mettamage 2 hours ago||
> The interesting thing is what agents can do while not being synchronously supervised by a human.

I vibe coded a message system where I still have all the chat windows open, but my agents run a command that finishes once a message meant for them comes along, and then they need to start it back up again themselves. I kept it semi-automatic like that because I'm still experimenting with whether this is what I want.

But they get plenty done without me this way.

sebastiennight 3 hours ago||
The idea of the "session" is an interesting solution, I'll be looking forward to new developments from you on this.

I don't think it solves the other half of the problem that we've been working on, which is what happens if you were not the one initiating the work, and therefore can't "connect back into a session" since the session was triggered by the agent in the first place.

zknill 3 hours ago|
With the approach based on pub/sub channels, this is possible to do if you know the name of the session (i.e. know the name of the channel).

Of course the hard bit then is; how does the client know there's new information from the agent, or a new session?

Generally we'd recommend having a separate kind of 'notification' or 'control' pub/sub channel that clients always subscribe to, to be notified of new 'sessions'. Then they can subscribe to the new session based purely on knowing the session name.
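That control-channel pattern, sketched against a toy in-memory bus (a real pub/sub transport would replace the `Bus` class, and all names here are illustrative):

```python
from collections import defaultdict

class Bus:
    """Stand-in for a real pub/sub transport: named channels, callbacks."""
    def __init__(self):
        self.subs = defaultdict(list)  # channel name -> subscriber callbacks

    def subscribe(self, channel, callback):
        self.subs[channel].append(callback)

    def publish(self, channel, message):
        for cb in list(self.subs[channel]):
            cb(message)

def client(bus, received):
    # Clients always subscribe to the well-known 'control' channel; when a
    # new session is announced there, they join that session's channel.
    def on_control(msg):
        bus.subscribe(msg["session"], received.append)
    bus.subscribe("control", on_control)

def agent_starts_session(bus, session_name):
    # Agent-initiated session: announce its channel name on 'control',
    # then publish into the session channel itself.
    bus.publish("control", {"session": session_name})
    bus.publish(session_name, "hello from the agent")
```

This is how a client can receive output from a session it never initiated: the only thing it needs up front is the name of the control channel.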

serbrech 3 hours ago|
I recognize the problem statement and the decomposition of it. But not the solution. Especially since he says he sees the same problem being worked on by N people. And now that makes it N+1? I’ve been more interested in the protocols and standards that could truly solve this for everyone in a cross-compatible way. Some people have dabbled with atproto as the transport and “memory” storage, for example.