Posted by shad42 16 hours ago

The agent harness belongs outside the sandbox (www.mendral.com)
118 points | 86 comments
qudat 12 hours ago|
Interesting idea. Tangentially related, I’ve been using my local agent to interact with remote shells via zmx, described here: https://bower.sh/zmx-ai-portal

The use case is different, but this article bears some vague similarities: an agent API to remotely execute commands.

blcknight 15 hours ago||
I am not sure anyone knows what a harness is at this point. I've heard 17 different definitions of it. It's almost like a buzzword in search of a problem.
aluzzardi 15 hours ago||
Author here. My definition is: you take an agent, remove the model and you’re left with the harness.

Tools, memories, sandboxing, steering, etc

ossa-ma 15 hours ago|||
Clean definition, stealing it. Way better than mine: "Now imagine Claude as Shinji and Claude Code as Eva..."
TeMPOraL 13 hours ago||
Huh. My definition - or rather, explanation - has always been, "The model is just a big bag of floats you multiply with some numbers to get some numbers out, plus a regular program that runs a loop which, at minimum, turns inputs (text, images) into a stream of numbers, pushes it through those multiplications against the bag of floats, and turns the results back into text/images/whatnot. That regular program is called a harness[0]. Now, the trick to making LLMs into agents is to add another loop in the harness that reads the output and decides whether to send it out to the user, or do something else, like executing more code (that's what tools are), or feeding it back to the input with some commentary (that's how you get "thinking"), or both (that's how you get the "agentic loop")".

Because there isn't really much more to it. And ever since we, i.e. those of us who played with the ChatGPT API early on, bolted tools to it, some half a year before OpenAI woke up and officially named it "function calling" - ever since then, we knew the harness was the key. What kept changing was which logic (and how much of it) to put in explicitly, vs. pushing it back to the model on the "main thread", vs. pushing it to a model on a separate conversation track. But the basic insight remains the same.
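
A minimal sketch of that agentic loop, with a hypothetical call_model standing in for any chat-style completion API (the message shapes and tool names here are illustrative, not any real SDK):

    import os

    def call_model(messages, tools):
        # Hypothetical stand-in for a chat-completions-style API call.
        # Returns {"role": "assistant", "content": ...}, optionally with a
        # "tool_call": {"name": ..., "args": ...} attached.
        raise NotImplementedError("wire up your provider here")

    TOOLS = {
        "ls": lambda path=".": "\n".join(sorted(os.listdir(path))),
    }

    def agent_loop(user_input, max_steps=10):
        messages = [{"role": "user", "content": user_input}]
        for _ in range(max_steps):
            reply = call_model(messages, tools=list(TOOLS))
            messages.append(reply)
            if reply.get("tool_call") is None:
                return reply["content"]  # nothing left to run: answer the user
            name = reply["tool_call"]["name"]
            args = reply["tool_call"]["args"]
            result = TOOLS[name](**args)  # executing code: that's what tools are
            # Feeding the result back to the input is the "agentic loop".
            messages.append({"role": "tool", "content": result})
        return "step budget exhausted"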

--

[0] - Well, today, anyway - until recently you'd have called it a "runner" or "runtime".

Dotnaught 13 hours ago||||
So, client?
beepbooptheory 15 hours ago|||
But what is an agent without tools?
tomrod 14 hours ago||
Code.
beepbooptheory 14 hours ago||
Like, as in what it's made out of, or what it makes? Neither really makes sense here? Lots of things are made out of code and are not necessarily agents, but also (from my decidedly outside-observer perspective) "agents" are not limited to being code producers either.
nextaccountic 14 hours ago|||
If you use cloud models, the harness is what runs on your computer.

AI companies would love it if everything ran in their cloud, but arguably there are latency and other reasons to run at least some stuff on your own computer.

brazukadev 15 hours ago|||
The agent harness is the REPL: the evaluation + loop.
irishcoffee 15 hours ago||
I don’t even know what an agent means, let alone harness.
IgorPartola 15 hours ago|||
There is an LLM API. You send it a system prompt and the conversation history. If the last message is a user message, the model will send back a response. It can also send back a "thinking" message before it sends a response, and it can also send back a structured message with one or more function calls for functions you defined in your API request (things like "ls(): list files").

The harness is the part that makes the API calls, interacts with the user, makes the function calls, and keeps track of the conversation memory.

You can also use the LLM to summarize the conversation into a single shorter message so you get compaction. And instead of statically defining which functions are available to the LLM you can create an MCP server which allows the LLM to auto-discover functions it can call and what they do.
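
Compaction, for instance, is just one more model call. A rough sketch (the summarize helper is hypothetical, standing in for an LLM call with a "summarize this conversation" prompt):

    def compact(messages, summarize, keep_last=4):
        # Replace old history with an LLM-written summary, keeping the
        # most recent turns verbatim so the model retains fine detail.
        head, tail = messages[:-keep_last], messages[-keep_last:]
        if not head:
            return messages  # nothing old enough to compact yet
        summary = summarize(head)  # hypothetical: just another LLM call
        return [{"role": "system",
                 "content": f"Summary of earlier conversation: {summary}"}] + tail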

That’s the whole magic of something like Claude Code. The rest is details.

TeMPOraL 13 hours ago||||
I'd say the core is that the harness/runtime/${whatever you call it} doesn't just unconditionally send model output to the user, and user input to the model, +/- some post-processing, but instead runs a loop that feeds the output back to the model if some conditions are met. That gives you basic "thinking" and single "function calling" a la early ChatGPT. However, if you allow it to loop an arbitrary number of times and allow the output to decide whether to loop or to stop, you get a basic agent.
zmmmmm 14 hours ago|||
Agent is currently defined as "what I want it to mean given whatever I am talking about".

Personally, for me it embodies a level of autonomy. I define that as an AI model with the potential to interact with something external to itself based on its output, where that includes its own future behavior.

Koffiepoeder 15 hours ago||
Slightly related: I am looking for:

- Easy single command CLI agent spawning with templates

- Automatic context transfer (i.e. a bit like git worktrees)

- Fully containerised, but remote (a bit like pods)

- Central MITM-proxy zero-trust authn/authz management (no keys or credentials inside the agents); credential enrichment happens in the hypervisor/encapsulation layer instead

- Multi agent follow-up functionalities

- Fully self hosted/FOSS

Basically a very dev-friendly, secure, "kubernetes"-like solution for running remote agents.

Anyone have an idea of how to achieve this, or potential technologies?

nvader 13 hours ago|
Yeah, have you tried `mngr` by Imbue? It seems to have a bunch of the features you're looking for.

https://github.com/imbue-ai/mngr

sudb 14 hours ago||
Is secretly rerouting reads/writes/edits of skills and memory any easier than just dumping the actual skills and memory files on disk at sandbox startup?

Another benefit of moving the harness outside the sandbox is that you get to avoid accidentally creating a massive distributed system, and you therefore don't have to think so much about events/communication between your main API and your sandboxes.

avipeltz 9 hours ago||
At least for me, one of the major reasons to run an agent in a sandbox is to save memory on my machine if I am running multiple agents in parallel. Wouldn't this not help with that?
pamcake 11 hours ago||
The agent harness needs different sandbox(es) with different privileges. Nothing here justifies leaving its access uncontained. It's a mistake to think and talk about "the sandbox" the way the article does.
solidasparagus 15 hours ago||
Why are two concurrent sessions updating the same memory key with different values? IMO it probably points to a fundamental flaw in how memory is being thought about and built.
aluzzardi 15 hours ago|
Author here. Because of parallelism and non-determinism.

This problem is quite common and not limited to memories. For instance, Claude Code will block write attempts and steer the agent to perform a read first (because the file might have been modified in the meantime by the user or another agent).

Same principle here: rather than trying to deterministically "merge" concurrent writes, you fail the last write and let the agent read again and try another write.
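
In other words, optimistic concurrency control over memory keys. A minimal sketch of the idea (a hypothetical store, not the article's actual implementation):

    import threading

    class ConflictError(Exception):
        pass

    class MemoryStore:
        # Versioned key-value memory: a write must cite the version it read.
        # A stale write fails, steering the agent to read again and retry,
        # rather than the store trying to merge concurrent edits itself.
        def __init__(self):
            self._data = {}  # key -> (version, value)
            self._lock = threading.Lock()

        def read(self, key):
            return self._data.get(key, (0, None))  # (version, value)

        def write(self, key, value, expected_version):
            with self._lock:
                current, _ = self._data.get(key, (0, None))
                if current != expected_version:
                    raise ConflictError(f"{key} is at v{current}; read and retry")
                self._data[key] = (current + 1, value)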

Retr0id 16 hours ago||
It took me a while to grok why this made any sense; I think the context is that this is for hosting many agents as a service.
qezz 15 hours ago|
Exactly, my understanding is also that they host agents as a service. The actual use case is only mentioned at the end of the article, which makes it hard to reason about until you get there.

Anyway. General advice: treat harnesses like any other (third-party) software that you run on your server. Modern harnesses (the ones from big companies that you need to subscribe to) are black boxes. Would you run a random binary you fetched from the internet on your server? Claude Code, Codex, etc. are exactly this.

shad42 15 hours ago||
We don't host 3rd-party agents (I don't know if this is what you implied). We built an agent that monitors CI pipelines, test failures, and performance, and auto-opens PRs to address the issues we find. We host our agent loop on a backend (it's in Go), and we call into the sandbox when we run operations involving the user code.
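
The shape of that boundary, as a sketch (the real loop is in Go; the Python, endpoint, and payload here are purely illustrative): the harness and agent loop stay on the backend, and only tool executions that touch user code cross into the sandbox as a plain RPC.

    import requests

    SANDBOX_URL = "http://sandbox.internal:8080"  # hypothetical exec endpoint

    def run_in_sandbox(command, timeout=60):
        # Only the command crosses the boundary; prompts, memory, and
        # credentials stay on the backend with the harness.
        resp = requests.post(
            f"{SANDBOX_URL}/exec",
            json={"cmd": command, "timeout": timeout},
            timeout=timeout + 5,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"stdout": ..., "stderr": ..., "exit_code": ...}
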
moron4hire 7 hours ago||
The idea of physically separating the agent harness from the artifacts you want it to work on seems to be the wrong answer to the right problem, and a symptom of a worrying pattern in all the agentic workflow work I see in the wild: being lazy about how you develop the harness and the tools it has available. Specifically, there seems to be a desire to build agent tools in an incredibly naive and simplistic way that ignores access control.

My company is not a "sophisticated" software development organization. We're 3000 people where 2000 do nothing with AI, 900 are naively jumping on every AI bandwagon that comes along (or rather, parroting whatever they read on the Internet to complain about how hard their job is, despite it not having materially changed in the last 15 years), 90 are capable of implementing anything involving AI beyond "just use Claude," and 10 have the experience to really scrutinize what is going on. And our work is of a nature that scrutinizing the exact process of how results were produced, and what data they came from, is essential. There are regulations and compliance issues that could land people in prison if we don't and the results are eventually proved to involve inappropriate data. What does that mean? I'll just say we primarily work for DoD.

I have very long experience with managers asking for the moon and not listening when the engineering staff raises red flags. They ask us, "why can't we just do X?", where X is whatever they've recently read about in some MBA-targeted publication that was bought and paid for by the service provider profiting off of X, with no skin in the game regarding the nuance, because the relevant regulations are written to scapegoat the person in the chair bashing the keys and not the person making the decisions and hanging the former person's job over their head. The "why can't we just do X" is not an in-good-faith question; it's a statement that you need to shut up about your concerns and "just do X."

But out of desperation/malicious compliance, I've started developing an agentic harness that can "just get AI to do it" for the data sources on which we work. And I've noticed two things: A) agent harnesses are not that hard to write (honestly, anyone with basic programming competency should be able to do it), and B) they can only ever work on what you give them. I suppose the last point should be obvious, but I've had enough conversations with folks who expect magic that it is clear that it is not actually obvious.

And that's where I get into "extant agent work is lazy." The agent harness I've developed is incapable of accessing data its operator should not have. If you are cleared to only see a subset of the universe of data, then running this harness cannot possibly give you access to more than your clearance. I'm not trying to brag here, because this was not a difficult guarantee to make. In developing the harness and giving it tools to do work, I just developed the same access controls I would have done for a human user accessing an API to the same data. The only thing that is different about my approach is that I didn't use an off-the-shelf harness with tools developed by others. I just wasn't lazy about my job.

My key stakeholder was skeptical that I was able to do this, mostly because he has subconsciously intuited that our organization is not very sophisticated in developing software. He doesn't understand that employing AI isn't magic, and I think that is the case for a lot of the people who use AI the most here. They see products like Claude go to work and think there is some kind of special sauce there that requires development by a "frontier" AI firm to actualize. But the truth is, the more you develop agentic AI capability, the less AI you are actually employing. The AI eventually becomes just an orchestrator of tools that perform work by not-AI means. If you are lazy, you lean on naive tool implementations that let the AI do whatever "it wants." And that's where you get into trouble. But if you show up to your job and are diligent about implementing those tools, there is no possible way the AI can screw you over, because you never gave it unrestricted access to curl or `rm -rf`.

This is why, even if AI does become a permanent fixture of software development (still not convinced, even after all this experience), you're still not getting rid of us software engineers, despite how much you hate us. You still need us to protect your data, and nothing about AI has changed the equation that ends in "data is king." If anything, it's more important than ever.

Edit: I'm specifically developing a multi-user agent, accessed via a Web application over a shared database. Row-level access control is baked into every tool and I can do this with little effort because dependency injection Is A Thing. Thus, the parameters of access control never even reach the AI.
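
A sketch of what that looks like (the names and data-access layer are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class UserContext:
        user_id: str
        clearance: frozenset  # row labels this operator may see

    def make_search_tool(db, ctx: UserContext):
        # The tool closes over the operator's context, so row-level access
        # control is applied before the model ever sees a result. The model
        # never receives (and so can never tamper with) the access parameters.
        def search(query: str) -> list:
            rows = db.search(query)  # hypothetical data-access layer
            return [r for r in rows if r.label in ctx.clearance]
        return search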
