Top
Best
New

Posted by fs_software 2 days ago

OpenClaw is a security nightmare dressed up as a daydream(composio.dev)
391 points | 284 commentspage 3
lxgr 1 day ago|
What annoys me most about OpenClaw after trying it for a few weeks is that it cosplays security so incredibly hard, it actually regularly breaks my (very basic) setup via introducing yet another vibe coded, poorly conceptualized authentication/authorization/permission layer, and at the same time does absolutely nothing to convince me that any of this is actually protecting me of anything.

Maybe this idea is lost on 10^x vibecoders, but complexity almost always comes at a cost to security, so just throwing more "security mechanisms" onto a hot vibe-coded mess do not somehow magically make the project secure.

falense 1 day ago||
Agreed! Made my own OpenClaw variant based on many of the same principles. It takes Simon Willies lethal trifecta and implements it to an OpenClaw like architecture.

https://www.tri-onyx.com/

taurath 1 day ago||
I love how despite all this, the author still uses the language:

> We’re simply not there yet to let the agents run loose

As if there aren’t fundamental properties that would need to change to ever become secure.

lxgr 1 day ago|
Personally, if I could run capable-enough inference on hardware I control, and could rely on the harness asking me for mechanistic confirmation before the agent can take consequential actions, I'd do it immediately.
taurath 20 hours ago||
Consequential actions like searching the web or downloading packages or dependencies or doing most anything useful?
lxgr 7 hours ago||
No, these are all fine for me (my agent is sandboxed in a container, so it can install all the node modules or Debian packages it wants).

I was thinking more of sending outgoing emails, publishing anything on the web, spending my money etc.

rickdg 2 days ago||
Related: https://news.ycombinator.com/item?id=47475997
mandeepj 1 day ago||
Have you tried NemoClaw? Not endorsing it; just enquiring. Nvidia is claiming to provide guardrails with that.
latand6 1 day ago||
One thing I'd like to critisize - although I can agree that skill security is a real problem, but the solution is not to restrict yourself from using them, but to rely on the community: reviews, likes/dislikes, maybe having the skills curated. We need some trust signals. Also, since markdown files are auditable by design - your agent might actually verify them before running - provided you're using something like GPT-5.4 on high reasoning.
chewbacha 2 days ago||
This read like an AI generated piece and seems to be an advertisement for their product.
koconder 1 day ago||
Should have said this was a fear to promote a b2b sass "TrustClaw"
feeworth 1 day ago||
Didn't Nvidia create the safer version basically?
perbu 1 day ago|
The problem is that the the LLM can't distinguish between data and instruction so there is just so much the harness can do.
Yizahi 1 day ago|
Every LLM evangelist seems to forget that there is a reason why LLMs work so well for coding. It's because there were and are preexisting non-LLM validation tools for coding. The slop doesn't make it past linters, compilers, cone analysis and other tools, and then there is a second barrier in the form of code review. And even will these guardrails LLMs often produce substandard output.

Buying a ticket, writing an email, setting calendars or fiddling with files on the drive etc. have none of these guardrails. LLMs can and will simply oneshot the slop into a real system, without neither computer nor human validation.

More comments...