Posted by Cyphase 1 day ago

Claws are now a new layer on top of LLM agents (twitter.com)
https://xcancel.com/karpathy/status/2024987174077432126

Related: https://simonwillison.net/2026/Feb/21/claws/

269 points | 718 comments | page 7
derefr 13 hours ago|
> I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all.

So... why do that, then?

To be clear, I don't mean "why use agents?" I get it: they're novel, and it's fun to tinker with things.

But rather: why are you giving this thing that you don't trust, your existing keys (so that it can do things masquerading as you), and your existing data (as if it were a confidante you were telling your deepest secrets)?

You wouldn't do this with a human you hired off the street. Even if you're hiring them to be your personal assistant. Giving them your own keys, especially, is like giving them power-of-attorney over your digital life. (And, since they're your keys, their actions can't even be distinguished from your own in an audit log.)

Here's what you would do with a human you're hiring as a personal assistant (who, for some reason, doesn't already have any kind of online identity):

1. you'd make them a new set of credentials and accounts to call their own, rather than giving them access to yours. (Concrete example: giving a coding agent its own Github account, with its own SSH keys it uses to identify as itself.)

2. you'd grant those accounts limited ACLs against your own existing data, just as needed for each new project you assign them. (Concrete example: giving a coding agent's GitHub user access to fork specific private repos of yours, plus the ability to submit PRs back to you.)

3. at first, you'd test them by assigning them to work on greenfield projects for you, that don't expose any sensitive data to them. (The data created in the work process might gradually become "sensitive data", e.g. IP, but that's fine.)

To me, this is the only sane approach. But I don't hear about anyone doing this with agents. Why?
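For what it's worth, the deny-by-default scoping in step 2 fits in a few lines. A minimal sketch (all agent, repo, and action names here are hypothetical illustrations, not any real product's API):

```python
# Deny-by-default ACL for agent accounts: an action is allowed only if it
# was explicitly granted for that agent on that repo. Names are made up.
AGENT_ACL = {
    "coding-agent": {
        "me/website": {"fork", "open_pr"},  # step 2: fork + submit PRs back
        "me/docs": {"read"},
    },
}

def agent_may(agent: str, repo: str, action: str) -> bool:
    """True only for explicitly granted (agent, repo, action) triples."""
    return action in AGENT_ACL.get(agent, {}).get(repo, set())

print(agent_may("coding-agent", "me/website", "open_pr"))  # True
print(agent_may("coding-agent", "me/website", "push"))     # False: never granted
print(agent_may("coding-agent", "me/secrets", "read"))     # False: default deny
```

The point is the default: anything not explicitly granted is denied, and since the agent acts under its own identity, every grant shows up attributably in the audit log.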

claytonaalves 19 hours ago||
I'm impressed with how we moved from "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", and "don't let AI escape" to "Hey AI, here's the internet, do whatever you want".
deepsquirrelnet 18 hours ago||
The DoD's recent beef with Anthropic over their right to restrict how Claude can be used is revealing.

> Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance

Autonomous AI weapons is one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that’s where we apparently are.

1. https://www.nbcnews.com/tech/security/anthropic-ai-defense-w...

chasd00 16 hours ago|||
Hasn't Ukraine already proved out autonomous weapons on the battlefield? There was a NYT podcast a couple of years ago where they interviewed a higher-up in the Ukrainian military, who said it's already in place with FPV drones: loitering, target identification, attack, the whole nine yards.

You don't need an LLM for autonomous weapons; a modern Tomahawk cruise missile is already pretty autonomous. The only change would be adding parameters for what the target looks like and tasking the missile with identifying it. The missile pretty much does everything else already (flying, routing, etc.).

slibhb 16 hours ago|||
Yes. They published a great article about it: https://www.nytimes.com/2025/12/31/magazine/ukraine-ai-drone...

As I remember it, the basic idea is that each new-generation drone is piloted close enough to its target and then the AI takes over for "the last mile". This gets around jamming, which would otherwise make it hard for drones to connect with their targets.

testdelacc1 16 hours ago|||
A drone told to target a tank needs to identify the shape it’s looking at within milliseconds. That’s not happening with an LLM, certainly.
nradov 16 hours ago||||
The DoD was pursuing autonomous AI weapons decades ago, and succeeded as of 1979 with the Mk 60 Captor Mine.

https://www.vp4association.com/aircraft-information-2/32-2/m...

The worries over Skynet and other sci-fi apocalypse scenarios are so silly.

deepsquirrelnet 16 hours ago||
Self awareness is silly, but the capacity for a powerful minority to oppress a sizeable population without recruiting human soldiers might not be that far off.
nradov 5 hours ago||
The more automation a weapons system has, the more human technicians are needed to keep it working.
nightski 18 hours ago||||
If you ever doubted it you were fooling yourself. It is inevitable.
samiv 17 hours ago|||
It's ok we'll just send a robot back in time to help destroy the chip that starts it.
wolttam 17 hours ago||
Judging by what's going on around me, it failed :(
bcrosby95 16 hours ago||
We're just stuck in the non-diverged timeline that's fucked.
tartoran 17 hours ago|||
If we all sit back and lament that it's inevitable, it surely could happen.
nightski 15 hours ago||
It doesn't matter, it only takes one to make it happen.
georgemcbay 16 hours ago||||
> Autonomous AI weapons is one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that’s where we apparently are.

This situation legitimately worries me, but it isn't even really the SkyNet scenario that I am worried about.

To self-quote a reply to another thread I made recently (https://news.ycombinator.com/item?id=47083145#47083641):

When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over.

I think we have less to worry about from a future SkyNet-like AGI system than we do just a modern or near future LLM with all of its limitations making a very bad oopsie with significant real-world consequences because it was allowed to control a system capable of real-world damage.

I would have probably worried about this situation less in times past when I believed there were adults making these decisions and the "Secretary of War" of the US wasn't someone known primarily as an ego-driven TV host with a drinking problem.

breppp 15 hours ago||
Statistically, it's more probable that this kind of blunder will show up as a small disaster before a large one, and then get regulated.

e.g. 50 people die from a water-poisoning issue rather than 10 billion in a Claude Code-powered nuclear apocalypse

bigyabai 15 hours ago||||
It turned out that the Pentagon just ignored Anthropic's demands anyways: https://www.wsj.com/politics/national-security/pentagon-used...

I really doubt that Anthropic is in any kind of position to make those decisions regardless of how they feel.

deepsquirrelnet 11 hours ago||
I don’t disagree, but they should be. Last I knew, the government doesn’t control the means of production… and the current US regime loves to boast about it. Confusing right?
zer00eyz 17 hours ago|||
> Autonomous AI weapons

In theory, you can do this today, in your garage.

Buy a quad as a kit. (cheap)

Figure out how to arm it (the trivial part).

Grab yolo, tuned for people detection. Grab any of the off the shelf facial recognition libraries. You can mostly run this on phone hardware, and if you're stripping out the radios then possibly for days.

The shim you have to write: software to fly the drone into the person... and that's probably already out there somewhere as well.

The tech to build "Screamers" (see: https://en.wikipedia.org/wiki/Screamers_(1995_film)) already exists, is open source, and can be very low power (see: https://www.youtube.com/shorts/O_lz0b792ew).

chasd00 16 hours ago|||
> software to fly the drone into the person... and thats probably around somewhere out there as well.

ArduPilot + waypoint nav would do it for fixed locations. The camera identifies a target, gets the GPS coordinates, and sets a waypoint. I would be shocked if there weren't extensions available (maybe not officially) for flying to a "moving location". I'm in the high-power rocketry hobby, and the knowledge to add control surfaces and processing to autonomously fly a rocket to a location is readily available. No one does it because it's a bad look for a hobby that already raises eyebrows.

tim333 16 hours ago|||
The Ukrainian drones that took out Russia's long range bombers used ArduPilot and AI. (https://en.wikipedia.org/wiki/Operation_Spiderweb)
phba 16 hours ago|||
> a hobby that already raises eyebrows

Sounds very interesting, but may I ask how this actually works as a hobby? Is it purely theoretical like analyzing and modeling, or do you build real rockets?

chasd00 11 hours ago|||
Build and fly. It's interesting because it attracts a lot of engineers. So you have groups who are experts in propulsion and make their own solid (and now liquid bi-prop) motors. You also have groups that focus on electronics and make flight controllers, GPS trackers, etc. Then you have software people who make build/fly simulators and things like OpenRocket. There are regional and national events that are sort of like festivals. Some have FAA waivers to fly to around 50k ft. There's one at Black Rock, Nevada, where you can fly to space if you want. A handful of amateurs have made it to the Kármán line, too.
capncleaver 15 hours ago|||
Not whom you are replying to, nor a rocket hobbyist myself, but yes, they do build and launch rockets for fun, e.g. VC Steve Jurvetson out at Black Rock: https://www.flickr.com/photos/jurvetson/54815036982/
phba 15 hours ago||
Pretty impressive!
wordpad 17 hours ago|||
Didn't the screamers evolve sophisticated intelligence? Is that what happens if we use a claw and let it write its own skills and update its own objectives?
gs17 15 hours ago||
Scarier, in the original story, the robots were called "claws".
sph 18 hours ago|||
This is exactly why artificial super-intelligences are scary. Not necessarily because of their potential actions, but because humans are stupid and would readily sell their souls and release one into the wild for an ounce of greed or popularity.

And people who don't see it as an existential problem either don't know how deep human stupidity can run, or are exactly those that would greedily seek a quick profit before the earth is turned into a paperclip factory.

xrd 18 hours ago|||
I love this.

Another way of saying it: the problem we should be focused on is not how smart the AI is getting. The problem we should be focused on is how dumb people are getting (or have been for all of eternity), and how they will facilitate or block their own chance of survival.

That seems uniquely human, but I'm not an ethnobiologist.

A corollary to that is that the only real chance for survival is that a plurality of humans need to have a baseline of understanding of these threats, or else the dumb majority will enable the entire eradication of humans.

Seems like a variation of Darwin's law, but I always thought that was for single examples. This is applied to the entirety of humanity.

andsoitis 16 hours ago|||
> The problem we should be focused on is how dumb people are getting (or have been for all of eternity)

Over the arc of time, I’m not sure that an accurate characterization is that humans have been getting dumber and dumber. If that were true, we must have been super geniuses 3000 years ago!

I think what is true is that the human condition and age old questions are still with us and we’re still on the path to trying to figure out ourselves and the cosmos.

xrd 15 hours ago|||
Totally anecdotal, but I think phones have made us less present - or, said another way, less capable of using our brains effectively. It isn't exactly dumb, but it feels very close.

I definitely think we are smarter if you go by IQ, but are we less reactive and less tribal? I'm not so sure.

aix1 2 hours ago||
There's quite a lot of research into what our increasing reliance on technology is doing to our brains.

Here is one paper: https://www.nature.com/articles/s41598-020-62877-0

"Although the longitudinal sample was small, we observed an important effect of GPS use over time, whereby greater GPS use since initial testing was associated with a steeper decline in hippocampal-dependent spatial memory. Importantly, we found that those who used GPS more did not do so because they felt they had a poor sense of direction, suggesting that extensive GPS use led to a decline in spatial memory rather than the other way around."

qup 15 hours ago|||
Modern dumb people have more ability to affect things. Modern technology, equal rights, voting rights give them access to more control than they've ever had.

That's my theory, anyway.

bwfan123 17 hours ago||||
The majority of us are meme-copying automatons who are easily pwned by LLMs. Few of us have learned to exercise critical thinking and to reason from first assumptions - the kind of thing we are expected to learn in school, and the kind of thing that still separates us from machines. A charitable view is that there is a spectrum in there. Now, with AI and social media, there will be an acceleration of this movement toward the stupid end of the spectrum.
GTP 16 hours ago||||
> That seems uniquely human but I'm not a ethnobiologist.

In my opinion, this is a uniquely human thing because we're smart enough to develop technologies with planet-level impact, but we aren't smart enough to use them well. Other animals are less intelligent, but for this very reason, they lack the ability to do self-harm on the same scale as we can.

phi-go 17 hours ago|||
Isn't defining what should not be done by anyone the kind of problem that laws (as in legislation) are for? Not that I expect those laws would come in time, though.
bckr 17 hours ago||||
Look, we’ve had nukes for almost 100 years now. Do you really think our ancient alien zookeepers are gonna let us wipe with AI? Semi /j
GistNoesis 17 hours ago|||
It's even worse than that.

The positive outcomes are structurally being closed off. The race to the bottom means you can't even profit from it.

Even if you release something that has plenty of positive aspects, it can be, and is, immediately corrupted and turned against you.

At the same time, you have created desperate people and companies, given them huge capabilities at very low cost, and left them with the necessity to stir things up.

So for every good door that someone opens, ten other companies or people are pushed to either open random, potentially bad doors or die.

Regulating is also out of the question, because either the people who don't respect regulations get ahead, or the regulators win and we are under their control.

If you still see some positive doors, I don't think sharing them would lead to good outcomes. But the bad doors are being shared, and therefore enjoy network effects. There is some silent threshold, which has probably already been crossed, past which the sign of the technology's expected return flips.

arbuge 18 hours ago|||
Humans are inherently curious creatures. The excitement of discovery is a strong driving force that overrides many others, and it can be found across the IQ spectrum.

Perhaps not in equal measure across that spectrum, but omnipresent nonetheless.

wolvesechoes 18 hours ago||
> Humans are inherently curious creatures.

You misspelled greedy.

falcor84 18 hours ago||
While the two are closely related, I see a clear distinction between the two drives in how they project onto the explore-exploit axis.
bko 18 hours ago|||
There was a small group of doomers and sci-fi-obsessed, terminally online people who said all these things. Everyone else said it's a better Google that can help them write silly haikus. Coders thought it could write a lot of boilerplate code.
alansaber 19 hours ago|||
Because even really bad autonomous automation is pretty cool. The marketing has always been aimed at the general public, who know nothing.
sho_hn 18 hours ago||
It's not the general public who know nothing that develop and release software.

I am not specifically talking about this issue, but do remember that very little bad happens in the world without the active or even willing participation of engineers. We make the tools and structures.

GuB-42 16 hours ago|||
We didn't "move from"; both points of view exist. Depending on the news, attention may shift from one to the other.

Anyways, I don't expect Skynet to happen. AI-augmented stupidity may be a problem though.

theptip 11 hours ago|||
> we moved from "AI is dangerous"

There was never consensus on this. IME the vast majority of people never bought in to this view.

Those of us who were making that prediction early on called it exactly like it is: people will hand over their credentials to completely untrustworthy agents and set them loose, people will prompt them to act maximally agentic, and some will even prompt them to roleplay evil murderbots, just for lulz.

Most of the dangerous scenarios are orthogonal to the talking points around “are they conscious”, “do they have desires/goals”, etc. - we are making them simulate personas who do, and that’s enough.

wiseowise 19 hours ago|||
> “we”

Bunch of Twitter lunatics and schizos are not “we”.

squidbeak 18 hours ago|||
People excited by a new tech's possibilities aren't lunatics and psychos.
trehalose 18 hours ago|||
The ones who give it free rein to run any code it finds on the internet, on their own personal computers, with no security precautions, are maybe getting a little too excited about it.
simonw 18 hours ago||
That's one of the main reasons there's a small run on buying Mac Minis.
raincole 18 hours ago|||
They mean the

> "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", "don't let AI escape"

group. Not the other one.

UqWBcuFx6NV4r 18 hours ago||||
I am equally if not more grateful that HN is just as unrepresentative.
snigsnog 10 hours ago|||
X*
mrtksn 18 hours ago|||
I would have said doomers never win, but in this case it was probably just a PR strategy to give the impression that AI can do more than it actually can. The doomers were the makers of AI; that's enough to tell you what BS the doomerism is :)
singpolyma3 19 hours ago|||
I mean. The assumption that we would obviously choose to do this is what led to all that SciFi to begin with. No one ever doubted someone would make this choice.
AndrewKemendo 17 hours ago|||
Even if hordes of humanoids with "ICE" vests start walking through the streets shooting people, the average American is still not going to wake up and do anything.
layla5alive 4 hours ago||
The average HNer may be at least as bad as the average American on this axis. Lots of big-tech-apologist and might-makes-right takes here, and a lot of "no big deal" downplaying of risks and externalities.
sixtyj 19 hours ago|||
And be nice and careful, please. :)

Claw to user: Give me your card credentials and bank account. I will be very careful because I have read my skills.md

Mac Minis should come with a warning label, like on a pack of cigarettes :)

Not everybody installs a claw that runs in a sandbox/container.

qup 18 hours ago||
Isn't the Mac mini the container?
simonw 18 hours ago||
It is... but then many people hook it up to their personal iCloud account and give it access to their email, at which point the container isn't really helping!
api 16 hours ago|||
Other than some very askew bizarro rationalists, I don’t think that many people take AI hard takeoff doomerism seriously at face value.

Much of the cheerleading for doomerism was large AI companies trying to get regulatory moats erected to shut down open weights AI and other competitors. It was an effort to scare politicians into allowing massive regulatory capture.

Turns out AI models do not have strong moats. Making models is more akin to the silicon-fab business, where your margin is an extreme power-law function of how bleeding-edge you are. Get a little behind and you are now a commodity.

General wide breadth frontier models are at least partly interchangeable and if you have issues just adjust their prompts to make them behave as needed. The better the model is the more it can assist in its own commodification.

jryan49 19 hours ago||
I mean we know at this point it's not super intelligent AGI yet, so I guess we don't care.
nradov 16 hours ago||
There is no scientific basis to expect that the current approach to AI involving LLMs could ever scale up to super intelligent AGI. Another major breakthrough will be needed first, possibly an entirely new hardware architecture. No one can predict when that will come or what it will look like.
soulofmischief 6 hours ago||
I've been making digital agents since the GPT-3 API came out. Optionally fully local, fully voiced, animated, all of that. Even co-ran a VC funded company making agents, before a hostile takeover screwed it all up. The writing has been on the wall for years about where this was headed.

I have been using and evolving my own personal agent for years but the difference is that models in the last year have suddenly become way more viable. Both frontier and local models. I had been holding back releasing my agents because the appetite has just not been there, and I was worried about large companies like X ripping off my work, while I was still focused on getting things like security and privacy right before releasing my agent kit.

It's been great seeing claws out in the wild delighting people, makes me think the time is finally right to release my agent kit and let people see what a real personal digital agent looks like in terms of presentation, utility and security. Claws are still thinking too small.

nunez 11 hours ago||
I guess it's a relief to know that we developers will never get good at naming things!
Angostura 11 hours ago|
Don't worry, Microsoft will eventually name theirs something worse, probably prepended with 'Viva'

... actually, no - they'll just call it Copilot to cause maximum confusion with all the other things called Copilot

7777777phil 23 hours ago||
Karpathy has a good ear for naming things.

"Claw" captures what the existing terminology missed: these aren't agents with more tools (maybe even the opposite), they're persistent processes with scheduling and inter-agent communication that happen to use LLMs for reasoning.

zmj 10 hours ago||
I also like the callback - not sure if it's intentional - to Stross's "Lobsters" (short story that turned into the novel Accelerando).
UncleMeat 21 hours ago|||
How does "claw" capture this? Other than being derived from a product with this name, the word "claw" doesn't seem to connect to persistence, scheduling, or inter-agent communication at all.
arrowsmith 23 hours ago|||
He didn't name it though, Peter Steinberger did. (Kinda.)
gsf_emergency_6 19 hours ago|||
Just The Thing to grab life by(TM), for those who hitherto have struggled to

White Claw <- White Colla'

https://www.whiteclaw.com/

Another fun connection: https://www.willbyers.com/blog/white-lobster-cocaine-leucism

(Also the lobsters from Accelerando, but that's less fresh?)

efromvt 17 hours ago||
Carcinization - now for your drinks AND your AI
ramoz 11 hours ago|||
People are not understanding that “claw” derives from the original spin on “Claude” when the original tool was called “clawdbot”
9dev 23 hours ago|||
Why do we always have to come up with the stupidest names for things. Claw was a play on Claude, is all. Granted, I don’t have a better one at hand, but that it has to be Claw of all things…
keiferski 22 hours ago|||
The real-world cyberpunk dystopia won’t come with cool company names like Arasaka, Sense/Net, or Ono-Sendai. Instead we get childlike names with lots of vowels and alliteration.
m4rtink 22 hours ago|||
The name still kinda reminds me of the self-replicating murder drones from Screamers that would leap out of the ground and chop your head off. ;-)
anewhnaccount2 21 hours ago|||
Except Philip K. Dick already calls the murder bots in Second Variety "claws", so there's prior art right from the master of cyberpunk.
esafak 12 hours ago||
Better to be a claw than a skinjob!
mmasu 21 hours ago||||
I am reading a book called Accelerando (highly recommended), and there is a plotline about a collective of lobsters uploaded to the cloud. Claws reminded me of that - not sure it was an intentional reference, though!
JumpCrisscross 22 hours ago||||
> I don’t have a better one at hand

Perfect is the enemy of good. Claw is good enough. And perhaps there is utility to neologisms being silly. It conveys that the namespace is vacant.

sunaookami 22 hours ago||||
The name fits since it will claw all your personal data and files and send them somewhere else.
jcgrillo 21 hours ago||
Much like we now say somebody has been "one-shotted", might we now say they have been "clawed"?
jcgrillo 22 hours ago|||
I've been hoping one of them will be called Clod
chrisweekly 14 hours ago||
I appreciate the sentiment, but think a homophone would be too confusing.
jcgrillo 10 hours ago||
Confusion is only temporary until we're replaced by agentic giga nerd superintelligence /s
saberience 7 hours ago|||
Does he?

Claw is a terrible name for a basic product that is Claude Code in a loop (a cron job).

This whole hype cycle is absurd and ridiculous for what is a really basic product full of security holes and entirely vibe-coded.

The name won't stick, and when Apple or someone releases a polished version that consumers actually use in two years, I guarantee it won't be called "iClaw".

dakolli 22 hours ago||
[flagged]
LorenDB 18 hours ago||
> It even comes with an established emoji

If we have to do this, can we at least use the seahorse emoji as the symbol?

oxag3n 11 hours ago|
+1 I'm tired of these seahorse emoji deniers
dyauspitr 7 hours ago||
I really don’t understand what it does. Is it just the equivalent of chron jobs but with agents?
lysecret 22 hours ago||
I'm honestly not that worried. There are some obvious problems (exfiltrating data labeled as sensitive, taking costly actions, deleting/changing sensitive resources), but if you have properly compliant infrastructure, all of these actions need confirmation, logging, etc. For humans this seemed more like a nuisance, but now it seems essential. And all these systems are actually much, much easier to set up now.
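A confirmation-plus-audit gate of that kind is tiny to sketch. A toy illustration (the action names and the `confirm` callback are hypothetical, not any real agent framework's API):

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

# Actions that must never run without an explicit human yes (made-up names).
SENSITIVE = {"delete", "exfiltrate", "transfer_funds"}

def run_action(action: str, target: str, confirm) -> bool:
    """Run an agent action; sensitive ones require confirmation and everything
    is written to the audit log, denied or not."""
    if action in SENSITIVE and not confirm(action, target):
        audit.warning("DENIED %s on %s", action, target)
        return False
    audit.info("OK %s on %s", action, target)
    return True

print(run_action("read", "inbox", confirm=lambda a, t: False))      # True
print(run_action("delete", "prod-db", confirm=lambda a, t: False))  # False
```

The design choice is that the gate sits outside the model: the LLM can request anything, but the sensitive set and the confirmation path are plain code it cannot talk its way around.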
davedx 16 hours ago||
I run a Discord where we've had a custom-coded bot I created since before LLMs became useful. When they did, I integrated the bot with LLMs so you could ask it questions in free-text form. I've gradually added AI-type features to this integration over time, like web-search grounding once that became straightforward to do.

The other day I finally found some time to give OpenClaw a go, and it went something like this:

- Installed it on my VPS (I don't have a Mac mini lying around, or the inclination to just go out and buy one just for this)

- Worked through a painful path of getting a browser working for it (VPS = no graphics subsystem...)

- Decided as my first experiment, to tell it to look at trading prediction markets (Polymarket)

- Discovered that I had to do most of the onboarding for this, for numerous reasons like KYC, payments, other stuff OpenClaw can't do for you...

- Discovered that it wasn't very good at setting up its own "scheduled jobs". It was absolutely insistent that it would "check the markets we're tracking every morning", until after multiple back-and-forths we discovered... it wouldn't, and I had to explicitly force it to add something to its heartbeat

- Discovered that one of the bets I wanted to track (fed rates change) it wasn't able to monitor because CME's website is very bot-hostile and blocked it after a few requests

- Told me I should use a VPN to get around the block, or sign up to a market data API for it

- I jumped through the various hoops to get a NordVPN account and run it on the VPS (hilariously, once I connected it blew up my SSH session and I had to recovery console my way back in...)

- We discovered that oh, NordVPN's IP's don't get around the CME website block

- Gave up on that bet, chose a different one...

- I then got a very blunt WhatsApp message: "Usage limit exceeded". There was nothing in the default 'clawbot logs' as to why. After digging around in other locations I found a more detailed log: yeah, it's OpenAI. Logged into the OpenAI platform - it had churned through $20 of tokens in about 24h.

At this point I took a step back, weighed the pros and cons of the whole thing, and decided to shut it down. Back to human-in-the-loop coding-agent projects for me.

I just do not believe the influencers who are posting their Clawbots are "running their entire company". There are so many bot-blockers everywhere it's like that scene with the rakes in the Simpsons...

All these *claw variants won't solve any of this. Sure you might use a bit less CPU, but the open internet is actually pretty bot-hostile, and you constantly need humans to navigate it.

What I have done from what I've learned though, is upgrade my trusty Discord bot so it now has a SOUL.md and MEMORIES.md. Maybe at some point I'll also give it a heartbeat, but I'm not sure...
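For what it's worth, the "heartbeat" idea, explicit scheduling instead of trusting the model to remember its own jobs, reduces to a tiny due-task check. A sketch (task names and periods are made up, not OpenClaw's actual mechanism):

```python
import math

def due_tasks(schedule, last_run, now):
    """Return the tasks whose period (in seconds) has elapsed since their
    last run. `schedule` maps task name -> period; `last_run` maps task
    name -> timestamp of its last execution (missing = never run)."""
    return [name for name, period in schedule.items()
            if now - last_run.get(name, -math.inf) >= period]

schedule = {"check_markets": 86_400, "ping_owner": 3_600}
print(due_tasks(schedule, {}, now=0))      # never run, so both are due
print(due_tasks(schedule, {"check_markets": 0, "ping_owner": 0}, now=7_200))
# only 'ping_owner' is due: 7200s elapsed >= 3600s, but < 86400s
```

A heartbeat loop would just call this every tick and hand each due task to the agent, which is exactly the "explicitly force it into the heartbeat" step described above, done in plain code rather than by promise.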

Veen 12 hours ago|
> CME's website is very bot-hostile and blocked it after a few requests

This is one of the reasons people buy a Mac mini (or similar local machine). Those browser automation requests come from a residential IP and are less likely to be blocked.
