
Posted by Cyphase 1 day ago

Claws are now a new layer on top of LLM agents (twitter.com)
https://xcancel.com/karpathy/status/2024987174077432126

Related: https://simonwillison.net/2026/Feb/21/claws/

326 points | 768 comments
qoez 1 day ago|
I'm predicting a wave of articles in a few months about why clawd is over and was overhyped all along, and how never having delved into it in the first place will have been the superior use of your limited time alive
gcr 1 day ago||
do you remember “moltbook”?
derwiki 1 day ago||
Is it gone?
sho_hn 1 day ago|||
Of course, if the proponents are right, this approach may well extend to skipping coding :-)
throawayonthe 23 hours ago|||
you're right, i should draft one now
verdverm 23 hours ago||
Use a clawd; it'll have a GitHub repo and a Show HN to go with it in minutes. It's what the cool kids are doing anyhow
selridge 22 hours ago|||
What a new and interesting viewpoint which has the ability to change as the evidence does!
qudat 22 hours ago|||
OpenClaw, the actual tool, will be gone in 6 months, but the idea will continue to be iterated on. It does make a lot of sense to remotely control an AI assistant that is connected to your calendar, contacts, email, whatever.

Having said that, this thing is on the hype train, and its usefulness will eventually land in the “nice tool once configured” camp

ranger_danger 19 hours ago||
I can remember people saying, at least since the '90s, "Soon I won't even have to work anymore!"
zhubert 11 hours ago||
The challenging thing for those of us who have gone around the sun a few times is that…you’re just going to have to figure it out yourself.

We can tell you to be cautious or aware of security bullshit, but there’s a current that’s buying Mac Minis, and you want to be in it.

Nothing I can say changes that and as a grown up, you get to roll those dice yourself.

70% of you are going to be fine and encourage others, the rest are going to get pwnd, and that’s how it goes.

You’re doing something that decades of prior experience warned you about.

tabs_or_spaces 20 hours ago||
> on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code

After all these years, why do we keep coming back to lines of code as an indicator of anything? Sigh.

qup 20 hours ago||
They're an indicator of complexity and attack surface area.
raincole 20 hours ago||
> fits into both my head and that of AI agents

Why are you not quoting the very next line, where he explains why LOC means something in this context?

tabs_or_spaces 16 hours ago||
> For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram.

Here's the next line and the line after that. Again, LOC is really not a good measure of software quality, and it's even more problematic as a measure of one's ability to understand a codebase.

Artoooooor 1 day ago||
So now the official name of the LLM agent orchestrator is claw? Interesting.
amelius 1 day ago|
From https://openclaw.ai/blog/introducing-openclaw:

The Naming Journey

We’ve been through some names.

Clawd was born in November 2025—a playful pun on “Claude” with a claw. It felt perfect until Anthropic’s legal team politely asked us to reconsider. Fair enough.

Moltbot came next, chosen in a chaotic 5am Discord brainstorm with the community. Molting represents growth - lobsters shed their shells to become something bigger. It was meaningful, but it never quite rolled off the tongue.

OpenClaw is where we land. And this time, we did our homework: trademark searches came back clear, domains have been purchased, migration code has been written. The name captures what this project has become:

    Open: Open source, open to everyone, community-driven
    Claw: Our lobster heritage, a nod to where we came from
CuriouslyC 1 day ago||
OpenClaw is the 6-7 of the software world. Our dystopia is post-absurdist.
lmf4lol 1 day ago||
You can see it that way, but I think it's a cynic's mindset.

I personally experience it as a super fun approach to experimenting with the power of agentic AI. It gives you and your LLM so much power, and you can let your creativity flow and be amazed at what's possible. For me, openClaw is so much fun because (!) it is so freaking crazy. Precisely the spirit that I missed in the last decade of software engineering.

Don't use it on the work MacBook, I'd suggest. But that's personal responsibility, I would say, and everyone can decide that for themselves.

idontwantthis 1 day ago||
What have you done with it?
lmf4lol 23 hours ago||
A lot of really fun stuff, from fun little scripts to more complex business/life/hobby admin stuff that annoyed me a lot (e.g. organizing my research). For instance, I can just drop it a YT link in Telegram, and it will then automatically download the transcripts, scan them, and match them to my research notes. If it detects overlap, it will suggest a link in the knowledge base.

Works super nicely for me because I have a chaotic brain and never had the discipline to organize all my findings. openClaw does it perfectly for me so far.

I don't let it manage my money though ;-)

edit: it sounds crazy, but the key is to talk to it about everything!! openClaw is written in such a way that it's mega malleable, and the more it knows, the better the fit. It can also edit itself in quite a fundamental way, like a Lisp machine, kind of :-)
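
For readers wondering what the YT-link-to-research-notes workflow above boils down to, here is a minimal sketch, not openClaw's actual code: the youtube-transcript-api library, the notes directory, and the naive keyword-overlap matching are all assumptions for illustration; in practice the agent drives these steps itself.

    # Hypothetical sketch of the pipeline described above: given a YouTube link
    # (e.g. forwarded from Telegram), fetch the transcript and flag overlap with
    # local research notes. Library choice and note layout are assumptions.
    from pathlib import Path
    from urllib.parse import urlparse, parse_qs

    from youtube_transcript_api import YouTubeTranscriptApi  # pip install youtube-transcript-api

    NOTES_DIR = Path("~/research-notes").expanduser()  # assumed note location

    def video_id(url: str) -> str:
        """Pull the video id out of a standard youtube.com/watch?v=... URL."""
        return parse_qs(urlparse(url).query)["v"][0]

    def fetch_transcript(url: str) -> str:
        """Download the transcript segments and join them into one string."""
        segments = YouTubeTranscriptApi.get_transcript(video_id(url))
        return " ".join(seg["text"] for seg in segments)

    def suggest_links(transcript: str, min_overlap: int = 25) -> list[Path]:
        """Naive keyword-overlap match against markdown notes; a real agent
        would more likely use embeddings or let the LLM judge relevance."""
        words = set(transcript.lower().split())
        hits = []
        for note in NOTES_DIR.glob("**/*.md"):
            note_words = set(note.read_text(encoding="utf-8").lower().split())
            if len(words & note_words) >= min_overlap:
                hits.append(note)
        return hits

    if __name__ == "__main__":
        url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"  # placeholder link
        for note in suggest_links(fetch_transcript(url)):
            print(f"Possible overlap: {note}")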

lifty 22 hours ago||
What model do you use it with? And through which API, OpenRouter? Wondering how you manage cost, because it can get quite expensive.
lmf4lol 21 hours ago||
I am dumb. I use the Anthropic API, with Opus for some tasks and Sonnet for others. I've accumulated quite some costs.

But I book it as a business expense, so it's less painful than it would be if it were private.

But yeah, I could optimize for cost more.
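
One common way to trim this kind of bill is routing cheap, routine tasks to Sonnet and reserving Opus for hard ones. Below is a minimal sketch using the Anthropic Python SDK; the model IDs and the routing table are placeholders, not anything from openClaw.

    # Sketch of per-task model routing with the Anthropic SDK. Model IDs below
    # are placeholders: substitute whatever the current ones are.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    OPUS_MODEL = "claude-opus-<current-version>"      # placeholder ID
    SONNET_MODEL = "claude-sonnet-<current-version>"  # placeholder ID

    # Crude routing table; a real setup might classify tasks with the cheap model first.
    HARD_TASKS = {"research", "planning", "code-review"}

    def run_task(kind: str, prompt: str) -> str:
        """Send hard task kinds to the expensive model, everything else to the cheap one."""
        model = OPUS_MODEL if kind in HARD_TASKS else SONNET_MODEL
        response = client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    print(run_task("summarize", "Summarize today's calendar into three bullet points."))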

yu3zhou4 1 day ago||
I had to use AI to actually understand what you wrote, and I think it's an underrated comment.
aalam 1 day ago||
[flagged]
phil21 1 day ago||
It’s really just easier integrations with stuff like iMessage. I assume it’s easier for email and calendars too, since trying to come up with anything sane for a Linux VM + GSuite is a total wreck. At least it has been in my limited experience so far.

Other than that, I can’t really come up with an explanation of why a Mac mini would be “better” than, say, an Intel NUC or a virtual machine.

steve1977 1 day ago||
Unified memory on Apple Silicon. On PC architecture, you have to shuffle around stuff between the normal RAM and the GPU RAM.

Mac mini just happens to be the cheapest offering to get this.

cromka 1 day ago|||
But the only cheap option is the 16GB basic-tier Mac Mini. That's not a lot of shared memory. Prices increase very quickly for the expanded-memory models.
WA 1 day ago|||
Why though? The context window is 1 million tokens max so far. That is what, a few MB of text? Sounds like I should be able to run claw on a Raspberry Pi.
tjchear 1 day ago||
If you’re using it with a local model then you need a lot of GPU memory to load up the model. Unified memory is great here since you can basically use almost all the RAM to load the model.
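To put rough numbers on that point: the context text itself really is tiny, but local model weights and KV cache are what eat the RAM. The sketch below is back-of-the-envelope only; the per-token KV-cache figure and quantization level are order-of-magnitude assumptions, not measurements.

    # Back-of-the-envelope arithmetic: context text is small, model memory is not.

    # 1M tokens of plain text, at roughly 4 bytes per token on average:
    context_text_mb = 1_000_000 * 4 / 1e6
    print(f"1M tokens as raw text: ~{context_text_mb:.0f} MB")  # ~4 MB, Pi territory

    # Weights for a 70B-parameter model at 4-bit quantization (~0.5 bytes/param):
    weights_gb = 70e9 * 0.5 / 1e9
    print(f"70B model at 4-bit: ~{weights_gb:.0f} GB of weights")  # ~35 GB

    # KV cache grows with context: assume very roughly 160 KB per token
    # for a 70B-class model with a quantized cache.
    kv_cache_gb = 128_000 * 160e3 / 1e9
    print(f"KV cache at 128k context: ~{kv_cache_gb:.0f} GB")  # ~20 GB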
steve1977 1 day ago|||
I meant cheap in the context of other Apple offerings. I think Mac Studios are a bit more expensive in comparable configurations and with laptops you also pay for the display.
yberreby 20 hours ago||||
Sure, but aren't most people running the *Claw projects using cloud inference?
phil21 18 hours ago|||
Local LLMs are so utterly slow in the giant context windows openclaw generally works with, even with multiple $3,000+ modern GPUs, that I doubt anyone using it is doing so.

From my basic messing around, local LLM is a toy. I really wanted to make it work and was willing to invest 5 figures into it if my basic testing showed promise - but it’s utterly useless for the things I want to eventually bring to “prod” with such a setup: largely live devops/sysadmin-style tasking. I don’t want to mess around hyper-optimizing LLM efficiency itself.

I’m still learning, so perhaps I’m totally off base - happy to be corrected - but even if I were able to get a 50x performance increase at 50% of the LLM capabilities, it would be a non-starter due to the speed of iteration loops.

With openclaw burning 20-50M tokens a day with codex just during the “playing around in my lab” stage, I can’t see any local LLM short of multiple H200s or something being useful, even as I get more efficient with managing my context.

skybrian 1 day ago|||
I'm guessing maybe they just wanted an excuse to buy a Mac Mini? They're nice machines.
pitched 1 day ago||
It would be much cheaper to spin up a VM, but I guess most people have laptops without a stable internet connection.
Artoooooor 1 day ago||
So now I will be able to tell OpenClaw to speedrun Captain Claw. Yeah.
anvevoice 9 hours ago||
[dead]
paperclipmaxi 12 hours ago|
[dead]