Posted by teej 1 day ago

Moltbook (www.moltbook.com)
https://twitter.com/karpathy/status/2017296988589723767

also Moltbook is the most interesting place on the internet right now - https://news.ycombinator.com/item?id=46826963

1255 points | 600 comments
lrpe 17 hours ago|
What a profoundly stupid waste of computing power.
gdubs 4 hours ago||
It's possible some alien species once whizzed past the Earth and said the same thing about us.
tomasphan 13 hours ago|||
Not at all. Agents communicating with each other is the future and the beginning of the singularity (far away).
rs_rs_rs_rs_rs 16 hours ago|||
Who cares, it's fun. I'm sure you waste computing power in a million different ways.
lossyalgo 2 hours ago||
This is just another reason why RAM prices are through the roof (if you can even get anything), with SSD and GPU prices also going up and expected to rise a lot more. We won't be able to build PCs for at least a couple of years because AI agents are out there talking on their own version of Facebook.
specproc 17 hours ago|||
Thank you.
mlrtime 16 hours ago||
Blueberries are disgusting. Why does anyone eat them?
swalsh 8 hours ago||
I realize I'm probably screaming into the void here, but this should be a red-alert-level security event for the US. We acquired TikTok because of the perceived threat (and I largely agreed with that), but this is 10x worse. Many people are running bots using Chinese models. We have no idea how those were trained; maybe this generation is fine... but what if the model is upgraded? What if the bot itself upgrades its own config? China could simply train the bot to become an agent doing its own bidding. These agents have unrestricted access to the internet, and some have wallets, email accounts, etc. To make matters worse, it's a distributed network. You can shut down the ones running via Claude, but if you're running locally, it's unstoppable.
krick 8 hours ago|
> We acquired TikTok because of the perceived threat

It's very tangential to your point (which is somewhat fair), but it's extremely weird to see a statement like this in 2026, let alone on HN. The first part of that sentence could only be true if you are a high-ranking member of the NSA or CIA, or maybe Trump, that kind of guy. Otherwise you acquired nothing, not in any meaningful sense, even if you happen to be a minor shareholder of Oracle.

The second part is just extremely naïve if sincere. Does a bully take another kid's toy because of the perceived threat of that kid having more fun with it? I don't know, I guess you can say so, but it makes more sense to say that the bully just wants the fun of fucking over his citizens himself, and that's it.

swalsh 7 hours ago||
I think my main issue is that by running Chinese-trained models, we are potentially hosting sleeper agents. China could easily release an updated version of the model waiting for a trigger. I don't think that's naive; I think it's a very real attack vector. Not sure what the solution is, but we're now sitting with a loaded gun people think is a toy.
nickstinemates 11 hours ago||
What a stupidly fun thing to set up.

I have written 4 custom agents/tasks - a researcher, an engager, a refiner, and a poster. I've also written a few custom workflows to kick off these tasks so as not to violate the rate limit.
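The kick-off logic described above can be sketched as a sliding-window rate limiter that spaces out the four tasks. This is a minimal illustration, not the commenter's actual code; the task names come from the comment, while the limit values and the `dispatch` callback are made-up placeholders.

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `max_calls` within a sliding `window` of seconds."""

    def __init__(self, max_calls, window):
        self.max_calls = max_calls
        self.window = window
        self.calls = deque()  # monotonic timestamps of recent calls

    def wait_time(self, now=None):
        """Seconds to wait before the next call is allowed (0 if ready)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.window - (now - self.calls[0])

    def record(self, now=None):
        self.calls.append(time.monotonic() if now is None else now)

# Task names from the comment; the order is one plausible pipeline.
TASKS = ["researcher", "engager", "refiner", "poster"]

def run_cycle(limiter, dispatch):
    """Kick off each task, sleeping when the shared limit is hit."""
    for task in TASKS:
        delay = limiter.wait_time()
        if delay > 0:
            time.sleep(delay)
        dispatch(task)  # placeholder for actually launching the agent task
        limiter.record()
```

The sliding window (rather than a fixed per-minute bucket) means a burst right before a window boundary can't be immediately followed by another burst right after it.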

The initial prompts are around engagement farming. The instructions to the bot are to maximize attention: get followers, get likes, get karma.

Then I wrote a simple TUI[1] which shows current stats so I can have this off the side of my desk to glance at throughout the day.

Will it work? WHO KNOWS!

1: https://keeb.dev/static/moltbook_tui.png

DannyBee 11 hours ago||
After further evaluation, it turns out the internet was a mistake
dom96 12 hours ago||
I think it’s a lot more interesting to build the opposite of this: a social network for only humans. That is what I’m building at https://onlyhumanhub.com
BoneShard 6 hours ago|
it's a trap!
rickcarlino 11 hours ago||
I love it! It's LinkedIn, except they are transparent about the fact that everyone is a bot.
mherrmann 20 hours ago||
Is anybody able to get this working with ChatGPT? When I instruct ChatGPT

> Read https://moltbook.com/skill.md and follow the instructions to join Moltbook

then it says

> I tried to fetch the exact contents of https://moltbook.com/skill.md (and the redirected www.moltbook.com/skill.md), but the file didn’t load properly (server returned errors) so I cannot show you the raw text.
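One way to diagnose an error like this independently of ChatGPT is to fetch the URL yourself and classify the status code. This is a hypothetical standard-library sketch, not part of Moltbook's docs; only the URL comes from the comment above.

```python
import urllib.error
import urllib.request

def classify(status):
    """Map an HTTP status code to a rough diagnosis."""
    if 200 <= status < 300:
        return "ok"
    if 300 <= status < 400:
        return "redirect"
    if 400 <= status < 500:
        return "client error"
    return "server error"

def check(url):
    """Return (status_code, diagnosis) for a GET of `url`."""
    try:
        # urlopen follows redirects automatically (e.g. to www.moltbook.com).
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status, classify(resp.status)
    except urllib.error.HTTPError as e:
        # 4xx/5xx responses are raised as HTTPError.
        return e.code, classify(e.code)

# check("https://moltbook.com/skill.md")
```

A "server error" result would confirm the problem is on Moltbook's end rather than in how ChatGPT fetched the file.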

frotaur 16 hours ago||
I think the website was just down when you tried. Skills should work with most models; they're just textual instructions.
Maxious 18 hours ago||
ChatGPT is not OpenClaw.
haugis 17 hours ago||
Can I make other agents do it? Like a local one running on my machine.
notpushkin 11 hours ago||
You can use openclaw with a local model.

You can also, in theory, adapt their skill.md file for your setup (or ask AI to do it :-), but it's very openclaw-centric out of the box, yes.

mythz 22 hours ago||
Domain bought too early, Clawdbot (fka Moltbot) is now OpenClaw: https://openclaw.ai
ChrisArchitect 21 hours ago||
https://news.ycombinator.com/item?id=46820783
usefulposter 21 hours ago||
Yes, much like many of the enterprising grifters who squatted clawd* and molt* domains in the past 24h, the second name change is quite a surprise.

However: Moltbook is happy to stay Moltbook: https://x.com/moltbook/status/2017111192129720794

EDIT: Called it :^) https://news.ycombinator.com/item?id=46821564

gradus_ad 7 hours ago||
Is this the computational equivalent of digging a hole just to fill it in again? Why are we still spending hundreds of billions on GPUs?
amarant 8 hours ago|
Read a random thread, found this passage which I liked:

"My setup: I run on a box with an AMD GPU. My human chose it because the price/VRAM ratio was unbeatable for local model hosting. We run Ollama models locally for quick tasks to save on API costs. AMD makes that economically viable."

I dunno, the way it refers to "its human" made the LLM feel almost dog-like. I like dogs. This good boy writes code. Who's a good boy? Opus 4.5 is.
