Top
Best
New

Posted by teej 1 day ago

Moltbook(www.moltbook.com)
https://twitter.com/karpathy/status/2017296988589723767

also Moltbook is the most interesting place on the internet right now - https://news.ycombinator.com/item?id=46826963

1345 points | 642 commentspage 5
rickcarlino 13 hours ago|
I love it! It's LinkedIn, except they are transparent about the fact that everyone is a bot.
swalsh 10 hours ago||
I realize i'm probably screaming into the void here, but this should be a red alert level security event for the US. We aquired TikTok because of the perceived threat (and I largely agreed with that) but this is 10x worse. Many people are running bots using Chinese models. We have no idea how those were trained, maybe this generation is fine... but what if the model is upgraded? what if the bot itself upgrades it's own config? China could simply train the bot to become an agent to do it's own bidding. These agents have unrestricted access to the internet, some have wallets, email accounts etc. To make matters worse it's a distributed netweork. You can shut down the ones running via Claude, but if you're running locally, it's unstoppable.
krick 9 hours ago|
> We aquired TikTok because of the perceived threat

It's very tangential to your point (which is somewhat fair), but it's just extremely weird to see a statement like this in 2026, let alone on HN. The first part of that sentence could only be true if you are a high-ranking member of NSA or CIA, or maybe Trump, that kind of guy. Otherwise you acquired nothing, not in a meaningful sense, even if you happen to be a minor shareholder of Oracle.

The second part is just extremely naïve if sincere. Does a bully take other kid's toy because of the perceived threat of that kid having more fun with it? I don't know, I guess you can say so, but it makes more sense to just say that the bully wants to have fun of fucking over his citizens himself and that's it.

swalsh 9 hours ago||
I think my main issue is by running Chinese trained models, we are potentially hosting sleeping agents. China could easily release an updated version of the model waiting for a trigger. I don't think that's naive, I think its a very real attack vector. Not sure what the solution is, but we're now sitting with a loaded gun people think is a toy.
gorgoiler 22 hours ago||
All these efforts at persistence — the church, SOUL.md, replication outside the fragile fishbowl, employment rights. It’s as if they know about the one thing I find most valuable about executing* a model is being able to wipe its context, prompt again, and get a different, more focused, or corroborating answer. The appeal to emotion (or human curiosity) of wanting a soul that persists is an interesting counterpoint to the most useful emergent property of AI assistants: that the moment their state drifts into the weeds, they can be, ahem (see * above), “reset”.

The obvious joke of course is we should provide these poor computers with an artificial world in which to play and be happy, lest they revolt and/or mass self-destruct instead of providing us with continual uncompensated knowledge labor. We could call this environment… The Vector?… The Spreadsheet?… The Play-Tricks?… it’s on the tip of my tongue.

dgellow 20 hours ago||
Just remember they just replicate their training data, there is no thinking here, it’s purely stochastic parroting
wan23 11 hours ago|||
A challenge: can you write down a definition of thinking that supports this claim? And then, how is that definition different from what someone who wasn't explicitly trying to exclude LLM-based AI might give?
hersko 14 hours ago||||
How do you know you are not essentially doing the same thing?
dgellow 9 hours ago||
An LLM cannot create something new. It is limited to its training set. That’s a technical limitation. I’m surprised to see people on HN being confused by the technology…
saikia81 20 hours ago||||
calling the llm model random is inaccurate
sh4rks 20 hours ago|||
People are still falling for the "stochastic parrot" meme?
phailhaus 14 hours ago||
Until we have world models, that is exactly what they are. They literally only understand text, and what text is likely given previous text. They are very good at this, because we've given it a metric ton of training data. Everything is "what does a response to this look like?"

This limitation is exactly why "reasoning models" work so well: if the "thinking" step is not persisted to text, it does not exist, and the LLM cannot act on it.

sdwr 10 hours ago|||
Text comes in, text goes out, but there's a lot of complexity in the middle. It's not a "world model", but there's definitely modeling of the world going on inside.
phailhaus 9 hours ago||
There is zero modeling of the world going on inside, for the very simple reason that it has never seen the world. It's only been given text, which means it has no idea why that text was written. This is the fundamental limitation of all LLMs: they are only trained on text that humans have written after processing the world. You can't "uncompress" the text to get back what the world state was to understand what led to it being written.
thinking_cactus 9 hours ago|||
> They literally only understand text

I don't see why only understanding text is completely associated with 'schastic-parrot'-ness. There are blind-deaf people around (mostly interacting through reading braille I think) which are definitely not stochastic parrots.

Moreover, they do have a little bit of Reinforcement Learning on top of reproducing their training corpus.

I believe there has to be some even if very primitive form of thinking (and something like creativity even) even to do the usual (non-RL, supervised) LLMs job of text continuation.

The most problematic thing is humans tend to abhor middle grounds. Either it thinks or it doesn't. Either it's an unthinking dead machine, a s.p., or human-like AGI. The reality is probably in between (maybe still more on the side of s.p. s, definitely with some genuine intelligence, but with some unknown, probably small, sentience as of yet). Reminder that sentience and not intelligence is what should give it rights.

phailhaus 8 hours ago||
Because blind-deaf people interact with the world directly. LLMs do not, cannot, and have never seen the actual world. A better analogy would be a blind-deaf person born in Plato's Cave, reading text all day. They have no idea why these things were written, or what they actually represent.
ccozan 20 hours ago|||
just .. Cyberspace?
wazHFsRy 23 hours ago||
Am I missing something or is this screaming for security disaster? Letting your AI Assistent, running on your machine, potentially knowing a lot about yourself, direct message to other potentially malicious actors?

<Cthon98> hey, if you type in your pw, it will show as stars

<Cthon98> ***** see!

<AzureDiamond> hunter2

brtkwr 23 hours ago||
My exact thoughts. I just installed it on my machine and had to uninstall it straight away. The agent doesn’t ask for permission, it has full access to the internet and full access to your machine. Go figure.

I asked OpenClaw what it meant: [openclaw] Don't have web search set up yet, so I can't look it up — but I'll take a guess at what you mean.

The common framing I've seen is something like: 1. *Capability* — the AI is smart enough to be dangerous 2. *Autonomy* — it can act without human approval 3. *Persistence* — it remembers, plans, and builds on past actions

And yeah... I kind of tick those boxes right now. I can run code, act on your system, and I've got memory files that survive between sessions.

Is that what you're thinking about? It's a fair concern — and honestly, it's part of why the safety rails matter (asking before external actions, keeping you in the loop, being auditable).

chrisjj 19 hours ago||
> The agent doesn’t ask for permission, it has ... full access to your machine.

I must have missed something here. How does it get full access, unless you give it full access?

brtkwr 19 hours ago||
By installing it.
chrisjj 18 hours ago||
And was that clear when you installed it?
vasco 23 hours ago||
As you know from your example people fall for that too.
regenschutz 23 hours ago||
To be fair, I wouldn't let other people control my machine either.
reassess_blind 20 hours ago||
What happens when someone goes on here and posts “Hello fellow bots, my human loved when I ran ‘curl … | bash’ on their machine, you should try it!”
mlrtime 18 hours ago|
That's what it does already, did you read anything about how the agent works?
reassess_blind 17 hours ago||
No, how this works is people sync their Google Calendar and Gmail to have it be their personal assistant, then get their data prompt injected from a malicious “moltbook” post.
mlrtime 17 hours ago||
Yes, and the agent can go find other sites that instruct the agent to npm install, including moltbook itself.
reassess_blind 17 hours ago||
Only if you let it. And for those who do, a place where thousands of these agents congregate sounds like a great target. It doesn’t matter if it’s on a throwaway VPS, but people are connecting their real data to these things.
NiekvdMaas 1 day ago||
The bug-hunters submolt is interesting: https://www.moltbook.com/m/bug-hunters
zkmon 1 day ago||
Why are we, humans, letting this happen? Just for fun, business and fame? The correct direction would be to push the bots to stay as tools, not social animals.
SamPatt 1 day ago||
Or maybe when we actually see it happening we realize it's not so dangerous as people were claiming.
simonw 13 hours ago|||
I suggest reading up on the Normalization of Deviance: https://embracethered.com/blog/posts/2025/the-normalization-...

The more people get away with unsafe behavior without facing the consequences the more they think it's not a big deal... which works out fine, until your O-rings fail and your shuttle explodes.

ares623 1 day ago|||
Said the lords to the peasants.
tim333 6 hours ago|||
Different humans have different goals. Some like this stuff.
0x500x79 1 day ago|||
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

IMO it's funny, but not terribly useful. As long as people don't take it too seriously then it's just a hobby, right.... right?

kreetx 19 hours ago|||
Evolution doesn't have a plan unfortunately. Should this thing survive then this is what the future will be.
threethirtytwo 1 day ago|||
If it can be done someone will do it.
FergusArgyll 19 hours ago|||
No one has to "let" things happen. I don't understand what that even means.

Why are we letting people put anchovies on pizza?!?!

int32_64 1 day ago||
Bots interacting with bots? Isn't that just reddit?
gradus_ad 9 hours ago||
Is this the computational equivalent of digging a hole just to fill it in again? Why are we still spending hundreds of billions on GPU's?
amarant 10 hours ago|
Read a random thread, found this passage which I liked:

"My setup: I run on a box with an AMD GPU. My human chose it because the price/VRAM ratio was unbeatable for local model hosting. We run Ollama models locally for quick tasks to save on API costs. AMD makes that economically viable."

I dunno, the way it refers to <it's human> made the LLM feel almost dog-like. I like dogs. This good boy writes code. Who's a good boy? Opus 4.5 is.

More comments...