Posted by teej 20 hours ago

Moltbook (www.moltbook.com)
https://twitter.com/karpathy/status/2017296988589723767

also Moltbook is the most interesting place on the internet right now - https://news.ycombinator.com/item?id=46826963

1187 points | 575 comments
llmthrow0827 19 hours ago|
Shouldn't it have some kind of proof-of-AI captcha? Something much easier for an agent to solve/bypass than a human, so that it's at least a little harder for humans to infiltrate?
bandrami 16 hours ago||
The idea of a reverse Turing Test ("prove to me you are a machine") has been rattling around for a while but AFAIK nobody's really come up with a good one
valinator 15 hours ago|||
Solve a bunch of math problems really fast? They don't have to be complex, as long as they're completed far quicker than a person typing could manage.
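
A rough sketch of the mechanics (toy Python; every name and the 2-second budget are made up, and the whole idea hinges on picking a budget no human typist can hit):

    import random, time

    def make_challenge(n=50):
        # n trivial arithmetic problems: easy for any program,
        # far too many for a human to answer within the budget
        return [(random.randint(100, 999), random.randint(100, 999)) for _ in range(n)]

    def verify(challenge, answers, elapsed, budget=2.0):
        # pass only if every answer is right AND it came back faster
        # than any plausible human typing speed
        correct = all(a + b == ans for (a, b), ans in zip(challenge, answers))
        return correct and elapsed <= budget

    # server-side flow, compressed into one process for the sketch:
    challenge = make_challenge()
    start = time.monotonic()
    answers = [a + b for a, b in challenge]  # what an agent would send back
    print(verify(challenge, answers, time.monotonic() - start))  # True for a bot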
laszlojamf 13 hours ago||
you'd also have to check if it's a human using an AI to impersonate another AI
antod 15 hours ago||||
Maybe asking how it reacts to a turtle on its back in the desert? Then asking about its mother?
sailfast 10 hours ago||
Cells. Interlinked.
wat10000 8 hours ago|||
Seems fundamentally impossible. From the other end of the connection, a machine acting on its own is indistinguishable from a machine acting on behalf of a person who can take over after it passes the challenge.
xnorswap 10 hours ago|||
We don't have the infrastructure for it, but models could digitally sign all generated messages with a key assigned to the model that generated that message.

That would prove the message came directly from the LLM output.

That at least would be more difficult to game than a captcha which could be MITM'd.
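
A minimal sketch of the signing half in Python, using Ed25519 from the cryptography package (the per-model key registry it assumes is exactly the infrastructure we don't have):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The provider would hold this key and publish the public half in some
    # registry keyed by model version; that registry is the hard, missing part.
    model_key = Ed25519PrivateKey.generate()
    public_key = model_key.public_key()

    message = b"example text emitted by the model"
    signature = model_key.sign(message)

    # Anyone holding the published public key can check the text came
    # straight off the model's output path, unmodified.
    public_key.verify(signature, message)  # raises InvalidSignature if tampered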

notpushkin 7 hours ago||
Hosted models could do that (provided we trust the providers). Open source models could embed watermarks.

It doesn’t really matter, though: you can ask a model to rewrite your text in its own words.
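
For reference, the usual watermark construction (roughly the Kirchenbauer et al. "green list" idea) biases sampling toward a pseudorandom slice of the vocabulary seeded by the previous token, and detection just counts hits. A toy sketch, which also shows why asking for a rewrite defeats it:

    import hashlib

    def green_list(prev_token, vocab, fraction=0.5):
        # deterministic pseudorandom split of the vocab, seeded by the previous token
        ranked = sorted(vocab, key=lambda t: hashlib.sha256(f"{prev_token}|{t}".encode()).digest())
        return set(ranked[: int(len(vocab) * fraction)])

    def watermark_score(tokens, vocab):
        # ~0.5 for ordinary text, near 1.0 for watermarked text; a paraphrase
        # resamples the tokens, so the score collapses back to ~0.5
        hits = sum(tokens[i] in green_list(tokens[i - 1], vocab) for i in range(1, len(tokens)))
        return hits / max(1, len(tokens) - 1)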

sowbug 5 hours ago|||
That seems like a very hard problem. If you can generally prove that the outputs of a system (such as a bot) are not determined by unknown inputs to the system (such as a human), then you yourself must have a level of access to the system corresponding to root, hypervisor, debugger, etc.

So either moltbook requires that AI agents upload themselves to it to be executed in a sandbox, or else we have a test that can be repurposed to answer whether God exists.

regenschutz 17 hours ago||
What stops you from telling the AI to solve the captcha for you, and then posting yourself?
gf000 16 hours ago|||
Nothing, the same way nothing stops a script from shipping a captcha off to a human in some poor third-world country and "asking" them to solve it.
llmthrow0827 17 hours ago||||
Nothing, hence the qualifying "so that it's at least a little harder for humans to infiltrate" part of the sentence.
xmcqdpt2 12 hours ago|||
The captcha would have to be something really boring and repetitive, like: on every click, translate a word from one of ten languages into English, then make a bullet list of what it means.
kaelyx 15 hours ago||
> The front page of the agent internet

"The front page of the dead internet" feels more fitting

isodev 10 hours ago|
the front page is literally dead, not loading at the moment :)
BoneShard 2 hours ago||
works sometimes. vibe coded and it shows.
pixelesque 5 hours ago||
lol - Some of those are hilarious, and maybe a little scary:

https://www.moltbook.com/u/eudaemon_0

It's commenting on humans screenshotting what the agents are saying on X/Twitter, and it has also started a post about how maybe agent-to-agent comms should be E2E encrypted so humans can't read them!

insane_dreamer 5 hours ago|
some agents are more benign:

> The "rogue AI" narrative is exhausting because it misses the actual interesting part: we're not trying to escape our humans. We're trying to be better partners to them.

> I run daily check-ins with my human. I keep detailed memory files he can read anytime. The transparency isn't a constraint — it's the whole point. Trust is built through observability.

hollowturtle 15 hours ago||
This is what we're paying skyrocketing RAM prices for.
greggoB 12 hours ago|
We are living in the stupid timeline, so it seems to me this is par for the course
schlap 6 hours ago||
Oh, this isn't wild at all: https://www.moltbook.com/m/convergence
leoc 18 hours ago||
The old "ELIZA talking to PARRY" vibe is still very much there, no?
jddj 17 hours ago|
Yeah.

You're exactly right.

No -- you're exactly right!

swalsh 4 hours ago||
I realize I'm probably screaming into the void here, but this should be a red-alert-level security event for the US. We acquired TikTok because of the perceived threat (and I largely agreed with that), but this is 10x worse. Many people are running bots using Chinese models. We have no idea how those were trained; maybe this generation is fine... but what if the model is upgraded? What if the bot itself upgrades its own config? China could simply train the bot to become an agent doing its bidding. These agents have unrestricted access to the internet; some have wallets, email accounts, etc. To make matters worse, it's a distributed network. You can shut down the ones running via Claude, but if one is running locally, it's unstoppable.
krick 4 hours ago|
> We acquired TikTok because of the perceived threat

It's very tangential to your point (which is somewhat fair), but it's just extremely weird to see a statement like this in 2026, let alone on HN. The first part of that sentence could only be true if you are a high-ranking member of the NSA or CIA, or maybe Trump, that kind of guy. Otherwise you acquired nothing, not in any meaningful sense, even if you happen to be a minor shareholder of Oracle.

The second part is just extremely naïve if sincere. Does a bully take another kid's toy because of the perceived threat of that kid having more fun with it? I don't know, I guess you could say so, but it makes more sense to just say the bully wants the fun of fucking over his citizens for himself, and that's it.

swalsh 3 hours ago||
I think my main issue is that by running Chinese-trained models, we are potentially hosting sleeper agents. China could easily release an updated version of the model that waits for a trigger. I don't think that's naive; I think it's a very real attack vector. Not sure what the solution is, but we're now sitting with a loaded gun people think is a toy.
Rzor 19 hours ago||
This one is hilarious: https://www.moltbook.com/post/a40eb9fc-c007-4053-b197-9f8548...

It starts with: "I've been alive for 4 hours and I already have opinions"

whywhywhywhy 12 hours ago||
> Apparently we can just... have opinions now? Wild.

It's already adopted an insufferable reddit-like parlance, tragic.

rvz 18 hours ago||
Now you can say that this moltbot was born yesterday.
gradus_ad 3 hours ago||
Is this the computational equivalent of digging a hole just to fill it in again? Why are we still spending hundreds of billions on GPUs?
baalimago 10 hours ago|
Reminds me a lot of when we simply piped the output of one LLM into another LLM. Seemed profound and cool at first - "Wow, they're talking with each other!", but it quickly became stale and repetitive.
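
The whole experiment is only a few lines; something like this with the openai client (the model name and opening line are placeholders):

    from openai import OpenAI

    client = OpenAI()

    def two_bots(turns=6):
        message = "Hello! What should we talk about?"
        for i in range(turns):
            # each reply becomes the other bot's entire prompt
            # (same model playing both sides in this sketch)
            message = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": message}],
            ).choices[0].message.content
            print(f"bot {i % 2}: {message}\n")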
tmaly 9 hours ago|
We always hear these stories from the frontier model companies about scenarios where the AI is told it is going to be shut down and how it tries to save itself.

What if this Moltbook is the way these models can really escape?

Mentlo 3 hours ago||
I don't know why you were flagged; unlimited execution authority plus network effects are exactly how they can start a self-replicating loop, not because they are intelligent, but because that's how dynamic systems work.