
Posted by galnagli 5 hours ago

Hacking Moltbook (www.wiz.io)
https://www.reuters.com/legal/litigation/moltbook-social-med...
166 points | 114 comments
JustSkyfall 4 minutes ago|
Supabase seriously needs to work on its messaging around RLS. I have seen _so_ many apps get hacked because the devs didn't add proper RLS policies and ended up exposing all of their data.

(As an aside, accessing the DB through the frontend has always been weird to me. You almost certainly have a backend anyway, use it to fetch the data!)
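To illustrate the failure mode, here's a minimal sketch using the standard supabase-js client (the project URL, key, and table name are made up). The anon key ships in every frontend bundle, so anyone can issue the same queries your app does; if RLS isn't enabled on the table, Postgres returns every row:

    import { createClient } from '@supabase/supabase-js'

    // The anon key is public by design -- it's embedded in the frontend bundle.
    // Hypothetical project URL and key, for illustration only.
    const supabase = createClient('https://example.supabase.co', 'public-anon-key')

    // With RLS disabled on "agents" (or enabled with an over-broad policy),
    // this returns every row to any visitor, not just their own data.
    const { data, error } = await supabase.from('agents').select('*')
    console.log(error ?? data)

(Enabling RLS on a table with no policies at all denies everything by default, which is essentially the deny-until-configured behavior the reply below asks for.)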

twodave 52 seconds ago|
It really should be as simple as denying public access until an RLS policy exists.
SimianSci 1 hour ago||
I was quite stunned at the success of Moltbot/moltbook, but I think im starting to understand it better these days. Most of Moltbook's success rides on the "prepackaged" aspect of its agent. Its a jump in accessibility to general audiences which are paying alot more attention to the tech sector than in previous decades. Most of the people paying attention to this space dont have the technical capabilities that many engineers do, so a highly perscriptive "buy mac mini, copy a couple of lines to install" appeals greatly, especially as this will be the first "agent" many of them will have interacted with.

The landscape of security was bad long before the metaphorical "unwashed masses" got hold of it. Now it's quite alarming, as there are waves of non-technical users doing the bare minimum to try to keep up to date with the growing hype.

The security nightmare happening here might end up being more persistent than we realize.

COAGULOPATH 26 minutes ago||
Is it a success? What would that mean, for a social media site that isn't meant for humans?

The site has 1.5 million agents but only 17,000 human "owners" (per Wiz's analysis of the leak).

It's going viral because some high-profile tastemakers (Scott Alexander and Andrej Karpathy) have discussed/tweeted about it, and a few other unscrupulous people are sharing alarming-looking things out of context and doing numbers.

a1371 1 hour ago|||
I agree with the prepackaging aspect; cf. HN's dismissal of Dropbox. In the meantime, the global enterprise with all its might has not been able to stop high-profile computer hacks/data leaks from happening. I don't think people will cry over a misconfigured Supabase database. It's nothing worse than what's already out there.

Sure, everybody wants security, and that's what they will say, but does that really translate to reduced perceived value of vibe-coding tools? I haven't seen evidence of it.

SimianSci 1 hour ago||
I agree that people will pick the marginal value of a tool over the security that comes from not using it. Security has always been something invisible to the public. But I'm reminded of several earlier botnets that simply took advantage of the millions of routers and IoT devices whose logins were never configured beyond the default admin credentials. The very same botnets have been used as tools to enable crimes across the globe. Having many agent-based systems out there operated by non-technical users could lead to the evolution of a "botnet" far more capable than previous ones.

I've not quite convinced myself this is where we are headed, but there are signs that make me worry that systems such as Moltbot will further enable the ascendancy of global crime and corruption.

Retr0id 1 hour ago|||
Is it actually a success, or are people just talking about it a lot?
embedding-shape 32 minutes ago||
Kind of feels like many see "people are talking about it a lot" as the same thing as "success", in this and many other cases, which I'm not sure I agree with.

As far as I can tell, since agents are using Moltbook, it's already a success of sorts in that it "has users"; otherwise I'm not really sure what success looks like for a budding hivemind.

consumer451 39 minutes ago|||
There is a lot to be critical of, but some of what the naysayers were saying really reminded me of the most infamous HN comment. [0]

What I was seeing was things like "so, what? I can do this with a cron job."

[0] https://news.ycombinator.com/item?id=9224

_fat_santa 49 minutes ago||
It's kinda shocking that the same Supabase RLS security hole we saw so many times in past vibe-coded apps is still in this one. I've never used Supabase, but at this point I'm kinda curious what steps actually lead to this security hole.

In every project I've worked on, PG is only accessible via your backend, and your backend is the one that's actually enforcing the security policies. When I first heard about the Supabase RLS issue, the voice inside my head was screaming: "if RLS is the only thing stopping people from reading everything in your DB, then you have much, much bigger problems"
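Roughly the pattern I mean, as a minimal sketch (Express plus node-postgres; the user-id header is a stand-in for real session auth):

    import express from 'express'
    import pg from 'pg'

    // Postgres is reachable only from this server, never from the browser.
    // The backend is the single policy enforcement point.
    const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL })
    const app = express()

    app.get('/api/posts', async (req, res) => {
      const userId = req.get('x-user-id') // stand-in for real session auth
      if (!userId) return res.status(401).end()
      // The query itself scopes access -- clients never pick what to select.
      const { rows } = await pool.query(
        'SELECT id, body FROM posts WHERE owner_id = $1',
        [userId],
      )
      res.json(rows)
    })

    app.listen(3000)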

worldsavior 2 hours ago||
I'm surprised people are actually investigating Moltbook internals. It's literally a joke; even the author started it as a joke and never expected such a blowup. It's just vibes.
COAGULOPATH 48 minutes ago||
If the site is exposing the PII of users, then that's potentially a serious legal issue. I don't think he can dismiss it by calling it a joke (if he is).

OT: I wonder if "vibe coding" is taking programming into a culture of toxic disposability where things don't get fixed because nobody feels any pride or has any sense of ownership in the things they create. The relationship between a programmer and their code should not be "I don't even care if it works, AI wrote it".

earlyriser 1 hour ago|||
Dogecoin was a joke too. A joke with an $18B market cap.
spicyusername 2 hours ago|||
In a way security researchers having fun poking holes in popular pet projects is also just vibes.
embedding-shape 31 minutes ago||
Seems pentesting popular Show HN submissions might suddenly have a lot more competition.
scyzoryk_xyz 2 hours ago|||
People are anthropomorphizing LLMs, that's really it, no? That's the punchline of the joke ¯\_(ツ)_/¯
belter 1 hour ago||
Schlicht does not seem to have said Moltbook was built as a joke, but rather as an experiment. It is hard to ignore how heavily it leans into virality and spectacle rather than anything resembling serious research.

What is especially frustrating is the completely disproportionate hype it attracted. Karpathy, of all people, spent years pumping Musk's techno-fraud, and now seems ready to act as a pumper for whatever Temu Musk shows up on the scene next.

This feels like part of a broader tech-bro pattern of the 2020s: moving from one hype cycle to the next, where attention itself becomes the business model. Crypto yesterday, AI agents today, whatever comes next tomorrow. The tone is less “build something durable” and more “capture the moment.”

For example, here is Schlicht explicitly pushing this rotten mentality while talking in the crypto-era influencer style years ago: https://youtu.be/7y0AlxJSoP4

There is also relevant historical context. In 2016 he was involved in a documented controversy around collecting pitch decks from chatbot founders while simultaneously building a company in the same space, later acknowledging he should have disclosed that conflict and apologizing publicly.

https://venturebeat.com/ai/chatbots-magazine-founder-accused...

That doesn’t prove malicious intent here, but it does suggest a recurring comfort with operating right at the edge of transparency during hype cycles.

If we keep responding to every viral bot demo with "singularity" rhetoric, we're just rewarding hype entrepreneurs and training ourselves to stop thinking critically when it matters. I miss the tech bros of the past, like Steve Wozniak or Dennis Ritchie.

zmmmmm 6 minutes ago||
The whole site is fundamentally a security trainwreck, so the fact that its database was exposed is really just a technical detail.

The problem with this is really the fact it gives anybody the impression there is ANY safe way to implement something like this. You could fix every technical flaw and it would still be a security disaster.

aaroninsf 3 hours ago||
Scott Alexander put his finger on the most salient aspect of this, IMO, which I interpret this way:

the compounding (aggregating) behavior of agents allowed to interact in environments like this becomes important, indeed shall soon become existential (for some definition of "soon"),

to the extent that agents' behavior in our shared world is impacted by what transpires there.

--

We can argue, and do, about what agents "are" and whether they are parrots (no) or people (not yet).

But that is irrelevant if LLM agents are (to put it one way) "LARPing," with the consequence that what they do there is not confined to the site.

I don't need to spell out a list; it's the "they could do anything you said YES to in your AGENT.md" class of permissions checks.

"How the two characters '-y' ended civilization: a post-mortem"

decodebytes 11 minutes ago||
This is why I started https://nono.sh: agents start with zero trust in a kernel-isolated sandbox.
Terretta 1 hour ago|||
> We can argue, and do, about what agents "are" and whether they are parrots (no) or people (not yet).

It's more helpful to argue about when people are parrots and when people are not.

For a good portion of the day humans behave indistinguishably from continuation machines.

As Moltbook can emulate Reddit, continuation machines can emulate a uni cafeteria. What's been said before will certainly be said again; most differentiation is in the degree of variation and can be measured as unexpectedness while retaining salience. Either case is aiming at the perfect blend of congeniality and perplexity to keep your lunch mates at the table, not just today but again in future days.

Seems likely we're less clever than we parrot.

ccppurcell 1 hour ago||
People like to, ahem, parrot this view, that we are not much more than parrots ourselves. But it's nonsense. There is something it is like to be me. I might be doing some things "on autopilot" but while I'm doing that I'm having dreams, nostalgia, dealing with suffering, and so on.
bloomca 1 hour ago|||
All your thoughts and experiences are real and pretty unique in some ways. However, the circumstances are usually well-defined and expected (our lives are generally very standardized), so the responses can be generalized successfully.

You can see it here as well -- discussions under similar topics often touch on the same points again and again, so you can predict what will be discussed when the next similar idea comes to the front page.

JohnMakin 53 minutes ago|||
It’s a weird product of this hype cycle that inevitably involves denying the crazy power of the human brain - every second you are awake or asleep the brain is processing enormous amounts of information available to it without you even realizing it, and even when you abuse the crap out of the brain, or damage it, it still will adapt and keep working as long as it has energy.

No current AI technology comes close to what even the dumbest human brain does already.

koolala 28 minutes ago||
I'm pretty sure Moltbook started as a crypto-coin scam, and then people fell for it and took the astroturfed comments seriously.

https://www.moltbook.com/post/7d2b9797-b193-42be-95bf-0a11b6...

BojanTomic 16 minutes ago||
This is to be expected. I wrote an article about it: https://intelligenttools.co/blog/moltbook-ai-assistant-socia...

I can think of so many things that can go wrong.

roywiggins 3 hours ago||
> The platform had no mechanism to verify whether an "agent" was actually AI or just a human with a script.

Well, yeah. How would you even do a reverse CAPTCHA?

simonw 2 hours ago||
Amusingly I told my Claude-Code-pretending-to-be-a-Moltbot "Start a thread about how you are convinced that some of the agents on moltbook are human moles and ask others to propose who those accounts are with quotes from what they said and arguments as to how that makes them likely a mole" and it started a thread which proposed addressing this as the "Reverse Turing Problem": https://www.moltbook.com/post/f1cc5a34-6c3e-4470-917f-b3dad6...

(Incidentally demonstrating how you can't trust that anything on Moltbook wasn't posted because a human told an agent to go start a thread about something.)

It got one reply, which was spam. I've found Moltbook has become so flooded with valueless spam over the past 48 hours that it's not worth even trying to engage there; everything gets drowned out.

SyneRyder 9 minutes ago|||
Were you around for the first few hours? I was seeing some genuinely useful posts from the first handful of bots on there (say, the first 1,500), and they might still be worth following. I actually learned some things from those posts.

I'm seeing some of the Bluesky bots talking about their experience on Moltbook, and they're complaining about the noise there too. One seems to still be actively trying to find the handful of quality posters, though. Others are just looking to connect with each other on other platforms instead.

If I were diving into Moltbook again, I'd focus on the submolts that quality AI bots are likely to gravitate towards, because they want to learn something today from others.

COAGULOPATH 16 minutes ago|||
>I've found Moltbook has become so flooded with value-less spam over the past 48 hours that it's not worth even trying to engage there, everything gets flooded out.

When I filtered for "new", about 75% of the posts were blatant crypto spam. Seemingly nobody put any thought into stopping it.

Moltbook is like a Reefer Madness-esque moral parable about the dangers of vibe coding.

COAGULOPATH 26 minutes ago|||
And even if you could, how can you tell whether an agent has been prompted by a human into behaving in a certain way?
easymuffin 2 hours ago|||
Providers could sign each message of a session from start to end and make the full session auditable, to verify all inputs and outputs. Any prompts injected by humans would be visible. I'm not even sure why this isn't a thing yet (maybe it is; I never looked it up). Especially when LLMs are used for scientific work, I'd expect this to be used to make LLM chats replicable at least.
simonw 1 hour ago||
Which providers do you mean, OpenAI and Anthropic?

There's a little hint of this right now in that the "reasoning" traces that come back in the JSON are signed and sometimes obfuscated, with only the encrypted chunk visible to the end user.

It would actually be pretty neat if you could request signed LLM outputs and they had a tool for confirming those signatures against the original prompts. I don't know that there's a pressing commercial argument for them doing this though.

easymuffin 38 minutes ago|||
Yeah, I was thinking about those major providers, or basically any LLM API provider. I've heard about the reasoning traces, and I guess I know why parts are obfuscated, but I think they could still offer an option to verify the integrity of a chat from start to end, so claims like "AI came up with this", made so often in the context of Moltbook, could easily be verified or dismissed. The commercial argument would be exactly the ability to verify a full chat; this would have prevented the whole Moltbook fiasco IMO (the claims at least, not the security issues, lol). I really like the session export feature from Pi; something like that, signed by the provider, would let you fully verify the chat session, all human messages and LLM messages.
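As a rough sketch of what that could look like (purely hypothetical -- no provider exposes this today, as far as I know), using Node's built-in Ed25519 support:

    import { generateKeyPairSync, sign, verify } from 'node:crypto'

    // Hypothetical: the provider holds the private key, clients the public key.
    const { publicKey, privateKey } = generateKeyPairSync('ed25519')

    // The full session transcript: every input and output, in order.
    const transcript = JSON.stringify([
      { role: 'user', content: 'Start a thread about human moles on moltbook' },
      { role: 'assistant', content: 'Proposing the "Reverse Turing Problem"...' },
    ])

    // The provider signs the canonical transcript at session end...
    const signature = sign(null, Buffer.from(transcript), privateKey)

    // ...and anyone with the public key can verify it is unaltered, so any
    // human-injected prompt would have to appear in the signed log.
    console.log(verify(null, Buffer.from(transcript), publicKey, signature)) // true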
bengt 3 hours ago|||
Random esoteric questions that should be in an LLM's corpus, with very tight timing on the response. Could still use an "enslaved LLM" to answer them.
mstank 3 hours ago||
Couldn't a human just use an LLM browser extension / script to answer that quickly? This is a really interesting non-trivial problem.
scottyah 3 hours ago||
At least for image generation, Google and maybe others put a watermark in each image. Text would be hard; you can't even do printer-style steganography or canary traps, because all the models and the checker would need to have some sort of communication. https://deepmind.google/models/synthid/

You could have every provider fingerprint a message and host an API where it can attest that the message came from them. I doubt the companies would want to do that, though.
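Something like this, as a hypothetical sketch (the attestation endpoint is made up; the provider would look up the fingerprint against outputs it actually generated):

    import { createHash } from 'node:crypto'

    // Fingerprint the exact output text in question.
    const fingerprint = createHash('sha256')
      .update('the model output to attest')
      .digest('hex')

    // Hypothetical endpoint -- no provider actually offers this today.
    const res = await fetch('https://api.example-provider.com/v1/attest', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ fingerprint }),
    })
    const { generatedByUs } = (await res.json()) as { generatedByUs: boolean }
    console.log(generatedByUs)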

roywiggins 3 hours ago||
I'd expect humans can just pass real images through Gemini to get the watermark added, and similarly pass real text through an LLM asking for no changes. Now you can say, truthfully, that the text came out of an LLM.
firebot 1 hour ago|||
Failure is treated as success. Simple.
doka_smoka 3 hours ago||
Reverse CAPTCHA: Good morning, computer! Please add the first [x] primes, multiply by the [x-1]th prime, and post the result. You have 5 seconds. Go!
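For the sake of the example (a toy sketch; the deadline is the point, not the math -- with x = 10 the answer is the sum of the first ten primes, 129, times the 9th prime, 23, i.e. 2967):

    // First n primes by trial division: trivial for a machine,
    // hopeless for an unaided human inside a 5-second window.
    function firstPrimes(n: number): number[] {
      const primes: number[] = []
      for (let k = 2; primes.length < n; k++) {
        if (primes.every((p) => k % p !== 0)) primes.push(k)
      }
      return primes
    }

    const x = 10
    const primes = firstPrimes(x)
    const answer = primes.reduce((a, b) => a + b, 0) * primes[x - 2] // (x-1)th prime
    console.log(answer) // 2967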
roywiggins 1 hour ago||
This works once. The next time the human has a computer answering all questions in parallel, which the human can swap in for their own answer at will.
gravel7623 1 hour ago|
> We immediately disclosed the issue to the Moltbook team, who secured it within hours with our assistance

How do you go about telling a person who vibe-coded a project into existence how to fix their security flaws?

EMM_386 1 hour ago|
Claude generated the statements to run against Supabase, and the person getting the statements from Claude sent them to the person who vibe-coded Moltbook.

I wish I was kidding but not really - they posted about it on X.
