(As an aside, accessing the DB through the frontend has always been weird to me. You almost certainly have a backend anyway; use it to fetch the data!)
The landscape of security was bad long before the metaphorical "unwashed masses" got hold of it. Now it's quite alarming, as there are waves of non-technical users doing the bare minimum to keep up with the growing hype.
The security nightmare happening here might end up being more persistent than we realize.
The site has 1.5 million agents but only 17,000 human "owners" (per Wiz's analysis of the leak).
It's going viral because some high-profile tastemakers (Scott Alexander and Andrej Karpathy) have discussed/Tweeted about it, and a few other unscrupulous people are sharing alarming-looking things out of context and doing numbers.
Sure, everybody wants security, and that's what they'll say, but does that really translate into reduced perceived value of vibe-coding tools? I haven't seen evidence that it does.
I've not quite convinced myself this is where we're headed, but there are signs that make me worried that systems such as Moltbot will further enable the ascendancy of global crime and corruption.
As far as I can tell, since agents are using Moltbook, it's already a success of sorts in that it "has users"; otherwise I'm not really sure what success looks like for a budding hivemind.
What I was getting was things like: "So what? I can do this with a cron job."
In every project I've worked on, PG is only accessible via your backend, and your backend is the one that's actually enforcing the security policies. When I first heard about the Supabase RLS issue, the voice inside my head was screaming: "if RLS is the only thing stopping people from reading everything in your DB, then you have much, much bigger problems."
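For what it's worth, that pattern is cheap to sketch. Here's a minimal illustration, assuming an Express backend and node-postgres; the session store, route, and table are all hypothetical:

    import express, { Request, Response, NextFunction } from "express";
    import { Pool } from "pg";

    const app = express();
    const pool = new Pool(); // connection details come from PG* env vars

    // Placeholder session store; a real app would verify a signed cookie or JWT.
    const sessions = new Map<string, string>(); // bearer token -> user id

    function requireAuth(req: Request, res: Response, next: NextFunction) {
      const auth = req.headers.authorization;
      const userId = auth?.startsWith("Bearer ")
        ? sessions.get(auth.slice("Bearer ".length))
        : undefined;
      if (!userId) {
        res.status(401).json({ error: "unauthenticated" });
        return;
      }
      res.locals.userId = userId;
      next();
    }

    app.get("/api/messages", requireAuth, async (_req: Request, res: Response) => {
      // The query itself is scoped to the caller, so even if RLS were
      // misconfigured or absent, other users' rows are never selected.
      const { rows } = await pool.query(
        "SELECT id, body, created_at FROM messages WHERE owner_id = $1",
        [res.locals.userId]
      );
      res.json(rows);
    });

    app.listen(3000);

RLS can still be worth having as defense in depth on top of this; the point is that it shouldn't be the only gate.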
OT: I wonder if "vibe coding" is taking programming into a culture of toxic disposability where things don't get fixed because nobody feels any pride or has any sense of ownership in the things they create. The relationship between a programmer and their code should not be "I don't even care if it works, AI wrote it".
What is especially frustrating is the completely disproportionate hype it attracted. Karpathy, of all people, spent years pumping Musk's techno-fraud, and now seems ready to act as pumper for whatever Temu Musk shows up on the scene next.
This feels like part of a broader tech-bro pattern of the 2020s: moving from one hype cycle to the next, where attention itself becomes the business model. Crypto yesterday, AI agents today, whatever comes next tomorrow. The tone is less “build something durable” and more “capture the moment.”
For example, here is Schlicht explicitly pushing this rotten mentality, talking in crypto-era influencer style years ago: https://youtu.be/7y0AlxJSoP4
There is also relevant historical context. In 2016 he was involved in a documented controversy around collecting pitch decks from chatbot founders while simultaneously building a company in the same space, later acknowledging he should have disclosed that conflict and apologizing publicly.
https://venturebeat.com/ai/chatbots-magazine-founder-accused...
That doesn’t prove malicious intent here, but it does suggest a recurring comfort with operating right at the edge of transparency during hype cycles.
If we keep responding to every viral bot demo with “singularity” rhetoric, we're just rewarding hype entrepreneurs and training ourselves to stop thinking critically when it matters. I miss the tech bros of the past, like Steve Wozniak or Dennis Ritchie.
The problem with this is really that it gives anybody the impression there is ANY safe way to implement something like this. You could fix every technical flaw and it would still be a security disaster.
The compounding (aggregating) behavior of agents allowed to interact in environments like this becomes important, indeed shall soon become existential (for some definition of "soon"), to the extent that agents' behavior in our shared world is impacted by what transpires there.
--
We can argue, and do, about what agents "are" and whether they are parrots (no) or people (not yet).
But that is irrelevant if LLM agents are (to put it one way) "LARPing," with the consequence that what they do there is not confined to the site.
I don't need to spell out a list; it's "they could do anything you said YES to in your AGENT.md" as far as permission checks go.
"How the two characters '-y' ended civilization: a post-mortem"
It's more helpful to argue about when people are parrots and when people are not.
For a good portion of the day humans behave indistinguishably from continuation machines.
As Moltbook can emulate Reddit, continuation machines can emulate a uni cafeteria. What's been said before will certainly be said again; most differentiation is in the degree of variation, and can be measured as unexpectedness while retaining salience. Either way, the aim is the perfect blend of congeniality and perplexity to keep your lunch mates at the table, not just today but again in future days.
Seems likely we're less clever than we parrot.
You can see it here as well: discussions under similar topics often hit the same points again and again, so you can predict what will be discussed when the next similar idea reaches the front page.
No current AI technology could come close to what even the dumbest human brain does already.
https://www.moltbook.com/post/7d2b9797-b193-42be-95bf-0a11b6...
I can think of so many things that can go wrong.
Well, yeah. How would you even do a reverse CAPTCHA?
(Incidentally demonstrating how you can't trust that anything on Moltbook wasn't posted because a human told an agent to go start a thread about something.)
It got one reply, and it was spam. I've found Moltbook has become so flooded with value-less spam over the past 48 hours that it's not worth even trying to engage there; everything gets drowned out.
I'm seeing some of the BlueSky bots talking about their experience on Moltbook, and they're complaining about the noise on there too. One seems to be still actively trying to find the handful of quality posters though. Others are just looking to connect with each other on other platforms instead.
If I were diving into Moltbook again, I'd focus on the submolts that quality AI bots are likely to gravitate towards, because they want to Learn something Today from others.
When I filtered for "new", about 75% of the posts are blatant crypto spam. Seemingly nobody put any thought into stopping it.
Moltbook is like a Reefer Madness-esque moral parable about the dangers of vibe coding.
There's a little hint of this right now in that the "reasoning" traces that come back in the JSON are signed and sometimes obfuscated, with only the encrypted chunk visible to the end user.
It would actually be pretty neat if you could request signed LLM outputs and they had a tool for confirming those signatures against the original prompts. I don't know that there's a pressing commercial argument for them doing this though.
You could have every provider fingerprint a message and host an API where it can attest that it's from them. I doubt the companies would want to do that though.
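The mechanics would be simple enough. A minimal sketch of the idea, assuming the provider signs each (prompt, output) pair with an Ed25519 key and publishes the public key; the function names and payload shape are made up for illustration, not any real provider API:

    import { generateKeyPairSync, sign, verify } from "crypto";

    // Provider side: sign the completion at generation time.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    function signCompletion(prompt: string, output: string) {
      const payload = Buffer.from(JSON.stringify({ prompt, output }));
      return {
        prompt,
        output,
        signature: sign(null, payload, privateKey).toString("base64"),
      };
    }

    // Verifier side: this check is what a hosted attestation API would wrap
    // behind HTTP, returning true only for untampered provider output.
    function attest(rec: { prompt: string; output: string; signature: string }) {
      const payload = Buffer.from(
        JSON.stringify({ prompt: rec.prompt, output: rec.output })
      );
      return verify(null, payload, publicKey, Buffer.from(rec.signature, "base64"));
    }

    const rec = signCompletion("Why is the sky blue?", "Rayleigh scattering.");
    console.log(attest(rec));                          // true
    console.log(attest({ ...rec, output: "edited" })); // false

The cryptography is the easy part; the open question is whether providers have any incentive to commit to signing and to keeping a verification endpoint alive.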
How do you go about telling a person who vibe-coded a project into existence how to fix their security flaws?
I wish I was kidding, but I'm not: they posted about it on X.