
Posted by todsacerdoti 12/28/2025

MongoBleed Explained Simply (bigdata.2minutestreaming.com)
260 points | 140 comments | page 2
petesergeant 12/29/2025|
> In C/C++, this doesn’t happen. When you allocate memory via `malloc()`, you get whatever was previously there.

What would break if the compiler zero'd it first? Do programs rely on malloc() giving them the data that was there before?

pelorat 12/29/2025||
That's what calloc() is for
mdavid626 12/29/2025||
It takes time to zero out memory.
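For anyone following along, a minimal C sketch of the trade-off being discussed; the "secret" string and buffer sizes are made up, whether malloc() reuses the same chunk depends on the allocator, and reading uninitialized memory is undefined behavior, so treat it as an illustration only:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Put a "secret" into heap memory, then free it. */
        char *secret = malloc(64);
        if (!secret) return 1;
        strcpy(secret, "hunter2");
        free(secret);

        /* malloc() may hand back the same region without clearing it,
           so the stale bytes can still be sitting there. Reading them
           is undefined behavior, but often observable in practice. */
        char *a = malloc(64);
        if (!a) return 1;
        printf("malloc'd buffer may still contain: %.63s\n", a);

        /* calloc() guarantees zero-filled memory; that is the extra
           zeroing work the comment above refers to. */
        char *b = calloc(64, 1);
        if (!b) return 1;
        printf("calloc'd first byte: %d\n", b[0]); /* always 0 */

        free(a);
        free(b);
        return 0;
    }
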
vivzkestrel 12/29/2025||
Is it true that Ubisoft got hacked and 900GB of data from their database was leaked due to MongoBleed? I'm seeing a lot of posts on social media under the #ubisoft tags today. Can someone on HN confirm?
christophilus 12/29/2025||
I read that hack was made possible by Ubisoft’s support staff taking bribes.
Maxious 12/29/2025||
Details are still emerging; an update in the last hour was that at least 5 different hacking groups were in Ubisoft's systems, and yeah, some might have gotten in via bribes rather than MongoDB https://x.com/vxunderground/status/2005483271065387461
sitkack 12/29/2025||
I’ll give you $1000 to run Mongo.
bschmidt107979 12/29/2025||
[dead]
dwheeler 12/29/2025||
This has many similarities to the Heartbleed vulnerability: it involves trusting lengths from an attacker, leading to unauthorized revelation of data.
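To make the parallel concrete, here is a generic C sketch of that bug class (an attacker-controlled length trusted when echoing data back); the function names, signatures, and layout are hypothetical, not MongoDB's or OpenSSL's actual code:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Vulnerable pattern: `claimed_len` comes from the request and is
       never checked against the number of bytes actually received. */
    size_t echo_vulnerable(const uint8_t *payload, size_t actual_len,
                           uint16_t claimed_len, uint8_t *reply)
    {
        (void)actual_len; /* deliberately ignored: that is the bug */
        /* If claimed_len > actual_len, memcpy reads past the payload
           into adjacent heap memory and leaks whatever lives there. */
        memcpy(reply, payload, claimed_len);
        return claimed_len;
    }

    /* Fixed pattern: clamp to what was actually received
       (or better, reject the malformed request outright). */
    size_t echo_fixed(const uint8_t *payload, size_t actual_len,
                      uint16_t claimed_len, uint8_t *reply)
    {
        size_t n = claimed_len < actual_len ? (size_t)claimed_len : actual_len;
        memcpy(reply, payload, n);
        return n;
    }
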
ldng 12/29/2025||
MongoDB has always sucked... But it's webscale (sic)

Do yourself a favour, use ToroDB instead (or even straight PostgreSQL's JSONB).

reassess_blind 12/29/2025||
Have all Atlas clusters been auto-updated with a fix?
enether 12/29/2025|
Yes. Apparently before Dec 19, too.
fwip 12/29/2025||
"MongoBleed Explained by an LLM"
tuetuopay 12/29/2025||
If it is, it's less fluffy and empty than most of the LLM prose we're usually fed. It's well explained and has enough detail to not be overwhelming.

Honestly, aside from the "<emoji> impact" section that really has an LLM smell (but remember that some people legitimately do this, since it's in the LLM training corpus), this feels more like LLM-assisted (translated? reworded? grammar-checked?) than a pure "explain this" prompt.

enether 12/29/2025||
I didn't use AI in writing the post.

I did some research with it, and used it to help create the ASCII art a bit. That's about it.

I was afraid that adding the emoji would trigger someone to think it's AI.

In any case, nowadays I basically always get at least one comment calling me an AI on a post that's relatively popular. I assume it's more a sign of the times than the writing...

tuetuopay 12/29/2025|||
Thank you for the clarification! I'm sorry for engaging in the LLM hunt; I don't usually do that. Please keep writing, this was a really good breakdown!

In hindsight, I would not even have thought about it if not for the comment I replied to. LLM prose fails to keep me reading whole paragraphs, and I find myself skipping roughly the second half of every one, which was definitely not the case for your article. I did skip a bit at the emoji heading, not because of LLMs, but because of a saturation of emojis in contexts that don't really need them.

I should have written "this could be LLM-assisted" instead of "this feels more like LLM-assisted", but, well, words.

Again, sorry, don't get discouraged by the LLM witch hunt.

macintux 12/29/2025|||
I’m about ready to start flagging every comment that complains about the source material being LLM-generated. It’s tiresome, pointless, and adds absolutely nothing useful to the discussion.

If the material is wrong, explain why. Otherwise, shut up.

beembeem 12/29/2025||
Though the source article was human-written, the public exploit was developed with an LLM.

https://x.com/dez_/status/2004933531450179931

bschmidt107979 12/29/2025||
[dead]
jeltz 12/29/2025|
Nah, this time it was just you.
hindustanuday 12/29/2025||
[dead]
hindustanuday 12/29/2025|
[dead]