Top
Best
New

Posted by mycroft_4221 11 hours ago

How we hacked McKinsey's AI platform(codewall.ai)
306 points | 122 commentspage 2
paxys 6 hours ago|
> named after the first professional woman hired by the firm in 1945

Going out of their way to find a woman's name for an AI assistant and bragging about it is not as empowering as the creators probably thought in their heads.

sgt101 7 hours ago||
Why was there a public endpoint?

Surely this should all have been behind the firewall and accessible only from a corporate device associated mac address?

consp 5 hours ago||
> accessible only from a corporate device associated mac address

Like that ever stopped anyone. That's just a checkbox item.

sgt101 11 minutes ago||
wot?
sgt101 10 minutes ago||
I mean - do you have the macid's of McKinsey's corporate devices?
jihadjihad 7 hours ago||
Surely.
nubg 5 hours ago||
Could the author please provide the prompt that was used to vibe write this blog post? The topic is interesting, but I would rather read the original prompt, as I am not sure which parts still match what the author wanted to say, vs flowerly formulations for captivating reading that the LLM produced.
StartupsWala 3 hours ago||
One interesting takeaway here is how quickly organizations are deploying AI tools internally without fully adapting their security models.

Traditional application security assumes fairly predictable inputs and workflows, but LLM-based systems introduce entirely new attack surfaces—prompt injection, data leakage, tool misuse, etc.

It feels like many enterprises are still treating these systems as just another SaaS product rather than something closer to an autonomous system that needs a different threat model...

bxguff 5 hours ago||
Its so funny its a SQL injection because drum roll you can't santize llm inputs. Some problems are evergreen.
dmix 4 hours ago|
Technically it was a search box input no prompts. Which tbf are often endpoints reused by RAGs
himata4113 4 hours ago||
How long until a hallucinated data breach that spreads globally. There's a few inconsistencies and the typical low effort language AI has.
nullcathedral 5 hours ago||
I think the underlying point is valid. Agents are a potential tool to add to your arsenal in addition to "throw shit at the wall and see what sticks" tools like WebInspect, Appscan, Qualys, and Acunetix.
sd9 7 hours ago||
Cool but impossible to read with all the LLM-isms
vanillameow 7 hours ago||
Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.
causal 6 hours ago|||
Those short "punchy sentence" paragraphs are my new trigger:

> No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.

It just sounds so stupid.

darkport 4 hours ago|||
Founder of CodeWall here. It's quite funny because whilst an LLM did write the bulk of the posts factual content (based on the agents findings), I wrote the intro and summary at the end. That's just my writing style. Feel free to read my personal blog to compare: https://darkport.co.uk
bootsmann 2 hours ago|||
Idk how big your team is of course but imo try to hire a technical writer (they’re really cheap now), it pays dividends for a long time as consistent style and keywords build up SEO reputation. This article is making the rounds, some bigger papers picked it up, it is very valuable to land it well.
darkport 24 minutes ago||
Thanks for the suggestion, will look into it.
causal 4 hours ago|||
If you really DID come up with that paragraph 100% completely on your own with no LLM influence then...I apologize for the insult, though I can't really back out from what I said. It's still a bombastic way of saying very little.
consp 5 hours ago||||
It's an actual story telling method, molded into a supposed to be informative article with a bunch of "please make it interesting" sprinkled on top of it. These day known as the what's left of the internet.
philipwhiuk 3 hours ago|||
It's LinkedIn speech.

Two word sentences, each one on a new line.

causal 2 hours ago||
Ah. That might be why I find it especially triggering.
gonzalovargas 4 hours ago||
That data is worth billions to frontier AI labs. I wonder if someone is already using it to train models
build-or-die 2 hours ago|
parameterized values but raw key concatenation is the kind of thing that looks safe in code review. easy to miss for humans, but an agent will just keep poking at every input until something breaks.
More comments...