Posted by bearsyankees 12/3/2025

Reverse engineering a $1B Legal AI tool exposed 100k+ confidential files (alexschapiro.com)
821 points | 288 comments
0xbadcafebee 12/4/2025|
So, 1) a public service, 2) with no authentication, 3) and no encryption? (http only??), 4) sent every single response with a token, 5) giving full admin access to every client's legal documents. This is like a law firm with an open back door, open back window, and all the confidential legal papers sprawled out on the floor.
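A minimal sketch of the flaw class being described, with invented names throughout (this is illustrative, not Filevine's actual API; the Box folder-listing endpoint is shown only for shape):

    # Hypothetical reconstruction of the flaw class described above; every
    # name and endpoint is invented for illustration, not Filevine's API.
    import requests  # pip install requests

    # 1) Unauthenticated request to a public endpoint, over plain HTTP.
    resp = requests.get("http://api.legal-saas.example.com/v1/intake/config")

    # 2) The JSON response embeds a long-lived token...
    token = resp.json()["token"]

    # 3) ...which turns out to be an over-privileged credential for the
    #    document store holding every client's files (a Box-style API).
    files = requests.get(
        "https://api.box.com/2.0/folders/0/items",  # Box's folder-listing endpoint
        headers={"Authorization": f"Bearer {token}"},
    )
    print(files.status_code)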

Imagine the potential impact. You're a single mother, fighting for custody of your kids. Your lawyer has documentation of something that happened to you, that wasn't your fault, but would look bad if brought up in court. Suddenly you receive a phone call: a mysterious voice, demanding $10,000 or they will send the documents to the opposition. Neither party knows the other; someone just found a trove of documents behind an open back door and wanted to make a quick buck.

This is exactly what a software building code would address (if we had one!). Just as you can't open a storefront in a new building without it being inspected, you shouldn't be able to process millions of sensitive files without having your software inspected. The safety and privacy of all of us shouldn't be optional.

altmanaltman 12/4/2025||
but Google told me everyone can vibe code apps now and software engineers' days are numbered... it's almost as if there's more stuff we do than just write code...
blitzar 12/4/2025|||
Humans were leaving open S3 buckets stuffed with text files of usernames, passwords, addresses, credit card numbers, etc. long before vibe coding was a thing.
Covenant0028 12/4/2025|||
And those humans would be looking for a new job or face other consequences. An AI model can merrily do this with zero consequences because no meaningful consequences can be visited upon it.

Just like if any human employee publicly sexually harassed his female CEO, he'd be out of a job and would find it very hard to find a new one. But Grok can do it and it's the CEO who ends up quitting.

dr_dshiv 12/4/2025|||
Prediction: Vibe coding systems will be better at security in 2 years than 90% of devs.
input_sh 12/4/2025|||
Prediction: it won't.

You can't fit every security consideration into the context window.

dr_dshiv 12/4/2025|||
90% of human devs are not aware of every security consideration.
input_sh 12/4/2025||
90% of human devs can fit more than 3-5 files into their short-term memory.

They also know not to, say, temporarily disable auth to be able to look at the changes they've made on a page hidden behind auth, which is what I observed Gemini 3 Pro doing just yesterday.

dr_dshiv 12/4/2025||
Ok, and that’s your prediction for 2 years from now? It’d be quite remarkable if humans had a bigger short term memory than LLMs in 2 years. Or that the kind of dumb security mistakes LLMs make today don’t trigger major, rapid improvements.
input_sh 12/4/2025||
Do you understand what the term "context window" means? Have you ever tried using an LLM to program anything even remotely complex? Have you observed how drastically the quality of the output degrades the longer the conversation gets?

That's what makes it bad at security. It cannot comprehend more than a floppy disk's worth of data before it reverts to absolute gibberish.

eric-burel 12/4/2025|||
You may want to read about agentic AI; you can, for instance, call an LLM multiple times with a different security consideration every time.
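For instance, a minimal sketch of that pattern (ask_llm is a hypothetical wrapper around whatever model API you use, not a real SDK):

    # One fresh LLM call per security consideration, so each concern gets
    # the full context window instead of one crowded conversation.
    SECURITY_CHECKS = [
        "Is any endpoint reachable without authentication?",
        "Do any responses include credentials or tokens?",
        "Is any traffic served over plain HTTP?",
    ]

    def review(code: str, ask_llm) -> list[str]:
        findings = []
        for check in SECURITY_CHECKS:
            findings.append(ask_llm(f"{check}\n\n---\n{code}"))
        return findings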
input_sh 12/4/2025|||
There are about a dozen workarounds for context limits, agents being one of them, MCP servers another, AGENTS.md a third, but none of them actually solves the issue of a context window being so small that it's useless for anything even remotely complex.

Let's imagine a codebase that can fit onto a revolutionary piece of technology known as a floppy disk. As we all know, a floppy disk can store <2 megabytes. But 100k tokens is only about 400 kilobytes. So, to process the whole codebase that fits on a floppy disk, you need 5 agents plus a sixth "parent process" that those 5 agents report to.

Those five agents can report "no security issues found" in their own little chunk of the codebase to the parent process, and that parent process will still be none the wiser about how those different chunks interact with each other.
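The back-of-the-envelope arithmetic behind those numbers, assuming the common rough rule of ~4 bytes per token:

    floppy_bytes = 2_000_000        # "<2 megabytes", taking the upper bound
    bytes_per_token = 4             # rough rule of thumb for English text
    context_bytes = 100_000 * bytes_per_token   # ~400 KB per agent

    agents = -(-floppy_bytes // context_bytes)  # ceiling division
    print(agents)  # 5 workers, plus the parent that aggregates their reports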

eric-burel 12/6/2025|||
You can have an agent that focuses on studying the interactions. What you're saying is that an AI cannot find every security issue, but neither can humans; otherwise we wouldn't have security breaches in the first place. You are describing a relatively basic agentic setup, mostly using your AI-assisted text editor; a commercial security bot is hopefully a much more complex beast. You replace context with memory and synthesis, for instance, the same way our brain works.
joshribakoff 12/4/2025|||
In one instance it could not even describe why a test was a bad unit test (it asserted that true equals true), which doesn't even require context or multi-file reasoning.

It's almost as if it has additional problems beyond the context limits :)

eric-burel 12/6/2025||
In an agentic setup you are still dependent on having relatively smart models, that's true.
joshribakoff 12/4/2025||||
You may want to try using it; anecdotes often differ from theories, especially when the theories are being sold to you for profit. It takes maybe a few days to see a pattern of ignoring simple instructions even when the context is clean. Or one prompt fixes one issue and causes new ones, rinse and repeat. It requires human guidance in practice.
ethbr1 12/4/2025||
Steelman: LLMs aren't a tool, they're fuzzy automation.

And what keeps security problems from making it into prod in the real world?

Code review, testing, static and dynamic code scanning, and fuzzing.

Why aren't these things done?

Because there aren't enough people-hours and expertise.

So in order for LLMs to improve security, they need to be able to improve our ability to do one of: code review, testing, static and dynamic code scanning, and fuzzing.

It seems very unlikely those forms of automation won't be improved in the near future by even the dumbest form of LLMs.

And if you offered CISOs a "pay to scan" service that actually worked cross-language and -platform (in contrast to most "only supported languages" scanners), they'd jump at it.

joshribakoff 12/5/2025||
There is an argument here that the LLM is a tool that can multiply either the addition or the removal of defects, depending on how it is wielded.
ethbr1 12/5/2025||
I think the father figure of a developer who was bitten by a radioactive spider once made a similar quip.
windexh8er 12/4/2025|||
And that buys you what, exactly? Your point is 100% correct, and it's why LLMs are nowhere near able to manage or build even complete simple systems, let alone complex ones.

Why? Context. LLMs, today, go off the rails fairly easily. As I've mentioned in prior comments, I've been working a lot with different models and agentic coding systems. When a codebase starts to approach 5k lines (building the entire codebase with an agent), things start to get very rough. First of all, the agent cannot wrap its context (it has no brain) around the code in a complete way. Even when everything is very well documented as part of the build and outlined so the LLM has indicators of where to pull in code, it almost always cannot keep schemas, requirements, or patterns in line.

I've had instances where APIs under development were supposed to follow a specific schema, require specific tests, and abide by specific constraints for integration. Almost always, in that relatively small codebase, the agentic system gets something wrong, but because of sycophancy it gleefully informs me all the work is done and everything is A-OK! The kicker here is that when you show it why / where it's wrong, you're continuously in a loop of burning tokens trying to put that train back on the track. LLMs can't be efficient with new(ish) codebases because they're always having to go look up new documentation, burning through more context beyond what they're targeting to build / update / refactor / etc.

So, sure. You can "call an LLM multiple times". But this is hugely missing the point with how these systems work. Because when you actually start to use them you'll find these issues almost immediately.

joshribakoff 12/4/2025||
To add onto this, it is a characteristic of their design to statistically pick things that would be bad choices, because humans do too. It’s not more reliable than just taking a random person off the street of SF and giving them instructions on what to copy paste without any context. They might also change unrelated things or get sidetracked when they encounter friction. My point is that when you try to compensate by prompting repeatedly, you are just adding more chances for entropy to leak in — so I am agreeing with you.
windexh8er 12/4/2025||
> To add onto this, it is a characteristic of their design to statistically pick things that would be bad choices, because humans do too.

Spot on. If we look at "AI" historically (pre-LLM), the data sets were much more curated, cleaned, and labeled. Look at CV, for example. Computer Vision is a prime example of how AI can easily go off the rails with respect to 1) garbage input data 2) biased input data. LLMs have these two as inputs in spades and in vast quantities. Has everyone forgotten about Google's classification of African American people in images [0]? Or, more hilariously, the fix [1]? Most people I talk to who are using LLMs think that the data being fed into these models has been fine-tuned, hand-picked, etc. In some cases, for small models that were explicitly curated, sure. But in the context (no pun) of all the popular frontier models: no way in hell.

The one thing I'm really surprised nobody is talking about is the system prompt. Not in the manner of jailbreaking it or even extracting it. But I can't imagine that these system prompts aren't collecting massive tech debt at this point. I'm sure there's band-aid after band-aid of simple fixes to nudge the model in ever so different directions based on things that are, ultimately, out of the control of such a large culmination of random data. I can't wait to see how these long-term issues crop up and get duct-taped over with the quick fixes these tech behemoths are becoming known for.

[0] https://www.bbc.com/news/technology-33347866 [1] https://www.theguardian.com/technology/2018/jan/12/google-ra...

eric-burel 12/6/2025||
Talking about the debt of a system prompt feels really weird. A system prompt tied to an LLM is the equivalent of crafting a new model in the pre-LLM era. You measure its success using various quality metrics, and you improve the system prompt progressively to raise those metrics. It feels like band-aids, but that's actually how it's supposed to work, and it's totally equivalent to "fixing" a machine learning model by improving the dataset.
aduwah 12/4/2025||||
This will age badly
dr_dshiv 12/4/2025||
That’s why we make concrete measurable predictions.
MangoToupe 12/4/2025||
Agreed, but "vibe coding will be better at security" is not one of them. Better by which metric, against which threat model, with which stakes? What security even means for greenfield projects is inherently different than for hardened systems. Vibe coding is sufficient for security today because it's not used for anything that matters.
Cthulhu_ 12/4/2025||||
It'll play a role in both securing systems and security research, I'm sure, but I'm not confident it'll be better.

But also, you'd need to have some metrics - how good are developers at security already? What if the bar is on the floor and LLM code generators are already better?

wizzledonker 12/4/2025|||
Only if they work in a fundamentally different manner. We can't solve that problem the way we are building LLMs now.
rendaw 12/5/2025||||
AFAICT Filevine doesn't use AI programming: https://www.filevine.com/jobs/d64dfff5-e36f-4db6-adac-0fc082... There's no mention of AI there other than writing code to integrate AI pipelines.

I've seen a lot of job ads lately (Canva's, for instance) that mandate AI use or AI experience, and if they, as an AI company, wanted that, I think they would have put it in the ad.

For the record I think I may be fine with the insincerity of selling AI but not using it!

compootr 12/4/2025||||
@grok all software engineers do is mindlessly turn specifications into code in one shot, right?!?
agos 12/4/2025||||
See also: HN told me that regulation is bad and this is why the EU is behind!
eru 12/4/2025|||
> it's almost as if there's more stuff we do than just write code..

Yes, but adding these common sense considerations is actually something LLMs can already do reasonably well.

darkwater 12/4/2025|||
In 90% of the cases. And if you don't know how to spot the other 10%, you are still screwed, because someone else will find it (and you don't even need to be an elite black hat to find it).
AbstractH24 12/4/2025||
What’s to say a human would catch this 10% either?
MangoToupe 12/4/2025|||
The salary you pay them, typically
AbstractH24 12/4/2025||
Salaries make humans infallible?
MangoToupe 12/4/2025||
No, but it makes them motivated to be thorough. There is no way to motivate a chatbot (to do better or to any end).
AbstractH24 12/4/2025|||
But money is a way to motivate the people who created AI to create better AI. Because if it doesn't perform as expected, either people won't use it or they'll turn to a competitor next time they need to do something. And these companies need recurring revenue.

If we're saying the way to ensure competency is to instill fear of not getting money tomorrow as a consequence of failure, then AI companies and humans are on equal footing.

eru 12/5/2025|||
You can run multiple chatbots in parallel. Use different models and different setups.

It's like having multiple people audit your systems. Even if everyone only catches 90%, as long as they don't catch exactly the same 90%, this parallel effort helps.
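The arithmetic behind that intuition, which leans entirely on the independence assumption (models trained on similar data likely share blind spots):

    # If each of n independent audits catches 90% of issues, an issue
    # slips past all of them with probability 0.1**n.
    for n in range(1, 5):
        print(n, f"{1 - 0.1**n:.2%} of issues caught by at least one auditor")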

wesleywt 12/4/2025|||
Humans are pretty good at edge cases.
LunaSea 12/4/2025||||
Only if you explicitly request it, which means you need to know about it in the first place.
eru 12/4/2025|||
OpenAI can put that in the system prompt for their CTO-as-a-service once, and then forget about it.
degamad 12/4/2025|||
Or you need to guess that it exists, or you need to scan for places it exists.
MangoToupe 12/4/2025|||
Clearly not
Sharlin 12/4/2025|||
Basically what happened in the Vastaamo case in Finland [1]. Except of course it wasn't individual phone calls – it was mass extortion of 30,000 people at once via email.

[1] https://en.wikipedia.org/wiki/Vastaamo_data_breach

tryauuum 12/4/2025||
If I remember correctly, the attacker got caught in such a silly way.

He wanted to demonstrate that he indeed had the private data. But he fucked up the tar command and the archive ended up including his username in the directory names, a username he used in other places on the internet.

Fnoord 12/4/2025|||
http-only also makes it very easy for LE to sniff, if they decide to. This allows them to gain knowledge about cases. Like, they could be scanning it with their own AI tool for all we know. In a free country with proper LE, this would neither be legal nor happening. But I am not sure the USA remains one, given the leader is a convicted felon with very dubious moral standards.

The problem here, however, is that they get away with their sloppiness as long as the security researcher who found this is a whitehat and the regular news doesn't pick it up. Once regular media pick this news up (and the local ones should), their name is tarnished and they may regret their sloppiness. Which is a good way to ensure they won't make the same mistake. After all, money talks.

zwnow 12/4/2025||
All the big tech companies are in the news every week. Everybody knows how bad they are. Their names are tarnished and yet everyone is still using their junk, and they face zero repercussions when fucking up. I don't think things in the media would do them any harm.
Fnoord 12/4/2025||
In the news, sure. But negatively? I consider myself included in 'everyone', and I am not using junk from all the big tech companies. More than once, I've successfully quit using certain ones, and Signal has become much more popular in my country ever since Trump II took office. Meta had to change the name of their company (Facebook) since it had such a bad name, and Zuck started a charm offensive.
anshumankmr 12/4/2025|||
They couldn't even manage basic auth with password123 as the password.
gbacon 12/4/2025||
This is HN. We understood exactly what “exposed … confidential files” meant before reading your overly dramatic scenario. As overdone as it is, it's not even realistic: a single mother is small potatoes in comparison to deep-pocketed legal firms or large corporations.

The story is an example of the market self-correcting, but out comes this “building code” hobby horse anyway. All a software “building code” will do is ossify certain current practices, not even necessarily the best ones. It will tilt the playing field in favor of large existing players and to the disadvantage of innovative startups.

The model fails to apply in multiple ways. Building physical buildings is a much simpler, much less complex process with many fewer degrees of freedom than building software. Local city workers inspecting against the local municipality's code at least have clear jurisdiction, because the building has a fixed physical location. Who will write the “building code”? Who will be the inspectors?

This is HN. Of all places, I'd expect to see this presented as an opportunity for new startups, not calls for slovenly bureaucracy and more coercion. The private market is perfectly capable of performing this function. E&O and professional liability insurers, if they don't already, will soon be motivated after seeing lawsuits to demand regular pentests.

The reported incident is a great reminder of caveat emptor.

objclxt 12/4/2025|||
> Building physical buildings is a much simpler, much less complex process with many fewer degrees of freedom than building software.

I don't... think this is true? Google has no problem shipping complex software projects, yet their London HQ is years behind schedule and vastly over budget.

Construction is really complex. These can be mega-projects with tens of thousands of people involved, where the consequences of failure are injury or even death. When software failure does have those consequences - things like aviation control software, or medical device firmware - engineers are held to a considerably higher standard.

> The private market is perfectly capable of performing this function

But it's totally not! There are so many examples in the construction space of private markets being wholly unable to perform quality control because there are financial incentives not to.

The reason building codes exist and are enforced by municipalities is because the private market is incapable of doing so.

throwaway984393 12/4/2025|||
[dead]
theoldgreybeard 12/4/2025||
The bigwigs at my company want to build out a document management suite. When I talked to the VP of technology about requirements and asked about security, as well as what the regulatory requirements are, all I got was a blank stare.

I used to think developers had to be supremely incompetent to end up with vulnerabilities like this.

But now I understand it’s not the developers who are incompetent…

eru 12/4/2025||
There's enough incompetence at all levels to go around.
theoldgreybeard 12/4/2025|||
Maybe I have just been lucky, but I have not had the displeasure of working with people either that incompetent or willfully ignorant yet.
eru 12/4/2025|||
Oh, I should have been more careful in my formulation:

There are organisations that are generally competent, and there are places that are less competent. It's not all that uncommon for the whole organisation to be generally incompetent.

The saddest places (for me) are those where almost every individual you talk to seems generally competent, but judging by their output the company might as well be staffed by idiots. Something in the way they are organised suppresses the competence. (I worked at one such company.)

> Maybe I have just been lucky, but I have not had the displeasure of working with people either that incompetent or willfully ignorant yet.

It's very important before you start any new job to suss out how competent people and the organisation are. Ideally, you probably want to work for a competent company. But at least you want to know what you are getting into.

There's a bit of luck involved, if you go in blindly, but you can also use skill and elbow-grease to investigate.

vkou 12/4/2025|||
> Something in the way they are organised suppresses the competence.

It's a natural outcome of authoritarian structures when the people at the top are idiots. When that happens, the whole organization rots.

chii 12/4/2025|||
> suss out how competent people and the organisation are.

how does one do this, without first having the job and being embedded in there? From the outside, it's near impossible to see these details imho.

eru 12/4/2025|||
Yes, it's hard, and I'm not sure there are general strategies that always work.

It's fundamentally the same problem that the company is trying to solve when they interview you, just the other way 'round.

Some ideas: observe and ask in the interviews and hiring process in general. See what you can find out about the company from friends, contacts and even strangers. Network! Do some online research, too.

Btw, lots of the cliché interview questions ("What are your greatest weaknesses?" etc) actually make decent questions you can ask about the company and team you are about to join.

SaltyBackendGuy 12/4/2025||||
Something I've found useful is just reaching out to past employees. Usually folks that don't work there anymore will be more transparent. The only challenge is getting someone to respond to you, but you'd be surprised how many folks will talk if you don't come off like you're trying to sell them something, or like a bot.
YouAreWRONGtoo 12/4/2025|||
[dead]
delaminator 12/4/2025||||
I’m governed by them

Reeves orders Treasury inquiry over Budget leaks

Chancellor’s policies found their way to the press before she announced them to MPs

https://www.telegraph.co.uk/news/2025/12/03/reeves-orders-tr...

eru 12/5/2025||
You might want to consider voting with your feet?
blitzar 12/4/2025|||
"Are We the Baddies?"
Lord-Jobo 12/4/2025||||
Not only does the Peter principle generally show more incompetence the higher up a structure you move, but the outsized influence those positions have make for a very noticeably higher level of “things fucked up by incompetence“ coming from the C suite compared to the rest of the structure.

There’s definitely plenty of incompetence regardless. But I’ve never seen a company where the incompetence was more noteworthy in the cog positions than “leadership”.

samdung 12/4/2025|||
Incompetence compounds at an astonishing rate.
hahn-kev 12/4/2025||
I've had the same. Ask them to come up with a ToS and they're like "we'll talk about that in an upcoming meeting". It's been a few years now with nothing.
icyfox 12/3/2025||
I'm always a bit surprised how long it can take to triage and fix these pretty glaring security vulnerabilities. October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed. Sure, the actual bug ended up being (what I imagine to be) a <1hr fix, plus the time for QA testing to make sure it didn't break anything.

Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.

Aurornis 12/3/2025||
In my experience, it comes down to project management and organizational structure problems.

Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.

When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.

Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.

Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that"

Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.

So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.

Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.

srrdev 12/3/2025|||
Oh man this is so true. In this sort of org, getting something fixed out-of-band takes a huge political effort (even a critical issue like having your client database exposed to the world).
DrewADesign 12/3/2025||
While there were numerous problems with the big corporate structures I worked in decades ago, where everything was done by silos of specialists, there were huge advantages. No matter where there was a security, performance, network, hardware, etc. issue, the internal support infrastructure had the specialists' pagers, and for a problem like this, the people fixing it would have been on a conference call until it was fixed. There was always a team of specialists to diagnose and test fixes, always available developers with the expertise to write fixes if necessary, always ops to monitor and execute things, always a person in charge to make sure it all got done, and everybody knew which department it was and how to reach them 24/7.

Now if you needed to develop something not-urgent that involved, say, the performance department, database department, and your own, hope you’ve got a few months to blow on conference calls and procedure documents.

For that industry it made sense though.

eru 12/4/2025||
Interesting. Wouldn't the performance department have their fingers in all the pies anyway, too, or how was that handled?
DrewADesign 12/5/2025||
Their job was specifically managing server resource allocation— as an IT role and not a dev role— in a completely standardized environment. Most applications were given a standard allotment of resources, and they only got involved if something was running out of ram, disk access was too slow, or something just seemed to be taking a lot longer than usual. If it seemed to be a network problem, or just a program crash, for example, they were never involved unless troubleshooting indicated it involved them. More often than not, I’d get a phone call telling me the system I was working on seemed to be heavy on the disk access or something, and they had already allotted it more to keep it stable, but I should check to make sure we weren’t doing something stupid.

Now that I think of it, I’ll bet a lot of companies have a system similar to this for their infrastructure… they just outsource it to AWS, Azure, Google, etc. and comparatively fly by the seat of their pants on the dev side. You could only scale that system down so much, I imagine.

rvba 12/4/2025||||
> Many security team people I've worked with were smart, but not software developers by trade.

A lot are people who cannot code at all and cannot administer; they just fill in tables and check boxes, maybe from some automated suite. They don't know what HTTP and HTTPS are, because they are just paper pushers, which is far from real security; it's security in name only.

And they joined the field because it pays well.

tietjens 12/4/2025|||
Great comment. Very true.
Barathkanna 12/3/2025|||
A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.” Security fixes are often a one-hour patch wrapped in two weeks of internal routing, approvals, and “who even owns this code?” archaeology. Holiday schedules and spam filters don’t help, but organizational entropy is usually the real culprit.
Aurornis 12/3/2025|||
> A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.”

At my past employers it was "The VP of such-and-such said we need to ship this feature as our top priority, no exceptions"

whstl 12/3/2025||||
I once had a whole sector of a fintech go down because one DevOps person ignored, for three months, daily warning emails that an API key was about to expire and needed to be reset.

And of course nobody remembered the setup, and logging was only accessible by the same person, so figuring it out also took weeks.

bongodongobob 12/3/2025||
I'm currently on the other side of this trying to convince management that the maintenance that should have been done 3 years ago needs to get done. They need "justification".
jll29 12/4/2025||
Write a short memo saying you are very concerned, and describe a range of things that may happen (from "not much" through medium to maximum scare: lawsuits, brand/customer trust destroyed, etc.).

Email the memo to a decision maker with the important flag on and CC: another person as a witness.

If you have been saying it for a long time and nobody has taken any action, you may use the word "escalation" as part of the subject line.

If things hit the fan, it will also make sure that what drops from the fan falls on the right people, and not on you.

ChrisMarshallNY 12/3/2025||||
It could also be someone "practicing good time management."

They have a specific time of day, when they check their email, and they only give 30 minutes to that time, and they check emails from most recent, down.

The email comes in two hours earlier, and by the time they check their email, it's been buried under 50 spams and near-spams, each of which needs to be checked, so they run out of their 30 minutes before they get to it. The next day, by email-check time, another 400 spams have been thrown on top.

Think I'm kidding?

Many folks that have worked for large companies (or bureaucracies) have seen exactly this.

eru 12/4/2025||
The system would be mostly sane, if you could sort by some measure of importance, not just recency.
throwaway290 12/3/2025|||
It's not about fixing it, it's about acknowledging it exists
ipdashc 12/3/2025|||
security@ emails do get a lot of spam. It doesn't get talked about very much unless you're monitoring one yourself, but there's a fairly constant stream of people begging for bug bounty money for things like the Secure flag not being set on a cookie.

That said, in my experience this spam is still a few emails a day at the most, I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday like you said.

canopi 12/3/2025|||
This.

There is so much spam from random people about meaningless issues in our docs. AI has made the problem worse. Separating the meaningful from the meaningless is a full-time job.

TheTaytay 12/3/2025|||
This is where “managed” bug bounty programs like BugCrowd or HackerOne deliver value: only telling you when there is something real. It can be a full time job to separate the wheat from the chaff. It’s made worse by the incentive of the reporters to make everything sound like a P1 hair-on-fire issue.
YouAreWRONGtoo 12/4/2025||
[dead]
whstl 12/3/2025||||
Half of the emails I used to get in a previous company were pointless issues, some coming from a honey pot.

The other half was people demanding payment.

horacemorace 12/4/2025||||
Training a tech support team of interns to solve all of them would be an enviable hacker or software dev training program.
Bootvis 12/3/2025|||
Use AI for that :)
Bootvis 12/4/2025||
Not kidding, I bet LLMs are excellent at triaging these reports. Humans, in a corporate setting, are apparently not.
latchkey 12/3/2025|||
My favorite one is the "We've identified a security hole in your website"... and I always respond quickly that my website is statically generated, nothing dynamic, and immutable on Cloudflare Pages. For some odd reason, I never hear back from them.
bfxbjuf 12/3/2025|||
Well, we have 600 people in the global response center I work at, and the priority issue count is currently 26,000. That means it's serious enough that it's been assigned to someone. There are tens of thousands of unassigned issues because the triage teams are swamped. People don't realize that as systems get more complex, issues increase. They never decrease. And the chimp troupe's response has always been a story: we can handle it.
londons_explore 12/4/2025|||
The security@ inbox has so much junk these days, with people reporting that if you paste alert('hacked') into devtools then the website is "hacked"!

I reckon only 1% of reports are valid.

LLMs can now make a plausible-looking exploit report ("there is a use-after-free bug in your server-side implementation of X library which allows shell access to your server if you time these two API calls correctly"), when the LLM has made the whole thing up. That can easily waste hours of an expert's time on a total falsehood.

I can completely see why some companies decide it'll be an office-hours-only task to go through all the reports every day.

tryauuum 12/4/2025||
My favorite was "we can trigger your website to initiate a connection to a server we control". They were running their own mail servers and were creating new accounts on our website. Of course someone needs to initiate a TCP connection to deliver an email message!

Of course, this could be a real vulnerability if it disclosed the real server IP behind Cloudflare. That was not the case; we were sending via an AWS email gateway.

gwbas1c 12/3/2025|||
Not every organization prioritizes being able to ship a code change at the drop of a hat. This often requires organizational dedication to heavy automated testing and CI, which small companies often aren't set up for.
stavros 12/3/2025||
I can't believe that any company takes a month to ship something. Even if they don't have CI, surely they'd prefer to break the app (maybe even completely) than risk all their legal documents exfiltrated.
Aurornis 12/3/2025|||
> I can't believe that any company takes a month to ship something.

Outside of startups and big tech, it's not uncommon to have release cycles that are months long. Especially common if there is any legal or regulatory involvement.

technion 12/4/2025||||
I can only say you haven't worked anywhere I have.

I remember Heartbleed dropping shortly after a deployment and not being allowed to patch for like ten months because the fix wasn't "validated". This was despite insurers stating the issue could cost us coverage, and legal getting involved.

stavros 12/4/2025||
What? That's crazy, wow!
Jolter 12/3/2025|||
It’d be pretty reasonable to take the whole API down in this scenario, and put it back up once it’s patched. They’d lose tons of cash but avoid being liable for extreme amounts of damages.
Capricorn2481 12/3/2025|||
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed

I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.

giancarlostoro 12/3/2025||
I call that one of the worrisome outcomes from "Marketing Driven Development" where the business people don't let you do technical debt "Stories" because you REALLY need to do work that justifies their existence in the project.
perlgeek 12/3/2025|||
Another aspect to consider: when you reduce the permissions anything has (like the returned token here), you risk breaking something.

In a complex system it can be very hard to understand what will break, if anything. In a less complex system, it can still be hard to understand if the person who knows the security model very well isn't available.

jofzar 12/3/2025|||
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed

There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally, while also trying to figure out how fucked they are.

1 week is surprisingly not that slow.

bgbntty2 12/3/2025||
I'm a bit conflicted about what responsible disclosure should be, but in many cases it seems like these conditions hold:

1) the hack is straightforward to do;

2) it can do a lot of damage (get PII or other confidential info in most cases);

3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.

But, instead of insisting on the immediate shutdown of the affected service, we give companies weeks or months to fix the issue, notifying no one in the process, while they continue with business as usual.

I've submitted 3 very easy exploits to 3 different companies in the past year and, thankfully, they fixed them in about a week every time. Yet the exploits were trivial (as I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs, because I had to have some tests done and wanted to see how secure my data was.
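For readers unfamiliar with IDOR (insecure direct object reference), a minimal sketch of the bug class and its fix, with entirely hypothetical names; the cure is an ownership check, not harder-to-guess IDs:

    from dataclasses import dataclass

    @dataclass
    class LabResult:
        id: int
        patient_id: int
        data: str

    DB = {1: LabResult(1, patient_id=42, data="...")}  # toy stand-in for a database

    def get_result(result_id: int, current_user_id: int) -> LabResult:
        result = DB.get(result_id)
        if result is None:
            raise LookupError("not found")            # -> HTTP 404
        if result.patient_id != current_user_id:
            raise PermissionError("not your record")  # -> HTTP 403: the check
        return result                                 #    the vulnerable sites skip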

Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'll make the exploit public if they don't fix it ASAP. What happened was, again, in all 3 cases, the exploit was fixed within 1-2 days.

If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year, after a year.

And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result or call them to write them down. Yes, it would be tedious, but more secure.

So I should've said from the beginning something like:

> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.

Now, would I make it public if they don't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it was some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I'd be the first one to find it and it wouldn't be found for a while by someone else afterwards. But ID enumerations? Come on.

So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?

I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.

As to the other comments talking about how spammed their security@ mail is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?

nl 12/4/2025|||
Don't do this.

I understand you think you are doing the right thing, but be aware that by shutting down a medical communication service there's a non-trivial chance someone will die because of slower test results.

Your responsibility is responsible disclosure.

Their responsibility is how to handle it. Don't try to decide that for them.

ghostly_s 12/4/2025|||
> I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.

What you're describing is likely a crime. The sad reality is most businesses don't view protection of customers' data as a sacred duty, but simply another of the innumerable risks to be managed in the course of doing business. If they can say "we were working on fixing it!" their asses are likely covered even if someone does leverage the exploit first—and worst-case, they'll just pay a fine and move on.

bgbntty2 12/4/2025||
Precisely - they view security as just one of many parts of their business, instead of viewing it as one of the most important parts. They've insured themselves against a breach, so it's not a big deal for them. But it should be.

The more casualties, the more media attention -> the more likely they, and others in their field, will take security more seriously in the future.

If we let them do nothing for a month, they'll eventually fix it, but in the meantime malicious hackers may gain access to the PII. They might not make it public, but instead sell that PII on black markets. The company may not get the negative publicity it deserves and likely won't learn to fix their systems in time or adopt adequate security measures. The sale of the PII, and the breach itself, might become public knowledge months after the fact, while the company has had a chance to grow in the meantime and make more security mistakes that may be exploited later on.

And yes, I know it may be a crime - that's why I said I'd report it anonymously from now on. But if the company sits on their asses for a month, shouldn't that count as a crime, as well? The current definition of responsible disclosure gives companies too much leeway, in my opinion.

If I knew I operated a service that was trivial to exploit and was hosting people's PII, I'd shut it down until I fixed it. People won't die if I do everything in my power to provide the test results (in my example of medical labs) to doctors and patients via other means, such as paper or phone. And if people did die, it would be devastating, of course, but it would mean society had put too much trust in a single system without making sure it wasn't vulnerable to the most basic of attacks. So it would have happened sooner or later anyway. Although I can't imagine someone dying because their doctor had to make a phone call to the lab instead of typing in a URL.

The same argument about people dying due to the disruption of the medical communications system could be made about too-big-to-fail companies that are entrenched into society because a lot of pension funds have invested in them. If the company goes under, the innocent people dependent on the pension fund's finances would suffer. While they would suffer, which would be awful, of course, would the alternative be to not let such companies go bankrupt? Or would it be better for such funds to not rely so much on one specific company in the first place? That is to say, in both cases (security or stocks in general) the reality is that currently people are too dependent on a few singular entities, while they shouldn't be. That has to change, and the change has to begin somewhere.

habosa 12/4/2025||
They took a month to fix this? That’s beyond inexcusable. I can’t imagine how any customer could justify working with them going forward.

Also … shows you what a SOC 2 audit is worth: https://www.filevine.com/news/filevine-proves-industry-leade...

Even the most basic pentest would have caught this.

stingraycharles 12/4/2025||
SOC2 is mainly to check boxes, and forces you to think about a few things. There’s no real / actual audit, and in my experience the pen tests are very much a money grab. You’re paying way too much money for some “pentesting” automated suite to run.

The auditors themselves pretty much only care that you answered all questions, they don’t really care what the answers are and absolutely aren’t going to dig any deeper.

(I’m responsible for the SOC2 audits at our firm)

abustamam 12/4/2025|||
When I worked for a consulting firm some years back I randomly got put on a project that dealt with payment information. I had never had to deal with payment information before so I was a bit nervous about being compliant. I was pointed to SOC2 compliance which sounded scary. Much to my relief (and surprise), the SOC2 questionnaire was literally just what amounted to a survey monkey form. I answered as truthfully as I could and at the end it just said "congrats you're compliant!" or something to that effect.

I asked my manager if that's all that was required and he said yes, just make sure you do it again next year. I spent the rest of my time worrying that we'd missed something. I genuinely didn't believe him until your comment.

Edit: missing sentence.

chanux 12/4/2025|||
Once this type of issue gets publicized, does that in any way affect the certification?
eru 12/5/2025||
Sometimes scandals affect these things. But it's hard to predict.
rustystump 12/4/2025|||
SOC 2 and most other certifications are akin to the TSA: security theater. After seeing the infosec space from the inside, I can only say that it blows my mind how abhorrent it is. Prod DB creds in code? A-OK. Not using some stupid vendor's "pen testing" software on each MR? Blasphemy.
technion 12/4/2025|||
Unless I'm missing something, they replied stating they would look into it, and then it's totally vague when they patched, with Alex apparently randomly testing later and telling them in a "follow up" that it was fixed.

I don't at all get why there is a paragraph thanking them for their communication if that is the case.

nick49488171 12/4/2025||
Probably given the alternative, being ghosted followed by a no-knock FBI raid
eru 12/4/2025|||
It looks like SOC 2 (and the other SOCs) were developed by accountants?

I wouldn't expect them to find any computer problems either to be honest.

anticensor 12/6/2025||
There are only 3 books of SOC: SOC I, SOC II Part I, SOC II Part II.
mrweasel 12/4/2025|||
The time to fix isn't really important, assuming they took the system offline in the meantime... but we all know they didn't, because that would cost too much.
jonny_eh 12/4/2025|||
Where did it say that they took a month to fix? The hacker just checked in 2 weeks later and it was fixed by that point.
theodorejb 12/4/2025||
According to the timeline it took more than a week just for Filevine to respond saying they would review and fix the vulnerability. It was 24 days after initial disclosure when he confirmed the fix was in place.
OtherShrezzing 12/4/2025||
Given that the author describes the company as prompt, communicative and professional, I think it’s fair to assume there was more contact than the four events in the top of the article.
aitchnyu 12/4/2025||
Is there any stricter standard? Should one strive for PCI-DSS even if they are a regular SaaS?
eru 12/5/2025||
Whatever Google does internally would be a much stricter standard, but I'm not sure they've written it up for outsiders to use, alas.
kylecazar 12/3/2025||
If they have a billion dollar valuation, this fairly basic (and irresponsible) vulnerability could have cost them a billion dollars. If someone with malice had been in your shoes, in that industry, this probably wouldn't have been recoverable. Imagine a firm's entire client communications and discovery posted online.

They should have given you some money.

edm0nd 12/3/2025||
Exactly.

They could have sold this to a ransomware group or affiliate for 5-6 figures, and then the ransomware group could have exfiltrated the data and attempted to extort the company for millions.

Then, if they didn't pay and the ransomware group leaked the info to the public, they'd likely have to spend millions on lawsuits and fines anyway.

They should have paid this dude 5-6 figures for this find. It's scenarios like this that lead people to sell these vulns on the gray/black market instead of traditional bug bounty whitehat routes.

RagnarD 12/3/2025|||
They should have given him a LOT of money.
DonHopkins 12/3/2025||
Would you settle for a LOT of free AI generated legal advice? ;)
Tepix 12/4/2025||
Who says they didn't give him money?
zain37 12/4/2025|||
I reckon he would've mentioned it if he got a bounty, 100% deserves the bag
sys32768 12/3/2025||
I work for a finance firm and everyone is wondering why we can store reams of client data with SaaS Company X, but not upload a trust document or tax return to AI SaaS Company Y.

My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.

This article demonstrates that, but it does raise the question of why we should trust one and not the other when they both promise the same safeguards.

pr337h4m 12/3/2025||
FWIW this company was founded in 2014 and appears to have added LLM-powered features relatively recently: https://www.reuters.com/legal/transactional/legal-tech-compa...
hughes 12/4/2025|||
While the FileVine service is indeed a Legal AI tool, I don't see the connection between this particular blunder and AI itself. It sure seems like any company with an inexperienced development team and thoughtless security posture could build a system with the same issues.

Specifically, it does not appear that AI is invoked in any way at the search endpoint - it is clearly piping results from some Box API.

empiko 12/4/2025|||
There is none. Filevine is not even an "AI" company. They are a pretty standard SaaS that has some AI features nowadays. But the hive mind needs its food, and AI bad as we all know.
lionkor 12/4/2025|||
> any company with an inexperienced development team and thoughtless security posture

Point out one (1) "AI product" company that isn't described accurately by that sentence

layer8 12/3/2025|||
The question is what reason did you have to trust SaaS Company X in the first place?
sys32768 12/3/2025|||
Because it's the Cloud and we're told the cloud is better and more secure.

In truth, the company forced our hand by pricing us out of the on-premise solution, and they will do that again with the other on-premise product we use, which is set to sunset in five years or so.

ansgri 12/4/2025||
Probably has more to do with responsibility outsourcing: if the SaaS has a security breach AND the contract says they're secure, then you're not responsible. Sure, there may be reputational damage for you, but it's a gamble with good odds in most cases.

Storing lots of legal data doesn’t seem to be one of these cases though.

bonesss 12/4/2025||
I see profits and outsourcing.

Selling an on-premise service requires customer support, engineering, and duplication of effort if you're pushing to the cloud as well. Then you get the temptations and lock-in of cloud-only tooling, and an army of certified consultant drones whose resumes really really need time on AWS-doc-solution-2035, so the on-premise offering becomes a constant weight on management.

SaaS and the cloud is great for some things some of the time, but often you’re just staring at the marketing playbook of MS or Amazon come to life like a golem.

pm90 12/3/2025|||
SaaS is now a "solved problem"; almost all vendors will try to get SOX/SOC 2 compliance (and more for sensitive workloads). Although... it's hard to see how these certifications would have prevented something like this :melting_face:.
mbesto 12/3/2025|||
> My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.

The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.

Aperocky 12/3/2025|||
Does SaaS X/Cloud offer IAM capabilities? Or going further, do they dogfood their own access via the identity and access policies? If so, and you construct your own access policy, you have relative peace of mind.

If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
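For instance, a least-privilege policy of the kind being described, AWS-style and rendered as a Python dict (the bucket name and TLS condition are illustrative assumptions, not anyone's real setup):

    # Read-only, TLS-only access to one bucket; nothing else is granted.
    READ_ONLY_POLICY = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                      # read objects only
            "Resource": "arn:aws:s3:::example-firm-docs/*",  # hypothetical bucket
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},  # require TLS
        }],
    }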

teej 12/3/2025|||
It doesn't sound like your firm does any diligence that would actually prevent you from buying a vendor that has security flaws.
whalesalad 12/3/2025|||
using ai vs not-ai as your litmus test is giving you a false sense of security. it's ALL wild west
pstuart 12/3/2025||
And nobody seems to pay attention to the fact that modern copiers cache copies on a local disk, and if the machines are leased and swapped out, the next party that takes possession has access to those copies, if nobody bothered to address it.
lupire 12/3/2025||
This was the plot of Grisham's book The Firm in 1991
canopi 12/3/2025||
The first thing that comes to my mind is SOC 2, HIPAA, and the whole security theater.

I am one of the engineers who had to suffer through countless screenshots and forms to get these, because they show that you are compliant and safe, while the really impactful things are ignored.

latchkey 12/3/2025||
SemiAnalysis made this a base requirement for being appropriately ranked on their ClusterMAX report, telling me it is akin to FAA certifications, and then getting hacked themselves for not enforcing simple security controls.

https://jon4hotaisle.substack.com/i/180360455/anatomy-of-the...

It is crazy how this gets perpetuated in the industry as actually having security value, when in reality, it is just a pay-to-play checkbox.

chickensong 12/4/2025||
You have to start somewhere, though. Security theater sucks, and it's not like compliance is a silver bullet, but at least it's something. Having been through implementing standards compliance, I can say it did help the company in some areas. Was it perfect? Definitely not. Was it driven by financial goals? Absolutely. But it did tighten up some weak spots.

If the options mainly consist of "trust me bro" vs "we can demonstrate that we put in some effort", the latter seems more preferable, even if it's not perfect.

BrenBarn 12/4/2025||
I'm less and less sure that when a billion-dollar company screws up this bad, the right thing to do is privately disclose it and let them fix it. This kind of thing just allows companies to go on taking people's money without facing the consequences of their mistakes.
barbazoo 12/4/2025||
Does a disclosure like this absolve them of any responsibility? They still violated whatever user privacy act applies.
mvkel 12/4/2025|||
It does. Most privacy laws are based on time-from-discovery. If they immediately sprung into action at the moment they were informed and remediated the issue, they're in compliance.
BrenBarn 12/4/2025|||
Right, that's the problem. There need to be standards that govern what can ever be released to customers/the public in the first place. When violations of those are discovered, the penalties should be based on time from release, so the longer it was out in the wild, the greater the penalty.
mvkel 12/4/2025||
But you can't remove something from the internet once it's there, so once it's released, it's expected that it always will be.

It's also impossible to guarantee a 100% secure infrastructure, no matter how good your product team is.

In the grey area is a term of art: "best efforts."

If data is leaking, and it wasn't because hackers bypassed a bunch of safeguards, and it can be shown that you didn't use best efforts to secure said data, there is liability.

BrenBarn 12/5/2025||
A charitable way of interpreting "best effort" is that it's similar to what I said: we need standards. But the problems with our notion of "best effort" are:

1. The standards aren't clearly defined (i.e., you must specifically do this).

2. They are defined in terms of efforts rather than effects. It is like saying "every car sold must be made of steel" rather than "every car sold must be capable of withstanding an impact against a concrete wall at 60mph with X amount of deformation, etc." We want the rules to determine what level of threat is protected against, not just what motions the company went through. In the case in the article, it wasn't because hackers bypassed a bunch of safeguards; the company didn't protect against even basic threats.

3. It's not enough to have "liability". That puts the onus on individuals to sue the company for their specific damages. We need criminal penalties that are designed to punish companies (and the individuals who direct them) for the harm they do to society by the overall process of rushing ahead selling things instead of slowing down and being careful. We need large-scale enforcement so that companies actually stop doing these things because the cost of doing them becomes too enormous.

4. Our laws do not adequately take account of the differential power of those who cut corners, and the differential gains reaped. We frequently find small operators on the wrong end of painful lawsuits and onerous criminal penalties, while the biggest companies and wealthiest individuals use their position to avoid consequences. Laws need to explicitly take this into account, lowering the standard of proof for penalties against larger, wealthier, and more powerful companies and individuals, and also making those penalties exponentially higher.

mattmaroon 12/4/2025|||
So is that true if they find out when the public does too? It seems that disclosing it privately has some upside (protecting the users) and no downside.
mvkel 12/4/2025||
That depends more on what the Privacy Policy is of the service, which you agree to when you sign up and use it
btbuildem 12/4/2025|||
What are you going to do, sue them? The place is literally teeming with lawyers.
abustamam 12/4/2025||
What would you suggest the right thing to do would be?

Edit: I agree with you that we shouldn't let companies like this get away with what amounts to a slap on the wrist. But everything else seems irresponsible as well.

BrenBarn 12/4/2025||
I guess if I imagine the ideal world, it would be that you report it to the authorities and they impose penalties on the offender that are large enough that the company winds up significantly worse off than if they had just grown more slowly. In other words the punishment for moving fast and breaking things needs to be bad enough to outweigh the gains of doing so.

In the current world, I dunno. I guess it depends on what the company is. If it's something like a hedge fund or a fossil fuel company I think I'd be fine with some kind of wikileaks-like avenue for exposing it in such a way that it results in the company being totally destroyed.

magnetowasright 12/3/2025||
I am at a loss for words. This wasn't a sophisticated attack.

I'd love to know who filevine uses for penetration testing (which they do, according to their website) because holy shit, how do you miss this? I mean, they list their bug bounty program under a pentesting heading, so I guess it's just nice internet people.

It's inexcusable.

rashidujang 12/4/2025||
This was my impression after reading the article too. I have no doubt that the team at Filevine attempted to secure their systems and has probably thwarted other attackers, but they got their foot stuck in what is an unsophisticated attack. It only takes one vulnerability in the chain to bring down the site.

Security reminds me of the Anna Karenina principle: All happy families are alike; each unhappy family is unhappy in its own way.

GJim 12/4/2025||
> I am at a loss for words. This wasn't a sophisticated attack.

To be fair, data security breaches seldom are.

yieldcrv 12/3/2025|
I've worked in several "agentic" roles this year alone (I'm very poachable lol)

and otherwise well structured engineering orgs have lost their goddamn minds with move fast and break things

because they're worried that OpenAI/Google/Meta/Amazon/Anthropic will release the tool they're working on tomorrow

literally all of them are like this

trollbridge 12/4/2025|
Old school blue chip type of companies are like this too. They’ve thrown all the process and caution they used to have to the wind so that they can… apply AI to their IT org which isn’t even their core business?