Posted by bearsyankees 12/3/2025

Reverse engineering a $1B Legal AI tool exposed 100k+ confidential files (alexschapiro.com)
821 points | 288 comments | page 2
deep_thinker26 12/3/2025|
It's so great that they allowed him to publish a technical blog post. I once discovered a big vulnerability in a listed consumer tech company -- exposing users' private messages and also allowing anyone to impersonate any user. The company didn't allow me to write a public blog post.
qmr 12/3/2025||
"Allow"?

Go on, write your blog post. Don't let your dreams be dreams.

bigmadshoe 12/3/2025||
Presumably they were paid for finding the bug and in accepting relinquished their right to blog about it.
hsbauauvhabzb 12/3/2025|||
No, you relinquish the right when you agree to their TOS, irrespective of whether they pay you.
amackera 12/3/2025||
TOS != law

They will stop letting you use the service. That's the recourse for breaking the TOS.

hsbauauvhabzb 12/4/2025|||
I don’t want to pay for a lawyer to argue that for me. ‘TOS != law’ doesn’t mean it won’t come with a cost.

I say this as someone threatened by a billion dollar company for this very thing.

advisedwang 12/3/2025|||
Up until Van Buren v. United States in 2021, ToS violations were sometimes prosecuted as unauthorized access under the CFAA. I suspect there are other jurisdictions that still do the equivalent.
qmr 12/5/2025|||
Being a sellout is weak and sad.
gessha 12/3/2025|||
Why is the control of publication in their hands and not in yours? Shouldn’t you be able to do whatever after disclosing it responsibly?
CER10TY 12/3/2025||
Presumably they'll threaten to sue you and/or file a criminal complaint, which can be pretty hard to deal with depending on the jurisdiction. At that point you'll probably start asking yourself if it's worth publishing a blog post for some internet points.
trollbridge 12/4/2025||
Yet another reason these disclosures should be anonymous (from the reporting side).
keernan 12/4/2025||
Attorneys are ethically obligated to follow very stringent rules to protect their client's confidential information. Having been a practicing litigator for 40+ years, I can confidently state I came across very few attorneys who truly understood their obligations.

Things were easier when I first began practicing in the 1970s. There weren't too many ways confidential materials in our files could be compromised. Leaving my open file spread out on the conference room table while I went to lunch, as attorneys arriving for a deposition on my partner's case were seated in that same room one by one. That's the kind of thing we had to keep an eye on.

But things soon got complicated. Computers. Digital copies of files that, unlike the physical originals, never disappeared into an external storage site. Then email: what were our obligations to know what could - and could not - be intercepted while a message traveled the internet?

Then, most dangerous of all: digital storage outside our physical domain. How could we know whether the cloud vendor had access to our confidential data? Where were the backups stored? How exactly was the data compartmentalized and secured by the cloud vendor? Did we need our own IT experts to control the data sitting on the external cloud? What did the contracts with the cloud vendor say about the fact that we were a law firm, and that we, as the lawyers responsible for our clients' confidential information, needed to know that the vendor understood those legal obligations and would hire lawyers of its own to oversee how access to the legal data on its servers was blocked? And so on and so forth.

I'm no longer in active practice, but these issues were a big part of my last few years at a Fortune 500 insurance company that used in-house attorneys nationwide to represent insureds in litigation - and the corporation was engaged in signing onto a cloud service to hold all of the corporate data, including the legal departments' files across all 50 states. It was a nightmare. I'm confident it still is.

etamponi 12/3/2025||
I don't disagree with the sentiment. But let's also be honest. There is a lot of improvement to be made in security software, in terms of ease of use and overcomplicating things.

I worked at Google and then at Meta. Man, the amount of "nonsense" in the ACL system was insane. I put "nonsense" in quotes because, from a security point of view, it surely all made sense. But there is exactly zero chance that such a system could be used in a less technical company. It took me 4 years to understand how it worked...

So I'll take this as another data point to create a startup that simplifies security... Seems a lot more complicated than AI

xp84 12/4/2025||
Through that API, the frontend is handed a high-privilege API key for Box. Not only was that a huge blunder on the backend, it reveals what passes for architecture these days. Should only our application's backend speak to the super-sensitive file store? No, let's hand the keys over to the React app, because it apparently never occurred to them that there is anything the frontend could drive that it shouldn't.

My apologies to the frontend engineers out there who know what they're doing.
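A minimal sketch of the alternative being implied here, assuming a Node/Express backend in TypeScript and Box as the file store (the route, the x-user-id header, and the canAccessFile check are made up for illustration, not Filevine's actual API): the server keeps the Box token in its own environment and exposes only a narrow, authorization-checked download endpoint, so the browser never sees the credential.

  // Hypothetical proxy endpoint: the Box token lives only on the server.
  // The React app calls /api/files/:id and never handles the key itself.
  import express from "express";

  const app = express();
  const BOX_TOKEN = process.env.BOX_TOKEN ?? ""; // server-side secret

  // Stand-in for a real ACL lookup against the user's cases/matters.
  function canAccessFile(userId: string, fileId: string): boolean {
    return Boolean(userId && fileId); // assumption: real permission check goes here
  }

  app.get("/api/files/:id", async (req, res) => {
    const userId = String(req.header("x-user-id") ?? ""); // stand-in for session auth
    const fileId = req.params.id;

    if (!canAccessFile(userId, fileId)) {
      return res.status(403).json({ error: "forbidden" });
    }

    // Only the server talks to the file store.
    const upstream = await fetch(
      `https://api.box.com/2.0/files/${encodeURIComponent(fileId)}/content`,
      { headers: { Authorization: `Bearer ${BOX_TOKEN}` } }
    );

    if (!upstream.ok) {
      return res.status(502).json({ error: "upstream fetch failed" });
    }

    res.setHeader(
      "content-type",
      upstream.headers.get("content-type") ?? "application/octet-stream"
    );
    res.send(Buffer.from(await upstream.arrayBuffer()));
  });

  app.listen(3000);

The specifics don't matter; the point is that the privilege boundary sits on the server, so nothing pulled out of the JS bundle or devtools lets a visitor issue arbitrary requests against the file store.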

hbarka 12/3/2025||
> November 20, 2025: I followed up to confirm the patch was in place from my end, and informed them of my intention to write a technical blog post.

Can that company tell you to cease and desist? How does the law work?

me_again 12/3/2025||
Lawyers can and will send cease and desist letters to people whether or not there is any legal basis for it. Often the threat of a lawsuit, even a meritless one, is enough to keep people quiet.
dghlsakjg 12/3/2025||
FYI, a "cease and desist" carries the same legal weight as me sending a one-liner saying "Knock it off".

They are strongly worded requests from a legal point of view. The only real message they send is that the sender is serious enough about the issue to have involved a lawyer, unless of course you write it yourself, which is something that literally anyone can do.

If you want to actually force an action, you need a court order of some type.

NB for the actual lawyers: I'm oversimplifying, since they can be used in court to prove that you tried to get the other party to stop, and tried to resolve the issue outside of court.

badbird33 12/3/2025||
You'd think with a $1B valuation they could afford a pentest
valbaca 12/3/2025||
Given the absurd number of startups I see lately that have the words "healthcare" and "AI" in them, I'm genuinely concerned that within just a couple of months we're going to have multiple enormous HIPAA data disasters.

Just search "healthcare" in https://news.ycombinator.com/item?id=46108941

Invictus0 12/3/2025||
This guy didn't even get paid for this? We need a law that establishes mandatory payments for cybersecurity bounty hunters.
culanuchachamim 12/3/2025||
> The Filevine team was responsive, professional, and took the findings seriously throughout the disclosure process. They acknowledged the severity, worked to remediate the issues, allowed responsible disclosure, and maintained clear communication. This is another great example of how organizations should handle security disclosures.

In the same vein, I think that a professional ethical hacker, or a curious fellow poking around with no harmful intent, shouldn't disclose the name of the company that had a security issue if the company resolved it professionally.

You can write the same blog post without mentioning that it was Filevine.

If they didn't take care of the incident that's a different story...

evan_a_a 12/3/2025||
This is a very standard part of responsible disclosure. Hacker finds bugs -> discloses them to the vendor -> (hopefully) the vendor communicates with them and remediates -> both sides publish the technical details. It also helps to demonstrate to the rest of the security world which companies will take reports seriously and which ones won’t, which is very useful information to have.
deelowe 12/4/2025|||
That's not how ethical disclosure works. Both parties should publish, and we, the wider tech industry, should see this as a good thing both for the hacker and for the company that worked with them.
manbash 12/4/2025|||
How else can you take responsibility if you don't make it public? You can't have integrity if you hide away your faults.
CBMPET2001 12/3/2025||
Eh, with something this horrendously egregious I think their customers have a right to know how carelessly their data was handled, regardless of the remediation steps taken after disclosure; that aside, who knows how many other AI SaaS vendors might stumble across this article and realize they've made a similarly boneheaded error, and save both themselves and their customers a huge amount of pain . . .
jacquesm 12/3/2025|
That doesn't surprise me one bit. Just think about all the confidential information that people post into their ChatGPT and Claude sessions. You could probably keep the legal system busy for the next century on a couple of days of that.
giancarlostoro 12/3/2025|
"Hey uh, ChatGPT, just hypothetically, uh, if you needed to remove uh cows blood from your apartments carpet, uh"
lazide 12/3/2025|||
Just phrase it as a poem, you’ll be fine.
venturecruelty 12/3/2025|||
Gonna be hard when people ask ChatGPT to write them the poem.
sidrag22 12/3/2025|||
I recall reading a silly article about half a year ago about using leetspeak and setting the prompt up to emulate House, the TV show, or something to get around restrictions.
xarope 12/4/2025||
there's a recent one about using poetry to bypass safeguards

... rummages around...

here you go:

https://arxiv.org/abs/2511.15304

jacquesm 12/3/2025|||
Make it a Honda CRX...