
Posted by toomuchtodo 4 hours ago

I found a Vulnerability. They found a Lawyer(dixken.de)
182 points | 89 comments
xvxvx 4 hours ago|
I’ve worked in IT for nearly three decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.

I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training, and experience tell me one thing: identify, notify, fix. But when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.

Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?

By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

dspillett 1 hour ago||
> I came across a pretty serious security concern at my company this week. The ramifications are alarming. […] Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.

I was in a very similar position some years ago. After a couple of rounds of “finish X for sale Y, then we'll prioritise those issues”, which I was young and scared enough to let happen, and some pulling on heartstrings (“if we don't get this sale some people will have to go; can we risk doing that to [redacted] and her new kids?”), I just started fixing the problems and ignoring other tasks. I only got away with the insubordination because there were things I was the bus-count-of-one on at the time, and when they tried to butter me up with the promise of some training courses, I had already taken & passed some of those exams and had the rest booked in (the look of “good <deity>, he has an escape plan and is close to acting on it” on the manager's face during that conversation was wonderful!).

The really worrying thing about that period is that a client had a pen-test done on their instance of the app, and it passed. I don't know how, but I know I'd never trust that penetration testing company (they have long since gone out of business, I can't think why).

tracker1 1 hour ago||
I wish I could recall the name of a pen test company I worked with when I wrote my auth system... They were pretty great and found several serious issues.

At least compared to our internal digital security group, who couldn't fathom that "your test is wrong for how this app is configured; that path leads to a different app and default behavior" meant it wasn't actually a failure... in response to a canned test for a PHP exploit. The app wasn't PHP; it was an SPA that always delivered the same default page unless on an /auth/* route.

After that my response became: show me an actual exploit with an actual data leak and I'll update my code instead of your test.

calvinmorrison 4 hours ago|||
> By even flagging the issue and the potential fallout, I’ve put my career at risk.

Simple as. Not your company? Not your problem. Notify, move on.

dspillett 2 hours ago|||
I read that post as him talking about their company, in the sense of the company they were working for. If that was the case, then an exploit of an unfixed security issue could very much affect them either just as part of the company if the fallout is enough to massively harm business, or specifically if they had not properly documented their concerns so “we didn't know” could be the excuse from above and they could be blamed for not adequately communicating the problem.

For an external company “not your company, not your problem” for security issues is not a good moral position IMO. “I can't risk the fallout in my direction that I'm pretty sure will result from this” is more understandable because of how often you see whistle-blowers getting black-listed, but I'd still have a major battle with the pernickety prick that is my conscience¹ and it would likely win out in the end.

[1] oh, the things I could do if it wasn't for conscience and empathy :)

Aurornis 56 minutes ago|||
Their website says they're a freelance cloud architect.

The article doesn't say exactly, but if they used their company e-mail account to send the e-mail it's difficult to argue it wasn't related to their business.

They also put "I am offering" language in their e-mail, which I'm sure prompted the lawyers to interpret this a different way. Not a choice of words I would recommend in a case like this.

refulgentis 4 hours ago||
> These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

I had a bit of a feral journey into tech, poor upbringing => self taught college dropout waiting tables => founded iPad point of sale startup in 2011 => sold it => Google in 2016 to 2023

It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.

Put more concretely, couple vignettes:

- Someone with ~5 years experience saying approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think they're an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."

- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.

dspillett 2 hours ago|||
> A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.

Maybe not when it is as much as 20 seconds, but an old manager of mine would save fixing something like that for a “quick win” at some later time! He would even have artificial delays put in, enough to be noticeable and perhaps reported but not enough to be massively inconvenient, so we could take them out during the UAT process - it didn't change what the client finally got, but it seemed to work especially if they thought they'd forced us to spend time on performance issues (those talking to us at the client side could report this back up their chain as a win).

pixl97 1 hour ago||
There is a term for this but I can't remember what it's called.

Effectively, you put in bugs on purpose for an inspector to find so they don't dig too deep into difficult-to-solve problems.

smcin 28 minutes ago|||
'canary', 'review canary' or something.
bubblewand 3 hours ago||||
I've seen into some moderately high levels of "prestigious" business and government circles and I've yet to find any level at which everyone suddenly becomes as competent and sharp as I'd have expected them to be, as a child and young adult (before I saw what I've seen and learned that the norm is morons and liars running everything and operating terrifically dysfunctional organizations... everywhere, apparently, regardless how high up the hierarchy you go). And actually, not only is there no step at which they suddenly become so, people don't even seem to gradually tend to brighter or generally better, on average, as you move "upward"... at all! Or perhaps only weakly so.

Whatever the selection process is for gestures broadly at everything, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.

refulgentis 40 minutes ago||
Ty for sharing this, I don’t talk about it often, and never in professional circles. There’s a lot of emotions and uncertainty attached to it. It’s very comforting to see someone else describe it as it is to me without being just straightforwardly misanthropic.
xvxvx 4 hours ago|||
I would get fired at Google within seconds then. I’m more than happy to shine a light on bullshit like that.
snowhale 1 hour ago||
the NDA demand with a same-day deadline is such a classic move. makes it clear they were more worried about reputation than fixing anything.
pixl97 1 hour ago||
Reply: "sorry, before reaching out to you I already notified a major media organization with a 90 day release notice"
jbreckmckye 1 hour ago||
Typical shakedown tactic. I used to have a boss who would issue these ridiculous emails with lines like "you agree to respond within 24 hours else you forfeit (blah blah blah)"
viccis 4 hours ago||
This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or encrypted (not as digests) because they will email it to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here if anyone knows whether this is something that should be reported to Apple, if Apple will do anything, or if it's even in violation of any standards?
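The tell here can be reasoned about directly: a proper one-way digest means the service cannot recover the original password, so a "we'll email it to you" flow is only possible with plaintext or reversible encryption. A minimal sketch of salted one-way storage using only Python's stdlib (illustrative, not any particular app's implementation):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # tune upward in production

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). The plaintext is not recoverable from these,
    so a 'Forgot Password' flow can only issue a reset, never mail it back."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest from the candidate and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

So an app that mails back your original password is, by construction, storing it plaintext or reversibly encrypted.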
tracker1 1 hour ago||
FWIW, some types of applications may be better served with encryption over hashing for password access. Email being one of them, given the varying ways to authenticate, it gets pretty funky to support. This is why in things like O365 you have a separate password issued for use with legacy email apps.
greggsy 3 hours ago|||
If anything it’s just a violation of industry expectations. You as a consumer just don’t need to use the product.
tokyobreakfast 3 hours ago|||
>whether this is something that should be reported to Apple, if Apple will do anything

Lmao Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off" then never contact you again. Ask me how I know. To their credit, I suspected they ran it through useless rudimentary automated checks which passed and they were back in business like a day later.

If your expectation is they will do something about shitty coding practices half the App Store would be banned.

jopsen 3 hours ago||
> Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off"

Ask while you are in an EU country, request appeal and initiate Out-of-court dispute resolution.

Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)

wizzwizz4 1 hour ago||
I used to say "submit it to Plain Text Offenders: https://plaintextoffenders.com/", but the site appears defunct since… 2012‽ How time flies…
kazinator 3 hours ago||
> vulnerability in the member portal of a major diving insurer

What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.

cptskippy 3 hours ago|
What makes you think they don't retain them in-house?
kazinator 6 minutes ago|||
[delayed]
tracker1 1 hour ago|||
Depends on the usage... in-house counsel may open up various liabilities of their own, depending on how things present.
estebarb 2 hours ago||
If this was in Costa Rica the appropriate way was to contact PRODHAB about the leak of personal information and Costa Rica CSIRT ( csirt@micitt.go.cr ).

Here all databases with personal information must be registered there and data must be secure.

Aurornis 1 hour ago|
> If this was in Costa Rica the appropriate way was to contact PRODHAB about the leak of personal information and Costa Rica CSIRT ( csirt@micitt.go.cr ).

They did. It's in the article. Search for 'CSIRT'. It's one of the key points of the story.

hbrav 2 hours ago||
This is extremely disappointing. The insurer in question has a very good reputation within the dive community for acting in good faith and for providing medical information free of charge to non-members.

This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.

dekhn 1 hour ago|
I find often that conversations between lawyers and engineers are just two very different minded people talking past each other. I'm an engineer, and once I spent more time understanding lawyers, what they do, and how they do it, my ability to get them to do something increased tremendously. It's like programming in an extremely quirky programming language running on a very broken system that requires a ton of money to stay up.
smcin 21 minutes ago|||
Could you post on HN on that? Would be worth reading.

And are you only talking about cybersecurity disclosure, liability, patent applications... And the scenario when you're both working for the same party, or opposing parties?

dekhn 5 minutes ago||
I'm talking about any situation where a principled person who is technically correct gets a threatening letter from a lawyer instead of a thank you.

If you read enough lawyer messages (they show up on HN all the time) you will see they follow a pattern: tough language and an increasingly threatening posture. But often the laws they cite aren't applicable and wouldn't hold up in court or in public opinion.

BlueGreenMagick 43 minutes ago|||
I'm curious to hear your take on the situation in the article.

Based on your experience, do you think there are specific ways the author could have communicated differently to elicit a better response from the lawyers?

dekhn 7 minutes ago||
It would take a bit of time to re-read the entire chain and come up with highly specific ways. The way I read the exchange, the lawyer basically wants the programmer to shut up and not disclose the vulnerability, and is using threatening legal language. While the programmer sees themself as a responsible person doing the company a favor in a principled way.

Some things I can see. I think the way the programmer worded this sounds adversarial; I wouldn't have written it that way, but ultimately, there is nothing wrong with it: "I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure."

When the lawyer sent the NDA with extra steps: the programmer could have chosen to hire a lawyer at this point to get advice. Or they could ignore this entirely (with the risk that the lawyer may sue him?), or proceed to negotiate terms, which the programmer did (offering a different document to sign).

IIUC, at that point, the lawyer went away and it's likely they will never contact this guy again, unless he discloses their name publicly and trashes their security, at which point the lawyer might sue for defamation, etc.

Anyway, my take is that as soon as the programmer got a lawyer email reply (instead of the "CTO thanking him for responsible disclosure"), he should have talked to his own lawyer for advice. When I have situations similar to this, I use the lawyer as a sounding board. I ask questions like "What is the lawyer trying to get me to do here?", "Why are they threatening me instead of thanking me?", and "What would happen if I respond in this way?"

Depending on what I learned from my lawyer, I can take a number of actions. For example, completely ignoring the company lawyer might be a good course of action; the company doesn't want to bring somebody to court and then have everybody read in a newspaper that the company had shitty security. Or writing a carefully worded threatening letter: "if you sue me, I'll countersue, and in discovery you will look bad and lose". Or, and this is one of my favorite tricks, rewriting the document to what I wanted, signing that, and sending it back to them. Again, for all of those, I'd talk to a lawyer and listen to their perspective carefully.

Buttons840 2 hours ago||
I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.

The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.

Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.

As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.

krisoft 1 minute ago|
> we need strong legal protections for white-hat and even grey-hat security researchers or hackers.

I have a radical idea which goes even further: we should have legally mandated bug bounties. A law which says that if someone makes a proper disclosure of an actual exploitable security problem, then your company has to pay out. Ideally we could scale the payout based on the importance of the infrastructure in question. Vulnerabilities with little lasting consequence would pay little. Serious vulnerabilities with potential for society-wide physical harm could pay out a few percent of the yearly revenue of the given company. For example, hacking the high score in a game would pay only a little, while a vulnerability which can collapse the electric grid or remotely command a car would pay a king’s ransom. Enough to incentivise a cottage industry to find problems, hopefully resulting in a situation where the companies in question find it more profitable to find and fix the problems themselves.

I’m sure there is potential for a lot of unintended consequences. For example, I’m not sure how we could handle insider threats. On one hand, insider threats are real and companies should be protecting against them as best they can. On the other hand, it would be perverse to force companies to pay developers for vulnerabilities the developers themselves intentionally created.

projektfu 3 hours ago||
Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document this without actually accessing user data you have no permission to see. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly, and they were written at a time when most access was either direct dial-up or internal. The meaning of abuse can be twisted to mean rewriting a URL to access the next user, or inputting a user ID that is not authorized to you.

Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can instead document everything and hypothesize about the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.

For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".

alphazard 3 hours ago|
Instead of understanding all of this, and when it does or does not apply, it's probably better to disclose vulnerabilities anonymously over Tor. It's not worth the hassle of being forced to hire a lawyer, just to be a white hat.
cptskippy 3 hours ago||
Part of the motivation of reporting is clout and reputation. That sounds harsh or critical but for some folks their reputation directly impacts their livelihood. Sure the data controller doesn't care, but if you want to get hired or invited to conferences then the clout matters.
esafak 1 hour ago||
You could use public-key encryption in your reports to reveal your identity to parties of your choosing.
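A lighter-weight variant of the same idea (a hash commitment rather than full public-key encryption, so stdlib-only; the identity string below is hypothetical) is to publish a salted hash of your identity alongside the anonymous report, then reveal the preimage later only to parties of your choosing:

```python
import hashlib
import secrets

def commit(identity: str) -> tuple[str, str]:
    """Return (commitment, nonce). Publish the commitment with the anonymous
    report; keep identity and nonce private until you want to claim credit."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{identity}:{nonce}".encode()).hexdigest()
    return commitment, nonce

def reveal(identity: str, nonce: str, commitment: str) -> bool:
    """Anyone holding the published commitment can verify a later claim."""
    return hashlib.sha256(f"{identity}:{nonce}".encode()).hexdigest() == commitment

commitment, nonce = commit("researcher@example.org")  # hypothetical identity
assert reveal("researcher@example.org", nonce, commitment)
assert not reveal("impostor@example.org", nonce, commitment)
```

The random nonce matters: without it, anyone could brute-force the hash against a list of known researchers' names.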
desireco42 4 hours ago|
I think the problem is the process. Each country should have a reporting authority and it should be the one to deal with security issues.

So you never report to the actual organization but to the security organization, like you did. And they would be better equipped to deal with this, and could also validate how serious the issue is. Assign a reward as well.

So you are researcher, you report your thing and can't be sued or bullied by organization that is offending in the first place.

PaulKeeble 3 hours ago||
If the government wasn't so famous for also locking people up that reported security issues I might agree, but boy they are actually worse.

Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.

The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.

iamnothere 34 minutes ago|||
This would only work if governments and companies cared about fixing issues.

Also, it would prevent researchers from gaining public credit and reputation for their work. This seems to be a big motivator for many.

ikmckenz 3 hours ago|||
That’s almost what we already have with the CVE system, just without the legal protections. You report the vulnerability to the NSA, let them have their fun with it, then a fix is coordinated to be released much further down the line. Personally I don’t think it’s the best idea in the world, and entrenching it further seems like a net negative.
ylk 1 hour ago|||
This is not how CVEs work at all. You can be pretty vague when registering it. In fact they’re usually annoyingly so and some companies are known for copy and pasting random text into the fields that completely lead you astray when trying to patch diff.

Additionally, MITRE doesn’t coordinate a release date with you. They can be slow to respond sometimes but in the end you just tell them to set the CVE to public at some date and they’ll do it. You’re also free to publish information on the vulnerability before MITRE assigned a CVE.

desireco42 2 hours ago|||
Yeah, something like that. Nothing too heavy, just enough that individuals don't have to deal with evil corps directly.
janalsncm 2 hours ago||
Does it have to be a government? Why not a third party non-profit? The white hat gets shielded, and the non-profit has credible lawyers which makes suing them harder than individuals.

The idea is to make it easier to fix the vulnerability than to sue to shut people up.

For credit assignment, the person could direct people to the non profit’s website which would confirm discovery by CVE without exposing too many details that would allow the company to come after the individual.

This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.
