I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training, and experience all tell me one thing: identify, notify, fix. But when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?
By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
I was in a very similar position some years ago. After a couple of rounds of “finish X for sale Y, then we'll prioritise those issues”, which I was young and scared enough to let happen, plus some pulling on heartstrings (“if we don't get this sale some people will have to go; we can't risk that happening to [redacted] and her new kids, can we?”), I just started fixing the problems and ignoring other tasks. I only got away with the insubordination because there were things I was the bus-count-of-one on at the time, and when they tried to butter me up with the promise of some training courses, I had already taken & passed some of those exams and had the rest booked in (the look of “good <deity>, he's got an escape plan and is close to acting on it” on the manager's face during that conversation was wonderful!).
The really worrying thing about that period is that a client had a pen-test done on their instance of the app, and it passed. I don't know how, but I know I'd never trust that penetration testing company (they have long since gone out of business, I can't think why).
At least compared to our internal digital security group, who couldn't fathom that “your test is wrong for how this app is configured; that path leads to a different app and its default behavior” meant it wasn't actually a failure. They were running a canned test for a PHP exploit. The app wasn't PHP; it was an SPA that always delivered the same default page for anything outside the /auth/* route.
After that my response became: show me an actual exploit with an actual data leak, and I'll update my code instead of your test.
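To make the false positive concrete, here's a minimal sketch of that routing behavior, assuming a Flask-style server; the framework, route, and file names are illustrative, not from the real app:

```python
from flask import Flask, send_file

app = Flask(__name__)

# Hypothetical auth flow: only /auth/* routes get distinct server-side handling.
@app.route("/auth/login", methods=["POST"])
def login():
    return {"status": "ok"}

# Catch-all: every other path serves the same SPA shell page.
@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def spa(path):
    # A scanner probing e.g. /phpmyadmin/index.php gets a 200 with this
    # page and may flag a "PHP exploit" that cannot exist here.
    return send_file("index.html")  # assumes index.html sits next to this file
```

Any canned check that treats “200 plus an HTML body” at an exploit path as a hit will report findings against what is always the same shell page.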
Simple as. Not your company? Not your problem. Notify, move on.
For an external company “not your company, not your problem” for security issues is not a good moral position IMO. “I can't risk the fallout in my direction that I'm pretty sure will result from this” is more understandable because of how often you see whistle-blowers getting black-listed, but I'd still have a major battle with the pernickety prick that is my conscience¹ and it would likely win out in the end.
[1] oh, the things I could do if it wasn't for conscience and empathy :)
The article doesn't say exactly, but if they used their company e-mail account to send the e-mail it's difficult to argue it wasn't related to their business.
They also put “I am offering” language in their e-mail, which I'm sure triggered the lawyers into interpreting this a different way. Not a choice of words I would recommend in a case like this.
I had a bit of a feral journey into tech: poor upbringing => self-taught college dropout waiting tables => founded an iPad point-of-sale startup in 2011 => sold it => Google from 2016 to 2023.
It was absolutely astounding to go to Google and find out that, after all this work to ascend to an Ivy League-esque employment environment... I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people; it suffered from the same incentives and disincentives as any group, and thus had the same boring, basic social problems as any group.
Put more concretely, a couple of vignettes:
- Someone with ~5 years of experience saying, approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved treat it as an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."
- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
Maybe not when it's as much as 20 seconds, but an old manager of mine would save fixing something like that for a “quick win” at some later time! He would even have artificial delays put in, enough to be noticeable and perhaps reported but not enough to be massively inconvenient, so we could take them out during the UAT process. It didn't change what the client finally got, but it seemed to work, especially if they thought they'd forced us to spend time on performance issues (those talking to us on the client side could report this back up their chain as a win).
Effectively, you put in bugs on purpose for an inspector to find, so they don't dig too deep into the difficult-to-solve problems.
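A minimal sketch of what that trick amounts to, with an invented config flag; every name here is illustrative:

```python
import os
import time

# Hypothetical knob: ship the build with artificial padding enabled, then
# "fix the performance issue" during UAT by turning it off.
UAT_PADDING_SECONDS = float(os.environ.get("UAT_PADDING_SECONDS", "0"))

def run_report(query: str) -> str:
    # Stand-in for the real work.
    return f"results for {query!r}"

def handle_report_request(query: str) -> str:
    if UAT_PADDING_SECONDS > 0:
        # Noticeable enough to get reported, not enough to be crippling,
        # and trivially removable later as a "quick win".
        time.sleep(UAT_PADDING_SECONDS)
    return run_report(query)
```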
Whatever the selection process is for *gestures broadly at everything*, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.
Lmao, Apple will not do anything about actual malware when it's reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off" and then never contacting you again. Ask me how I know. To their credit, I suspect they ran it through rudimentary (useless) automated checks, which it passed, and the malware was back in business like a day later.
If your expectation is they will do something about shitty coding practices half the App Store would be banned.
Ask while you are in an EU country, request an appeal, and initiate out-of-court dispute resolution.
Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)
What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.
Here, all databases with personal information must be registered there, and the data must be kept secure.
They did. It's in the article. Search for 'CSIRT'. It's one of the key points of the story.
This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.
And are you talking only about cybersecurity disclosure, or also liability, patent applications...? And about the scenario where you're both working for the same party, or for opposing parties?
If you read enough lawyer messages (they show up on HN all the time), you'll see they follow a pattern: looking tough, with an increasingly threatening posture. But often the laws they cite aren't applicable and wouldn't hold up in court or in public opinion.
Based on your experience, do you think there are specific ways the author could have communicated differently to elicit a better response from the lawyers?
Some things I can see. I think the way the programmer worded this sounds adversarial; I wouldn't have written it that way, but ultimately, there is nothing wrong with it: "I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure."
When the lawyer sent the NDA with extra steps, the programmer could have chosen to hire a lawyer at that point to get advice. Or they could have ignored it entirely (with the risk that the lawyer might sue), or proceeded to negotiate terms, which is what the programmer did (offering a different document to sign).
IIUC, at that point, the lawyer went away and it's likely they will never contact this guy again, unless he discloses their name publicly and trashes their security, at which point the lawyer might sue for defamation, etc.
Anyway, my take is that as soon as the programmer got a lawyer's email in reply (instead of the "CTO thanking him for responsible disclosure"), he should have talked to his own lawyer for advice. When I'm in situations similar to this, I use my lawyer as a sounding board. I ask questions like "What is the lawyer trying to get me to do here?", "Why are they threatening me instead of thanking me?", and "What would happen if I responded this way?"
Depending on what I learned from my lawyer, I could take a number of actions. For example, completely ignoring the company lawyer might be a good course of action: the company doesn't want to bring somebody to court and then have everybody read in a newspaper that the company had shitty security. Or writing a carefully worded threatening letter: "if you sue me, I'll countersue, and in discovery, you will look bad and lose." Or, and this is one of my favorite tricks, rewriting the document to what I wanted, signing that, and sending it back to them. Again, for all of those, I'd talk to a lawyer and listen to their perspective carefully.
The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.
Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.
As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.
I have a radical idea which goes even further: we should have legally mandated bug bounties. A law which says that if someone makes a proper disclosure of an actual exploitable security problem, then your company has to pay out. Ideally we could scale the payout to the importance of the infrastructure in question: vulnerabilities with little lasting consequence would pay little, while serious vulnerabilities with the potential for society-wide physical harm could pay out a few percent of the yearly revenue of the given company. For example, hacking the high score in a game would pay only a little, but a vulnerability which could collapse the electric grid or remotely command a car would pay a king’s ransom. Enough to incentivise a cottage industry of finding problems, hopefully resulting in a situation where the companies in question find it more profitable to find and fix the problems themselves.
I’m sure there is potential for a lot of unintended consequences. For example, I’m not sure how we could handle insider threats. On one hand, insider threats are real and companies should be protecting against them as best they can. On the other hand, it would be perverse to force companies to pay developers for vulnerabilities the developers themselves intentionally created.
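To make the scaling idea concrete, here's a toy sketch; the tiers and rates below are invented purely for illustration, not a serious proposal of numbers:

```python
# Invented severity tiers, each mapping to a fraction of yearly revenue.
PAYOUT_RATE = {
    "cosmetic": 0.00001,    # e.g. hacking a game's high score
    "data_leak": 0.001,     # e.g. personal data exposure
    "critical_infra": 0.02, # e.g. grid collapse, remote car control
}

def mandated_bounty(yearly_revenue: float, severity: str) -> float:
    """Legally mandated payout for a properly disclosed vulnerability."""
    return yearly_revenue * PAYOUT_RATE[severity]

# For a company with $500M in yearly revenue:
print(mandated_bounty(500_000_000, "cosmetic"))        # 5000.0
print(mandated_bounty(500_000_000, "critical_infra"))  # 10000000.0
```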
Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize about the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.
For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".
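A consent-based demonstration could be as small as the sketch below. The host, endpoint, and parameters are entirely hypothetical, since the article doesn't describe the real API; the point is the shape of the test: with the next diver's explicit permission, try to set their password without ever touching their email.

```python
import requests

BASE = "https://registration.example.org"  # hypothetical host

def demonstrate_reset_flaw(member_id: str, new_password: str) -> bool:
    """With the account owner's consent, attempt a password reset without
    any emailed token. If this succeeds, the flaw is proven on its own."""
    resp = requests.post(
        f"{BASE}/api/reset-password",  # hypothetical endpoint
        json={"member_id": member_id, "password": new_password},
        timeout=10,
    )
    return resp.ok
```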
So you never report to the affected organization directly but to the security organization, like you did. They would be better equipped to deal with it, and could also validate how serious the issue is and assign a reward as well.
So if you are a researcher, you report your finding and can't be sued or bullied by the organization that is the offending party in the first place.
Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.
The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.
Also, it would prevent researchers from gaining public credit and reputation for their work. This seems to be a big motivator for many.
Additionally, MITRE doesn’t coordinate a release date with you. They can be slow to respond sometimes, but in the end you just tell them to set the CVE to public on some date and they’ll do it. You’re also free to publish information on the vulnerability before MITRE has assigned a CVE.
The idea is to make it easier to fix the vulnerability than to sue to shut people up.
For credit assignment, the person could direct people to the non-profit’s website, which would confirm discovery via the CVE without exposing too many details that would allow the company to come after the individual.
This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.