Posted by toomuchtodo 8 hours ago
ai;dr
This is AI slop.
Use your own words!
I would rather read the original prompt!
Am I the only one who can't stand this AI slop pattern?
So you never report to the actual organization but to the security organization, like you did. They would be better equipped to deal with this, maybe also validate how serious the issue is, and assign a reward as well.
So you're a researcher: you report your finding and can't be sued or bullied by the organization that is at fault in the first place.
Right now, the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.
The only practical advice is to pretend it doesn't exist, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.
Additionally, MITRE doesn't coordinate a release date with you. They can be slow to respond sometimes, but in the end you just tell them to set the CVE to public on some date and they'll do it. You're also free to publish information on the vulnerability before MITRE has assigned a CVE.
The idea is to make it easier to fix the vulnerability than to sue to shut people up.
For credit assignment, the person could direct people to the non-profit's website, which would confirm discovery via the CVE without exposing so many details that the company could come after the individual.
This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.
Also, it would prevent researchers from gaining public credit and reputation for their work. This seems to be a big motivator for many.
So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.
By the way, I have a story about accidentally hacking an online portal at our school. It didn't go far and I was "caught", but anyway, this is how we learn to be more careful.
I believe that in every single system like that it's fairly easy to find a vulnerability. Nobody cares about them, and the people who make those systems don't have enough skill to do it right. Data is going to be leaked; that's the unfortunate truth. And it gets worse with the arrival of AI: since it has zero understanding of what it is actually doing, it will make mistakes that cause more data leaks.
Even if you don't consider yourself an evil person, would you still stay the same knowing about a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook way".
A) In the EU, GDPR will trump whatever BS they want to try
B) No confirmation that affected users were notified
C) Aggro threats
D) Nonsensical threats, sourced to a Data Privacy Officer with seemingly zero scruples and little experience
Due to B), there's a strong responsibility rationale.
Due to the rest, there's a strong name-and-shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.