Posted by impish9208 5 days ago
I think that in all of these cases it would have been no worse for the companies in question if they had just sent out a dry, "just the facts, ma'am" report of what actually happened, without any of the BS "the security of our customer data is our top priority!" boilerplate that always accompanies these kinds of breach disclosures. E.g. something like:
On <date>, due to a vulnerability in SolarWinds, a third-party vendor that provides network monitoring software for us, we detected the following breaches of customer data:
1. xxx
2. yyy
The steps we are currently taking, and what you should do: zzz.
----
Perhaps one good thing that can come out of this is that some sort of "standard" format for breach disclosures comes about (think the "Nutrition Facts" labels on food boxes in the US). All I do when I see companies trying to minimize breach disclosures is assume they're bullshitting anyway.
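A "Nutrition Facts"-style disclosure could be machine-readable as well as human-readable. As a sketch only (no such standard exists today, and every field name below is invented for illustration):

```python
import json

# Hypothetical standardized breach-disclosure record, loosely following
# the template in the comment above. All field names are invented.
disclosure = {
    "date_detected": "<date>",
    "root_cause": "Compromised SolarWinds Orion update (third-party vendor)",
    "data_affected": [
        {"category": "email addresses", "records": 250_000},
        {"category": "password hashes", "records": 120_000},
    ],
    "remediation_steps": [
        "Rotated all service credentials",
        "Removed affected Orion servers from the network",
    ],
    "customer_actions": [
        "Reset your password",
        "Enable multi-factor authentication",
    ],
}

# A fixed schema makes disclosures diffable and comparable across
# companies, the way nutrition labels are comparable across foods.
print(json.dumps(disclosure, indent=2))
```

The record counts and steps above are placeholders; the point is that a fixed schema leaves much less room for "a limited number of repositories"-style vagueness.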
But most companies just aren't built for that. They're barely-legal Ponzi schemes. The board and their appointed CxOs are selected specifically on the basis of how much they can drive the stock price up. This results in companies making lots of terribly short-sighted decisions.
In the specific case of breach disclosures, any bad news about a company tends to create uncertainty, which makes short-term investors and speculators close their positions, which drops the price. This drop tends to be short-term, but it imperils the liquidity of the investment, and liquid investments tend to be more valuable, so...
"Most companies are Ponzi schemes focused on short-term stock price appreciation" is a criticism that has been around for decades. If that's really the case, the performance of the S&P 500 shows that it's either false, or a really long con that somehow hasn't collapsed yet.
A far more straightforward explanation is that CEOs don't like delivering bad news, especially news that broke on their watch, so they try to bury it. Covering up mistakes is something even kids do. There's no need to invoke "most companies are [...] barely-legal Ponzi schemes".
Yep: Man at top has ego. News at 11.
Like, "reputational damage": obviously nobody cares if a company gets breached. 99% of the customers won't notice, 99% of the remainder won't understand it, and the competitors are probably just as much at risk; all you need to do is issue a PR note and maybe offer free credit monitoring (some US peculiarity), and you're done. Same for most other things leading to "reputational damage". It feels like it's obviously a loss of $nothing, so why do CFOs and CISOs seem to put so much weight on this impact category?
Well, I haven't thought about stock prices, and their lack of correlation with customer experience. My bad.
--
[0] - I suppose I had all the things I needed to figure it out, but somehow I didn't connect the dots. And/or I was too busy trying to ensure our fancy probability math wasn't bullshit to pay attention to the larger context.
> The investigation revealed that the threat actor accessed and downloaded a limited number of our source code repositories, as the threat actor is reported to have done with other victims of the SolarWinds Orion supply chain attack. We believe that the source code downloaded by the threat actor was incomplete and would be insufficient to build and run any aspect of the Mimecast service. We found no evidence that the threat actor made any modifications to our source code nor do we believe that there was any impact on our products. We will continue to analyze and monitor our source code to protect against potential misuse.
But the SEC feels this was misleading, because they did not specify which source code repositories were targeted or what percentage of the code in those repositories was exfiltrated. That's the dynamic that drives these kinds of disclosures: oversharing driving demands for even more absurd levels of oversharing. They had to go calculate that precisely 76% of their M365 interop code was exfiltrated. Is that information worth the cost of producing it, or even valuable to anyone in any way?
It's valuable to the SEC, because they're the ones tasked with enforcing these rules, and specifics are what allow for enforcement. If you publish an actual percentage, then they can ding you for lying if the percentage was wrong. Being vague isn't misleading on its own, but it can be used to mislead.
If they actually know what was exfiltrated, then putting specifics in the disclosure should be a trivial matter. Maybe not a percentage of lines in the codebase, but you've got to give the SEC enough that they could potentially check it and determine if it was a lie. And "a limited number" isn't specific enough to do that.
But no company will deliberately overstate the customer impact; think of what it would do to their bottom line. They much prefer spending a bunch of money to avoid overstating. Exactly.
"If only they were allowed to understate customer impact, they could harvest even more of that reputational arbitrage" is not a very compelling justification.
This assumes there is someone on staff capable of writing a no-nonsense diagnosis.
> Avaya will pay a $1 million civil penalty;
> Check Point will pay a $995,000 civil penalty; and
> Mimecast will pay a $990,000 civil penalty.
With the exception of Mimecast, these are companies that are bringing in billions of dollars in revenue annually. How is this supposed to deter them?
The SolarWinds supply chain attack is one of the most brilliant cyber attacks in recent history. They hit a trainload of gold bars, and had as much as 14 months of dwell time with potentially 18,000 customers. Discovery must have been disappointing for the attackers.
If you follow the most important rule, secrecy, you get plausible deniability and smaller fines.
* i.e. a practical precedent, not a legal one
Unisys and Avaya are both security vendors. This absolutely is a bad look for them, as almost every Security RFP asks about internal controls and how a vendor has remediated against these issues, and this is ammunition for any competitor to ask a prospect to re-evaluate purchases from either due to misrepresenting their security procedures.
Furthermore, Unisys only has an operating profit of around $200M a year, so a $4M fine is fairly brutal (that's an entire security team's operating budget for a company at Unisys' size).
Avaya's is smaller still, so that $1M is fairly brutal for them
Furthermore, security vendors like Avaya and Unisys could arguably be in breach of contract with customers for misrepresenting their internal security protocols.
Often this results in a divestiture: unable to clean up, so we'll sell the client base to someone with better systems. (In theory. Almost inevitably, this drags a few legacy systems over anyway.)
The reason why companies get breached is because the systems being breached are all legacy. Company A buys company B who bought company C, which merged with company D. C fires D's old IT department, because it's redundant, so now D's billing system is being managed by C's IT department. C then sells itself to B, who has a much more robust billing system. At this point, it'd make sense to replace the billing system from D, but everyone who knew how it worked got fired in the C/D merger. So it sits around because nobody wants to break that part of the business. Then A buys B and does another round of layoffs, so anyone who even knew about this is gone.
Ten years and hundreds of iterations of this exact cycle later, you get an e-mail from a stranger saying they found all your customer records being sold on a cybercrime forum. Your IT department scrambles to remediate a breach in a system they've never heard of that nobody remembers installing or maintaining. It's just always been there. Corporate amnesia runs deep. People are finding forgotten old servers running unpatched versions of Windows Server 2003 that were so ritualistically overlooked you'd need to be high on Class Z mnestics just to perceive them.
Every enterprise IT department is like this. That's why companies get breached so damned often. There is never enough time in the budget to properly document legacy systems, nor are the decision-makers at the top even aware that they exist. Their job is to eat things, and they eat voraciously. If you want to stop this from happening, you need to make M&A illegal, not just inflict more pain on the invisible arms the corporate body cannot perceive pain from.
That's because it's not understood what a liability allowing this to occur is. Perhaps if we fine them based on revenue they would understand that IT is a core part of their company and can no longer live on the edges of the business units.
No, it's because they understand what the liability of allowing this is (minimal and inconsequential). So why bother?
Look at the stock price hit companies take when they have security breaches. The impact is basically none apart from a short-term dip which recovers soon enough. Or look at the fines companies get for breaches, always a minuscule percentage of their profit.
This is why companies will keep short-changing security, because to them it's just a cost that doesn't really matter. And objectively, it doesn't matter when viewed from the lens of maximizing profit at all cost.
Did CrowdStrike go out of business yet as a consequence of their breach? Did T-Mobile? Did Equifax? These all should have, but all are going strong.
So yes, impact is minimal and inconsequential.
Depressing.
Percent-of-revenue fines are regressive with respect to margin.
10% of Walmart's revenue is 4 years' profits. 10% of Equifax's is a few quarters'. Moreover, you'd have a bureaucrats' delight of companies splitting revenues across entities while courts have to litigate common control claims. Unless you have a good reason to punish low-margin businesses more heavily than high-margin ones, this is an inefficient scheme.
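The arithmetic behind that claim, using rough, illustrative figures (ballpark recent annual numbers, not audited data; the exact values don't change the conclusion):

```python
# Approximate annual revenue and net profit in billions of USD.
# These are ballpark figures for illustration only.
companies = {
    "Walmart": {"revenue": 611.0, "profit": 15.5},   # low-margin retailer
    "Equifax": {"revenue": 5.3, "profit": 0.7},      # high-margin data firm
}

FINE_RATE = 0.10  # hypothetical 10%-of-revenue fine

for name, c in companies.items():
    fine = FINE_RATE * c["revenue"]
    # How many years of profit does the same percentage fine wipe out?
    years_of_profit = fine / c["profit"]
    print(f"{name}: ${fine:.1f}B fine = {years_of_profit:.1f} years of profit")
```

With these figures, the same 10% fine costs Walmart roughly four years of profit but Equifax well under one year, which is the regressivity being described: the lower your margin, the harder a revenue-based fine hits.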
Better: fines based on damages, trebled.
Except damages for data leaks are kind of hard to compute, since in practice they're $0 until some of the data is provably used to cause some non-$0 worth of damage down the line.
Through private action, yes. Use statute to define damages as a function of the number of people affected, the type of data released, and whether the company self-reported or was caught by the public or a regulator. Add enhancements if the company was reckless, the data was exposed for longer than a month, or it was accessed by foreign adversaries.
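The proposal above amounts to a statutory-damages formula. A minimal sketch, where every rate and multiplier is invented purely for illustration (a real statute would set these through legislation):

```python
# Hypothetical per-person base rates by data category, in dollars.
# All values below are made up to illustrate the structure.
BASE_RATE = {
    "email": 1.0,
    "financial": 25.0,
    "health": 50.0,
}

def statutory_damages(people_affected, data_type, self_reported,
                      reckless=False, exposed_over_month=False,
                      foreign_access=False):
    """Damages = people x per-record rate, scaled by conduct multipliers."""
    amount = people_affected * BASE_RATE[data_type]
    if not self_reported:       # caught by the public or a regulator
        amount *= 3
    if reckless:                # enhancement: reckless handling
        amount *= 2
    if exposed_over_month:      # enhancement: long exposure window
        amount *= 1.5
    if foreign_access:          # enhancement: foreign adversary access
        amount *= 2
    return amount

# e.g. 1M financial records, caught rather than self-reported:
# 1_000_000 * 25 * 3 = $75M
print(statutory_damages(1_000_000, "financial", self_reported=False))
```

The point of the structure, not the numbers: self-reporting is cheaper than getting caught, and each aggravating factor multiplies rather than adds, so damages become computable without waiting for downstream harm to materialize.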
Several recent examples would have fallen foul of this: Grenfell Tower, Tesla FSD, the Boeing 737 MAX, Thames Water, United Utilities, and the EA.