Posted by mdhb 5 days ago
If it results in a new billion-dollar penalty, maybe it would've saved money to move him quietly to a cushy rest-and-vest advisory position, in which he's not allowed to see, do, or say anything.
> In his whistleblower complaint, Baig is requesting reinstatement, [...]
I don't understand the "reinstatement" part. Does he actually want to go back, and think that it wouldn't be a toxic dynamic?
(He already talked about retaliation. And then by going public the way he did, I'd think he burned that bridge, salted the earth for a mile around the bridge, and then nuked the entire metro area from orbit.)
Or is "reinstatement" simply something the lawyers just have to ask for, to ostensibly make him whole, but they actually neither want nor expect that?
“Reinstatement” is usually a legal formality in whistleblower cases: lawyers ask for it because the law says the remedy for retaliation is to make the employee whole, and it strengthens the case even if nobody expects it to happen. In reality, returning to the job is almost never feasible, so the request mostly serves as leverage for a financial settlement.
> In the United States, whistleblowers typically receive a percentage of the money collected by the government, ranging from 10% to 30% of fines or penalties.
Maybe he's just laying a foundation for an upcoming legal dispute?
But until he is paid, his position is that he wants to be reinstated.
Remember, kids: End to end encryption is useless if the "ends" are fully controlled by an (untrustworthy) third party.
You probably mean outside of the USA; it's huge in Europe/UK
(which doesn't contradict your main point)
The USA is special because it is the (only?) country where the iPhone has more users than Android.
If you give someone your number, they’ll text you on WhatsApp.
Russia: Telegram
Taiwan: Line
Japan: Line
By contrast, WhatsApp is best known to me for being used in Europe, Australia, and India.
For business comms, drop Instagram and move WhatsApp to first.
For Singapore, it seems LinkedIn messages are the go-to IM for business.
Europe P2P: Telegram is number one by a huge margin, then WhatsApp. B2B: WhatsApp, period.
Blue bubble isn't really a thing ever mentioned in France either, not enough iPhone market share.
Nobody uses iMessage. People with iPhone use WhatsApp too.
The user experience of iMessage used to be subpar, and now everyone has WhatsApp installed anyway; the feature set is the same and it works on all phone brands, so nobody feels like switching.
> According to the 115-page complaint, Baig discovered through
> internal security testing that WhatsApp engineers could “move
> or steal user data” including contact information, IP addresses
> and profile photos “without detection or audit trail”.
That isn't really the breach you're making it out to be. Profile photos, unless made private/contacts only, are already publicly visible, and so is "contact information".
Of course these are useful to intelligence services, but this doesn't mean that Baig found they don't have true end-to-end encryption.
If I were Evil-Tim-Cook, I'd have a deal with the FBI (and other agencies) where I'd hand over some user's data, in return for them keeping that secret and occasionally very publicly taking Apple to court demanding they expose a specific user and intentionally losing - to bolster Apple's privacy reputation.
The FBI wants its investigations to go to court and lead to convictions. Any evidence gained in this way would be exposed as coming from Apple, parallel construction notwithstanding:
* https://en.wikipedia.org/wiki/Parallel_construction
As for other agencies, I'm sure many have exploits to attack these devices and get spyware on them, and so may not need Apple's assistance.
Apple is part of PRISM, so there's approximately a 100% chance that anything you send to Apple via messages, cloud, or anything else gets passed on to the NSA, and consequently to any agency that wants it. The entire mass data collection program is probably unconstitutional and thus illegal, but every time it gets challenged, the case gets thrown out for lack of standing: nobody can prove it was used against them, so nobody has the legal standing to sue.
And the reason nobody can prove it is that its usage is never acknowledged in court. Instead there is parallel construction. [1] For instance, imagine the NSA finds out somebody is, e.g., muling some drugs. They tip off the police, who then find the car in question and invent some reason to pull it over - perhaps it was 'driving recklessly.' They coincidentally find the cache of drugs after searching the car because the driver was 'behaving erratically', and this 'coincidence' is how the evidence is introduced into court.
----
So getting back to Apple they probably want to have their cake and eat it too. By giving the NSA et al all they want behind the scenes they maintain those positive relations (and compensatory $$$ from the government), but then by genuinely fighting its normalization (which would allow it to be directly introduced) in court, they implicitly lie to their users that they're keeping their data protected. So it's this sort of strange thing where it's a facade, but simultaneously also real.
It's kind of wild that this is the part of the deep state MAGA just forgot about.
For instance, if someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat, they'd have to silence everyone in that group chat to ensure that the channel still seemed secure. I don't think our government, at least, is that competent or careful.
But also, people wayyyy overhype how much Apple tries to come off as privacy-forward. They sell ads and don't even allow you to deny apps access to the internet, and for the most part their phone security seems more focused on denying you control over your own phone than on denying a third party access to it. I think they just don't want the hassle of complying with warrants. Stuff like Pegasus would only be so easy to sell if you couldn't lean on the company to gain access, and I think it'd be difficult for hundreds of countries to conspire to obscure legal pressure. Finally, Apple generally has little to gain from reading your data, unlike other tech giants with perverse incentives.
Of course this is all speculation, but I do trust iMessage much more than I trust anything coming out of Meta, and most of what comes out of Google.
“Only” is doing an incredible amount of work there.
Unless you concoct something incriminating solely for the purpose of testing this, the incriminating thing discussed in the group chat previously happened in the real world. Ripples of information were created there and can be found (parallel construction).
If parallel construction fails, they always have the option to keep trying. We hear about the vast majority of cases, where opsec wasn't 100% foolproof. The few cases where it was foolproof, we just don't hear about.
Corrupt investigators can use parallel construction to pretend that the key breakthrough in the case was actually something legal.
Clearly, you are underestimating the intelligence and capabilities of the US government. They have a lot of money. Like... A lot of money.
* Recovery Keys
* Recovery Contact (someone who holds your recovery key in key escrow)
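The escrow idea behind a recovery contact can be sketched as a simple secret split: neither share alone reveals the key, but both together reconstruct it. This is a toy 2-of-2 XOR split for illustration only, not Apple's actual recovery-contact protocol; all names are hypothetical.

```python
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 split: share_a is uniformly random, share_b = key XOR share_a.
    Either share on its own is statistically independent of the key."""
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def recover_key(share_a: bytes, share_b: bytes) -> bytes:
    # XOR the shares back together to reconstruct the original key.
    return bytes(a ^ b for a, b in zip(share_a, share_b))

recovery_key = secrets.token_bytes(32)
my_share, contact_share = split_key(recovery_key)  # contact_share goes to the escrow holder
assert my_share != recovery_key and contact_share != recovery_key
assert recover_key(my_share, contact_share) == recovery_key
```

Real schemes generalize this to k-of-n thresholds (Shamir secret sharing) so that losing one contact doesn't lose the key.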
plain_msg = decrypt(encrypted_msg)
send_to_nsa(plain_msg)
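The two lines above can be expanded into a runnable sketch (all names hypothetical, with a toy cipher standing in for the real protocol): once a vendor-controlled client decrypts a message, the plaintext is available to whatever else that client chooses to do with it, so E2E encryption only protects the wire.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from repeated SHA-256 of the key. Illustration only,
    # not a real cipher.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

intercepted = []  # stands in for send_to_nsa()

def compromised_client_receive(key: bytes, wire_bytes: bytes) -> bytes:
    plain = decrypt(key, wire_bytes)
    intercepted.append(plain)  # the "end" leaks the plaintext after decryption
    return plain

key = b"shared session key"
wire = encrypt(key, b"meet at noon")
assert wire != b"meet at noon"            # ciphertext is opaque on the wire
msg = compromised_client_receive(key, wire)
assert msg == b"meet at noon"
assert intercepted == [b"meet at noon"]   # leaked despite "E2E"
```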
YES!
Also makes me wonder about Google's change w.r.t. Android security patches: moving to a quarterly cadence under the guise of "making it easier for OEMs" is actually just so that Paragon and other nation-state spyware vendors have access to the vulnerabilities for at least 4 months before they get patched.
Personally, it doesn't matter to me whether there are auditing systems in place if the data is readable in any way, shape, or form.
I haven’t touched a lot of these cybersecurity parts of the industry, especially the policy side, for a while…
… but I do recall that auditing was a stronger motivator than preventing. There were policies around checking the audit logs, not being able to alter audit logs and ensuring that nobody really knew exactly what was audited. (Except for a handful of individuals of course.)
I could be wrong, but “observe and report” felt like the strongest security guarantee available inside the policies we followed (PCI DSS Level 1), and prevention was a nice-to-have on top.
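The "not being able to alter audit logs" property is usually achieved by making the log tamper-evident: each entry commits to the previous one, so any after-the-fact edit or deletion breaks the chain. A minimal sketch (illustrative names, not any particular compliance tooling):

```python
import hashlib
import json

class AuditLog:
    """Append-only log as a hash chain: every entry stores the previous
    entry's hash, so modifying or removing any record is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"actor": actor, "action": action, "prev": prev}
        # Hash a canonical serialization of the record body.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("eng_42", "viewed profile photo")
log.append("eng_42", "exported contact list")
assert log.verify()
log.entries[0]["action"] = "nothing to see here"  # tamper with history
assert not log.verify()                            # the chain exposes it
```

In production the chain head would be periodically anchored somewhere the operators can't rewrite (a separate system or external timestamping); observing, as the comment says, rather than preventing.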
That strategy doesn't help a victim who's being stalked by an employee who can use your system to find their new home address. Stalkers often don't care if they get fired (or worse), so the deterrent doesn't work because they aren't behaving rationally to begin with.
I'm not talking about small businesses here, but large corporations that have more than enough resources to do better than just auditing.
> crime happens but perpetrators will be punished
Societies can't prevent crime without draconian measures that stifle all of our freedoms to an extreme degree. Corporations can easily put barriers in place that make it much more difficult (or impossible) to gain unauthorized access to customer information. The entire system is under their control.
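One such barrier is gating sensitive fields on an independently granted approval and recording every release, so raw access simply isn't available to a lone employee. A toy sketch (all names and the ticket mechanism are hypothetical):

```python
class AccessDenied(Exception):
    pass

# Toy policy layer: home addresses are only released when the request
# carries a ticket approved by someone other than the requester, and
# every release is recorded.
CUSTOMERS = {"u1": {"name": "Alice", "home_address": "12 Oak St"}}
APPROVED_TICKETS = {"T-100"}  # granted out-of-band by a second person
ACCESS_RECORD = []

def get_home_address(employee: str, user_id: str, ticket: str) -> str:
    if ticket not in APPROVED_TICKETS:
        raise AccessDenied(f"{employee} has no approved ticket")
    ACCESS_RECORD.append((employee, user_id, ticket))
    return CUSTOMERS[user_id]["home_address"]

# Legitimate, ticketed access succeeds and is recorded.
assert get_home_address("eng_7", "u1", "T-100") == "12 Oak St"

# Unticketed access is blocked outright, not merely logged.
denied = False
try:
    get_home_address("stalker", "u1", "T-999")
except AccessDenied:
    denied = True
assert denied
assert ACCESS_RECORD == [("eng_7", "u1", "T-100")]
```

The point is prevention rather than after-the-fact auditing: the unauthorized path fails before any data leaves the system.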
No amount of internal auditing, or externally verified ISO-standards-approval theater, will change the fact that, in my book, the company has firebombed each and every bridge that was ever available to it.
If the data has the potential to be misused, that is enough for me to consider it not secure for use.
Different culture from the blue app, or whatever they call it?
From people at Facebook circa 2018, I know that end user privacy was addressed at multiple checkpoints -- onboarding, the UI of all systems that could theoretically access PII, war stories about senior people being fired due to them marginally misunderstanding the policy, etc.
Note that these friends did not belong to WhatsApp, which was at that time a rather separate suborg.
The privacy violations and complete disregard for user data are too numerous to mention. There's a Wikipedia article that summarizes the ones we publicly know about.
Based on incentives alone, when the company's primary business model is exploiting user data, it's easy to see these events as simple side effects. When the CEO considers users of his products to be "dumb fucks", that culture can only permeate throughout the companies he runs.
Your comment talks about incentives, but you haven’t actually made a rational argument tying actual incentives to behaviour.
The problem is similar to that of government efforts to ban encryption: if you have a backdoor, everyone has a backdoor.
If Meta is collecting huge amount of user info like candy (they are) and using it for business purposes (they are), then necessarily those employees implementing those business purposes can do that, too.
You can make them pinky promise not to. That doesn't do anything.
Amazon has a similar problem with stalking via Ring cameras. You collect and store live feeds of every Ring camera? News flash: your employees can see them, too! They're gonna use that to violate your customers!
So whatever they claim publicly, and probably to their low-level employees, is just marketing to cover their asses and minimize the impact to their bottom line.
You claim it’s all talk, but it’s not much more effort to walk the walk. It doesn’t hurt profits to do it.
That being said, maybe I'm dumb but I guess I don't see the huge risk here? I could certainly believe that 1500 employees had basically complete access with little oversight (logging and not caring isn't oversight imo). But how is that a safety risk to users? User information is often very important in the day to day work of certain engineering orgs (esp. the large number of eng who are fixing things based off user reports). So that access exists, what's the security risk? That employees will abuse that access? That's always going to be possible I think?
If you have a sister, imagine her being stalked by an employee.
If you have crypto, imagine an employee selling your information to a third party.
1) leave quietly and tell no one: con - no one on HN gets to talk about it. The next person needing money does it anyway.
2) leave loudly when you're still poor: con - you get blacklisted from tech and die from a preventable disease working at a gas station without insurance. The company implements the policy anyway.
3) leave loudly when you're rich: con - people accuse you of selling out the users.
4) Don't join Meta in the first place
I have consistently told recruiters from Meta to leave me alone. It is a company that has knowingly done massive harm to our culture and our children, and I have no interest in ever working with or for them.
from here: https://www.courtlistener.com/docket/71293063/baig-v-meta-pl...
This further surprised Mr. Baig, as WhatsApp, which is known for its strong security brand externally, had such a small security team of just 6 engineers, and they were all only working on this tiny aspect of application security. All the other teams in WhatsApp were well staffed. The engineering team had about 1200 engineers. In addition, there were about 100 product managers, about 100 product designers, nearly 200 data scientists, etc. WhatsApp overall had about 3000 employees.
“Are we going to be in the same situation as Mudge at Twitter?”
WhatsApp is way beyond just texting and calling; it is basically global infrastructure now, used daily by governments, NGOs, and billions. This is not a startup screw-up, it's a public utility gone seriously wrong. Heads need to roll. Stop playing god. Secure the platform or step aside.

> Company refused to allocate more than around 10 engineers to the Security team at any point
If true, this tells the story here with security culture at WhatsApp. Assuming a backlog of known weaknesses (as any established code base will have), and the velocity that 100 PMs and 1200 SWEs implies, how would you do anything as a security team besides stick your fingers in the figurative holes in the dike? The ensuing conflict between Baig and his superiors about not fixing stuff is surely going to result in an assessment of "poor performance" but is likely just Baig giving a f** about user data.
As many holes as WhatsApp's "E2E" encryption has, this shows how valuable it still is. It's all metadata, not message content.
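To make "it's all metadata" concrete: even with every payload opaque, the operator sees who talks to whom, when, and how much, which is enough to reconstruct a social graph. A toy sketch with hypothetical fields:

```python
from collections import Counter

# Hypothetical server-side view of an E2E chat service: the content is
# ciphertext, but the routing fields are not.
messages = [
    # (sender, recipient, timestamp, size_bytes, ciphertext)
    ("user_a", "user_b", "2025-09-01T12:00Z", 240, b"\x9f..."),
    ("user_a", "user_b", "2025-09-01T12:05Z", 180, b"\x11..."),
    ("user_a", "user_c", "2025-09-02T09:00Z", 4096, b"\x07..."),
]

# Without decrypting anything, the operator recovers the communication
# graph and per-pair traffic volume - exactly the metadata at issue.
edges = Counter((s, r) for s, r, *_ in messages)
assert edges[("user_a", "user_b")] == 2
assert edges[("user_a", "user_c")] == 1
```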
There is no oversight of these monstrosities of any sort. I doubt anyone would have issues with the thesis that Meta would implement anything that might curb their user numbers unless it was mandated.
Why would they? They are beholden to their shareholders first. If it isn't illegal, then it isn't illegal; immoral perhaps, but that is not illegal, unless it is illegal.
My learned friends are going to have to really get their bowling arms warmed up for this sort of skit. For starters, you need a victim ... err complainant.
And not every CEO begins life in their company with "if you need any info just ask, they trust me, dumb fucks"
There are very, very few apps I really trust. E.g. the only mechanism I trust for communicating passwords securely is GPG, I wouldn’t even use Signal for that.
Onavo Protect, the VPN client from the data-security app maker acquired by Facebook back in 2013, has now popped up in the Facebook iOS app itself, under the banner “Protect” in the navigation menu. Clicking through on “Protect” will redirect Facebook users to the “Onavo Protect – VPN Security” app’s listing on the App Store.
https://techcrunch.com/2018/02/12/facebook-starts-pushing-it...
https://www.cnbc.com/amp/2022/11/17/meta-disciplined-or-fire...
A related scheme is the existence of brokers who will, for a fee, recover banned or locked accounts. User pays the broker $X, broker pays their contact at Meta $Y, and using internal tooling suddenly a ban or suspension that would normally put someone in an endless loop of automated vague bullshit responses gets restored.