Most people just don't care enough until after they're hacked, at which point they care just enough to wish they'd done something more previously, which is just shy of enough to start doing something differently going forward.
It's not that normies are too stupid to figure this out; it's that they make risk-acceptance decisions about risks they don't thoroughly understand or care enough about to want to understand. My personal observation is that even thinking about potential future technology risks at all (let alone changing behavior to mitigate those risks) strikes normies as an almost pathological level of proactive preparation, the same way that preppers building bunkers with years of food and water storage look to the rest of us.
I do use a password manager and disk encryption, just in case of theft. Still, it feels like I'm one stupid sleepy misclick away from losing stuff, and no amount of MFA or whatever is going to save me; it actually feels like added complexity, which leads to mistakes.
(Have you ever attended an academic security conference like Usenix Security?)
Anyone else see all the drones flying over a peaceful No Kings assembly?
And even if the CIA/Mossad/NSA/whoever is "interested" in you - this is the era of mass surveillance. The chances that you're worth a Stuxnet level of effort are 0.000000001%. Vs. a 99.999% chance that they'll happily hoover up your data, if you make it pretty easy for their automated systems to do that.
Honestly, the oversimplification here reads to me more like something Bob Jones could use to justify not caring about "b0bj0nes" not being a great password.
Best would be non-text, binary strings. Since I already use a password manager, I don't really need to type passwords by hand. But I do understand most people prefer text passwords that could be entered by hand if necessary.
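As a rough sketch of what that looks like in practice: Python's standard `secrets` module can generate a manager-stored credential directly from OS randomness. The 32-byte size here is my own illustrative choice, not a recommendation from anyone in this thread.

```python
import secrets

# 32 random bytes ~ 256 bits of entropy. base64url encoding keeps it
# pasteable into any text password field, even though you'd never
# type it by hand -- the password manager does that for you.
password = secrets.token_urlsafe(32)

print(len(password))  # 43 URL-safe characters
print(password)
```

If a site insists on "typeable" text anyway, the same module's `secrets.choice` over a wordlist gives you the hand-enterable fallback mentioned above.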
Or: This is Bob "Dim Bulb" Jones we're talking to. KISS, and maybe we can convince him to upgrade his password to "iwantacoldbeernow".
Sorry, your password does not meet complexity requirements because it does not contain at least one of each of the following: uppercase letters, lowercase letters, numeric digits, nonalphanumeric symbols.
“I want 1 cold beer now.”
Sorry, your password may not contain spaces.
“Iwant1coldbeernow.”
Sorry, your password is too long.
“Iwant1beernow.”
Sorry, your password is too long.
“1Beer?”
Sorry, your password is too short.
“Password1!”
Thank you. Your password has been changed.
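The dialogue above can be condensed into a toy validator. The specific rules (8-16 characters, no spaces, one of each character class) are illustrative assumptions, not any real site's policy, but they reproduce the punchline: every decent passphrase is rejected, and "Password1!" sails through.

```python
import re

def complexity_ok(pw: str) -> bool:
    """Toy complexity check of the kind the dialogue parodies.
    Rules here are made up for illustration."""
    if not (8 <= len(pw) <= 16):   # "too short" / "too long"
        return False
    if " " in pw:                  # "may not contain spaces"
        return False
    # one of each: lowercase, uppercase, digit, symbol
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(c, pw) for c in classes)

print(complexity_ok("iwantacoldbeernow"))       # False: 17 chars, no digit/symbol
print(complexity_ok("I want 1 cold beer now.")) # False: spaces, too long
print(complexity_ok("1Beer?"))                  # False: too short
print(complexity_ok("Password1!"))              # True: the weakest one wins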
Debian is probably the only example of a successful public public-key infrastructure, but SSH keys are a perfectly serviceable form of public-key infrastructure in everyday life. At least for developers.
Mickens's skepticism about security labels is, however, justified; the problems he identifies are why object-capability models seem more successful in practice.
I do agree that better passwords are a good idea, and, prior to the widespread deployment of malicious microphones, were adequate authentication for many purposes—if you can avoid being phished. My own secure password generator is http://canonical.org/~kragen/sw/netbook-misc-devel/bitwords...., and some of its modes are memorable correct-horse-battery-staple-type passwords. It's arguably slightly blasphemous, so you may be offended if you are an observant Hindu.
The only thing I see is that both are contained and quarantined. The threat of both has been neutralized to the degree where I think the espionage agencies of all these countries are playing along together to keep the engine of their craft going uninterrupted without fuss.
In other words, you have to be gullible to think an embassy cares about protecting Assange. It’s a phone call from the secret service director saying “Keep him there for now, it’s where we want him.”
Can you elaborate on this? I don't understand the context for malicious microphones and how that affects secure passwords.
Microphones on devices such as Ring doorbell cameras are explicitly exfiltrating audio data out of your control whenever they're activated. Features like Alexa and Siri require, in some sense, 24/7 microphone activation, although normally that data isn't transmitted off-device except on explicit (vocal) user request. But that control is imposed by non-user-auditable device firmware that can be remotely updated at any time.
Finally, for a variety of reasons, it's becoming increasingly common to have a microphone active and transmitting data intentionally, often to public contexts like livestreaming video.
With the proliferation of such potentially vulnerable microphones in our daily lives, we should not rely too heavily on the secrecy of short strings that can easily leak through the audio channel.
But this is an example of the kind of thing the OP is talking about. You're probably not at a very realistic risk of having your password hacked via audio exfiltrated from the Ring camera at your front door. Unless it's Mossad et al who want your password.
Oh, you mean PEP 506. I wrote this program in 02012, and PEP 506 wasn't written until 02015, didn't ship in a released Python until 3.6 in 02016, and even then was only available in Python 3, which I didn't use because it basically didn't work at the time.
PEP 506 is just 22 lines of code wrapping SystemRandom. There's no advantage over just using SystemRandom directly.
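A quick demonstration of that equivalence (as implemented in CPython 3.6+, where `secrets` simply re-exports `random.SystemRandom` and builds its helpers on one instance of it):

```python
import random
import secrets

# secrets (PEP 506) re-exports random.SystemRandom directly, so both
# spellings below draw from the same os.urandom-backed source.
assert secrets.SystemRandom is random.SystemRandom

sysrand = random.SystemRandom()
words = ["correct", "horse", "battery", "staple"]

# Equivalent four-word passphrases, one via each interface:
print(" ".join(secrets.choice(words) for _ in range(4)))
print(" ".join(sysrand.choice(words) for _ in range(4)))
```

So code written against `SystemRandom` before 2016 wasn't missing anything; `secrets` mostly adds a discoverable name.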
“Manning up and facing trial” sounds fair in theory, but under the Espionage Act there’s no public-interest defense. He’d be barred from explaining motive or the public value of the disclosures, much of the case would be classified, and past national-security whistleblowers have faced severe penalties. That’s why he sought asylum.
I'd argue that for every Assange and Snowden, there are 100 (1k? 100k?) people using Tor for illegal, immoral, and otherwise terrible things. If you're OK with that, then sure, fine point.
> SSH keys
Heartbleed and Terrapin were both pretty brutal attacks on common PKI infra. It's definitely serviceable and very good, but vulnerabilities can go unnoticed for years, and when they are found they're devastating.
That was the quote I was referring to. Also, of course I didn't say that no one should have any privacy; I simply implied a high moral cost for this particular form of privacy.
It is accurate to say that Tor's hidden service ecosystem is focused on drugs, ransomware, cryptocurrency, and sex crime.
However, there are other important things happening there. You can think of the crime as cover traffic to hide those important things. So it's all good.
The third result was "FREE $FOO PORN" where $FOO was something that nearly the entire human race recognizes as deeply Not Okay and is illegal everywhere.
I wonder what % of the heinous-sounding sites are actually providing the things they say they are.
I'm sure that some (most?) of them actually offer heinous stuff. But surely some of them are honeypots run by law enforcement and some are just straight up scams. However, I have no sense of whether that percentage is 1% or 99%.
https://scholar.harvard.edu/files/mickens/files/thenightwatc...
> A systems programmer will know what to do when society breaks down, because the systems programmer already lives in a world without law.
so unless you're worth all that trouble, you're really just trying to avoid being "low hanging fruit" compromised by some batch script probing known (and usually very old) vulnerabilities
or they just pay the $2100 per API call to download it from the telco or social media company.
it's not improper if you agreed to give a company the ability to sell your data to anyone -- the government is anyone, and they have the money.
Alas, no matter how hard we try to trust our compilers, we must also adopt methods to trust our foundries.
Oh, we don't have our own foundries?
Yeah, that's the real problem. Who owns the foundries?
>Otherwise you're just going to be making stupid mistakes that real cryptographers and security folks found and wrote defenses against three decades ago.
Yeah, that's the point: learn those same techniques, get it in the guild, and watch each other's backs.
Rather than just 'trusting' some faceless war profiteers from the midst of an out of control military-industrial complex.
While having your own foundry is undoubtedly a good thing from the perspective of supply chain resiliency, if hacking is what you're worried about there are probably easier ways to mitigate (e.g. a bit more rigor in QC).
There's a reason the NSA can get Intel CPUs without IME and you can't. Given the incentives and competence of the people involved, it's probably an intentional vulnerability that you can't escape because you don't fab your own chips. There's strong circumstantial evidence that Huawei got banned from selling their products in the US for doing the same thing. And the Crypto AG backdoor (in hardware but probably not in silicon) was probably central to a lot of 20th-century international relations, though that wasn't publicly known until much later.
And this is before we get into penny-ante malicious hardware like laser printer toner cartridges, carrier-locked cellphones, and HDMI copy protection.
No amount of QC is going to remove malicious hardware; at best, it can tell you it's there.
This is also a completely different threat model but whatever.
It might not be an intentional backdoor, but it very much seems designed with out-of-band access in mind, with the AMT remote management features and the fact that the network controller has DMA (this enables packet interception).
If relevant adversaries don't know which computer to burn the exploit on, then they won't burn it on the right one.
If you vocally oppose your tyrannical government, you won't avoid a bomb on your head. In the best case you'll get a bullet through your head. Worst case, you spend a lifetime in a prison.
When you start successfully reaching many people you can be sure that security agencies will start watching you.
"It’s the reductionist approach to life: if you keep it small, you’ll keep it under control. If you don’t make any noise, the bogeyman won’t find you. But it’s all an illusion, because they die too, those people who roll up their spirits into tiny little balls so as to be safe. Safe?! From what? Life is always on the edge of death; narrow streets lead to the same place as wide avenues, and a little candle burns itself out just like a flaming torch does."
I like his using Mossad as the extreme. I guess "Mossad'd" is now a verb.
I have a fond memory of being at a party where someone had the idea to do dramatic readings of various Mickens Usenix papers. Even just doing partial readings, it was slow going, lots of pauses to recover from overwhelming laughter. When the reading of The Slow Winter got to "THE MAGMA PEOPLE ARE WAITING FOR OUR MISTAKES", we had to stop because someone had laughed so hard they threw up. Not in an awful way, but enough to give us a pause in the action, and to decide we couldn't go on.
Good times.
I'm going to be job hunting soon and I was planning to prioritize the Bay Area because that's the only place I've encountered a decent density of people like this, but maybe I'm setting my sights too short.
There are nerds everywhere.
My favorite is The Night Watch.
hilarious AND scary levels of prescient writing...
This World of Ours (2014) [pdf] - https://news.ycombinator.com/item?id=27915173 - July 2021 (6 comments)
If you are a target you are screwed. But clever crypto isn't useless.
<NO CARRIER>
> If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone
It's like a Mossad agent read this paper and thought hey that's actually not a bad idea.
But the core rant is about dubious assumptions in academic cryptography papers. I was also reading a lot of academic crypto papers in 2014, and the assumptions got old real fast. Mickens mocks these ideas:
• "There are heroes and villains with fantastic (yet oddly constrained) powers". Totally standard way to get a paper published. Especially annoying were the mathematical proofs that sound rigorous to outsiders but quietly assume that the adversary just can't/won't solve a certain kind of equation, because it would be inconvenient to prove the scheme secure if they did. Or the "exploits" that only worked if nobody had upgraded their software stack for five years. Or the systems that assume a perfect implementation with no way to recover if anything goes wrong.
• "you could enlist a well-known technology company to [run a PKI], but this would offend the refined aesthetics of the vaguely Marxist but comfortably bourgeoisie hacker community who wants everything to be decentralized", lol. This got really tiresome when I worked on Bitcoin. Lots of semi-technical people who had never run any large system constantly attacking every plausible design of implementable complexity because it wasn't decentralized enough for their tastes, sometimes not even proposing anything better.
• "These [social networks] are not the best people in the history of people, yet somehow, I am supposed to stitch these clowns into a rich cryptographic tapestry that supports key revocation and verifiable audit trails" - another variant of believing decentralized cryptography and PKI is easy.
He also talks about security labels like in SELinux but I never read those papers. I think Mickens used humor to try and get people talking about some of the bad patterns in academic cryptography, but if you want a more serious paper that makes some similar points there's one here:
And for added fun, that same radical-decentralization crowd finally settled on the extremely centralized Lightning crutch, which is not only centralized but also computationally overcomplicated and buggy.
That's assuming they can figure out who you are in the first place. My pipe dream for the internet (the one I thought we were getting way back in the '90s) is total anonymity. You can say whatever you like about the Mossad, or the NSA, or the KGB, and they'll never be able to figure out whose cellphone to replace with a piece of uranium.
We have the technology to make it happen (thanks to the paranoid security researchers!) just not the collective will to allow it.
I mean go read 4chan, a place where there is something like total anonymity. Those people are constantly imagining that half the comments on the site are generated by intelligence agencies and, who knows, maybe they are right? I really do wonder if there is any way to reap the rewards of total anonymity without the poison of bad actors.
I'm somewhat moderate on the issue from a practical point of view. I think citizens have a right to some sort of reasonable privacy and I don't think laws which try to regulate the technical mechanisms by which we can have it make sense, no matter how evil the use of the technology is. But I don't think that, in the end, it is beyond the remit of authority to snoop with, for example, a court order, and the means to do so. I expect authority to abuse power, but I don't think that technological solutions can prevent that. Only a vigilant citizenry can do it.
If you have a single company, then that's easy enough for a group like Mossad to infiltrate. Probably easier than a distributed system.