I think it's important to emphasise that although Tim's toy hypermedia system (the "World Wide Web") didn't come with baked-in security, ordinary users have never really understood that. It seems to them as though http://foo.example/ must be guaranteed to be foo.example, and just making that true by upgrading to HTTPS is far easier than somehow teaching billions of people that it wasn't true, and then what they ought to do about it.
I am reminded of the UK's APP scams. "Authorized Push Payment" fraud was a situation where ordinary people thought they were paying, say, "Big Law Firm", but a scammer had actually persuaded them to send money to an account the scammer controlled, because historically the UK's payment systems didn't care about names: to them, a payment to "Big Law Firm", acct #123456789, was the same as a payment to "Jane Smith", acct #123456789, even though you'd never get a bank to open an account in the name of "Big Law Firm" without documents the scammer doesn't have. To fix this, today's UK payment systems treat the name as a required match, not merely something for your records. So when you say "Big Law Firm" and try to pay Jane's account because you've been scammed, the software says "Wrong, are you being defrauded?", and you're safe because you have no reason to fill in "Jane Smith"; that's not who you intended to give money to.
We could have tried to teach all the tens of millions of UK residents that the name was ignored and that they therefore needed other safeguards, but that's not practical. Upgrading the payment systems to check the name was difficult but possible.
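A toy sketch of that name-check idea, purely illustrative: this is not the real Confirmation of Payee API, and the account data, matching rule and threshold below are all made up.

```python
import difflib

# Hypothetical registry: account number -> the name the bank verified when opening the account.
REGISTERED_NAMES = {"123456789": "Jane Smith"}

def confirm_payee(account: str, name_entered: str) -> str:
    """Compare the name the payer typed against the name the bank has on file."""
    on_file = REGISTERED_NAMES.get(account, "")
    score = difflib.SequenceMatcher(None, name_entered.lower(), on_file.lower()).ratio()
    if score > 0.9:
        return "Name matches - proceed."
    if score > 0.6:
        return f"Close match - did you mean {on_file!r}?"
    return "Name does not match - are you being defrauded?"

# The scam scenario from above: the payer thinks they're paying "Big Law Firm".
print(confirm_payee("123456789", "Big Law Firm"))
```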
And I've noticed that WhatsApp is even worse than Chrome: it opens links over HTTPS even when I share HTTP links.
Equally your preference for HTTP should not stand in the way of a more secure default for the average person.
Honestly, I'd prefer that my mom didn't browse any HTTP sites; it's just safer that way. But that doesn't detract from your ability to serve unencrypted pages, which can easily be intercepted or modified by an ISP (or worse).
Fortunately, one can publish on the www without using ICANN DNS
For example http://199.233.217.201 or https://199.233.217.201
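A minimal sketch of what "no ICANN DNS" looks like in practice, assuming that server is still up: plain HTTP to a raw IP involves no name lookup at all, while the HTTPS variant only verifies cleanly if the certificate actually lists the IP address as a subject alternative name.

```python
import urllib.request

# No DNS resolution happens here: the "hostname" is already an IP address.
with urllib.request.urlopen("http://199.233.217.201/", timeout=10) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```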
1. I have run my own root server for over 15 years
An individual cannot even mention choosing to publish a personal blog over HTTP without being subjected to a kneejerk barrage of inane blather. This is truly a sad state of affairs
I'm experimenting with non-TLS, per-packet encryption, with a mechanism for built-in virtual hosting (no SNI) and collision-proof "domain names" on the home network, as a reminder that TLS is not the only way to do HTTPS
It's true we depend on ISPs for internet service, but that's not a reason to let an unlimited number of _additional_ third parties intermediate and surveil everything we do over the internet
And this is why it's a good thing that every major browser will make it more and more painful: precisely so that, instead of arguments about it, we'll just have people deciding whether or not they want their sites accessible to others.
Unencrypted protocols are being successfully deprecated.
I know about acme.sh, but still...
Like, the default for cars almost everywhere is that you buy one made by some car manufacturer like Ford or Toyota or somebody, but making your own car is usually legal; it's just annoyingly difficult, so you don't do that.
It may be legal, but good luck ever getting it registered.
Now, getting the required insurance coverage can be a different story. But even there, many states allow you to post a bond in lieu of an insurance policy meeting state minimums.
It’s trying to make and sell three or four that is nearly impossible.
I've used their stuff since it came out and never used certbot, FWIW. If I were to set something up today, I'd probably use https://github.com/dehydrated-io/dehydrated.
So you're absolutely not dependent on the client software, or indeed anyone else's client software.
So, what you've said is true today, but historically Certbot's origin is tied to Let's Encrypt, which makes sense because initially ACME wasn't a standard protocol: it was designed to become one, but it was still under development, and the only practical server implementations were both developed by ISRG / Let's Encrypt. RFC 8555 took years.
And at the opposite end, I can't praise acme.sh enough: it's simple, dependency-free and reliable!
What does this mean? Is that encryption reliant on no third parties at all, or is it just relying on different third parties?
There is no magic "do it all yourself." Communicating with people implies dependence.
My hosting provider may accidentally fuck up, but they'll apologise and fix it.
When my CA fucks up, they e-mail me at 7pm telling me I've got to fix their fuck-up for them by jumping through a bunch of hoops they have erected, and they'll only give me 16 hours to do it.
Of course, you might argue my hosting provider has a much higher chance of fucking up....
Except that (a) your website doesn't let users create custom subdomains; (b) as the certificate is now in use, you, the certificate holder, have demonstrated control over the web server as surely as an HTTP-01 challenge would; (c) you have accounts and contracts and payment information all confirming you are who you say you are; and (d) there is no suggestion whatsoever that the certificate was issued to the wrong person.
And you could have gotten a certificate for free from Let's Encrypt if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't.
An organisation with common sense policies might not need to revoke such a certificate at all, let alone revoke it with only hours of notice.
And have you seen how many certificates with actual security problems CAs have refused to revoke in the last few years? Holding them to their agreements is important, even if a specific mistake isn't a security problem [for specific clients]. Letting them haggle over the security impact of every mistake is much more hassle than it's worth.
> if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't
Then in this hypothetical I made a mistake and I should fix it for next time.
And I should be pretty mad at my CA for giving me an invalid certificate. Was there an SLA?
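Tangentially, a minimal sketch (with a placeholder hostname and an arbitrary 30-day threshold) of the kind of expiry check that turns "fix it for next time" into something automatic: page yourself well before a certificate becomes an emergency.

```python
import socket
import ssl
import time

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Return how many days remain on the leaf certificate a server presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # only populated after successful verification
    return (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400

if __name__ == "__main__":
    remaining = days_until_expiry("example.com")  # placeholder host
    if remaining < 30:
        print(f"Only {remaining:.0f} days left - renew (or check why the ACME client hasn't)")
    else:
        print(f"{remaining:.0f} days of validity remaining")
```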
Mark my words, some day soon an enterprising politician will notice the CA system can be drawn into trade sanctions against the enemy of the day....
If you're required to (or choose to) not tell us about it, then because of active monitoring, when we notice, it's likely your CA will be distrusted for not telling us. This is easier because there's a mechanism for telling us about it - the same way there's an official way to notify the US that you're a spy, so when you don't (because, duh, you're a spy) you're screwed 'cos you didn't follow the rules.
The tech centralization under the US government does mean there's a vulnerability on the browser side, but I wouldn't speculate about how long that would last if there's a big problem.
There are ways to remove that dependency, but it's going to involve a decentralized DNS replacement like Namecoin or Handshake; many of these include their own built-in alternatives to the CA system too, so if "no third parties" is something you truly care about, you can probably kill two birds with one stone here.
https://multiplayeronlinestandard.com/goto.html (the reason for the domain is that I will never waste time on HTTPS, but GitHub does it automatically for free up to 100GB/month)
Depending on yet another third party to provide what is IMHO a luxury should not be required, and I have been continually confused as to why it is being forced down everyone's throat.
Man in the...?
Kinda like how Wikipedia benefits Google. Or public roads benefit Uber. Or clean water benefits restaurants
If that were the universal state, then it would be easy to tell when someone was visiting a site that mattered, and you could probably infer a lot about it by looking at the cleartext of the non-HTTPS site they were viewing right before they went to it.
However, the page you're fetching from that domain is encrypted, and that's vastly more sensitive. It's no big deal to visit somemedicinewebsite.com in a theocratic region like Iran or Texas. It may be a very big deal to be caught visiting somemedicinewebsite.com/effective-abortion-meds/buy. TLS blocks that bit of information. Today, it still exposes that you're looking at plannedparenthood.com, at least until TLS ECH (Encrypted Client Hello) [0] catches on and becomes pervasive. That's a bummer. But you still have plausible deniability: you can say "I was just looking at it so I could see how evil it was", rather than having to explain why you were checking out "/schedule-an-appointment".
[0] https://developers.cloudflare.com/ssl/edge-certificates/ech/
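A hedged sketch of where that line is drawn, using Python's ssl module; the hostname and path are placeholders. An on-path observer sees the TCP connection and (absent ECH) the server name in the ClientHello, but not the request line or headers.

```python
import socket
import ssl

host = "example.com"                 # visible to the network: IP, port, and the SNI value
path = "/schedule-an-appointment"    # never leaves the machine unencrypted

ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw:
    # wrap_socket() sends the ClientHello; server_hostname travels in cleartext (unless ECH is used).
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        # From here on, everything - method, path, headers, body - rides inside the TLS tunnel.
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())
        print(tls.recv(200).decode(errors="replace"))
```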
TLS traffic analysis can still reveal which pages you accessed with some degree of confidence, based on packet sizes, timings, external resources that differ between pages (e.g. images)
Most of the site hosted general information about the agency and its functions, but they also had a section where you could provide information.
AFAIK it's still not that widely adopted, and it can easily be blocked/disabled on a network, though.
My navigation habits are boring but they are mine, not anyone else's to see.
A server has no way to know whether the user cares or not, so they are not in a position to choose the user's privacy preferences.
Also: a page might be fully static, but I wouldn't want $GOVERNMENT or $ISP or $UNIVERSITY_IT_DEPARTMENT to inject propaganda, censor... Just because it's safe for you doesn't mean it's safe for everyone.
It does MITM the connection between you and the HTTPS websites you browse.
In fact it's just a regular laptop that I fully control and installed from scratch, straight out of Apple's store, as all my company laptops have been.
And if it were company policy, I would indeed refuse. I probably wouldn't work there in the first place; it's a huge red flag. If I really had to work there for very pressing reasons, I would do zero personal browsing (which I don't do anyway).
Not even when I was an intern at a random corpo was my laptop MITMed.
So to echo a sister comment: while sadly it is common in some jurisdictions, it is definitely not normal.
I could maybe understand it for non-tech people (virus scanning yadda yadda) but for a tech person it's a nuisance at best.
Edit: I'm not saying I like it this way... but that's what you get when working in a small org inside a larger org inside a govt office. When I worked in a security team for a bank, we were actually on a separate domain and network. I generally prefer to work untrusted and externally, and to rely on another team for production deployment workflows, data, etc.
I'm lucky to be a dev both by trade and passion. I like my job, it's cozy, and we're still scarce enough that my employer and I are in a business relationship as equals: I'm just a business selling my services to another business under common terms (which in my case include trusting each other).
But this is mostly a waste of time; these days companies just install agents on each laptop to monitor activity. If you do not own the machine/network you are using, then don't visit sites that you don't want them to see.
"I want my communications to be as secure as practical."
"Ah, but they're not totally secure! Which means they're totally insecure! Which means you might as well write your bank statements on postcards and mail them to the town gossip!"
It amazes me how anti-HTTPS some people can be.
With http it is trivial.
So you're saying you don't care if my ISP injects a whole bunch of ads, so that I don't even see your content but only the ads, and I blame you for duping me into watching them?
Nowadays VPN providers are popular. What if someone buys VPN service from one of the shitty ones, gets treated like I wrote above, and it's the reputation of your blog that is devastated?
And while we're at it, lobby to make corporate MiTM tools illegal as well.
Because if you are bothered about my little blog, you should be bothered that your employer can inspect all your HTTPS traffic.
More to the point: serving your blog with HTTPS via Let's Encrypt does not in any way forbid you from also serving it with HTTP without "depending on third parties to publish content online". It would take away from the drama of the statement though, I suppose.
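A rough sketch of that "serve it both ways" setup using only the standard library; the certificate paths are placeholders for whatever your ACME client (acme.sh, certbot, dehydrated, ...) writes out, and binding ports 80/443 needs the usual privileges.

```python
import http.server
import ssl
import threading

# Placeholder paths: wherever your ACME client drops the chain and key.
CERT_FILE = "/etc/ssl/blog/fullchain.pem"
KEY_FILE = "/etc/ssl/blog/privkey.pem"

def serve(port: int, use_tls: bool = False) -> None:
    """Serve the current directory; the exact same content goes out either way."""
    httpd = http.server.HTTPServer(("", port), http.server.SimpleHTTPRequestHandler)
    if use_tls:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(CERT_FILE, KEY_FILE)
        httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()

threading.Thread(target=serve, args=(80,), daemon=True).start()  # plain HTTP
serve(443, use_tls=True)                                         # HTTPS, same files
```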
Shine on you crazy diamond, and all that, but...
> I have been continually confused as to why it is being forced down everyone's throat.
Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse). Even residential ISPs that one pays for cannot be trusted not to inject content, if given the opportunity, because they noticed that they are monopolies and most users cannot do anything about it.
You don't get to choose the threat model of those who visit your site.
I honestly don't remember a single case where that happened to me. Internet user since 1997.
Probably a low-threat security risk for a blog.
But indeed, the ability to publish on my own outweighs the risk of someone modifying my content.
Most of us here read our news from work laptops, where the employer and their MiTM supplier are a much bigger threat, even for HTTPS websites.
Their client will complain loudly until and unless they install it, but then for those who care you could offer the best of both worlds.
Almost certainly more trouble than it's worth. G'ah, and me without any free time to pursue a weekend hobby project!
You're not really offering that, because the first connection could've been intercepted.
What is funny about HTTPS is that early arguments for its existence IIRC were often along the lines of protecting credit card numbers and personal information that needed to be sent during e-commerce
HTTPS may have delivered on this promise. Of course HTTPS is needed for e-commerce. But not all web use is commercial transactions
Today, it's unclear who or what^2 HTTPS is really protecting anymore
For example,
- web users' credit card numbers are widely available, sold on black markets to anyone; "data breaches" have become so common that few people ask why the information was being collected and stored in the first place nor do they seek recourse
- web users' personal information is routinely exfiltrated during web use that is not e-commerce, often to be used in association with advertising services; perhaps the third parties conducting this data collection do not want the traffic to be optionally inspected by web users or competitors in the ad services business
- web users' personal information is shared from one third party to another, e.g., to "data brokers", who operate in relative obscurity, working against the interests of the web users
All this despite "widespread use of encryption", at least for data in transit, where the encryption is generally managed by third parties
When the primary use of third-party-mediated HTTPS is to protect data collection, telemetry, surveillance and ad services delivery,^1 it is difficult for me to accept that HTTPS as implemented is primarily for protecting web users. It may benefit some third parties financially, e.g., CA and domain-name profiteers, and it may protect the operations of so-called "tech" companies though
Personal information and behavioral data are surreptitiously exfiltrated by so-called "tech" companies whilst the so-called "tech" company's "secrets", e.g., what data they collect, generally remain protected. The companies deal in information they do not own yet operate in secrecy from its owners, relentlessly defending against any requests for transparency
1. One frequent argument for the use of HTTPS put forth by HN commenters has been that it prevents injection of ads into web pages by ISPs. Yet the so-called "tech" companies are making a "business" out of essentially the same thing: injecting ads, e.g., via real-time auctions, into web pages. It appears to this reader that in this context HTTPS is protecting the "business" of the so-called "tech" companies from competition by ISPs. Some web users do not want _any_ ads, whether from ISPs or so-called "tech" companies
2. I monitor all HTTPS traffic over the networks I own using a local forward proxy. There is no plaintext HTTP traffic leaving the network unless I permit it for a specific website in the proxy config. The proxy forces all traffic over HTTPS
If HTTPS were optionally under user control, certainly I would be monitoring the HTTPS traffic being automatically sent from my own computers on my own network to Google by Chrome, Android, YouTube and so on. As I would for all so-called "tech" companies doing data collection, surveillance and/or ad services as a "business"
Ideally one would be able to make an informed decision whether they want to send certain information to companies like Google. But as it stands, with the traffic sometimes being protected from inspection _by the computer owner_, through use of third party-mediated certificates, the computer owner is prevented from knowing what information is being sent
In my own case, that traffic just gets blocked
It's not a strawman, it's a real attack that we've seen for decades.
The entire guidance of "don't connect to an open wireless AP"? That's because a malicious actor who controlled the AP could read and modify your HTTP traffic - inject ads, read your passwords, update the account number you requested your money be transferred to. The vast majority of that threat is gone if you're using HTTPS instead of HTTP.
Now imagine if we still lived in a world like that. Someone visits a UN meeting, and the rest is left to your imagination.
[1] https://nordvpn.com/cybersecurity/glossary/firesheep/?srslti...
Say we all move to HTTPS, but then Let's Encrypt goes away, certificate authority corps merge, and then Google decides they also want remote attestation for two-way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad in the hands of unregulated corps (and this is not a hypothetical)
Just switch to ZeroSSL - it's the default certificate provider for the acme.sh script now.
This can still be MITM'd. Maybe they can't drain your bank account by the nature of the content, but they can still lie or something. And that's not good.
It would be ideal if people only browsed from trusted networks, but telling people "don't do the convenient, useful, obvious thing" only goes so far. Hence the desire to secure connections from another angle.
The problem in the above was not actually caused by the AP being open, nor is it just limited to APs in the path between you and whatever you're trying to connect to on the internet. Another common example is ISPs which inject content banners into unencrypted pages (sometimes for billing/usage alerts, other times for ads). Again, this is just another example - you aren't going to whack-a-mole an answer to trusting everything a user might transit on the internet; that's how we came to HTTPS instead.
> There are still legitimate uses for HTTP including reading static content.
There are valid reasons to do a lot of things which don't end up making sense to support in the overall view.
> Say we all move to HTTPS, but then Let's Encrypt goes away, certificate authority corps merge, and then Google decides they also want remote attestation for two-way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad in the hands of unregulated corps (and this is not a hypothetical)
There are at least two other decent-sized independent ACME operators at this point, but say all of the certificate authority corps merge and we planned ahead and kept HTTP support: our banking/payments, sites with passwords, sites with PII, medical sites, etc. are in a stranglehold, but someone's plain-text blog post about it will be accessible without a warning message. Not exactly a great victory; we'll still need to solve the actual problem just as desperately at that point.
The biggest gripe I have with the way browsers go about this is that they only half consider the private use cases, and you get stuck with the rough edges. E.g. here they call out private addresses as not getting a warning, but my (fully in-browser, single-page) tech support dump reader can't work when opened as a file:/// URL, because the browser built-in for calculating an HMAC (part of WebCrypto) is for secure contexts only, and file:/// doesn't qualify. That's stupid partly because they aren't getting rid of JavaScript support on file:/// origins until they get rid of file:/// completely, so it just means I need a shim, and partly because file:/// is no less a secure origin than localhost.
I'd like every possible "unsecure" private use case to work, but I (and the majority of those who use a browser) also have a conflicting desire to connect to public websites securely. The options and impacts of these conflicting desires have to be weighed and thought through.
At least mongoose will serve stuff in 100KB.
I work at a company that also happens to run a CDN, and the sheer number of layers Google forces everyone to put onto their stack, for what was a very simple text-based protocol, is mind-boggling.
First there was simple TCP+HTTP. Then HTTPS came around, adding a lot of CPU load onto servers. Then they invented SPDY, which became HTTP/2, because websites exploded in asset use (mostly JS). Then they reinvented layer 4 with QUIC (in-house first), which resulted in HTTP/3. Now this.
Each of them added more complexity and data framing onto what used to be a simple message/file exchange protocol.
And you cannot opt out, because customers put their websites into a website checker and want to see all green traffic lights.
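As a small illustration of how those layers get negotiated on the wire, here's a hedged sketch using Python's ssl module: one TCP connection plus a TLS handshake, and ALPN decides whether HTTP/1.1 or HTTP/2 runs inside it (QUIC/HTTP/3 lives outside the standard library entirely). The hostname is a placeholder.

```python
import socket
import ssl

HOST = "example.com"  # placeholder

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer both; the server picks one

with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        print("TLS version:", tls.version())                          # e.g. TLSv1.3
        print("Application protocol:", tls.selected_alpn_protocol())  # "h2", "http/1.1", or None
```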
Vs. `traceroute` suggests that would-be on-path attackers are up against a vastly smaller attack surface.
PCI DSS is the data security standard required by credit card processors for you to be able to accept credit card payments online. Since version 1.0 came out in 2004, Requirement 4.1 has been there, requiring encrypted connections when transmitting card holder information.
There certainly was a time when you had two parts of a commerce website: one site with all of the product stuff and catalogs and categories and descriptions, all served over HTTP (www.shop.com), and then usually an entirely separate domain (secure.shop.com) where the actual checkout process started, which used SSL/TLS. This was due to the overhead of SSL in the early 2000s and the cost of certificates. It largely went away once Intel processors got hardware-accelerated instructions for things like AES, certificates became more cost-effective, and then Let's Encrypt made it simple.
Occasionally during the 2000s and 2010s you might see HTML forms that were served over HTTP with an HTTPS URL as the target, but even that was rare, simply because it was a lot of work to make things that complex instead of having the checkout button just take you to an entirely different site.
On another note, I would much prefer to skip HTTPS as the default and go straight to WSS (WebSockets over TLS). WebSockets are superior to HTTP in absolutely every regard except that HTTP is session-less.
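For whatever it's worth, a minimal sketch of a WSS client using the third-party `websockets` package (`pip install websockets`); the endpoint URL is a placeholder.

```python
import asyncio

import websockets  # third-party; not in the standard library

async def main() -> None:
    # "wss://" gives the socket the same TLS protection as "https://" gives a page.
    async with websockets.connect("wss://echo.example.net/socket") as ws:
        await ws.send("hello over TLS")
        print(await ws.recv())

asyncio.run(main())
```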
What is the risk exactly? A man-in-the-middle redirect to a malicious https site?
It would be nice to see some way for browsers to indicate when a site has some extra validation so you could immediately see that your bank has a real certificate as is appropriate for a bank and not just Let's Encrypt. Yes, I can click the padlock icon to get that information, but it would be nice if there was some light warning for free certificates to make it more immediately obvious.
HSTS might also interact with this, but I'd expect an HSTS site to just cause Chrome to go for HTTPS (and then that connection would either succeed or fail).
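For reference, HSTS is just a response header the browser remembers; here's a quick hedged sketch for checking whether a site sends it (the hostname is a placeholder, and plenty of sites won't send it at all).

```python
import urllib.request

with urllib.request.urlopen("https://example.com/") as resp:
    hsts = resp.headers.get("Strict-Transport-Security")

# Something like "max-age=31536000; includeSubDomains" tells a browser to upgrade
# future http:// navigations to https:// before anything ever leaves the machine.
print(hsts or "no HSTS header sent")
```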
> to force network-level auth flows (which don't always fire correctly when hitting HTTPS)
The whole point of HTTPS is basically that these shouldn't work. Vendors need to stop implementing weird network-level auth by MitM'ing the connection; DHCP has an option to signal to someone joining a network that they need to go to a URL to authenticate. These MitM-ers are a scourge, and they often cause a litany of poor behavior in applications…
It is incredibly common for public wifi captive portals to be built on a stack of hacks, some of which require the inspection of HTTP and DNS requests to function.
*Yes, better tools exist, but they aren't commonly used, and they require portal, WAP and client support. Most vendors just tell people to turn the new fancy shit off, disable HTTPS and proceed with HTTP.
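The "better tools" referenced above are roughly RFC 8910 (DHCP/RA hands the client a Captive Portal API URL) plus RFC 8908 (the client polls that URL for JSON status); here's a hedged sketch of the client side, with the API URL hard-coded as a stand-in for what DHCP option 114 would provide.

```python
import json
import urllib.request

# Stand-in: in the real flow this URL arrives via DHCP option 114 / an IPv6 RA option (RFC 8910).
API_URL = "https://captive.example.net/api"

req = urllib.request.Request(API_URL, headers={"Accept": "application/captive+json"})
with urllib.request.urlopen(req) as resp:
    status = json.load(resp)  # RFC 8908 JSON, e.g. {"captive": true, "user-portal-url": "..."}

if status.get("captive"):
    print("Log in at:", status.get("user-portal-url"))
else:
    print("Network access already granted")
```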
I don't like people externalizing their security policy preferences. Yes, this might be more secure for a class of use cases, but I, as a user, should be allowed to decide my threat model. It's not like these initiatives really solve the risks posed by bad actors. We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
> I as a user should be allowed to decide my threat model
Asking you if you want to proceed is allowing you to decide your threat model.
> We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
...and yet we have largely eliminated entire classes of issue on the web with the shift to HTTPS, to the point where asking users to opt-in to HTTP traffic is actually a practical option, raising the default security posture with minimal downside.
A lot of this discussion is about how the browsers define their security requirements on top of HTTPS/TLS/etc.
Such as which CAs they trust by default, and the maximum lifetime of a certificate before they won't trust it. I believe it's now 398 days (about 13 months)? Going even lower soon.
My first reaction was along the lines of "What? That can't possibly be right..."
After testing a bit, it looks like you can load https://neverssl.com, but it'll just redirect you to a non-HTTPS subdomain. OTOH, if the initial load before redirecting is HTTPS, then it shouldn't work on hotel wifi or whatever, so it still seems like it defeats the purpose.
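A hedged way to reproduce that observation: Python's http.client never follows redirects, so the initial HTTPS response and its Location header are easy to inspect (what neverssl actually returns may change over time).

```python
import http.client

conn = http.client.HTTPSConnection("neverssl.com", timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()

# Expect a 3xx whose Location points at a plain-http subdomain, per the comment above.
print(resp.status, resp.getheader("Location"))
conn.close()
```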
Huh.
http.rip will probably show a "website unavailable" error at some point unless you manually type in the http:// prefix.