That is, of course, unless you really intend to send an email to someone at test@gmail.com.
Each service is then exposed via '<service>.local.<domain>'.
This has been working flawlessly for me for some time.
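As a sketch, that wildcard mapping is a one-liner in dnsmasq (the domain and IP here are placeholders, assuming a single reverse-proxy host fronts every service):

```
# dnsmasq: send every <service>.local.example.com to the reverse proxy
address=/local.example.com/192.168.1.10
```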
I use it extensively on my LAN with great success, but I have Macs and Linux machines with Avahi. People who don't have mDNS support shouldn't mess with it...
Practically speaking, HTTPS on LAN is essentially useless, so I don't see the benefits. If anything, the current situation allows the user to apply TOFU to local devices by adding their unsigned certs to the trust store.
The existing exception mechanisms already work for this, all you need to do is click the "continue anyway" button.
Public wifi isn't a thing? Nobody wants to admin the router on a wifi network where there might be untrusted machines running around?
In practice, you probably want an authorized network for management, and an open network with the management interface locked out, just in case there's a vulnerability in the management interface allowing auth bypass (which has happened more often than anyone would like).
I agree on the latter, but that means your IoT devices are accessible through both networks and have to discriminate between requests coming from the insecure interface and those coming from the secure admin one, which isn't practical for lay users to configure either. I mean, a router admin screen can handle that, but what about other devices?
I know it seems pedantic, but this UI problem is one of many reasons why everything goes through the Cloud instead of our own devices living on our own networks, and I don't like that controlling most IoT devices (except router admin screens) involves going out to the Internet and then back to my own network. It's insecure and stupid and violates basic privacy sensibilities.
Ideally I want end users to be able to buy a consumer device, plug it into their router, assign it a name and admin-user credentials (or notify it about their credential server if they've got one), and it's ready and secure without having to do elaborate network topology stuff or having to install a cert onto literally every LAN client who wants to access its public interface.
* It's reserved so it's not going to be used on the public internet.
* It is shorter than .local or .localhost.
* On QWERTY keyboards "test" is easy to type with one hand.
That said, I do use mDNS/Bonjour to resolve .local addresses (which is probably what breaks .local if you're using it as a placeholder for a real domain). Using .local as an imaginary LAN domain is a terrible idea; these days, .internal is reserved for that.
I have a more in-depth write-up here: https://www.silvanocerza.com/posts/my-home-network-setup/
I wanted a short URL, though; that's why I used .it anyway.
Yes, it does require a cert for TLS, and that cert will not be trusted by default. I have found that with OpenSSL and a proper script you can spin up a cert chain on the fly, and with an additional script you can make these certs trusted on both Windows and Linux. A script cannot make the certs trusted in Safari on macOS, though.
I figured all this out in a prior personal app. In my current web server app I just don't bother with trust: I create the certs and let the browser display its warning page with the accept-the-risk button. It's a one-time choice.
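As a rough sketch of that on-the-fly approach (filenames and the hostname are placeholders, not the author's actual script), OpenSSL 1.1.1+ can mint a throwaway CA and a leaf cert signed by it:

```shell
# Throwaway CA -- this is what the per-OS trust scripts would install.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=My Local Dev CA"

# Leaf key + CSR for a placeholder hostname.
openssl req -newkey rsa:2048 -nodes \
  -keyout myapp.key -out myapp.csr -subj "/CN=myapp.local"

# Sign the leaf with the CA, adding the SAN that browsers require.
printf "subjectAltName=DNS:myapp.local\n" > san.ext
openssl x509 -req -in myapp.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out myapp.crt -extfile san.ext
```

Trusting ca.crt is then certutil on Windows and update-ca-certificates (or similar) on Linux; the Safari/macOS keychain is the part a plain script struggles with, as noted.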
I used the .z domain because it's quick to type and it looks "unusual" on purpose. The dream was to set up a web UI so you wouldn't need to configure it in the terminal and could see which apps are up and running.
Then I stopped working the job where I had to remember 4 different port numbers for local dev and stopped needing it lol.
Ironically, for once it's easier to set this kind of thing up on macOS than on Linux, because configuring a local DNS resolver on Linux is a mess (cf. the Tailscale blog post "The Sisyphean Task Of DNS Client Config on Linux": https://tailscale.com/blog/sisyphean-dns-client-linux), whereas on a Mac it's a couple of commands.
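For the record, the macOS side is roughly a per-domain resolver file (assuming something like dnsmasq answering on 127.0.0.1; the .test domain here is just an example):

```
# /etc/resolver/test -- macOS routes *.test queries to this nameserver
nameserver 127.0.0.1
```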
I think Tailscale should just add this to their product. They already do all the complicated DNS setup with MagicDNS; they could sprinkle in port forwarding and be done. It'd be a real treat.
If you are using other systems then you can set this up fairly easily in your network DNS resolver. If you use dnsmasq (used by pihole) then the following config works:
address=/localhost/127.0.0.1
address=/localhost/::1
There are similar configs for unbound or whatever you use.
I have a ready-to-go docker-compose setup using Traefik here: https://github.com/georgek/traefik-local
Rather than do all this manually each time and worry about port numbers you just add labels to docker containers. No ports, just names (at least for http stuff).
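For illustration, those labels might look like this in a compose file (the service name and host rule are made up; this follows Traefik v2's label syntax, not necessarily the linked repo's exact setup):

```
services:
  whoami:
    image: traefik/whoami  # small demo HTTP server
    labels:
      - "traefik.enable=true"
      # Routed by name through the proxy; no host port published.
      - "traefik.http.routers.whoami.rule=Host(`whoami.docker.localhost`)"
```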
A possible disadvantage is that specifying a single IP to listen on means the HTTP server won't listen on your LAN IP address, which you might want.
That's not redirection per se, a word that's needlessly overloaded to the point of confusion. It's a smart use of a reverse proxy.
It would be nice if you all reserved the word "redirect" for something like HTTP 3xx behavior.