The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...
Anyone familiar with HNCP? Are there any concerns of conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll off the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...
In my native language (Finnish) it's even worse, or better, depending on personal preference - it translates directly to .mildew.lottery-ticket.
home.arpa is for HNCP.
Use .internal.
> A network-wide zone is appended to all single labels or unqualified zones in order to qualify them. ".home" is the default; [...]
If I were going to do a bunch of extra work messing with configs I'd be far more inclined to switch all my personal stuff over to GNS for security and privacy reasons.
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use
The RFC 8375 suggestion (*.home.arpa) allows for more than a single host in the domain, if not in name/feeling then under the strictest readings [and adherence] too.

This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)
If you want to be sure, use mything./ : the . at the end makes sure no further domains are appended during DNS lookup, and the / makes the browser try to access the resource without Googling it.
That's hardly the only example of annoying MONOBAR behavior.
This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.
<virtual>.<physical-host>.internal
So for example phpbb.mtndew.internal
And I’d probably still add phpbb.localhost
To /etc/hosts on that host like OP does.

Ref: https://www.icann.org/en/board-activities-and-meetings/mater...
> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted.
https://www.icann.org/en/board-activities-and-meetings/mater...
> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
So you don't need self-signed certs for HTTPS locally if you want to, for example, have a backend API and a frontend SPA running at the same time talking to each other on your machine (authentication, for example, requires a secure context if doing OAuth2).
Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Just started a random Vite project, which opens `localhost:3000`, and `window.isSecureContext` returns true.
Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, and an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.
Example: a docker compose setup using traefik as a reverse proxy can have all internal services running on the same port (e.g. 3000) but have a different domain. The reverse proxy will then forward traffic based on Host. As long as the host is set up properly, you could have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally.
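A minimal compose sketch of that shape (the image names, domains, and shared port are made up; you'd still need frontend.test and backend.test pointing at 127.0.0.1 via /etc/hosts or dnsmasq):

services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"              # the only port published on the host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  frontend:
    image: my-frontend       # hypothetical image
    labels:
      - traefik.enable=true
      - traefik.http.routers.frontend.rule=Host(`frontend.test`)
      - traefik.http.services.frontend.loadbalancer.server.port=3000

  backend:
    image: my-backend        # hypothetical image
    labels:
      - traefik.enable=true
      - traefik.http.routers.backend.rule=Host(`backend.test`)
      - traefik.http.services.backend.loadbalancer.server.port=3000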
Now, whether you have a use for such a setup or not is up to you.
1. not all browsers are the same
2. there is no official standard
3. even if there was, standards are often ignored
4. what is true today can be false tomorrow
5. this is mitigation, not security
they are all aiming to implement the same html spec
> 2. there is no official standard
there literally is
> A context is considered secure when it meets certain minimum standards of authentication and confidentiality defined in the Secure Contexts specification
https://w3c.github.io/webappsec-secure-contexts/
> 3. even if there was, standards are often ignored
major browsers wouldn't be major browsers if this was the case
> 4. what is true today can be false tomorrow
standards take a long time to become standard and an even longer time to be phased out. this wouldn't sneak up on anyone
> 5. this is mitigation, not security
this is a spec that provides a feature called "secure context". this is a security feature. it's in the name. it's in the spec.
# Proxy to a backend server based on the hostname.
if (-d vhosts/$host) {
    proxy_pass http://unix:vhosts/$host/server.sock;
    break;
}
Your local dev servers must listen on a unix domain socket, and you must drop a symlink to them at eg /var/lib/nginx/vhosts/inclouds.localhost/server.sock.

Not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
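Concretely, standing up a new dev server looks something like this (hypothetical paths, assuming nginx's prefix is /var/lib/nginx):

# create the vhost directory the -d check looks for
sudo mkdir -p /var/lib/nginx/vhosts/inclouds.localhost
# symlink the dev server's unix socket into place
sudo ln -s "$HOME/dev/inclouds/server.sock" /var/lib/nginx/vhosts/inclouds.localhost/server.sock
# hostname resolution still has to come from somewhere
echo '127.0.0.1 inclouds.localhost' | sudo tee -a /etc/hosts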
Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)
Basically, it wraps up the instructions in this blog post and makes everything easy for you and your team.
- If Caddy has not already generated a local root certificate:
- Generate a local root certificate to sign TLS certificates
- Install the local root certificate to the system's trust stores, and the Firefox certificate store if it exists and can be accessed.
So yes. I had written about how I do this directly with Caddy over here: https://automagic.blog/posts/custom-domains-with-https-for-y...

The certs and keys live in the localias application state directory on your machine:
$ tree /Users/pd/Library/Application\ Support/localias/caddy/pki/authorities/local/
/Users/pd/Library/Application Support/localias/caddy/pki/authorities/local/
├── intermediate.crt
├── intermediate.key
├── root.crt
└── root.key
The whole nicety of localias is that you can create domain aliases for any domain you can think of, not just ".localhost". For instance, on my machine right now, the aliases are:

$ localias list
cryptoperps.local -> 3000
frontend.test -> 3000
backend.test -> 8080
I really wish there was a safer way to do this, i.e. a way to tag a trusted CA as "valid for localhost use only". The article mentions this in passing
> The sudo version of the above command with the -d flag also works but it adds the certificate to the System keychain for all users. I like to limit privileges wherever possible.
But this is a clear case of https://xkcd.com/1200/.
Maybe this could be done using the name constraint extension marked as critical?
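Something like this untested sketch, using OpenSSL's extension syntax (file names and subject are made up):

# a root CA that may only sign names under .localhost; "critical"
# tells verifiers to reject the cert if they can't enforce it
openssl req -x509 -new -nodes -newkey rsa:2048 -days 825 \
    -keyout dev-ca.key -out dev-ca.crt \
    -subj "/CN=localhost-only dev CA" \
    -addext "nameConstraints=critical,permitted;DNS:.localhost"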
Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.
It turned out on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.
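i.e. lines like this (hypothetical subdomain):

127.0.0.1   hello.localhost
::1         hello.localhost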
I'm still surprised by this; I always thought that localhost was a highly standard thing covered in the RFC long long ago… apparently it isn't, and macOS still doesn't handle this TLD.
No, not here.
$ ping hello.localhost
PING hello.localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.057 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
$ ping hello.localhost
ping: cannot resolve hello.localhost: Unknown host
That said, I do recommend the use of the internal. zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...
Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?
It's better to use a domain you control.
I'm a fan of buying the cheapest TLD to renew (like .ovh, great value) and using real Let's Encrypt certificates (via DNS challenge) for any subdomain or wildcard, so that any device gets the "green padlock" for a totally local service.
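For example with certbot's OVH plugin (the domain and credentials file here are hypothetical):

# DNS-01 challenge: nothing exposed publicly, and wildcards work
certbot certonly --dns-ovh --dns-ovh-credentials ~/.ovh-api.ini \
    -d example.ovh -d '*.example.ovh'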
https://krebsonsecurity.com/2020/02/dangerous-domain-corp-co...
Honestly, for USD 5/year, why don't you just buy yourself a domain and never have to deal with the problem?
I guess the lesson is to deploy a self-signed root ca in your infra early.
I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.
Actually, now that I've linked the docs, it seems they use smallstep internally as well haha
[0] https://caddyserver.com/docs/automatic-https#local-https
Though you could register a name like ch.ch and get a wildcard certificate for *.ch.ch, and insert local.ch.ch in the hosts file and use the certificate in the proxy, that would even work on the go.
Is that a new thing? I heard previously that if you wanted to do DNS/domains for a local network you had to expose the name list externally.
Yes, this also works under macOS, but I remember there used to be a need to explicitly add these addresses to the loopback interface. Under Linux and (IIRC) Windows these work out of the box.
I previously used differing 127.0.0.0/8 addresses for each local service I ran on my machine. It worked fine for quite a while but this was in pre-Docker days.
Later on I started using Docker containers. Things got more complicated if I wanted to access an HTTP service both from my host machine and from other Docker containers. Instead of having your services exposed differently inside a Docker network and outside of it, you can consistently use the IPs and ports you expose/map.
If you're using 127.0.0.0/8 addresses then this won't work. The local loopback addresses aren't routed to the host computer when sent from a Docker container; they're routed to the container. In other words, 127.0.0.1 inside Docker means "this container", not "this machine".
For that reason I picked some other unused IP block [0] and assigned that block to the local loopback interface. Now I use those IPs for assigning to my docker containers.
I wouldn't recommend using the RFC 1918 IP blocks since those are frequently used in LANs and within Docker itself. You can use something like the link-local IP block (169.254.0.0/16), which I've never seen used outside of the AWS EC2 metadata service. Or you can use the carrier-grade NAT IP block (100.64.0.0/10). Or even some IP block that's assigned for public use but never used, although that can be risky.
I use Debian Bookworm. I can bind 100.64.0.0/16 to my local loopback interface by creating a file under /etc/network/interfaces.d/ with the following:

# loopback alias holding the whole 100.64.0.0/16 block
# (no gateway: there's nothing to route through on a loopback alias)
auto lo:1
iface lo:1 inet static
    address 100.64.0.1
    netmask 255.255.0.0
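With that alias up, each container can publish the same port on its own address (a hypothetical sketch):

$ docker run -d --name web1 -p 100.64.0.2:80:80 nginx
$ docker run -d --name web2 -p 100.64.0.3:80:80 nginx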
Once that's set up I can expose the port of one Docker container at 100.64.0.2:80, another at 100.64.0.3:80, etc.

$ resolvectl query foo.localhost
foo.localhost: 127.0.0.1 -- link: lo
::1 -- link: lo
Another benefit is being able to block CSRF using the reverse proxy.

https://www.man7.org/linux/man-pages/man8/libnss_myhostname....
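Even without systemd-resolved, nss-myhostname gives the same behavior through glibc's NSS; a typical hosts line looks something like this (details vary by distro):

# /etc/nsswitch.conf: myhostname resolves "localhost" and any name
# ending in ".localhost" to 127.0.0.1 / ::1
hosts: files myhostname dns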
(There are lots of other useful NSS modules, too. I like the libvirt ones. Not sure if there's any good way to use these alongside systemd-resolved.)
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.