
Posted by zdw 2 days ago

When internal hostnames are leaked to the clown (rachelbythebay.com)
442 points | 250 comments | page 2
trjordan 1 day ago|
Having recently set up Sentry, I can say that at least one of the ways they use this is to auto-configure uptime monitoring.

Once they know what hosts you run, it'll ping that hostname periodically. If it stays up and stable for a couple of days, you'll get an in-product alert: "Set up uptime monitoring on <hostname>?"

Whether you think this is valid, useful, acceptable, etc. is left as an exercise to the reader.

Linkd 1 day ago|
Expansion opportunities
ashu1461 1 day ago||
Isn't the article over-emphasising the leakage of internal URLs a little bit?

Internal hostnames leaking is real, but in practice it’s just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools etc etc.

icedchai 1 day ago||
Is it a real problem? My internal hostnames resolve to RFC-1918 addresses and I have a firewall. If I wasn't so lazy, I'd use split DNS.
reddalo 1 day ago||
In other words: never put sensitive information in names and metadata.
dmichulke 1 day ago||
Or name them after little bobby tables.

Is there some sort of injection that's a legal host name?

jerf 1 day ago||
DNS naming rules for non-Unicode are letters, numbers, and hyphens only, and the hyphens can't start or stop the domain. Unicode is implemented on top of that through punycode. It's possible a series of bugs would allow you to punycode some sort of injection character through into something, but it would require a chain of faulty software. Not an impossibly long chain of faulty software by any means, but a chain rather than just a single vulnerability. Punycode encoders are supposed to leave ASCII characters as ASCII characters, which means ASCII characters illegal in DNS can't be made legal by punycoding them legally. I checked the spec and I don't see anything for a decoder rejecting something that jams one in, but I also can't tell if it's even possible to encode a normal ASCII character; it's a very complicated spec. Things that receive that domain ought to reject it, if it is possible to encode it. And then it still has to end up somewhere vulnerable after that.
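
For illustration, here is a minimal sketch of that LDH ("letters, digits, hyphens") rule in TypeScript; the regex and names are mine, not taken from any DNS library:

  // Rough check of the LDH rule for DNS names: ASCII letters, digits and hyphens
  // only per label, no leading/trailing hyphen, labels of 1-63 chars, name <= 253.
  // Punycoded Unicode labels arrive as plain ASCII ("xn--...") and pass the same check.
  const LDH_LABEL = /^[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?$/;

  function isLdhHostname(name: string): boolean {
    const labels = name.replace(/\.$/, "").split(".");   // tolerate a trailing dot
    return name.length <= 253 && labels.every((label) => LDH_LABEL.test(label));
  }

  // isLdhHostname("nas-01.example.com")     -> true
  // isLdhHostname("bobby'; drop table --")  -> false: quotes, spaces and ';' aren't LDH
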
m3047 1 day ago||
Rules are just rules. You can put things in a domain name which don't work as hostnames. Really the only place this is enforced by policy is at the public registrar level. The only place I've run into it at the code level is in a SCADA platform blocking a CNAME record (which followed "legal" hostname rules) pointing to something which didn't. The platform uses jython / python2 as its scripting layer; it's java; it's a special real-time java: plenty of places to look for what goes wrong; I didn't bother.

People should know that they should treat the contents of their logs as unsanitized data... right? A decade ago I actually looked at this in the context of a (commercial) passive DNS, and it appeared that most of the stuff which wasn't a "valid" hostname was filtered before it went to the customers.
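
To make the "unsanitized data" point concrete, a hypothetical sketch (the function names are mine): escape a passively-collected name before it ever reaches an HTML log viewer or dashboard.

  // A captured "hostname" is attacker-influenced input; escape it before rendering.
  function escapeHtml(untrusted: string): string {
    return untrusted
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;")
      .replace(/"/g, "&quot;")
      .replace(/'/g, "&#39;");
  }

  function renderLogRow(hostname: string, hits: number): string {
    // Template the row only after escaping; never trust a name just because DNS carried it.
    return `<tr><td>${escapeHtml(hostname)}</td><td>${hits}</td></tr>`;
  }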

teekert 2 days ago||
Is this a Chrome/Edge thing? Or do privacy respecting browsers also do this? If so, it's unexpected.

If Firefox also leaks this, I wonder if this is something mass-surveillance related.

(Judging from the down votes I misunderstood something)

nomercy400 1 day ago|
From what I understand, sentry.io is like a tracing and logging service, used by many organizations.

This helps you (=NAS developer) to centralize logs and trace a request through all your application layers (client->server->db and back), so you can identify performance bottlenecks and measure usage patterns.

This is what you can find behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable or consent to.

For a WebUI it is implemented via javascript, which runs on the client's machine and hooks into the clicks, API calls and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees javascript, so don't blame them. Privacy Badger might block it.

It is as nefarious as the developer of the application wants to use it. Normally you would use it to centralize logging, find performance issues, and get a basic idea on what features users actually use, so you can debug more easily. But you can also use it to track users. And don't forget, sentry.io is a cloud solution. If you post it on machines outside your control, expect it to be public. Sentry has a self-hosted solution, btw.
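
Roughly what that client-side wiring looks like, as a sketch against the @sentry/browser API as I understand it (the DSN and the scrubbing rule are made up for illustration):

  import * as Sentry from "@sentry/browser";

  Sentry.init({
    // The DSN points at sentry.io (or a self-hosted instance); this is how the
    // vendor's hostname ends up in every user's browser traffic.
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
    tracesSampleRate: 0.1,            // sample a fraction of performance traces

    // beforeSend lets the integrating developer scrub data before it leaves the
    // browser, e.g. dropping events that would report internal hostnames.
    beforeSend(event) {
      if (event.request?.url?.includes(".internal.")) {
        return null;                  // drop the event entirely
      }
      return event;
    },
  });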

jeroenhd 1 day ago||
My employer uses Sentry for (backend) metrics collection so I had to unblock it to do my job. I wish Sentry would have separate infra for "operating on data collected by Sentry" and "submit every mouse click to Sentry" so I could block their mass surveillance and still do my job, but I suppose that would cut into their profit margins.

My current solution is a massive hack that breaks down every now and then.

wbobeirne 1 day ago||
Most organizations I've set Sentry up for tunnel the traffic through their own domain, since many blocking extensions block Sentry requests by default. Their own docs recommend it as well. All that to say, it's not trivial to fully block it, and you were probably sending telemetry anyway even with the domain blocked.
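
For reference, the tunnelling referred to above is, as far as I know, just the SDK's tunnel option plus a small same-origin relay on the backend; something like:

  import * as Sentry from "@sentry/browser";

  Sentry.init({
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
    // Instead of posting envelopes to *.sentry.io (which blocklists match easily),
    // the SDK posts them to a first-party path and the backend forwards them on.
    tunnel: "/monitoring/envelope",   // illustrative path, not a Sentry default
  });
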
jeroenhd 1 day ago||
With the right tricks (CNAME detection, URL matching) a bunch of ad blocking tools still pick up the first-party proxies, but that only works when directly communicating with the Sentry servers.

Quite a pain that companies refuse to take no for an answer :/
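
The "CNAME detection" mentioned above amounts to resolving the first-party name and checking where it really points. A rough Node-side equivalent (the suffix list is a made-up example):

  import { resolveCname } from "node:dns/promises";

  // Hypothetical list of telemetry endpoints to compare CNAME targets against.
  const TRACKER_SUFFIXES = ["sentry.io", "ingest.sentry.io"];

  async function looksLikeTrackerCname(hostname: string): Promise<boolean> {
    try {
      const targets = await resolveCname(hostname);  // e.g. for telemetry.example.com
      return targets.some((target) =>
        TRACKER_SUFFIXES.some((s) => target === s || target.endsWith("." + s)));
    } catch {
      return false;  // no CNAME record (or lookup failed): nothing to flag
    }
  }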

fragmede 2 days ago||
This highlights a huge problem with LetsEncrypt and CT logs. Which is that the Internet is a bad place, with bad people looking to take advantage of you. If you use LetsEncrypt for ssl certs (which you should), that hostname gets published to the world, and that server immediately gets pummeled by requests for all sorts of fresh install pages, like wp-admin or phpmyadmin, from attackers.
krautsauer 2 days ago||
That may be related, but it's not what happened here. Wildcard-cert and all.
ale42 1 day ago|||
It's not just Let's Encrypt, right? CT is a requirement for all Certificate Authorities nowadays. You can just look at the certificate of www.google.com and see that it has been published to two CT logs (Google's and Sectigo's)
tialaramex 1 day ago|||
Technically logging certificates is not a Requirement of the trust stores, but most web browsers won't accept a certificate which isn't presented with a proof of logging, typically (but not always) baked inside the certificates.

The reason for this distinction is that failing to meet a Requirement for issued certificates would mean the trust stores might remove your CA, but several CAs today do issue unlogged certificates - and if you wanted to use those on a web server you would need to go log them and staple the proofs to your certs in the server configuration.

Most of the rules (the "Baseline Requirements" or BRs) are requirements and must be followed for all issued certificates, but the rule about logging deliberately doesn't work that way. The BRs do require that a CA can show us - if asked - everything about the certificates they issued, and these days for most CAs that's easiest accomplished by just providing links to the logs e.g. via crt.sh -- but that requirement could also be fulfilled by handing over a PDF or an Excel sheet or something.
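
To make the CT-log point concrete: anyone can enumerate the logged names for a domain, for example through crt.sh's JSON output. A sketch (the response shape is how crt.sh behaves as far as I know):

  // List certificate names logged for a domain via crt.sh's CT log search.
  async function loggedNames(domain: string): Promise<string[]> {
    const url = `https://crt.sh/?q=%25.${domain}&output=json`;  // %25 is a URL-encoded '%' wildcard
    const res = await fetch(url);
    const rows: Array<{ name_value: string }> = await res.json();
    // name_value can contain several newline-separated SANs per certificate.
    const names = rows.flatMap((row) => row.name_value.split("\n"));
    return [...new Set(names)];  // de-duplicate
  }

  // loggedNames("example.com").then(console.log);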

nottorp 1 day ago|||
Now I get why they want to reduce certificate validity to 20 minutes. The logs will become so spammy then that the bad guys won't be able to scan all hosts in them any more...
prmoustache 1 day ago|||
Why would you care that your hostname on a local-only domain is published to the world if it is not reachable from outside? Publicly available hosts are already published to the world anyway through DNS.

LetsEncrypt doesn't make a difference at all.

Gigachad 1 day ago|||
Unsecured fresh-install states that rely on you signing in before an attacker does were always a horrible idea. It's been a welcome change on the Linux side, where distros can install with your SSH key and details preloaded so password login is disabled from the start.

These PHP apps need to change so that the app first boots with credentials already set and is secured at all times.

Spivak 2 days ago|||
I like only getting a *.domain cert for this reason. No expectation of hiding the domain, but if they want to figure out where other things are hosted they'll have to guess.
hsbauauvhabzb 2 days ago|||
That's really not a great fix. If those hostnames leak, they leak forever. I'd be surprised if AV solutions and/or Windows aren't logging these things.
ttoinou 2 days ago|||
So how do you get this?
rossy 2 days ago||
Let's Encrypt can issue wildcard certs too
thakoppno 2 days ago|||
> the Internet is a bad place

FWIW - it’s made of people

TZubiri 2 days ago||
No, it's made by systems made by people, systems which might have grown and mutated so many times that the original purpose and ethics might be unrecognizable to the system designers. This can be decades in the case of tech like SMTP, HTTP, JS, but now it can be days in the era of Moltbots and vibecoding.
jesterson 2 days ago||
> If you use LetsEncrypt for ssl certs (which you should)

You meant you shouldn't, right? Partly for exactly the reasons you stated later in the same sentence.

josh3736 2 days ago||
Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).

CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.

So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world, and it doesn't matter where you got your certificate, you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.

Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.

So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.

tialaramex 1 day ago|||
Another big way you get scooped up (having worked in that industry, among other things): anybody - internal staff, customers, that one sales guy who insists on using his personal iPhone to demo the product and everybody turns a blind eye because he made $14M in sales last year - calls some public DNS resolver, and the public DNS service sells those names, even though the name didn't "work" because it wasn't public.

They don't sell who asked because that's a regulatory nightmare they don't want, but they sell the list of names because it's valuable.

You might buy this because you're a bad guy (reputable sellers won't sell to you but that's easy to circumvent), because you're a more-or-less legit outfit looking for problems you can sell back to the person who has the problem, or even just for market research. Yes, some customers who own example.com and are using ZQF brand HR software won't name the server zqf.example.com but a lot of them will and so you can measure that.

jesterson 2 days ago|||
Statistically, the amount of parasite scanning on LE-"secured" domains is way higher compared to purchased certificates. And yes, this is without voluntary publishing on LE's side.

I am not entirely aware what LE does differently, but we had very clear observation in the past about it.

m3047 1 day ago||
This is exactly why I have a number of "appliances" which never get clown updates: they have addresses in a subnet I block at the segment edge, they get DNS which never answers, and there are a few entries in the "DNS firewall" [0] (RPZ) which mostly serve as canaries.

This is the problem with the notion that "in the name of securitah, IoT devices should phone home for updates": nobody said "...and map my network in the name of security".

[0] Don't confuse this with Rachel's honeypot wildcarding *.nothing-special.whatever.example.com for external use.

zaptheimpaler 2 days ago||
Oh god this sucks. I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.
jeroenhd 1 day ago||
The (somewhat affordable) productized NASes all suffer from big tech diseases.

I think a lot of people underestimate how easy a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.

If you want the fancy docker management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell, the best middle point between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS, where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.

zaptheimpaler 1 day ago|||
Actually I host everything on a Linux PC/server, but a different box runs pfSense and a local DNS resolver, so I was talking about setting up split-brain DNS there. That way I don't have to manually edit the hosts file on every machine and keep it up to date with IP changes. Personally I really like Docker Compose; it's made running the little home server very easy.
jeroenhd 1 day ago||
Personally, I've started just using mDNS/Bonjour for local devices. Comes preinstalled on most devices (may need a manual package on BSD/Linux servers) and doesn't require any configuration. Just type in devicename.local and let the network do the rest. You can even broadcast additional device names for different services, so you don't need to do plex.nas.local, but can just announce plex.local and nas.local from the same machine.

There's a theoretical risk of MitM attacks for devices reachable over self-signed certificates, but if someone breaks into my (W)LAN, I'm going to assume I'm screwed anyway.

I've used split-horizon DNS for a couple of years but it kept breaking in annoying ways. My current setup (involving the pihole web UI because I was sick of maintaining BIND files) still breaks DNSSEC for my domain and I try to avoid it when I can.

prmoustache 1 day ago||||
I don't even understand what kind of web UI one would want.

All you really need is a bunch of disks and an operating system with an SSH server. Even the likes of Samba and NFS aren't really needed anymore.

jeroenhd 1 day ago|||
A bunch of out-of-the-box NAS manufacturers provide a web-based OS-like shell with file managers, document editors, as well as an "app store" for containers and services.

I see the traditional "RAID with an SMB share" NAS devices less and less in stores.

If only storage target mode[1] had some form of authentication, it'd make setting up a barebones NAS an absolute breeze.

[1]: https://www.freedesktop.org/software/systemd/man/257/systemd...

Nextgrid 1 day ago||
Storage target mode is block-level, not filesystem-level, meaning it won't support concurrent access and any network hiccup or dropped connection will leave the filesystem in an unclean state.
simoncion 1 day ago||
> ...any network hiccup or dropped connection will leave the filesystem in an unclean state.

Given that the docs claim that this is an implementation of an official NVMe thing, I'd be very surprised if it had absolutely no facility for recovering from intermittent network failure. "The network is unreliable" [0] is axiom #1 for anyone who's building something that needs to go over a network.

If what you report is true, then is the suckage because of SystemD's poor implementation, or because the thing it's implementing is totally defective?

[0] Yes, datacenter (and even home) networks can be very reliable. They cannot be 100% reliable and -in my professional experience- are substantially less than 100% reliable. "Your disks get turbofucked if the network ever so much as burps" is unacceptable for something you expect people to actually use for real.

Nextgrid 11 hours ago||
NVME only provides block IO. An interruption of the connection is equivalent to forcibly unplugging a hard drive. If the filesystem you put on top supports that and is able to recover from that, you're fine. But most filesystems do not optimize for that happening anywhere near as frequently as it would if you were using this as a regular file sharing protocol over unreliable networks.
simoncion 10 hours ago||
> NVME only provides block IO.

Sure. NVMe provides block IO carried over a variety of transports. The one we're talking about is TCP.

> An interruption of the connection is equivalent to forcibly unplugging a hard drive.

Remember that I said in my footnote:

  "Your disks get turbofucked if the network ever so much as burps" is unacceptable for something you expect people to actually use for real.
A glance at the spec reveals that TCP was chosen to provide reliable, in-order transmission of NVMe payloads. TCP is quite able to recover from intermittent transport errors. You might consider reading the first paragraph of sections 2 and 3.3, as well as sections 3.4, 3.5, and the first handful of paragraphs of section 3.5.1 of the relevant spec. [0]

If you're truly seeing disk corruption whenever the network so much as burps, then it sounds like the SystemD guys fucked something up.

[0] <https://nvmexpress.org/wp-content/uploads/NVM-Express-TCP-Tr...>

Gigachad 1 day ago|||
File history, sharing and user management are some of the common ones I can think of.
AndyMcConachie 1 day ago|||
The real trick, and the reason I don't build my own NAS, is standby power usage. How much wattage will a self-built Linux box draw when it's not being used? It's not easy to figure out, and it's not easy to build a NAS optimized for this.

Whereas Synology or other NAS manufacturers can tell me these numbers exactly and people have reviewed the hardware and tested it.

ssl-3 1 day ago|||
To me, it's a question of time and money efficiency. (Time is money.)

I can buy a NAS, whereby I pay money to enjoy someone else's previous work of figuring it out. I pay for this over and over again as my needs change and/or upgrades happen.

Or

I can build a NAS, whereby I spend time to figure it out myself. The gained knowledge that I retain in my notes and my tiny little pea brain gets to be used over and over again as needs change, and/or upgrades happen. And -- sometimes -- I even get paid to use this knowledge.

(I tend to choose the latter. YMMV.)

lstodd 1 day ago|||
There are power meters like KWS-303L that will tell you how much manufacturers lie with their numbers.

For example, my ancient TP-Link TL-WR842N router eats 15W whether it's on standby or not, while my main box with fans, backlight, GPU, HDDs and all draws about 80W idle.

Looking at the Synology site, the only power figure I see there is the PSU rating, which is 90W for the DS425. So you can expect real power consumption of about 30-40W, which is typical for just about any NUC or a budget ATX motherboard with a low-tier AMD something plus a bunch of HDDs.

jraph 2 days ago||
> Can't even name the domains on my own damn server with an expectation of privacy now.

You never could. A host name or a domain is bound to leave your box; it's meant to. All it takes is sending an email with a local email client.

(Not saying it doesn't suck; the NAS leak still does)

ahoka 1 day ago|||
I have internal zones in my home network and requests to resolve them never leave the private network. So no, it's not meant to.
jraph 1 day ago||
"Meant to" may indeed not be really accurate.

However, domains and host names were not designed to be particularly private and should not be considered secret; many things don't treat them as private, so you should not put anything sensitive in a host name, even on a network that's supposedly private. Unless your private network is completely air-gapped.

Now, I wouldn't be surprised that hostnames were in fact originally expected to be explicitly public.

zaptheimpaler 1 day ago|||
I don't know much about email, but how would some random service send an email from my domain if I've never given it any auth tokens?
TheDong 1 day ago|||
You don't need any auth to send an email from your domain, or in fact from any domain. Just set whatever `From` you want.

I've received many emails from `root@localhost` over the years.

Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.
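
A minimal sketch of that point, assuming a reachable SMTP server on port 25 and the nodemailer package; SMTP itself never checks that the From domain is yours (SPF/DKIM/DMARC verdicts happen later, on the receiving side):

  import nodemailer from "nodemailer";

  // Plain SMTP to some server listening on port 25, no authentication configured.
  const transport = nodemailer.createTransport({
    host: "mail.example.test",   // illustrative hostname
    port: 25,
    secure: false,
  });

  // The From header is whatever the sender claims; receivers may reject it or
  // spam-folder it based on SPF/DKIM/DMARC, but nothing requires an auth token.
  await transport.sendMail({
    from: "root@localhost",
    to: "someone@example.test",
    subject: "hello",
    text: "no credentials for this From address were needed to send this",
  });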

flexagoon 1 day ago|||
You can, but most email providers will immediately reject your email or put it into spam because of missing DKIM/DMARC/SPF
prmoustache 1 day ago|||
> Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.

Source? I've never seen that. Nobody could use their email provider of choice if that was the case.

namibj 1 day ago|||
They don't do DPI; they just look at the destination port. That's why there's a separate submission port for handing mail to your provider's mail agent, where such auth is expected, so typically only outbound mail is ever submitted there. Technically local-delivery mail too, e.g. where the From and the To headers are valid and have the same domain.
TheDong 1 day ago|||
The 3 most common ISPs in the US are Comcast, Spectrum, and AT&T

Comcast blocks port 25: https://www.xfinity.com/support/articles/email-port-25-no-lo...

AT&T says "port 25 may be blocked from customers with dynamically-assigned Internet Protocol addresses", which is the majority of customers https://about.att.com/sites/broadband/network

What ISP are you using that isn't blocking port 25, and have you never had the misfortune of being stuck with comcast or AT&T as your only option?

prmoustache 1 day ago||
Well, I am not in the USA for a start, but if it is blocked it must be only inbound; otherwise it would break everybody.
jraph 1 day ago||
> if it is blocked it must be only inbound

Yep, at least in France it's like this for ISPs doing this IIRC.

jraph 1 day ago|||
It should not, but it's common to configure random services to send mail to users, for instance for password resets or other notifications.

Another thing usually sending mails is cron, but that should only go to the admin(s).

Some services might also display the host name somewhere in their UI.

stingraycharles 2 days ago||
I don’t understand. How could a GCP server access the private NAS?

I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.

minitech 2 days ago||
It couldn’t, but it tried.
copperx 2 days ago||
A for effort, F for firewall.
throwaway290 2 days ago||
It said knocking, not accessing

also

> you notice that you've started getting requests coming to your server on the "outside world" with that same hostname.

superkuh 1 day ago||
I love that this write-up is hosted on both HTTP and HTTPS. I cannot access the HTTPS version but the HTTP one displays just fine. Now that's reliability.
DANmode 1 day ago|
> I cannot access the HTTPS version

Curiosity begs: why not?

superkuh 1 day ago||
I opened it on an old computer with an old Linux distro and an old browser, because old Linux distros have reliable, working accessibility features like screen readers, good non-GPU text-to-speech, and advanced keyboard/mouse sharing. Modern Linux distros do not. Don't worry, I have JavaScript execution etc. turned off by default on that machine.
DANmode 1 day ago||
Glad I asked - first time running into a screen-reader-using Linux user in the wild!

What distro/reader? Thanks

superkuh 10 hours ago||
Ubuntu 10.04 (custom 3.x kernel, custom 1.7.x xorg) + Orca + Festival 1.96 with nitech HTS voices.
notpushkin 1 day ago||
https://archive.ph/siEdE
linhns 1 day ago|
Well somehow Rachel's website is not sending back any response now.