Once it knows which hosts you run, it'll ping those hostnames periodically. If one stays up and stable for a couple of days, you'll get an in-product alert: "Set up uptime monitoring on <hostname>?"
Whether you think this is valid, useful, acceptable, etc. is left as an exercise for the reader.
Internal hostnames leaking is real, but in practice it's just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools, and so on.
Is there some sort of injection that's a legal host name?
People should know that they should treat the contents of their logs as unsanitized data... right? A decade ago I actually looked at this in the context of a (commercial) passive DNS service, and it appeared that most of the stuff that wasn't a "valid" hostname was filtered before it went to the customers.
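A minimal sketch of that kind of filter, assuming the usual RFC 1123 label rules (the helper name is mine):

    // RFC 1123 label: 1-63 chars, letters/digits/hyphens, no hyphen at either end.
    const LABEL = /^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/;

    // Hypothetical helper: true only for syntactically legal hostnames.
    function isValidHostname(name: string): boolean {
      if (name.length === 0 || name.length > 253) return false;
      return name.split(".").every((label) => LABEL.test(label));
    }

    isValidHostname("nas.example.com");      // true
    isValidHostname("$(curl evil).example"); // false - treat it as hostile log input

Anything that fails a check like this isn't a hostname at all and deserves the same suspicion as any other user-controlled string in your logs.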
If Firefox also leaks this, I wonder if this is something mass-surveillance related.
(Judging from the downvotes, I misunderstood something.)
This helps you (= the NAS developer) centralize logs and trace a request through all your application layers (client -> server -> db and back), so you can identify performance bottlenecks and measure usage patterns.
This is what you'll find behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable or consent to.
For a web UI it is implemented via JavaScript, which runs on the client's machine and hooks into clicks, API calls, and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees JavaScript, so don't blame the browser. Privacy Badger might block it.
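As a rough sketch of what that hook looks like with Sentry's browser SDK (the DSN here is a placeholder):

    // Minimal client-side Sentry setup - everything below runs in the visitor's browser.
    import * as Sentry from "@sentry/browser";

    Sentry.init({
      dsn: "https://publicKey@o0.ingest.sentry.io/0", // placeholder DSN
      tracesSampleRate: 1.0, // report performance data for every page load
    });

    // From here on, uncaught errors plus click/navigation breadcrumbs flow to
    // sentry.io - exactly the traffic a blocker like Privacy Badger cuts off.
    Sentry.captureMessage("web UI loaded"); // explicit event, for illustration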
It is as nefarious as the application's developer wants it to be. Normally you would use it to centralize logging, find performance issues, and get a basic idea of which features users actually use, so you can debug more easily. But you can also use it to track users. And don't forget, sentry.io is a cloud solution: if you send data to machines outside your control, expect it to be public. Sentry has a self-hosted solution, btw.
My current solution is a massive hack that breaks down every now and then.
Quite a pain that companies refuse to take no for an answer :/
Most of the rules (the "Baseline Requirements" or BRs) must be followed for all issued certificates, but the rule about logging deliberately doesn't work that way. The BRs do require that a CA can show us - if asked - everything about the certificates it issued, and these days for most CAs that's most easily accomplished by just providing links to the logs, e.g. via crt.sh - but that requirement could also be fulfilled by handing over a PDF or an Excel sheet or something.

The reason for this distinction is that failing to meet a Requirement for issued certificates would mean the trust stores might remove your CA, while several CAs today do issue unlogged certificates - and if you wanted to use those on a web server, you would need to go log them yourself and staple the proofs to your certs in the server configuration.
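For what it's worth, crt.sh also exposes an (unofficial) JSON interface, so checking what's been logged for a domain takes only a few lines - a sketch:

    // Sketch: list every name in certificates logged for a domain, via crt.sh.
    // The JSON endpoint is unofficial and rate-limited, so be gentle with it.
    async function loggedNames(domain: string): Promise<string[]> {
      const url = `https://crt.sh/?q=${encodeURIComponent("%." + domain)}&output=json`;
      const entries: Array<{ name_value: string }> = await (await fetch(url)).json();
      // name_value may hold several SANs separated by newlines; deduplicate.
      return [...new Set(entries.flatMap((e) => e.name_value.split("\n")))];
    }

    loggedNames("example.com").then((names) => console.log(names));

Which is, of course, the same query an attacker's enumeration script can run.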
Let's Encrypt doesn't make any difference here at all.
These PHP apps need to change so that you boot the app with credentials in place from the start, keeping it secured at all times.
FWIW - it’s made of people
You meant you shouldn't, right? Partly for exactly the reasons you stated later in the same sentence.
CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.
So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world. It doesn't matter where you got your certificate: you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or out-of-date software.
Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.
So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.
They don't sell who asked because that's a regulatory nightmare they don't want, but they sell the list of names because it's valuable.
You might buy this because you're a bad guy (reputable sellers won't sell to you, but that's easy to circumvent), because you're a more-or-less legit outfit looking for problems you can sell back to the person who has the problem, or even just for market research. Yes, some customers who own example.com and are using ZQF brand HR software won't name the server zqf.example.com, but a lot of them will, and so you can measure that.
I'm not entirely sure what LE does differently, but we made very clear observations about it in the past.
This is the problem with the notion that "in the name of securitah IoT devices should phone home for updates": nobody said "...and map my network in the name of security"
[0] Don't confuse this with Rachel's honeypot wildcarding *.nothing-special.whatever.example.com for external use.
I think a lot of people underestimate how easily a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.
If you want the fancy Docker-management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell, the best middle ground between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS, where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.
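For reference, "hit share on a folder" boils down to a handful of smb.conf lines - a minimal sketch, with placeholder share name, path, and user:

    # /etc/samba/smb.conf - assumes 'alice' was added with smbpasswd
    [media]
        path = /srv/media
        read only = no
        valid users = alice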
There's a theoretical risk of MitM attacks against devices served with self-signed certificates, but if someone breaks into my (W)LAN, I'm going to assume I'm screwed anyway.
I've used split-horizon DNS for a couple of years, but it kept breaking in annoying ways. My current setup (involving the Pi-hole web UI, because I was sick of maintaining BIND files) still breaks DNSSEC for my domain, and I try to avoid it when I can.
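For anyone trying the same, the Pi-hole route eventually boils down to a couple of dnsmasq lines (names and addresses here are placeholders):

    # /etc/dnsmasq.d/10-local.conf - answer these names locally,
    # forward everything else upstream.
    address=/nas.example.com/192.168.1.10
    address=/docs.example.com/192.168.1.11

The DNSSEC breakage follows directly: if example.com is signed, a validating client sees those local answers as forgeries, since the resolver can't produce signatures for records the zone never published.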
All you really need is a bunch of disks and an operating system with an SSH server. Even the likes of Samba and NFS aren't really necessary anymore.
I see the traditional "RAID with a SMB share" NAS devices less and less in stores.
If only storage target mode[1] had some form of authentication, it'd make setting up a barebones NAS an absolute breeze.
[1]: https://www.freedesktop.org/software/systemd/man/257/systemd...
Given that the docs claim that this is an implementation of an official NVMe thing, I'd be very surprised if it had absolutely no facility for recovering from intermittent network failure. "The network is unreliable" [0] is axiom #1 for anyone who's building something that needs to go over a network.
If what you report is true, then is the suckage because of SystemD's poor implementation, or because the thing it's implementing is totally defective?
[0] Yes, datacenter (and even home) networks can be very reliable. They cannot be 100% reliable and -in my professional experience- are substantially less than 100% reliable. "Your disks get turbofucked if the network ever so much as burps" is unacceptable for something you expect people to actually use for real.
Sure. NVMe provides block IO carried over a variety of transports. The one we're talking about is TCP.
> An interruption of the connection is equivalent to forcibly unplugging a hard drive.
Remember that I said in my footnote:
"Your disks get turbofucked if the network ever so much as burps" is unacceptable for something you expect people to actually use for real.
A glance at the spec reveals that TCP was chosen to provide reliable, in-order transmission of NVMe payloads. TCP is quite able to recover from intermittent transport errors. You might consider reading the first paragraph of sections 2 and 3.3, as well as sections 3.4, 3.5, and the first handful of paragraphs of section 3.5.1 of the relevant spec. [0]

If you're truly seeing disk corruption whenever the network so much as burps, then it sounds like the SystemD guys fucked something up.
[0] <https://nvmexpress.org/wp-content/uploads/NVM-Express-TCP-Tr...>
Whereas Synology or other NAS manufacturers can tell me these numbers exactly, and people have reviewed and tested the hardware.
I can buy a NAS, whereby I pay money to enjoy someone else's previous work of figuring it out. I pay for this over and over again as my needs change and/or upgrades happen.
Or
I can build a NAS, whereby I spend time to figure it out myself. The gained knowledge that I retain in my notes and my tiny little pea brain gets to be used over and over again as needs change, and/or upgrades happen. And -- sometimes -- I even get paid to use this knowledge.
(I tend to choose the latter. YMMV.)
For example, my ancient TP-Link TL-WR842N router eats 15W whether it's in standby or not, while my main box - fans, backlight, GPU, HDDs and all - idles at about 80W.
Looking at the Synology site, the only power figure I see there is the PSU rating, which is 90W for the DS425. So you can expect real power consumption of about 30-40W, which is typical for just about any NUC or a budget ATX motherboard with a low-tier AMD something plus a bunch of HDDs.
You never could. A hostname or a domain is bound to leave your box; it's meant to. All it takes is sending an email with a local email client.
(Not to say the NAS leak doesn't still suck.)
However, domains and hostnames were not designed to be particularly private and should not be considered secret. Many things don't treat them as private, so you should not put anything sensitive in a hostname, even in a network that's supposedly private - unless that network is completely air-gapped.
Now, I wouldn't be surprised if hostnames were in fact originally expected to be explicitly public.
I've received many emails from `root@localhost` over the years.
Admittedly, most residential ISPs block outbound SMTP traffic on port 25, and other email servers are likely to drop such mail or mark it as spam, but there's no strict requirement for auth.
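A sketch of just how little is needed (nodemailer used for illustration; the host and addresses are placeholders):

    // Direct-to-MX delivery on port 25 - note the complete absence of credentials.
    import nodemailer from "nodemailer";

    const transport = nodemailer.createTransport({
      host: "mx.example.com", // the recipient's MX host, not a submission relay
      port: 25,
      secure: false, // plain SMTP; STARTTLS is negotiated only if offered
      // no `auth` block: server-to-server mail is unauthenticated by design
    });

    transport
      .sendMail({
        from: "root@localhost",
        to: "someone@example.com",
        subject: "no login required",
        text: "Whether this lands anywhere but the spam folder is another story.",
      })
      .catch(console.error);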
Source? I've never seen that. Nobody could use their email provider of choice if that was the case.
Comcast blocks port 25: https://www.xfinity.com/support/articles/email-port-25-no-lo...
AT&T says "port 25 may be blocked from customers with dynamically-assigned Internet Protocol addresses", which is the majority of customers https://about.att.com/sites/broadband/network
What ISP are you using that isn't blocking port 25, and have you never had the misfortune of being stuck with comcast or AT&T as your only option?
Yep, at least in France ISPs do this, IIRC.
Another thing that usually sends mail is cron, but that should only go to the admin(s).
Some services might also display the host name somewhere in their UI.
I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.
also
> you notice that you've started getting requests coming to your server on the "outside world" with that same hostname.
Curiosity begs: why not?
What distro/reader? Thanks