Posted by zdw 2 days ago
Sure hope nobody does that targeting IPs (like that blacklist in masscan) that will auto-report you to your ISP/ASN/whatever for your abusive traffic. Repeatedly.
> The poster described how she was able to retrieve her car after service just by giving the attendant her last name. Now any normal car owner would be happy about how easy it was to get her car back, but someone with a security mindset immediately thinks: “Can I really get a car just by knowing the last name of someone whose car is being serviced?”
Just a couple of hours ago, I picked my car up from having its obligatory annual vehicle check. I walked past it and went into their office, saying "I'm here to pick up my car". "Which one is it?" "The Golf" "Oh, the $MODEL?" (it was the only Golf in their car park) "Yeah". And then, after payment of £30, the keys were handed over without any checks at all, not even a confirmation of my surname. This was a different guy to the one who was in there an hour earlier when I dropped the car off.
Some car dealership that has never had a car stolen hires a consultant, and they identify this pickup situation as a problem. Then they implement some wild security, and now customers who just dropped off their car, who just talked to the same customer service person about the weather ... have to go through extra security to impersonally prove who they are, because someone imagined a problem that has never (or nearly never) occurred. And so we do the security dance over an imagined problem that really has nothing to do with how people actually steal cars...
Computers and the internet are different, of course; the volume of possibilities and bad actors you could be exposed to is seemingly endless. Yet even there the security mindset can go overboard.
I'm currently trying to recover/move some developer accounts for some services because we had someone leave the company less than gracefully. Often I have my own account and it's part of an organization ... but moving ownership is an arduous and bizarrely different process for each company. I get it, you wouldn't want someone to take over our no-name organization, but the processes all seem to involve extra steps piled on "for security". The fact that I'm already a customer, have an account in good standing, am part of the organization, and the organization account holder has been inactive ... doesn't seem to matter at all. I may as well be a stranger from the outside, presumably because of "security".
I can imagine being in info-sec is a rough life. When the company gets breached, they're blamed. So they spend all their time red-teaming and coming up with outlandish ways that their systems could be compromised, and equally outlandish hoops for users to jump through just to use their product. So the product gets all these hoops. And then an attacker gets even more creative, breaches you again, and now your product has horrible UX + you're still getting breached.
I mean, I don't mind if the same dev public keys are used nearly everywhere in internal dev and testing environments... but JFC, don't deploy them to client infrastructure for our apps.
FWIW, as an aside... for about the last decade, I've generally separated auth from the application I'm working with, relying on a limited set of established roles and RSA-signed JWTs, and allowing for the configuration of one or more issuers. This allows for a "devauth" that you can run locally and sign in as whoever you want, while integrating more easily with SSO systems and bridging to other auth services/systems in differing production environments. Even with full SSO/OAuth etc. services, it still mostly comes down to configuration.
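A minimal sketch of that pattern (not the commenter's actual setup; the issuer URLs, key files, audience, and helper name below are made-up placeholders), using the PyJWT library to accept RSA-signed tokens from a configurable set of issuers, including a local "devauth":

```python
# Sketch: verify RS256-signed JWTs against a configurable set of trusted issuers.
# Requires: pip install "pyjwt[crypto]"
import jwt

TRUSTED_ISSUERS = {
    # issuer ("iss" claim) -> PEM-encoded RSA public key for that issuer
    "https://devauth.localhost": open("devauth_pub.pem").read(),
    "https://sso.example.com": open("sso_pub.pem").read(),
}

def verify(token: str, audience: str = "my-app") -> dict:
    # Peek at the unverified claims only to pick which issuer's key to use.
    unverified = jwt.decode(token, options={"verify_signature": False})
    issuer = unverified.get("iss")
    key = TRUSTED_ISSUERS.get(issuer)
    if key is None:
        raise ValueError(f"untrusted issuer: {issuer!r}")
    # Full verification: signature, expiry, audience, and issuer claim.
    return jwt.decode(token, key, algorithms=["RS256"], audience=audience, issuer=issuer)

# claims = verify(incoming_token)
# roles = claims.get("roles", [])
```

Running a local "devauth" then just means standing up an issuer that signs tokens with a key pair only your dev environment trusts.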
Then they realize that one person may be bribed so they require at least two people to verify at pickup and drop off.
Meanwhile, a car has never ever been stolen this way.
Definitely over the top issue.
Meanwhile I could fake them all in a fairly short amount of time...
Conmen stealing VW Golfs from repair shops is a really low-probability, high-impact event. So the shop could demand your passport and piss you off, or have you leave a happy customer.
In the remote chance the con artist strikes, it’s a general liability covered by insurance.
So the garage can have lower security because even potential thieves do a risk/reward calculation and the vast majority choose not to proceed with it.
Online, the risk/reward calculation is different (what risk?), so more people will be tempted to try (even for the lolz - not every act of cybercrime is done for monetary purposes).
It's risky, sure. But the garage situation also seems risky.
I might be misinformed but I've been told that for a while now (maybe 20 years or so), new cars have been built to be exceptionally difficult to hot-wire.
A South African friend told me that some brand of four wheel drive could be hot-wired but it involved getting behind one of the front head-lamp bulbs - doable, but a damaging process if you're in a rush.
The people who work there aren't office workers; you've got blue collar workers who spend all day working together and hanging out using heavy equipment right in the back. And they're going to be well acquainted with the local tow truck drivers and the local police - so unless you're somewhere like Detroit, you better be on your way across state lines the moment you're out of there. And you're not conning a typical corporate drone who sees 100 faces a day; they'll be able to give a good description.
And then what? You're either stuck filing off VINs and faking a bunch of paperwork, or you have to sell it to a chop shop. The only way it'd plausibly have a decent enough payoff is if you're scouting for unique vehicles with some value (say, a mint condition 3000GT), but that's an even worse proposition for social engineering - people working in a garage are car guys, when someone brings in a cool vehicle everyone's talking about it and the guy who brought it in. Good luck with that :)
Dealership? Even worse proposition, they're actual targets so they know how to track down missing vehicles.
If you really want to steal a car via social engineering, hit a car rental place, give them fake documentation, then drive to a different state to unload it - you still have to fake all the paperwork, and strip anything that identifies it as a rental, and you won't be able to sell to anyone reputable so it'll be a slow process, and you'll need to disguise your appearance differently both times so descriptions don't match later. IOW - if you're doing it right so it has a chance in hell of working, that office job starts to sound a whole lot less tedious.
Way easier to just write code :)
When Kia and Hyundai were recently selling models without chipped keys or engine immobilizers, that was the main thing folks did when they stole them.
> This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves ...
I have to disagree in the strongest terms. It doesn't matter what it is, the only way to do a good job designing something is to imagine the ways in which things could go wrong. You have to poke holes in your own design and then fix them rather than leaving it to the real world to tear your project to shreds after the fact.
The same thing applies to science. Any even half decent scientist is constantly attempting to tear his own theories apart.
I think Schneier is correct about that sort of thinking not being natural for your typical person. But it _is_ natural (or rather a prerequisite) for truly competent engineers and scientists.
Just yesterday I had to correct a PR because the engineer did not think of some corner cases. All sorts of corner cases happen in real life.
I think it's more the nuanced difference between safety and security. Engineers build things so they run safely. For example, building a roof that doesn't collapse gives you a safe roof. Is the roof secure? Maybe I can put thermite in the wood...
This is the difference: safety is no harm coming from the thing engineers build itself, and security is securing the thing from harm from outside.
Security will have a wider scope by default (unlike natural phenomena, attacks are motivated and can get pretty creative after all) but there will still be some boundary outside of which "not my problem" applies. Regardless, it's the same fundamental thought pattern in use. Repeatedly asking "what did I overlook, what unintended assumptions did I make, how could this break".
That said, admittedly by the time you make it to the scale of Google or Microsoft and are seriously considering intelligence agencies as adversaries the sky is the limit. But then the same sort of "every last detail is always your problem" mentality also applies to the engineers and software developers building things that go to space (for example).
Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and part of what was logged were internal hostnames (which might be named in a way that carries sensitive info, e.g., the corp-and-other-corp-merger example they gave). So it wouldn't matter that it's inaccessible on a private network; the name itself is sensitive information.
In that case, I would personally replace the operating system of the NAS with one that is free/open source, that I trust, and that does not phone home. I suppose some form of ad-blocking à la Pi-hole or some other DNS configuration that blocks Sentry calls would work too, but I would just go with using an operating system I trust.
Clown is Rachel's word for (Big Tech's) cloud.
was (and she worked at Google too)
> "clowntown" and "clowny" are words you see there.
Didn't know this, interesting!
You may not owe clown-resemblers better, but you owe this community better if you're participating in it.
We ban accounts that keep posting in this sort of pattern, as yours has, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
It feels pretty hacker jargon-ish, it has some "hysterical raisins" type wordplay vibes.
I use a localhost TLS forward proxy for all TCP and HTTP over the LAN
There is no access to remote DNS, only local DNS. I use stored DNS data periodically gathered in bulk from various sources. As such, HTTP and other traffic over TCP that use hostnames cannot reach hosts on the internet unless I allow it in local DNS or the proxy config
For me, "WebPKI" has proven useful for blocking attempts to phone home. Attempts to phone home that try to use TLS will fail
I also like adding a CSP response header that effectively blocks certain JavaScript
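As an illustration only (this is a toy local file server, not the forward proxy described above, and the policy value is just an example), roughly what attaching such a header looks like:

```python
# Toy server that attaches a restrictive Content-Security-Policy header
# to every response; the policy string is only an example.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

CSP = "default-src 'self'; script-src 'self'; connect-src 'self'"

class CSPHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Inject the CSP header before the headers are flushed.
        self.send_header("Content-Security-Policy", CSP)
        super().end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 8080), CSPHandler).serve_forever()
```

A browser honoring that policy will refuse to run scripts from, or open connections to, anything other than the serving origin.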
It sounds like the blog author gave the NAS direct access to the internet
Every user is different, not everyone has the same preferences
For example, I have seen a freshly installed Firefox Nightly try to connect to sentry.io on startup
For me, these attempts never succeed
FTFA:
> Every time you load up the NAS [in your browser], you get some clown GCP host knocking on your door, presenting a SNI hostname of that thing you buried deep inside your infrastructure. Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
> Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
> This is when you fire up Little Snitch, block the whole domain for any app on the machine, and go on with life.
I disagree with your conclusion. The post speaks specifically about interactions with the NAS through a browser being the source of the problem and about the use of an OSX application firewall program called Little Snitch to resolve it. [0] The author's ~fifteen years of posts demonstrate that she is a significantly accomplished and knowledgeable system administrator who has configured and debugged much trickier things than what's described in the article. It's not impossible that the source of the problem has been misidentified... but it's extremely unlikely. Having said that, one thing I do find likely is that the NAS in question is isolated from the Internet; that's just a smart thing that a savvy sysadmin would do.
[0] I find it... unlikely that the NAS in question is running OSX, so Little Snitch is almost certainly running on a client PC, rather than the NAS.
The term has been in use for quite some time; it voices sarcastic discontent with the hyperscaler platforms _and_ their users (the idea being that the platform is "someone else's computer" or - more up to date - "a landlord for your data"). I'm not sure if she coined it, but if she did then good on her!
Not everyone believes using "the cloud" is a good idea, and for those of us who have run their own infrastructure "on-premises" or co-located, the clown is considered suitably patronising. Just saying ;)
I have a vague memory of once having a userscript or browser extension that replaced every instance of the word "cloud" with "other people's computers". (iirc while funny, it was not practical, and I removed it).
fwiw I agree and I do not believe using "the cloud" for everything is a good idea either, I've just never heard of the word "clown" being used in this way before now.
It can be useful to hide a private service behind a URL that isn't easy to guess (less attack surface, because a lot of attackers can't find the service). But the secret needs to be inside the URL path, not the hostname.
bad: my-hidden-fileservice-007-abc123.example.com/
good: fileservice.example.com/my-hidden-service-007-abc123/
In the first example the name is leaked via DNS queries, TLS certificates and many other channels. In the second example the secret path is only transmitted inside HTTPS and doesn't leak as easily. Subfinder has a lot of sources for finding subdomains, not only certs: https://github.com/projectdiscovery/subfinder
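To see the certificate leak concretely: Certificate Transparency logs are public, so anyone can enumerate the names in issued certificates for a domain. A small sketch (crt.sh is one public CT search frontend; example.com is a placeholder):

```python
# Enumerate certificate names for a domain from public CT logs via crt.sh.
import json
import urllib.request

domain = "example.com"
url = f"https://crt.sh/?q=%25.{domain}&output=json"  # %25 = URL-encoded '%' wildcard
with urllib.request.urlopen(url, timeout=30) as resp:
    entries = json.load(resp)

# Each entry's name_value can contain several newline-separated names (SANs).
names = sorted({n for e in entries for n in e["name_value"].splitlines()})
for name in names:
    print(name)
```

Anything issued a publicly trusted certificate under the "bad" naming scheme above shows up in output like this shortly after issuance.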
It's treating a symptom rather than a disease, but what else can we do?
Bit of a pain to set this all up though. I run a number of services on my home network and I always stick Nginx in front with a restrictive CSP policy, and then open that policy up as needed. For example, I'm running Home Assistant, and I have the Steam plugin, which I assume is responsible for requests from my browser like https://avatars.steamstatic.com/HASH_medium.jpg, which are being blocked by my injected CSP policy
P.S. I might decide to let that Steam request through so I can see avatars in the UI. I also inject "Referrer-Policy: no-referrer", so if I do decide to do that, at least they won't see my HA hostname in their logs by default.
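If you want to double-check which headers the proxy is actually injecting, a quick sketch (the URL is a placeholder for a local Home Assistant instance sitting behind the proxy):

```python
# Print the policy headers a local reverse proxy is injecting into responses.
# The URL below is a placeholder for a Home Assistant instance behind the proxy.
import urllib.request

with urllib.request.urlopen("http://homeassistant.local:8123/", timeout=10) as resp:
    for header in ("Content-Security-Policy", "Referrer-Policy"):
        print(header, "=>", resp.headers.get(header))
```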
Using LE (Let's Encrypt) to apply SSL to services? Complicated. Non-standard paths, a custom distro, everything hidden (you can't figure out where to place the SSL cert or how to restart the service, etc.). Of course you will figure it out if you spend 50 hours… but why?
Don’t get me started with the old rsync version, lack of midnight commander and/or other utils.
I should have gone with something that runs proper Linux or BSD.
That said, I’ll probably try out the UniFi NAS offerings in the near future. I believe Synology has semi-walked-back its draconian hard drive policy but I don’t trust them to not try that again later. And because I only use my Synology as a NAS I can switch to something else relatively easily, as long as I can mount it on my app server, I’m golden.
There are guides on how to mainline Synology NASes to run up-to-date Debian on them: https://forum.doozan.com/list.php
leave it to serve files and iSCSI. it's very good at it
if you leave it alone, no extra software, it will basically be completely stable. it's really impressive
If you have OPNSense, it has an ACME plugin with Synology action. I use that to automatically renew and push a cert to the NAS.
That said, since I like to tinker, Synology feels a bit restricted, indeed. Although there is some value in a stable core system (like these immutable distros from Fedora Atomic).
Edit: I just checked Grafana and cadvisor reports 23 containers.
Edit2: 4.4.302+ (2022) is my kernel version, there might be specific tools that require more recent kernels, of course, but I was so far lucky enough to not run into those.
I know there are userspace implementations, but can't remember the specifics rn and don't have my notes with me.
> kernel modules for iptables-nft
I think you meant nftables. The iptables-nft package is meant to provide an iptables interface on top of nftables for code that still expects it, afaik. I didn't run into that issue yet (knock on wood). According to the docs, nftables has been available since kernel 3.13, so in theory it might be possible to build the modules for Synology.
However, I don't think I will be buying another Synology in the future, mainly because of other issues like them restricting what RAM I can use or what I want to use the M.2 slots for, or their recent experiment with trying to push their own drives only, etc. I might give TrueNAS a try if I am not bored enough to just build one on top of a general-purpose OS...
As great as containerization is, having the right kernel modules available goes a long way and I probably wouldn't have run into trouble like that if the first container hadn't fallen back to iptables because nftables was unavailable.
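A rough way to check what a given box's kernel actually offers before blaming the container (only a heuristic, since nf_tables can also be built into the kernel rather than loaded as a module):

```python
# Heuristic check for nftables support on the running kernel.
import platform
from pathlib import Path

print("kernel:", platform.release())

# Loaded-module check; a built-in nf_tables won't appear here.
modules = Path("/proc/modules").read_text()
loaded = any(line.split()[0] == "nf_tables" for line in modules.splitlines() if line)
print("nf_tables loaded as a module:", loaded)

# Some kernels also expose it under /sys/module even when built in.
print("nf_tables under /sys/module:", Path("/sys/module/nf_tables").exists())
```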
All of these NAS OSes that include Docker work great for the most popular containers, but once you get into the more complex ones strange quirks start popping up.
https://github.com/JessThrysoee/synology-letsencrypt
> there is very little one can do with this thing.
It has a VMM and Docker. Entware / opkg exist for it. There's very little that can't be done, but expecting to use an appliance that happens to be Linux-based as a generic Linux server is going to lead to challenges. Be it Synology, TrueNAS, or anything else.
https://blog.sentry.io/sentry-ingestion-domains-updates/
https://cloud.google.com/blog/topics/partners/using-sentry-t...
https://old.reddit.com/r/PleX/comments/1b12phf/plex_sending_...
There has never been any resource record for any sentry.io domain in the DNS that is used by computers I control. This DNS is local and I control it. I saw a request to an ingest.sentry.io domain once while experimenting with Firefox. It failed
The DNS used by me only contains addresses for servers that I find useful
But every user has their own preferences. It is possible that some end-users might see value in allowing their computers to automatically send requests to sentry.io while receiving nothing in return. I am not one of those users
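A trivial way to confirm that this kind of locally controlled DNS really refuses to resolve a given telemetry hostname (using the hostname discussed in this thread as the example):

```python
# Confirm that the hostname does not resolve through the local DNS.
import socket

try:
    addr = socket.gethostbyname("sentry.io")
    print("resolved to", addr, "-- the local DNS is not blocking it")
except socket.gaierror:
    print("no resolution -- any phone-home attempt fails before a TLS handshake can start")
```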
Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you tie that together with other corroborated data, the picture is the same: you see packets from "inside" all the time.
Darknet collection during the final /8 run-down captured audio in UDP.
Firewalls? ACLs? Pah. Humbug.
Mind elaborating on this? SIP traffic from which year?
That sounds like a large kick-me sign taped to every new service. Reading how certificate transparency (CT) works leads me to think that there was a missed opportunity to publish hashes to the logs instead of the actual certificate data. That way a browser performing a certificate check can verify in CT, but a spammer can't monitor CT for new domains.
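A toy sketch of that idea (purely illustrative, not how CT actually works): the owner who knows their own hostname can check the log for its hash, while someone scanning the log only sees opaque digests (though easily guessable names could still be brute-forced).

```python
# Illustrative only: publish sha256(hostname) instead of the hostname itself.
import hashlib

hostname = "mycorp-and-othercorp-planned-merger-storage.example.com"
published_entry = hashlib.sha256(hostname.encode()).hexdigest()  # what would go in the log

def log_contains(name: str, log_entries: set[str]) -> bool:
    # The site owner, who knows the name, can verify it was logged.
    return hashlib.sha256(name.encode()).hexdigest() in log_entries

print(log_contains(hostname, {published_entry}))                       # True for the owner
print(log_contains("some-other-host.example.com", {published_entry}))  # False for a scanner guessing
```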
What you're describing there is certificate... translucency, I guess?