Posted by mobeigi 6 days ago
I watched them. They don't want to hang around longer than necessary. They will only approach a bike rack that is clearly visible from the road. They will only steal a bike that has unobstructed access to the road (no tricky bollards or other bikes to get around). They are full of bravado, and shout obscenities and threats at me when I tell them to fuck off, but they still run away, even though the one approaching the bikes carries a weapon while his companion waits on the scooter, ready to escape.
Anything that even mildly inconveniences these guys is enough to stop them attempting theft. The bikes they steal need to be expensive, out in the open, with direct access to the road, and with a shitty lock. And believe it or not, those tumblers line up a lot.
Throwing a blanket over a bike is probably enough to stop them from even approaching it.
Well, the lock itself has value for a junkie in Amsterdam; if you get an expensive one, it is additional loot.
In IT, there are a lot of people tossing a blanket over the scooter and believing they're affecting the capability of the attacker when they're really changing the likelihood of an attack.
Imagine if every single person put a blanket over their bike. Now imagine if everyone got a chain that was 10 times stronger. Which world would you rather live in?
In addition to likelihood, attacks have shape. And proper installations can force your adversaries' maneuvers to take a certain shape. I've heard this referred to as terraforming.
If you're going to "do it in the road" (a highly visible bike rack), your lock or chain works much better when it is stronger than the herd's. If everyone has a chain which is 10x stronger, then a better grinder becomes a cost of doing business. Maybe I'd rather live in a world where I didn't use that bike rack.
But the basic flaw of this analogy is that it implies you're at war, and your system is always in battle.
Just like any security control, if it's your only means of security, it will not offer much risk reduction. If you want risk reduction, use multiple security controls together. Like all security controls, there is no way to eliminate risk, just reduce it as much as possible while still being able to effectively achieve your mission.
Because of this I believe security through obscurity to be an important component in a healthy and mature risk posture.
It irks me when it's dismissed because "obscurity is not security". No single security control is security on its own.
Think about leaving your bike unlocked in Times Square vs. the top of a 7,000-meter mountain in the Himalayas.
Which unlocked (unsecured) bike is more likely to be stolen, and ergo has a higher risk attached?
----
Obscurity does not help you when the thief has already found your bike, nor is obscurity very helpful for keeping your bike safe if you happen to live in Times Square.
But if you live at the top of a Himalayan peak, you can be fairly certain you're not going to have your bike stolen.
You could put the bike right on the side of the mountain without any obfuscation and it won't get got, because ain't no one gonna die for a bike.
It's like how we know where the dead people are on Everest but we can't get them down; they serve as landmarks.
"The Integrated Survivability Onion"
https://cogecog.com/the-threat-onion/
1. Don't be seen.
2. Don't be acquired.
3. Don't be hit.
4. Don't be penetrated.
5. Don't be killed
It's actually not a bad mental model or training aid for teaching people who might find themselves in an active combat environment.
"killed" in this case would be equivalent to having something penetrate and hit sensitive systems. at that point it's basically just a function of what the penetrator is trying to do -- if they just want $$$ they ransomware. if they want exfil or DoS or making critical systems do naughty things that is also a kill.
Not necessarily. This model is also taught for Army/Marines-type ground combat operations: how to effectively camouflage, how to manoeuvre.
the "don't be penetrated" is more of an equipment choice and engineering decision specific to armor and active kinetic counter-munitions systems, like anti-drone shotguns, tanks with active protection systems, chobham armor, etc.
If a munition has been fired at you, first try not to get penetrated by it at all, and if that fails, try to prevent something catastrophic, like the bolus of molten copper from an explosively formed penetrator spraying into the inside of your armored personnel carrier.
To keep with the analogy: no one is going to stand in an open field while people are shooting at them. So why do a small but vocal subset of people online suggest that you just put on your bulletproof vest, and claim that hiding in the woods, regardless of the vest, is a bad idea?
Therefore, the safest assumption to make is that an adversary has already figured out all of your obscurity, because they can always do this given sufficient time and interest, at which point the only thing between them and you is your security.
That is why we design systems without obscurity and only care about security.
Obscurity is optional.
Obscurity is not worthwhile when it increases your own costs. Nevertheless, if you can add obscurity with negligible additional cost and inconvenience, then you should do it.
Security through obscurity merely means that your system is atypical. It's not hidden, it's not secret, it's not hard to find, it's not hard to examine, it's not less visible, etc - there is nothing inherently different about the systems at all other than that one is more common than the other. It's just less typical.
Obscurity is not the same thing as something being "obscured".
Obscurity means something is either difficult to comprehend, not well known or uncommon.
Obscured means something is hidden or concealed. When something is hidden, that means the thing is still there and there is a way to get to it. You can build automated tools around finding it.
>Being 'less typical' is a form of security because most attacks rely on some form of pattern recognition, and obscurity literally dissolves patterns into noise.
This is making the leap of faith assumption that "obscurity" is equivalent to "impossible to understand". In security you have no control over the attacker and therefore have to assume your attacker has more than enough knowledge and intelligence to perform the attack.
Computer systems are static and unchanging without frequent patching, so you can't assume there is a cat-and-mouse game where the mouse adapts its hiding strategies dynamically and manages to escape every single time.
As is always the case in these semantic discussions, the answer depends on your initial axioms and assumptions, which does kind of make most of these discussions pointless (but I did learn a lot from this one).
This notion was termed "security through obscurity", i.e. "you use the less popular option, therefore that option is safer". It has nothing to do with "obscuring" in the sense of "hiding"; that's a linguistic quirk of a colloquial term. If you were actually taking action to reduce the ability to understand a system in a way that you could meaningfully defend, it would no longer be "security through obscurity".
The argument has persisted because there are two different questions that sound the same (X is less typical than Y):
1. Is "X" safer than "Y"?
2. Is a user of "X" safer than a user of "Y"?
When looking at (1) in isolation, you can say things like "X lacks security features, therefore Y is safer" and "X is less often used, therefore X is safer", etc. This is a question about the posture of the project itself, in isolation.
(2) is about the context for users. The reality is that X, which perhaps is fundamentally less well built software, may actually have users who are attacked far less frequently.
Both are likely to favor "rarity is a poor indicator of safety", as we generally reject mitigation approaches that rely on attackers behaving in specific ways, but what's important is that these are completely different questions and neither has to do with being obscured but rather rare.
None of this is about what is "obscured" or not. If something is obscured or obfuscated, that is a technique that can be evaluated separately by its own merits (ie: how hard is deobfuscation, how easy is it to adapt to deobfuscation, etc). All of this is about whether you're evaluating (1) or (2) - and in the case of (1), which is what the criticism always has focused on, the answer is that "rarity" is not a mitigation.
That is not where the term comes from.
Basically the insurgents choose terrain they know well, because they live there. They choose a swamp / mire in an open field between two hills. They build fortifications. They obscure the true nature of the ground they're standing on, out in the open. They goad the king's army into finishing them then and there. They fight on foot against knights on horseback. It's a mess. They win.
You literally just read how Obscurity protected OP in a cybersecurity incident. Now you are just playing word games, which are a waste of time.
Perhaps a better word would be resistance (to intrusion), which is a dimension orthogonal to visibility.
Obscurity alone isn't security. Security that includes obscurity in its architecture is relevant.
Asking because of the Baader–Meinhof phenomenon :)
I recently learned about that and now I see it everywhere, weird.
It's a bit of an elitist view of security that romanticizes concepts without thinking about what they can actually be used for. My personal bad experience with that was a manager who told me that having a different subdomain for the admin panel was concealment and not a security practice.
I mean, it's very easy to see how this kind of argument actually prevents you from doing something that can help, purely on the basis of philosophical purity, which often just misses the point. Security is not a single mechanism that will solve all your problems; heck, in fact I have to layer at least 4 mechanisms just on the HTTP interface to feel safe. It's more that a lot of layers together form a barrier.
We lean too much on TLS, thinking "that's it, security job is done", and then we get some crazy stuff like the French ANTS getting pwned with some IDOR, as if using some hash or something, anything at all for f*'s sake, would not have helped.
Obscurity is decreasingly effective as more people use it. Security is more effective as more people use it.
Ideally we want a viable plan B, for when it’s leaked/figured out. (E.g. generate new passwords)
(For convenience, let's label air-gapping as a kind of physical security.)
That's not what the expression means.
"Security through obscurity" has a very specific meaning — that your system's security depends on your adversary not understanding how it works. E.g. understanding RSA is a few wikipedia articles away, and that doesn't compromise its security, so RSA isn't security through obscurity.
But I think it is interesting and useful to detach from that specific label with all connotations, and treat it for a moment as just regular english phrase.
So we can analyse the wider pattern, see why it is deemed flawed, whether it is a binary choice or a spectrum.
(A notable way to frame the analysis: the hacker does not attack RSA; the hacker will hack a certain implementation of an SSH server and use Heartbleed-v2 to sidestep RSA completely.)
Lucketone's argument is essentially saying that the bad practice itself isn't actually a bad practice by equivocating the term of art and the plain language definition.
I've used it for a long, long time. Like, in 1999 I'd have a knock on certain ports in a certain order to unlock the SSH port.
And lots of weird stuff to stop forum spam, which could work for weeks or months or even a year.
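For the curious, the client side of a port knock is tiny. A minimal sketch in Python (host and knock sequence are made up; the server side would typically be knockd or netfilter rules watching for the sequence):

    import socket, time

    HOST = "example.com"           # hypothetical server
    SEQUENCE = [7000, 8000, 9000]  # hypothetical knock sequence

    for port in SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            # The ports are closed; the SYN itself is the "knock" the server sees.
            s.connect((HOST, port))
        except OSError:
            pass  # refusal or timeout is expected
        finally:
            s.close()
        time.sleep(0.2)  # small gap so the knocks arrive in order

    # After the full sequence, the firewall opens the real SSH port for your IP.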
"Security through obscurity" refers to the practice of using an hard to change "thing" as a secret, which is indeed bad practice
Security through obscurity in cryptosystems would mean defining your own crypto algorithm (or using a secretly-defined one, secret in the sense that it is unknown to the adversaries) to protect your system.
It is NOT bad in itself. It IS bad if you rely only on that. Even if you use a "secret" algorithm, you MUST protect the keys as with a public algorithm. Also, being secret means you cannot benefit from the cryptanalysis of the community, which is in practice very important. BUT... if you have a lot of cryptanalysis expertise at your disposal, then using a secret algorithm can be very effective.
Your password (plain text) is secret because only you are supposed to have it. In the digital realm, sharing the contents of the password (plain text) is akin to making a copy of it: undesirable.
Now, the algorithm that hashes the plain text for comparison with the stored hash, that can be known by anyone, and typically is.
So password ≠ hashing algorithm.
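A minimal sketch of that split using Python's standard library (the parameters are illustrative, and PBKDF2 is just one example of a public algorithm):

    import hashlib, hmac, os

    def hash_password(password: str, salt: bytes) -> bytes:
        # The algorithm (PBKDF2-HMAC-SHA256) is public knowledge;
        # only the password input is secret.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

    salt = os.urandom(16)
    stored = hash_password("correct horse battery staple", salt)

    # Verification re-runs the public algorithm on the secret input.
    attempt = hash_password("correct horse battery staple", salt)
    print(hmac.compare_digest(stored, attempt))  # True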
If you were hiding in cover during WW1, maybe you had a chance.
But if you were hiding from the Terminator, who is "Tireless, Fearless, Merciless", it might not last that long.
The same might be said of exploits hiding from people... vs. AI.
All security is security through obscurity. When it gets obscure enough we call it “public key cryptography”. Guess the two 1024-bit primes I'm thinking of and win a fabulous prize! (access to all of my data)
This is the crux of the article.
(1) Kerckhoffs's Principle doesn’t say that. It says to design the system AS IF the adversary has all of the info about it except the secrets (encryption key, certificates, etc).
(2) this rule is okay if you are a solo maintainer of a WordPress installation. It’s a problem if you work at a large company and part of the company knows the full intent of this, while the rest of the company doesn’t know the other layers of security BECAUSE of the obscurity layer. In this way, it’s important to communicate that this is only a layer and shouldn’t replace any other security decisions.
More broadly, anything that raises the cost of an attack helps security. Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.
For instance, with respect to URL parameters, I have seen people being told they have an Insecure Direct Object Reference, who then apply base64 encoding to obscure what is going on. QA doesn't notice because it looks like junk; it is obscure. But base64-encoded parameters are catnip to hackers.
So in this case, the obscurity made the system worse over time.
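To make it concrete: base64 is a reversible encoding, not encryption, so the "obscured" parameter is one function call away from plaintext (the value here is made up):

    import base64

    param = "dXNlcl9pZD00Mg=="      # looks like junk to a casual QA pass
    print(base64.b64decode(param))  # b'user_id=42' -- the IDOR is still right there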
Heck, there's the most cringeworthy phrase, "Base64 Encryption", which I have heard many, many times.
But I think it's covered by your immediate parent comment
> Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.
So the base64 introduces a marginal security gain, but in addition to the effort spent on implementation, it increases the cost of other efforts (which is the case for almost all features). With a fixed QA budget (which is, again, always the case), the quality of the QA (pardon the redundancy) will be the parameter that suffers.
So yes, if the security gain is very minimal, then it's likely that the cost of the feature will be so great comparatively that it will not only affect all the other parameters, like ease of use, but the negative indirect impact on security will be greater than the marginal positive direct impact on security.
Many such cases.
Might give you enough time to change the locks. But not provably — which can matter to a lot of people.
Again, I'm not opposed to simple tricks like this to “buy some time” so long as they don’t PREVENT the deeper layers of security from being performed. But if a company has scarce resources and a choice between patching unpatched software or changing DB names from the defaults the former actually improves security and the latter should only be performed if the staff has solved all of the higher risk items.
But it can add a bit of delay to someone breaking actual security, so maybe they'll hit the next target first as that is a touch easier. Though with the increasing automation of hole detection and exploitation, even that might stop being the case if it hasn't already.
The biggest problem with obscurity measures IMO is psychological: people tend to assume that the measures⁰ are far more effective than they actually are, so they might make less effort to verify that the proper security is done properly.
----
[0] like moving SSHd to a non-standard port¹
[1] a solution that can inconvenience your users more than attackers, and historically (in combination with exploiting a couple of bugs) actually made certain local non-root credential scanning attacks possible if you chose a high port
Now, in both instances, the obscurity provided does not necessarily cure your infrastructure's vulnerabilities; a dedicated attacker wouldn't have a single problem with either of these. But for someone who hammers the whole internet in the dim hope of finding another WordPress server from 2017, or the latest flawed online security cam, your disguise is as good as perfect.
So ASLR [1] is not a security control? I guess you are pretty alone with this opinion.
[1] https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
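(If you've never watched it in action, here's a Linux-only sketch: run it twice and the stack mapping moves each run, courtesy of ASLR.)

    # Linux-only: print where the kernel mapped this process's stack.
    with open("/proc/self/maps") as maps:
        for line in maps:
            if "[stack]" in line:
                print(line.split()[0])  # address range differs between runs under ASLR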
I am pretty sure everyone who works in security agrees that obscurity is not security.
Security through obscurity in this case would be to roll your own ASLR implementation with a different randomization strategy.
ASLR is a well understood system that exploit writers know to expect and thus ASLR is not security through obscurity.
I.e. just because you* don't know where something is doesn't mean it's using obscurity to hide.
The reason is important, because words mean things: if you say knowledge of some secret is security through obscurity, that means passwords are security through obscurity.
*: that may or may not be available to the attacker.
In other words, just because a secret exists doesn't put that secret into the 'obscurity' category.
The delay can also be infinite in practice. If a really bad zero day is discovered, it might protect you from becoming a victim. No guarantees, but it can improve your chances.
> that step didn't add any security.
It is a decision that’s part of the entire process. A branch of many in the decision tree. Other branches are deciding which characters to type for the password; ASCII characters can be as little as 1 bit apart. Deciding between left and right is also 1 bit apart.
I think it boils down to what people commonly understand to be publicly knowable information versus understood-to-be-secret information.
One example: I self-host my password manager at pw.example.com/some-secret-path/. That extra path adds as much to security as a randomly picked username in HTTP Basic Auth: arguably none. Yet, it is as impossible for attackers to enumerate and find that path as it is with passwords.
The difference is that the path leaks easier. It’s not generally understood to be a secret. Yet I argue it helps security. (Example: leaking the domain name through certificate transparency logs AND even, say, user credentials means an attack is still unsuccessful; a strictly necessary piece of the puzzle is missing).
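Back-of-envelope for why enumeration is hopeless (the numbers below are generic, not my actual path):

    import math

    alphabet = 26 + 10  # hypothetical: lowercase letters plus digits
    length = 20         # hypothetical path segment length
    bits = length * math.log2(alphabet)
    print(f"{bits:.0f} bits of entropy")  # ~103 bits, password-grade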
However "Not Having Stuff to Steal" works like a charm. It's thousands of years old, and has never gone out of style.
I know that it's considered blasphemy, hereabouts, but I've found that not collecting information that I don't absolutely need is pretty effective.
Even if someone knocks down all my gates and fences, they'll find the fox wasn't worth the chase.
It does make stuff like compiling metrics more of a pain, but that's my problem; not my users'.
> I don't think "obscurity" really buys you much (especially these days, with LLMs).
Actually, I think it does; even more so with LLMs. As has been posited before (particularly on the threads about open source projects going closed source), security comes down to who has paid more attention to the code, the attacker or the defender. And of course, these days attention is measured in tokens.
We know that LLMs are pretty capable of reverse-engineering an application's logic, but I would bet it takes many more tokens than reading the code or other public information directly. As such, obscurity adds an important layer to security: increasing the costs on the attacker.
Security has always been a numbers game, but now the numbers will overwhelmingly be tokens and scale. If defenders can cheaply raise the costs on attackers by adding simple layers of obscurity, it can act as a significant deterrent at scale. I wonder if we'll even see new obfuscation techniques that are cheap to implement but targeted specifically at LLMs...
One example I remember is Pidgin storing its passwords in plain text in $HOME. They could have encrypted them with some hardcoded string, and made a lot of people happy that they would no longer grep their $HOME and find their passwords right there. However, this would have had the side effect that people start dropping the ball and sharing their config files with others, or forgetting to set up proper permissions for their $HOME, etc.
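That "encrypt with a hardcoded string" option would have amounted to something like this sketch (hypothetical, and exactly as weak as it looks, since the key ships with the program):

    HARDCODED_KEY = b"pidgin-obfuscation"  # hypothetical; embedded in the binary, so not a secret

    def obfuscate(data: bytes) -> bytes:
        # XOR with a repeating hardcoded key: defeats a casual grep of $HOME,
        # but anyone with the source or binary can reverse it instantly.
        return bytes(b ^ HARDCODED_KEY[i % len(HARDCODED_KEY)] for i, b in enumerate(data))

    stored = obfuscate(b"hunter2")
    print(obfuscate(stored))  # b'hunter2' -- the same function decodes it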
In addition, these layers of obscurity are also not overhead-free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.
The whole point of security in depth is that you use non-collinear layers of protection to raise the cost of an attack and reduce the blast radius of a successful attack.
(Note also most keychain implementations are not truly improving security in any way, but this is a separate topic)
That said, purple3/pidgin3 (still in development) only supports keyrings and doesn't try to do any password management on its own, even though password managers fall into the "store a password(s) behind a password" category as detailed on the above page.
Does that make it wrong?
I almost missed the twist at the end because I had no idea what the hell cockroach papers were. I still don't understand the reference, but at least it sounds mildly interesting. So, well done.
Now, as for this strawman argument of yours about justifying an infinite amount of crap, that's true of all manner of disingenuous arguments. Who cares about that in this case?
> Or forgetting to setup proper permissions for their $HOME, etc.
This is Pidgin's fault how?
Now, if you wanted to argue that Pidgin should have put the passwords into a separate file and chmod400'ed it that would make much more sense.
> In addition, these layers of obscurity are also not overhead-free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.
Not many good things have zero cost, do they... The point of TFA is that a little bit of well thought out obscurity pays huge dividends when applied in the real world. His example about the WP exploit ought to be all you need to read to get on board with that.
Security ONLY through obscurity is bad (Kerckhoffs's Principle).
Security through obscurity, as an additional layer, is good!
I've been saying this ever since that phrase was coined. A layer or two of obscurity keeps a lot of noise out of the logs, reduces alert fatigue, cuts down on storage costs (especially if one is using Splunk as their SIEM), and makes targeted attacks much easier to detect. I will keep it.
The argument is that it's much easier to secure proper key material rather than design and config information that can often be leaked accidentally because it's actually directly manipulated by humans (employee onboarding, employee churn etc)
If the focus is on the latter, obscurity buys you nothing and adds complexity/distraction, which is bad. The former can be important though.
You have been alive since the 1880s?
"Security including obscurity" is fine.
It's a simple probability calculation. If some automated scanning tools can't find your service, a lot of attackers will never know of its existence. So even if it has an unpatched vulnerability, they won't attack it.
If 1000 attackers find the vulnerable system, the probability is high that at least one attacks it. If only one or two find it, they might just ignore your system, because they found thousands of others they randomly chose first.
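The arithmetic behind that, with a made-up per-attacker probability:

    # Chance that at least one of n finders attacks, if each attacks with probability p.
    p = 0.01  # hypothetical per-attacker probability
    for n in (2, 1000):
        print(n, "finders ->", round(1 - (1 - p) ** n, 5))
    # 2 finders    -> ~0.02    (probably ignored)
    # 1000 finders -> ~0.99996 (near-certain attack)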
Obscurity provides, effectively, no security. There may be other benefits to the obscurity, but considering the obscurity a layer of your security is bad. I hope we all agree that moving telnet to another port provides no security (it's easily sniffable, easily fingerprintable).
If it provides another benefit, use it, but don't think there's any security in it.
For ~30 years I've moved my ssh to a non-standard port. It quiets down the logs nicely, people aren't always knocking on the door. But it's not a component of my security: I still disable password auth, disable root login, and only use ssh keys for access. But considering it security is undeniably bad.
I disagree on this. It's right up there with "premature optimization is the root of all evil" on the list of phrases that get parroted by a certain type of engineer who is more interested in repeating sound bites than understanding the situation.
You can even see it throughout this comment section: Half of the top level comments were clearly written by people who didn't even read the first section of the article and are instead arguing with the headline or what they assumed the article says
You may not see it as “security“, but any entity that is actively monitoring their logs benefits when the false positives decrease. If I am dealing with 800 failed login attempts per minute I cannot possibly investigate all of them. But if failed logins are rare in my environment, I may be able to investigate each one.
Obscurity that increases the signal to noise ratio is a force multiplier for active defense.
I don't just think fail2ban protects obfuscated ports; I know it. If an IP is trying to connect to a system on port 22, it is ipso facto unwanted and doing unauthorized activities. Plonk! Onto the ban list it goes. You'd be surprised how effective that is.
Once the roar of automated skiddies is silenced, the signal of real attacks cuts through the noise quite clearly.
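The rule is simple enough to sketch as a toy honeypot (not fail2ban itself, just the "any touch of port 22 earns a ban" logic; needs root to bind the port, and a real setup would push the bans into the firewall):

    import socket

    banned: set[str] = set()

    # Real sshd lives on a non-standard port; nothing legitimate should touch 22.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 22))
    srv.listen()

    while True:
        conn, (ip, _) = srv.accept()
        conn.close()
        banned.add(ip)  # a real setup would feed this to the firewall
        print("banned", ip)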
Remember, to avoid being eaten by most bears, you don't have to outrun them -- you only have to outrun the poor sap next to you. ;-) There is real world value in raising the bar and becoming even a moderately harder target than the rest of the crowd.
Maybe I should spin up a vanilla VM and just let it get hammered for a month and post the logs here....
[1] It's been a while since I looked at prices for tens of thousands of distinct proxy connections. Anyone want to pretend to be a hax0r and get a current price quote?
Advice like this should be at the top of the chapter in the textbook that teaches young sysmonkeys how to admin a box securely. Well stated.
Q: Why would you "review the logs" by (human/agent) hand for a service exposed to the Internet? What are you actually looking for?
[I say this as someone who has tens of thousands of failed auth attempts against services I expose to the Internet. Per day.]
If I were you I would do that immediately. Then, once your logs become actually useful again, look at them.
"Hmmm. There sure seem to be a lot of failed login attempts for bobsmith@server. Maybe I should call him up and see if there's something going on."
Q: If you've still done the right things - "disable[d] password auth, disable[d] root login, and only use ssh keys for access" - why do you care about how 'quiet' your logs are?
Obfuscating JS is probably a decent defence against your 9-year-old brother. It is not against a motivated, well-funded, state-sponsored attacker.
Part of what bugs me about English is the practical ambiguity of the colloquial understanding of what "<foo> is <bar>" implies. Does it mean that all foos are also bars, or that there exists a foo that is also a bar? Does it mean foo is always bar, or foo is often bar? Dutch is my first language and I grew up in South Viet Nam, Nigeria and Texas. I did not get the standard programming.
There's a whole spectrum between a 9-year-old and a motivated state actor, and obfuscation is effective for a big part of that spectrum.