Posted by mobeigi 6 days ago

Security through obscurity is not bad (mobeigi.com)
205 points | 214 comments
rascul 6 days ago|
Obscurity can be fine but it's not security. I think of it like cover and concealment in the military. Security is cover. Something you can get behind so the bullets don't hit you. Obscurity is concealment. Harder to see, harder to find, so the enemy doesn't know where to shoot, but it's not stopping any bullets. Both have advantages and disadvantages and can complement each other depending on how they're used.
raffraffraff 6 days ago||
Example: there are teenage gangs going around on high powered scooters in my city, carrying hammers and mini grinders. They pair up on a scooter, steal a bike and disappear.

I watched them. They don't want to hang around longer than necessary. They will only approach a bike rack that is clearly visible from the road. They will only steal a bike that has unobstructed access to the road (no tricky bollards or other bikes to get around). Even though they are full of bravado, and shout obscenities and threats at me when I tell them to fuck off, they still run away (even though the one approaching the bikes is carrying a weapon while his companion stays on the scooter, ready to escape).

Anything that even mildly inconveniences these guys is enough to stop them attempting theft. The bikes they steal need to be expensive, out in the open, with direct access to the road, and with a shitty lock. And believe it or not, those tumblers line up a lot.

Throwing a blanket over a bike is probably enough to stop them from even approaching it.

canpan 6 days ago|||
It's not great, but basically if your lock is better than the lock on the bicycle next to yours, they will most likely not steal yours.
WhyNotHugo 5 days ago|||
Your bike should always look like a less interesting target [for theft] than the other bikes in the same rack.
kombookcha 5 days ago|||
You only need to be faster than the slowest gazelle in the pack, right?
withinboredom 5 days ago||
Gazelle bikes are pretty fast ... instructions unclear.
ozim 18 hours ago||
Continuing the Dutch jokes:

Well, the lock itself has value to a junkie in Amsterdam; if you get an expensive one, it's additional loot.

hackeraccount 5 days ago|||
That's fine as long as you are really clear in your mind about what's going on.

In IT there are a lot of people tossing a blanket over the bike and believing they're affecting the ability of the attacker, when they're really just changing the likelihood of an attack.

Imagine if every single person put a blanket over their bike. Now imagine if everyone got a chain that was 10 times stronger. Which world would you rather live in?

m3047 5 days ago||
Honey pots, tar pits, bot motels, janky configs, visible telemetry (for example): these slow down adversaries in two ways. 1) They directly slow the adversary down and force them to navigate deliberately. 2) They increase uncertainty in uncomfortable ways; the effectiveness of this depends on how important it is for the adversary to remain undiscovered and not "poke the bear". Together, the effect is more than additive.

In addition to likelihood, attacks have shape. And proper installations can force your adversaries' maneuvers to take a certain shape. I've heard this referred to as terraforming.

If you're going to "do it in the road" (a highly visible bike rack), your lock or chain works much better when it is better/stronger than the herd's. If everyone has a chain which is 10x stronger, then a better grinder becomes a cost of doing business. Maybe I'd rather live in a world where I didn't use that bike rack.
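The tar-pit idea can be sketched in a few lines of Python: a toy listener that dribbles its banner out one byte at a time, so every automated client that connects pays a wall-clock tax. The banner text, port choice, and delay here are arbitrary illustration, not any real tool's behavior.

```python
import socket
import threading
import time

def tarpit(server_sock, banner=b"220 welcome to nowhere\r\n", delay=0.05):
    """Accept one connection and dribble the banner out one byte at a
    time, wasting the client's time (toy version of an SMTP/SSH tarpit)."""
    conn, _ = server_sock.accept()
    try:
        for byte in banner:
            conn.sendall(bytes([byte]))
            time.sleep(delay)  # each byte costs the scanner wall-clock time
    finally:
        conn.close()

# Demo: run the tarpit on an ephemeral port and connect to it.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=tarpit, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
start = time.monotonic()
data = b""
while b"\r\n" not in data:
    chunk = client.recv(1)
    if not chunk:
        break
    data += chunk
elapsed = time.monotonic() - start
client.close()
print(data, round(elapsed, 2))  # full banner arrives, but only after ~1.2s
```

A 24-byte banner at 50 ms per byte holds each client for over a second; real tarpits stretch the same trick out to hours per connection.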

mday27 6 days ago|||
This is an especially good analogy because facing a well-resourced adversary in cybersecurity is like finding out that the enemy brought artillery -- hopefully you weren't relying entirely on obscurity because pretty soon there will be nowhere to hide
TeMPOraL 6 days ago|||
Funny analogy, in that when the high-caliber shells start raining, most forms of cover won't make a difference. The ones that will are not something you want to stay behind on days when you're not being actively bombed. In fact, keeping you behind such protections is by itself a military tactic: it lets the enemy roam freely and maneuver around you.

But the basic flaw of this analogy is that it implies you're at war, and your system is always in battle.

cindyllm 6 days ago|||
[dead]
neoCrimeLabs 6 days ago|||
Agreed with your sentiment, and that was a great example.

Just like any security control, if it's your only means of security, it will not offer much risk reduction. If you want risk reduction, use multiple security controls together. And as with all security controls, there is no way to eliminate risk, only to reduce it as much as possible while still being able to effectively achieve your mission.

Because of this I believe security through obscurity to be an important component in a healthy and mature risk posture.

It irks me when it's dismissed because "obscurity is not security". No single security control is security on its own.

Maxion 6 days ago||
Obscurity by itself does provide risk reduction.

Think about leaving your bike unlocked in Times Square vs. at the top of a 7,000-meter mountain in the Himalayas.

Which unlocked (unsecured) bike is more likely to be stolen, and which therefore has the higher risk attached?

----

Obscurity does not help you once the thief has already found your bike, nor is obscurity very helpful for keeping your bike safe if you happen to live in Times Square.

But if you live at the top of a Himalayan peak, you can be fairly certain you're not going to have your bike stolen.

red-iron-pine 5 days ago||
the security controls for a bike on a high mountain are not obscurity, they're the lack of oxygen (that kills), the cold (that kills), the height (that kills), and the literal sheer difficulty of getting there.

you could put the bike right on the side of the mountain without any obfuscation and it won't get got because ain't no one gonna die for a bike.

it's like how we know where dead people are on Everest but we can't get them down; they serve as landmarks.

walrus01 6 days ago|||
Because I love how seriously the DoD takes newly invented terms, we have:

"The Integrated Survivability Onion"

https://cogecog.com/the-threat-onion/

1. Don't be seen.

2. Don't be acquired.

3. Don't be hit.

4. Don't be penetrated.

5. Don't be killed.

It's actually not a bad mental model training aid for teaching people who might find themselves in an active combat environment.

trashb 5 days ago|||
I feel like "Don't be captured" should be included, perhaps between 4 and 5. In cybersecurity this would be applicable to ransomware.
red-iron-pine 5 days ago||
the implication of the "don't be acquired" and "don't be penetrated" is some sort of anti-air or anti-tank missile.

"killed" in this case would be equivalent to having something penetrate and hit sensitive systems. at that point it's basically just a function of what the penetrator is trying to do -- if they just want $$$ they ransomware. if they want exfil or DoS or making critical systems do naughty things that is also a kill.

walrus01 5 days ago||
> the implication of the "don't be acquired" and "don't be penetrated" is some sort of anti-air or anti-tank missile

not necessarily - this model is also taught for army/marines type ground combat operations, in how to effectively camouflage, how to manoeuvre.

the "don't be penetrated" is more of an equipment choice and engineering decision specific to armor and active kinetic counter-munitions systems, like anti-drone shotguns, tanks with active protection systems, chobham armor, etc.

If a munition has been fired at you, first try to not get penetrated by it at all, and if that fails, try to prevent something catastrophic, like a bolus of molten copper from an explosively formed penetrator spraying into the inside of your armored personnel carrier.

TZubiri 5 days ago||||
The acquired concept is new to me, is this an established term?
walrus01 5 days ago||
in the sense of one military force contacting another, yes, as in acquiring a target.
Maxion 6 days ago|||
Works just as well anywhere, really.
staticassertion 6 days ago|||
I don't think that really works, because obscurity isn't harder to see or find. I don't know, the analogy is like standing out in the open and being like "yeah but who would think to look here lol".
willis936 6 days ago|||
I think you're misinterpreting "obscurity" for "lack of obscurity". If you have a vulnerability in an API interface that is completely undocumented, that vulnerability is obscured. It's hiding in the woods, not standing in a field.

To keep with the analogy: no one is going to stand in a field when people are shooting at you. So then why do a small subset of vocal people online suggest that you just put on your bulletproof vest and claim that hiding in the woods, regardless of the vest, is a bad idea?

arcfour 6 days ago|||
You know when people are shooting at you. You don't know when or if people are exploring undocumented/obscure features of your system and what they have learned about it that you were trying to hide.

Therefore, the safest assumption to make is that an adversary has already figured out all of your obscurity, because they can always do this given sufficient time and interest, at which point the only thing between them and you is your security.

That is why we design systems without obscurity and only care about security.

willis936 6 days ago|||
I agree that it's a good principle, but it's taken too far when used to justify needlessly growing the risk surface area. The principle is useful to justify security hardening. It is not useful when used to increase the odds of being attacked.
adrian_b 6 days ago|||
Security is mandatory.

Obscurity is optional.

Obscurity is not worthwhile when it increases your own costs. Nevertheless, if you can add obscurity with negligible additional cost and inconvenience, then you should do it.

staticassertion 6 days ago|||
This isn't about what's a good idea or bad idea. Perhaps it's best to simply leave analogies behind, otherwise we'll just focus on the wrong thing.

Security through obscurity merely means that your system is atypical. It's not hidden, it's not secret, it's not hard to find, it's not hard to examine, it's not less visible, etc - there is nothing inherently different about the systems at all other than that one is more common than the other. It's just less typical.

willis936 6 days ago|||
What you're describing is a thing that is not obscured. Don't refer to things as obscured if they are not obscured. When others talk about things that are obscured, they are talking about things that are obscured, not things that are not obscured.
staticassertion 6 days ago|||
You can see my other comment on this. The word "obscure" is not very relevant to the phrase "security through obscurity".
dreambuffer 6 days ago|||
I'm having a hard time understanding what you mean here. If something is obscured, by definition it is less visible. Being 'less typical' is a form of security because most attacks rely on some form of pattern recognition, and obscurity literally dissolves patterns into noise.
imtringued 6 days ago|||
>If something is obscured, by definition it is less visible.

Obscurity is not the same thing as something being "obscured".

Obscurity means something is either difficult to comprehend, not well known or uncommon.

Obscured means something is hidden or concealed. When something is hidden, that means the thing is still there and there is a way to get to it. You can build automated tools around finding it.

>Being 'less typical' is a form of security because most attacks rely on some form of pattern recognition, and obscurity literally dissolves patterns into noise.

This is making the leap of faith assumption that "obscurity" is equivalent to "impossible to understand". In security you have no control over the attacker and therefore have to assume your attacker has more than enough knowledge and intelligence to perform the attack.

Since computer systems are static and unchanging without frequent patching, you can't assume that there is a cat and mouse game where the mouse is adapting its hiding strategies dynamically and managing to escape every single time.

dreambuffer 6 days ago||
Depends, some systems are dynamic. There is also a gray area where obscurity can be computationally infeasible to attack, but not bound by traditional polynomial assumptions in cryptography.

As is always the case in these semantic discussions, the answer depends on your initial axioms and assumptions, which does kind of make most of these discussions pointless (but I did learn a lot from this one).

staticassertion 6 days ago||||
You're overly focusing on the term and not the meaning. The term comes about from people choosing tools like Foxit or Opera and saying that those products are safer than their cohorts Adobe/Firefox because they are attacked less often.

This notion was termed "security through obscurity", i.e. "you use the less popular option, therefore that option is safer". It has nothing to do with "obscuring" in the sense of "hiding"; that's a linguistic quirk of a colloquial term. If you were actually taking action to reduce the ability to understand a system in a way that you could meaningfully defend, it would no longer be "security through obscurity".

The argument has persisted because there are two different questions that sound the same (X is less typical than Y):

1. Is "X" safer than "Y"?

2. Is a user of "X" safer than a user of "Y"?

When looking at (1) in isolation, you can say things like "X lacks security features, therefore Y is safer" and "X is less often used, therefore X is safer", etc. This is a question about the posture of the project itself, in isolation.

(2) is about the context for users. The reality is that X, which perhaps is fundamentally less well built software, may actually have users who are attacked far less frequently.

Both are likely to favor "rarity is a poor indicator of safety", as we generally reject mitigation approaches that rely on attackers behaving in specific ways, but what's important is that these are completely different questions, and neither has to do with being obscured but rather with being rare.

None of this is about what is "obscured" or not. If something is obscured or obfuscated, that is a technique that can be evaluated separately by its own merits (ie: how hard is deobfuscation, how easy is it to adapt to deobfuscation, etc). All of this is about whether you're evaluating (1) or (2) - and in the case of (1), which is what the criticism always has focused on, the answer is that "rarity" is not a mitigation.

bawolff 6 days ago|||
> The term comes about from people choosing tools like "foxit" or "Opera" and saying that those products are safer than their cohorts Adobe/ Firefox because they are attacked less often.

That is not where the term comes from.

staticassertion 6 days ago||
In the infosec world, it pretty much is where the popular discourse has always been. It's just a bunch of nonsense terminology.
dreambuffer 6 days ago|||
I understand it now, thanks
m3047 5 days ago|||
Visibility is also a mental construct of what we expect to see and what we know already and can map to what we see. "Obscure" is doing a lot of work here. It doesn't necessarily mean hidden; it can mean the object's true purpose or form is hidden from some particular vantage, and only that vantage.
m3047 5 days ago||||
Interesting. Have you seen the movie Braveheart? That's the lead-up to the later humiliation of the king in battle; there's a movie/drama about this one too. I saw it recently, but don't remember the name.

Basically the insurgents choose terrain they know well, because they live there. They choose a swamp / mire in an open field between two hills. They build fortifications. They obscure the true nature of the ground they're standing on, out in the open. They goad the king's army into finishing them then and there. They fight on foot against knights on horseback. It's a mess. They win.

singpolyma3 6 days ago|||
The first rule of not being seen: to not stand up.
gerdesj 6 days ago||
Not stand out.
TZubiri 5 days ago|||
>Obscurity can be fine but it's not security

You literally just read how Obscurity protected OP in a cybersecurity incident. Now you are just playing word games, which are a waste of time.

tgv 5 days ago||
It does seem to be a word game, because "it's not stopping any bullets" either isn't honest (it does stop bullets from hitting you when the enemy doesn't know where to shoot) or it's limited, just like obscurity is ("it may stop a few bullets, but it won't stop all, and there will be other weapons it can't stop either"). I think public key exchange is considered security, but it still requires you to obscure your private keys.

Perhaps a better word would be resistance (to intrusion), which is a dimension orthogonal to visibility.

j45 6 days ago|||
100%.

Obscurity alone isn't security. Security that includes obscurity in its architecture is relevant.

Byamarro 5 days ago|||
Security through obscurity is basically mitigation. You reduce risk/impact, not eliminate it. There are problems, such as denial-of-wallet attacks, where you can only mitigate and can't eliminate the problem completely.
red369 6 days ago|||
Well off-topic, but did you recently listen to Andy Stumpf on a podcast?

Asking because of the Baader–Meinhof phenomenon :)

shric 6 days ago||
> Asking because of the Baader–Meinhof phenomenon :)

I recently learned about that and now I see it everywhere, weird.

6r17 6 days ago|||
The problem with that statement is that a lot of people who wield it fail to see the advantages that come with these extra shenanigans ; and let's just take pure concealment so I don't push weird arguments ; in the age of AI - each time we are able to make an attacking AI misaligned we are essentially buying time ; an on-going attack is never a one-shot event ; it's an ongoing process where the attacker has to understand where it is located and what it can do ; since each element will be a resource, do not let them have it in the first place.

It's a bit of an elitist view of security that romanticizes concepts without thinking about what they can actually be used for. My personal bad experience with that was a manager who was telling me that having a different subdomain for the admin panel was concealment and not a security practice.

I mean - it's very easy to see how this kind of argument actually prevents you from doing something that can help, just on the basis of philosophical purity - which often just misses the point - security is not a mechanism that will solve all your problems ; heck, in fact I have to layer at least 4 mechanisms just on the http interface to feel safe ; it's more that a lot of layers together form a barrier ;

We sit too much on TLS thinking "That's it, security job is done" - then we get some crazy stuff like the French ANTS getting pwned with some IDOR ; as IF using some hash or something ; ANYTHING PLEASE F* HELL ; would not have helped

pamcake 5 days ago|||
Obscurity isn't security but it can support security. Until it doesn't.
hackeraccount 5 days ago||
I think I got the difference in my head. How about this:

Obscurity is decreasingly effective as more people use it. Security is increasingly effective as more people use it.

lucketone 6 days ago|||
All modes of cyber security depend on some obscurity (e.g. password)

Ideally we want a viable plan B, for when it’s leaked/figured out. (E.g. generate new passwords)

(For convenience let’s label air-gap as kind of physical security)

pdpi 6 days ago|||
> All modes of cyber security depend on some obscurity (e.g. password)

That's not what the expression means.

"Security through obscurity" has a very specific meaning — that your system's security depends on your adversary not understanding how it works. E.g. understanding RSA is a few wikipedia articles away, and that doesn't compromise its security, so RSA isn't security through obscurity.

lucketone 3 days ago|||
I’m aware of that specific meaning. (Hiding behind an uncommon port also falls under the same umbrella.)

But I think it is interesting and useful to detach from that specific label with all its connotations, and treat it for a moment as just a regular English phrase.

So we can analyse the wider pattern, see why it is deemed flawed, whether it is a binary choice or a spectrum.

(Notable thing to frame the analysis: the hacker does not attack RSA; the hacker will hack a certain implementation of an SSH server and use a Heartbleed-v2 to sidestep RSA completely)

strken 6 days ago||||
Lucketone likely knows this and was pointing out that "obscurity" is a misleading word to use when talking about systems which all rely on obscurity, in the plain English sense of the word.
pdpi 6 days ago||
We're in a technical forum, discussing a term of art that refers to a very specific bad practice.

Lucketone's argument essentially says that the bad practice itself isn't actually a bad practice, by equivocating between the term of art and the plain-language definition.

strken 6 days ago|||
The problem is that the term of art is confusing to technical people. See TFA. Technical people make logical leaps from "avoid security through obscurity [in the specific context of security systems which depend on obscurity and for which there are better alternatives than obscurity]" to "you should never obfuscate JavaScript" because the word is imprecise.
fsckboy 6 days ago|||
"security through obscurity" is not a term of art; it is not solely that property which RSA does not rely upon.
sroussey 6 days ago|||
No, "Security through obscurity" is a valid and useful layer. A lot of weight hangs on your word “depends” though, in which case if it is the only layer then you will eventually have, uh, problems.

I’ve used it for a long long time. Like in 1999 I’d have a knock on certain ports in a certain order to unlock the ssh port.

And lots of weird stuff to stop forum spam. Which could work for weeks or months or even a year.

pdpi 6 days ago||
Port knocking isn't security through obscurity. Knowing that you have a port knocking system in place doesn't tell me what specific sequence of knocks will open up the service I want to target. Even just a two-knock sequence gives you a key with 32 bits of entropy, which makes it trivial to block attempts at brute-forcing the key.
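The 32-bit figure checks out directly: each knock selects one of 65,536 possible ports, so a two-knock sequence has a keyspace of 65,536². A quick sketch, ignoring the timing windows and protocol details real port-knocking daemons add:

```python
import math

ports = 65536          # one knock = one port number
knocks = 2
keyspace = ports ** knocks
entropy_bits = math.log2(keyspace)
print(entropy_bits)    # 32.0, comparable to a short random password
```

Each extra knock adds another 16 bits, so a five-knock sequence is already an 80-bit secret.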
ZoomZoomZoom 6 days ago||
I don't see how your argument makes sense. It's all just bits of entropy in the end, be it knowing a port to connect to or a character in your key.
pdpi 6 days ago||
Yeah, absolutely. That was precisely my point — requiring a secret (be it a password or the private part of an asymmetric key) isn't security through obscurity, and finding the sequence of knocks is equivalent to finding a password of equivalent complexity.
afiori 6 days ago||||
In cryptosystems there is a difference between things that can be changed and things that cannot, e.g. passwords/keys are a secret that can be easily changed; algorithms not so much.

"Security through obscurity" refers to the practice of using a hard-to-change "thing" as a secret, which is indeed bad practice

xeyownt 6 days ago||
Not exactly.

Security through obscurity in cryptosystems would mean defining your own crypto algorithm (or using a secretly-defined one, secret in the sense that it is unknown to the adversaries) to protect your system.

It is NOT bad in itself. It IS bad if you only rely on that. Even if you'd use a "secret" algorithm, you MUST protect the keys as with a public algorithm. Also, being secret means you cannot benefit from the cryptanalysis of the community, which is in practice very important. BUT... if you have a lot of cryptanalysis expertise at your disposal, then using a secret algorithm can be very effective.

0123456789ABCDE 6 days ago|||
i don't know a lot about the subject, but the little i know tells me this is not the way to look at this

your password (plain text) is secret because only you are supposed to have it. in the digital realm, sharing the contents of the password (plain-text) is akin to making a copy of it — undesirable

now, the algorithm that hashes the plain-text for comparison with the stored hash, that can be known by anyone, and typically is

so password ≠ hashing algorithm
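That split (public algorithm, secret password) can be shown in a few lines. This is a minimal sketch using plain SHA-256 for brevity; a real system should use a salted, deliberately slow KDF such as argon2 or scrypt:

```python
import hashlib

password = "correct horse battery staple"   # the secret
# The hashing algorithm is public knowledge; only the password is secret.
stored_hash = hashlib.sha256(password.encode()).hexdigest()

def check(attempt: str) -> bool:
    # Publishing this function costs nothing: an attacker still has to
    # guess the password to produce a matching hash.
    return hashlib.sha256(attempt.encode()).hexdigest() == stored_hash

print(check("correct horse battery staple"))  # True
print(check("hunter2"))                        # False
```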

lucketone 6 days ago||
Yes. Password and hashing algorithm are distinct things. I fully agree with you.
m463 6 days ago|||
I kind of wonder if the analogy might also carry over to the age of AI.

if you were hiding in cover during ww1, maybe you had a chance.

But if you were hiding from the Terminator, who is "Tireless, Fearless, Merciless", it might not last that long.

same might be said of exploits hiding from people... vs AI.

Lammy 6 days ago||
> Obscurity can be fine but it's not security.

All security is security through obscurity. When it gets obscure enough we call it “public key cryptography”. Guess the 2048-bit prime number I'm thinking of and win a fabulous prize! (access to all of my data)

thephyber 6 days ago||
> Security ONLY through obscurity is bad (Kerckhoffs's Principle).

This is the crux of the article.

(1) Kerckhoffs's Principle doesn’t say that. It says to design the system AS IF the adversary has all of the info about it except the secrets (encryption key, certificates, etc).

(2) this rule is okay if you are a solo maintainer of a WordPress installation. It’s a problem if you work at a large company and part of the company knows the full intent of this, while the rest of the company doesn’t know the other layers of security BECAUSE of the obscurity layer. In this way, it’s important to communicate that this is only a layer and shouldn’t replace any other security decisions.

MattPalmer1086 6 days ago||
Kerckhoffs's principle is not about security in general; it is about the design of cryptography. Assume your opponent knows everything about how your crypto system works. Your security then lies in the keys, not in knowledge of the method.

More broadly, anything that raises the cost of an attack helps security. Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.

rileymat2 6 days ago|||
If it does not obscure your own view of the security or reasoning about the security stance.

For instance, with respect to URL parameters, I have seen people being told they have an Insecure Direct Object Reference, then apply base64 encoding to it to obscure what is going on. To QA it looks like junk, so they don't notice it; it is obscure, but base64-encoded parameters are catnip to hackers.

So in this case, the obscurity made the system worse over time.

Heck, the most cringeworthy phrase "Base64 Encryption" which I have heard many many times.
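A quick illustration of why "Base64 Encryption" earns the cringe: base64 is a keyless, reversible encoding, so the "obscured" IDOR parameter is one standard-library call away from plaintext (the user_id value here is made up):

```python
import base64

# The "obscured" object reference, as it would appear in a URL:
token = base64.b64encode(b"user_id=1337").decode()
print(token)                    # dXNlcl9pZD0xMzM3

# Any attacker reverses it instantly, no key required:
print(base64.b64decode(token))  # b'user_id=1337'
```

Worse, as noted above, the telltale shape of a base64 string advertises that something interesting is hidden inside.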

TZubiri 5 days ago|||
I love this nuance!

But I think it's covered by your immediate parent comment

> Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.

So the base64 introduces a marginal security gain, but in addition to expending effort on implementation, it increases the cost of other efforts (which is the case for almost all features); with a fixed QA budget (which is, again, always the case), the quality of the QA (pardon the redundancy) will be the parameter that suffers.

So yes, if the security gain is very minimal, then it's likely that the cost of the feature will be so great comparatively, that it will not only affect all other parameters like ease of use, but the negative indirect impact on security will be greater than the marginal positive direct impact on security.

Many such cases.

MattPalmer1086 6 days ago|||
A nice point!
catlifeonmars 6 days ago|||
I agree that anything that raises the cost of an attack may be worth doing. Most "obscurity" related practices do not meaningfully raise the cost of an attack beyond a certain threshold. Physical locks are not a great analogy.
sroussey 6 days ago||
"Security through obscurity" can help in the reverse (for a time) — if they have your keys but haven’t found the locks.

Might give you enough time to change the locks. But not provably — which can matter to a lot of people.

thephyber 6 days ago||
The example in the article is more likely. Changing the name of a DB table from the default helps because any low-quality probe script will break as soon as its assumption of default names fails. It means that low-effort, low-tech, low-talent attacks will fail. This is not a bad thing, because these are likely to be the most common kinds of attacks.

Again, I'm not opposed to simple tricks like this to "buy some time", so long as they don't PREVENT the deeper layers of security from being performed. But if a company has scarce resources and a choice between patching unpatched software or changing DB names from the defaults, the former actually improves security, and the latter should only be performed once the staff has solved all of the higher-risk items.

dspillett 6 days ago||
Obscurity is not security.

But it can add a bit of delay to someone breaking actual security, so maybe they'll hit the next target first as that is a touch easier. Though with the increasing automation of hole detection and exploitation, even that might stop being the case if it hasn't already.

The biggest problem with obscurity measures IMO is psychological: people tend to assume that the measures⁰ are far more effective than they actually are, so they might make less effort to verify that the proper security is done properly.

----

[0] like moving SSHd to a non-standard port¹

[1] a solution that can inconvenience your users more than attackers, and historically (in combination with exploiting a couple of bugs) actually made certain local non-root credential scanning attacks possible if you chose a high port

titularcomment 6 days ago||
Obscurity can be combined with security for much better results. Machines, and by extension AI, thrive on patterns, and making illogical, off-pattern decisions is usually to the benefit of the defender, not the attacker. As you said, the attacker has a wide attack surface to cover while the defender only has to fortify his home NAT. E.g. port knocking may very well throw off the horde of scanners on the wide net simply because it's not standard and the combination is known only to you. Similarly, fail2ban may not work as well on a standard SSH port, because every attacker is going to hammer that and one may get your misconfigured password root login right.

Now, in both instances, the obscurity provided does not necessarily cure your infrastructure's vulnerabilities; a dedicated attacker wouldn't have a single problem with either of these. But for someone who hammers the whole internet in a dim hope of finding another WordPress server from 2017, or the latest flawed online security cam, your disguise is as good as perfect.
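To put a number on "wouldn't have a single problem": a service moved to a non-standard port costs a dedicated attacker exactly one sweep. A toy connect scan in Python (nmap does the same job across all 65,535 ports in seconds); the scanned range is kept tiny for the demo:

```python
import socket

def find_open_ports(host, ports, timeout=0.2):
    """Linear TCP connect scan: try each port and record the ones
    that accept a connection."""
    found = []
    for port in ports:
        s = socket.socket()
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:
                found.append(port)
        finally:
            s.close()
    return found

# Demo: "hide" a listener on an arbitrary high port, then find it anyway.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # the OS picks an arbitrary port
listener.listen(1)
hidden_port = listener.getsockname()[1]
hits = find_open_ports("127.0.0.1", range(hidden_port - 2, hidden_port + 3))
print(hidden_port in hits)        # True: obscurity didn't survive one scan
listener.close()
```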

kbrkbr 6 days ago|||
> Obscurity is not security.

So ASLR [1] is not a security control? I guess you are pretty alone with this opinion.

[1] https://en.wikipedia.org/wiki/Address_space_layout_randomiza...

msm_ 6 days ago|||
No, this is not what GP said, and I don't get how you reached this conclusion. This is like saying that AES is security through obscurity because it relies on the key being secret. See [1] (linked in the OP) to understand the difference better.

I am pretty sure everyone who works in security agrees that obscurity is not security.

[1] https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle

minitech 6 days ago||||
ASLR is (still[1]) not security by obscurity.

[1] https://news.ycombinator.com/item?id=43408079

bigstrat2003 6 days ago||
ASLR is, by definition, security by obscurity. The entire purpose of it is to make it so that it's hard to find the memory which is in use.
imtringued 5 days ago|||
The point of ASLR is that even if you fully understand how it works, this won't make it easier to bypass the protections of ASLR, since the primary way ASLR works is through dynamic adaptation. This turns it into a probabilistic security technique where there is always a chance that an attack goes through.

Security through obscurity in this case would be to roll your own ASLR implementation with a different randomization strategy.
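The probabilistic framing above can be made concrete with a back-of-the-envelope calculation (the 28-bit entropy figure is an assumption here, roughly matching commonly cited numbers for mmap randomization on 64-bit Linux):

```python
def guess_probability(entropy_bits: int, attempts: int) -> float:
    """Chance that `attempts` independent guesses hit a uniformly
    randomized base address carrying `entropy_bits` bits of entropy."""
    p_single = 1.0 / 2 ** entropy_bits
    return 1.0 - (1.0 - p_single) ** attempts

# A single guess against ~28 bits of randomization:
print(guess_probability(28, 1))        # ~3.7e-9
# Even a loud attack making 100,000 attempts stays well under 0.1%:
print(guess_probability(28, 100_000))
```

Knowing exactly how the randomization works doesn't change these numbers, which is what separates ASLR from obscurity in the Kerckhoffs sense.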

sixtiethutopia 6 days ago||||
That's not what security through obscurity means. Security through obscurity has a specific meaning: it doesn't just mean gaining security by hiding anything, it means attempting to gain security by hiding how a system works.

ASLR is a well understood system that exploit writers know to expect and thus ASLR is not security through obscurity.

grayhatter 5 days ago|||
No, because it's still possible to find the data using standard techniques; it doesn't count as obscurity.

I.e. just because you* don't know where something is doesn't mean it's using obscurity to hide.

The reason is important, because words mean things: if you say knowledge of some secret is security through obscurity, that means passwords are security through obscurity.

*: which may or may not be available to the attacker.

In other words, just because a secret exists doesn't put that secret into the 'obscurity' category.

staticassertion 6 days ago||||
No, because ASLR uses a secret.
andix 6 days ago|||
> But it can add a bit of delay

The delay can also be infinite in practice. If a really bad zero day is discovered, it might protect you from becoming a victim. No guarantees, but it can improve your chances.

NewsaHackO 6 days ago|||
The other thing though is that there are situations where you only have a limited amount of tries for a password, and incorrect tries can have dire consequences. If you are being asked for a password by an armed guard, and you hack the system completely and get the password, but didn't know about the last obscured step that you were supposed to type it with your left hand, not your right, you will still face whatever consequences even though that step didn't add any security.
diarrhea 6 days ago||
As a fan and believer of obscurity in support of security, I do not understand why

> that step didn't add any security.

It is a decision that’s part of the entire process. A branch of many in the decision tree. Other branches are deciding which characters to type for the password; ASCII characters can be as little as 1 bit apart. Deciding between left and right is also 1 bit apart.

I think it boils down to what people commonly understand to be publicly knowable information versus understood-to-be-secret information.

One example: I self-host my password manager at pw.example.com/some-secret-path/. That extra path adds as much to security as a randomly picked username in HTTP Basic Auth: arguably none. Yet, it is as impossible for attackers to enumerate and find that path as it is with passwords.

The difference is that the path leaks more easily. It’s not generally understood to be a secret. Yet I argue it helps security. (Example: leaking the domain name through certificate transparency logs AND even, say, user credentials means an attack is still unsuccessful; a strictly necessary piece of the puzzle is missing).
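For illustration, here is a sketch of how much enumeration resistance a random path segment can buy (pw.example.com is the placeholder domain from the comment above; `secrets.token_urlsafe` is one way to generate such a segment):

```python
import secrets

# token_urlsafe(16) encodes 16 random bytes (128 bits of entropy)
# as a URL-safe Base64 string, about 22 characters long.
secret_path = secrets.token_urlsafe(16)
url = f"https://pw.example.com/{secret_path}/"

entropy_bits = 16 * 8
print(url)
print(f"~{entropy_bits} bits: as hard to enumerate as a strong random password")
```

As noted, the catch isn't the entropy — it's that URLs leak through logs, referrers, and browser history in ways passwords usually don't.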

ChrisMarshallNY 6 days ago||
I don't think "obscurity" really buys you much (especially these days, with LLMs).

However "Not Having Stuff to Steal" works like a charm. It's thousands of years old, and has never gone out of style.

I know that it's considered blasphemy, hereabouts, but I've found that not collecting information that I don't absolutely need is pretty effective.

Even if someone knocks down all my gates and fences, they'll find the fox wasn't worth the chase.

It does make stuff like compiling metrics more of a pain, but that's my problem; not my users'.

keeda 6 days ago||
Totally agreed, to me data is just like code: extremely valuable for the functionality it provides, but in most other ways a serious liability. That said:

> I don't think "obscurity" really buys you much (especially these days, with LLMs).

Actually I think it does so even more with LLMs. As has been posited before (particularly on the threads about open source projects going closed source) security comes down to who has paid more attention to the code, the attacker or the defender. And of course, these days attention is measured in tokens.

We know that LLM's are pretty capable of reversing-engineering to figure out an application's logic, but I would bet it takes many more tokens than reading the code or other public information directly. As such, obscurity adds an important layer to security: increasing the costs on the attacker.

Security has always been a numbers game, but now the numbers will overwhelmingly be tokens and scale. If the defenders can cheaply raise the costs on the attackers by adding simple layers of obscurity, it can act as a significant deterrent at scale. I wonder if we'll even see new obfuscation techniques that are cheap to implement but targeted specifically at LLMs...

ChrisMarshallNY 6 days ago||
Very good point.
danparsonson 6 days ago||
That's fine if the goal of breaking in is immediate theft; it might also be more along the lines of leaving something behind.
ChrisMarshallNY 6 days ago||
Not sure if obscurity buys you anything, then. In that case, it's all about standard security practices.
AshamedCaptain 6 days ago||
The problem with this argument is that you can justify an infinite amount of crap with it, the security equivalent of cockroach papers; which inevitably people end up treating as real security.

One example I remember is Pidgin storing its passwords in plain text in $HOME. They could have encrypted them with some hardcoded string, and made a lot of people happy that they would no longer grep their $HOME and find their passwords right there. However this had the side effect that now people were dropping the ball and sharing their config files with others. Or forgetting to setup proper permissions for their $HOME, etc.

In addition, these layers of obscurity are also not overhead free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.

vlovich123 6 days ago||
Why a hardcoded string and not a user specific password the user used for pidgin? Then you’ve got real security and even using a password stored in the user’s keychain means that the passwords are not trivially accessible.

The whole point of defense in depth is that you use non-collinear layers of protection to raise the cost of an attack and reduce the blast radius of a successful attack.

AshamedCaptain 6 days ago||
Pidgin predates keychains, but if I remember correctly you had the option to set up a master password or to simply disable storing passwords, which were the only options that truly increased security. But most users would not do that (they want autologin for a reason), so the example still applies.

(Note also most keychain implementations are not truly improving security in any way, but this is a separate topic)

rw_grim 3 days ago||
For the full reasoning see this page https://developer.pidgin.im/wiki/PlainTextPasswords which is now back online. It was accidentally broken in a recent server migration.

That said, purple3/pidgin3 (still in development) only supports keyrings and doesn't try to do any password management on its own, even though password managers fall into the "Store a password(s) behind a password" category as detailed on the above page.

2OEH8eoCRo0 6 days ago|||
> The problem with this argument is that you can justify an infinite amount of crap with it

Does that make it wrong?

grayhatter 5 days ago|||
If the model you're using tries to claim something that is false is actually true. Yes, your model is wrong.
dspillett 6 days ago||||
Not per se. But it does make it potentially dangerous thinking depending on how it is applied.
HeavyStorm 6 days ago|||
Yes
i_think_so 6 days ago||
> The problem with this argument is that you can justify an infinite amount of crap with it, the security equivalent of cockroach papers; which inevitably people ends up treating as real security.

I almost missed the twist at the end because I had no idea what the hell cockroach papers were. I still don't understand the reference, but at least it sounds mildly interesting. So, well done.

Now, as for this strawman argument of yours about justifying an infinite amount of crap, that's true of all manner of disingenuous arguments. Who cares about that in this case?

> Or forgetting to setup proper permissions for their $HOME, etc.

This is Pidgin's fault how?

Now, if you wanted to argue that Pidgin should have put the passwords into a separate file and chmod400'ed it that would make much more sense.

> In addition, these layers of obscurity are also not overhead free: they may complicate debugging, hey may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.

Not many good things have zero cost, do they... The point of TFA is that a little bit of well thought out obscurity pays huge dividends when applied in the real world. His example about the WP exploit ought to be all you need to read to get on board with that.

Bender 6 days ago||
Security through obscurity is NOT bad.

Security ONLY through obscurity is bad (Kerckhoffs's Principle).

Security through obscurity, as an additional layer, is good!

I've been saying this ever since that phrase was coined. A layer or two of obscurity keeps a lot of noise out of logs, reduces alert fatigue, cuts down on storage costs (especially if one is using Splunk as their SIEM), and makes targeted attacks much easier to detect. I will keep it.

mobeigi 6 days ago||
Couldn't agree more, I have personally benefited from the additional layer and it irks me when people outright claim it has no value.
ithkuil 6 days ago||
The informed claim is not that the obscurity layer has no value. Quite the contrary: it has such great value that it reduces the incentive to have proper security, and thus once the obscurity layer is breached the second line of defense is weaker.

The argument is that it's much easier to secure proper key material rather than design and config information that can often be leaked accidentally because it's actually directly manipulated by humans (employee onboarding, employee churn etc)

kstrauser 6 days ago||
That's an interesting way to describe it. It's kind of like the turn away from requiring regular password updates. On paper, password rotation is good. But when you consider its interaction with human psychology, the policy makes security worse by causing people to make bad decisions.
rcleveng 6 days ago|||
This sounds just like my thoughts on PostgreSQL's row level security. As a additional layer it's good, as the only thing, watch out!
bee_rider 6 days ago|||
It would be nice if there was no overlap between terms for the operational things that help improve security (log reduction and other non-cryptographic methods of reducing admin fatigue), and the mathematical cryptographic characteristics of the system.

If the focus is on the latter, obscurity buys you nothing and adds complexity/distraction, which is bad. The former can be important though.

tokai 6 days ago||
>I've been saying this ever since that phrase was coined

You have been alive since the 1880s?

Bender 5 days ago|||
1492 was the first time I experienced the quickening.
Barbing 6 days ago|||
Smart talking your elders!
catoc 6 days ago||
“Security through obscurity” has the connotation that it is the obscurity that achieves the security - which is bad.

”Security including obscurity“ is fine.

consumer451 6 days ago||
Yeah, I always thought that real security is priority #1. But, using convenient obscurity lowers the obvious attack surface to things like automated scanners, just a bit.
justonceokay 6 days ago||
Yes it’s not that it’s bad, it just means you aren’t done yet
andix 6 days ago||
If the obscurity is only an additional layer on top of a secure system, it is called "defense in depth".

It's a simple probability calculation. If some automated scanning tools can't find your service, a lot of attackers will never know of its existence. So even if it has an unpatched vulnerability, they won't attack it.

If 1000 attackers find the vulnerable system, the probability is high that at least one is attacking it. If it's only one or two who find it, they might just ignore your system, because they found thousands of others to randomly choose from first.
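That probability calculation can be written out directly (the 50% per-attacker figure below is purely illustrative):

```python
def p_attacked(p_per_attacker: float, n_attackers: int) -> float:
    """Probability that at least one of n independent attackers
    who found the service actually attacks it."""
    return 1.0 - (1.0 - p_per_attacker) ** n_attackers

# Visible to 1000 scanners: an attack is near-certain.
print(p_attacked(0.5, 1000))
# Obscured, found by only 2: 0.75 -- still possible, just far less likely.
print(p_attacked(0.5, 2))
```

The point isn't the exact numbers; it's that obscurity shrinks `n_attackers`, and the risk falls off accordingly.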

linsomniac 6 days ago||
I get what this post is saying, but I'm going to push back that "security through obscurity" isn't just something that people parrot without understanding.

Obscurity provides, effectively, no security. There may be other benefits to the obscurity, but considering the obscurity a layer of your security is bad. I hope we all agree that moving telnet to another port provides no security (it's easily sniffable, easily fingerprintable).

If it provides another benefit, use it, but don't think there's any security in it.

For ~30 years I've moved my ssh to a non-standard port. It quiets down the logs nicely, people aren't always knocking on the door. But it's not a component of my security: I still disable password auth, disable root login, and only use ssh keys for access. But considering it security is undeniably bad.
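For reference, the setup described above reads roughly like this in sshd_config (the port number is an illustrative assumption):

```
# /etc/ssh/sshd_config (excerpt)
Port 48222                  # non-standard port: quiets logs, not a security boundary
PasswordAuthentication no   # keys only
PermitRootLogin no
PubkeyAuthentication yes
```

The last three lines are the actual security; the first is the log hygiene.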

Aurornis 6 days ago||
> but I'm going to push back that "security through obscurity" isn't just something that people parrot without understanding.

I disagree on this. It's right up there with "premature optimization is the root of all evil" on the list of phrases that get parroted by a certain type of engineer who is more interested in repeating sound bites than understanding the situation.

You can even see it throughout this comment section: half of the top level comments were clearly written by people who didn't even read the first section of the article and are instead arguing with the headline or what they assumed the article says.

elevation 6 days ago|||
> But it's not a component of my security

You may not see it as "security", but any entity that is actively monitoring their logs benefits when the false positives decrease. If I am dealing with 800 failed login attempts per minute I cannot possibly investigate all of them. But if failed logins are rare in my environment, I may be able to investigate each one.

Obscurity that increases the signal to noise ratio is a force multiplier for active defense.

vlovich123 6 days ago|||
If port numbers were 64bit or 128bit, actually it would provide a meaningful amount of security through obscurity. Port numbers are easy to dunk on because it’s such a trivially small search space.
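A rough sketch of why the search space matters (the scan rate is an assumed round number):

```python
SCAN_RATE = 1_000_000  # ports probed per second; assumed fast scanner

def seconds_to_sweep(port_bits: int, rate: int = SCAN_RATE) -> float:
    """Worst-case time to exhaustively probe a port space of 2**port_bits."""
    return 2 ** port_bits / rate

# 16-bit ports: the whole space falls in well under a second.
print(seconds_to_sweep(16))
# Hypothetical 64-bit ports: hundreds of thousands of years.
print(seconds_to_sweep(64) / (86400 * 365))
```

At 16 bits, obscuring the port buys essentially nothing against a scanner; the dunking is really on the tiny search space, not on the idea of hiding.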
sudb 6 days ago|||
Similarly I've often flip-flopped on the safety of public API endpoints that are "protected" by virtue of no sitemap + UUIDs in the URL path - I think the answer ultimately is that this is fine so long as there's no way to enumerate the IDs in use?
vlovich123 6 days ago||
It’s fine as a hardening measure, not as a security measure. The lack of a site map doesn’t necessarily guarantee it doesn’t leak somehow and then the question is what happens after it leaks
gavmor 6 days ago||||
But at this point, that's like saying my password is merely 'obscure.'
i_think_so 6 days ago|||
Good luck scanning 64k ports on a server that has a few randomly assigned fail2ban listeners.
vlovich123 6 days ago||
If you think it’s not trivial to get 64k random IP addresses to make requests for you for pennies, you are completely delusional if you think fail2ban protects a random port number in any way.
i_think_so 5 days ago||
It's not "trivial", and it costs dollars, not pennies for that many attack endpoints[1]. My firewalls scale much more cost effectively, especially when I coalesce individuals into netblocks.

I don't think fail2ban protects obfuscated ports, I know it. If an IP is trying to connect to a system on port 22, it is ipso facto unwanted and doing unauthorized activities. Plonk! Onto the ban list it goes. You'd be surprised how effective that is.

Once the roar of automated skiddies is silenced, the signal of real attacks cuts through the noise quite clearly.

Remember, to avoid being eaten by most bears, you don't have to outrun them -- you only have to outrun the poor sap next to you. ;-) There is real world value in raising the bar and becoming even a moderately harder target than the rest of the crowd.

Maybe I should spin up a vanilla VM and just let it get hammered for a month and post the logs here....

[1] It's been a while since I looked at prices for tens of thousands of distinct proxy connections. Anyone want to pretend to be a hax0r and get a current price quote?

spacemule 6 days ago|||
I would argue moving SSH to a non-standard port is security, but it's a different kind. By reducing the noise in logs, it reduces the workload on the human or agent reviewing the logs. So, you can detect an attack in progress or respond to an attack before it gets out of hand. With SSH on a standard port, the harmful malicious logs can blend in with the annoying malicious logs much better.
i_think_so 6 days ago|||
> By reducing the noise in logs, it reduces the workload on the human or agent reviewing the logs. So, you can detect an attack in progress or respond to an attack before it gets out of hand. With SSH on a standard port, the harmful malicious logs can blend in with the annoying malicious logs much better.

Advice like this should be at the top of the chapter in the textbook that teaches young sysmonkeys how to admin a box securely. Well stated.

logifail 6 days ago|||
> By reducing the noise in logs, it reduces the workload on the human or agent reviewing the logs.

Q: Why would you "review the logs" by (human/agent) hand for a service exposed to the Internet? What are you actually looking for?

[I say this as someone who has tens of thousands of failed auth attempts against services I expose to the Internet. Per day.]

i_think_so 6 days ago||
Sounds like you are the poster child for moving ssh to a different port. :-)

If I were you I would do that immediately. Then, once your logs become actually useful again, look at them.

"Hmmm. There sure seem to be a lot of failed login attempts for bobsmith@server. Maybe I should call him up and see if there's something going on."

logifail 6 days ago|||
> It quiets down the logs nicely, people aren't always knocking on the door.

Q: If you've still done the right things - "disable[d] password auth, disable[d] root login, and only use ssh keys for access" - why do you care about how 'quiet' your logs are?

rcxdude 6 days ago||
Whatever you've done, you should keep an eye on your logs for anything suspicious. A quieter log is easier to monitor.
OhMeadhbh 6 days ago|
Saying anything about security without mentioning the nature of the threat is bad. It's also industry common practice.

Obfuscating JS is probably a decent defence against your 9 year old brother. It is not against a motivated, well funded state sponsored attacker.

Part of what bugs me about English is the practical ambiguity of the colloquial understanding of what "<foo> is <bar>" implies. Does it mean that all foos are also bars, or does it mean there exists a foo where that foo is also bar? Does it mean foo is always bar or foo is often bar? Dutch is my first language and I grew up in South Viet Nam, Nigeria and Texas. I did not get the standard programming.

srdjanr 6 days ago|
The author gave a few examples where compiled/minified code is public (Javascript and games) or vulnerabilities are exploited automatically (the Wordpress example). That explains the nature of the threat well enough for me.

There's a whole spectrum between 9 year old and a motivated state actor, and obfuscation is effective for a big part of the spectrum.

OhMeadhbh 5 days ago||
I was talking about the click-baity title more than the content.
More comments...