Posted by WhyNotHugo 3 days ago
Yeah, stop with the cute domain names. I never got the memo on Youtu.be; I just had to "learn" that it was okay. Of course people started to let their guard down, because dumbasses started getting cute.
We really did dodge a bullet, because we've been installing stuff from NPM with reckless abandon for a while.
Can anyone give me a reason why this wouldn't happen in other ecosystems like Python? Because I really don't feel comfortable being scared to download the most basic of packages. Everything is trust.
I just try to avoid clicking links in emails generally...
Definitely good practice.
The only real solution is to have domain-bound identities like passkeys.
Always manually open the website.
This week Oracle Cloud started enforcing 2FA. And you can be sure I didn't click their e-mail link to do that.
My theory is that if companies start using that workflow in the future, it'll become even _easier_ for users to click a random link, because they'd go "wow! That's so convenient now!"
I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.
Developer stuff is arguably the least scrutinized thing that routinely runs as mega root.
I wish I could say that I audit every elisp, neovim, vscode plugin and every nifty modern replacement for some creaky GNU userland tool. But bat, zoxide, fzf, atuin, starship, viddy, and about 100 more? Nah, I get them from nixpkgs in the best case, and I've piped things to sh.
Write a better VSCode plugin for some terminal panel LLM gizmo, wait a year or two?
gg
The post's author's resume section reinforces this feeling:
I am a skilled force multiplier, acclaimed speaker, artist, and prolific blogger. My writing is widely viewed across 15 time zones and is one of the most viewed software blogs in the world.
I specialize in helping people realize their latent abilities and help to unblock them when they get stuck. This creates unique value streams and lets me bring others up to my level to help create more senior engineers. I am looking for roles that allow me to build upon existing company cultures and transmute them into new and innovative ways of talking about a product I believe in. I am prioritizing remote work at companies that align with my values of transparency, honesty, equity, and equality.
If you want someone that is dedicated to their craft, a fearless innovator and a genuine force multiplier, please look no further. I'm more than willing to hear you out.
As far as I can tell it isn't a joke per se, but it is tongue-in-cheek and the ego is often very real.
DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware - https://news.ycombinator.com/item?id=45179939 - Sept 2025 (209 comments)
NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)
Do you think there would be the time to properly review applications to get on the whitelist?
They could add anti-phish features which force confirmation before clicking a link to an uncommon domain. Startups could pay a nominal fee to get their domain reviewed and whitelisted.
If it's new, you should be more cautious. Except even those companies that should know better need you to link through 7 levels of redirect tracking, and they're always using a new one.
URLs are also getting too damn long
Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.
Absolutely not.
https://www.malwarebytes.com/blog/news/2025/08/clickjack-att...
https://thehackernews.com/2025/08/dom-based-extension-clickj...
https://www.intercede.com/the-dangers-of-password-autofill-a...
There's strong evidence that the latter is a more common concern.
You don't have to believe me, read the links.
I don’t think we did. I think it is entirely plausible that more sophisticated attacks ARE getting into the npm ecosystem.
Tons of people think these kinds of micro-dependencies are harmful, and many of them have been saying so for years.
I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.
It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of bugs in privileged parts of the system libraries that allowed for sandbox escapes.
There’s no reason a color parser or a date library should require network or file system access.
A different idea: Special stack frames such that while that frame is on the stack, certain syscalls are prohibited. These "sandbox frames" could be enabled by default for most library calls, or even used by developers to handle untrusted user input.
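Nothing like this exists in mainstream runtimes today, but the idea can be approximated in userland. A minimal sketch in JavaScript, with all names hypothetical: a depth counter stands in for "a sandbox frame is on the stack," and guarded wrappers refuse privileged operations while it is nonzero.

```javascript
// Hypothetical userland approximation of "sandbox frames": while the
// guard is active on the (logical) call stack, guarded syscall
// wrappers refuse to run.
let sandboxDepth = 0;

function inSandbox(fn) {
  sandboxDepth++;
  try {
    return fn();
  } finally {
    sandboxDepth--;
  }
}

function guardedReadFile(path) {
  if (sandboxDepth > 0) {
    throw new Error("filesystem access prohibited inside a sandbox frame");
  }
  return `contents of ${path}`; // stand-in for a real fs call
}

// Trusted code may read files; code handling untrusted input runs
// inside a sandbox frame and cannot.
const ok = guardedReadFile("/etc/motd");
let blocked = false;
try {
  inSandbox(() => guardedReadFile("/etc/passwd"));
} catch (e) {
  blocked = true;
}
```

A real implementation would need runtime support so library code can't simply reset the counter, but the shape of the API is roughly this.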
If a package wants to access the filesystem, shell, OS API's, sockets, etc., those should be permissions you have to explicitly grant in your code.
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.
But if a language's built-in functions are built around the idea from the ground up, it seems entirely feasible. Particularly if you make the limits entirely around permissions around data communication -- with disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- and not about trying to merely constrain resource usage around things like CPU, memory, etc.
If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.
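A minimal sketch of what explicit granting could look like (all names here are hypothetical, not a real package-manager API): instead of a dependency importing the filesystem itself, the application hands it only the capabilities it is entitled to.

```javascript
// Hypothetical capability-passing: the application decides which
// powers a dependency receives; the dependency cannot reach fs,
// sockets, or shell on its own.
function makeLogger(caps) {
  return {
    log(msg) {
      // The logger was granted appendFile and nothing else --
      // no network, no shell, no way to exfiltrate data.
      caps.appendFile("app.log", msg + "\n");
    },
  };
}

// Application wiring: grant a single append-to-one-file capability.
// (A stand-in for a real fs call keeps the sketch self-contained.)
const written = [];
const logger = makeLogger({
  appendFile: (path, data) => written.push([path, data]),
});

logger.log("hello");
```

The point is that the dependency's surface area is exactly what it was handed, nothing more.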
we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.
and like:
>No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.
sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.
perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.
hence: doesn't sound too bad
"truly needs": currently, yes. but that seems like a fairly easy thing to address with library packaging systems and a language that supports that. static analysis and language design to support it can cover a lot (e.g. go is limited enough that you can handle some just from scanning imports), and "you can ask for something you don't use, it just means people are less likely to use your library" for the exceptions is hardly a problem compared to our current "you already have every permission and nobody knows it".
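The "categorize on imports risky package" idea above can be sketched with nothing more than a scan over a Go source file's import declarations. This is an illustrative toy (the risky set and the parsing are deliberately simplistic), not a real tool:

```javascript
// Hypothetical "flag risky imports" scan over Go source text.
// The RISKY set is illustrative, not exhaustive.
const RISKY = new Set(["net", "net/http", "os", "os/exec", "syscall", "unsafe"]);

function riskyImports(goSource) {
  // single-line form: import "pkg"
  const single = [...goSource.matchAll(/import\s+"([^"]+)"/g)].map(m => m[1]);
  // grouped form: import ( "a" \n "b" )
  const block = goSource.match(/import\s*\(([^)]*)\)/);
  const grouped = block
    ? [...block[1].matchAll(/"([^"]+)"/g)].map(m => m[1])
    : [];
  return [...single, ...grouped].filter(pkg => RISKY.has(pkg));
}

const sample = `package main

import (
    "fmt"
    "os/exec"
)
`;
const flags = riskyImports(sample);
```

A serious version would use Go's own `go/parser` and walk the whole module graph, but even this level of triage tells you which dependencies deserve a closer look.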
I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.
there are some rather obvious challenges, but a huge amount of the ones I've run across end up looking mostly like "it's hard to add to an existing language" which is extremely understandable, but hardly a blocker for new ones.
function main(io) {
const result = somethingThatRequiresHttp(io.fetch);
// ...
}
and as long as you don't put I/O in global scope (i.e. window.fetch) but do an injection into the main entrypoint, that entrypoint gets to control what everyone else can do. I could for example do

function main(io) {
  const result = something(readonlyFetch(onlyOurAPI(io.fetch)));
}
function onlyOurAPI(fetch) {
  return (...args) => {
    const test = /^https:\/\/api\.mydomain\.example\//.exec(args[0]);
    if (test == null) {
      throw new Error("must only communicate with our API");
    }
    return fetch(...args);
  };
}
function readonlyFetch(fetch) { /* similar but allowlist only GET/HEAD methods */ }
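Filling in the readonlyFetch stub above, under the same assumptions (this is an illustrative sketch, not a hardened implementation):

```javascript
// One way the readonlyFetch wrapper could look: refuse any method
// other than GET/HEAD before delegating to the fetch it was given.
function readonlyFetch(fetch) {
  return (url, options = {}) => {
    const method = (options.method ?? "GET").toUpperCase();
    if (method !== "GET" && method !== "HEAD") {
      throw new Error(`read-only fetch: method ${method} not allowed`);
    }
    return fetch(url, options);
  };
}

// Usage with a stand-in fetch so the sketch is self-contained.
const calls = [];
const fakeFetch = (url, options) => { calls.push([url, options]); return "ok"; };
const ro = readonlyFetch(fakeFetch);

const res = ro("https://api.mydomain.example/items");            // allowed
let rejected = false;
try {
  ro("https://api.mydomain.example/items", { method: "POST" });  // blocked
} catch (e) {
  rejected = true;
}
```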
I vaguely remember him being really passionate about "JavaScript lets you do this, we should all program in JavaScript" at the time... these days he's much more likely to say "JavaScript doesn't have any way to force you to do this and close off all the exploits from the now-leaked global scope, we should never program in JavaScript."

Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env -S deno run --allow-net=api.mydomain.example` at the top of your script to accomplish something similar.
In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.
Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.
And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc.
should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.
but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.
The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.
I do agree that C is an especially-bad case for additional reasons though, yeah.
exceptions totally exist, I've seen them too. I just don't think they're enough to move the median away from "total chaotic garbage" regardless of the system
Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.
Before AI code generation, we would have called that copy-and-paste, and a code smell compared to proper reuse of a library. It's not any better with AI. That's still code you'd have to maintain, and debug. And duplicated effort from all the other code doing the same thing, and not de-duplicated across the numerous libraries in a dependency tree or on a system, and not benefiting from multiple people collaborating on a common API, and not benefiting from skill transfer across projects...
Smells are changing, friend. Now, when I see a program with 20000 library dependencies that I have to feed into a SAST and SCA system and continually point-version-bump and rebuild, it smells a hell of a lot worse to me than something self-contained.
At this point, I feel like I can protect the latter from being exploited better than the former.
In the C world, anything that is not a direct dependency is often a very stable library and can be brought in as a peer dep. Breaking changes happen less, and you can resolve the tree manually.
In NPM, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It’s a severe lack of discipline.
Micro-dependencies are not the only thing that went wrong here, but hopefully this is a wakeup call to do some cleaning.
Upgrading after falling off the train is a serious drawback of moving fast.
Absolutely. A lot of developers work on a large Enterprise app for years and then scoot off to a different project or company.
What's not fun is being the poor Ops staff that have to deal with supporting the library dependencies, JVM upgrades, etc for decades after.
Nixing javascript in the frontend is a harder sell, sadly
Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.
Same with Java, if you avoid springboot and similar everything frameworks, which admittedly is a bit of an uphill battle given the state of java developers.
You can of course also keep dependencies small in javascript, but it's a very uphill fight where you'll have just a few options, and most people you hire are used to including a library (that includes 10 libraries) so they don't have to do something like `if (x % 2 == 1)`
Just started with golang... the language is a bit annoying but the dependency culture seems OK
Hey that was also on NPM iirc!
If you think npm.help is something it isn't, that's not something DKIM et al can help with.
I can think of 3 paths to improving the situation (assuming that "everyone deploys cryptographic email infrastructure instantly" is not gonna happen).
1. The email client doesn't indicate DKIM at all. This is strictly worse than today, because then the attack could have claimed to be from npmjs.com.
2. You only get a checkmark if you have DKIM et al plus you're a "verified domain". This means only big corporations get the checkmark -- I hate this option. It's EV SSL but even worse. And again, unless npmjs.com was a "big corporation" the attacker could have just faked the sender and the user would not notice anything different, since in that world the authentic npmjs.com emails wouldn't have a checkmark either.
3. The checkmark icon is changed into something else, nothing else happens. But what? "DKIM" isn't the full picture (and would be horribly confusing too). Putting a sunflower there seems a little weird. Do you really apply this much significance to the specific icon?
The path that HTTPS took just hasn't been repeatable in the email space; the upgrade cycles are much slower, the basic architecture is client->server->server not client->server, and so on.
A few years ago? I have a lock icon in my address bar right now.