Posted by WhyNotHugo 3 days ago

We all dodged a bullet (xeiaso.net)
Related: NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657
822 points | 483 comments
ivape 3 days ago|
Dat domain name.

Yeah, stop with the cute domain names. I never got the memo on Youtu.be; I just had to “learn” that it was okay. Of course people started to let their guard down, because dumbasses started to get cute.

We all did dodge a bullet, because we’ve been installing stuff from NPM with reckless abandon for a while.

Can anyone give me a reason why this wouldn’t happen in other ecosystems like Python? I really don’t feel comfortable if I’m scared to download the most basic of packages. Everything is trust.

1-more 3 days ago||
Of all people, my mortgage servicer is the worst about this. Your login is valid on like 3 different top-level domains and you get bounced between them when you sign in, eventually going from servicer.com to myservicer.com to servicer.otherthing.com! It's as though they were training you not to care about domain names.
wzamqo 3 days ago||
Paying US taxes online is just as bad. The official way to pay tax balances with a debit card online is to use officialpayments[.]com. This is what the IRS advises you to use. Our industry is a clown factory.
LorenDB 3 days ago||
Wells Fargo apparently emails from epay@onlinemyaccounts[.]com.
jvdvegt 3 days ago|||
What about aka.ms, which is a valid domain for Microsoft? Why didn't they use microsoft.com or windows.com? I always wonder if this "aka" is short for 'also known as'.
dymk 3 days ago||
They use that domain name because it's used for short links.
duxup 3 days ago||
Is it possible to do the thing proposed in the email without clicking the link?

I just try to avoid clicking links in emails generally...

loloquwowndueo 3 days ago||
Should be - open another browser window, manually log into npm (or whatever), and update your 2FA there.

Definitely good practice.

Dilettante_ 3 days ago|||
This is the Way. To minimize attack surface, the senders of authentic messages should straight-up avoid putting links to "do the thing" in the message. Just tell the user to update their credentials via the website.
viraptor 3 days ago|||
That's what the Australian Tax Office does. Just a plaintext message that's effectively "you've got a new message. Go to the website to read it."
duxup 3 days ago|||
All the medical places I use do that, with a note that you can also use their app. Good system.
foxglacier 3 days ago|||
Unfortunately, my doctor's office texts me their bank account number saying "please pay $75 to this account". I told them that's putting people at risk of phishing, but they didn't care.
darthwalsh 2 days ago|||
Personally, I'd rather they put the HIPAA message content straight into the email and let Gmail sort out the priority. About 90% of "you have received a message" notifications are not actionable: "you made an appointment" or "take this survey nobody cares about."
amysox 3 days ago|||
My doctor's office does the same thing. So do some financial services companies.
Roguelazer 3 days ago|||
For most users, that'll just result in them going to Google, searching for the name of your business, and then clicking the first link blindly. At that point you're trusting that there's no malicious actors squatting on your business name's keyword -- and if you're at all an interesting target, there's definitely malvertising targeting you.

The only real solution is to have domain-bound identities like passkeys.
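A minimal sketch of why that works, with hypothetical values -- the browser itself enforces that the rpId matches the current site's registrable domain, so a lookalike domain can't even ask for the credential:

    // Sketch: WebAuthn binds the credential to a domain (rpId).
    // On npmjs.help this call throws a SecurityError; the phishing
    // site never obtains a signature it could replay.
    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // normally server-issued
        rpId: "npmjs.com", // must match the registrable domain of the page
        userVerification: "preferred",
      },
    });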

hu3 3 days ago||||
That's what I always do. Never click these kinds of links in e-mail.

Always manually open the website.

This week Oracle Cloud started enforcing 2FA, and I certainly didn't click their e-mail link to do that.

ares623 3 days ago|||
But won’t someone think of the friction? /s

My theory is that if companies start using that workflow in the future, it'll become even _easier_ for users to click a random link, because they'd go "wow! That's so convenient now!"

0cf8612b2e1e 3 days ago|||
The Microsoft ecosystem certainly makes this challenging. At work, I get links to Sharepoint hosted things with infinitely long hexadecimal addresses. Otherwise finding resources on Sharepoint is impossible.
JohnFen 3 days ago||
> I just try to avoid clicking links in emails generally...

I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.

benreesman 3 days ago||
Now imagine someone combining Jia Tan-level patience with the swiss-cheese security of all our editor plugins, nifty shell userland stuff, and all that.

Developer stuff is arguably the least scrutinized thing that routinely runs as mega root.

I wish I could say that I audit every elisp, neovim, vscode plugin and every nifty modern replacement for some creaky GNU userland tool. But bat, zoxide, fzf, atuin, starship, viddy, and about 100 more? Nah, I get them from nixpkgs in the best case, and I've piped things to sh.

Write a better VSCode plugin for some terminal panel LLM gizmo, wait a year or two?

gg

jowea 3 days ago|
Someday, someone, hopefully, will fix xkcd 1200.
lysace 3 days ago||
This reads like a joke that's missing the punchline.

The post author's résumé section reinforces this feeling:

I am a skilled force multiplier, acclaimed speaker, artist, and prolific blogger. My writing is widely viewed across 15 time zones and is one of the most viewed software blogs in the world.

I specialize in helping people realize their latent abilities and help to unblock them when they get stuck. This creates unique value streams and lets me bring others up to my level to help create more senior engineers. I am looking for roles that allow me to build upon existing company cultures and transmute them into new and innovative ways of talking about a product I believe in. I am prioritizing remote work at companies that align with my values of transparency, honesty, equity, and equality.

If you want someone that is dedicated to their craft, a fearless innovator and a genuine force multiplier, please look no further. I'm more than willing to hear you out.

gertop 3 days ago|
That kind of fake self-aggrandizement-delusion-driven storytelling is part of the autistic trans subculture. That particular subculture tends to speak of themselves as goddesses, wizards, or other higher beings. Their websites are usually dark-themed with pastel or neon foreground colors, and you'll find anime girls inserted every now and then.

As far as I can tell it isn't a joke per se, but it is tongue-in-cheek and the ego is often very real.

dang 3 days ago||
Related. Others?

DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware - https://news.ycombinator.com/item?id=45179939 - Sept 2025 (209 comments)

NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)

jason-johnson 2 days ago|
https://en.wikipedia.org/wiki/Peacenotwar
sega_sai 3 days ago||
It seems to me that an email client that simply disables all the links in an email is probably a good idea. Or maybe there should be explicit whitelisting of domains that are allowed to be hyperlinks.
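A minimal sketch of the whitelist idea, assuming a DOM-parsed HTML email body (the whitelist contents and helper name are hypothetical):

    // Sketch: de-fang any link whose host isn't explicitly whitelisted.
    const ALLOWED_HOSTS = new Set(["www.npmjs.com", "github.com"]); // user-managed

    function defangLinks(emailBody) {
      for (const a of emailBody.querySelectorAll("a[href]")) {
        let host = null;
        try {
          host = new URL(a.href).hostname;
        } catch {
          // unparseable URL: treat as unsafe
        }
        if (!host || !ALLOWED_HOSTS.has(host)) {
          // Show the raw URL as plain text so the user must copy/paste deliberately.
          a.replaceWith(`${a.textContent} [link disabled: ${a.href}]`);
        }
      }
    }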
SahAssar 3 days ago||
And who would control that whitelist? How would it be any different than the domain system or PKI CA system we have now?

Do you think there would be the time to properly review applications to get on the whitelist?

0xDEAFBEAD 3 days ago|||
Presumably Gmail already has anti-spam features which trigger based on domain name etc.

They could add anti-phish features which force confirmation before clicking a link to an uncommon domain. Startups could pay a nominal fee to get their domain reviewed and whitelisted.

toast0 3 days ago||||
In a world where those sending email were consistent, the user could control the whitelist. 'This link is from a domain you've clicked through X times, do you want to click through? Yes / Yes and don't ask again'

If it's new, you should be more cautious. Except even those companies that should know better need you to link through 7 levels of redirect tracking, and they're always using a new one.

sega_sai 3 days ago|||
A user, for example. By default nothing would be in the whitelist; you'd add things to it manually. Since that doesn't need doing very often, it would probably be a useful extra step to stop phishing.
2OEH8eoCRo0 3 days ago||
I've always thought it's insane that anyone on the planet with a connection can drop a clickable link in front of you. Clickable links in email should be considered harmful. Force the user to copy/paste.

URLs are also getting too damn long.

falcor84 3 days ago||
How would copy-pasting help in this scenario?
Mystery-Machine 3 days ago||
Always use a password manager to automatically fill in your credentials. If the password manager doesn't find your credentials, check the domain. On top of that, you can always go directly to the website to make any needed changes there, without following the link.
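The anti-phishing property here is just strict origin matching. A simplified sketch (hypothetical vault shape; real managers usually match on the registrable domain rather than the full hostname):

    // Sketch: only offer credentials whose stored host matches the page.
    const vault = [{ site: "https://www.npmjs.com", user: "alice", pass: "..." }];

    function entriesFor(pageUrl) {
      const page = new URL(pageUrl);
      return vault.filter((e) => new URL(e.site).hostname === page.hostname);
    }

    entriesFor("https://www.npmjs.com/login"); // 1 match
    entriesFor("https://npmjs.help/login");    // 0 matches -- the lookalike gets nothing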
dewey 3 days ago||
Password managers are still too unreliable to auto-fill everywhere all the time. Manually copy-pasting something from the password manager happens regularly, so it doesn't feel unusual when auto-fill fails for some reason.
zargon 3 days ago|||
I put the fault on companies for making their login processes so convoluted. If you take the time to do it, you can usually configure the password manager to work (we shouldn’t have to make the effort). But even if you do, then the company will at some point change something about their login processes and break it.
nilslindemann 3 days ago|||
Indeed. I have to fill in my TOTP manually on Lichess and on tutanota.com. On proton.me sometimes. On other sites it always works, e.g. GitHub.
Analemma_ 3 days ago|||
I don't think this really helps. I use Bitwarden and it constantly fails to autofill legitimate websites and makes me go to the app to copy-paste, because companies do all kinds of crap with subdomains, marketing domains, etc. Any safeguard relying on human attention is ultimately susceptible to this; the only true solutions are things like passkeys where human fuckups are impossible by design and they can't give credentials to the wrong place even if they want to.

Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.

teekert 3 days ago||
It's a PITA, but Bitwarden has quite some flexibility in filtering what gets autofilled where. I agree the defaults are pretty shit and indeed lead to constant copy-pasting. On the other hand, it will offer all my passwords all the time for all my self-hosted stuff on my one server.
teekert 3 days ago|||
Better yet, use the password manager as the store of the valid domain, and click there to go to the resource.
fragmede 3 days ago|||
what do you mean bankofamericaabuse.com isn't a real website!? It's in the email and everything! The nice guy on the phone said it was legit...
esseph 3 days ago||
> Always use password manager to automatically fill in your credentials

Absolutely not.

https://www.malwarebytes.com/blog/news/2025/08/clickjack-att...

https://thehackernews.com/2025/08/dom-based-extension-clickj...

https://www.intercede.com/the-dangers-of-password-autofill-a...

darthwalsh 2 days ago||
What's more likely, the real npm site has a subdomain with XSS (IIRC the issue you linked) or you are manually filling your password into a phishing site?

There's strong evidence that the latter is a more common concern.

esseph 2 days ago||
What I'm saying is that autofill is a current method of credential extraction that should be avoided.

You don't have to believe me, read the links.

stevoski 3 days ago||
“We all dodged a massive bullet”

I don’t think we did. I think it is entirely plausible that more sophisticated attacks ARE getting into the npm ecosystem.

dsff3f3f3f 3 days ago||
> These kinds of dependencies are everywhere and nobody would even think that they could be harmful.

Tons of people think these kinds of micro-dependencies are harmful, and many of them have been saying so for years.

Groxx 3 days ago||
I'm rather convinced that the next major language-feature wave will be permissions for libraries. It's painfully clear that we're well past the point where it's needed.

I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.

gmueckl 3 days ago|||
Java went down that road with applet sandboxing. They thought it would go well because the JVM can be a perfect gatekeeper on the code that gets to run and can see and stop all calls to forbidden methods.

It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of bugs in privileged parts of the system libraries that allowed for sandbox escapes.

cjalmeida 3 days ago|||
It was too complex. Just making system calls require whitelisting libraries goes a long way toward preventing a whole class of exploits.

There's no reason a color parser or a date library should require network or file-system access.

0xDEAFBEAD 3 days ago||
The simplest approach to whitelisting libraries won't work, since the malicious color parser can just call the whitelisted library.

A different idea: Special stack frames such that while that frame is on the stack, certain syscalls are prohibited. These "sandbox frames" could be enabled by default for most library calls, or even used by developers to handle untrusted user input.
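A userland JavaScript approximation of that idea, as a sketch only: it covers synchronous calls, and real enforcement would need runtime support, since code that saved a reference to the original fetch can bypass it:

    // Sketch: deny network while a "sandbox frame" is in dynamic extent.
    let sandboxDepth = 0;
    const realFetch = globalThis.fetch;
    globalThis.fetch = (...args) => {
      if (sandboxDepth > 0) throw new Error("network prohibited in sandbox frame");
      return realFetch(...args);
    };

    function sandboxed(fn) {
      return (...args) => {
        sandboxDepth++;
        try {
          return fn(...args); // everything this calls, however deep, is denied
        } finally {
          sandboxDepth--;
        }
      };
    }

    // e.g. wrap the color parser before it ever sees untrusted input:
    const parseColor = sandboxed((s) => s.trim().toLowerCase());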

mike_hearn 3 days ago|||
Yes, but that was with a very ambitious sandbox that included full GUI access. Sandboxing a pure data transformation utility like something that strips ANSI escape codes would have been much easier for it.
crazygringo 3 days ago||||
Totally agreed, and I'm surprised this idea hasn't become more mainstream yet.

If a package wants to access the filesystem, shell, OS APIs, sockets, etc., those should be permissions you have to explicitly grant in your code.
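As a sketch of what explicit grants could look like: an invented "permissions" field in a package manifest (to be clear, no such key exists in npm today):

    {
      "dependencies": {
        "chalk": "^5.3.0",
        "some-http-client": "^2.1.0"
      },
      "permissions": {
        "chalk": [],                 // pure string formatting: no grants
        "some-http-client": ["net"]  // network, but no fs/shell/exec
      }
    }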

mike_hearn 3 days ago|||
It's harder than it looks. I wrote an essay exploring why here:

https://blog.plan99.net/why-not-capability-languages-a8e6cbd...

crazygringo 3 days ago|||
Thanks, it's great to see all the issues you raise.

On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.

But if a language's built-in functions are designed around the idea from the ground up, it seems entirely feasible. Particularly if you scope the limits entirely to permissions for data communication -- disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- rather than trying to merely constrain resource usage like CPU and memory.

If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.

Groxx 3 days ago||||
tbh none of that sounds particularly bad, nor do I think capabilities are necessary (but obviously useful).

we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.

and like:

>No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.

sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.

perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.

mike_hearn 3 days ago||
Yes, dependency injection can help although injectors don't have any understanding of whether an object really needs a dependency. But that's not a god object in the sense it's normally meant. For one, it's injecting different objects :)
Groxx 3 days ago||
to be clear, I mean that the DI container/whatever is "the god object" - it holds essentially every dependency and every piece of your own code, knows how to construct every single one, and knows what everything needs. it's the biggest and most complicatedly-intertwined thing in pretty much any application, and it works so well that people forget it exists or how it works, and carrying permission-objects through that on a library level would be literally trivial because all of them already do everything needed.

hence: doesn't sound too bad

"truly needs": currently, yes. but that seems like a fairly easy thing to address with library packaging systems and a language that supports that. static analysis and language design to support it can cover a lot (e.g. go is limited enough that you can handle some just from scanning imports), and "you can ask for something you don't use, it just means people are less likely to use your library" for the exceptions is hardly a problem compared to our current "you already have every permission and nobody knows it".

mike_hearn 3 days ago||
Yes, I do agree that integration with DI is one way to make progress on this problem that hasn't been tried before.
ryukafalz 3 days ago|||
Thanks, this was a good overview of some of the challenges involved with designing a capability language.

I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.

mike_hearn 3 days ago||
In Java DI you can add dependencies without changing your public API using field injection. But really there needs to be a language with integrated DI. A lot of the pain of using DI comes from the way it's been strapped on the side.
int_19h 3 days ago||||
This exact idea has already been mainstream. Both Java and .NET used to have mechanisms like that, e.g.: https://en.wikipedia.org/wiki/Code_Access_Security
Groxx 2 days ago||
"it exists as a niche feature that few use and fewer understand" isn't exactly "mainstream" IMO (it's significantly less common from what I've seen than manual classloader shenanigans, for example). But yes, it's nice that it exists, and I wish it were used more - it'd catch low-effort stuff like this one was.
darthwalsh 2 days ago||
No, C# had it: past tense. CAS was neutered in .NET Framework 4.0 then removed in dotnet core.
Groxx 1 day ago||
alas. don't suppose you know of any good articles on why it was removed? I'd be curious about the reasoning / challenges.

there are some rather obvious challenges, but a huge amount of the ones I've run across end up looking mostly like "it's hard to add to an existing language" which is extremely understandable, but hardly a blocker for new ones.

int_19h 1 day ago||
I don't know if there were any articles specifically detailing it, but from blog posts at the time the clear message was that they didn't consider the intended security guarantees to be possible to uphold in practice, so much so that "CAS and appdomains shouldn't be considered a security boundary".
crdrost 3 days ago|||
This was one of Doug Crockford's big bugaboos since The Good Parts and JSLint and Yahoo days—the idea that lexical scope aka closures give you an unprecedented ability to actually control I/O because you can say

    function main(io) {
        const result = somethingThatRequiresHttp(io.fetch);
        // ...
    }
and as long as you don't put I/O in global scope (i.e. window.fetch) but do an injection into the main entrypoint, that entrypoint gets to control what everyone else can do. I could for example do

    function main(io) {
      const result = something(readonlyFetch(onlyOurAPI(io.fetch)));
    }
    function onlyOurAPI(fetch) {
      return (...args) => {
        const test = /^https:\/\/api\.mydomain\.example\//.exec(args[0]);
        if (test == null) {
          throw new TypeError("must only communicate with our API");
        }
        return fetch(...args);
      };
    }
    function readonlyFetch(fetch) { /* similar, but allowlist only GET/HEAD methods */ }
I vaguely remember him being really passionate about "JavaScript lets you do this, we should all program in JavaScript" at the time... these days he's much more likely to say "JavaScript doesn't have any way to force you to do this and close off all the exploits from the now-leaked global scope, we should never program in JavaScript."

Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env -S deno run --allow-net=api.mydomain.example` at the start of your script to accomplish something similar.

In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.

bunderbunder 3 days ago||||
Alternatively, I've long wondered if automatic package management may have been a mistake. Its primary purpose seems to be to enable this kind of proliferation of micro-dependencies by sweeping the management of sprawling dependency graphs under the carpet. But the upshot is that most changes to your dependency graph, which is also your primary vector for supply chain attacks, become something you're no longer really looking at.

Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.

JoshTriplett 3 days ago|||
Manual dependency management without a package manager does not lead people to do more auditing.

And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc.

Groxx 3 days ago|||
yea, just look at the state of many C projects. it's rather clearly worse in practice in aggregate.

should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.

but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.

bunderbunder 3 days ago||
C has a lot of characteristics beyond simple lack of a standard automatic package manager that complicate the situation.

The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.

Groxx 3 days ago||
when I was doing C# pre-nuget we had an utterly absurd amount of libraries that nobody had checked and nobody ever upgraded. so... yeah I think it applies there too, at least from my experience.

I do agree that C is an especially-bad case for additional reasons though, yeah.

bunderbunder 3 days ago||
Gotcha. When I was, we actively curated our dependencies and maintaining them was a regularly scheduled task that one team member in particular was in charge of making sure got done.
Groxx 3 days ago||
most teams I've been around have zero or one person who handles that (because they're passionate) (this is usually me) - tbh I think that's probably the majority case.

exceptions totally exist, I've seen them too. I just don't think they're enough to move the median away from "total chaotic garbage" regardless of the system

bunderbunder 2 days ago||
This is why I secretly hate the term software engineer. "Software tinker" would be more appropriate.
Groxx 2 days ago||
ha, I like that one - it evokes the right mental image.
mikestorrent 3 days ago|||
Well, consider that a lot of these exploited functions are simple things. We use a library to spare ourselves the drudgery of rewriting them, but now that we have AI, what's it to me if I end up with my own string-colouring functions in some file under my own control, versus bringing in an external dependency that puts me on a permanent upgrade treadmill and opens the risk of supply chain attacks?

Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.

JoshTriplett 3 days ago||
> but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control

Before AI code generation, we would have called that copy-and-paste, and a code smell compared to proper reuse of a library. It's not any better with AI. That's still code you'd have to maintain, and debug. And duplicated effort from all the other code doing the same thing, and not de-duplicated across the numerous libraries in a dependency tree or on a system, and not benefiting from multiple people collaborating on a common API, and not benefiting from skill transfer across projects...

mikestorrent 20 hours ago||
> a code smell

Smells are changing, friend. Now, when I see a program with 20000 library dependencies that I have to feed into a SAST and SCA system and continually point-version-bump and rebuild, it smells a hell of a lot worse to me than something self-contained.

At this point, I feel like I can protect the latter from being exploited better than the former.

ryandrake 3 days ago|||
Unpopular opinion these days, but: It should be painful to pull in a dependency. It should require work. It should require scrutiny, and deep understanding of the code you're pulling in. Adding a dependency is such an important decision that can have far reaching effects over your code: performance, security, privacy, quality/defects. You shouldn't be able to casually do it with a single command line.
heisenbit 3 days ago|||
For better or worse it is often less work to create a dependency than to maintain it over its lifetime. Improvements in maintenance also ease creation of new dependencies.
skydhash 3 days ago|||
I wouldn’t go as far as painful. The main issue is transitive dependencies; the tree can be several layers deep.

In the C world, anything that is not direct is often a very stable library and can be brought in as a peer dep. Breaking changes happen less often, and you can resolve the tree manually.

In NPM, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It’s a severe lack of discipline.

mbrevda1 3 days ago|||
yup, here are node's docs for it (WIP): https://nodejs.org/api/permissions.html
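To make that concrete, a small sketch against Node's permission model (it shipped as --experimental-permission in Node 20 and flag names have shifted between versions, so treat the exact invocation as approximate):

    // guard.js -- run with something like:
    //   node --permission --allow-fs-read=./data guard.js
    const fs = require("node:fs");

    console.log(process.permission.has("fs.read")); // query a grant at runtime

    try {
      fs.readFileSync("/etc/passwd"); // outside the granted path
    } catch (err) {
      console.log(err.code); // "ERR_ACCESS_DENIED" under the permission model
    }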
SebastianKra 3 days ago|||
Yeah, there's an entire community dedicated to cleaning up the js ecosystem.

https://e18e.dev/

Micro-dependencies are not the only thing that went wrong here, but hopefully this is a wakeup call to do some cleaning.

skydhash 3 days ago||
Discord server? Is it that much work to create a forum or a mailing list with anonymous access, especially for a community you can vet that easily?
stickfigure 3 days ago|||
It wouldn't be a problem if there wasn't a culture of "just upgrade everything all the time" in the javascript ecosystem. We generally don't have this problem with Java libraries, because people pick versions and don't upgrade unless there's good reason.
ilvez 3 days ago|||
From a maintenance perspective, both never and always seem like extremes though.

Upgrading only once you've fallen off the train is a serious drawback to moving fast.

0xDEAFBEAD 3 days ago||
Maybe we need two upgrade paths: An expedited auto-upgrade path which requires multi-key signoff from various trusted developers, and a standard upgrade path which is low-pressure.
jcelerier 3 days ago|||
and then you get Log4Shell
anonzzzies 3 days ago|||
Yes. It is a bit painful that this is not obvious by now. But I do have to, every code review, whine about people who include trivial, outdated, one-function npm packages :(
balder1991 3 days ago|||
Working for a bank did make me think much more about all the vulnerabilities that can go into certain tools. The company has a lot of bureaucracy to prevent installing anything or adding external dependencies.
benoau 3 days ago||
Working for a fintech and being responsible for the software made me very wary of dependencies, and of weeding out the deprecated and EOL'd stuff that had somehow already found its way into what was a young project when I joined. Left unrestrained, developers will add anything that resolves their immediate needs; you could probably spread malware very well just by writing a fake blog advocating a malicious module for certain scenarios.
esseph 3 days ago||
> Left unrestrained, developers will add anything if it resolves their immediate needs

Absolutely. A lot of developers work on a large Enterprise app for years and then scoot off to a different project or company.

What's not fun is being the poor Ops staff that have to deal with supporting the library dependencies, JVM upgrades, etc for decades after.

procaryote 3 days ago|||
I've nixed javascript in the backend in several places, partly because of the weird culture around dependencies. Having to audit that for compliance, or keeping it actually secure, is a nightmare.

Nixing javascript in the frontend is a harder sell, sadly

christophilus 3 days ago||
What did you switch to instead? I used to be a C# dev, and have done my fair share of Go. Both of those have decent enough standard libraries that I never found myself with a large 3rd party dependency tree.

Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.

procaryote 3 days ago||
You can get pretty far in python without a lot of dependencies, and the dependencies you do need tend to be more substantial blocks of functionality. Much easier to keep the tree small than npm.

Same with Java, if you avoid springboot and similar everything frameworks, which admittedly is a bit of an uphill battle given the state of java developers.

You can of course also keep dependencies small in javascript, but it's a very uphill fight where you'll have just a few options, and most people you hire are used to including a library (that includes 10 libraries) to not have to do something like `if (x % 2 == 1)`.

Just started with golang... the language is a bit annoying but the dependency culture seems OK

amarant 3 days ago|||
Throwback to leftpad!

Hey that was also on NPM iirc!

amysox 3 days ago||
What I'd like to know is why anyone thinks it's a good idea to have this level of granularity in libraries? Seriously? A library that only contains "a utility function that determines if its argument can be used like an array"? That's a lot of overhead in dependency management, which translates into a lot of cognitive load. Sooner or later, something's going to snap...and something did, here.
fiatpandas 3 days ago|
His email client even puts a green check mark next to the fake NPM email. UX fail.
yencabulator 3 days ago|
The claim is valid -- it is legitimately from npm.help.

If you think npm.help is something it isn't, that's not something DKIM et al can help with.
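Concretely, what the client verifies is which domain signed the mail, not whether that domain is the one you meant. A simplified sketch of reading the result (a real client uses a proper DKIM verifier, not a regex):

    // Sketch: DKIM "pass" only identifies the signing domain.
    function signingDomain(authResults) {
      // e.g. "mx.google.com; dkim=pass header.d=npm.help"
      const m = /dkim=pass\b[^;]*\bheader\.d=([^\s;]+)/.exec(authResults);
      return m ? m[1] : null;
    }

    const d = signingDomain("mx.google.com; dkim=pass header.d=npm.help");
    // d === "npm.help": authentic, and exactly the problem --
    // nothing here says "npm.help" isn't npmjs.com.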

kccqzy 3 days ago|||
Do you remember, a few years ago, when browsers used to put a lock icon next to all HTTPS connections? That lock icon signified that the connection was encrypted, all right; to a tech geek, that's a valid use of a lock icon. But browsers still removed it, because it was a massive UX fail. You have to consider what the lock icon means to people who are minimally tech-literate. I understand and have set up DKIM and SPF, but you cannot condense the intended security feature of DKIM/SPF/DMARC into a single icon and expect that to be good UX.
yencabulator 3 days ago|||
Browsers moved away from the HTTPS lock icon only after HTTPS became very, very common. Email hasn't reached a comparable state.
kccqzy 3 days ago||
We are talking about a UX failure regarding what a lock icon or a checkmark icon represents. Popularity is irrelevant. It's entirely about the disconnect between what tech geeks think a lock/checkmark icon represents and normal users think it represents.
yencabulator 3 days ago||
Instead of ranting, can you say something constructive?

I can think of 3 paths to improving the situation (assuming that "everyone deploys cryptographic email infrastructure instantly" is not gonna happen).

1. The email client doesn't indicate DKIM at all. This is strictly worse than today, because then the attack could have claimed to be from npmjs.com.

2. You only get a checkmark if you have DKIM et al plus you're a "verified domain". This means only big corporations get the checkmark -- I hate this option. It's EV SSL but even worse. And again, unless npmjs.com was a "big corporation" the attacker could have just faked the sender and the user would not notice anything different, since in that world the authentic npmjs.com emails wouldn't have a checkmark either.

3. The checkmark icon is changed into something else, nothing else happens. But what? "DKIM" isn't the full picture (and would be horribly confusing too). Putting a sunflower there seems a little weird. Do you really apply this much significance to the specific icon?

The path that HTTPS took just hasn't been repeatable in the email space; the upgrade cycles are much slower, the basic architecture is client->server->server not client->server, and so on.

zokier 3 days ago|||
> Do you remember a few years ago that browsers used to put a lock icon for all HTTPS connections?

Few years ago? I have lock icon right now in my address bar

yencabulator 3 days ago||
Chrome removed it, Firefox de-emphasized it by making it grayscale.