Posted by zdw 12/26/2025

How uv got so fast (nesbitt.io)
1290 points | 459 comments
VerifiedReports 12/26/2025|
So... will uv make Python a viable cross-platform utility solution?

I was going to learn Python for just that (file-conversion utilities and the like), but everybody was so down on the messy ecosystem that I never bothered.

IshKebab 12/26/2025||
Yes, uv basically solves the terrible Python tooling situation.

In my view that was by far the biggest issue with Python - a complete deal-breaker really. But uv solves it pretty well.

The remaining big issues are a) performance, and b) the import system. uv doesn't do anything about those.

Performance may not be an issue in some cases, and the import system is ... tolerable if you're writing "a python project". If you're writing some other kind of project and considering Python for its scripting system, e.g. to wrangle multiple build systems or whatever, then the import mess is a bigger issue and I would think long and hard before picking it over Deno.

VerifiedReports 12/27/2025||
Thanks! I don't really think about importing stuff (which maybe I should), because I assume I'll have to write any specialized logic myself. So... your outlook is encouraging.
pseudosavant 12/26/2025|||
I write all of my scripts in Python with PEP 723 metadata and run them with `uv run`. Works great on Windows and Linux for me.
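
For anyone who hasn't seen it, the PEP 723 metadata is just a comment block at the top of the script, and `uv run` builds a matching environment on the fly. A minimal example (the dependency choice is illustrative):

  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["requests"]
  # ///
  import requests

  print(requests.get("https://example.com").status_code)

Then `uv run script.py` fetches a suitable interpreter and the dependencies automatically.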
zahlman 12/26/2025||
It has been viable for a long time, and the kinds of projects you describe are likely well served by the standard library.
oblio 12/27/2025||
It hasn't been viable and you'd know if you tried to deploy Python scripts to Windows users and maintain/update them over longer periods of time.
didip 12/26/2025||
If the uv team has spare time, they should rewrite Python in Rust without any of the legacy baggage.
annexrichmond 12/27/2025||
> This reduces resolver backtracking dramatically since upper bounds are almost always wrong.

I am surprised by this because Python minor versions break backwards compatibility all the time. Our company, for example, is doing a painful upgrade from py39 to py311.
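
For context, here is the quoted behavior sketched with the `packaging` library (illustrative only, not uv's code; assumes `packaging` is installed):

  from packaging.specifiers import SpecifierSet
  from packaging.version import Version

  requires_python = SpecifierSet(">=3.8,<4.0")

  # Keep only the lower bounds, mimicking the behavior quoted above:
  lower_only = SpecifierSet(
      ",".join(str(s) for s in requires_python if s.operator in (">=", ">"))
  )

  print(Version("3.13") in requires_python)  # True either way
  print(Version("4.0") in lower_only)        # True: upper bound ignored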

zahlman 12/27/2025|
Could you explain what major pain points you've encountered? I can't think of any common breakages cited in 3.10 or 3.11 offhand. 3.12 had a lot more standard library removals, and the `match` statement introduced in 3.10 uses a soft keyword and won't break code that uses `match` as an identifier.
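
For example, this runs fine on 3.10+ because `match` is only a soft keyword (a small sketch):

  import re

  # `match` keeps working as an ordinary name...
  match = re.match(r"\d+", "42 apples")

  # ...while the statement form parses right next to it:
  match int(match.group()):
      case 42:
          print("the answer")
      case _:
          print("something else")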
ggm 12/26/2025||
Some of these speed-ups looked viable to backport into pip, including parallel downloads, delayed .pyc compilation, ignoring eggs, and version checks.

Not that I'd bother, since uv does venvs so well. But "it's not all Rust runtime speed" implies pip could be faster too.
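
Parallel downloads, at least, need nothing Rust-specific; a standard-library sketch (the URLs are placeholders):

  from concurrent.futures import ThreadPoolExecutor
  from urllib.request import urlopen

  def fetch(url):
      with urlopen(url) as resp:
          return resp.read()

  urls = ["https://example.invalid/a.whl", "https://example.invalid/b.whl"]
  with ThreadPoolExecutor(max_workers=8) as pool:
      wheels = list(pool.map(fetch, urls))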

simonw 12/26/2025||
This post is excellent. I really like reading deep dives like this that take a complex system like uv and highlight the unique design decisions that make it work so well.

I also appreciate how much credit this gives the many previous years of Python standards processes that enabled it.

Update: I blogged more about it here, including Python recreations of the HTTP range header trick it uses and the version comparison via u64 integers: https://simonwillison.net/2025/Dec/26/how-uv-got-so-fast/
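
The version comparison trick packs the common case (a few small release segments) into a single integer, so comparing versions becomes one CPU compare. A rough Python sketch of the idea (uv's real encoding also covers pre/post/dev releases and falls back to a slower path when segments don't fit):

  def pack_version(release, width=16):
      # Pack up to four release segments into 64 bits, most significant
      # first, so tuple comparison collapses to one integer comparison.
      n = 0
      for i in range(4):
          seg = release[i] if i < len(release) else 0
          assert 0 <= seg < (1 << width)
          n = (n << width) | seg
      return n

  assert pack_version((1, 2, 3)) < pack_version((1, 10, 0))
  assert pack_version((2, 0)) > pack_version((1, 99, 99))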

eviks 12/26/2025||
> Every code path you don’t have is a code path you don’t wait for.

No, every code path you don't execute is that. Like

> No .egg support.

How does that explain anything if the egg format is obsolete and not used?

Similarly with the spec-strictness fallback logic: it's only slow if the packages you're installing are malformed; otherwise the logic won't run and won't slow you down.

And in general, instead of a list of irrelevant and potentially relevant things, it would be great to understand the actual time savings per item (at least for those that deliver the most speedup)!

But otherwise great and seemingly comprehensive list!

zahlman 12/26/2025|
> No, every code path you don't execute is that.

Even in compiled languages, binaries have to get loaded into memory. For Python it's much worse. On my machine:

  $ time python -c 'pass'

  real 0m0.019s
  user 0m0.013s
  sys 0m0.006s

  $ time pip --version > /dev/null

  real 0m0.202s
  user 0m0.182s
  sys 0m0.021s
Almost all of that extra time is either the module import process or garbage collection at the end. Even with cached bytecode, the former requires finding and reading from literally hundreds of files, deserializing via `marshal.loads` and then running top-level code, which includes creating objects to represent the functions and classes.

It used to be even worse than this; in recent versions, imports related to Requests are deferred to the first time that an HTTPS request is needed.
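
The deferral is essentially the classic lazy-import pattern; a sketch, not pip's actual code:

  _requests = None

  def get_requests():
      # Pay the import cost on first use instead of at startup.
      global _requests
      if _requests is None:
          import requests
          _requests = requests
      return _requests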

eviks 12/27/2025||
> binaries have to get loaded into memory.

Unless memory mapped by the OS with no impact on runtime for unused parts?

> imports related to Requests are deferred

Exactly, so again they have no impact?

zahlman 12/27/2025|||
> Unless memory mapped by the OS with no impact on runtime for unused parts?

Yeah, this is presumably why a no-op `uv` invocation on my system takes ~50 ms the first time and ~10 ms on each subsequent run.

> Exactly, so again have no impact?

Only if your invocation of pip manages to avoid an Internet request. Note: pip will make an Internet request if you try to install a package by symbolic name even if it already has the version it wants in cache, because its cache is an HTTP cache rather than a proper download cache.

But even then, there will be hundreds of imports mainly related to Rich and its dependencies.
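
To illustrate the distinction (a toy sketch, not pip's or uv's actual code): an HTTP cache is keyed by URL and has to revalidate stale entries over the network, while a download cache keyed by exact name and version can answer offline:

  def http_cached_fetch(url, cache, do_request):
      # HTTP-cache style: a stale entry triggers a conditional GET, so a
      # network round-trip happens even when the cached body is reused.
      etag, body, fresh = cache.get(url, (None, None, False))
      if fresh:
          return body
      status, new_etag, new_body = do_request(url, etag)
      if status == 304:  # unchanged on the server; reuse cached body
          cache[url] = (etag, body, True)
          return body
      cache[url] = (new_etag, new_body, True)
      return new_body

  def download_cached_fetch(name, version, cache, do_request):
      # Download-cache style: an exact (name, version) hit needs no network.
      key = (name, version)
      if key not in cache:
          _, _, cache[key] = do_request(f"{name}-{version}.whl", None)
      return cache[key]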

eviks 12/27/2025||
> Only if your invocation of pip manages to avoid an Internet request.

Yes it does, by definition; the topic of discussion is the impact of unused code paths. How is the HTTP cache relevant here? That's a used path!

zahlman 12/27/2025||
I got confused by the direction of the discussion.

My original point was that Requests imports in pip used to not be deferred like that, so you would pay for them up front, even if they turned out to be irrelevant. (But also they are relevant more often than they should be, i.e. the deferral system doesn't work as well as it should.)

Part of the reason you pay for them is to run top-level code (to create function and class objects) that are irrelevant to what the program is actually doing. But another big part is the cost of actually locating the files, reading them, and deserializing bytecode from them. This happens at import time even if you don't invoke any of the functionality.
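
CPython can make that cost visible directly:

  $ python -X importtime -c 'import pip'

This prints a per-module import-time breakdown (self and cumulative) to stderr, covering every file located, read, and unmarshalled before any functionality runs.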

cmrx64 12/27/2025|||
rtld does a lot of work even in "static" binaries, rewriting relocations in "unused parts" of any PIE (which should be all of them today), and most binaries need the full dynamic loader anyway.
trashburger 12/27/2025||
> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.

This is clearly LLM-generated and the other bullet points have the same smell. Please use your own words.

quantbagel 12/27/2025||
When I made a Swift package manager as a Rust rewrite, I realized that the language wasn't the issue; design is a lot more important. Rust just gave a boost to everything else. You can try Gust here: https://github.com/quantbagel/gust. It's a lot better than using SwiftPM, but there's room for improvement! File issues with your ideas.
ec109685 12/26/2025||
The article info is great, but why do people put up with LLM tics and slop in their writing? These sentences add no value and treat the reader as stupid.

> This is concurrency, not language magic.

> This is filesystem ops, not language-dependent.

Duh, you literally told me that the previous sentence and 50 million other times.

aurumque 12/26/2025|
This kind of writing goes deeper than LLMs, and reflects a decline in reading ability, patience, and attention. Without passing judgement, there are just more people now who benefit from repetition and summarization embedded directly in the article. The reader isn't 'stupid', just burdened.
twoodfin 12/26/2025||
Indeed, over the past few weeks I have been coming around to the realization and acceptance that the LLM editorial voice is a benefit to an order of magnitude more HN readers than those (like us) for whom it is ice-pick-in-the-nostril stuff.

Oh well, all I can do is flag.

sghaz 12/27/2025|
Liked the focus on standards and ecosystem decisions rather than just “it’s fast because Rust.”

One small timeline nit: the article mentions PEP 517 as being from 2017, but the PEP itself was created in 2015. From the PEP header:

Created: 30-Sep-2015 [1]

It did see important revisions and wider adoption around 2017, so I assume that’s what was meant.

[1] https://peps.python.org/pep-0517/
