Posted by zdw 12/26/2025

How uv got so fast (nesbitt.io)
1290 points | 459 comments
Revisional_Sin 12/27/2025|
> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.

Erm, isn't this a bit bad?

aragilar 12/27/2025||
Yes, but it's (probably) the least bad thing they can do, given how the "PyPI" ecosystem behaves. Since PyPI does not allow replacement of artefacts (sdists, wheels, and older formats), and since there is no way to update or correct an artefact's metadata after upload, an upper bound only reflects a known incompatibility if the uploader already knew, at upload time, of an incompatibility between their package and the upper-bounded reference (whether that is the Python interpreter or another Python package). In addition, certain tools (e.g. Poetry) added upper bounds automatically, increasing the number of spurious bounds. https://iscinumpy.dev/post/bound-version-constraints/ provides more details.

The general lesson is that when you do not allow invalid data to be changed or replaced (which is a legitimate thing to do), you get stuck handling the bad data in every system that uses it (and then you need to worry about different components handling the badness in different ways; see e.g. browsers).
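
For concreteness, here is a minimal sketch, using the `packaging` library, of what "only checking the lower bound" amounts to; the bound values are just illustrative:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

declared = SpecifierSet(">=3.8,<4.0")  # what many packages publish (often added by tooling)

# Keep only the non-upper-bound specifiers -- roughly what uv's relaxation amounts to:
relaxed = SpecifierSet(",".join(str(s) for s in declared if s.operator not in ("<", "<=")))

print(Version("3.13") in declared)  # True  -- both agree for every Python that exists today
print(Version("3.13") in relaxed)   # True
print(Version("4.0") in declared)   # False -- they only differ on a version that doesn't exist
print(Version("4.0") in relaxed)    # True
```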

Pawamoy 12/27/2025||
No. When such upper bounds are respected, they contaminate other packages, because you have to add them yourself to stay compatible with your dependencies. Then your dependents must add them too, and so on. This brings only pain. Python 4 is not even a thing; core developers say there won't ever be a Python 4.
akoboldfrying 12/27/2025||
> you have to add them yourself to be compatible with your dependencies

This is no more true for version upper bounds than it is for version lower bounds, assuming that package installers ensure all package version constraints are satisfied.

I presume you think version lower bounds should still be honoured?

zahlman 12/27/2025||
The point is that you can know that a lower bound is necessary at the time of publication; an upper bound is either speculative or purely defensive, and has possibly unnecessary consequences for your dependents.
akoboldfrying 12/28/2025||
You can also know that an upper bound is necessary at the time of publication -- for example, if your foo project uses bar 2.0, and bar 3.0 has already come out, and you have tried it and found it incompatible.

In the reverse direction, many version lower bounds are also "purely defensive" -- arising from nothing more than the version of the dep that you happened to get when you started the project. (Just because you installed "the latest baz" and got version 2.3.4, without testing there is nothing to say that version 2.3.3 would also work fine, so adding the version lower bound >=2.3.4 is purely defensive).

Basically, the two bound types are isomorphic.

yjftsjthsd-h 12/26/2025||
> No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install. You can opt in if you want it.

Are we losing out on performance of the actual installed thing, then? (I'm not 100% clear on .pyc files TBH; I'm guessing they speed up start time?)

woodruffw 12/26/2025||
No, because Python itself will generate bytecode for packages once you actually import them. uv just defers that to first-import time, but the cost is amortized in any setting where imports are performed over multiple executions.
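
As a minimal sketch of the two moments bytecode can be produced (the path and package name below are hypothetical):

```python
import compileall
import importlib

PKG_DIR = "/tmp/venv/lib/python3.12/site-packages/somepkg"  # hypothetical install location

# 1. Install time: what pip does by default, and what uv does only when asked
#    (e.g. via --compile-bytecode).
compileall.compile_dir(PKG_DIR, quiet=1)

# 2. Import time: CPython compiles any .py file lacking an up-to-date .pyc on
#    first import and caches the result under __pycache__/ for later runs.
importlib.import_module("somepkg")
```
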
yjftsjthsd-h 12/26/2025|||
That sounds like yes? Instead of doing it once at install time, it's done once at first use. It's only once so it's not persistently slower, but that is a perf hit.

My first cynical instinct is to say that this is uv making itself look better by deferring the costs to the application, but it's probably a good trade-off: if a significant percentage of the files being compiled might never be used, the overall cost is lower when you defer to run time.

VorpalWay 12/26/2025|||
I think they are making the bet that most modules won't be imported. For example if I install scipy, numpy, Pillow or such: what are the chances that I use a subset of the modules vs literally all of them?

I would bet on a subset for pretty much any non-trivial package (i.e. larger than one or two user-facing modules). And for those trivial packages? Well, they are usually small, so the cost is small as well. I'm sure there are exceptions: maybe a single gargantuan module that consists of autogenerated FFI bindings for some C library or such, but that is likely the minority.

woodruffw 12/26/2025||||
> It's only once so it's not persistently slower, but that is a perf hit.

Sure, but you pay that hit either way. Real-world performance is always usage based: the assumption that uv makes is that people run (i.e. import) packages more often than they install them, so amortizing at the point of the import machinery is better for the mean user.

(This assumption is not universal, naturally!)

dddgghhbbfblk 12/26/2025||
Ummm, your comment is backwards, right?
woodruffw 12/26/2025||
Which part? The assumption is that when you `$TOOL install $PACKAGE`, you run (i.e. import) `$PACKAGE` more than you re-install it. So there's no point in slowing down (relatively less common) installation events when you can pay the cost once on import.

(The key part being that 'less common' doesn't mean it takes a trivial amount of time.)

dddgghhbbfblk 12/26/2025||
Why would you want to slow down the more common thing instead of the less common thing? I'm not following that at all. That's why I asked if that's backwards.
woodruffw 12/27/2025||
Because you only slow down the more common thing once, and the less common thing is slower in absolute terms.
lillecarl 12/27/2025||
uv optimizes for the common use case: You will install more packages than you will import new packages.
beacon294 12/26/2025||||
Probably for any case where an actual human is doing it. On an image you obviously want to do it at bake time, so I feel default off with a flag would have been a better design decision for pip.

I just read the thread and use Python, I can't comment on the % speedup attributed to uv that comes from this optimization.

Epa095 12/26/2025|||
Images are a good example where doing it at install time is probably best, yeah, since every run of the image starts 'fresh', losing the compilation that happened the last time the image was started.

If it were an optional toggle, it would probably become best practice to activate compilation in Dockerfiles.

zahlman 12/27/2025|||
> On an image you obviously want to do it at bake time

It seems like tons of people are creating container images with an installer tool and having it do a bunch of installations, rather than creating the image with the relevant Python packages already in place. Hard to understand why.

For that matter, a pre-baked Python install could do much more interesting things to improve import times than just leaving a forest of `.pyc` files in `__pycache__` folders all over the place.

tedivm 12/26/2025||||
You can change it to compile the bytecode on install with a simple environment variable (which you should do when building docker containers if you want to sacrifice some disk space to decrease initial startup time for your app).
saidnooneever 12/26/2025|||
You are right. It depends on how often this first start happens, and whether it's bad or not. In most use cases, I'd guess (total guess, I have limited experience with Python projects professionally) it's not an issue.
zmgsabst 12/27/2025|||
That’s actually a negative:

My Docker build generating the byte code saves it to the image, sharing the cost at build time across all image deployments — whereas, building at first execution means that each deployed image instance has to generate its own bytecode!

That’s a massive amplification, on the order of 10-100x.

“Well just tell it to generate bytecode!”

Sure — but when is the default supposed to be better?

Because this sounds like a massive footgun for a system where requests >> deploys >> builds. That is, every service I’ve written in Python for the last decade.

hauntsaninja 12/26/2025|||
Yes, uv skipping this step is a one time significant hit to start up time. E.g. if you're building a Dockerfile I'd recommend setting `--compile-bytecode` / `UV_COMPILE_BYTECODE`
salviati 12/26/2025|||
Historically, the practice of producing .pyc files on install started with system-wide installed packages, I believe, where the user running the program might lack the privileges to write them. If the installer can write the .py files, it can also write the .pyc files, while the user running them might not be able to write in that location.
thundergolfer 12/26/2025|||
This optimization hits serverless Python the worst. At Modal we ensure users of uv are setting UV_COMPILE_BYTECODE to avoid the cold start penalty. For large projects .pyc compilation can take hundreds of milliseconds.
zahlman 12/26/2025|||
> I'm not 100% clear on .pyc files TBH; I'm guessing they speed up start time?

They do.

> Are we losing out on performance of the actual installed thing, then?

When you consciously precompile Python source files, you can parallelize that process. When you `import` from a `.py` file, you only get that benefit if you somehow coincidentally were already set up for `multiprocessing` and happened to have your workers trying to `import` different files at the same time.
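
A minimal sketch of that parallel precompilation, using only the standard library (the venv path is hypothetical; `workers=0` means "use all CPU cores"):

```python
import compileall

compileall.compile_dir(
    "/tmp/venv/lib/python3.12/site-packages",  # hypothetical venv
    workers=0,  # fan out compilation across processes (parameter available since Python 3.5)
    quiet=1,
)
```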

plorkyeran 12/26/2025||
If you have a dependency graph large enough for this to be relevant, it almost certainly includes a large number of files which are never actually imported. At worst the hit to startup time will be equal to the install time saved, and in most cases it'll be a lot smaller.
zahlman 12/27/2025||
> a large number of files which are never actually imported

Unfortunately, it typically doesn't work out as well as you might expect, especially given the expectation of putting `import` statements at the top of the file.

w10-1 12/26/2025||
I like the implication that we can have an alternative to uv speed-wise, but I think reliability and understandability are more important in this context (so this comment is a bit off-topic).

What I want from a package manager is that it just works.

That's what I mostly like about uv.

Many of the changes that made speed possible were to reduce the complexity and thus the likelihood of things not working.

What I don't like about uv (or pip or many other package managers) is that the programmer isn't given a clear mental model of what's happening and thus how to fix the inevitable problems. Better (pubgrub-style) error messages are good, but it's rare that they can provide specific fixes. So even if you get 99% speed, you end up with 1% perplexity and diagnostic black boxes.

To me the time that matters most is time to fix problems that arise.

zahlman 12/26/2025|
> the programmer isn't given a clear mental model of what's happening and thus how to fix the inevitable problems.

This is a priority for PAPER; it's built on a lower-level API so that programmers can work within a clear mental model, and I will be trying my best to communicate well in error messages.

bastawhiz 12/26/2025||
> When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.

This is kind of fascinating. I've never considered runtime upper bound requirements. I can think of compelling reasons for lower bounds (dropping version support) or exact runtime version requirements (each version works for exact, specific CPython versions). But now that I think about it, it seems like upper bounds solve a hypothetical problem that you'd never run into in practice.

If PSF announced v4 and declared a set of specific changes, I think this would be reasonable. In the 2/3 era it was definitely reasonable (even necessary). Today though, it doesn't actually save you any trouble.

wging 12/26/2025||
I think the article is being careful not to say uv ignores _all_ upper bound checks, but specifically 4.0 upper bound checks. If a package says it requires python < 3.0, that's still super relevant, and I'd hope for uv to still notice and prevent you from trying to import code that won't work on python 3. Not sure what it actually does.
breischl 12/26/2025|||
I read the article as saying it ignores all upper-bounds, and 4.0 is just an example. I could be wrong though - it seems ambiguous to me.

But if we accept that it currently ignores any upper-bounds checks greater than v3, that's interesting. Does that imply that once Python 4 is available, uv will slow down due to needing to actually run those checks?

VorpalWay 12/26/2025|||
Are there any plans to actually make a 4.0 ever? I remember hearing a few years ago that after the transition to 3.0, the core devs kind of didn't want to repeat that mess ever again.

That said, even if it does happen, I highly doubt that is the main part of the speed up compared to pip.

bastawhiz 12/27/2025|||
I think there's a future where we get a 4.0, but it's not any time soon. I think they'd want an incredibly compelling backwards-incompatible feature before ripping that band-aid off. It would be setting up for a decade of transition, which shouldn't be taken lightly.
zahlman 12/27/2025|||
There are indeed not any such plans.
cmrx64 12/27/2025|||
That would deliver a blow to the integrity of the rest of that section, because those sorts of upper-bound constraints, being immediately reducible to "true", cannot cause backtracking of any kind.
bastawhiz 12/27/2025|||
uv doesn't support <3.0 (I think the minimum is 3.8?) so it would be difficult for that to be relevant. But for pip, obviously yes.
wging 12/28/2025||
uv supports PyPI, which still has packages that are Python-2-only. So even if you're running python 3.8, it seems possible to try to declare a dependency on some <3.0 code from PyPI. That means it's an error they should detect.
unethical_ban 12/27/2025||
The problem: The specification is binary. Are you compatible or not?

Whether a Python package will be compatible with a version that has not yet been released is unanswerable right now.

Having an enum like [compatible, incompatible, untested], at the least, would fix this.
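
A sketch of that hypothetical three-state field as a Python enum; nothing like this exists in current packaging metadata:

```python
from enum import Enum

class Compatibility(Enum):
    COMPATIBLE = "compatible"      # tested and known to work
    INCOMPATIBLE = "incompatible"  # tested and known to break
    UNTESTED = "untested"          # defensive bound only; a resolver could treat this leniently
```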

zzzeek 12/26/2025||
> pip could implement parallel downloads, global caching, and metadata-only resolution tomorrow. It doesn’t, largely because backwards compatibility with fifteen years of edge cases takes precedence. But it means pip will always be slower than a tool that starts fresh with modern assumptions.

What does backwards compatibility have to do with parallel downloads? Or global caching? Metadata-only resolution is the only backwards-compatibility issue in there, and pip can run without a setup.py file being present if pyproject.toml is there.

The short answer is that most, or at least a whole lot, of the improvements in uv could be integrated into pip as well (especially parallelizing downloads). But they're not, because there is uv instead, which is also maintained by a for-profit startup. So pip is the loser.

BiteCode_dev 12/26/2025||
Other design decisions that made uv fast:

- uncompressing packages while they are still being downloaded, in memory, so that you only have to write to disk once

- design of its own locking format for speed

But yes, rust is actually making it faster because:

- real threads, no need for multi-processing

- no python VM startup overhead

- the dep resolution algo is exactly the type of workload that is faster in a compiled language

Source: this interview with Charlie Marsh: https://www.bitecode.dev/p/charlie-marsh-on-astral-uv-and-th...

The guy has a lot of interesting things to say.

zzzeek 12/26/2025||
> real threads, no need for multi-processing

Parallel downloads don't need multiprocessing, since this is an I/O-bound use case. asyncio or GIL threads (which unblock on I/O) would be perfectly fine. Native threads will eventually be the default as well.
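
A minimal sketch of that thread-based approach, which works fine under the GIL because the work is I/O-bound (the URLs are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

URLS = [
    "https://files.pythonhosted.org/packages/.../pkg_a-1.0-py3-none-any.whl",  # hypothetical
    "https://files.pythonhosted.org/packages/.../pkg_b-2.0-py3-none-any.whl",  # hypothetical
]

def fetch(url: str) -> bytes:
    # Blocking I/O releases the GIL, so threads overlap the downloads.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

with ThreadPoolExecutor(max_workers=8) as pool:
    wheels = list(pool.map(fetch, URLS))
```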

BiteCode_dev 12/27/2025||
Indeed, but unzipping while downloading does. Analysing multiple metadata files and exporting lock data as well.

Now, I believe unzip already releases the GIL, so we could already benefit from that, and the rest likely doesn't dominate performance.

But still, rust software is faster on average than python software.

After all, all those things are possible in python, and yet we haven't seen them all in one package manager before uv.

Maybe the strongest advantage of rust, on top of very clean and fast default behaviors, is that it attracts people that care about speed, safety and correctness. And those devs are more likely to spend time implementing fast software.

Though the main benefit of uv is not that it's fast. Speed is very nice, and opens up more use cases, but it's not the killer feature.

The killer feature is that, being a standalone executable, it bypasses all Python bootstrapping problems.

Again, that could technically be achieved in python, but friction is a strong force.

zzzeek 12/27/2025||
> Maybe the strongest advantage of rust, on top of very clean and fast default behaviors, is that it attracts people that care about speed, safety and correctness. And those devs are more likely to spend time implementing fast software.

people who have this opinion should use Rust, not Python, at all. if Python code does not have sufficient speed, safety, and correctness for someone, it should not be used. Python's tools should be written in Python.

> The killer feature is, being a stand alone executable, it bypasses all python bootstrapping problems.

I can't speak for Windows or Macs, but on Linux, system Pythons are standard, and there is no "bootstrapping problem" using well-known utilities that happen to be written in Python.

BiteCode_dev 12/28/2025||
For the latter point, you are blinded by your own competence.

Bootstrapping a clean Python env is the single biggest problem for people who are not coding in Python daily.

That's half of the community in the python world.

When you write SQLAlchemy, that's not obvious, because you know a lot. But for the average user, uv was a savior.

I wrote a pretty long article on that here:

https://www.bitecode.dev/p/why-not-tell-people-to-simply-use

We also discuss it with Brett Cannon here:

https://www.bitecode.dev/p/brett-cannon-on-python-humans-and

But the most convincing argument comes from teaching Python to kids, accountants, mathematicians, Java coders, and sysadmins.

After 20 years of doing that, I saw the same problems again and again.

And then uv arrived. And they disappeared for those people.

zzzeek 12/28/2025||
> And then uv arrived. And they disappeared for those people.

I'm not arguing against tools that make things as easy as possible for non-programmers; I'm arguing against gigantic forks in the Python installation ecosystem. Forks like these are harmful to the tooling. I'm already suffering quite a bit due to the flake8/ruff forking, where ruff made a much better linter engine but didn't feel like implementing plugins, so everyone is stuck on what I feel is a mediocre set of linting tools. Just overall, I don't like Astral's style, and I think a for-profit startup forking out huge chunks of the Python ecosystem is going to be a bad thing long term.

zahlman 12/27/2025||
> uncompressing packages while they are still being downloaded

... but the archive directory is at the end of the file?

> no python VM startup overhead

This is about 20 milliseconds on my 11-year-old hardware.

BiteCode_dev 12/27/2025|||
HTTP range strikes again.

As for 20 ms, if you deal with 20 dependencies in parallel, that's 400ms just to start working.

Shaving half a second off many things makes things fast.

Although, as we saw with zzzeek in the other comment, you likely don't need multiprocessing, since the network stack and the unzip code in the stdlib release the GIL.

Threads are cheaper.

Maybe if you bundled pubgrub as a compiled extension, you could get pretty close to uv's perf.
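
For the range-request trick mentioned above: a rough sketch of fetching only the tail of a wheel, which is enough to locate the zip central directory (and from there, individual entries such as the metadata). The URL is hypothetical, and ZIP64 and error handling are ignored:

```python
import urllib.request

WHEEL_URL = "https://files.pythonhosted.org/packages/.../example-1.0-py3-none-any.whl"  # hypothetical
TAIL_BYTES = 65_536  # the End of Central Directory record typically sits within the last ~64 KiB

req = urllib.request.Request(WHEEL_URL, headers={"Range": f"bytes=-{TAIL_BYTES}"})
with urllib.request.urlopen(req) as resp:
    tail = resp.read()

eocd = tail.rfind(b"PK\x05\x06")  # End of Central Directory signature
print("central directory located in tail:", eocd != -1)
```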

zahlman 12/27/2025||
Why are you starting a separate Python process for each dependency?
BiteCode_dev 12/28/2025||
Real threads are very recent and didn't exist when uv was created. So you needed multiple processes.
zahlman 12/28/2025||
No, I mean why are you starting them for each dependency, rather than having a few workers pulling build requests from a queue?
BiteCode_dev 12/28/2025||
At least one worker for each virtual CPU core, for CPU-bound work. I've got 16 on my laptop. My servers have many more.

If I have 64 cores, and 20 dependencies, I do want the 20 of them to be uncompressed in parallel. That's faster and if I'm installing something, I wanna prioritize that workload.

But it doesn't have to be 20. Even with, say, 5 workers and queues, that's 100ms. It adds up.

collinmanderson 12/27/2025|||
Using the -S flag (skip importing the site module) can maybe cut startup time in half.
est 12/27/2025||
> Virtual environments required

This has bothered me more than once when building a base Docker image. Why would I want a venv inside a Docker image where I'm root?

pornel 12/27/2025||
The old package managers messing up the global state by default is the reason why Docker exists. It's the venv for C.
forrestthewoods 12/27/2025||
Because a single docker image can run multiple programs that have mutually exclusive dependencies?

Personally, I never want a program to touch global shared libraries, ever. Yuck.

est 12/27/2025||
> a single docker image can run multiple programs

You absolutely can. But it's not best practice.

https://docs.docker.com/engine/containers/multi-service_cont...

forrestthewoods 12/27/2025||
God I hate docker so much. Running computers does not have to be so bloody complicated.
ldng 12/27/2025||
That might be how uv got fast but that is not why it got popular.

PyPA has been a mess for a very long time, with in-fighting, astroturfing, gatekeeping and so on, with pip as the battlefield. The uv team just did the one thing that PyPA & co stopped doing a long time ago (if they ever did it...): actually solving their users' pain points, and never saying "it's not possible because [insert bullshit]" or replying "it's OSS, do it yourself" only to then reject the work with attitude and baseless arguments.

They listened to their users' issues and solved their pain points without denying them. Period.

nurettin 12/26/2025||
> When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower.

I will bring popcorn on python 4 release date.

yjftsjthsd-h 12/26/2025||
If it's really not doing any upper-bound checks, I could see it blowing up under more mundane conditions; Python includes breaking changes in minor (3.x) releases, so I've had, e.g., packages require (say) Python 3.10 when 3.11/3.12 was current.
dev_l1x_be 12/26/2025|||
I always bring popcorn for major version changes of any programming language. I hope Rust's never-2.0 stance holds.
zahlman 12/26/2025||
It would be popcorn-worthy regardless, given the rhetoric surrounding the idea in the community.
robertclaus 12/26/2025|
At Plotly we did a decent amount of benchmarking to see how much the different defaults `uv` uses contribute to its performance. This was necessary so we could advise our enterprise customers on the transition. We found you lost almost all of the speed gains if you configured uv to behave as much like pip as you could. A trivial example is the precompile flag, which can easily be 50% of pip's install time for a typical data science venv.

https://plotly.com/blog/uv-python-package-manager-quirks/

zahlman 12/26/2025|
The precompilation thing was brought up to the uv team several months ago IIRC. It doesn't make as much of a difference for uv as for pip, because when uv is told to pre-compile it can parallelize that process. This is easily done in Python (the standard library even provides rudimentary support, which Python's own Makefile uses); it just isn't in pip yet (I understand it will be soon).