Posted by zdw 23 hours ago

How uv got so fast (nesbitt.io)
1018 points | 340 comments | page 2
w10-1 17 hours ago|
I like the implication that we can have an alternative to uv speed-wise, but I think reliability and understandability are more important in this context (so this comment is a bit off-topic).

What I want from a package manager is that it just works.

That's what I mostly like about uv.

Many of the changes that made speed possible were to reduce the complexity and thus the likelihood of things not working.

What I don't like about uv (or pip or many other package managers) is that the programmer isn't given a clear mental model of what's happening and thus how to fix the inevitable problems. Better (pubgrub) error messages are good, but it's rare that they can provide specific fixes. So even if you get 99% speed, you end up with 1% perplexity and diagnostic black boxes.

To me the time that matters most is time to fix problems that arise.

zahlman 17 hours ago|
> the programmer isn't given a clear mental model of what's happening and thus how to fix the inevitable problems.

This is a priority for PAPER; it's built on a lower-level API so that programmers can work within a clear mental model, and I will be trying my best to communicate well in error messages.

andy99 19 hours ago||
I remain baffled about these posts getting excited about uv’s speed. I’d like to see a real poll, but I personally can’t imagine people listing speed as one of their top ten concerns about Python package managers. What are the common use cases where the delay due to package installation is at all material?

Edit to add: I use python daily

techbruv 18 hours ago||
At a previous job, I recall updating a dependency via poetry would take on the order of ~5-30m. God forbid after 30 minutes something didn’t resolve and you had to wait another 30 minutes to see if the change you made fixed the problem. Was not an enjoyable experience.

uv has been a delight to use

pxc 18 hours ago||
> updating a dependency via poetry would take on the order of ~5-30m. God forbid after 30 minutes something didn’t resolve and you had to wait another 30 minutes to see if the change you made fixed the problem

I'd characterize that as unusable, for sure.

thraxil 18 hours ago|||
Having worked heavily in Python for the last 20 years, I can say it absolutely was a big deal. `pip install` has been a significant percentage of the deploy time on pretty much every app I've ever deployed, and I've spent countless hours setting up various caching techniques trying to speed it up.
stavros 19 hours ago|||
I can run `uvx sometool` without fear because I know that it'll take a few seconds to create a venv, download all the dependencies, and run the tool. uv's speed has literally changed how I work with Python.
quectophoton 17 hours ago||
I wouldn't say without fear, since you're one typo away from executing a typo-squatted malicious package.

I do use it on CI/CD pipelines, but I wouldn't dare type uvx commands myself on a daily basis.

stavros 17 hours ago||
uvx isn't more risky than `pip install`, which is what I used before.
pnt12 7 hours ago||
But with pip you only need to be careful on install - with uvx you need to be careful forever.

I'm a big fan of uv, but don't like that part of uvx.

(makes me wonder if a small wrapper can do this - safe uvx, or suvx for short)
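As a rough sketch of that idea (everything here is hypothetical: the `suvx` name, the allowlist location, the heuristics), such a wrapper could refuse names that aren't allowlisted and flag near-misses that look like typos before handing off to the real uvx:

```python
#!/usr/bin/env python3
"""suvx: hypothetical "safe uvx" wrapper (sketch only)."""
import difflib
import subprocess
import sys
from pathlib import Path

# Hypothetical allowlist of tool names you trust running via uvx.
ALLOWLIST = Path.home() / ".config" / "suvx" / "allowlist.txt"


def main() -> int:
    if len(sys.argv) < 2:
        print("usage: suvx TOOL [ARGS...]", file=sys.stderr)
        return 2

    tool = sys.argv[1]
    allowed = set()
    if ALLOWLIST.exists():
        allowed = {line.strip() for line in ALLOWLIST.read_text().splitlines() if line.strip()}

    if tool not in allowed:
        # A near-miss of a known-good name is exactly the typo-squatting case.
        close = difflib.get_close_matches(tool, list(allowed), n=1, cutoff=0.8)
        if close:
            print(f"refusing: {tool!r} looks like a typo of {close[0]!r}", file=sys.stderr)
            return 1
        answer = input(f"{tool!r} is not allowlisted; run it anyway? [y/N] ")
        if answer.strip().lower() != "y":
            return 1

    # Hand off to the real uvx unchanged.
    return subprocess.call(["uvx", tool, *sys.argv[2:]])


if __name__ == "__main__":
    sys.exit(main())
```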

stavros 6 hours ago||
I generally tend to let the shell autocomplete, so I don't type it out every time, but I see your point. If I use a program more than once or twice, I install it.
gordonhart 19 hours ago|||
`poetry install` on my dayjob’s monolith took about 2 minutes, `uv sync` takes a few seconds. Getting 2 minutes back on every CI job adds up to a lot of time saved
rsyring 19 hours ago|||
As a multi decade Python user, uv's speed is "life changing". It's a huge devx improvement. We lived with what came before, but now that I have it, I would never want to go back and it's really annoying to work on projects now that aren't using it.
recov 19 hours ago|||
Docker builds are a big one, at least at my company. Any tool that reduces wait time is worth using, and uv is an amazing tool that removes that wait time. I take it you might not use Python much, as uv solves almost every pain point and is fast, which feels rare.
riazrizvi 3 hours ago|||
Probably 90% of ppl commenting here are focused on managing their own Python installs and mostly don’t care about speed. uv seems to be designed for enterprise, for IT management of company wide systems, and this post is, I’m guessing, a little promotional astroturfing. For most of us, uv solves a low priority problem.
VorpalWay 17 hours ago|||
CI: I changed a pipeline at work from pip and pipx to uv, and it saved 3 minutes on a 7-minute pipeline. Given how oversubscribed our runners are, anything saving time is a big help.

It is also really nice when working interactively to have snappy tools that don't take you out of the flow more than absolutely necessary. But then I'm quite sensitive to this; I'm one of those people who turn off all GUI animations because they waste my time and make the system feel slow.

nojs 1 hour ago|||
It’s a major factor in build times for Django containers for example.
zahlman 17 hours ago|||
It's not just about delays being "material"; waiting on the order of seconds for a venv creation (and knowing that this is because of pip bootstrapping itself, when it should just be able to install cross-environment instead of having to wait until 2022 for an ugly, limited hack to support that) is annoying.

But small efficiencies do matter; see e.g. https://danluu.com/productivity-velocity/.

pseudosavant 18 hours ago|||
I avoided Python for years, especially because of package and environment management. Python is now my go to for projects since discovering uv, PEP 723 metadata, and LLMs’ ability to write Python.
toenail 19 hours ago|||
The speed is nice, but I switched because uv supports "pip compile" from pip-tools, and it is better at resolving dependencies. Also, pip-tools uses (used?) internal pip methods and breaks frequently because of that; uv doesn't.
SatvikBeri 18 hours ago|||
Setting up a new dev instance took 2+ hours with pip at my work. Switching to uv dropped the Python portion down to <1 minute, and the overall setup to 20 minutes.

A similar, but less drastic speedup applied to docker images.

optionalsquid 15 hours ago|||
Speed is one of the main reasons why I keep recommending uv to people I work with, and why I initially adopted it: setting up a venv and installing requirements became so much faster. Replacing pipx, and using `uv run` for single-file scripts with external dependencies, were additional reasons. With nox adding uv support, it also became much easier and much faster to test across multiple versions of Python.
morshu9001 14 hours ago|||
One weird case where this mattered to me: I wanted pip to backtrack to find compatible versions of a set of deps, and it wasn't done after waiting a whole hour. uv did the same thing in 5 minutes. This might be kinda common given how many Python repos out there don't have pinned versions in requirements.txt.
patrick91 18 hours ago|||
for me it's being able to do `uv run whatever` and always know I have the correct dependencies

(also switching python version is so fast)

pants2 19 hours ago|||
The biggest benefit is in CI environments and Docker images and the like where all packages can get reinstalled on every run.
scotty79 3 hours ago|||
For me speed was irrelevant; however, uv was the first Python project manager with a tolerable UI that I encountered. I had never before done any serious development in Python because I just refused to deal with venvs, requirements.txt, and whatever. When a script used a dependency or another Python version, I installed it system-wide. uv is perfectly usable, borderline pleasant. But I'm sure the speed helps.
ExoticPearTree 16 hours ago|||
Build jobs where you have a lot of dependencies. Those GHA minutes go brrrr.
IshKebab 18 hours ago|||
Do you still remain baffled after the many replies that people actually do like their tooling to be not dog slow like pip is?
blibble 15 hours ago|||
conda can take an hour to tell you your desired packages are unsatisfiable

That said, other than the solver, most of what uv does is always going to be IO bound

curiousgal 10 hours ago||
People criticising conda's solver prove they haven't used it in years.
optionalsquid 2 hours ago||
You can also use pixi[1] if you want conda with uv's solver, which does appear to be faster than the mamba solver. Though the main reasons I recommend pixi are that it doesn't have a tendency to break random stuff due to polluting your environment by default, and that it does a much better job of making your environments reproducible, among other benefits.

[1] https://pixi.sh/

adammarples 18 hours ago||
It's annoying. Do you use poetry? Pipenv? It's annoying.
didibus 18 hours ago||
There's an interesting psychology at play here as well: if you are a programmer who chooses a "fast language", it's indicative of your priorities already. It's often not so much the language as that the programmer has decided to optimize for performance from the get-go.
yjftsjthsd-h 20 hours ago||
> No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install. You can opt in if you want it.

Are we losing out on performance of the actual installed thing, then? (I'm not 100% clear on .pyc files TBH; I'm guessing they speed up start time?)

woodruffw 20 hours ago||
No, because Python itself will generate bytecode for packages once you actually import them. uv just defers that to first-import time, but the cost is amortized in any setting where imports are performed over multiple executions.
yjftsjthsd-h 19 hours ago|||
That sounds like yes? Instead of doing it once at install time, it's done once at first use. It's only once so it's not persistently slower, but that is a perf hit.

My first cynical instinct is to say that this is uv making itself look better by deferring the costs to the application, but it's probably a good trade-off if any significant percentage of the files being compiled might not be used ever so the overall cost is lower if you defer to run time.

VorpalWay 17 hours ago|||
I think they are making the bet that most modules won't be imported. For example if I install scipy, numpy, Pillow or such: what are the chances that I use a subset of the modules vs literally all of them?

I would bet on a subset for pretty much any non-trivial package (i.e. larger than one or two user-facing modules). And for those trivial packages? Well, they are usually small, so the cost is small as well. I'm sure there are exceptions: maybe a single gargantuan module that consists of autogenerated FFI bindings for some C library or such, but that is likely the minority.

woodruffw 19 hours ago||||
> It's only once so it's not persistently slower, but that is a perf hit.

Sure, but you pay that hit either way. Real-world performance is always usage based: the assumption that uv makes is that people run (i.e. import) packages more often than they install them, so amortizing at the point of the import machinery is better for the mean user.

(This assumption is not universal, naturally!)

dddgghhbbfblk 19 hours ago||
Ummm, your comment is backwards, right?
woodruffw 19 hours ago||
Which part? The assumption is that when you `$TOOL install $PACKAGE`, you run (i.e. import) `$PACKAGE` more than you re-install it. So there's no point in slowing down (relatively less common) installation events when you can pay the cost once on import.

(The key part being that 'less common' doesn't mean a non-trivial amount of time.)

dddgghhbbfblk 17 hours ago||
Why would you want to slow down the more common thing instead of the less common thing? I'm not following that at all. That's why I asked if that's backwards.
woodruffw 12 hours ago||
Because you only slow down the more common thing once, and the less common thing is slower in absolute terms.
lillecarl 3 hours ago||
uv optimizes for the common use case: you install packages more often than you import a newly installed package for the first time.
beacon294 19 hours ago||||
Probably for any case where an actual human is doing it. On an image you obviously want to do it at bake time, so I feel default off with a flag would have been a better design decision for pip.

I just read the thread and use Python, I can't comment on the % speedup attributed to uv that comes from this optimization.

Epa095 19 hours ago|||
Images are a good example where doing it at install time is probably best, yeah, since every run of the image starts 'fresh', losing the compilation which happened the last time the image was started.

If it was an optional toggle, it would probably become best practice to activate compilation in Dockerfiles.

zahlman 9 hours ago|||
> On an image you obviously want to do it at bake time

It seems like tons of people are creating container images with an installer tool and having it do a bunch of installations, rather than creating the image with the relevant Python packages already in place. Hard to understand why.

For that matter, a pre-baked Python install could do much more interesting things to improve import times than just leaving a forest of `.pyc` files in `__pycache__` folders all over the place.

tedivm 18 hours ago||||
You can change it to compile the bytecode on install with a simple environment variable (which you should do when building docker containers if you want to sacrifice some disk space to decrease initial startup time for your app).
saidnooneever 19 hours ago|||
You are right. It depends on how often this first start happens, and whether it's bad or not. For most use cases, I'd guess (total guess, I have limited experience with Python projects professionally) it's not an issue.
zmgsabst 13 hours ago|||
That’s actually a negative:

My Docker build generating the byte code saves it to the image, sharing the cost at build time across all image deployments — whereas, building at first execution means that each deployed image instance has to generate its own bytecode!

That’s a massive amplification, on the order of 10-100x.

“Well just tell it to generate bytecode!”

Sure — but when is the default supposed to be better?

Because this sounds like a massive footgun for a system where requests >> deploys >> builds. That is, every service I’ve written in Python for the last decade.

hauntsaninja 18 hours ago|||
Yes, uv skipping this step is a one-time but significant hit to startup time. E.g. if you're building a Dockerfile, I'd recommend setting `--compile-bytecode` / `UV_COMPILE_BYTECODE`.
salviati 19 hours ago|||
Historically, the practice of producing .pyc files on install started with system-wide installed packages, I believe, when the user running the program might lack privileges to write them. If the installer can write the .py files, it can also write the .pyc, while the user running them might not be able to in that location.
thundergolfer 18 hours ago|||
This optimization hits serverless Python the worst. At Modal we ensure users of uv are setting UV_COMPILE_BYTECODE to avoid the cold start penalty. For large projects .pyc compilation can take hundreds of milliseconds.
zahlman 17 hours ago|||
> I'm not 100% clear on .pyc files TBH; I'm guessing they speed up start time?

They do.

> Are we losing out on performance of the actual installed thing, then?

When you consciously precompile Python source files, you can parallelize that process. When you `import` from a `.py` file, you only get that benefit if you somehow coincidentally were already set up for `multiprocessing` and happened to have your workers trying to `import` different files at the same time.

plorkyeran 19 hours ago||
If you have a dependency graph large enough for this to be relevant, it almost certainly includes a large number of files which are never actually imported. At worst the hit to startup time will be equal to the install time saved, and in most cases it'll be a lot smaller.
zahlman 8 hours ago||
> a large number of files which are never actually imported

Unfortunately, it typically doesn't work out as well as you might expect, especially given the expectation of putting `import` statements at the top of the file.

bastawhiz 18 hours ago||
> When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.

This is kind of fascinating. I've never considered runtime upper bound requirements. I can think of compelling reasons for lower bounds (dropping version support) or exact runtime version requirements (each version works for exact, specific CPython versions). But now that I think about it, it seems like upper bounds solve a hypothetical problem that you'd never run into in practice.

If PSF announced v4 and declared a set of specific changes, I think this would be reasonable. In the 2/3 era it was definitely reasonable (even necessary). Today though, it doesn't actually save you any trouble.
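To make the "defensive, not predictive" point concrete, here's a small illustration using the `packaging` library (what pip uses for these specifiers; uv has its own Rust implementation, so this is only the idea, not uv's code). For every released interpreter, the capped and uncapped requires-python accept exactly the same versions, so dropping the cap can't change the result, only the resolver's workload:

```python
from packaging.specifiers import SpecifierSet

declared = SpecifierSet(">=3.8,<4.0")   # the typical defensive requires-python
relaxed = SpecifierSet(">=3.8")         # what treating the cap as meaningless amounts to

# The two sets agree on every CPython release that actually exists.
for version in ["3.8", "3.9", "3.10", "3.11", "3.12", "3.13", "3.14"]:
    assert (version in declared) == (version in relaxed)

# They only diverge for a hypothetical Python 4.x, which is the point:
# the upper bound can't change today's resolution result, only its cost.
print("identical for every released CPython version")
```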

wging 18 hours ago||
I think the article is being careful not to say uv ignores _all_ upper bound checks, but specifically 4.0 upper bound checks. If a package says it requires python < 3.0, that's still super relevant, and I'd hope for uv to still notice and prevent you from trying to import code that won't work on python 3. Not sure what it actually does.
breischl 18 hours ago||
I read the article as saying it ignores all upper-bounds, and 4.0 is just an example. I could be wrong though - it seems ambiguous to me.

But if we accept that it currently ignores any upper-bounds checks greater than v3, that's interesting. Does that imply that once Python 4 is available, uv will slow down due to needing to actually run those checks?

cmrx64 3 hours ago|||
That would deliver a blow to the integrity of the rest of that section because those sorts of upper bound constraints immediately reducible to “true” cannot cause backtracking of any kind.
VorpalWay 17 hours ago|||
Are there any plans to actually make a 4.0 ever? I remember hearing a few years ago that after the transition to 3.0, the core devs kind of didn't want to repeat that mess ever again.

That said, even if it does happen, I highly doubt that is the main part of the speed up compared to pip.

zahlman 9 hours ago||
There are indeed not any such plans.
unethical_ban 13 hours ago||
The problem: The specification is binary. Are you compatible or not?

That is unanswerable right now: whether a Python package will be compatible with a version that has not yet been released.

Having an ENUM like [compatible, incompatible, untested] at the least would fix this.
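A toy sketch of what that could look like (purely hypothetical; no such field exists in Python packaging metadata today):

```python
from enum import Enum


class Compatibility(Enum):
    COMPATIBLE = "compatible"      # tested and known to work
    INCOMPATIBLE = "incompatible"  # tested and known to break
    UNTESTED = "untested"          # no claim either way


# A hypothetical per-version declaration a resolver could treat as a soft constraint:
python_compat = {"3.12": Compatibility.COMPATIBLE, "3.14": Compatibility.UNTESTED}
```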

est 14 hours ago||
> Virtual environments required

This has bothered me more than once when building a base Docker image. Why would I want a venv inside a Docker image with root?

pornel 13 hours ago||
The old package managers messing up the global state by default is the reason why Docker exists. It's the venv for C.
forrestthewoods 13 hours ago||
Because a single docker image can run multiple programs that have mutually exclusive dependencies?

Personally I never want program to ever touch global shared libraries ever. Yuck.

est 13 hours ago||
> a single docker image can run multiple programs

You absolutely can. But it's not best practice.

https://docs.docker.com/engine/containers/multi-service_cont...

forrestthewoods 12 hours ago||
God I hate docker so much. Running computers does not have to be so bloody complicated.
vjay15 5 hours ago||
Amazing how much Python's pip was bottlenecked; it was a basic design problem, damn.
robertclaus 17 hours ago||
At Plotly we did a decent amount of benchmarking to see how much the different defaults `uv` uses contribute to its performance. This was necessary so we could advise our enterprise customers on the transition. We found you lost almost all of the speed gains if you configured uv to behave as much like pip as you could. A trivial example is the precompile flag, which can easily be 50% of pip's install time for a typical data science venv.

https://plotly.com/blog/uv-python-package-manager-quirks/

zahlman 17 hours ago|
The precompilation thing was brought up to the uv team several months ago IIRC. It doesn't make as much of a difference for uv as for pip, because when uv is told to pre-compile it can parallelize that process. This is easily done in Python (the standard library even provides rudimentary support, which Python's own Makefile uses); it just isn't in pip yet (I understand it will be soon).
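For reference, the stdlib support mentioned here is `compileall`, which can fan the byte-compilation out across processes. A minimal sketch (the site-packages path is just a placeholder):

```python
import compileall
import sys

# Placeholder path; point this at the environment you want to precompile.
site_packages = sys.argv[1] if len(sys.argv) > 1 else "venv/lib/python3.12/site-packages"

# workers=0 means "use one process per CPU"; quiet=1 suppresses the per-file listing.
ok = compileall.compile_dir(site_packages, workers=0, quiet=1)
sys.exit(0 if ok else 1)
```

The command-line equivalent is `python -m compileall -j 0 <dir>`.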
annexrichmond 10 hours ago||
> This reduces resolver backtracking dramatically since upper bounds are almost always wrong.

I am surprised by this because Python minor versions break backwards compatibility all the time. Our company for example is doing a painful upgrade from py39 to py311

zahlman 9 hours ago|
Could you explain what major pain points you've encountered? I can't think of any common breakages cited in 3.10 or 3.11 offhand. 3.12 had a lot more standard library removals, and the `match` statement introduced in 3.10 uses a soft keyword and won't break code that uses `match` as an identifier.
simonw 17 hours ago|
This post is excellent. I really like reading deep dives like this that take a complex system like uv and highlight the unique design decisions that make it work so well.

I also appreciate how much credit this gives the many previous years of Python standards processes that enabled it.

Update: I blogged more about it here, including Python recreations of the HTTP range header trick it uses and the version comparison via u64 integers: https://simonwillison.net/2025/Dec/26/how-uv-got-so-fast/
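For anyone curious before clicking through, the gist of the u64 trick is packing version components into fixed-width bit fields so that ordering reduces to a single integer comparison. A toy sketch (my simplification, not uv's actual layout, which also has to handle pre/post/dev releases and fall back when a version doesn't fit):

```python
def pack_version(version: str) -> int:
    """Pack up to four numeric release components into 16-bit fields."""
    parts = [int(p) for p in version.split(".")]
    assert len(parts) <= 4 and all(p < 2**16 for p in parts), "outside the fast path"
    parts += [0] * (4 - len(parts))  # pad so "3.12" compares equal to "3.12.0"
    packed = 0
    for part in parts:
        packed = (packed << 16) | part
    return packed


assert pack_version("3.12.1") > pack_version("3.9.18")   # naive string comparison gets this wrong
assert pack_version("3.12") == pack_version("3.12.0")
```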

More comments...