What I want from a package manager is that it just works.
That's what I mostly like about uv.
Many of the changes that made speed possible were to reduce the complexity and thus the likelihood of things not working.
What I don't like about uv (or pip or many other package managers) is that the programmer isn't given a clear mental model of what's happening, and thus of how to fix the inevitable problems. Better error messages are good, but it's rare that they can provide specific fixes. So even if you get 99% speed, you end up with 1% perplexity and diagnostic black boxes.
To me the time that matters most is time to fix problems that arise.
This is a priority for PAPER; it's built on a lower-level API so that programmers can work within a clear mental model, and I will be trying my best to communicate well in error messages.
Edit to add: I use Python daily.
uv has been a delight to use
I'd characterize that as unusable, for sure.
I do use it on CI/CD pipelines, but I wouldn't dare type uvx commands myself on a daily basis.
I'm a big fan of uv, but don't like that part of uvx.
(makes me wonder if a small wrapper can do this - safe uvx, or suvx for short)
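If anyone wants to play with that idea, here's a minimal sketch of such a wrapper, assuming "safe" just means echoing the exact invocation and asking for confirmation before handing off to uvx (the name suvx and the confirmation check are hypothetical, nothing more):

```python
#!/usr/bin/env python3
# Hypothetical "suvx": show exactly what is about to run and ask for
# confirmation before handing the arguments through to the real uvx.
import os
import shlex
import sys

args = sys.argv[1:]
if not args:
    sys.exit("usage: suvx <command> [args...]")

print("about to run:", "uvx", shlex.join(args))
if input("proceed? [y/N] ").strip().lower() != "y":
    sys.exit("aborted")

os.execvp("uvx", ["uvx", *args])  # replace this process with uvx
```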
It is also really nice when working interactively to have snappy tools that don't take you out of the flow more than absolutely necessary. But then I'm quite sensitive to this; I'm one of those people who turn off all GUI animations because they waste my time and make the system feel slow.
But small efficiencies do matter; see e.g. https://danluu.com/productivity-velocity/.
A similar, but less drastic speedup applied to docker images.
(also, switching Python versions is so fast)
That said, other than the solver, most of what uv does is always going to be I/O bound.
Are we losing out on performance of the actual installed thing, then? (I'm not 100% clear on .pyc files TBH; I'm guessing they speed up start time?)
My first cynical instinct is to say that this is uv making itself look better by deferring the costs to the application, but it's probably a good trade-off: if a significant percentage of the files being compiled might never be used, the overall cost is lower when you defer to run time.
I would bet on a subset for pretty much any non-trivial package (i.e. larger than one or two user-facing modules). And for those trivial packages? Well, they are usually small, so the cost is small as well. I'm sure there are exceptions: maybe a single gargantuan module that consists of autogenerated FFI bindings for some C library or such, but that is likely the minority.
Sure, but you pay that hit either way. Real-world performance is always usage based: the assumption that uv makes is that people run (i.e. import) packages more often than they install them, so amortizing at the point of the import machinery is better for the mean user.
(This assumption is not universal, naturally!)
(The key part being that 'less common' doesn't mean a non-trivial amount of time.)
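To make the amortization concrete, here's a minimal sketch (using a hypothetical throwaway module, demo_mod) showing that CPython compiles a `.py` file on its first import, caches the bytecode under `__pycache__`, and reuses that cache on later runs, which is exactly the start-up saving being discussed:

```python
# Create a hypothetical throwaway module, import it once, and observe that the
# compiled bytecode now sits in __pycache__, ready to be reused on later runs.
import pathlib
import sys
import time

pathlib.Path("demo_mod.py").write_text("x = sum(range(1000))\n")
sys.path.insert(0, ".")  # make the current directory importable

t0 = time.perf_counter()
import demo_mod  # first import: parse + compile + write __pycache__/demo_mod.*.pyc
t1 = time.perf_counter()

print("cached bytecode:", list(pathlib.Path("__pycache__").glob("demo_mod.*.pyc")))
print(f"first import took {t1 - t0:.6f}s; later interpreter runs load the .pyc instead")
```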
I've only just read the thread and I merely use Python, so I can't comment on how much of the speedup attributed to uv comes from this optimization.
If it were an optional toggle, it would probably become best practice to activate compilation in Dockerfiles.
It seems like tons of people are creating container images with an installer tool and having it do a bunch of installations, rather than creating the image with the relevant Python packages already in place. Hard to understand why.
For that matter, a pre-baked Python install could do much more interesting things to improve import times than just leaving a forest of `.pyc` files in `__pycache__` folders all over the place.
My Docker build generating the byte code saves it to the image, sharing the cost at build time across all image deployments — whereas, building at first execution means that each deployed image instance has to generate its own bytecode!
That’s a massive amplification, on the order of 10-100x.
“Well just tell it to generate bytecode!”
Sure — but when is the default supposed to be better?
Because this sounds like a massive footgun for a system where requests >> deploys >> builds. That is, every service I’ve written in Python for the last decade.
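To put rough numbers on that amplification, here's a back-of-the-envelope sketch; every figure in it is an illustrative assumption, not a measurement:

```python
# Every number here is an illustrative assumption, not a measurement.
compile_seconds     = 5    # time to byte-compile the dependency tree once
builds_per_day      = 2    # image builds per day
instances_per_build = 50   # containers started from each built image

cost_if_compiled_at_build     = builds_per_day * compile_seconds
cost_if_compiled_at_first_run = builds_per_day * instances_per_build * compile_seconds

print(cost_if_compiled_at_build, "s/day vs", cost_if_compiled_at_first_run, "s/day")
print("amplification:", cost_if_compiled_at_first_run // cost_if_compiled_at_build, "x")
```

With these made-up numbers the amplification is simply the number of instances per build, which is how you land in that 10-100x range.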
They do.
> Are we losing out on performance of the actual installed thing, then?
When you consciously precompile Python source files, you can parallelize that process. When you `import` from a `.py` file, you only get that benefit if you somehow coincidentally were already set up for `multiprocessing` and happened to have your workers trying to `import` different files at the same time.
Unfortunately, it typically doesn't work out as well as you might expect, especially given the expectation of putting `import` statements at the top of the file.
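For anyone who hasn't done it, deliberate parallel precompilation is a few lines with the standard library's `compileall`; the site-packages path below is an assumption about your layout:

```python
# Precompile an installed tree up front across several worker processes,
# instead of paying the compile cost serially at first import.
import compileall
import os

ok = compileall.compile_dir(
    ".venv/lib/python3.12/site-packages",  # hypothetical install location
    quiet=1,                       # report errors only
    workers=os.cpu_count() or 1,   # fan out across CPU cores
)
print("all files compiled successfully:", bool(ok))
```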
This is kind of fascinating. I've never considered runtime upper bound requirements. I can think of compelling reasons for lower bounds (dropping version support) or exact runtime version requirements (each version works for exact, specific CPython versions). But now that I think about it, it seems like upper bounds solve a hypothetical problem that you'd never run into in practice.
If PSF announced v4 and declared a set of specific changes, I think this would be reasonable. In the 2/3 era it was definitely reasonable (even necessary). Today though, it doesn't actually save you any trouble.
But if we accept that it currently ignores any upper-bounds checks greater than v3, that's interesting. Does that imply that once Python 4 is available, uv will slow down due to needing to actually run those checks?
That said, even if it does happen, I highly doubt that is the main part of the speed up compared to pip.
Whether a Python package will be compatible with a version that hasn't been released yet is simply unanswerable right now.
Having an ENUM like [compatible, incompatible, untested] at the least would fix this.
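For reference, this is roughly what a conventional requires-python check looks like with the `packaging` library (i.e. pip-style behaviour, not a claim about uv's internals):

```python
# Evaluate a hypothetical requires-python range the conventional way:
# the upper bound excludes a future 4.0 even though no such release exists yet.
from packaging.specifiers import SpecifierSet

requires_python = SpecifierSet(">=3.9,<4")  # hypothetical package metadata

for candidate in ("3.8", "3.12", "4.0"):
    print(candidate, "->", candidate in requires_python)
# 3.8 -> False, 3.12 -> True, 4.0 -> False
```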
This has bothered me more than once when building a base Docker image. Why would I want a venv inside a Docker container running as root?
Personally, I never want a program to touch global shared libraries, ever. Yuck.
You absolutely can. But it's not best practice.
https://docs.docker.com/engine/containers/multi-service_cont...
I am surprised by this because Python minor versions break backwards compatibility all the time. Our company, for example, is doing a painful upgrade from py39 to py311.
I also appreciate how much credit this gives the many previous years of Python standards processes that enabled it.
Update: I blogged more about it here, including Python recreations of the HTTP range header trick it uses and the version comparison via u64 integers: https://simonwillison.net/2025/Dec/26/how-uv-got-so-fast/
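As a taste of the second trick, here's a hedged recreation of the pack-the-version-into-one-integer idea; the three 16-bit fields are my assumption for illustration, not necessarily the exact layout uv uses:

```python
def pack_version(major: int, minor: int, patch: int) -> int:
    """Pack three release segments into one integer so that ordinary integer
    comparison sorts versions correctly. The 16-bit field widths are an
    assumed layout for illustration only."""
    assert all(0 <= part < 2**16 for part in (major, minor, patch))
    return (major << 32) | (minor << 16) | patch

# Comparing versions becomes a single integer comparison.
assert pack_version(3, 12, 1) > pack_version(3, 9, 18)
assert pack_version(4, 0, 0) > pack_version(3, 65535, 65535)
```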