
I never used anything other than pip. I never felt the need to use anything other than pip (with virtualenv). Am I missing anything?


Couple of things.

- pip doesn't handle your Python executable, just your Python dependencies. So if you want/need to swap between Python versions (3.11 to 3.12, for example), it doesn't give you anything. Generally people use an additional tool such as pyenv to manage this. Tools like uv and Poetry do this as well as handling dependencies.

- pip doesn't resolve dependencies of dependencies. pip will only respect version pinning for dependencies you explicitly specify. So for example, say I am using pandas and I pin it to version X. If a dependency of pandas (say, numpy) isn't pinned as well, the underlying version of numpy can still change when I reinstall dependencies. I've had many issues where my environment stopped working despite none of my specified dependencies changing, because underlying dependencies introduced breaking changes. To get around this with pip you would need an additional tool like pip-tools, which allows you to pin all dependencies, explicit and nested, to a lock file for true reproducibility. uv and poetry do this out of the box.

- Tool usage. Say there is a python package you want to use across many environments without installing in the environments themselves (such as a linting tool like ruff). With pip, you need to install another tool like pipx to install something that can be used across environments. uv can do this out of the box.
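For concreteness, the two workflows look roughly like this (assuming the tools are installed; `ruff` is just an example):

```
  pipx install ruff        # pip world: isolated venv plus a shim on your PATH
  uv tool install ruff     # uv: same idea, built in
  uvx ruff check .         # uv can also run a tool ad hoc without installing it
```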

Plus there is a whole host of jobs that tools like uv and poetry aim to assist with that pip doesn't, namely project creation and management. You can use uv to create a new Python project scaffolding for applications or python modules in a way that conforms with PEP standards with a single command. It also supports workspaces of multiple projects that have separate functionality but require dependencies to be in sync.
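As a sketch, that project workflow looks something like this (command names as I understand uv's current docs; check before relying on them):

```
  uv init myapp            # scaffold pyproject.toml and a stub module
  cd myapp
  uv python pin 3.12       # record the interpreter version in .python-version
  uv add requests          # add a dependency and update uv.lock
  uv run main.py           # create/sync .venv, then run
```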

You can accomplish a lot/all of this using pip with additional tooling, but it's a lot more work. And not all use cases will require these.


Yes, generally people already use an additional tool for managing their Python executables, like their operating system's package manager:

  $> sudo apt-get install python3.10 python3.11 python3.12
And then it's simple to create and use version-specific virtual environments:

  $> python3.11 -m venv .venv3.11
  $> source .venv3.11/bin/activate
  $> pip install -r requirements.txt
You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment. In fact, this behavior is made more difficult by tools like `uv` or `pipx` if they're trying to manage Python executables as well as dependencies.


> sudo apt-get install python3.10 python3.11 python3.12

This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.

> You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment.

True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.

And it gets even more complex if you need different tools that have different Python version requirements.


>This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.

And of course you could be working with multiple distros and versions of the same distro, production and dev might be different environments, and there are tons of other concerns. You need something that just works across all of them.


Surely you just use Docker for production, right?


You almost need to use Docker for deploying Python because the tooling is so bad that it's otherwise very difficult to get a reproducible environment. For many other languages the tooling works well enough that there's relatively little advantage to be had from Docker (although you can of course still use it).


And how do you know everything is ok when you build your new docker image?


>> You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment.

>True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.

"Globally" means installed with sudo. These are installed into the user folder under ~/.local/ and called a user install by pip.

I wouldn't call it "extremely brittle" either. It works fine until you upgrade to a new version of python, in which case you install the package again. Happens once a year perhaps.

The good part of this is that unused cruft will get left behind and then you can delete old folders in ~/.local/lib/python3.? etc. I've been doing this over a decade without issue.


> "Globally" means installed with sudo. These are installed into the user folder under ~/.local/ and called a user install by pip.

> It works fine until you upgrade to a new version of python, in which case you install the package again.

Debian/Ubuntu doesn't want you to do either, and tells you you'll break your system if you force it (the override flag is literally named "--break-system-packages"). Hell, if you're doing it with `sudo`, they're probably right - messing with the default Python installation (such as trying to upgrade it) is the quickest way to brick your Debian/Ubuntu box.

Incredibly annoying when your large project happens to use pip to install both libraries for the Python part, and tools like CMake and Conan, meaning you can't just put it all in a venv.


Not Debian specific. The braindead option was added by pip to scare off newbies.

No one with the most basic of sysad skills is "bricked" by having to uninstall a library. Again, I have not experienced a conflict in over 15 years.

Use the system package manager or build it yourself for tools like cmake.


Uninstalling a library - no. But I specifically mentioned trying to upgrade system Python, which is a quick way to break e.g. apt.


Ok, getting it now. I said upgrade python, and you thought I meant upgrade the system python in conflict with the distro. But that's not really what I meant. To clarify... I almost never touch the system python, but I upgrade the distro often. Almost every Ubuntu/Mint has a new system Python version these days.

So upgrade to new distro release, it has a new Python. Then pip install --user your user tools, twine, httpie, ruff, etc. Takes a few moments, perhaps once a year.

I do the same on Fedora, which I've been using more lately.


Nah, pip is still brittle here because it uses one package resolution context to install all your global tools. So if there is a dependency clash you are out of luck.

So that's why pipx was required, or now, UV.


Not happened in the last fifteen years, never used pipx. See my other replies.


> It works fine until you upgrade to a new version of python, in which case you install the package again.

Or you install a second global tool that depends on an incompatible version of a library.


Never happened, and exceedingly unlikely to because your user-wide tools should be few.


> exceedingly unlikely to because your user-wide tools should be few.

Why "should"? I think it's the other way around - Python culture has shied away from user-wide tools because it's known that they cause problems if you have more than a handful of them, and so e.g. Python profilers remain very underdeveloped.


There are simply few, I don't shy away from them. Other than tools replaced by ruff, httpie, twine, ptpython, yt-dlp, and my own tools I don't need anything else. Most "user" tools are provided by the system package manager.

All the other project-specific things go in venvs where they belong.

This is all a non-issue despite constant "end of the world" folks who never learned sysadmin and are terrified of an error.

If libraries conflict, uninstall them and put them in a venv. Why do all the work up front? I haven't had to do that in so long I forget how long it was. Early this century.


> This is all a non-issue despite constant "end of the world" folks who never learned sysadmin and are terrified of an error.

It's not a non-issue. Yes it's not a showstopper, but it's a niggling drag on productivity. As someone who's used to the JVM but currently having to work in Python, everything to do with package management is just harder and more awkward than it needs to be (and every so often you just get stuck and have to rebuild a venv or what have you) and the quality of tooling is significantly worse as a result. And uv looks like the first of the zillions of Python package management tools to actually do the obvious correct thing and not just keep shooting yourself in the foot.


It’s not a drag if you ignore it and it doesn’t happen even once a decade.

Still I’m looking forward to uv because I’ve lost faith in pypa. They break things on purpose and then say they have no resources to fix it. Well they had the resources to break it.

But this doesn’t have much to do with installing tools into ~/.local.


> pip doesn't resolve dependencies of dependencies.

This is simply incorrect. In fact the reason it gets stuck on resolution sometimes is exactly because it resolved transitive dependencies and found that they were mutually incompatible.

Here's an example which will also help illustrate the rest of my reply. I make a venv for Python 3.8, and set up a new project with a deliberately poorly-thought-out pyproject.toml:

  [project]
  name="example"
  version="0.1.0"
  dependencies=["pandas==2.0.3", "numpy==1.17.3"]
I've specified the oldest version of Numpy that has a manylinux wheel for Python 3.8 and the newest version of Pandas similarly. These are both acceptable for the venv separately, but mutually incompatible on purpose.

When I try to `pip install -e .` in the venv, Pip happily explains (granted the first line is a bit strange):

  ERROR: Cannot install example and example==0.1.0 because these package versions have conflicting dependencies.

  The conflict is caused by:
      example 0.1.0 depends on numpy==1.17.3
      pandas 2.0.3 depends on numpy>=1.20.3; python_version < "3.10"

  To fix this you could try to:
  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip to attempt to solve the dependency conflict
If I change the Numpy pin to 1.20.3, that's the version that gets installed. (`python-dateutil`, `pytz`, `six` and `tzdata` are also installed.) If I remove the Numpy requirement completely and start over, Numpy 1.24.4 is installed instead - the latest version compatible with Pandas' transitive specification of the dependency. Similarly if I unpin Pandas and ask for any version - Pip will try to install the latest version it can, and it turns out that the latest Pandas version that declares compatibility with 3.8, indeed allows for fetching 3.8-compatible dependencies. (Good job not breaking it, Pandas maintainers! Although usually this is trivial, because your dependencies are also actively maintained.)

> pip will only respect version pinning for dependencies you explicitly specify. So for example, say I am using pandas and I pin it to version X. If a dependency of pandas (say, numpy) isn't pinned as well, the underlying version of numpy can still change when I reinstall dependencies.

Well, sure; Pip can't respect a version pin that doesn't exist anywhere in your project. If the specific version of Pandas you want says that it's okay with a range of Numpy versions, then of course Pip has freedom to choose one of those versions. If that matters, you explicitly specify it. Other programs like uv can't fix this. They can only choose different resolution strategies, such as "don't update the transitive dependency if the environment already contains a compatible version", versus "try to use the most recent versions of everything that meet the specified compatibility requirements".

> To get around this with pip you would need an additional tool like pip-tools, which allows you to pin all dependencies, explicit and nested, to a lock file for true reproducibility.

No, you just use Pip's options to determine what's already in the environment (`pip list`, `pip freeze` etc.) and pin everything that needs pinning (whether with a Pip requirements file or with `pyproject.toml`). Nothing prevents you from listing your transitive dependencies in e.g. the [project.dependencies] of your pyproject.toml, and if you pin them, Pip will take that constraint into consideration. Lock files are for when you need to care about alternate package sources, checking hashes etc.; or for when you want an explicit representation of your dependency graph in metadata for the sake of other tooling.
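A minimal sketch of that workflow with stock pip and no pip-tools (the lock filename is arbitrary):

```shell
# Freeze whatever is currently installed - transitive dependencies
# included - into a file of exact pins.
python3 -m pip freeze > requirements.lock.txt

# Reproduce that exact set later (or on another machine) with:
#   python3 -m pip install --no-deps -r requirements.lock.txt
```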

> This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.

I have built versions 3.5 through 3.13 inclusive from source and have them installed in /opt and the binaries symlinked in /usr/local/bin. It's not difficult at all.

> True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.

What brittleness are you talking about? There's no reason why the tool needs to run in the same environment as the code it's operating on. You can install it in its own virtual environment, too. Since tools generally are applications, I use Pipx for this (which really just wraps a bit of environment management around Pip). It works great; for example I always have the standard build-frontend `build` (as `pyproject-build`) and the uploader `twine` available. They run from a guaranteed-compatible Python.

And they would if they were installed for the system Python, too. (I just, you know, don't want to do that because the system Python is the system package manager's responsibility.) The separate environments don't matter because the tool's code and the operated-on project's code don't even need to run at the same time, let alone in the same process. In fact, it would make no sense to be running the code while actively trying to build or upload it.

> And it gets even more complex if you need different tools that have different Python version requirements.

No, you just let each tool have the virtual environment it requires. And you can update them in-place in those environments, too.
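A sketch of the per-tool layout being described (paths and the choice of twine are illustrative):

```
  /opt/python3.11/bin/python3.11 -m venv ~/.tool-venvs/twine
  ~/.tool-venvs/twine/bin/pip install twine
  ln -s ~/.tool-venvs/twine/bin/twine ~/.local/bin/twine
  # upgrading in place later:
  ~/.tool-venvs/twine/bin/pip install -U twine
```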


> This is simply incorrect. In fact the reason it gets stuck on resolution sometimes is exactly because it resolved transitive dependencies and found that they were mutually incompatible.

The confusion might be that this used to be a problem with pip. It looks like this changed around 2020, but before then pip would happily install broken versions. Looking it up, this change of resolution happened in a minor release.


You have it exactly, except that Pip 20.3 isn't a "minor release" - since mid-2018, Pip has used quarterly calver, so that's just "the last release made in 2020". (I think there was some attempt at resolving package versions before that, it just didn't work adequately.)


Ah thank you for the correction, that makes sense - it seemed very odd for a minor version release.

I think a lot of people probably have strong memories of all the nonsense that earlier pip versions resulted in, I know I do. I didn't realise this was a more solved problem now as not seeing an infrequent issue is hard to notice.


> Well, sure; Pip can't respect a version pin that doesn't exist anywhere in your project. If the specific version of Pandas you want says that it's okay with a range of Numpy versions, then of course Pip has freedom to choose one of those versions. If that matters, you explicitly specify it

Nearly every other language solves this better than this. What you're suggesting breaks down on large projects.


>Nearly every other language solves this better than this.

"Nearly every other language" determines the exact version of a library to use for you, when multiple versions would work, without you providing any input with which to make the decision?

If you mean "I have had a more pleasant UX with the equivalent tasks in several other programming languages", that's justifiable and common, but not at all the same.

>What you're suggesting breaks down on large projects.

Pinned transitive dependencies are the only meaningful data in a lockfile, unless you have to explicitly protect against supply chain attacks (i.e. use a private package source and/or verify hashes).


IMHO the clear separation between lockfile and deps in other package managers was a direct consequence of people being confused about what requirements.txt should be. It can be both and could be for ages (pip freeze) but the defaults were not conducive to clear separation. If we started with lockfile.txt and dependencies.txt, the world may have looked different. Alas.


The thing is, the distinction is purely semantic - Pip doesn't care. If you tell it all the exact versions of everything to install, it will still try to "solve" that - i.e., it will verify that what you've specified is mutually compatible, and check whether you left any dependencies out.


What's your process for ensuring all members of a large team are using the same versions of libraries in a non trivial python codebase?


If all you need to do is ensure everyone's on the same versions of the libraries - if you aren't concerned with your supply chain, and you can accept that members of your team are on different platforms and thus getting different wheels for the same version, and you don't have platform-specific dependency requirements - then pinned transitive dependencies are all the metadata you need. pyproject.toml isn't generally intended for this, unless what you're developing is purely an application that shouldn't ever be depended on by anyone else or sharing an environment with anything but its own dependencies. But it would work. The requirements.txt approach also works.

If you do have platform-specific dependency requirements, then you can't actually use the same versions of libraries, by definition. But you can e.g. specify those requirements abstractly, see what the installer produces on your platform, and produce a concrete requirement-set for others on platforms sufficiently similar to yours.
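For the platform-specific case, requirement files already support PEP 508 environment markers, so one file can express per-platform pins (package versions here are made up):

```
numpy==1.24.4; sys_platform != "win32"
numpy==1.26.0; sys_platform == "win32"
gnureadline==8.1.2; sys_platform == "darwin"
```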

(I don't know offhand if any build backends out there will translate abstract dependencies from an sdist into concrete ones in a platform-specific wheel. Might be a nice feature for application devs.)

Of course there are people and organizations that have use cases for "real" lockfiles that list provenance and file hashes, and record metadata about the dependency graph, or whatever. But that's about more than just keeping a team in sync.


So you are proposing to manually manage all transitive dependencies?


It’s like a whole post of all the things you’re not supposed to do with Python, nice.


most developers I know do not use the system version of python. We use an older version at work so that we can maximize what will work for customers and don't try to stay on the bleeding edge. I imagine others do want newer versions for features, hence people find products like UV useful


That assumes that you are using a specific version of a specific Linux distribution that happens to ship specific versions of Python that you are currently targeting. That's a big assumption. uv solves this.


(I've just learned about uv, and it looks like I have to pick it up since it performs very well.)

I just use pipx. Install guides suggest it, and it is only one character different from pip.

With Nix, it is very easy to run multiple versions of same software. The path will always be the same, meaning you can depend on versions. This is nice glue for pipx.

My pet peeve with Python and Vim is all these different package managers. Every once in a while a new one is out and I don't know if it will gain momentum. For example, I use Plug now in Vim but notice documentation often refers to different alternatives these days. With Python it is pip, poetry, pip search no longer working, pipx, and now uv (I probably forgot some things).


Pipx is a tool for users to install finished applications. It isn't intended for installing libraries for further development, and you have to hack around it to make that work. (This does gain you a little bit over using Pip directly.)

I just keep separate compiled-from-source versions of Python in a known, logical place; I can trivially create venvs from those directly and have Pip install into them, and pass `--python` to `pipx install`.

>With Python it is pip, poetry, pip search no longer working, pipx, and now uv (I probably forgot some things).

Of this list, only Poetry and Uv are package managers. Pip is, by design, only an installer, and Pipx only adds a bit of environment management to that. A proper package manager also helps you keep track of what you've installed, and either produces some sort of external lock file and/or maintains dependency listings in `pyproject.toml`. But both Poetry and Uv go further beyond that as well, aiming to help with the rest of the development workflow (such as building your package for upload to PyPI).

If you like Pipx, you might be interested in some tips in my recent blog post (https://zahlman.github.io/posts/2025/01/07/python-packaging-...). In particular, if you do need to install libraries, you can expose Pipx's internal copy of Pip for arbitrary use instead of just for updating the venvs that Pipx created.


I just tried uv and the performance blows pip(x) out of the water. No contest, really. I swapped most stuff from pipx to uv (some wouldn't build).


I also tend to use the OS package manager to install other binary dependencies. Pip does the rest perfectly well.


Yeah, venv is really the best way to manage Python environments. In my experience other tools like Conda often create more headaches than they solve.

Sure, venv doesn't manage Python versions, but it's not that difficult to install the version you need system-wide and point your env to it. Multiple Python versions can coexist in your system without overriding the default one. On Ubuntu, the deadsnakes PPA is pretty useful if you need an old Python version that's not in the official repos.
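For example, on Ubuntu (package names vary by Python version and release):

```
  sudo add-apt-repository ppa:deadsnakes/ppa
  sudo apt-get update
  sudo apt-get install python3.8 python3.8-venv
  python3.8 -m venv ~/envs/py38
```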

In the rare case where you need better isolation (like if you have one fussy package that depends on specific system libs, looking at you tensorflow), Docker containers are the next best option.


Sometimes I feel like my up vote doesn't adequately express my gratitude.

I appreciate how thorough this was.


Oh wow, it actually can handle the Python executable? I didn't know that, that's great! Although it's in the article as well, it didn't click until you said it, thanks!


I would avoid using this feature! It downloads a compiled portable python binary from some random github project, not from the PSF. That very same github project recommends against using their binary, as the compilation flags are set for portability over performance. See https://gregoryszorc.com/docs/python-build-standalone/main/


https://github.com/astral-sh/python-build-standalone is by the same people as uv, so it's hardly random. The releases there include ones with profile-guided optimisation and link time optimisation [1], which are used by default for some platforms and Python versions (and work seems underway to make them usable for all [2]). I don't see any recommendation against using their binaries or mention of optimising for portability at the cost of performance on the page you link or the pages linked from it that I've looked at.

[1] https://github.com/astral-sh/uv/blob/main/crates/uv-python/d... (search for pgo)

[2] https://github.com/astral-sh/uv/issues/8015


This must have moved recently! I looked at this around the end of December and it was hosted on https://github.com/indygreg/python-build-standalone/releases which had nothing to do with uv. If you read through the docs now, it still references indygreg and still shows this: https://github.com/indygreg/python-build-standalone so I guess the move has not fully completed yet, but yes, it's a positive change to see uv taking ownership of the builds.


It's not from some random github project, it's from a trusted member of the open source community. Same as other libraries you use and install.

It was used by rye before rye and uv sort of merged and is used by pipx and hatch and mise (and bazel rules_python) https://x.com/charliermarsh/status/1864042688279908459

My understanding is that the problem is that the PSF doesn't publish portable Python binaries (I don't think they even publish any binaries for Linux). Luckily there's some work being done on a PEP for similar functionality from an official source, but that will likely take several years. Gregory has praised the attempt and made suggestions based on his experience. https://discuss.python.org/t/pep-711-pybi-a-standard-format-...

Apparently he had less spare time for open source, and since Astral had been helping with a lot of the maintenance work on the project, he happily transferred ownership to them in December.

https://gregoryszorc.com/blog/2024/12/03/transferring-python... https://astral.sh/blog/python-build-standalone


That makes sense, thanks for sharing these details.


No problem.

That's not to say there aren't downsides. https://gregoryszorc.com/docs/python-build-standalone/main/q... documents them. As an example, I had to add the https://pypi.org/project/gnureadline/ package to a work project that had its own auto-completing shell, because by default the builds replace GNU readline with libedit/editline, and they're far from a drop-in replacement.


I still don't understand why people want separate tooling to "handle the Python executable". All you need to do is have one base installation of each version you want, and then make your venv by running the standard library venv for that Python (e.g. `python3.x -m venv .venv`).


> All you need to do is have one base installation of each version you want

Because of this ^


But any tool you use for the task would do that anyway (or set them up temporarily and throw them away). Python on Windows has a standard Windows-friendly installer, and compiling from source on Linux is the standard few calls to `./configure` and `make` that you'd have with anything else; it runs quite smoothly and you only have to do it once.
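The from-source route really is just the classic autotools dance (version number and prefix here are illustrative):

```
  wget https://www.python.org/ftp/python/3.12.8/Python-3.12.8.tgz
  tar xzf Python-3.12.8.tgz && cd Python-3.12.8
  ./configure --prefix=/opt/python3.12 --enable-optimizations
  make -j"$(nproc)"
  sudo make install   # or "make altinstall" to skip the unversioned python3 links
```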


I need to tell you a secret... I'm a life-long Linux user (since Mandrake!)

Also, I don't have a c compiler installed.


Really? I was told Mint was supposed to be the kiddie-pool version of Linux, but it gave me GCC and a bunch of common dependencies anyway.

(By my understanding, `pyenv install` will expect to be able to run a compiler to build a downloaded Python source tarball. Uv uses prebuilt versions from https://github.com/astral-sh/python-build-standalone ; there is work being done in the Python community on a standard for packaging such builds, similarly to wheels, so that you can just use that instead of compiling it yourself. But Python comes out of an old culture where users expect to do that sort of thing.)


In Debian, the build-essential package is only a recommended dependency of pip. Pyenv obviously wouldn't work without it.


Having to manually install python versions and create venvs is pretty painful compared to say the Rust tooling where you install rustup once, and then it will automatically choose the correct Rust version for each project based on what that project has configured.

UV seems like it provides a lot of that convenience for python.


I'm glad to let uv handle that for me. It does a pretty good job at it!


Lots of reasons. You may want many people to have the same point release. They have early builds without needing to compile from source, and have free-threading (nogil) builds. I think they might even have PGO builds. Not to mention that not all distro releases will have the right python release. Also people want the same tool to handle both python version and venv creation and requirement installation


>Also people want the same tool to handle both python version and venv creation and requirement installation

This is the part I don't understand. Why should it be the same tool? What advantage does that give over having separate tools?


Because it's easier. Because it fits together nicer and more consistently. Also because uv is well written and written in Rust, so all the parts are fast. You can recreate a venv from scratch for every run.

Also, as silly as it is, I actually have a hard time remembering the venv syntax each time.

uv run after a checkout with a lock file and a .python-version file downloads the right python version creates a venv and then installs the packages. No more needing throwaway venvs to get a clean pip freeze for requirements. And I don't want to compile python, even with something helping me compile and keep track of compiles like pyenv a lot can go wrong.

And that assumes an individual project run by someone who understands python packaging. `uv run`, possibly in a wrapper script, will do those things for my team, who don't get packaging as well as I do. Just check in changes, and next time they `uv run` it updates stuff for them.


I guess I will never really understand the aesthetic preferences of the majority. But.

>Because it's easier. Because it fits together nicer and more consistently. Also because uv is well written and written in Rust, so all the parts are fast. You can recreate a venv from scratch for every run.

This is the biggest thing I try to push back on whenever uv comes up. There is good evidence that "written in Rust" has quite little to do with the performance, at least when it comes to creating a venv.

On my 10-year-old machine, creating a venv directly with the standard library venv module takes about 0.05 seconds. What takes 3.2 more seconds on top of that is bootstrapping Pip into it.

Which is strange, in that using Pip to install Pip into an empty venv only takes about 1.7 seconds.

Which is still strange, in that using Pip's internal package-installation logic (which one of the devs factored out as a separate project) to unpack and copy the files to the right places, make the script wrappers etc. takes only about 0.2 seconds, and pre-compiling the Python code to .pyc with the standard library `compileall` module takes only about 0.9 seconds more.

The bottleneck for `compileall`, as far as I can tell, is still the actual bytecode compilation - which is implemented in C. I don't know if uv implemented its own bytecode compilation or just skips it, but it's not going to beat that.

Of course, well thought-out caching would mean it can just copy the .pyc files (or hard-link etc.) from cache when repeatedly using a package in multiple environments.
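The split described above is easy to check for yourself. A quick sketch (timings obviously vary by machine; `--without-pip` skips the ensurepip bootstrap):

```python
# Compare creating a bare venv vs. one that bootstraps pip into itself.
import os
import subprocess
import sys
import tempfile
import time

def time_venv(with_pip: bool) -> float:
    """Create a throwaway venv and return how long it took."""
    with tempfile.TemporaryDirectory() as tmp:
        cmd = [sys.executable, "-m", "venv", os.path.join(tmp, "venv")]
        if not with_pip:
            cmd.append("--without-pip")
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

bare = time_venv(False)  # just directories, pyvenv.cfg and interpreter links
full = time_venv(True)   # plus the pip bootstrap, which dominates the cost
print(f"--without-pip: {bare:.3f}s  with pip: {full:.3f}s")
```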


pip's resolving algorithm is not sound. If your Python projects are really simple it seems to work, but as your projects get more complex the failure rate creeps up over time. You might

   pip install
something and have it fail, then go back to zero, restart, and have it work, but at some point even that will fail. conda has a correct resolving algorithm, but the packages are out of date and add about as many quality problems as they fix.

I worked at a place where the engineering manager was absolutely exasperated with the problems we were having with building and deploying AI/ML software in Python. I had figured out pretty much all the problems after about nine months and had developed a 'wheelhouse' procedure for building our system reliably, but it was too late.

Not long after, I sketched out a system that was a lot like uv, but it was written in Python and thus had problems with maintaining its own stable Python environment (e.g. poetry seems to trash itself every six months or so).

Writing uv in Rust was genius because it gives the system a stable surface to stand on instead of pipping itself into oblivion, never mind that it is much faster than my system would have been. (My system had the extra feature that it used HTTP range requests to extract the metadata from wheel files before PyPI started letting you download the metadata directly.)

I didn't go forward with developing it because I argued with a lot of people who, like you, thought it was "the perfect being the enemy of the good" when it was really "the incorrect being the enemy of the correct." I'd worked on plenty of projects where I was right about the technology and wrong about the politics and I am so happy that uv has saved the Python community from itself.


May I introduce you to our lord and saviour, Nix and its most holy child nixpkgs! With only a small tithing of your sanity and your ability to interop with any other dependency management, you can free yourself of all dependency woes forever!*

[*] For various broad** definitions of forever.

[**] Like, really, really broad***

[***] Maybe a week if you're lucky


Except the Python builders in nixpkgs are really brain-damaged, because of the way they inject search paths, which breaks if you, for example, try to execute a separate Python interpreter assuming the same library environment...


Within the holy church of Nix, the sect of Python is a troubled one. It can, however, be tamed into use via vast tomes of scripture. Sadly these tomes can only be written by those who have truly given their mind and body over to the almighty Nix.


It's not as bad as Common Lisp support, which stinks to high heaven of someone not learning the lessons of the Common-Lisp-Controller fiasco.


Lisp is of the old gods, only the most brave of Nix brethren dare tread upon their parenthesised ways.


Nix is really the best experience I've had with Python package management but only if all the dependencies are already in nixpkgs. If you want to quickly try something off github it's usually a pain in the ass.


>May I introduce you to our lord and saviour, Nix and it's most holy child nixpkgs!

In this case, instead of working with Python, you change how you manage everything!


The Nix of Python, conda, was already mentioned.

> add about as many quality problems as they fix


I used to have 1 problem, then I used Nix to fix it, now I have 'Error: infinite recursion' problems.


Ugh, I hate writing this, but that's where docker and microservices come to the rescue. It's a pain in the butt and inefficient to run, but if you don't care about the overhead (and if you do care, why are you still using Python?), it works.


My experience was that docker was a tool data scientists would use to speedrun the process of finding broken Pythons. For instance we'd inexplicably find a Python had Hungarian as the default charset, etc.

The formula was

   Docker - Discipline = Chaos
but

   Docker + Discipline = Order
If you can write a Dockerfile to install something, you can write a bash script. Circa 2006 I was running web servers on both Linux and Windows with hundreds of web sites on them, with various databases, etc. It really was as simple then as "configure a filesystem path" and "configure a database connection", and I had scripts that could create a site in 30 seconds or so.

Sure today you might have five or six different databases for a site but it's not that different in my mind. Having way too many different versions of things installed is a vice, not a virtue.


> If you can write a Dockerfile to install something you can write a bash script.

Docker is great for making sure that magic bash script that brings the system up actually works again on someone else’s computer or after a big upgrade on your dev machine or whatever.

So many custom build scripts I’ve run into over the years have some kind of unstated dependency on the initial system they were written on, or explicit dependencies on something tricky to install, and as such are really annoying to diagnose later on, especially if they make significant system changes.

Docker is strictly better than a folder full of bash scripts and a Readme.txt. I would have loved having it when I had to operate servers like that with tons of websites running on them. So much nicer to be able to manage dependency upgrades per-site rather than server-wide, invariably causing something to quietly break on one of 200 sites.


Unspoken libc dependencies are my favorite. Granted, you need to wait a few years after launching the project to feel that pain, but once you’re there, the experience is… unforgettable.

Second best are OpenSSL dependencies. I sincerely hope I won’t have to deal with that again.


Unfortunately, sometimes you get to host things not written by you, or things which have existed for a long time, so there's a lot of history involved that prevents keeping it all nice and tidy.

My first production use of kubernetes started out because we put the entirety of what we had to migrate to new hosting into a spreadsheet, with columns for the various parts of the stack used by the websites, and figured we would go insane trying to pack it up - or we would lose the contract because we would be as expensive as the last company.

Could we package it nicely without docker? Yes, but the effort to package it in docker was smaller than packaging it in a way where it wouldn't conflict on a single host, because the simple script becomes way harder when you need to handle multiple versions of the same package, something that most distros do not support at all (these days I think we could have done it with NixOS, but that's a different kettle of deranged fishes).

And then the complexity of managing the stack was quickly made easier by turning each site into a separate artifact (a docker container) handled by k8s manifests (especially when it came to dealing with about 1000 domains across those apps).

So, theoretically, discipline is enough; the practical world is much dirtier, though.


> If you can write a Dockerfile to install something you can write a bash script.

The trick isn't installing things, it's uninstalling them. Docker container is isolated in ways your bash script equivalent is not - particularly when first developing it, when you're bound to make an occasional mistake.


>For instance we'd inexplicably find a Python had Hungarian as the default charset, etc.

Sounds quite explicable: Docker image created by Hungarian devs perhaps?


My understanding is that UTF-8 is the world's charset and that reasonable Hungarians would use it (e.g. I sure don't use us-ascii or iso-latin-1 if I can at all help it; my "better half" reads 中文 so I don't have to, and having it all in UTF-8 makes it easy). The other mystery is how the data scis found it.


IIRC there was some widely used image with many derivatives that redefined locale (the one in Docker Library used POSIX since forever).


>and if you do care, why are you still using Python?

Because I get other advantages of it. Giving in to overhead on one layer, doesn't mean I'm willing to give it up everywhere.


Docker will make it work, but is a heavy solution as it will happily take up GB of your disk. uv is a more efficient and elegant option.


Yes, another sound reason to use microservices. /s


> You might `pip install` something and have it fail and then go back to zero and restart and have it work but at some point that will fail.

Can you give a concrete example, starting from a fresh venv, that causes a failure that shouldn't happen?

> but it was written in Python and thus had problems with maintaining its own stable Python environment

All it has to do is create an environment for itself upon installation which is compatible with its own code, and be written with the capability of installing into other environments (which basically just requires knowing what version of Python it uses and the appropriate paths - the platform and ABI can be assumed to match the tool, because it's running on the same machine).

This is fundamentally what uv is doing, implicitly, by not needing a Python environment to run.

But it's also what the tool I'm developing, Paper, is going to do explicitly.

What's more, you can simulate it just fine with Pip. Of course, that doesn't solve the issues you had with Pip, but it demonstrates that "maintaining its own stable Python environment" is just not a problem.

>Writing uv in Rust was genius because it gives the system a stable surface to stand on instead of pipping itself into oblivion, never mind that it is much faster than my system would have been.

From what I can tell, the speed mainly comes from algorithmic improvements, caching, etc. Pip is just slow, above and beyond anything Python forces on it.

An example. On my system, creating a new venv from scratch with Pip included (which loads Pip from within its own vendored wheel, which then runs in order to bootstrap itself into the venv) takes just over 3 seconds. Making a new venv without Pip, then asking a separate copy of Pip to install an already downloaded Pip wheel would be about 1.7 seconds. But making that venv and using the actual internal installation logic of Pip (which has been extracted by Pip developer Pradyun Gedam as https://github.com/pypa/installer ) would take about 0.25 seconds. (There's no command-line API for this; in my test environment I just put the `installer` code side by side with a driver script, which is copied from my development work on Paper.) It presumably could be faster still.

I honestly have no idea what Pip is doing the rest of that time. It only needs to unzip an archive and move some files around and perform trivial edits to others.

> (My system had the extra feature that it used http range requests to extract the metadata from wheel files before pypi started letting you download the metadata directly.)

Pip has had this feature for a long time (and it's still there - I think to support legacy projects without wheels, because I think the JSON API won't be able to provide the data since PyPI doesn't build the source packages). It's why the PyPI server supports range requests in the first place.

> I'd worked on plenty of projects where I was right about the technology and wrong about the politics and I am so happy that uv has saved the Python community from itself.

The community's politics are indeed awful. But Rust (or any other language outside of Python) is not needed to solve the problem.


It occurs to me later: `installer` isn't compiling the .py files to .pyc, which probably accounts for the time difference. This can normally be done on demand (or suppressed entirely) but Pip wants to do it up front. Bleh. "Installing" from already-unpacked files would still be much faster.
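That up-front byte-compilation is just the standard library `compileall` module; a minimal sketch with a throwaway module:

```python
import compileall
import pathlib
import tempfile

# A throwaway "package" with one module to compile.
pkg = pathlib.Path(tempfile.mkdtemp()) / "pkg"
pkg.mkdir()
(pkg / "mod.py").write_text("X = 1\n")

# Byte-compile everything under the directory up front,
# as pip does at install time.
compileall.compile_dir(str(pkg), quiet=1)

# The .pyc files land in __pycache__ next to the source.
pycs = list((pkg / "__pycache__").glob("*.pyc"))
print(len(pycs))  # 1
```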


Respectfully, yes. The ability to create venvs so fast that it becomes a silent operation that the end user never thinks about anymore. The dependency management and installation is lightning quick. It deals with all of the Python versioning,

and I think a killer feature is the ability to inline dependencies in your Python source code, then use: uv tool run <scriptname>

Your script code would like:

  #!/usr/bin/env -S uv run --script
  # /// script
  # requires-python = ">=3.12"
  # dependencies = [
  #     "...",
  #     "..."
  # ]
  # ///

Then uv will make a new venv, install the dependencies, and execute the script faster than you think. The first run is a bit slower due to downloads and etc, but the second and subsequent runs are a bunch of internal symlink shuffling.

It is really interesting. You should at least take a look at a YT or something. I think you will be impressed.

Good luck!


>Respectfully, yes. The ability to create venvs so fast that it becomes a silent operation that the end user never thinks about anymore.

I might just blow your mind here:

  $ time python -m venv with-pip

  real 0m3.248s
  user 0m3.016s
  sys 0m0.219s
  $ time python -m venv --without-pip without-pip

  real 0m0.054s
  user 0m0.046s
  sys 0m0.009s
The thing that actually takes time is installing Pip into the venv. I already have local demonstrations that this installation can be an order of magnitude faster in native Python. But it's also completely unnecessary to do that:

  $ source without-pip/bin/activate
  (without-pip) $ ~/.local/bin/pip --python `which python` install package-installation-test
  Collecting package-installation-test
    Using cached package_installation_test-1.0.0-py3-none-any.whl.metadata (3.1 kB)
  Using cached package_installation_test-1.0.0-py3-none-any.whl (3.1 kB)
  Installing collected packages: package-installation-test
  Successfully installed package-installation-test-1.0.0
I have wrappers for this, of course (and I'm explicitly showing the path to a separate Pip that's already on my path for demonstration purposes).

> a killer feature is the ability to inline dependencies in your Python source code, then use: uv tool run <scriptname>

Yes, Uv implements PEP 723 "Inline Script Metadata" (https://peps.python.org/pep-0723/) - originally the idea of Paul Moore from the Pip dev team, whose competing PEP 722 lost out (see https://discuss.python.org/t/_/29905). He's been talking about a feature like this for quite a while, although I can't easily find the older discussion. He seems to consider it out of scope for Pip, but it's also available in Pipx as of version 1.4.2 (https://pipx.pypa.io/stable/CHANGELOG/).

> The first run is a bit slower due to downloads and etc, but the second and subsequent runs are a bunch of internal symlink shuffling.

Part of why Pip is slow at this is because it insists on checking PyPI for newer versions even if it has something cached, and because its internal cache is designed to simulate an Internet connection and go through all the usual metadata parsing etc. instead of just storing the wheels directly. But it's also just slow at actually installing packages when it already has the wheel.

In principle, nothing prevents a Python program from doing caching sensibly and from shuffling symlinks around.


It's not the "runtime" that's slow for me with pip, but all the steps needed. My biggest gripe with python is you need to basically be an expert in different tools to get a random project running. Uv solves this. Just uv run the script and it works.

I don't care if pip technically can do something. The fact that I explicitly have to mess around with venvs and the stuff is already enough mental overhead that I disregard it.

I'm a python programmer at my job, and I've hated the tooling for years. Uv is the first time I actually like working with python.


None of GP is about what Pip can technically do. It's about what a better tool still written in Python could do.

The problems you're describing, or seeing solved with uv, don't seem to be about a problem with the design of virtual environments. (Uv still uses them.) They're about not having the paradigm of making a venv transiently, as part of the code invocation; or they're about not having a built-in automation of a common sequence of steps. But you can do that just as well with a couple lines of Bash.

I'm not writing any of this to praise the standard tooling. I'm doing it because the criticisms I see most commonly are inaccurate. In particular, I'm doing it to push back against the idea that a non-Python language is required to make functional Python tooling. There isn't a good conceptual reason for that.


It may not be required, but it has the virtue of existing. Now that it does, is it a problem that it's not written in Python? Especially given that they've chosen to take on managing the interpreter as well: being in a compiled language does mean that it doesn't have the bootstrap problem of needing an already functional Python installation that they need to avoid breaking.


Why does it matter if it's written in python or not? I want the best tooling, don't care how it's made.


You are free to evaluate tooling by your own standards.

But it commonly comes across that people think it can't be written in Python if it's to have XYZ features, and by and large they're wrong, and I'm trying to point that out. In particular, people commonly seem to think that e.g. Pip needs to be in the same environment to work, and that's just not true. There's a system in place that defaults to copying Pip into every environment so that you can `python -m pip`, but this is wasteful and unnecessary. (Pip is designed to run under the install environment's Python, but this is a hacky implementation detail. It really just needs to know the destination paths and the target Python version.)

It also happens that I care about disk footprint quite a bit more than most people. Maybe because I still remember the computers I grew up with.


If you switch to uv, you’ll have fewer excuses to take coffee breaks while waiting for pip to do its thing. :)


Pip only has requirements.txt and doesn't have lockfiles, so you can't guarantee that the bugs you're seeing on your system are the same as the bugs on your production system.


I’ve always worked around that by having a requirements.base.txt and a requirements.txt for the locked versions. Obviously pip doesn’t do that for you but it’s not hard to manage yourself.

Having said that, I’m going to give uv a shot because I hear so many good things about it.


With pip the best practice is to have a requirements.txt with direct requirements (strictly or loosely pinned), and a separate constraints.txt file [1] with strictly pinned versions of all direct- and sub-dependencies (basically the output of `pip freeze`). The latter works like a lock file.

[1] https://pip.pypa.io/en/stable/user_guide/#constraints-files
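As a concrete illustration of that split (package names and pins invented for the example; a real `pip freeze` would list every transitive dependency):

```
# requirements.txt - direct dependencies, loosely pinned
pandas>=2.0
requests>=2.28

# constraints.txt - `pip freeze` output pinning the full closure
numpy==2.2.1
pandas==2.2.3
requests==2.32.3
```

Installing with `pip install -r requirements.txt -c constraints.txt` resolves the direct requirements, but only ever to the versions named in the constraints file.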


For direct requirements, you're better off using `pyproject.toml` (and you can plausibly use it to pin everything if you're developing an application). It's project metadata that you'll need anyway for building your project, and the "editable wheel" hack allows Pip to use that information to set up an environment for your code (via `pip install -e .`).


I’m grouchy because I finally got religion on poetry a few years ago, but the hype on uv is good enough that I’ll have to give it a shot.


With the new major release of Poetry that just came out I also feel like it might be a good time to switch to Uv rather than adapt to this new version: https://python-poetry.org/blog/announcing-poetry-2.0.0/


I freaking love Poetry. It was a huge breath of fresh air after years of pip and a short detour with Pipenv. If uv stopped existing I’d go back to Poetry.

But having tasted the sweet nectar of uv goodness, I’m onboard the bandwagon.


This works until you need to upgrade something, pip might upgrade to a broken set of dependencies. Or if you run on a different OS and the dependencies are different there (because of env markers), your requirements file won't capture that. There are a lot of gotchas that pip can't fix.


> pip might upgrade to a broken set of dependencies.

I'm only aware of examples where it's the fault of the packages - i.e. they specify dependency version ranges that don't actually work for them (or stop working for them when a new version of the dependency is released). No tool can do anything about that on the user's end.

> Or if you run on a different OS and the dependencies are different there (because of env markers), your requirements file won't capture that. There are a lot of gotchas that pip can't fix.

The requirements.txt format is literally just command-line arguments to Pip, which means you can in fact specify the env markers you need there. They're part of the https://peps.python.org/pep-0508/ syntax which you can use on the Pip command line. Demo:

  $ pip install 'numpy;python_version<="2.7"'
  Ignoring numpy: markers 'python_version <= "2.7"' don't match your environment
> There are a lot of gotchas that pip can't fix.

There are a lot of serious problems with Pip - I just don't think these are among them.
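For instance, a hand-edited requirements file with markers might look like this (pins invented for illustration):

```
tomli==2.0.1 ; python_version < "3.11"
pywin32==306 ; sys_platform == "win32"
numpy==2.2.1
```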


You can specify markers in the requirements file you write, not in the frozen requirements from 'pip freeze'. Because it's just a list of what's installed on your machine.


Running 'pip freeze' creates a plain text file. You can edit it to contain anything that would have been in "the requirements file you write". "Your requirements file" may or may not capture what it needs to, depending on how you created it. But Pip supports it. (And so does the `pyproject.toml` specification.)


You will need another tool to write a lock file that actually locks dependencies for more environments than your own. I don't know what you're trying to say. Pip does not support writing it.

Sure, I guess if you have one Pip will "support" reading it.


The requirements.txt file is the lockfile. Anyways, this whole obsession with locked deps or "lockfiles" is such an anti-pattern, I have no idea why we went there as an industry. Probably as a result of some of the newer stuff that is classified as "hipster-tech" such as docker and javascript.


Just because you don't understand it, it's ok to call it an "anti-pattern"?

Reproducibility is important in many contexts, especially CI, which is why in Node.js world you literally do "npm ci" that installs exact versions for you.

If you haven't found it necessary, it's because you haven't run into situations where not doing this causes trouble, like a lot of trouble.


Just because someone has a different perspective than you doesn't mean they don't "understand".

Lockfiles are an anti-pattern if you're developing a library rather than an application, because you can't push your transitive requirements onto the users of your library.


If you're developing a library, and you have a requirement for what's normally a transitive dependency, it should be specified as a top-level dependency.


The point is that if I'm writing a library and I specify `requests == 1.2.3`, then what are you going to do in your application if you need both my library and `requests == 1.2.4`?

This is why libraries should not use lockfiles, they should be written to safely use as wide a range of dependencies' versions as possible.

It's the developers of an application who should use a lockfile to lock transitive dependencies.
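In packaging terms: the library declares ranges in its metadata and ships no lockfile (names and ranges invented for the example):

```toml
# pyproject.toml for a hypothetical library
[project]
name = "mylib"
version = "1.0.0"
dependencies = [
    # wide range, not a pin - the application's lockfile picks the exact version
    "requests >=2.20,<3",
]
```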


The lock file is for developers of the library, not consumers. Consumers just use the library’s dependency specification and then resolve their own dependency closure and then generate a lock file for that. If you, as a library developer, want to test against multiple versions of your dependencies, there are other tools for that. It doesn’t make lock files a bad idea in general.


As another library developer, of course I want to test against multiple versions. Or more accurately, I don't want to prevent my users from using different versions prematurely. My default expectation is that my code will work with a wide range of those versions, and if it doesn't I'll know - because I like to pay attention to other libraries' deprecations, just as I'd hope for my users to pay attention to mine.

Lockfiles aren't helpful to me here because the entire point is not to be dependent upon specific versions. I actively want to go through the cycle of updating my development environment on a whim, finding that everything breaks, doing the research etc. - because that's how I find out what my version requirements actually are, so that I can properly record them in my own project metadata. And if it turns out that my requirements are narrow, that's a cue to rethink how I use the dependency, so that I can broaden them.

If I had a working environment and didn't want to risk breaking it right at the moment, I could just not upgrade it.

If my requirements were complex enough to motivate explicitly testing against a matrix of dependency versions, using one of those "other tools", I'd do that instead. But neither way do I see any real gain, as a library developer, from a lock file.


>If I had a working environment and didn't want to risk breaking it right at the moment, I could just not upgrade it.

The point of a lockfile is to only upgrade when you want to upgrade. I hope you understand that.


Why do I need a special file in order to not do something?


That’s not the perspective that OP was sharing, though.


You literally phrased it as "I have no idea why". You can't be upset if someone feels you don't understand why.


"I have no idea why" the industry went there. One can understand a technology or a design pattern yet think it's completely idiotic. (low-hanging fruit: JavaScript, containers, etc.)


I’m pretty sure it was a sarcasm.


"pip freeze" generates a lockfile.


No, that generates a list of currently installed packages.

That’s very much not a lock file, even if it is possible to abuse it as such.


A list of currently installed packages in the current environment, with their exact versions. This is only the actually needed packages, with their transitive dependencies, unless you've left something behind from earlier in development. If you're keeping abstract dependencies up to date in `pyproject.toml` (which you need to do anyway to build and release the project), you can straightforwardly re-create the environment from that list and freeze whatever solution you get (after testing).


Doesn’t account for differences in platforms or Python versions, and doesn’t contain resolved dependency hashes.

So it’s a “lockfile” in the strictest, most useless definition: only works on the exact same Python version, on my machine, assuming no dependencies have published new packages.


Look, I'm not trying to sell this as a full solution - I'm just trying to establish that a lot of people really don't need a full solution.

>only works on the exact same Python version

It works on any Python version that all of the dependencies work on. But also it can be worked around with environment markers, if you really can support multiple Python versions but need a different set of dependencies for each.

In practical cases you don't need anything like a full (python-version x dependency) version matrix. For example, many projects want to use `tomllib` from the standard library in Python 3.11, but don't want to drop support for earlier Python because everything else still works fine with the same dependency packages for all currently supported Python versions. So they follow the steps in the tomli README (https://github.com/hukkin/tomli?tab=readme-ov-file#building-...).
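That pattern boils down to a conditional import plus a marker-gated dependency (`tomli >= 1.1.0 ; python_version < '3.11'` in the project metadata); a sketch:

```python
import sys

# Use the stdlib parser on 3.11+, fall back to the third-party
# tomli package (same API) on older interpreters.
if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib

config = tomllib.loads('answer = 42')
print(config["answer"])  # 42
```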

>on my machine

(Elsewhere in the thread, people were trying to sell me on lock files for library development, to use specifically on my machine while releasing code that doesn't pin the dependencies.)

If my code works with a given version of a dependency on my machine, with the wheel pre-built for my machine, there is no good reason why my code wouldn't work on your machine with the same version of the dependency, with the analogous wheel - assuming it exists in the first place. It was built from the same codebase. (If not, you're stuck building from source, or may be completely out of luck. A lockfile can't fix that; you can't specify a build artifact that doesn't exist.)

This is also only relevant for projects that include non-Python code that requires a build step, of course.

>assuming no dependencies have published new packages.

PyPI doesn't allow you to replace the package for the same version. That's why there are packages up there with `.post0` etc. suffixes on their version numbers. But yes, there are users who require this sort of thing, which is why PEP 751 is in the works.


So many misunderstandings here :/ I can’t muster the energy to correct them past these two obvious ones

> It works on any Python version that all of the dependencies work on

No, it doesn’t. It’s not a lockfile: it’s a snapshot of the dependencies you have installed.

The dependencies you have installed depend on the Python version and your OS. The obvious case would be requiring a Linux-only dependency on… Linux, or a package only required on Python <=3.10 while you’re on 3.11.

> PyPI doesn't allow you to replace the package for the same version

Yes and no. You can continue to upload new wheels (or a sdist) long after a package version is initially released.


>So many misunderstandings here :/

I've spent most of the last two years making myself an expert on the topic of Python packaging. You can see this through the rest of the thread.

>No, it doesn’t. It’s not a lockfile: it’s a snapshot of the dependencies you have installed.

Yes, it does. It's a snapshot of the dependencies that you have installed. For each of those dependencies, there is some set of Python versions it supports. Collectively, the packages will work on the intersection of those sets of Python versions. (Because, for those Python versions, it will be possible to obtain working copies of each dependency at the specified version number.)

Which is what I said.

> The dependencies you have installed depend on the Python version and your OS. The obvious case would be requiring a Linux-only dependency on… Linux, or a package only required on Python <=3.10 while you’re on 3.11.

A huge amount of packages are pure Python and work on a wide range of Python versions and have no OS dependency. In general, packages may have such restrictions, but do not necessarily. I know this because I've seen my own code working on a wide range of Python versions without making any particular effort to ensure that. It's policy for many popular packages to ensure they support all Python versions currently supported by the core Python dev team.

Looking beyond pure Python - if I depend on `numpy==2.2.1` (the most recent version at time of writing), that supports Python 3.10 through 3.13. As long as my other dependencies (and the code itself) don't impose further restrictions, the package will install on any of those Python versions. If you install my project on a different operating system, you may get a different wheel for version 2.2.1 of NumPy (the one that's appropriate for your system), but the code will still work. Because I tested it with version 2.2.1 of NumPy on my machine, and version 2.2.1 of Numpy on your machine (compiled for your machine) provides the same interface to my Python code, with the same semantics.

I'm not providing you with the wheel, so it doesn't matter that the wheel I install wouldn't work for you. I'm providing you(r copy of Pip) with the package name and version number; Pip takes care of the rest.

>You can continue to upload new wheels (or a sdist) long after a package version is initially released.

Sure, but that doesn't harm compatibility. In fact, I would be doing it specifically to improve compatibility. It wouldn't change what Pip chooses for your system, unless it's a better match for your system than previously available.


Holy hell dude, you don’t need to write a novel for every reply. It’s not a lockfile because it’s a snapshot of what you have installed. End of.

It doesn’t handle environment markers nor is it reproducible. Given any non-trivial set of dependencies and/or more than 1 platform, it will lead to confusing issues.

Those confusing issues are the reason for lock files to exist, and the reason they are not just “the output of pip freeze”.

But you know this, given your two years of extensive expert study. Which I see very little evidence of.


>It’s not a lockfile because it’s a snapshot of what you have installed.

I didn't say it was. I said that it solves the problems that many people mistakenly think they need a lockfile for.

(To be clear: did you notice that I am not the person who originally said "'pip freeze' generates a lockfile."?)

>It doesn’t handle environment markers nor is it reproducible.

You can write environment markers in it (of course you won't get them from `pip freeze`) and Pip will respect them. And there are plenty of cases where no environment markers are applicable anyway.

It's perfectly reproducible insofar as you get the exact specified version of every dependency, including transitive dependencies.
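For example, a fully pinned requirements file can carry environment markers directly, and Pip will apply them per platform (the versions here are illustrative):

```
numpy==2.2.1
pandas==2.2.3
colorama==0.4.6; sys_platform == "win32"
```

The marked line is only installed on Windows; on other platforms Pip skips it entirely.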

>Given any non-trivial set of dependencies and/or more than 1 platform, it will lead to confusing issues.

Given more than 1 platform, with differences that actually matter (i.e. not pure-Python dependencies), you cannot use a lockfile, unless you specify to build everything from source. Because otherwise a lockfile would specify wheels as exact files with their hashes that were pre-built for one platform and will not work on the others.

Anyway, feel free to show a minimal reproducible example of the confusion you have in mind.


> Given more than 1 platform, with differences that actually matter (i.e. not pure-Python dependencies), you cannot use a lockfile, unless you specify to build everything from source. Because otherwise a lockfile would specify wheels as exact files with their hashes that were pre-built for one platform and will not work on the others.

What is more likely:

1. Using a lockfile means you cannot use wheels and have to build from source

2. You don’t know what you’re talking about

(When deciding, keep in mind that every single lockfile consuming and producing tool works fine with wheels)
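For what it's worth, pip's own hash-checking mode illustrates the point: a requirements-style lockfile can list several --hash values for one pinned version, one per published artifact, so each platform installs the wheel whose digest matches (the hashes below are placeholders, not real digests):

```
numpy==2.2.1 \
    --hash=sha256:<linux-x86_64-wheel-digest> \
    --hash=sha256:<macos-arm64-wheel-digest> \
    --hash=sha256:<sdist-digest>
```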


Pip is sort of broken because it encourages confusion between requirements files and lock files. In other languages with package managers, you generally specify your requirements with ranges and get a lock file with exact versions of those and any transitive dependencies, letting you easily recreate a known working environment. The only way to do that with pip is to make a *new* venv, install, then pip freeze. The pip-tools package is supposed to help, but it's a separate tool. Putting stuff in pyproject.toml also feels more solid than requirements files: it allows options to be set on requirements (like installing one package that's only on your company's private Python package index mirror while installing the others from the global index), and it allows dev dependencies and other optional dependency groups without maintaining multiple requirements files and having to update locks on each of them.
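As a rough sketch of what that looks like in pyproject.toml (the project name, versions, and the dev group are illustrative; private-index configuration is tool-specific and not shown):

```toml
[project]
name = "myapp"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "pandas>=2.0,<3",   # ranges go here; exact pins live in the lock file
]

[project.optional-dependencies]
dev = ["pytest>=8", "ruff>=0.4"]
```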

It also automatically recreates venvs if you delete them, and it automatically updates packages when you run something with uv run file.py (useful when somebody may have updated the requirements in git). It lets you install self-contained Python tools, each in its own virtualenv and linked from ~/.local/bin, which is added to your PATH (replacing pipx). And it installs self-contained Python builds, letting you more easily pick a Python version and specify it in a .python-version file for your project (replacing pyenv, and usually much nicer, because pyenv compiles them locally).
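A typical session exercising those features might look like this (these are uv's actual subcommands; output omitted):

```
$ uv python install 3.12      # downloads a self-contained CPython build
$ uv python pin 3.12          # writes .python-version for the project
$ uv tool install ruff        # isolated tool venv, linked into ~/.local/bin
$ uv run file.py              # creates/syncs the venv, then runs the script
```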

uv also makes it easier to explore. Say you want to start an IPython shell with a couple of extra libraries: uv run --with ipython --with colorful --with https ipython

It caches downloads. Of course the HTTP transfer itself isn't faster (though they're exploring ways to speed that part up), but since it's written in Rust, local operations (like deleting and recreating a venv from cached packages) tend to be blazing fast.


I am not a python developer, but sometimes I use python projects. This puts me in a position where I need to get stuff working while knowing almost nothing about how python package management works.

Also I don’t recognise errors and I don’t know which python versions generally work well with what.

I’ve had it happen so often with pip that I’d have something set up just fine. Let’s say some stable diffusion UI. Then some other month I want to experiment with something like airbyte. Can’t get it working at all. Then some days later I think, let’s generate an image. Only to find out that with pip installing all sorts of stuff for airbyte, I’ve messed up my stable diffusion install somehow.

Uv clicked right away for me and I don’t have any of these issues.

Was I using pip and asdf incorrectly before? Probably. Was it worth learning how to do it properly in the previous way? Nope. So uv is really great for me.


This is not just a pip problem. I had the problem with Anaconda a few years ago where upgrading the built-in editor (Spyder?) pulled versions of packages which broke my ML code, or made dependencies impossible to reconcile. It was a mess, wasting hours of time. Since then I use one pip venv for each project and just never update dependencies.


Spyder isn't built-in; IDLE comes with Python (unless you get it via Debian, at least), but is not separately upgradable (as the underlying `idlelib` is part of the standard library).

If upgrading Spyder broke your environment, that's presumably because you were using the same environment that Spyder itself was in. (Spyder is also implemented in Python, as the name suggests.) However, IDEs for Python also like to try to do environment management for you (which may conflict with other tools you want to use specifically for the purpose). That's one of the reasons I just edit my code in Vim.

If updating dependencies breaks your code, it's ultimately the fault of the dependency (and their maintainers will in turn blame you for not paying attention to their deprecation warnings).


Thanks. I understand this a lot more now that I've learned about venvs, and I'm between VS Code and Emacs for editing. No longer would I install an editor which depends on the same environment as the code I want to run.

As for Spyder, it is included in the default Windows install of Anaconda (and linked to by the default Anaconda Navigator). As a new user doing package management via the GUI, it was not clear at all that Spyder was sharing dependencies with my project until things started breaking.

Anaconda was also half-baked in other ways: it broke if the Windows username contained non-ASCII characters, so I ended up creating a new Windows user just for that ML work. PITA.


You're all over the thread defending the standard Python tools, which is fine; it works for you. But the number of times you've had to write that something is natively supported already, or that people are just using it wrong, speaks volumes about why people prefer uv: it just works, without having to learn loads of stuff.


My life got a lot easier once I adopted the habit of making a shell script, using buildah and podman, that wraps every Python, Rust, or Go project I want to dabble with.

It's so simple!

Create an image with the dependencies, then `podman run` it.


I'm fairly minimalist when it comes to tooling: venv, pip, and pip-tools. I've started to use uv recently because it resolves packages significantly faster than pip/pip-tools. It will generate a "requirements.txt" with 30 packages in a few seconds rather than a minute or two.
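If you're already on the pip-tools workflow, the drop-in speedup is uv's compile subcommand, which reads the same kind of input file (filenames illustrative):

```
$ uv pip compile requirements.in -o requirements.txt
```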


Well, for one, you can't actually package or add a local requirement (for example, a vendored package) to the usual pip requirements.txt (or with pyproject.toml, or any other standard way), afaik.

I saw a Discourse reply that cited some sort of possible security issue, but that was basically it, and that means the only way to get that functionality is to not use pip. It's really not a lot of major stuff, just a lot of little paper cuts that make it a lot easier to just use something else once your project gets to a certain size.


Sure you can.

It's in their example for how to use requirements.txt: https://pip.pypa.io/en/stable/reference/requirements-file-fo...

Maybe there's some concrete example you have in mind though?


I don't think so, though maybe I didn't explain myself correctly. You can link to a relative package wheel, I think, but not to a package repo. So if you have a repo with your main package in ./src, and you vendor or need a package from another subfolder (let's say ./vendored/freetype), you can't actually do it in a way that won't break the moment you share your package. You can't put ./vendored/freetype in your requirements.txt; it just fails.

That means you either need to use PyPI or do an extremely messy hack that involves adding the vendored package as a subpackage of your main source, and then do some importlib black magic to make sure that everything uses said package.

https://github.com/pypa/pip/issues/6658

https://discuss.python.org/t/what-is-the-correct-interpretat...


In this scenario, reading between the lines, the vendor is not providing a public / published package but does provide the source as, say, a tarball?

I have yet to run into that particular case where the vendor didn't supply their own repo in favour of just providing the source directly. However I do use what are essentially vendor-supplied packages (distant teams in the org) and in those cases I just point at their GitLab/GitHub repo directly. Even for some components within my own team we do it this way.


It's more for either monorepos or in my case, to fix packages that have bugs but that I can't fix upstream.

So for me, in my specific case, the freetype-py repo has a rather big issue with non-ASCII paths (it will crash the app if the path contains non-ASCII characters).

There's a PR but it hasn't and probably won't get merged for dubious reasons.

The easy choice, the one that actually is the most viable, is to pull the repo with the patch applied, temporarily add it to my ./vendored folder and just ideally change the requirements.txt with no further changes (or need to create a new pypi package). But it's basically impossible since I just can't use relative paths like that.

Again, it's rather niche, but that's just one of the many problems I keep encountering. Packaging anything with CUDA is still far worse, for example.


>The easy choice, the one that actually is the most viable, is to pull the repo with the patch applied, temporarily add it to my ./vendored folder and just ideally change the requirements.txt with no further changes (or need to create a new pypi package). But it's basically impossible since I just can't use relative paths like that.

You can use the repo's setup to build a wheel, then tell pip to install from that wheel directly (in requirements.txt, give the actual path/name of the wheel file instead of an abstract dependency name). You need a build frontend for this - `pip wheel` will work and that's more or less why it exists; but it's not really what Pip is designed for overall - https://build.pypa.io/en/stable/ is the vanilla offering.
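Concretely, the workflow might look like this (the paths and the wheel filename are illustrative, with the version left as a placeholder):

```
# build a wheel from the patched source tree
$ python -m pip wheel ./vendored/freetype -w wheels/

# then, in requirements.txt, give the path to the built file
# instead of an abstract dependency name:
./wheels/freetype_py-<version>-py3-none-any.whl
```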


Yeah, it unifies the whole env experience with the package installation experience. No more forgetting to activate the virtualenv first. No more pip installing into the wrong virtualenv, or accidentally borrowing from the system packages. It's way easier to specify which version of Python to use. Everything is version-controlled, including the Python version and variant (CPython, PyPy, etc.). It's also REALLY REALLY fast.


Performance and correctness mostly.


I was in your boat too. Been using Python since 2000 and pretty satisfied with venv and pip.

However, the speed alone is reason enough to switch. Try it once and you will be sold.


Also you can set the python version for that project. It will download whatever version you need and just use it.


in my view, depending on your workflow you might have been missing out on pyenv in the past but not really if you feel comfortable self-managing your venvs.

now though, yes unequivocally you are missing out.


Yeah, I switched from pip to uv. uv seems like its almost the perfect solution for me.

it does virtualenv, it does pyenv, it does pip, so all that's managed in one place.

it's much faster than pip.

it's like 80% of my workflow now.


Much of the Python ecosystem blatantly violates semantic versioning. Most new tooling is designed to work around the bugs introduced by this.


To be fair, Python itself doesn’t follow SemVer. Not in a “they break things they shouldn’t” way, but in a “they never claim to be using SemVer” way.


Relevant: https://iscinumpy.dev/post/bound-version-constraints/

SemVer is hard; you never know what will break for at least one of your users (see Hyrum's law). On the other hand, clear backwards-compatibility breaks often won't affect a large fraction of users; if downstream packages preemptively declare that they won't support your next version, they may prevent Pip from finding a set of versions that would actually work just fine.
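A tiny illustration of the failure mode the post describes (the package names are hypothetical):

```
# A's metadata:  requires numpy>=1.24,<2   (a preemptive upper cap)
# B's metadata:  requires numpy>=2.0
#
# installing A and B together fails to resolve, even if A would
# actually run fine against numpy 2.x.
```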


Cool story bro.

I've used pip, pyenv, poetry, all are broken in one way or another, and have blind spots they don't serve.

If your needs are simple (not mixing Python versions, simple dependencies, not packaging, etc) you can do it with pip, or even with tarballs and make install.


Pip doesn't resolve dependencies for you. On small projects that can be ok, but if you're working on something medium to large, or you're working on it with other people you can quickly get yourself into a sticky situation where your environment isn't easily reproducible.

Using uv means your project will have well defined dependencies.
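With uv, the split between declared ranges and exact pins is handled for you (real uv subcommands; the package name is illustrative):

```
$ uv add pandas    # records a range in pyproject.toml, exact pins in uv.lock
$ uv sync          # recreates the locked environment, transitive deps included
```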



My bad, see PaulHoule's comment for what I was getting at.


Oh wow it doesn’t? What DOES it do then?

As I commented here just now I never got pip. This explains it.


The guy doesn't know what he's talking about as pip certainly has dependency resolution. Rather get your python or tech info from a non-flame-war infested thread full of anti-pip and anti-python folk.



