
> I wonder why activating a venv ended up being implemented as a script you source (updates env vars in the current process) and deactivate (puts them back) rather than being something that puts you in a subshell (detects your current shell and re-invokes it with new vars).

The venv itself is just the directory tree and the Python stub executables; "activation" is only about adjusting environment variables to point at it.

The activation script concept is probably just what happened to occur to Ian Bicking first. But also maybe the subshell UX isn't so pleasant on Windows?

Certainly others have implemented subshell-based venv management.
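For illustration, a subshell-style activate can be a tiny wrapper like this (the script name, the .venv default, and exactly what gets exported are assumptions for the sketch, not how any particular tool does it):

    #!/bin/sh
    # venv-shell (hypothetical): start a child shell with the venv on PATH.
    # Plain `exit` returns to the parent shell with its environment untouched.
    VENV="${1:-.venv}"
    VIRTUAL_ENV=$(cd "$VENV" 2>/dev/null && pwd) || { echo "no venv at $VENV" >&2; exit 1; }
    export VIRTUAL_ENV
    export PATH="$VIRTUAL_ENV/bin:$PATH"
    exec "${SHELL:-/bin/sh}" -i

Because each level is its own process, nesting and restoring the intermediate environment come for free from the process tree, which is the property the parent comment is after.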

> As it is, deactivating takes you all the way out to the system package namespace (or maybe you can nest them, not sure, but it's up to the venv implementation and not your OS to restore the intermediate env).

The effects do not nest, at least not with the *sh activate script: sourcing a second venv's activate replaces the first one's settings, and a single deactivate drops you back to the base environment rather than the intermediate venv. You'd need to implement a stack of old environment variable settings.
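A sketch of what that could look like (hypothetical helper names, meant to be sourced into your shell; PATH only, VIRTUAL_ENV and PS1 would need the same treatment):

    # Hypothetical nesting sketch, not the stock activate script:
    # push the current PATH before switching, pop it to go back one level.
    activate_push() {   # usage: activate_push /path/to/venv
        _VENV_PATH_STACK=$(printf '%s\n%s' "$PATH" "$_VENV_PATH_STACK")
        export VIRTUAL_ENV="$1"
        export PATH="$1/bin:$PATH"
    }

    deactivate_pop() {  # restores the previous level, not the base environment
        [ -n "$_VENV_PATH_STACK" ] || return 1   # nothing to pop
        PATH=$(printf '%s\n' "$_VENV_PATH_STACK" | head -n 1)
        _VENV_PATH_STACK=$(printf '%s\n' "$_VENV_PATH_STACK" | tail -n +2)
        export PATH
    }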

But I think the use case you describe just doesn't resonate with a lot of people.



> But I think the use case you describe just doesn't resonate with a lot of people.

Yeah, I suppose "flat is better than nested" is right there in the Zen of Python.

I guess I'm just an oddball. The idea of exiting all the way up to the root and finding nothing installed there except for the ability to descend into one or more specialized transient environments feels so... clean to me.


The other thing is, if you don't at least set $PS1, it gets hard to remember which level you're at.


You can configure direnv to enable/disable envs on directory entry/exit, so as long as your shell prompt shows your cwd, you also know which env you're in. There's less to think about overall if you bind the two states together.
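For a concrete example (assuming the venv already lives at ./.venv; PATH_add comes from direnv's stdlib):

    # .envrc at the project root; run `direnv allow` once to trust it
    export VIRTUAL_ENV=$PWD/.venv
    PATH_add "$VIRTUAL_ENV/bin"

direnv reverts those variables on its own when you leave the directory, so there's no deactivate step to forget. It doesn't touch $PS1, which is why having the cwd in your prompt does the signalling instead.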
