Jupyter really dominates at university campuses now. I see it used for almost every course (at least those that involve data analysis in Python).
I haven't used Jupyter in a few years, and I'm wondering what the current standard practice is for starting a new Jupyter project.
Do users typically have one system-wide Jupyter install that gets reused across data analysis projects, with each project keeping its own dependencies in a virtual environment that Jupyter activates?
Or is Jupyter installed inside each project’s virtual environment?
Typically one system-wide Jupyter install, with a separate kernel registered for each project's environment.
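For anyone who hasn't set that up before, here's a minimal sketch of the workflow, assuming a plain venv; the project name and dependencies are placeholders:

    # inside the project directory, create and activate its environment
    python -m venv .venv
    source .venv/bin/activate
    pip install ipykernel pandas   # project dependencies live here
    # register this environment as a kernel with the system-wide Jupyter
    python -m ipykernel install --user --name myproject --display-name "Python (myproject)"

The system-wide JupyterLab then lists "Python (myproject)" in its kernel picker, and notebooks using that kernel run against the project environment's packages.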
Personally, I really like the juv model, where dependencies are declared in the first cell of the notebook and a fresh kernel/environment is created to launch the interface, but I haven't seen many others using it yet.
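For context, juv leans on PEP 723 inline script metadata: the first cell of the notebook carries a dependency block along these lines (notebook name and dependencies here are just illustrative):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #   "pandas",
    #   "matplotlib",
    # ]
    # ///

and then, if I remember the README correctly, the CLI manages that cell and launches from it:

    juv init analysis.ipynb        # create a notebook with the metadata cell
    juv add analysis.ipynb pandas  # add a dependency to the metadata cell
    juv run analysis.ipynb         # launch Jupyter in a throwaway env built from it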
The idea is good, but juv is a one-Jupyter-per-notebook model, which isn't very practical for how my team uses Jupyter. My attempt at "juv, but with a system-wide Jupyter plus one kernel per notebook" is this: https://github.com/tobinjones/uvkernel
I'm definitely guilty of the system-wide install, but I've noticed people doing per-project installs more often now, and I'm trying to get into the habit.
And rightfully so. It's an interactive programming environment with embedded explanations. Between Markdown and LaTeX, you can write an entire class inside it. It's perfect for live demonstrations and homework. Bloated as hell? Sure, but a huge step forward for education IMHO.
This (CRDT-based) Jupyter extension appears to solve the BIGGEST HEADACHE I personally have with Jupyter(Lab): notebooks let me hack on code/parameters so fluidly that I can't recover the earlier points in code/parameter space that produced interesting results.
Jupytext and git go some way towards fixing that (the pairing workflow is sketched below), but I don't commit after every cut/paste of a parameter. This extension is effortless.
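For anyone unfamiliar with that Jupytext workflow, this is roughly what the pairing looks like; the notebook name is a placeholder:

    # pair the notebook with a plain .py file (percent format) for readable git diffs
    jupytext --set-formats ipynb,py:percent analysis.ipynb
    # after editing either file, sync the pair, then commit the .py side
    jupytext --sync analysis.ipynb
    git add analysis.py && git commit -m "checkpoint parameters"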
As a bonus, the extension appears to allow SubEthaEdit/Google Docs-style collaboration too. (I haven't personally used that yet.)