Jupyterlab Desktop (jupyter.org)
366 points by anigbrowl on Feb 12, 2023 | 161 comments


There are by now several "paradigms" for interacting with a Python interpreter (beyond the basic console), depending on what the primary "product" of the work is. You have the notebook style with Jupyter, and you have the "studio" style with Spyder and various IDEs (which even have multiple modes).

The mode that I find most useful, but which still leaves something to be desired in terms of user friendliness, is the "debug" mode: step-by-step execution that at the same time provides access to variables, dataframes etc. in a second panel.

One way or another it's possible to debug, but a "debug-first" paradigm would make it more fluid and fun to write good code that does exactly what is meant.


Have you tried debugging with Visual Studio Code? It sounds a lot like what you're asking for. Stick breakpoints where you want them, you can step through etc., and there's a variable viewer in the left-hand panel - which isn't great on its own, but then you can right click on the dataframe and open it in the data viewer, and it'll open a new tab with the dataframe in a table that you can view. Then if you step further through the program you can hit refresh on that view and it'll show you the updated frame.


Yep, and if you click the three dots next to the debugger config selector on the top left, you can access the debug console and play with a live terminal with access to all of the variables and data at the point where execution is currently paused.


You know what would be killer: if VS Code's debug mode could kick out into a notebook environment.


Vscode already has a jupyter notebooks mode with full debugger support if that’s what you mean by this.


I had this idea a few years ago and built an emacs front end for it

Would be really slick in vscode…

https://github.com/ebanner/pynt


The repl it provides is pretty close. You can also use special comments to designate cells and just run those parts of a larger program in isolation with a repl, which I have used for debugging


Coincidentally, I just tried this yesterday and it was a good experience. No setup time, I just clicked to the left of a line of code to set a breakpoint, and then clicked the debug icon instead of the run icon. The local variable viewer was good enough for me.


I think that JetBrains' PyCharm does an excellent job in the debugging arena.

For the basics, you can effortlessly put breakpoints, click through stack frames, inspect and modify variables via the GUI variable inspector window, execute statements in the console, etc.

A nice extra feature is that as you step in, out and over certain pieces of code, the lines of code that are run get annotated with the resulting values of the variables that get changed. E.g. when for-looping over an iterator, the value of x gets displayed next to the *for x in y:* line.

I also really love the conditional breakpoint functionality; it allows you to only break out into the debugger when a certain condition (expressed in python by the user) is met. Very handy when iterating over larger pieces of data that have sparse bugs in certain edge cases.
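
For readers without PyCharm, a rough plain-Python equivalent of the same idea (with made-up data) is to guard a breakpoint() call with the condition, so you only drop into the debugger on the edge case you care about:

    rows = [{"amount": 10}, {"amount": None}, {"amount": 3}]  # hypothetical data

    for row in rows:
        if row.get("amount") is None:  # the sparse edge case of interest
            breakpoint()               # drops into pdb only for the bad row
        print(row)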

Edit: as a bonus, also quite nice that the vim plugin also works in the debugger console :)


Yep, and it'll display your pandas dataframes, and it has some notebook support too. <3 jetbrains


The most enjoyable programming experience I've ever had, hands-down, was writing Clojure in LightTable, an IDE that allowed line-by-line inspection/execution, which sounds like what you're talking about.

The project looks like it died unfortunately, but I think they did end up adding Python support before that happened. Might be worth checking out.


I believe this is called REPL-driven development in Clojure: whenever you are writing code you have the REPL available, and it's one of the main selling points of Clojure.

My opinion is that I get very close to this REPL style with a notebook and (right click on the notebook tab in JupyterLab) "open console for notebook".

This opens a REPL (IPython console) alongside the notebook. Then you can code in all kinds of Clojure-stylistic Python ways using nested dictionaries.

You have to make sure to copy-paste your finished code into the notebook or another .py file so as not to lose it, but that's minor.


I've been doing the same thing in VS Code for Julia/Python, but the experience is much worse than LightTable [0]. It's slow, the interface is clunky/janky, and no one has invested effort in designing the UI to make navigation/information clear and accessible.

[0] Admittedly LightTable was pretty buggy so it wasn't all fun and games. But still - I would love for my day-to-day development environment (Dart/C++) to look like that.


That functionality and more is available for the most common editors/IDEs: VSCode Calva, Emacs Cider, Intellij Cursive.

You can also write notebooks with Clerk, which gives you a whole bunch of data visualization utility and renders to a browser (via websockets). It also has a static HTML export. The cool thing here is that it uses just normal Clojure files, so all your tooling just works.


That's cool, I knew about Cider but not Cursive/Calva (I haven't used Clojure in years). I would love for this to be more common for other languages (F# in particular would really benefit, IMO).


LightTable was indeed very cool when I tried it.

There is a Haskell IDE available in the macOS App Store that has a similar live view feature. I don't use this IDE much, but it is cool.


Actually, what would be perfect in my world is if the debug adapter protocol supported live reload. You change a line and everything is re-executed / incrementally updated to reflect the change, and you are back at your current breakpoint in an instant.


This debug-first stepper, complete with workspace view, is now supported in JupyterLab for Python (not necessarily the app) using the xeus-python kernel.


Yes, this is the way to go. I found this was the easiest to use.


I think this is why engineers like Matlab. It’s not fashionable these days but they get their IDE and docs right.


The feature in Matlab that is great for this is the 'workspace' panel - and it's something I love most about the Matlab IDE.

Being able to watch your variables as the program executes, see what was allocated, see what wound up being stored in them, see what size any matrices are, and being able to stop and open them up and check the data, see data types at a glance, etc. really helps people who are just getting started with understanding imperative, procedural code.

There used to be a Python IDE that had the same feature called Rodeo, but it has been abandoned.

There also used to be a Julia IDE called Juno which was excellent at this as well, but it has also been abandoned in favor of the Julia VS Code extension, which has similar features.


Spyder also has a variable explorer. Basically, Spyder is often referred to as a MATLAB-like IDE for Python, and MATLAB is even referenced in the documentation.


yep, spyder is python for recovering matlab users


To add to this, Thonny has a variable explorer that shows object attributes.


Or RStudio like IDE for Python.


One format missing is a canvas. https://natto.dev/ is awesome for experimenting.


I think something hard when it comes to this is defining the resolution or verbosity of what you see. What I mean is that sometimes you want to see implementations etc. and sometimes you want to keep something abstracted away and "debug around" it. I feel that the "step in" and "step over" controls for this are somewhat too simplistic. It would be nice, for example, to mark some library functions that I never want to step into. I think debugging is great but there is still a lot of room for improvement.
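
For what it's worth, Python's built-in pdb has a limited form of this: the Pdb class accepts a `skip` argument of glob-style module name patterns that the debugger will never step into. A minimal sketch:

    import pdb

    # Frames originating from modules matching these patterns are never
    # stepped into, so "step" effectively steps over them.
    debugger = pdb.Pdb(skip=["pandas.*", "numpy.*"])
    debugger.set_trace()  # start debugging here with the skip list in effect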


JetBrains IDEs have a handy "step into my code" control in addition to the usual ones. It's not quite as granular as picking specific libraries not to step into, but it requires no configuration and it's often exactly what you want.


Yes, that is exactly my feeling. There are a number of dimensions to be covered, like code resolution, the representation of more complex objects (including prior states), visualization, etc. That's before we go into esoteric stuff like C++ bindings.

The "debugging/scripting a data pipeline" task is somewhat orthogonal to building applications or exploring data but these days it is something alot of people are effectively doing.


AREPL uses a debug first paradigm. It's not step by step, but it does give you access to your variables as you code.

It's free and open source. Let me know what you think: https://github.com/Almenon/AREPL-vscode


The first thing I do after starting a notebook is select 'new console for notebook', which brings up a live console underneath or next to your notebook window, as you prefer. Then if you hit the little bug icon in the notebook toolbar (on the right, next to the kernel), and the other bug icon in the right sidebar, you get full interactive control and views of everything.

JetBrains' DataSpell has the nicest notebook UI in my experience - lots of database integrations, R as well as Python, endlessly configurable. It's a relatively new product and still hiccups on some things, e.g. ipywidgets and other interactive notebook tools.


I will drop IPython.embed() breakpoints along an execution path to debug with variable inspection. I tried pdb earlier and attribute not really getting it to user error on my part.
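
For anyone curious, the pattern is just dropping an embed() call wherever you want to poke around; everything local at that point is available in the shell. A minimal sketch (the dataframe and column name are made up):

    from IPython import embed
    import pandas as pd

    def summarize(df):
        cleaned = df.dropna()
        embed()  # opens an IPython shell here; `df` and `cleaned` are inspectable
        return cleaned.groupby("key").sum()

    summarize(pd.DataFrame({"key": ["a", "a", "b"], "value": [1.0, None, 3.0]}))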


There is also VS Code's Python hybrid mode. It has a variable viewer and debugging.


Like MATLAB?


I use Jupyter inside VS Code -- the Jupyter interface inside VS Code has a nicer UI and is very polished (supports black reformatting, refactoring, step debugging, etc.).

Haven't gone back to vanilla Jupyter or JupyterLab in the browser for years.


For me, I find I need both JupyterLab and the VSCode Jupyter extension. The VSCode extension is superior for step-by-step debugging, especially with the integrated REPL console. However, I notice running cells in VSCode is orders of magnitude slower than executing the same cells in a JupyterLab session. Also, I use several JupyterLab extensions such as the citation manager, MathJax 3, etc., and custom kernels utilising Docker/GPUs etc. I'm not sure how to use these in VSCode, and I'm also not a fan of VSCode's use of KaTeX over MathJax 3.


I find VS Code's UI for notebooks to be clunky. Each cell is way too large and it's cluttered with buttons that only show up with mouse-over.

Another issue is that if you use vim keymaps, Jupyter allows them in the editor, but the vim mapping is automatically off in notebooks.


Have you seen Python Interactive? There are no separate editors for each cell, but you edit a single file and use #%% to define the boundaries between cells. https://code.visualstudio.com/docs/python/jupyter-support-py
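
A minimal sketch of what such a file looks like (contents made up); each `# %%` line starts a new cell that can be run in the interactive window:

    # %%
    import pandas as pd
    df = pd.DataFrame({"x": [1, 2, 3]})

    # %%
    df.describe()

    # %% [markdown]
    # Commented lines in this cell are rendered as markdown.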


I could never figure out a good use of JupyterLab over regular Jupyter notebooks, mainly because of compatibility reasons. Still, it is good that they are making progress.

Over the last 5 years I've gone:

* Anaconda install of Jupyter Notebooks (until the license change)

* running JupyterHub server for my organization

* PyCharm

* VSCode

VS Code has made fantastic progress in the last few years for Jupyter Notebook support. Integrated with Copilot it is scarily productive.

At the same time, for teaching non-technical audiences it is hard to beat Google Colab for availability (mybinder.org-type solutions are generally more brittle).


I agree with the comments about bigger notebooks getting slower.

But I'm not sure it's enough of a hit to outweigh the other productivity gains I get, which are myriad. With vscode, I get vim emulation in Jupyter notebooks that actually works. I get better autoformatting options, and linting of notebook code. And I get a proper programmer's editor for working with *.py files. And that's just my greatest hits list.


Vscode offers such an improved experience. Once you discover that autocomplete can take a millisecond instead of 5 seconds, it's pretty hard to go back.


I have tried VS Code for notebooks, but for the life of me I am not able to get an interpreter console attached to the same kernel running the notebook, and for me that's a no-go. I usually use the console as the way to test things/syntax easily, and then move that to the notebook.


I am also mostly using VSCode for notebooks; however, one big downside is that the performance goes down the drain when the notebooks are large in size (i.e. contain images, plots).

The cell execution speed slows down drastically (up to 10x) with notebook file size. Still looking for a solution to this problem.


The biggest blocker for me in using Jupyter in VSCode is that its support for Jupyter environments in Docker is a huge pain.


The only case where this setup does not cut it is when you wish to use the notebook as slides for a presentation.


It has better auto completion, not a better interface.


black has an optional jupyter component that lets it format notebooks with

  pip install "black[jupyter]"


Jupyter is the main reason I come back to writing Python. I really wish there was a Jupyter environment for any language. I'm aware there exist kernels for other languages, but many of them are unstable, slow, outdated, or miss important features. I have tried multiple, and every time I come back to Python, not because I want to write Python, but because Jupyter works so well.


R and Julia’s aren’t good? They’re in the name, so I would have thought support would continue to be strong. I like Julia’s Pluto and the reactive style more than Jupyter anyway.


Every year, most JuliaCon workshops are presented via Jupyter notebooks; the Julia support is pretty good and has remained stable for a long while.

Some of the unofficial extensions like `code_prettify` don't work for Julia kernels, but at least for my usage, I've never felt the need for such tools in a Jupyter notebook.


IJulia is falling behind. If you look at GitHub activity, it has had 1 commit in the past month, compared to 9 authors and 39 commits in the past month for IPython. IJulia issues are piling up.


Jupyter

Ju(lia)-pyt(hon)-eR

To make it clear


This just blew my mind, never realized that!


Hint: Do not use the conda package. Just install R and IRkernel (an R package) and you are good to go.


Anaconda foundation does really great marketing, outreach, advocacy, education... it's a good first place for data scientists to land. But actually installing and using Conda is always annoying. I wind up doing everything with pip and venv, because pip just works.


On the topic of Conda and Jupyter: install Jupyter via a virtualenv pip installation, and then, to use a specific conda kernel, load your conda environment in another shell and install IRkernel in that environment. As long as you install to the main Jupyter prefix, Jupyter should see the new conda kernel.

You can do this for as many kernels/conda envs as you need


Is there a language/system which begins w/ the letter "e" which would be suitable to add?


Elixir, but there’s LiveBook already.


Jupyter's idea is based on how Smalltalk and Lisp environments work.


Isn’t it mainly based on how Wolfram Mathematica works?


Where do you think it got its inspiration from?

https://writings.stephenwolfram.com/2013/06/there-was-a-time...


Thanks for the link!

The bits I was aware of (/lived through), from a UX standpoint, were:

Mathematica -> iPython notebooks -> jupyter notebooks.

And from whatever source, I knew Mathematica notebooks had looked how they had /at least/ since the mid 90s. (My Mathematica days were college, and I did not spend much time contemplating the history of my tooling)

From elsewhere in the comments, it sounds like iPython->jupyter wasn't just a rebranding, as I assumed it was at the time.


But is that accidentally rediscovering things, or is it intentional and simply still has a long way to go?


Since Julia is Lisp-inspired and has been part of Jupyter since the beginning, I do think it's intentional.


Jupyter spun off from IPython, which has existed since 2000.


Can we squint and squeeze postgres in there as well?


Jetbrains dataSpell is what you want.


I cannot see how. The Smalltalk and Lisp bases allow modification of the environment itself even while running. Jupyter cannot do that, and the notebook interface (basically originating from Mathematica) isn't suitable for generic programming, nor does it even have that as a goal.


Here is how then,

"Symbolics Lisp Machine demo", special focus on 5 minute onwards.

https://www.youtube.com/watch?v=o4-YnLpLgtk

Mathematica was inspired by Lisp,

https://writings.stephenwolfram.com/2013/06/there-was-a-time...


>"Symbolics Lisp Machine demo", special focus on 5 minute onwards.

To provide more information, since the video description lacks it: this is Open Genera, a Lisp Machine OS designed to also run hosted on Unix systems. This specific version (seeing it uses JPEG) should be from the mid-to-late 90s. I had tried some version but didn't remember being able to have graphics in the REPL.

Yes, that's basically how the Jupyter console works. But I will still argue that the Jupyter model is such a weaker version of this that it can hardly be called a derivative. The Listener isn't limited to being used like an isolated shell but can be attached to any part of the environment. (And this introspection is core to Lisp and Smalltalk environments.)

>Mathematica was inspired by Lisp

The language, Wolfram, yes. The cell-based notebook interface was new. But similar to the previous point, it is a more limited version of what you had available, specifically interleaving text and code in Zmacs. Something that was also an advantage (easier to reason with) for what Mathematica was used for.


Check out the Polyglot Notebooks extension for VS Code. It supports other languages but also, notably, mixing languages in the same notebook.


Even crazier, it lets you interop between the languages by way of sharing variables between them. Polyglot is pretty darn impressive.


I do all my data analysis in groovy using JupyterLab via beakerx:

https://github.com/twosigma/beakerx

It has some quirks, but it works amazingly well really, esp. with the enhanced widgets BeakerX gives you.


I’ve been messing around with Microsoft’s Polyglot Notebooks which looks pretty interesting. Caveat: I’ve not used Jupyter and that was my first experience of notebooks.


orgmode + org-babel comes really close, but I feel like org just doesn't get much traction outside of Emacs, because markdown is good enough.


This [1] looks ugly and distracting.

    #+begin_src R :colnames yes
      words <- tolower(scan("intro.org", what="", na.strings=c("|",":")))
      t(sort(table(words[nchar(words) > 3]), decreasing=TRUE)[1:10])
    #+end_src
Is it possible to have minimal syntax for these code blocks? Markdown is nice because it looks clean, and the syntax does not get in the way. Markdown codeblocks are

    ```
    code
    ```
[1] https://orgmode.org/worg/org-contrib/babel/intro.html#source...


You could put all the common headers in a property drawer of a top-level heading, and all child source code blocks inside that heading will inherit them.

    * Notebook
    :PROPERTIES:
    :header-args:R: :colnames yes
    :END:

    #+begin_src R
    words <- tolower(scan("intro.org", what="", na.strings=c("|",":")))
    t(sort(table(words[nchar(words) > 3]), decreasing=TRUE)[1:10])
    #+end_src
As for making the block delimiter look different, there are various ways of making the text display differently from the actual text. For example, you could use the built-in `prettify-symbols-mode` with this config:

    (setq prettify-symbols-alist
          '(("#+begin_src" . ?▶)   ; placeholder glyph - the original character was lost; any character works
            ("#+end_src"   . ?―)))


You could, for example, use Python which I believe would be clearer than this R mess ;-)

Joking aside though, if the src block header (which can have a lot of options set up, including specifying the environment, tangling, other block variables, or setting a totally separate interpreter version) is a huge problem, there are a plethora of presentation customizations.

Emacs allows customizing faces and more. Org-modern [1], for example, uses font styling and fringes to make it less "technical".

[1]: https://github.com/minad/org-modern


I'd expect there will be a lot of new solutions coming soon, because Polymode [0] seemingly solves the hard problem that org-babel struggles against: the immiscible major modes, and all the extra syntax and buffer-switching that org-babel needs to work around that problem.

[0] https://www.masteringemacs.org/article/polymode-multiple-maj...


There is EIN, which indeed uses Polymode, and of which I am a happy user: http://millejoh.github.io/emacs-ipython-notebook/


Binder handles a stack of languages, but I've only tried Python & Julia.

Gonna place a bet that AI delivers your wish within 2-3 years.


What kind of programs do you use Jupyter for and what are the main benefits in your opinion?


Come to think of it, I don’t know why Node doesn’t have this. It’d be pretty trivial, no?


Creating something in the scope of Jupyter is not trivial.

You do have a few nodejs/typescript/javascript kernels that work fine, so I don't see the point of rewriting Jupyter in Node specifically for any reason.

In any case, maybe creating something for the browser only is easier because JS is native, but still I wouldn't call it "pretty trivial". For this use case (and more) you have projects like

https://starboard.gg/

https://observablehq.com/


Perhaps I’m not fully up to speed with everything Jupyter does. It strikes me as a REPL with a canvas in what is basically MDX. What am I overlooking?


You're overlooking the power that a preconfigured REPL with a persisted canvas integrated with markdown brings to the table.

It is a really powerful toolset, and building your own environment from separate parts is not trivial; so having it preconfigured in a standard way that others can reuse is no small matter.

I prefer online notebooks with a functional-reactive behaviour, such as ObservableHQ (which is JavaScript-based, rather than python); but Jupyter was the first popular one, so it hit hard.

https://mobile.twitter.com/observablehq/status/1234868724588...


You might be misinterpreting me here. I’m not dismissing the utility. Quite the contrary, I’m saying that it wouldn’t be terribly groundbreaking or challenging to implement compared to some other green field problem. Although at this point I do agree that trivial was the wrong word.

Observable looks cool, but it doesn’t seem portable. It really bums me out that so much of the JS ecosystem is so sheerly commercial in this way. Sometimes I don’t understand why this is like this compared to the Python ecosystem. It’s not that I’m cheap either. I’ll happily pay for things that are properly monetized and provide a good value for my time. This doesn’t seem like it.


You might like tslab. It allows you to have the full notebook experience with either JavaScript or Typescript. My day to day is data analysis. JS/TS runs circles around pandas and you aren’t constrained to vectorized operations. If there were a suitable replacement for matplotlib I would leave python behind altogether.

https://github.com/yunabe/tslab


Rust works great in Jupyter.


What is the experience like using a compiled language in such an interactive environment? For example, can you do something in one cell and utilize the result in another, or not?


Yes you can, it is no different from using OCaml or Haskell REPLs, for example.

https://github.com/google/evcxr/blob/main/evcxr_jupyter/samp...


Nice! I've been using and recommending JupyterLab Desktop to newcomers since the first release, and things work great out of the box. To give you an example, we held an "Intro to Python" tutorial with absolute beginners, and everyone was able to get their Python coding environment set up in 5 minutes instead of 1 hour (as is usual otherwise when beginners have to do command line stuff).

I think one of the biggest reasons why the R community is so strong is because of how easy it is to install RStudio and get started doing stats with a GUI program, and I see JupyterLab Desktop filling the same niche, for stats and for learning to code in Python more generally.

The pip-install business was always the weakest link in the Python beginner's journey, but now things are going to be much smoother.


What is the problem with installing Python outside of JupyterLab (or "the regular way") in your opinion? I've been teaching Python basics for a few years now, and usually we get everything up and running with Python and VSCode in three steps as well: installing Python if it's not installed, installing VSCode, and then installing the extension.


The main difficulties we've faced are around cross-platform instructions, specifically Windows. I suppose for basic Python it would be easy enough, but the complexity escalates once you need modules and have to run a `pip install` or two, because this requires learning about the command line (which some people have never seen before).

Here are some examples of instructions we've had to watch out for in the past, that are no longer needed when using JupyterLab Desktop:

  - Windows installer: make sure to check the box that adds Python to %PATH%
  - Use cmd.exe not PowerShell (which has weirdness if using venv[1])
  - Run the command `python -m venv myvenv`
  - Activate myvenv (OS-specific instructions)
  - Run the command `python -m pip install pandas jupyterlab`
  - Run `jupyter-lab` to get started
  - Press CTRL+C (SIGINT) to stop execution at the end
It's doable, and we were able to run the tutorials since we had several co-hosts available to help beginners when they got stuck, but we definitely lose some momentum every time we try to run things this way.

[1] https://docs.python.org/3/library/venv.html#:~:text=On%20Mic...


Another strategy that works really well with beginners is to use Jupyter notebooks via https://mybinder.org/ links. We put all the materials on GitHub, and then send the workshop participants a link[2] that launches a remote JupyterLab, so they don't have to install anything at all.

That works well, but make sure to download your notebook at the end of the session, because the sessions are ephemeral (they disconnect after 20 minutes with no commands).

[2] Example mybinder link that launches a notebook from a github repo: https://mybinder.org/v2/gh/minireference/noBSstatsnotebooks/...


I find that explaining virtualenvs alone quickly becomes a morass. You can skip the discussion entirely, but it is such a necessary step in good practices that it feels negligent to omit it.

Not sure about what platform or level of experience you are accustomed to, but I am frequently working with Windows-only users who have never even heard of PATH. Inevitably, someone needs assistance because something got stuck when configuring the tooling and Python cannot be found. Especially fun when it is the person's Nth attempt at learning Python and I discover that there is a historical half-working interpreter already present.

Conda is also a huge hurdle which I try to avoid, but if I know the ultimate aim is machine learning, I've gotta deal with that on-boarding.


Thanks for answering. I understand that the interpreter situation can be annoying. There is WinPython [0] to circumvent that to some degree. I feel like if I don't do it the "VSCode and py-file" way, it'll be more and more difficult to keep everything together when teaching about modularity and putting functions in helper scripts, putting tests in other directories and such. I think it's just because I got used to using VSCode and not notebooks, although I've used them for a while.

[0] https://winpython.github.io


Why does the macOS version run as slow Intel code under Apple Silicon instead of native? It's just Electron, why is it not running native arm64?

I found this: https://github.com/jupyterlab/jupyterlab-desktop/issues/279

So JupyterLab Desktop is not new? I still don't understand why Electron apps can't immediately be built for arm64...


I have no idea what Jupyter is, only that it's vaguely related to machine learning.

But… wild shot - a lot of machine learning stuff is not on M1, because there is no free ARM compiler for FORTRAN and there is some FORTRAN code in some popular machine learning stuff. Like R, I think.


Huh? The GNU Compiler Collection contains a Fortran frontend, and GCC can target nearly every architecture under the sun, including AArch64.


This is from 2020; maybe it was fixed.

> GCC’s GFortran supports 64-bit ARMs: … However, the Apple silicon platform uses a different application binary interface (ABI) which GFortran does not support, yet.

There was some experimental branch or whatever. I’m not sure of the state now.


I can see why you'd say it's ML, but Jupyter is a "notebook" or kind of "literate programming" environment for Python (originally) and other languages, a kind of REPL on steroids.

You see it in a lot of ML examples around the Internet because it's a pretty good way of demonstrating and documenting ML for tutorials.


Ah, okay. Then I’m totally off the mark here.


Also it's split into a frontend "client" for the UI and a backend "server" (also called a "kernel") for computation. The client doesn't need any of the Fortran BLAS stuff, only the backend, which runs in a completely separate process and communicates over network ports.


I find JupyterLab works great if you intend to publish your work on the web. When I do straight data science / machine learning research and prototyping, I find PyCharm Scientific Mode much better suited for the task than JupyterLab. It does not have the publishing UI overhead and basically re-creates the MATLAB UI/UX for Python, together with the ability to run cells separately, which is fantastic for prototyping.


I agree, but currently there are still quite a few bugs that force me to open Jupyter directly.

Eventually, I think there should be a better separation of concerns: IDEs should be IDEs and the Jupyter notebook should be a thinner background service, à la the TypeScript language service.


As a non-DS person who enjoys tinkering with data science stuff, I'm not a big fan of having to code in JupyterLab.

But with no proper GPU locally I'm kind of forced to use something remote. And getting DS dependencies set up correctly is almost impossible, so even if I had a GPU I'd probably end up with something remote working out of the box anyway.

I've tried to connect to them with PyCharm, and even downloaded a trial of DataSpell, their new IDE for DS. But I can't really get the integration to work. I'd like to do everything from the IDE, using the interpreter and power of the remote server. But it feels like we're not there yet - so many small bugs.


PyCharm has a full remote workflow, which I find too difficult to use. I am sure they will make it easier, as it is still in its infancy. Instead, I just configure a remote SSH interpreter. Then you can create a remote SSH project pretty easily. The wrinkle is that you cannot create an SSH project with Scientific Mode (it says not supported). Instead you create a regular SSH project and then, after the project is created, you switch to Scientific Mode. Everything will work fine then. I do not think the above process will work in the Community edition of PyCharm - you have to use the Professional Edition.


Running Jupyter notebooks inside Visual Studio Code is also pretty convenient.

Just create a file with the .ipynb extension, open it and install the suggested packages.

https://code.visualstudio.com/docs/datascience/jupyter-noteb...


No, it's better to create a notebook with a .py extension and use the light/percent format for cells. This creates a text-file notebook that can be git diff'd.



One of Matlab's best features.


Absolutely. You can always export it as .ipynb. You now have the full power of an IDE (with Pylint, Pytest, Black, etc.) at hand.


The python ecosystem is moving towards the MyST format for jupyter notebooks to solve this issue,

https://jupyterbook.org/en/stable/content/myst.html


This! And Spyder can work as a jupyter notebook by evaluating on a cell-by-cell basis.


I don't know about git, but github is able to do smart diffs with .ipynb files.


Notebooks are complex because they allow you to mix different cell types and keep outputs. That's not a notebook.


I think there is also a Jupytext extension so that they can just be saved as markdown files.


Or even better: as plain Python files, whose comment paragraphs are interpreted as markdown cells. Thus you have just one file, notebook.py, that you can run directly with the Python interpreter, open with a text editor, or open in the browser and edit/run like a notebook. Jupytext is fantastic!

Why this is not core functionality of jupyter is beyond me.
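
Roughly, such a file looks like the sketch below (a made-up example in the spirit of what Jupytext produces for a script; the exact cell-splitting rules are Jupytext's). It is plain Python, so it also runs as-is with the interpreter:

    # # A tiny analysis
    #
    # A comment paragraph like this one is interpreted as a markdown cell
    # when the file is opened as a notebook.

    import statistics

    data = [1, 2, 3, 4]

    print(statistics.mean(data))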


Indeed. It’s hard to beat the IDE concept here. Especially given the unique features of VS Code like remote development and dev containers.


Any plans for a portable JupyterLab Desktop?

I don't see a portable version of this software for Windows. I use Anaconda and it has its own Jupyter, but I would not mind having a portable desktop version of JupyterLab that I could just bring up and have it work with the default Python or R interpreter in Conda.

:)


Agreed, that would be a killer feature: unzip this package and get a functional Python + Jupyter + scientific (numpy, pandas, scipy, matplotlib) environment.

I have been on-and-off teaching some people Python, and the initial setup on-ramp is horrible. OK, so install Python; now, ignore-these-complications-for-now: create a "virtualenv", use this thing called "pip", install these half-dozen things to get a basic notebook (Jupyter + scipy things), install these other half-dozen quality-of-life things, you should probably also have "conda" for the future, etc. That's a lot of nonsense for someone I am trying to show an alternative to Excel.

My shortcut, "You want to try Python?" approach has been to start with JupyterLite[0] where I can immediately get people coding and delay that pain.

[0] https://github.com/jupyterlite/jupyterlite


Wait, what is wrong with Anaconda?


I love notebooks. For me there is no better way to dive into a new dataset or badly documented API than to just trial-and-error your way through, keeping whatever messy state you want. I always see them as "throw away": anything that is useful gets turned into typed, modularised .py files.


Re the debug experience, I completely agree - it just isn't sexy… yet? Browser dev tools have proven that debugging is super important. It feels like code debuggers have been stagnant for the past couple of decades.

https://thonny.org/ has been featured on HN recently. It looks like at least an iterative step forward.


So here is a piece of anecdata for those interested in comparisons to VS Code: I saw this HN post and installed the JupyterLab desktop app, and set up my current play project on it (SICP exercises in Jupyter; I'd previously been using the browser). I also set up the same thing in VS Code.

For this example I couldn't get vscode to perform correctly (maybe possible but not obvious), while the desktop app worked as advertised.

Vscode mistakenly labelled the cells of my scheme notebook as python, while correctly running the scheme kernel. The result was that I could run my cells and get the right output, but entering code was annoying as the auto indenting and error markers were based on python instead of scheme.

Anyway, unless I can fix the vscode weirdness, score one for the desktop app, I will keep using it.


A small side observation: do you remember the first "fully integrated environments"? Like Xerox PARC Smalltalk workstations, Lisp Machines, etc.?

Well, it does not matter how much people invest in "applications" now; that was and is the way to go, because integration has proven its value COUNTLESS times. The ONLY remaining reason not to have it is business: an unsustainable form of business which DEPRIVES HUMANITY of much evolutionary potential in exchange for short/mid-term big money for just some.

Think about that anytime you feel the power of such environments in some growing application, then think about what you miss being tied to modern systems because too many have developed stuff on them and have not done the same on classic ones.


Atom + Hydrogen allowed for line-by-line execution and immediate output in regular .py files, no cell definitions needed. I haven't seen this functionality anywhere else yet and miss it a lot. If anyone knows an alternative, please let me know.


Smalltalk and Common Lisp IDEs were the very first to offer similar capabilities.


Spamming debugger breakpoints everywhere in pycharm


Why would I switch to this?

I'm a huge fan of Jupyter, I use Jupyter Lab as my primary IDE, but a little confused here. Maybe if I frequently connect to different servers? The python env discovery looks cool, but nb_conda_kernels does this for me now.


IMHO this is for folks used to a desktop app experience, like VS Code. They want to download something and run it, not configure and setup a local web app.


Yep. They need something like this for Python period, not just Jupyter. Something I can just put on my boss' computer and tell him "load a script, change a number, hit run". No futzing around with environments.

Like qbasic.


Exactly, I love the web. I love being able to switch between my chromebook and other computers, multiple operating systems. I also like it all being in the cloud.


it's not for you


Is there a Jupyter-like environment for the shell? For example, you could tell it to start from a Docker image, then give it shell commands to run in each block, and it could show output. Maybe it could even use `script` to record interactive sessions (though there'd be no way of replaying those sessions on reloads; you'd have to detect file changes, diff, and apply patches).


Just start each cell in a notebook with %%bash

Jupyter is a generic platform these days, so spans pretty much all langs. I learnt powershell on a mac by writing notes in a jupyter notebook and executing powershell cells.
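
For example, a cell like this (a sketch; %%bash is a standard IPython cell magic) runs its body through bash and shows stdout as the cell output:

    %%bash
    # everything below the magic line runs in bash
    echo "hello from a shell cell"
    uname -a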


If you’re talking python then you just use ipython. There’s an auto reload feature that live updates code in situ so you don’t even have to reload data in between.

I recorded a video with terrible quality audio years ago that shows how effective it is. https://youtu.be/k-mAuNY9szI
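
The autoreload bit is two standard IPython magics; a minimal sketch (the module name is hypothetical):

    %load_ext autoreload
    %autoreload 2            # re-import modified modules before running each command

    import mymodule          # hypothetical module you are editing in parallel
    result = mymodule.run()  # picks up saved edits to mymodule.py without restarting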


Yes, try running ipython from the shell. It should already be installed if you installed Jupyter.


tmux + tmuxinator seems like a solution?

Though my workflow right now is mostly SSH + tmux + jupyterlab


Somewhat off topic:

Does anybody have an IDE solution that offers auto-completion in multiple languages in the SAME notebook? For example, I'm currently using Jupyter notebooks in DataSpell, primarily in Python, but for a few cells I'll use R (using rpy2) and the R code doesn't get auto-completion.


Hi! My name is Claudia and I am a PM at Microsoft (opinions are my own) working on Polyglot Notebooks in VS Code. Polyglot Notebooks are exactly what you are describing! They are notebooks where you can use multiple languages AND share variables between them to ensure a continuous workflow. Not only that, but each language has language server support. Polyglot Notebooks currently supports C#, F#, PowerShell, JavaScript, HTML, SQL, KQL, and Mermaid.

We have just added support for Python and R integration and I am actually in search of external testers! If you are willing to sign an NDA to try out our Python and R integration and give us feedback please drop your email in the form below and I will reach out with instructions for you to try it out!

https://forms.office.com/r/UQchfQSGa5

If you'd like to start trying it out today you can install the extension from the marketplace here: https://marketplace.visualstudio.com/items?itemName=ms-dotne...

https://github.com/dotnet/interactive


That's amazing! I'll be happy to try Python & R integration! I've signed up on the form!


This is fantastic.

I will test out PowerShell. Thanks for providing links.


It's great but man... wish the design looked better. Someone from Atom or Arc should make it look a little nicer


It embeds Jupyter and looks the same as any JupyterLab install aside from the welcome page. I think it looks fine.


This might be unfair judging off screenshots, but the Welcome page and Connect menu both seem modelled off VS Code. The rest is from JupyterLab, which is already very heavily used. It's tough to redesign something that already has a lot of users.


Try Jupyter inside VS Code. It looks better.


I am really, really not a fan of IDEs being a part of the environment in which you execute the code. Basically I want my virtual environment separate from the IDE code. Both Spyder and Jupyter do this and I don't like it.


The code is running in a separate interpreter.


Really? That's nice. I thought you had to install some ipykernel and all kinds of libs in your venv.

Maybe that's changed now


This is the first Medium article I've read in a while, and wow, the interface feels so unbalanced to me. Having the author and recommended articles be stationary and take up 1/4 or 1/3 of the content space seems like a huge waste, and then centering everything and having decently large blocks of white space on either side of the content makes the text side of the space feel so much heavier. Maybe I've been using substack too much, but man, I'm glad to be moving away from Medium.


Interesting, does make me wanna try jupyter now


And yet another Electron app with a VSCode skin hits the road.


Is there no flatpak or AppImage for this desktop-app?


the jupyter desktop plus whisper and gpt might be the new way we use computers, like iron man's jarvis :)


....and here i am just using DataSpell


Is there something that can do the same with C#? I know C# and its ecosystem much better, and have known them longer, than Python.



This desktop version goes against everything that makes jupyter awesome.


What's wrong with it, then?



