There are by now several "paradigms" for interacting with a Python interpreter (beyond the basic console), depending on what the primary "product" of the work is. You have the notebook style with Jupyter, and the "studio" style with Spyder and various IDEs (which even have multiple modes).
The mode that I find most useful, but which still leaves something to be desired in terms of user friendliness, is the "debug" mode: step-by-step execution that at the same time provides access to variables, dataframes, etc. in a second panel.
One way or another it's possible to debug, but a "debug-first" paradigm would make it more fluid and fun to write good code that does exactly what is meant.
Have you tried debugging with Visual Studio Code? It sounds a lot like what you're asking for. Stick breakpoints where you want them, step through, etc., and there's a variable viewer in the left-hand panel. That isn't great on its own, but you can right-click on a dataframe and open it in the data viewer, which opens a new tab showing the dataframe as a table; if you then step further through the program, you can hit refresh on that view and it'll show you the updated frame.
Yep, and if you click the three dots next to the debugger config selector in the top left, you can access the debug console and play in a live terminal with access to all of the variables and data at the point where execution is paused.
The REPL it provides is pretty close. You can also use special comments to designate cells and just run those parts of a larger program in isolation with a REPL, which I have used for debugging.
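If you haven't seen them, the cell markers are just `# %%` comments, so any plain .py file can be run piecewise in VS Code's interactive window. A minimal sketch (the dataframe is only illustrative):

    # %% first cell: build some data (runs on its own in VS Code)
    import pandas as pd
    df = pd.DataFrame({"x": [1, 2, 3]})

    # %% second cell: poke at it in isolation, REPL-style
    df.describe()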
Coincidentally, I just tried this yesterday and it was a good experience. No setup time, I just clicked to the left of a line of code to set a breakpoint, and then clicked the debug icon instead of the run icon. The local variable viewer was good enough for me.
I think that JetBrains' PyCharm does an excellent job in the debugging arena.
For the basics, you can effortlessly set breakpoints, click through stack frames, inspect and modify variables via the GUI variable inspector window, execute statements in the console, etc.
A nice extra feature is that as you step in, out, and over pieces of code, the lines that are run get annotated with the resulting values of the variables that change. E.g. when for-looping over an iterator, the value of x gets displayed next to the *for x in y:* line.
I also really love the conditional breakpoint functionality; it allows you to only break out into the debugger when a certain condition (expressed in Python by the user) is met. Very handy when iterating over larger pieces of data that have sparse bugs in certain edge cases.
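To make that concrete, here's an illustrative sketch (the data and the condition are both made up):

    # a loop with one sparse edge case buried in a large dataset
    records = [{"id": i, "value": i} for i in range(100_000)]
    records[54_321]["value"] = None  # the rare bad record

    for rec in records:
        # set a breakpoint on the next line with the condition
        #     rec["value"] is None
        # and the debugger stops only on the single problematic iteration
        result = rec["value"]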
Edit: as a bonus, it's quite nice that the vim plugin also works in the debugger console :)
The most enjoyable programming experience I've ever had, hands-down, was writing Clojure in LightTable, an IDE that allowed line-by-line inspection/execution, which sounds like what you're talking about.
The project looks like it died unfortunately, but I think they did end up adding Python support before that happened. Might be worth checking out.
I believe this is called REPL-driven development in Clojure: whenever you are writing code you have the REPL available, and it's one of the main selling points of Clojure.
My opinion is that I get very close to this REPL style with a notebook and (right-clicking on the notebook tab in JupyterLab) "open console for notebook".
This opens a REPL (an IPython console) alongside the notebook. Then you can code in all kinds of Clojure-style Python ways using nested dictionaries.
You have to make sure to copy-paste your finished code into the notebook or another .py file so as not to lose it, but that's minor.
I've been doing the same thing in VS Code for Julia/Python, but the experience is much worse than LightTable [0]. It's slow, the interface is clunky/janky, and no one has invested effort in designing the UI to make navigation/information clear and accessible.
[0] Admittedly LightTable was pretty buggy so it wasn't all fun and games. But still - I would love for my day-to-day development environment (Dart/C++) to look like that.
That functionality and more is available for the most common editors/IDEs: VS Code Calva, Emacs CIDER, IntelliJ Cursive.
You can also write notebooks with Clerk, which gives you a whole bunch of data-visualization utility and renders to a browser (via websockets). It also has a static HTML export. The cool thing here is that it uses just normal Clojure files, so all your tooling just works.
That's cool, I knew about CIDER but not Cursive/Calva (I haven't used Clojure in years). I would love for this to be more common for other languages (F# in particular would really benefit, IMO).
Actually, what would be perfect in my world is if the Debug Adapter Protocol supported live reload: you change a line and everything is re-executed / incrementally updated to reflect the change, and you are back at your current breakpoint in an instant.
This debug-first stepper, complete with workspace view, is now supported in JupyterLab for Python (not just the desktop app) using the xeus-python kernel.
The feature in MATLAB that is great for this is the 'workspace' panel - and it's the thing I love most about the MATLAB IDE.
Being able to watch your variables as the program executes, see what was allocated, see what wound up being stored in them, see what size any matrices are, and being able to stop and open them up and check the data, see data types at a glance, etc. really helps people who are just getting started with understanding imperative, procedural code.
There used to be a Python IDE that had the same feature called Rodeo, but it has been abandoned.
There also used to be a Julia IDE called Juno which was excellent at this as well, but it too has been abandoned in favor of the Julia VS Code extension, which has similar features.
Spyder also has a variable explorer. Spyder is often referred to as a MATLAB-like IDE for Python, and MATLAB is even referenced in its documentation.
I think something hard when it comes to this is defining the resolution or verbosity of what you see. What I mean is that sometimes you want to see implementations etc., and sometimes you want to keep something abstracted away and "debug around" it.
I feel that the "step in" and "step over" controls are somewhat too simplistic for this. It would be nice, for example, to mark some library functions that I never want to step into.
I think debugging is great but there is still a lot of room for improvement.
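For what it's worth, the standard library's pdb can already express that last wish through its skip parameter, which takes glob-style module-name patterns (the patterns below are just examples):

    import pdb

    # "step" will never descend into frames originating in matching modules
    debugger = pdb.Pdb(skip=["pandas.*", "numpy.*"])
    debugger.set_trace()  # drop into the debugger here, libraries skipped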
JetBrains IDEs have a handy "step into my code" control in addition to the usual ones. It's not quite as granular as picking specific libraries not to step into, but it requires no configuration and it's often exactly what you want.
Yes, that is exactly my feeling. There are a number of dimensions to be covered, like code resolution, the representation of more complex objects (including prior states), visualization, etc. That's before we get into esoteric stuff like C++ bindings.
The "debugging/scripting a data pipeline" task is somewhat orthogonal to building applications or exploring data but these days it is something alot of people are effectively doing.
The first thing I do after starting a notebook is select 'new console for notebook', which brings up a live console underneath or next to your notebook window, as you prefer. Then if you hit the little bug icon in the notebook toolbar (on the right, next to the kernel), and the other bug icon in the right sidebar, you get full interactive control and views of everything.
JetBrains' DataSpell has the nicest notebook UI in my experience - lots of database integrations, R as well as Python, endlessly configurable. It's a relatively new product and still hiccups on some things, e.g. ipywidgets and other interactive notebook tools.
I will drop IPython.embed() breakpoints along an execution path to debug with variable inspection. I tried pdb earlier and chalk my not really getting it up to user error on my part.
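The pattern is simple enough to show in a few lines; the function and variable names here are just placeholders:

    from IPython import embed

    def transform(rows):
        cleaned = [r for r in rows if r is not None]
        embed()  # opens a full IPython shell right here, with `rows`
                 # and `cleaned` in scope; exit the shell to resume
        return cleaned

    transform([1, None, 3])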
I use Jupyter inside VS Code -- the Jupyter interface inside VS Code has a nicer UI and is very polished (it supports Black reformatting, refactoring, step debugging, etc.).
Haven't gone back to vanilla Jupyter or JupyterLab in the browser for years.
For me, I find I need both JupyterLab and the VS Code Jupyter extension. The VS Code extension is superior for step-by-step debugging, especially with the integrated REPL console. However, I notice running cells in VS Code is several orders of magnitude slower than executing the same cells in a JupyterLab session. Also, I use several JupyterLab extensions, such as the citation manager, MathJax 3, etc., and custom kernels utilising Docker/GPUs, and I'm not sure how to use these in VS Code; I'm also not a fan of VS Code's use of KaTeX over MathJax 3.
I could never figure out a good reason to use JupyterLab over regular Jupyter notebooks, mainly because of compatibility issues. Still, it is good that they are making progress.
Over the last 5 years I've gone:
* Anaconda install of Jupyter Notebooks (until the license change)
* running JupyterHub server for my organization
* PyCharm
* VSCode
VS Code has made fantastic progress in the last few years for Jupyter Notebook support. Integrated with Copilot it is scarily productive.
At the same time, for teaching non-technical audiences it is hard to beat Google Colab for availability (mybinder.org-type solutions are generally more brittle).
I agree with the comments about bigger notebooks getting slower.
But I'm not sure it's enough of a hit to outweigh the other productivity gains I get, which are myriad. With VS Code, I get vim emulation in Jupyter notebooks that actually works. I get better autoformatting options and linting of notebook code. And I get a proper programmer's editor for working with *.py files. And that's just my greatest-hits list.
I have tried VS Code for notebooks, but for the life of me I am not able to get an interpreter console attached to the same kernel running the notebook, and for me that's a no-go. I usually use the console as the way to test things/syntax easily, and then move that to the notebook.
I am also mostly using VS Code for notebooks; however, one big downside is that performance goes down the drain when the notebooks are large in size (i.e. contain images, plots).
Cell execution slows down drastically (up to 10x) with notebook file size. Still looking for a solution to this problem.
Jupyter is the main reason I come back to writing Python. I really wish there was a Jupyter environment for any language. I'm aware there exist kernels for other languages, but many of them are unstable, slow, outdated, or miss important features. I have tried multiple, and every time I come back to Python, not because I want to write Python, but because Jupyter works so well.
R and Julia’s aren’t good? They’re in the name, so I would have thought support would continue to be strong. I like Julia’s Pluto and the reactive style more than Jupyter anyway.
Every year, most JuliaCon workshops are presented via Jupyter notebooks; the Julia support is pretty good and has remained stable for a long while.
Some of the unofficial extensions like `code_prettify` don't work for Julia kernels, but at least for my usage, I've never felt the need for such tools in a Jupyter notebook.
IJulia is falling behind. If you look at GitHub activity, it has had 1 commit in the past month, compared to 9 authors and 39 commits in the past month for IPython. IJulia issues are piling up.
Anaconda does really great marketing, outreach, advocacy, education... it's a good first place for data scientists to land. But actually installing and using Conda is always annoying. I wind up doing everything with pip and venv, because pip just works.
On the topic of Conda and Jupyter: install Jupyter via a virtualenv pip installation, and then, to use a specific conda kernel, load your conda environment in another shell and install the kernel (e.g. IRkernel for R) in that environment. As long as you install to the main Jupyter prefix, Jupyter should see the new conda kernel.
You can do this for as many kernels/conda envs as you need
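For a Python conda env, the analogous registration step uses ipykernel; roughly like this (paths and names below are illustrative):

    # run this inside the activated conda env; --prefix points at the
    # virtualenv that holds the main Jupyter installation
    python -m ipykernel install --prefix /path/to/jupyter-venv --name my-conda-env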
And from whatever source, I knew Mathematica notebooks had looked the way they do since /at least/ the mid-90s. (My Mathematica days were college, and I did not spend much time contemplating the history of my tooling.)
From elsewhere in the comments, it sounds like IPython -> Jupyter wasn't just a rebranding, as I had assumed at the time.
I cannot see how. Smalltalk and Lisp foundations allow modification of the environment itself even while running. Jupyter cannot do that, and the notebook interface (basically originating from Mathematica) isn't suitable for generic programming, nor does it even have that goal.
>"Symbolics Lisp Machine demo", special focus on 5 minute onwards.
To provide more information, since the video description lacks it: this is OpenGenera, a Lisp Machine OS designed to also run hosted on Unix systems. This specific version (seeing that it uses JPEG) should be from the mid-to-late 90s. I had tried some version but didn't remember being able to have graphics in the REPL.
Yes, that's basically how the Jupyter console works. But I will still argue that the Jupyter model is such a weak version of this that it can hardly be called a derivative. The listener isn't limited to being used like an isolated shell but can be attached to any part of the environment. (And this introspection is core to Lisp and Smalltalk environments.)
>Mathematica was inspired from Lisp
The Wolfram Language, yes. The cell-based notebook interface was new, but similar to the previous point, it is a more limited version of what you already had available; specifically, interchanging text and code in Zmacs. Something that was also an advantage (easier to reason about) for what Mathematica was used for.
I’ve been messing around with Microsoft’s Polyglot Notebooks which looks pretty interesting. Caveat: I’ve not used Jupyter and that was my first experience of notebooks.
#+begin_src R :colnames yes
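## the ten most frequent words longer than 3 characters in intro.org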
words <- tolower(scan("intro.org", what="", na.strings=c("|",":")))
t(sort(table(words[nchar(words) > 3]), decreasing=TRUE)[1:10])
#+end_src
Is it possible to have minimal syntax for these code blocks? Markdown is nice because it looks clean, and the syntax does not get in the way. Markdown code blocks are just fenced with triple backticks.
You could put all the common headers in a property drawer of the top-level heading, and all child source code blocks inside that heading will inherit them.
* Notebook
:PROPERTIES:
:header-args:R: :colnames yes
:END:
#+begin_src R
words <- tolower(scan("intro.org", what="", na.strings=c("|",":")))
t(sort(table(words[nchar(words) > 3]), decreasing=TRUE)[1:10])
#+end_src
As for making the block delimiters look different, there are various ways of making the text display differently from the actual text. For example, you could use the built-in `prettify-symbols-mode`.
You could, for example, use Python which I believe would be clearer than this R mess ;-)
Joking aside though, if the src block header (which can have a lot of options set, including specifying the environment, tangling, block variables, or even a totally separate interpreter version) is a huge problem, there are a plethora of presentation customizations.
Emacs allows customizing faces and more. Org-modern [1], for example, uses font ligatures and fringes to make it less "technical".
I'd expect a lot of new solutions coming soon, because Polymode [0] seemingly solves the hard problem that org-babel struggles against: the immiscible major modes, and all the extra syntax and buffer-switching that org-babel needs to work around that problem.
Creating something in the scope of Jupyter is not trivial.
You do have a few Node.js/TypeScript/JavaScript kernels that work fine, so I don't see the point of rewriting Jupyter in Node specifically, for any reason.
In any case, maybe creating something for the browser only is easier because JS is native, but still I wouldn't call it "pretty trivial". For this use case (and more) you have projects like
You're overlooking the power that a preconfigured REPL with a persisted canvas integrated with markdown brings to the table.
It is a really powerful toolset, and building your own environment from separate parts is not trivial; so having it preconfigured in a standard way that others can reuse is no small matter.
I prefer online notebooks with a functional-reactive behaviour, such as ObservableHQ (which is JavaScript-based, rather than python); but Jupyter was the first popular one, so it hit hard.
You might be misinterpreting me here. I’m not dismissing the utility. Quite the contrary, I’m saying that it wouldn’t be terribly groundbreaking or challenging to implement compared to some other green field problem. Although at this point I do agree that trivial was the wrong word.
Observable looks cool, but it doesn’t seem portable. It really bums me out that so much of the JS ecosystem is so sheerly commercial in this way. Sometimes I don’t understand why this is like this compared to the Python ecosystem. It’s not that I’m cheap either. I’ll happily pay for things that are properly monetized and provide a good value for my time. This doesn’t seem like it.
You might like tslab. It allows you to have the full notebook experience with either JavaScript or Typescript. My day to day is data analysis. JS/TS runs circles around pandas and you aren’t constrained to vectorized operations. If there were a suitable replacement for matplotlib I would leave python behind altogether.
What is the experience like using a compiled language in such an interactive environment? For example, can you not do something in one cell and use the result in another, or can you?
Nice! I've been using and recommending JupyterLab Desktop to newcomers since the first release, and things work great out of the box. To give you an example, we held an "Intro to Python" tutorial with absolute beginners, and everyone was able to get their Python coding environment set up in 5 minutes instead of 1 hour (as is usual otherwise, when beginners have to do command-line stuff).
I think one of the biggest reasons why the R community is so strong is because of how easy it is to install RStudio and get started doing stats with a GUI program, and I see JupyterLab Desktop filling the same niche, for stats and for learning to code in Python more generally.
The pip-install business was always the weakest link in the Python beginner's journey, but now things are going to be much smoother.
What is the problem with installing Python outside of JupyterLab (or "the regular way") in your opinion? I've been teaching Python basics for a few years now, and usually we get everything up and running with Python and VS Code in three steps as well: installing Python if it's not installed, installing VS Code, and then installing the extension.
The main difficulties we've faced are around cross-platform instructions, specifically Windows. I suppose for basic Python it would be easy enough, but the complexity escalates once you need modules and have to run a `pip install` or two, because this requires learning about the command line (which some people have never seen before).
Here are some examples of instructions we've had to watch out for in the past, that are no longer needed when using JupyterLab Desktop:
- Windows installer: make sure to check the box that adds Python to %PATH%
- Use cmd.exe, not PowerShell (which has weirdness when using venv[1])
- Run the command `python -m venv myvenv`
- Activate myvenv (OS-specific instructions)
- Run the command `python -m pip install pandas jupyterlab`
- Run `jupyter-lab` to get started
- Press CTRL+C (SIGINT) to stop execution at the end
It's doable, and we were able to run the tutorials since we had several co-hosts available to help beginners when they got stuck, but we definitely lose some momentum every time we try to run things this way.
Another strategy that works really well with beginners is to use Jupyter notebooks via https://mybinder.org/ links. We put all the materials on GitHub, and then send the workshop participants a link[2] that launches a remote JupyterLab, so they don't have to install anything at all.
That works well, but make sure to download your notebook at the end of the session, because the sessions are ephemeral (they will disconnect after 20 minutes without commands).
I find that explaining virtualenvs alone quickly becomes a morass. You can skip the discussion entirely, but it is such a necessary step in good practices that it feels negligent to omit.
Not sure what platform or level of experience you are accustomed to, but I am frequently working with Windows-only users who have never even heard of PATH. Inevitably, someone needs assistance because something got stuck when configuring the tooling and Python cannot be found. Especially fun when it is the person's Nth attempt at learning Python and I discover that there is a historical half-working interpreter already present.
Conda is also a huge hurdle which I try to avoid, but if I know the ultimate aim is machine learning, I've gotta deal with that on-boarding.
Thanks for answering. I understand that the interpreter situation can be annoying. There is WinPython [0] to circumvent that to some degree. I feel like if I don't do it the "VS Code and .py file" way, it'll be more and more difficult to keep everything together when teaching about modularity, putting functions in helper scripts, putting tests in other directories, and such. I think it's just because I got used to using VS Code rather than notebooks, although I've used them for a while.
I have no idea what Jupyter is, only that it's vaguely related to machine learning.
But… wild shot: a lot of machine learning stuff doesn't run on M1, because there is no free ARM compiler for Fortran, and there is some Fortran code in some popular machine learning stuff. Like R, I think.
> GCC’s GFortran supports 64-bit ARMs: … However, the Apple silicon platform uses a different application binary interface (ABI) which GFortran does not support, yet.
There was some experimental branch or whatever. I’m not sure of the state now.
I can see why you'd say it's ML, but Jupyter is a "notebook" or kind of "literate programming" environment for Python (originally) and other languages, a kind of REPL on steroids.
You see it in a lot of ML examples around the Internet because it's a pretty good way of demonstrating and documenting ML for tutorials.
Also it's split into a frontend "client" for the UI and a backend "server" (also called a "kernel") for computation. The client doesn't need any of the Fortran BLAS stuff, only the backend, which runs in a completely separate process and communicates over network ports.
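As a concrete illustration of that split, you can attach a second client to a kernel that's already running, using the standard Jupyter CLI:

    # in one terminal: start a notebook and run some cells as usual
    # in another terminal: attach a console to the most recent kernel
    jupyter console --existing

Both clients then share the same variables, because they are talking to the same kernel process.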
I find JupyterLab works great if you intend to publish your work on the web. When I do straight data science / machine learning research and prototyping, I find PyCharm Scientific Mode much better suited for the task than JupyterLab. It does not have the publishing UI overhead and basically re-creates the MATLAB UI/UX for Python, together with the ability to run cells separately, which is fantastic for prototyping.
I agree, but currently there are still quite a few bugs that force me to open Jupyter directly.
Eventually, I think there should be a better separation of concerns: IDEs should be IDEs, and the Jupyter notebook should be a thinner background service, a la the TypeScript language service.
As a non-DS person who enjoys tinkering with data science stuff, I'm not a big fan of having to code in JupyterLab.
But with no proper GPU locally, I'm kinda forced to use something remote. And getting DS dependencies set up correctly is almost impossible, so even if I had a GPU I'd probably end up with something remote that works out of the box anyway.
I've tried to connect to them with PyCharm, and even downloaded a trial of DataSpell, their new IDE for DS. But I can't really get the integration to work. I'd like to do everything from the IDE, using the interpreter and power of the remote server. But it feels like we're not there yet; so many small bugs.
PyCharm has a full remote workflow, which I find too difficult to use. I am sure they will make it easier, as it is still in its infancy. Instead, I just configure a remote SSH interpreter; then you can create a remote SSH project pretty easily. The wrinkle is that you cannot create an SSH project with Scientific Mode (it says not supported). Instead you create a regular SSH project and then, after the project is created, switch to Scientific Mode. Everything will work fine then. I do not think the above process will work in the Community version of PyCharm - you have to use the Professional Edition.
Or even better: as plain Python files, whose comment paragraphs are interpreted as markdown cells. Thus you have just one file, notebook.py, that you can run directly with the Python interpreter, open with a text editor, or open in the browser and edit/run like a notebook. Jupytext is fantastic!
Why this is not core functionality of Jupyter is beyond me.
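For anyone who hasn't seen it, Jupytext's "percent" flavour looks roughly like this; it's an ordinary script that any Python interpreter will happily run (the content is illustrative):

    # %% [markdown]
    # # Exploratory analysis
    # This comment paragraph becomes a markdown cell in the notebook view.

    # %%
    import statistics
    print(statistics.mean([1, 2, 3]))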
I don't see a portable version of this software for Windows. I use Anaconda, and it has its own Jupyter, but I would not mind having a portable desktop version of JupyterLab that I could just bring up and have work with the default Python or R interpreter in Conda.
Agreed, that would be a killer feature: unzip this package and get a functional Python + Jupyter + scientific (numpy, pandas, scipy, matplotlib) environment.
I have been on-and-off teaching some people Python, and the initial setup on-ramp is horrible. OK, so install Python; now, ignoring-this-for-now complications: create a "virtualenv", use this thing called "pip", install these half-dozen things to get a basic notebook (Jupyter + scipy things), install these other half-dozen quality-of-life things, you should probably also have "conda" for the future, etc. That's a lot of nonsense for someone I am trying to show an alternative to Excel.
My shortcut "You want to try Python?" approach has been to start with JupyterLite[0], where I can immediately get people coding and delay that pain.
I love notebooks. For me there is no better way to dive into a new dataset or badly documented API and just trial-and-error your way through, keeping whatever messy state you want. I always see them as "throw away"; anything that is useful gets turned into typed, modularised .py files.
Re the debug experience, I completely agree - it just isn't sexy... yet? Browser dev tools have proven that debugging is super important. It feels like code debuggers have been stagnant for the past couple of decades.
https://thonny.org/ has been featured on HN recently. It looks like at least an iterative step forward.
So here is a piece of anecdata for those interested in comparisons to VS Code. I saw this HN post and installed the Jupyter desktop app, and set up my current play project on it (SICP exercises in Jupyter; I'd previously been using the browser). I also set up the same thing in VS Code.
For this example, I couldn't get VS Code to perform correctly (maybe possible, but not obvious), while the desktop app worked as advertised.
VS Code mistakenly labelled the cells of my Scheme notebook as Python, while correctly running the Scheme kernel. The result was that I could run my cells and get the right output, but entering code was annoying, as the auto-indenting and error markers were based on Python instead of Scheme.
Anyway, unless I can fix the VS Code weirdness, score one for the desktop app; I will keep using it.
A small side observation: do you remember the first "fully integrated environments"? Like the Xerox PARC Smalltalk workstations, Lisp Machines, etc.?
Well, it does not matter how much people invest in "applications" now; that was and is the way to go, because integration has proven its value COUNTLESS times. The ONLY remaining reason not to have it is business: an unsustainable form of business that DEPRIVES HUMANITY of much evolution potential, for short/mid-term big money for just some.
Think about that any time you feel the power of such environments in some growing application; then think about what you miss being tied to modern systems, because too many have developed stuff on them and have not done the same on the classic ones.
Atom + Hydrogen allowed line-by-line execution and immediate output in regular .py files, no cell definitions needed. I haven't seen this functionality anywhere else yet and miss it a lot. If anyone knows an alternative, please let me know.
I'm a huge fan of Jupyter - I use JupyterLab as my primary IDE - but I'm a little confused here. Maybe it helps if I frequently connect to different servers? The Python env discovery looks cool, but nb_conda_kernels does this for me now.
IMHO this is for folks used to a desktop app experience, like VS Code. They want to download something and run it, not configure and set up a local web app.
Yep. They need something like this for Python, period, not just Jupyter. Something I can just put on my boss' computer and tell him "load a script, change a number, hit run". No futzing around with environments.
Exactly, I love the web. I love being able to switch between my chromebook and other computers, multiple operating systems. I also like it all being in the cloud.
Is there a Jupyter-like environment for shell? For example, you could tell it to start from a Docker image, then give it shell commands to run in each block, and it could show the output. Maybe it could even record interactive sessions, script-style (though with no way of replaying those sessions on reload; you'd have to detect file changes, diff, and apply patches).
Jupyter is a generic platform these days, so it spans pretty much all languages. I learnt PowerShell on a Mac by writing notes in a Jupyter notebook and executing PowerShell cells.
If you're talking Python, then you just use IPython. There's an autoreload feature that live-updates code in situ, so you don't even have to reload data in between.
I recorded a video with terrible quality audio years ago that shows how effective it is. https://youtu.be/k-mAuNY9szI
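If it helps anyone, the setup is just two magics at the start of an IPython session (`mymodule` below stands in for whatever module you're editing):

    %load_ext autoreload
    %autoreload 2          # reload all modules before executing each input
    import mymodule        # keep editing mymodule.py in your editor...
    mymodule.run()         # ...and each call picks up the latest source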
Does anybody have an IDE solution that offers auto-completion for multiple languages in the SAME notebook? For example, I'm currently using Jupyter notebooks in DataSpell, primarily in Python, but for a few cells I'll use R (via rpy2), and the R code doesn't get auto-completion.
Hi! My name is Claudia and I am a PM at Microsoft (opinions are my own) working on Polyglot Notebooks in VS Code. Polyglot Notebooks are exactly what you are describing! They are notebooks where you can use multiple languages AND share variables between them to ensure a continuous workflow. Not only that, but each language has language server support. Polyglot Notebooks currently supports C#, F#, PowerShell, JavaScript, HTML, SQL, KQL, and Mermaid.
We have just added support for Python and R integration and I am actually in search of external testers! If you are willing to sign an NDA to try out our Python and R integration and give us feedback please drop your email in the form below and I will reach out with instructions for you to try it out!
This might be unfair judging off screenshots, but the Welcome page and Connect menu both seem modelled on VS Code. The rest is from JupyterLab, which is already very heavily used. It's tough to redesign something that already has a lot of users.
I am really, really not a fan of IDEs being part of the environment in which you execute the code. Basically I want my virtual environment separate from the IDE code. Both Spyder and Jupyter mix the two, and I don't like it.
This is the first Medium article I've read in a while, and wow, the interface feels so unbalanced to me. Having the author and recommended articles be stationary and take up 1/4 or 1/3 of the content space seems like a huge waste, and then centering everything and having decently large blocks of white space on either side of the content makes the text side of the space feel so much heavier. Maybe I've been using substack too much, but man, I'm glad to be moving away from Medium.