Really excited for this. Once some more time goes by and the most important Python libraries update to support no-GIL builds, there is just a tremendous amount of performance that can be automatically unlocked with almost no incremental effort for so many organizations and projects. It's also a good opportunity for newer, more actively maintained projects to take market share from older, more established libraries, if the older libraries don't take these changes seriously and finish them in a timely manner. It's going to be amazing to saturate all the cores on a big machine using simple threads instead of dealing with the massive overhead and complexity and bugs of using something like multiprocessing.
> using simple threads instead of dealing with the massive overhead and complexity and bugs of using something like multiprocessing.
Depending on the domain, the reality can be the reverse.
Multiprocessing in the web serving domain, as in "spawning separate processes", is actually simpler and less bug-prone, because there is considerably less resource sharing. The considerably higher difficulty of writing, testing and debugging parallel code is evident to anybody who's worked on it.
As for the overhead, this again depends on the domain. It's hard to quantify, but generalizing to "massive" is not accurate, especially for app servers with COW support.
Using multiple processes is simpler in terms of locks etc., but Python libraries like multiprocessing or even subprocess.Popen[1] which make using multiple processes seem easy are full of footguns which cause deadlocks due to fork-safe code not being well understood. I’ve seen this lead to code ‘working’ and being merged but then triggering sporadic deadlocks in production after a few weeks.
The default for multiprocessing is still to fork (fortunately changing in 3.14), which means all of your parent process’s threaded code (incl. third-party libraries) has to be fork-safe. There are no static analysis checks for this.
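A minimal sketch of the usual mitigation, assuming you control process creation: request the "spawn" start method explicitly, so children start from a fresh interpreter instead of inheriting the parent's forked threads and lock state.

    import multiprocessing as mp

    def job() -> None:
        print("child running")

    if __name__ == "__main__":
        # "spawn" starts a fresh interpreter: nothing is forked, so no
        # inherited locks held by threads you don't control
        mp.set_start_method("spawn")
        p = mp.Process(target=job)
        p.start()
        p.join()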
Libraries like this, easy to use but incredibly hard to use safely, have made Python incredibly painful for long-running production services in my experience.
[1] Some arguments to subprocess.Popen look handy but actually cause Python interpreter code to be executed after the fork and before the execve, which has caused production logging-related deadlocks for me. The original author was very bright but didn’t notice the footgun.
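To illustrate the footnote (an assumption on my part: the description matches preexec_fn, whose own documentation warns it may deadlock in the presence of threads):

    import logging
    import subprocess

    logging.basicConfig(level=logging.INFO)

    # preexec_fn runs Python code in the child after fork() but before
    # execve(). If another thread held the logging lock at fork time,
    # the child deadlocks trying to acquire it.
    proc = subprocess.Popen(
        ["echo", "hello"],
        preexec_fn=lambda: logging.info("running in the child"),
    )
    proc.wait()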
Same experience; multiprocessing is such a pain in Python. It's one of those things people think they can write production code with, but they just haven't run into all the ways their code is wrong, so they discover those bugs later in production.
As an aside I still constantly see side effects in imports in a ton of libraries (up to and including resource allocations).
>which make using multiple processes seem easy are full of footguns which cause deadlocks due to fork-safe code not being well understood. I’ve seen this lead to code ‘working’ and being merged but then triggering sporadic deadlocks in production after a few weeks
Just the other day I was trying to do two things in parallel in Python using threads - and then I switched to multiprocessing. Why? I wanted to immediately terminate one thing whenever the other failed. That’s straightforwardly supported with multiprocessing. With threads, it gets a lot more complicated and can involve things with dubious supportability.
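A minimal sketch of that pattern (task names are illustrative): run two tasks in parallel and kill the survivor as soon as the other fails.

    import multiprocessing as mp
    import time

    def flaky() -> None:
        time.sleep(1)
        raise RuntimeError("boom")

    def long_running() -> None:
        time.sleep(60)

    if __name__ == "__main__":
        a = mp.Process(target=flaky)
        b = mp.Process(target=long_running)
        a.start(); b.start()
        a.join()                # wait for the flaky task to finish
        if a.exitcode != 0:     # it failed...
            b.terminate()       # ...so terminate the other immediately
        b.join()

There is no supported equivalent of terminate() for threads, which is the gap being described.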
There is a reason why it's "complicated" with threads: doing it correctly just IS complicated, and the same reason applies to child processes; you just ignored that reason. That's one example of a footgun in using multiprocessing: people write broken code but they don't know it, because it appears to work... until it doesn't (in production, on a Friday night).
I don't agree. A big reason why abruptly terminating threads at an arbitrary point is risky is that it can corrupt shared memory. If you aren't using shared memory in a multiprocess solution, that's not an issue. Another big reason is that it can lead to resource leaks (e.g. the thread gets terminated while in a finally clause meant to close resources, so the resource never gets closed). Again, that's less of an issue for processes, since many resources (file descriptors, network connections) get automatically closed by the OS kernel when the process exits.
Abruptly terminating a child process still can potentially cause issues, but there are whole categories of potential issues which exist for abrupt thread termination but not for abrupt process termination.
This is not a matter of opinion. Always clean up after yourself, the kernel doesn't know shit about your application or what state it's in, you can not rely on it to cleanly terminate your process. Just because it's a child process (by default it's a forked process!) and not a thread doesn't mean it can't have shared resources. It can lead to deadlocks, stuck processes, all kinds of resource leaks, data and stream corruption, orphaned processes, etc.
> This is not a matter of opinion. Always clean up after yourself, the kernel doesn't know shit about your application or what state it's in, you can not rely on it to cleanly terminate your process.
If you have an open file or network connection, the kernel is guaranteed to close it for you when the process is killed (assuming it hasn't passed the fd/socket to a subprocess, etc). That's not a matter of opinion.
Yes, if you are writing to a file, it is possible abruptly killing the writer may leave the file in an inconsistent state. But maybe you know your process doesn't write to any files (that's true in my case). Or maybe it does write to files, but you already have other mechanisms to recover their integrity in this scenario (since file writing processes can potentially die at any time–kernel panic, power loss, intermittent crash bug, etc)
Different programming languages have different guarantees when it comes to threads. If IO is hidden behind object semantics, objects aren’t killed by “killing” a thread, and they can be gracefully terminated when they are deemed to no longer be in use.
You have no idea what I'm actually doing, yet you are convinced something bad is bound to happen, although you can't say what exactly that bad thing will be.
That's why I've always liked Java's take on this. Throw an InterruptedException and the thread is considered terminated once that has dropped all the way through. You can also defer the exception for some time if it takes time to clean something up.
The only issue there is that sometimes library code will incorrectly defer the exception (i.e. suppress it) but otherwise it's pretty good.
I feel like most things that will benefit from moving to multiple cores for performance should probably not be written in Python. OTOH, "most" is not "all", so it's going to be awesome for some.
I often reach for python multiprocessing for code that will run $singleDigit number of times but is annoyingly slow when run sequentially. I could never justify the additional development time for using a more performant language, but I can easily justify spending 5-10 minutes making the embarrassingly parallel stuff execute in parallel.
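The kind of 5-10 minute win described above, as a minimal sketch (the per-item function and inputs are illustrative):

    from multiprocessing import Pool

    def process_item(path: str) -> int:
        # stand-in for the annoyingly slow, independent per-item work
        return len(path)

    if __name__ == "__main__":
        inputs = [f"file_{i}.dat" for i in range(100)]
        with Pool() as pool:  # defaults to os.cpu_count() workers
            results = pool.map(process_item, inputs)
        print(sum(results))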
I've generally been able to deal with embarassing parallelism by just chopping up the input and running multiple processes with GNU Parallel. I haven't needed the multiprocessing module or free threading so far. I believe CPython still relies on various bytecodes to run atomically, which you get automatically with the GIL present. So I wonder if hard-to-reproduce concurrency bugs will keep surfacing in the free-threaded CPython for quite some time.
I feel like all of this is tragic and Python should have gone to a BEAM-like model some years ago, like as part of the 2 to 3 transition. Instead we get async wreckage and now free threading with its attendant hazards. Plus who knows how many C modules won't be expecting this.
I've always found the criticism leveled by the colored functions blog post a bit contrived. Yes, when you replace the words async/await with meaningless concepts I do not care about, it's very annoying to have to arbitrarily mark a function as blue or red. But when you replace the word "async" with something like "expensive", or "does network calls", it becomes clear that "async/await" makes intrinsic properties about your code (e.g., is it a bad idea to put this call in a loop from a performance perspective) explicit rather than implicit.
In short, "await" gives me an extra piece of data about the function, without having to read the body of the function (and the ones it calls, and the ones they call, etc). That's a good thing.
There are serious drawbacks to async/await, and the red/blue blog post manages to list none of them.
EDIT: all of the above is predicated on the idea that reading code is harder than writing it. If you believe the opposite, then blue/red has a point.
But a synchronous function can and many do make network calls or write to files. It is a rather vague signal about the function's behavior as opposed to the lack of the IO monad in Haskell.
To me the difficulty is more with writing generic code and maintaining abstraction boundaries. Unless the language provides a way to generalize over the asyncness of functions, we need a combinatorial explosion of async variants of generic functions. Consider a simple filter algorithm: it needs versions for (synchronous vs asynchronous iterator) times (synchronous vs asynchronous predicate). We end up with a pragmatic but ugly solution: provide 2 versions of each algorithm: an async and a sync, and force the user of the async one to wrap their synchronous arguments (see the sketch below).
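A minimal sketch of that duplication, with illustrative names:

    from typing import AsyncIterator, Awaitable, Callable, Iterator, TypeVar

    T = TypeVar("T")

    def filter_sync(pred: Callable[[T], bool],
                    it: Iterator[T]) -> Iterator[T]:
        for x in it:
            if pred(x):
                yield x

    async def filter_async(pred: Callable[[T], Awaitable[bool]],
                           it: AsyncIterator[T]) -> AsyncIterator[T]:
        async for x in it:
            if await pred(x):
                yield x

    def lift(pred: Callable[[T], bool]) -> Callable[[T], Awaitable[bool]]:
        # the wrapper the async variant forces onto synchronous predicates
        async def wrapped(x: T) -> bool:
            return pred(x)
        return wrapped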
Similarly changing some implementation detail of a function might change it from a synchronous to an asynchronous function, and this change must now propagate through the entire call chain (or the function must start its own async runtime). Again we end up in a place where the most future proof promise to give for an abstraction barrier is to mark everything as async.
> But a synchronous function can and many do make network calls or write to files
This, for me, is the main drawback of async/await, at least as it is implemented in for example Python. When you call a synchronous function which makes network calls, then it blocks the event loop, which is pretty disastrous, since for the duration of that call you lose all concurrency. And it's a fairly easy footgun to set off.
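A minimal sketch of that footgun:

    import asyncio
    import time

    async def handler(n: int) -> None:
        time.sleep(1)  # blocking sync call: stalls the event loop for everyone
        # await asyncio.sleep(1)  # the non-blocking equivalent
        print(f"handled {n}")

    async def main() -> None:
        # takes ~3s instead of ~1s, because time.sleep blocks the loop
        await asyncio.gather(*(handler(i) for i in range(3)))

    asyncio.run(main())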
> It is a rather vague signal about the function's behavior as opposed to the lack of the IO monad in Haskell.
I'm happy you mentioned the IO monad! For me, in the languages people pay me to write in (which sadly does not include Haskell or F#), async/await functions as a poor man's IO monad.
> Again we end up in a place where the most future proof promise to give for an abstraction barrier is to mark everything as async.
Yes, this is one way to write async code. But to me this smells the same as writing every Haskell program as a giant do statement because the internals might want to do I/O at some point. Async/await makes changing side-effect-free internals to effectful ones painful, which pushes you in the direction of doing the I/O at the boundaries of your system (where it belongs), rather than all over the place in your call stack. In a ports-adapters architecture, it's perfectly feasible to restrict network I/O to your service layer, and leave your domain entirely synchronous. E.g. something like the sketch below.
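A hedged reconstruction of what that example might look like (the original was omitted; all names are illustrative): async I/O confined to a thin service layer, with a purely synchronous domain core.

    import asyncio

    def apply_discount(order: dict) -> dict:
        # domain logic: pure and synchronous
        order["total"] *= 0.9
        return order

    async def fetch_order(order_id: int) -> dict:
        # adapter: the only layer doing (simulated) network I/O
        await asyncio.sleep(0.1)
        return {"id": order_id, "total": 100.0}

    async def checkout(order_id: int) -> dict:
        # service layer: a thin async shell around the sync domain
        order = await fetch_order(order_id)
        return apply_discount(order)

    print(asyncio.run(checkout(42)))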
Async/await pushes you to code in a certain way that I believe makes a codebase more maintainable, in a way similar to the IO monad. And as with the IO monad, you can subvert this push by making everything async (or writing everything in a do statement), but there's better ways of working with them, and judging them based on this subversion is not entirely fair.
> ugly solution: provide 2 versions of each algorithm: an async and a sync
I see your point, and I think it's entirely valid. But having worked in a couple async codebases for a couple of years, the amount of stuff I (or one of my collaborators) have had to duplicate for this reason I think I can count on one hand. It seems that in practice this cost is a fairly low one.
What you write is eerily similar to one of the pain points of Haskell. You can write a compiler that is purely functional. But then you want logging, so you must wrap it in the IO monad. And then also every function that calls the compiler, and so on.
> when you replace the word "async" with something like "expensive", or "does network calls", it becomes clear that "async/await" makes intrinsic properties about your code explicit rather than implicit.
Do you think we should be annotating functions with `expensive` and/or `networking`? And also annotating all of their callers, recursively? And maintaining 4 copies of every higher-order function depending on whether the functions it calls are `expensive`, `networking`, neither or both?
No, we rely on documentation for those things, and IMO we should for `async` as well. The reason we can’t, and why `async`/`await` exist, is because of shortcomings (lack of support for stackful coroutines) in language runtimes. The best solution is to fix those shortcomings, not add viral annotations everywhere.
So here I think we differ fundamentally in how we like to read code. I much prefer being able to quickly figure out things of interest about a function by glancing at its signature, rather than look at documentation, or worse, having to read the implementation of the function and the functions it calls (and so on, recursively).
For example, I much prefer a signature like
def f(a: int) -> str:
over
def f(a):
because it allows me to see, without reading the implementation of the function (or, if it exists, and I'm willing to bet on its reliability, the documentation), that it takes an integer, and gives me a string. And yes, this requires that I write viral type annotations on all my functions when I write them, but for me the bottleneck at my job is not writing the code, it's reading it. So that's a small upfront cost I'm very much willing to pay.
> Do you think we should be annotating functions with `expensive` and/or `networking`? And also annotating all of their callers, recursively?
Yes, absolutely, and yes, absolutely. That's just being upfront and honest about an intrinsic property of those functions. A function calling a function that does network I/O by transitivity also does network I/O. I prefer code that's explicit over code that's implicit.
Fair enough, that's a valid philosophy, and one in which `async`/`await` makes perfect sense.
However, it's not Python's philosophy - a language with dynamic types, unchecked exceptions, and racy multithreading. In Python, `async`/`await` seems to be at odds with other language features - it feels like it's more at home in a language like Rust.
I completely agree with you. However I've always found the dynamic typing approach to be a bit at odds with
python3 -c "import this" | head -4 | tail -1
I think the fast and loose style that Python enables is perfect for small scripts and one off data science notebooks and the like. But having worked in large codebases which adopt the same style, and ones that avoid it through static typing and in some cases async/await, the difference in productivity I've noticed in both me and my collaborators is too stark for me to ignore.
I think I should've been more nuanced in my comments praising async/await. I believe that what I say is valid in large IO-bound applications which go beyond basic CRUD operations. In general it depends, of course.
> I think the fast and loose style that Python enables is perfect for small scripts and one off data science notebooks and the like.
Agreed - I only use Python for scripts like this, preferring statically-typed, AOT-compiled languages for larger programs.
That’s why I think Python should have adopted full coroutines - it should play to its strengths and stick to its fast-and-loose style. However, the people who decide how the language evolves are all employees of large companies using it for large codebases - their needs are very different from people who are only using Python for small scripts.
> The reason we can’t, and why `async`/`await` exist, is because of shortcomings (lack of support for stackful coroutines) in language runtimes
The JVM runtime has solved this problem neatly with virtual threads, in my opinion. Run a web request in a virtual thread, and all blocking I/O is suddenly no longer blocking the OS thread, but yielding/suspending and giving another virtual thread run time. And all that without language keywords that go viral through your program.
Yes, this is similar to how Go works. IIRC the same approach was available in Python as a library, “greenlet”, but Python’s core developers rejected it in favour of `async`/`await`.
The Python community seems to have a virulent hatred of threads. I don't understand the reason. Yes there are hazards but you can code in a style that avoids them. With something like BEAM you can even enforce the style. Async/await of course introduce their own hazards.
There's no hatred. There are just lots of libraries that can't be multithreaded, due to historical reasons. This is being worked on right now[0], though.
The GIL only gets in the way of parallelism. Yes, there is real hatred. You can experience it if you visit #python on Libera and ask anything about threads. "Aieee! The non-determinism! The race conditions! Etc." Of course, async requires an async version of the whole I/O system, it can block on long computations or the wrong system calls, etc. And many rock-solid systems are written in Erlang, which uses preemptive lightweight processes for all its concurrency needs.
Lots of microcontroller OSes use cooperative multitasking, but once there are enough machine resources, OSes generally become preemptive. Async concurrency is basically cooperative multitasking, with similar issues. Does Python give a way to open a file asynchronously on Linux? It's now possible with io_uring, but it was impossible for a very long time (like decades). Erlang and GHC both use thread pools to deal with that: they use the old synchronous open(2) call but move it into an auxiliary thread so it won't block the calling thread.
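Python's asyncio offers the same thread-pool workaround; a minimal sketch: push the blocking open()/read() onto an executor thread so the event loop isn't blocked.

    import asyncio

    def read_file(path: str) -> str:
        with open(path) as f:  # plain blocking open(2)/read(2)
            return f.read()

    async def main() -> None:
        loop = asyncio.get_running_loop()
        # None selects the default thread pool, mirroring Erlang/GHC's approach
        text = await loop.run_in_executor(None, read_file, "/etc/hostname")
        print(text.strip())

    asyncio.run(main())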
> "Aieee! The non-determinism! The race conditions! Etc."
That doesn't sound like real hatred. Those sound like real concerns, which need to be addressed, and the attempt to remove the GIL is doing so very much with those concerns in mind.
I think it wouldn't work as nicely with Python, which builds deeply on the C FFI. Java has a different history, and almost the whole ecosystem is pure Java, making it able to take advantage of virtual threads.
Yeah gotcha - I'm familiar with the colored functions argument. I see the conceptual downside, but it doesn't seem that bad in practice. I have a pre-commit hook that shouts at me if I call an async function synchronously.
I personally optimize more for development time and overall productivity in creating and refactoring, adding new features, etc. I'm just so much faster using Python than anything else, it's not even close. There is such an incredible world of great libraries easily available on pip for one thing.
Also, I've found that ChatGPT/Claude3.5 are much, much smarter and better at Python than they are at C++ or Rust. I can usually get code that works basically the first or second time with Python, but very rarely can do that using those more performant languages. That's increasingly a huge concern for me as I use these AI tools to speed up my own development efforts very dramatically. Computers are so fast already anyway that the ceiling for optimization of network oriented software that can be done in a mostly async way in Python is already pretty compelling, so then it just comes back again to developer productivity, at least for my purposes.
Right now you are right. This is about taking away that argument. There's no technical reason for this to stay true. Other than that the process of fixing this is a lot of work of course. But now that the work has started, it's probably going to progress pretty steadily.
It will be interesting to see how this goes over the next few years. My guess is that a lot of lessons were learned from the python 2 to 3 move. This plan seems pretty solid.
And of course there's a relatively easy fix for code that can't work without a GIL: just do what people are doing today and don't spawn any threads in Python. It's kind of pointless in any case with the GIL in place, so not a lot of code actually depends on threads in Python.
Preventing the spawning of threads in the presence of things that still require the GIL sounds like a good plan. This is a bit of metadata that you could build into packages. The plan actually proposes keeping track of which packages work without a GIL. So that should keep people safe enough, if dependency tools are updated to make use of this metadata and actively stop people from adding thread-unsafe packages when threading is used.
So, I have good hopes that this is going to be a much smoother transition than python 2 to 3. The initial phase is probably going to flush out a lot of packages that need fixing. But once those fixes start coming in, it's probably going to be straightforward to move forward.
AMD EPYC 9754 with 128 cores/256 threads, and EPYC 9734 with 112 cores/224 threads. Tom's Hardware says they "will compete with Intel's 144-core Sierra Forest chips, which mark the debut of Intel's Efficiency cores (E-cores) in its Xeon data center lineup, and Ampere's 192-core AmpereOne processors".
What in 5 years? 10? 20? How long will "1 core should be enough for anyone using Python" stand?
Number-crunching code in Python (such as using numpy/pytorch) performs the vast majority of its calculations in C/Fortran code under the hood, where the GIL can be released. A single Python process can use multiple CPUs.
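A minimal sketch of that point: NumPy releases the GIL inside its C loops, so plain threads can already use multiple cores even on a GIL build.

    import threading
    import numpy as np

    def crunch(a: np.ndarray) -> None:
        for _ in range(10):
            a @ a  # the matrix multiply runs in C with the GIL released

    arrays = [np.random.rand(1000, 1000) for _ in range(4)]
    threads = [threading.Thread(target=crunch, args=(a,)) for a in arrays]
    for t in threads:
        t.start()
    for t in threads:
        t.join()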
There is code that may benefit from the free-threaded implementation, but it is not as common as it might appear, and it is not without its own downsides. In general, the GIL simplifies multithreaded code.
There have been no-GIL Python implementations, such as Jython and IronPython. They haven't replaced the CPython and PyPy implementations, which use a GIL; i.e., other concerns dominate.
Yes, but Jython and IronPython aren't the standard, and I feel the more relevant part is inertia. PyPy was designed with a lot of concern for compatibility, whereas being the new standard can totally make a difference, making both cases a poor comparison.
>> What in 5 years? 10? 20? How long will "1 core should be enough for anyone using Python" stand?
If you're looking for a 32x or 128x performance improvement from Python supporting multi-core, you should probably rewrite in C, C++, Rust, or Fortran and get that 100x improvement today on a single core. If done properly you can then ALSO get the gain from multiple cores on top of that. Or to put it another way, if performance is critical python is a poor choice.
"instead of taking advantage of hardware you own, you should do a LOT of work in a language you don't know" - how is that in any way a reasonable suggestion or alternative?
> "Or to put it another way, if performance is critical python is a poor choice."
To put it another way, just because performance isn't critical, doesn't mean that more performance for free is not desirable or beneficial, or that ignoring 127/128ths of available performance is fine.
Usually performance critical code is written in cpp, fortran, etc, and then wrapped in libraries for Python. Python still has a use case for glue code.
Yes, but then extensions can already release the GIL and use the simple and industrial strength std::thread, which is orders of magnitude easier to debug.
Concurrent operations exist at all levels of the software stack. Just because native extensions might want to release the GIL and use OS threads doesn't mean pure Python can't also want (or need) that.
(And as a side note: I have never, in around a decade of writing C++, heard std::thread described as "easy to debug.")
I expected that dropping down to C/C++ would be a large jump in difficulty and quantity of code, but I've found it isn't, and the dev experience isn't entirely worse, as, for example, in-editor code-intelligence is rock solid and very fast in every corner of my code and the libraries I'm using.
If anyone could benefit from speeding up some Python code, I'd highly recommend installing cppyy and giving it a try.
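A minimal sketch of the cppyy workflow being recommended: JIT-compile a C++ snippet from Python and call it directly.

    import cppyy

    # cling JIT-compiles this C++ on the fly
    cppyy.cppdef("""
    int add(int a, int b) { return a + b; }
    """)

    print(cppyy.gbl.add(2, 3))  # -> 5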
Sure! I tried pybind11, and some other things. cppyy was the first I tried that didn't give me any trouble. I've been using it pretty heavily for about a year, and still no trouble.
It seems like you might be able to enable some optimizations with EXTRA_CLING_ARGS. Since it's based on cling, it's probably subject to whatever limitations cling has.
To be honest, I don't know much about the speed, as my use-case isn't speeding up slow code.
It's not just about "raw-flop performance" though; it affects even basic things like creating data-loaders that run in the background while your main thread is doing some hard ML crunching.
Every DL library comes with its own C++ backend that does this for now, but it's annoyingly inflexible. And dealing with GIL is a nightmare if you're dealing with mixed Python code.
I think it is beneficial to some people, but not a lot. My guess is that most Python users (from beginners to advanced users, including many professional data scientists) have never heard of the GIL or thought of doing any parallelization in Python. Code that needs performance and would benefit from multithreading, usually written by professional software engineers, likely isn't written in Python in the first place. It would make sense for projects that can benefit from disabling the GIL without a ton of changes. Remember, it is not trivial to update single-threaded code to use multithreading correctly.
In the Python language specifically, that is. Their library may have already done some form of parallelization under the hood.
> Code that needs performance and would benefit from multithreading, usually written by professional software engineers, likely isn't written in Python in the first place.
There are a lot of simple cases where multi-threading can easily triple or quadruple the performance.
I used to write a ton of MPI-based parallel Python. It's pretty straightforward. But one could easily imagine trying to improve the multiprocessing ergonomics rather than introducing threading. Obviously the people who made the choice to push forward with this are aware of those options, too. Still, it's mildly puzzling to me why threads for Python are needed/reasonable.
A common pain point for me is parallel data loading and processing for PyTorch or TensorFlow. Multiprocessing has a lot of bugs and pain points to deal with when doing ML. Examples: https://github.com/search?q=repo%3Apytorch%2Fpytorch+multipr.... Most of these issues do not exist in a multithreading world, because resource sharing is trivial in that case.
Since Python leads over any other languages in the ML community, and ML is a hot topic right now, it makes sense for Python developers to secure the lead by making the life of ML developers easier, which is by introducing GIL-less multi-threading.
I know that document. It doesn't really answer this point though. It motivates the need for parallelism in slow python by noting that this is important once other performance critical code is in extensions.
But the main point against multiprocessing seems to be that spawning new processes is slow ...
That single "alternatives" paragraph doesn't answer at all why mp isn't viable for python level parallelism.
I am no longer invested in python heavily, I am sure there are discussions or documents somewhere that go into this more. Might be that it's simply that everyone is used to threads so you should support it for sheer familiarity. All I am saying is it's not obvious to a casual observer.
>> Who decides why something should or should not be written in Python?
Why shouldn't someone who prefers writing in python benefit from using multiple cores?
I did use the words "most things". I'm not saying this is a bad development for Python, or that nobody should use it. But if performance is a top priority, Python is the wrong language and always has been.
I use Python from time to time, it's fun and easy to put certain kinds of things together quickly. But each time I do a project with it, the first thing I ask myself is "is this going to be fast enough?" If not I'll use something else.
it may not be tactful but it sure is factful (even if that last word is ungrammatical, at least the sentence is poetical). he he he.
jfc. guido knows ;), this thread is getting weirder and weirder, creepier and creepier.
are you literally implying that I should lie about known facts about python slowness?
"tactful" my foot.
then I guess the creators of PyPy and Unladen Swallow (the latter project was by Google) were/are not being tactful either, because those were two very prominent projects to speed up Python, in other words, to reduce its very well known slowness.
There is/was also Cython, and Pyston (the latter at Dropbox, where Guido (GvR, Python creator) worked for a while). Those were, or are, also projects to speed up Python program execution.
Excerpt from the section titled "Rationale, Implementation" from the above link (italics mine):
> Many companies and individuals would like Python to be faster, to enable its use in more projects. Google is one such company.
>
> Unladen Swallow is a Google-sponsored branch of CPython, initiated to improve the performance of Google’s numerous Python libraries, tools and applications. To make the adoption of Unladen Swallow as easy as possible, the project initially aimed at four goals:
>
> - A performance improvement of 5x over the baseline of CPython 2.6.4 for single-threaded code.
> - 100% source compatibility with valid CPython 2.6 applications.
> - 100% source compatibility with valid CPython 2.6 C extension modules.
> - Design for eventual merger back into CPython.
tactful - showing skill and sensitivity in dealing with people.
Right or wrong, your comments do not contribute to the discussion. Open source development is about collaboration. Your irrelevant comments are disrespectful to all the hard-working developers who have been working together to push the limits of what is possible with what may be the most widely used programming language of our time. I use Python for its ergonomics and community, not for speed. Removing the GIL has been something we've been yearning for since the Python 2 days. And the speed has improved drastically in recent years with each release. You don't win a prize for being right.
>tactful - showing skill and sensitivity in dealing with people. Right or wrong, your comments do not contribute to the discussion.
just take a look at the depth of your incredibly rotten stupidity and ugliness of mind, you fucktard and dotard (dotard means something like a senile person, in case you didn't know):
hn user jacob019, you little mofo:
you say that "Right or wrong, your comments do not contribute to the discussion."
that itself is a contradiction in terms.
so you are saying that my comments do not contribute to the discussion even if i am right?
(what an utter fool and liar and swine you are.)
then why should i and anyone else think that your comments contribute to the discussion when they are so totally fucking wrong, biased, shitheaded, pissheaded, and other choice epithets?
congrats, you have really jumped the shark in terms of being a creep.
enjoy your miserable life for what it is worth, you worm.
if Google wanted a performance increase of 5x over the then existing python, I guess we can safely say that Python was slow, amirite. and yes, I know the version number mentioned, and what it is today.
It's very unpopular to mention Perl, but I did many cool things with it back in the day, and it still holds a special place for me. Perl taught me the power of regex--it's really first class in Perl. I still have some Perl code in production today. But to be fair, it is really easy to write spaghetti in Perl if you don't know what you're doing.
don't worry about unpopularity, bro. worry about being true. the rest will take care of itself. if not, you are in the wrong company, forum, or place, and better to work on getting out of there.
They’re probably downvoting you because you’ve posted like the laziest trope comment there is. lol Python slow everyone is paid to hide the truth amirite
Your comments feel like you want a Reddit-style battle of wits. You throw words like "ignorant" around rather freely. I downvote these comments because they lower the tone of the discussion. We're not here to talk about each other or feel smarter than others. Well, I'm not.
I don't give a flying fuck about your feelings or your opinions about my comments, just as you should not about mine. i am a free man. you should try to be one too. we can talk to each other, but that does not mean that either of us has to believe or be convinced by what the other person says. is this not fucking blindingly obvious to you?
if not, you have a serious perception problem.
yes, I do throw around words, just like anyone else, but not freely, instead, I do that after at least some thought, which I do not see happening in the people whom I replied to, nor in your comment above.
You might be getting downvoted because many people know Python is among the slower dynamic languages, and there are other reasons to use it. Speaking for myself, the reasons that make me reach for Python for some projects are the speed of development, large ecosystem of libraries, large developer pool, and pretty good tooling/IDE support.
then it would be much preferable if they said so explicitly, although both of those are points I already know well. both are common knowledge, not just the first, i.e., the slowness. the productivity and other benefits are common knowledge too.
downvoting is such a fucking dumb way of disagreeing. how does one know whether the downvote is because a person disapproves of or dislikes what one said or because they think what one said is wrong. no way to know. amirite? :)
> there is just a tremendous amount of performance that can be automatically unlocked with almost no incremental effort for so many organizations and projects
This just isn’t true.
This does not improve single threaded performance (it’s worse) and concurrent programming is already available.
This will make it less annoying to do concurrent processing.
It also makes everything slower (arguable where that ends up, currently significantly slower) overall.
This is way overhyped.
At the end of the day, this will be a change that (most likely) makes existing workloads slightly slower for everyone, and makes life a bit easier and better for the few people implementing naturally parallel processing, like ML.
It’s an incremental win for the ML community, and a meaningless/slight loss for everyone else.
At the cost of a great. Deal. Of. Effort.
If you’re excited about it because of the hype and don’t really understand it, probably calm down.
Most likely, at the end of the day, it's a change that is totally meaningless to you and won't really affect you, other than making some libraries you use a bit faster, and others a bit slower.
Overall, your standard web application will run a bit slower as a result of it. You probably won’t notice.
Your data stack will run a bit faster. That’s nice.
Yes, good summary. My prediction is that free-threading will be the default at some point because one of the corporations that usurped Python-dev wants it.
The rest of us can live with arcane threading bugs and yet another split ecosystem. As I understand it, if a single C-extension opts for the GIL, the GIL will be enabled.
Of course the invitation to experiment is meaningless. CPython is run by corporations, many excellent developers have left and people will not have any influence on the outcome.
Removing the GIL requires operations that are protected by it to be made thread-safe in ways they don't need to be with it, which has some overhead. Even having multiple options from which the situationally correct one can automatically be selected (e.g., a lighter-weight one in guaranteed single-threaded or GIL-active cases) has some overhead. (Conceptually, you could have separate implementations that the programmer selects, with zero runtime overhead where not needed, but that also has more conceptual/developer overhead, so other than particular things where the runtime cost is found to really bite in practice, that's probably not going to be a common approach for core libraries that might need to be called either way.) There's no free lunch here.
If you assume two completely separate implementations where there is an #ifdef every 10 lines and atomics and locking only occur with --disable-gil, there is no slowdown for the --enable-gil build.
I don't think that is entirely the case though!
If the --disable-gil build becomes the default in the future, then peer pressure and packaging discipline will force everyone to use it. Then you have the OBVIOUS slowdown of atomics and of locking in the reference counting and in other places.
The advertised figures were around 20%, which would be offset by minor speedups in other areas. But if you compare against Python 3.8, for instance, the slowdowns are still there (i.e., not offset by anything). Further down on the second page of this discussion numbers of 30-40% have been measured by the submitter of this blog post.
Actual benchmarks of Python tend to be suppressed or downvoted, so they are not on the first page. The Java HotSpot VM had a similar policy that forbid benchmarks.
> It also makes everything slower (arguable where that ends up, currently significantly slower) overall.
longer version:
If there was no reason for it to be slower, it would not be slower.
...but, implementing this stuff is hard.
Doing a zero cost implementation is really hard.
It is slower.
Where it ends up eventually is still a 'hm... we'll see'.
To be fair, they didn't lead the article here with:
> Right now there is a significant single-threaded performance cost. Somewhere from 30-50%.
They should have, because now people have a misguided idea of what this WIP release is... and that's not ideal, because if you install it, you'll find it's slow as balls, and that's not really the message they were trying to put out with this release. This release was about being technically correct.
...but, it is slow as balls right now, and I'm not making that shit up. Try it yourself.
If you're worried about performance then much of your CPU time is probably spent in a C extension (e.g. numpy, scipy, opencv, etc.). Those all release the GIL so already allow parallelisation in multiple threads. That even includes many functions in the standard library (e.g. sqlite3, zip/unzip). I've used multiple threads in Python for many years and never needed to break into multiprocessing.
But, for sure, nogil will be good for those workloads written in pure Python (though I've personally never been affected by that).
That means your code is using Python as glue and you do most of your work completely outside of CPython. That's why you don't see the impact: those libraries drop the GIL when you use them, so there's much less overhead.
The parent commenter said they're using the multiprocessing module, so it's irrelevant to them whether those modules drop the GIL (except for the fact that they're missing an opportunity to use threading instead). The overhead being referred to, whether significant or not, is that of spawning processes and doing IPC.
What about the pessimization of single-threaded workloads? I'm still not convinced a completely free-threaded Python is better overall than a multi-interpreter, separate-GIL model with explicit instead of implicit parallelism.
Everyone wants parallelism in Python. Removing the GIL isn't the only way to get it.
> It's going to be amazing to saturate all the cores on a big machine using simple threads instead of dealing with the massive overhead and complexity and bugs of using something like multiprocessing.
I'm saturating 192-CPU / 1.5 TB RAM machines with no headache and straightforward multiprocessing. I really don't see what more multithreading will bring.
What are these massive overheads / complexity / bugs you're talking about ?
FWIW, I think the concern is/was that, for most of us who aren't doing shared-data multiprocessing, this is going to make Python even slower; maybe they figured out how to avoid that?
Pretty sure they offset any possible slowdowns by doing heroic optimizations in other parts of CPython. There was even some talk about keeping just those optimizations and leaving the GIL in place, but fortunately they went for the full GILectomy.