Hacker News

Not until it addresses the JIT overhead.


In my tests I frequently switch between CPython, PyPy and Julia (depending on the libraries/task at hand), and I haven't found Julia's JIT overhead to be worse than PyPy's on average.

Count me as one of the 1-based index haters, but I do love multiple dispatch and the language in general. As a language for exploratory tools and analysis it's on par with Python (strict preference between the two comes down to taste).
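For the unfamiliar: multiple dispatch means a generic function picks its method based on the runtime types of *all* arguments, not just the first (as in single-dispatch OO languages). A minimal sketch — all type and function names here are made up for illustration:

```julia
abstract type Shape end
struct Circle <: Shape
    r::Float64
end
struct Square <: Shape
    s::Float64
end

# Specialize on a single argument:
area(c::Circle) = π * c.r^2
area(s::Square) = s.s^2

# ...or on combinations of argument types — the "multiple" part:
describe(a::Circle, b::Circle) = "two circles"
describe(a::Circle, b::Square) = "a circle and a square"
describe(a::Shape,  b::Shape)  = "some shapes"

describe(Circle(1.0), Square(2.0))  # -> "a circle and a square"
```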

To me the biggest flaw currently is the poor "catch" syntax for exception handling. There are countless spots where exceptions are incorrectly caught at random points due to the catch-all semantics hiding/masking/breaking stuff. This is one area where I really find the syntax has been chosen poorly and it's causing real damage.
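For context, Julia's `catch` takes no exception type, so any filtering has to be done by hand inside the block. A minimal sketch of how a bare `catch` masks unrelated bugs (the function and data here are made up):

```julia
function load_value(dict, key)
    try
        return parse(Int, dict[key])
    catch
        # Intended to handle a parse failure only, but this also
        # swallows the KeyError from a missing key — silently
        # masking the real bug.
        return 0
    end
end

# The defensive idiom today is to inspect the exception and rethrow:
function load_value_safe(dict, key)
    try
        return parse(Int, dict[key])
    catch e
        e isa ArgumentError || rethrow()  # only handle parse errors
        return 0
    end
end
```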


Agreed on error handling actually. It doesn't quite have the feel that it should in a modern language. I think error handling and lifetimes/mutability are two of the top things we're looking at for a fundamental remodel in 2.0.


What about traits or multiple inheritance?


That's more on the "nice to have" list (at least from my perspective - others may have different opinions).


gotcha


Are many scientists using PyPy though? I never saw it in use in academia even though I'd tried it out personally.


I've never seen PyPy used anywhere by anyone, except for in talks about PyPy at PyCon.


PyPy is definitely used behind the scenes in many places. IMHO you don't often hear about it because if you're ready to take the performance hit that comes with python you're probably not trying to squeeze the best out of it all the time.

Lack of compatibility with existing packages was also a big issue in the past, but nowadays it's pretty rare.

You can often just run your program on both and measure whether using pypy makes sense for your task. Frequently the free speedup is very welcome, especially for long or repeated jobs.

Even when used opportunistically like this PyPy is still tremendously useful.


I don't know. I also don't use pypy much, since python+numba is actually fast enough most of the time, and I always see pypy as a fallback to see if I can squeeze a little bit more performance before running a task.


Does Numba offer significantly faster compile times than Julia's JIT?


No, from my own (relatively limited) experience with Numba, if you enable nopython mode (the `@njit` decorator), it's about the same speed as Julia, which is the same speed as C.

The thing is that Numba is only applicable for simple numeric code. Last I checked it didn't even support custom classes. In fact, last I checked it didn't even support Numpy - to support "Numpy" it had to internally re-implement much of Numpy, which really says something bad about its use cases. In contrast, the Julia JIT speeds up the entire language from string processing to set operations.

Edit: To not be misleading: Julia and C (and Numba) have the same speed only in the simple cases you can apply Numba to. In more diverse workloads, C pulls ahead of Julia for various small reasons.


Yeah, because writing C and dealing with toolchain issues is so much more productive.

Or keep on preaching the thankless gospel to get PyPy adopted.


That only matters in a scripting context where you're re-launching over and over as you make small changes.

Once you have a real application that's up and running, it just runs.


Julia has some packages to address this now.

https://github.com/timholy/Revise.jl
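For readers unfamiliar with it: Revise.jl watches tracked files and re-evaluates changed methods in the running session, so you avoid paying the JIT warm-up on every relaunch. A hedged usage sketch (file and package names here are hypothetical):

```julia
using Revise

# Track a script: subsequent edits to analysis.jl are picked up
# in this session without restarting Julia.
includet("analysis.jl")

# Packages loaded after Revise are tracked automatically:
using MyPackage
MyPackage.run()   # after editing the package, re-run to use the revised code
```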



