Actually, most of the LLM algorithms are less efficient than the readable human one, even with only two nested loops. Only one of them precalculates the factors, which makes the biggest difference (there are at most log2(N) factors in the worst case, and for large N a triple loop over those beats a double loop over 1..N).
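For context on that bound: every prime factor is at least 2, so any N has at most log2(N) prime factors counted with multiplicity. A minimal sketch of the precalculation step (the surrounding problem isn't quoted here, so this only illustrates the bound, not the full algorithm):

    # Every prime factor is >= 2, so len(factors) <= log2(n);
    # nested loops over this short list beat a double loop over 1..n.
    def prime_factors(n):
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors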
I've used PyPy on large codebases for years. Generally it's fine as long as you don't need any of the packages that are thin wrappers around C/Fortran. It seems a lot of maintainers these days are pretty good about considering it as a target.
On the other hand, the memory footprint can be painful: not just the deferred garbage collection of things like weakrefs and files left for the GC to close, but even regular objects. A while back I had hoped the Faster CPython project would remove our need for PyPy, just so we could have a lower memory footprint, but that seems to have stalled.
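The file case is the classic gotcha: code that leans on CPython's reference counting to close files promptly behaves differently under PyPy. A minimal illustration (function and file names made up):

    def risky_write(path, data):
        f = open(path, "w")
        f.write(data)
        # CPython closes the file here because the refcount drops to
        # zero; PyPy leaves it open (and the write possibly buffered)
        # until a GC run.

    def safe_write(path, data):
        # A context manager closes deterministically on any interpreter.
        with open(path, "w") as f:
            f.write(data)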
Why? What specifically caused you not to seek out alternatives? PyPy and libraries that use CFFI seem to work okay together, but since PyPy can run native Python code nearly as fast as C, I always look for a pure-Python solution rather than depend on a C library.
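For anyone who hasn't seen it, CFFI's ABI mode really is small; a sketch below uses libm's sqrt as a stand-in, and the "libm.so.6" soname is a Linux-specific assumption:

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("double sqrt(double x);")   # declare the C signature we'll call
    libm = ffi.dlopen("libm.so.6")       # load the shared library at runtime

    print(libm.sqrt(2.0))  # 1.4142135623730951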
They likely mean some of the numerical packages that call out to BLAS/LAPACK and the like, as all of that was originally written in Fortran. I think a lot of it has been converted to C by now.
NumPy and SciPy both use Fortran. SciPy has Fortran as a build dependency (i.e. a large portion of it depends on Fortran), and NumPy can be built without Fortran but has large portions of its optimizations in Fortran.
Anyone who's ever used NumPy/SciPy has been using Python integrated with Fortran.
SciPy maintainer here. NumPy has no Fortran code in itself and uses lapack_lite, which is written in C. SciPy has about 10% of its codebase in very old F77 libraries that we are slowly removing. Nothing against Fortran, but this is slowly turning into an urban legend.
The main reason Fortran is still around is fundamentally LAPACK. BLAS has long since been converted to C/assembly (OpenBLAS, MKL, and others). But LAPACK is a lot of code to translate; not because Fortran is better, but because the original authors wrote it in F77.
I just recently made some Python wrappers for a Modern Fortran library for derivative-free optimization: https://github.com/libprima/prima
It basically had to go through Fortran -> C -> C++ (with pybind11) -> Python. At one point we had a slightly simpler setup, Fortran -> C -> Python (with ctypes), but ended up going with pybind11 after some discussion with the SciPy folks, who steered us away from ctypes.
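For a sense of what the ctypes stage looked like, here's a hypothetical sketch; libprima_c.so and prima_minimize are made-up names for illustration, not PRIMA's real API:

    import ctypes

    # Load the C shim that wraps the Fortran routine (names hypothetical).
    lib = ctypes.CDLL("./libprima_c.so")

    # Declare the C signature so ctypes marshals the doubles correctly.
    lib.prima_minimize.argtypes = [ctypes.c_double]
    lib.prima_minimize.restype = ctypes.c_double

    x_opt = lib.prima_minimize(3.0)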
Not the OP, but I have written a fair few Python wrappers around old Fortran code. There is a pretty great tool called f2py which automates almost all of the hard work. If the Fortran code isn't too weird, it can completely autogenerate all the code you need to call your Fortran function from Python.
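A rough sketch of that workflow, with a made-up add subroutine; the module and file names are placeholders:

    # Given add.f90 containing:
    #   subroutine add(a, b, c)
    #     real(8), intent(in)  :: a, b
    #     real(8), intent(out) :: c
    #     c = a + b
    #   end subroutine add
    # build the extension with:  python -m numpy.f2py -c add.f90 -m fadd
    import fadd

    # f2py turns intent(out) arguments into Python return values.
    print(fadd.add(1.0, 2.0))  # 3.0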
Fortran is actually pretty okay if you're doing matrix or vector operations! Built-in dot products and matrix multiplication, you choose how each array is indexed when you declare it, you can enforce functional purity to help you parallelize, etc.
It is hot garbage for almost anything that isn't math, though, which is okay, because it's been focused on math from the start. Strings and parsing unstructured input are an exercise in pain, for example. And the ecosystem is heavily geared toward math and scientific computing, so you will find yourself rolling your own stuff quite often if you deviate from that niche.
Speaking as somebody who has had to debug why servers were ignoring changes to the configs in our git repos, this is a welcome change.
It's a shame there are ways to bypass it, but at least it communicates the intent: you aren't supposed to modify this system without modifying the actual config.
Very nice! We spent quite some time on this and aren't even having a kid.
How did you implement the tournament? It feels very long when you have many names - almost like it's doing all N^2 pairs. Or is there something smarter?
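One "smarter" option (just a guess at what such an app might do, not a claim about this one): drive a merge sort with the user's choices, which needs only O(N log N) comparisons instead of roughly N^2/2 pairs:

    def rank(names, prefer):
        """Full ranking from pairwise choices; prefer(a, b) is True when
        the user picks a over b. Merge sort asks O(N log N) questions."""
        if len(names) <= 1:
            return names
        mid = len(names) // 2
        left = rank(names[:mid], prefer)
        right = rank(names[mid:], prefer)
        merged = []
        while left and right:
            merged.append(left.pop(0) if prefer(left[0], right[0]) else right.pop(0))
        return merged + left + right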
Yes. I had pretty much this conversation a while back with some non-technically-minded people who had been convinced that, by creating an ontology and a set of "semantic business rules", a lot of the writing of actual code could be automated away, leaving the business team to just create rules in a language almost like English and have the machine execute those English-like rules.
I had to explain that they were basically on track to re-implement COBOL.
We use PyPy pretty extensively at our firm for analytics/OLAP work.
We've actually tried using some of the more traditional libs (pandas et al.) with CPython, but there's always a pure-Python bottleneck (e.g. SQLAlchemy).
Performance is important to our clients, and trying to keep everything performance-critical in C extensions / NumPy would be kind of risky for us when adding new functionality, so PyPy's promise of more speed pretty much across the board is awesome.
There are downsides, of course - higher memory usage, longer startup times, some more obscure libraries being unsupported - but on the whole, it's a good choice for us.