My first-hand experience with the deterioration of quality in macOS: I bought a Mac Studio to run some traditional ML neural-network stuff with PyTorch. Unfortunately the GPU / MPS kept crashing (?!) and required a full reboot to get working again. Without a reboot, PyTorch would not find a GPU / MPS device and random other apps would glitch. After finding nothing in the logs and no information about how to debug anything related to MPS, I contacted Apple Support, which was completely unhelpful and a very frustrating experience. It felt like they were laundering responsibility through the veneer of some engineer having glanced at whatever internal second-hand report the support staff transcribed from me. The situation improved a bit after upgrading to Metal 4, but in the end I moved back to TensorFlow. It's very disappointing to spend $6k on a machine and have the software not work as expected.
People tend to assume things would go according to their fantasy mental model on this topic, but I strongly believe that legalisation of strongly addictive substances would pave the way for a black market in stronger and cheaper drugs, untaxed and unregulated.
Addicted people don’t reason the same way as non-addicted people.
Disagree. Do you mean that vehicles get louder because people skip maintenance? Or that more stress produces more honking? There's an opposite effect: more financial stress means less spending on aftermarket exhaust mods, which means quieter traffic. Also, financial stress means less driving, which means less noise. Overall, I doubt you're going to find much signal in the noise!
Well, it is a perfect storm of factors. First of all, lack of maintenance. Secondly, people gunning it with the gas pedal. Thirdly, lower-income folks in a rush to get to a second or even third job, gunning it for reasons other than just sheer frustration.
>We test three representative tasks in materials chemistry: linking dopants and host materials, cataloging metal-organic frameworks, and general composition/phase/morphology/application information extraction. Records are extracted from single sentences or entire paragraphs, and the output can be returned as simple English sentences or a more structured format such as a list of JSON objects. This approach represents a simple, accessible, and highly flexible route to obtaining large databases of structured specialized scientific knowledge extracted from research papers.
Short answer: It’s a way to generate structured databases for (most) scientific topics. Why? Apply data driven methods to these databases. So what? It’s a powerful way to ask and investigate scientific questions/trends otherwise hidden inside a million scientific papers.
Example: Consider what PDB has done for our understanding of protein folding, as well as the ML/computational techniques they’ve enabled (eg, Alphafold). Most scientific questions and properties are not as data-rich as protein folding. What if they could be?
Longer answer: The last 15 years in computational/ML + science have shown that structured databases open up entirely new frontiers in discovery (eg Protein Data Bank, Materials Project). But most scientific topics/properties are NOT in structured DBs, they’re scattered about in millions of papers. It’s especially a huge problem in some topics in materials science. It’s not that these problems are data scarce, but that it’s hard to actually collate their data in a structured format. You literally cannot use most ML methods because structured DBs do not exist.
This paper is a way to generate massive structured databases of specialized, intricate, and hierarchical knowledge graphs from scientific literature. Fine-tuning works; prompt engineering does not (at the time, perhaps this has changed). Once you have a database, you can analyze an entire subfield or topic in science with ML or stats methods.
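To make "structured format" concrete, here's a hypothetical sketch in Scala (my own field names, not the paper's actual schema) of what a dopant/host record could look like, and the kind of subfield-wide query a structured database makes trivial:

```scala
// Hypothetical extraction record; field names are illustrative,
// not the paper's actual output schema.
case class DopantRecord(
  host: String,                // host material, e.g. "TiO2"
  dopants: List[String],       // dopant species found in the sentence
  application: Option[String]  // stated application, if any
)

// Two made-up records of the kind an LLM might extract per sentence.
val records = List(
  DopantRecord("TiO2", List("N"), Some("visible-light photocatalysis")),
  DopantRecord("ZnO", List("Al"), Some("transparent conductive films"))
)

// Once thousands of papers are collated, queries across the whole
// literature become one-liners:
val tio2Dopants = records.filter(_.host == "TiO2").flatMap(_.dopants)
// tio2Dopants == List("N")
```

The point isn't this toy query, it's that none of this is possible while the same facts sit as free text in a million PDFs.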
1. The first line is "Using Cursor, Claude 3.7, and OpenAI every day". You can't win with this. You don't take weekends off? Red flag. You do take weekends off? Then the first line of your resume is a lie, and I wonder what else isn't honest.
2. #1 skill is Vibecoding? Red flag. Your resume would look better without the left column of skills. None of your experience backs up those skills.
3. The experiences listed are all 1-2 years, with the longest one being your self-employed one. Why are they all so short?
Scala used to be my hobby / enthusiast language. I was introduced to it through a college course and used it a bit through school. Later, I would use it for Advent of Code, tinker with a Scala Play webapp, and dream about using it professionally. Rust has almost completely filled that void now. Rust is native; I'm not waiting on the 1.0 release of `scala-native` anymore. The community around Rust seems enthusiastic and growing, as opposed to languishing like Scala's. I do hold some reservations about how complicated Rust is: despite having used it for long enough that I would feel comfortable in most languages, I am still not comfortable and continually encounter _stuff I don't understand_.
RIP Scala, I will miss you! You showed me the joy of pattern matching, functional OO, currying, and how to use `map`, `flatMap`, `fold`, etc. All things with continued influence! <3
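For anyone who never got to try it, a tiny standard-library-only taste of the things listed above:

```scala
// Pattern matching over an algebraic data type
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

// map / fold over a collection; flatMap flattens nested lists
val shapes = List(Circle(1.0), Rect(2.0, 3.0))
val total  = shapes.map(area).fold(0.0)(_ + _)            // Pi + 6.0
val sides  = List(List(2.0, 3.0), List(1.0)).flatMap(identity)

// Currying: multiple parameter lists, partially applied
def scale(factor: Double)(x: Double): Double = factor * x
val double: Double => Double = scale(2.0)
// double(3.0) == 6.0
```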
I feel like the biggest misstep the Scala ecosystem and Typesafe/Lightbend made was not investing more in Play Framework. 10 or 12 years ago, Play had a lot of energy and momentum, and it's the kind of thing that has broad enterprise/startup appeal. But the focus was always more on Akka and what seemed like really niche architecture-astronaut stuff like Actors and Actor System Clusters and Event Sourcing etc., rather than getting the basics to be super ergonomic or productive.
If they had kept making Play Framework better and better and focused on the practical problems that every web service faces, they could be in a great position similar to Laravel's today, or to any of the many Rails/Laravel consultancies.
> I feel like the biggest misstep the Scala ecosystem and Typesafe/Lightbend made was not investing more in Play Framework. 10 or 12 years ago, Play had a lot of energy and momentum, and it's the kind of thing that has broad enterprise/startup appeal. But the focus was always more on Akka and what seemed like really niche architecture-astronaut stuff like Actors and Actor System Clusters and Event Sourcing etc., rather than getting the basics to be super ergonomic or productive.
I'd say the opposite. They pushed Play a lot. But it was never a killer app, and it never even really leveraged the strengths of Scala.
People and especially companies don't switch languages for "super ergonomic and productive". They switch because they want to do something they can't do in their current language. I'm not a fan of Akka or Actors, but it made for some incredible demos that you really couldn't do in anything else except Erlang.
Hmm. Fair point for Rails and Laravel (whereas I think React had a killer app in terms of being able to make SPAs without going crazy) but those are tools that you can pick up for a one-off throwaway project - indeed I suspect most adopters didn't "move" to them so much as start doing projects in them and eventually stop doing projects in other things. Scala was never really competing in that space - I don't think anyone would ever say "let's make our website for the next tradeshow in Play" - it's a "heavy" language that needs an IDE and deep familiarity to get the best out of it (and partly that's also what JVM folks would be expecting). So it needed to play for the big core codebases, and for a while it did (particularly when there was no alternative to Spark).
Could they have made Play an alternative to Rails for one-off throwaway websites? Maybe, but the thing that would have needed to be different wouldn't be pushing Play itself, but rather lighter tooling and making it easier to get from zero to pages being served. Honestly I struggle to see how they could've done it without making the compiler and build tool much faster, and either making the IDEs much more efficient (difficult) or making the language easier without an IDE (difficult, and would risk splitting efforts). And even then, you wouldn't really show the compelling advantage of Scala, which is fearless refactoring in large codebases. I don't know that it could ever have been better than Rails at what Rails does, and also we already have Rails. Whereas even if it eventually "dies", Scala has already pushed Java and Kotlin to be much better than they were.
I mean, I don't find that working with IntelliJ makes it any "heavier" than other languages in terms of prototyping. I use PhpStorm for Laravel development and I never say to myself, "if only I wasn't using an IDE, maybe I could get this site put together faster". Quite the opposite: IntelliJ makes me super productive.
I think all of these frameworks - Laravel, Rails, Django, Next.js, Spring - require deep familiarity to get the best out of them.
> Honestly I struggle to see how they could've done it without making the compiler and build tool much faster
Well, Lightbend literally owned Play, SBT, and scalac. They were in a perfect position to make the build tool and compiler much faster. Or, even if SBT couldn't be made much faster, ditch it and make integration with Gradle and/or Maven really great.
> I use PHP Storm for Laravel development and I never say to my self, "if only I wasn't using an IDE, maybe I could get this site put together faster". Quite the opposite, Intellij makes me super productive.
But the first time you tried out PHP, did you have to install the IDE first? Did you have to change your existing PHP tooling setup the first time you tried out Laravel?
I would agree that IDEs are an improvement over not using them in most languages, but my feeling is that the "tooling curve" is much steeper for Scala than for something like PHP.
Play's hype did more harm than good. I work at a company that has many legacy projects in Scala because we were/are heavily invested in Spark, and Play is a disaster overall.
- The churn caused by breaking changes in minor versions used to be annoyingly high.
- Slick looks neat at first but caused a lot of friction when used by less experienced developers.
- The fact Akka is in your dependency tree encourages people to reach for it and raw actors are usually a bad choice. Akka streams work well for websockets and SSE but it's another footgun.
Additionally:
- It was in a state of semi-abandonment for several years before Lightbend gave the project to the community, even though there are/were big companies deploying Play apps at scale, Apple for instance.
- The introduction of Guice (in 2.4, afaik) was a terrible mistake: completely unnecessary and at odds with the Scala philosophy. Sure, you can skip it, or use something else (like macwire), but defaults matter.
- Play-JSON depends on Jackson which is annoying in the JVM world, causing binary compatibility issues when you have diamond dependencies.
- Standard library Futures are not so nice when you've experienced anything else (Scalaz Task, Cats IO, ZIO, even Twitter Future...)
- Code generation from routes files is an odd choice when Scala has always been expressive and DSL friendly.
- Swagger/OpenAPI integration is brittle.
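On the standard-library Futures point above: the usual complaint is that a `Future` is eager and memoized, so its body starts running (and side-effecting) the moment the value is constructed, whereas a Cats IO or ZIO value is a description you can retry, repeat, or compose before running anything. A minimal stdlib-only illustration:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val calls = new AtomicInteger(0)

// The body starts executing as soon as the Future is constructed...
val f = Future { calls.incrementAndGet(); 42 }

// ...and the result is memoized: reusing `f` never re-runs the body,
// so "retry this computation" cannot be expressed with the value itself.
Await.result(f, 1.second)
Await.result(f, 1.second)
// calls.get() == 1, not 2
```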
I've personally used Tapir since 2019 and couldn't be happier. All Play apps still running at my company are being abandoned or replaced by Spring/Java projects.
Tapir looks nice, I didn't know about it. Can I ask, do you use it together with Netty? How fast is it for you? (if you happen to have benchmarked it)
Have you tried Vertx with Scala? (Or Spring + Scala, or something else?)
> The introduction of Guice
Personally I've wired everything statically at compile time, zero dependency injection. (It felt as if what I did went a tiny bit against the framework, but it works fine.)
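For readers who haven't seen the pattern: "wired statically" just means ordinary constructor injection, resolved by the compiler with no runtime container. A minimal sketch with hypothetical component names:

```scala
// Hypothetical components; the point is that the object graph is
// ordinary code, checked at compile time, with no runtime container.
class Database(url: String) {
  def findName(id: Int): String = s"user-$id" // stub lookup
}
class UserService(db: Database) {
  def greet(id: Int): String = s"Hello, ${db.findName(id)}"
}
class Controller(users: UserService) {
  def handle(id: Int): String = users.greet(id)
}

// The whole graph, wired by hand. Forget a dependency and the code
// fails to compile, rather than the app failing at startup.
object Wiring {
  val db         = new Database("jdbc:postgresql://localhost/app")
  val users      = new UserService(db)
  val controller = new Controller(users)
}
```

Libraries like macwire just generate the `new` calls in `Wiring` for you; the runtime behavior is identical.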
I use Tapir with http4s as the http server isn't my bottleneck anyway and I like Cats Effect and fs2.
But SoftwareMill has done extensive benchmarking to make sure the overhead of Tapir vs calling the http backend directly is insignificant. I believe Netty is the recommended backend if you want direct style (i.e. no effect systems) on Java 21+ virtual threads, even though Oracle's Helidon Níma is supported too.
Thanks! OK, it seems direct style is what I'd want; an LLM I asked says async call stacks are then simpler to debug, with Java 21+ virtual threads.
Nice to know that there are good alternatives, if it's some day time to migrate away from Play.
Totally agreed, Slick is definitely not a good way to access the database. It massively overcomplicates things and was an oil spill that destroyed the maintainability of many codebases. But that's not really Play specifically, just a library that lots of people used with Play. I personally was always more a fan of https://scalikejdbc.org/, if not just plain JDBC.
> It was in state of semi-abandonment for several years
Yes, this is my main complaint. I remember that for like 5 years after Typesafe Activator had been totally removed from the internet, the Play front page was still showing Activator commands. To this day, the Play site still hasn't removed its line about supporting CoffeeScript and Less.
> Guice (in 2.4 afaik) was a terrible mistake, completely unnecessary and at odds with the Scala philosophy
> Play-JSON depends on Jackson
> Standard library Futures are not so nice
Well, it turned out that the Scala philosophy wasn't the be-all and end-all anyway, and it was always changing (at some point DSLs were in, then they were out; the way people encode type classes changed over the years; selectDynamic/applyDynamic were in and then out; symbols were everywhere and then deprecated; implicit conversions were in and then "best practice" switched to Converters), and there were always at least two camps with very different philosophies. Guice is probably the most popular DI tool in the JVM world, so it seems to make sense to use it.
The Jackson dependency and Scala Future's shortcomings might be annoying to many, but I don't think they really hindered adoption. Even in your case, what happened? At your company they're ripping out Play and replacing it with Spring, which uses DI very similar to Guice, probably depends on Jackson, and uses Java Futures (if they're doing async at all).
It's true that some Scala features or patterns fell out of fashion, but I don't recall any time when replacing compile-time logic with annotation-based runtime reflection was considered a good idea. And I don't think Guice was ever the most popular JSR-330 implementation, at least not when Play started using it; Spring had been dominating the Java world for several years by then. Funnily enough, in the Android world I remember Dagger being very popular around exactly that time, as people figured compile-time automatic DI is a lot saner than Guice.
Of course Spring is another can of worms entirely and I find many aspects infuriating. But things like diamond dependencies are less of an issue thanks to Maven BOMs which are common in the ecosystem.
My point is that ~10 years ago people started using Spark at my company, got curious about Scala and thought Play was compelling. Nowadays Spark (in Scala) is less ubiquitous and these teams remember that so many people got burned by Play before.
Funnily enough, the peak of Scala's hype, which I believe plateaued between 2014 and 2019 before dropping sharply, was mainly driven by:
Spark: groundbreaking in many ways, but it has become a huge liability to the ecosystem, holding so many libraries back. Databricks is also the main financial contributor to the Scala Center (I believe) while not giving much of a damn about the community or Scala 3 altogether. Spark is a very frustrating piece of software overall, and today 95% of users are on PySpark anyway, avoiding JVM dependency hell being one reason.
Play: good idea, questionable execution, poor governance, and today mostly irrelevant to the future of Scala. Props to the community maintainers who managed to secure funding and brought the framework back from the dead, saving many projects from certain death.
Akka: also groundbreaking, pretty much the only game in town if you need stateful cluster sharding, deployed at scale by top-tier companies. But it's also overkill for most people, and on top of that the relicensing really hurt the community and broke trust.
I use Play, and I think it is a good web framework, nowadays.
In the past, however, upgrading to new versions was annoying because of pretty big changes in the API. (Don't know if this is what GP had in mind though.)
And even worse (I suppose) for people who were using the different ORMs which have come and gone, instead of plain SQL.
Akka has been annoying too: adding WebSocket support to my project using Akka was vastly more complicated than doing the same in Node.js (at least judging from the Node.js docs I've seen).
Today, to me, Play is feature-complete and all I want is maintenance updates (and performance optimizations, but those are less important). And yes, that's how things look right now: some bigger companies pay an open-source maintainer, so Play gets regular maintenance updates but no more annoying major API changes (or so I hope). Nice.
Play has become boring in a good way? :- ) (Thinking about some "Use boring tech" HN posts.)
Totally agree. I wrote a few Play apps way back when and really enjoyed it. I was so excited about the future of the framework and how it would beat out Java for web apps, and steal folks away from the Rails ecosystem.
And then it just…stopped. Not sure what happened there honestly.
I have been writing Scala and Rust everyday for the last few years.
I actually don’t see the two overlapping all that much. Rust is a terrible backend language compared to Scala/JVM. When you are dealing with real world concurrency i.e. error/thread management Rust’s memory management model becomes unusably complex very quickly. And the entire ecosystem lacks maturity i.e. the majority of libraries I use are not at version 1.
Whereas from Scala you can just use any Java library e.g. Vertx, Spring almost all of which have commercial, enterprise support and continue to be proven time and time again. It almost always just works.
Rust’s strength is in desktop apps e.g. Tauri and low-level programming.
> When you are dealing with real world concurrency i.e. error/thread management Rust’s memory management model becomes unusably complex very quickly
I've seen this several times, but having built several highly concurrent applications in Rust, I just don't agree. Building a concurrent Rust application the wrong way is certainly complex, but if you know how Rust's strong opinions on how to structure an application work, it's downright pleasant.
Except async. Rust's async story still sucks and should be avoided as much as possible.
I'm not speaking ill of Scala or the JVM: I actually agree. When the GC and overhead don't matter, a JVM language is an absolutely fantastic choice. I was just commenting that Rust can be simple if you architect it right, that's all.
I've mostly come to the conclusion that a mediocre engineer can write performant-enough Java/Scala/Clojure/Kotlin/etc. -- usually even more performant than what an inexperienced or mediocre C/C++/Rust engineer could write, and with an easier time doing it. However, trying to eke out the last bit of performance in a JVM language can be very challenging.
> And the entire ecosystem lacks maturity i.e. the majority of libraries I use are not at version 1.
I'm marginally bothered by the reluctance to bite the bullet and accept a 2.0 will happen in the future, but version numbers do not make for mature libraries. There are plenty of foundational libraries written in C keeping Linux desktops running that are permanent 0.x versions.
> Whereas from Scala you can just use any Java library e.g. Vertx, Spring almost all of which have commercial, enterprise support and continue to be proven time and time again.
I find that "wide and storied library ecosystem" can be a double edged sword: an old library can either be battle-tested, or just old (with cruft or poor design or implementation) and you can't always tell which it is ahead of time. This is true for libraries in any language, and the same thing will happen to Rust in 10 years.
> but version numbers do not make for mature libraries
No, but they do signal a belief among the library maintainers that it isn't stable. And so I end up spending an inordinate amount of time refactoring for the latest version.
> and the same thing will happen to Rust in 10 years
From what I've seen in the last year, it won't. Most of the libraries I use have stopped being maintained. The reason Java does so well here is that in the enterprise space, supporting something for decades is common.
I second that. Rust is a great language, and I like it a lot. But I usually pick Scala whenever I can; it's perfectly suited for complex domains.
So my rule of thumb is:
Large enterprise monolithic projects - Scala.
Microservices, serverless functions, systems with hard resource constraints, CLI apps - Rust.
I thought I was reading my own story! Same here... 10 years of Scala 2.x, until Scala 3 caused my withdrawal, and that infinite waiting for scala-native killed my interest. Now, and for the foreseeable future, it is Rust.
I had a look at Mojo and love it, but I am no longer interested in OO (I used OO for 20+ years and figured I am done with it).
EDIT: even John DeGoes (the ZIO guy) left Scala for similar reasons, now I just remembered, and prefers Rust over it... I love what they are doing with their Golem Cloud.
True, but his article (https://degoes.net/articles/splendid-scala-journey) states it was also because Scala's new syntax broke a lot of things (if I remember correctly, it's been a while since I read it), among other reasons. I also thought it was for political reasons until I read that article.
Python mostly, and Java if you are using Flink. We do both. The thing with Scala is, it's mostly linked to Spark, and then probably a good portion of the gigs out there using Spark are using Databricks, and Databricks is pretty expensive, so people are moving away from it. At least this has been my case for two companies -- one moved to self-host Vertica and the other is about to move to Flink.
Java (+Kotlin) for Android, Python for its automation and tooling (and obviously, data.) These aren't very low-level - For Android there is always C++ if you want to go down to metal.
Ah that's interesting. And there is ofc Swift and ObjC for Apple devices. Maybe I should get into app dev.
Thanks. Do you think there is a strong market for native Android/iOS app development? As a DE I don't think my previous experience is worth much -- maybe a bit more when we move to Flink, which uses Java -- and they might rather hire new graduates than me.
Data is still a big thing, but nowadays vendors have successfully removed most of the management because people just want to get things done, so a lot of companies are moving to Snowflake and similar platforms. You basically do some click-ops and hope shit sticks to the wall.
Interesting if you just like data, but bad if you like the tech beneath. I'd recommend moving away from the DWH part of data engineering if you can and focusing on OLTP and streaming, which are more technical. The culture is also different.
It depends on what you want to do, of course, but for back end servers the Rust ecosystem is relatively immature. There is nothing comparable to ZIO in Rust world, for example.
What's more important than the current state is the direction where it's heading.
Rust has energy, vision and momentum - additionally it has a powerful niche that it dominates.
Scala has unfortunately fizzled out, I would say mainly because of incompetent (or, more precisely, academia-driven) leadership, but also because the alternatives (mainly Java) have gotten much better since Scala was introduced. Which is a shame, as it is a great language that offers elegance and great power.