I don't want microservices; I want an executable. Memory is shared directly, and the IDE and compiler know about the whole system by virtue of it being integrated.
Probably works OK for a small project with a close-knit team of skilled contributors, where there's a well-defined structure and everyone has a sufficiently high-level understanding of it to know which kinds of dependencies are healthy to have and which are not.
But unless you have some way of enforcing that access between components happens through well-defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change: shared memory makes it easy for folks to add direct dependencies between data structures of components that shouldn't be coupled.
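One lightweight way to get that enforcement in a monolith is to let the language's visibility rules do the work. A minimal sketch (all names here are hypothetical, not from the thread): a component exposes only a narrow interface, and its internal data structures stay private, so other components cannot grow direct dependencies on them.

```java
// Sketch: a component boundary enforced by visibility, not by a network hop.
public class ComponentBoundary {

    // The well-defined interface that other components are allowed to use.
    public interface InventoryService {
        int stockLevel(String sku);
    }

    // The implementation and its internal data structure are private;
    // callers can only go through the interface above, so refactoring
    // the map below can never break another component.
    private static final class InMemoryInventory implements InventoryService {
        private final java.util.Map<String, Integer> stock = new java.util.HashMap<>();

        InMemoryInventory() {
            stock.put("widget", 7);
        }

        @Override
        public int stockLevel(String sku) {
            return stock.getOrDefault(sku, 0);
        }
    }

    // The factory returns only the interface type, never the concrete class.
    public static InventoryService inventoryService() {
        return new InMemoryInventory();
    }

    public static void main(String[] args) {
        InventoryService inventory = inventoryService();
        System.out.println(inventory.stockLevel("widget"));
    }
}
```

The same idea scales up via package-private classes, JPMS modules, or an architecture-test library, and unlike a service boundary it's checked by the compiler and visible to "Find All References."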
> But unless you have some way of enforcing that access between components happens through well-defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change
You are describing the "microservice architecture" that I currently loathe at my day job. Fans of microservices would accurately say "well that's not proper microservices; that's a distributed monolith" but my point is that choosing microservices does not enforce any kind of architectural quality at all. It just means that all of your mistakes are now eternally enshrined thanks to Hyrum's Law, rather than being private/unpublished functions that are easy to refactor using "Find All References" and unit tests.
I loved uberjars back when I was writing Scala. I don't miss much about the JVM, but I really miss having a single executable I could just upload and run without having to pay attention to the environment on the host machine.
Heavier weight? Yes. Slower? Performance should be the same: as long as you're on a Linux host, there's no Docker penalty.
The only case I can think of where a single JVM might be faster is a multi-tenant setup. There, one shared JVM can use the GC more effectively than multiple JVMs running in separate containers.
I didn't mean the application is less performant (indeed, it might be even more performant since there are more things we can tweak), but that the overall build + upload process is noticeably slower. I remember it taking <1 minute to rebuild + deploy an uberjar in 2016, and on an application of similar complexity today it takes somewhere between 3-15 minutes. Admittedly, this is a different company, language, and setup, so there are a lot of other potential differences.
Uberjars (typically) extract all the classes from the dependency jars and combine them into a single jar. Since jars are just zip archives, that's a lot of decompress/recompress work.
Container layers are simply saved-off filesystem modifications. If you use something like Jib to build your image, the deployable transfer should be a lot smaller than with an uberjar, because Jib puts your dependencies in one layer and your application classes in another. Assuming you work like most people, the only thing that usually gets transferred on a deploy is your application code. Dependencies only get sent if you change them or the base image.
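For reference, wiring Jib into a Maven build takes only a plugin block; a minimal sketch (the image name is a placeholder, and the version shown is just one known release):

```xml
<!-- jib-maven-plugin: builds an OCI image without a Dockerfile or daemon.
     By default it splits the image into separate layers for dependencies,
     resources, and classes, so a code-only change re-pushes only the
     small classes layer. -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>registry.example.com/my-app</image>
    </to>
  </configuration>
</plugin>
```

Then `mvn compile jib:build` pushes the image, and only the layers whose contents changed are actually uploaded.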