
I think there are a lot of Java developers who have never worked without a DI framework and just don't have a grasp of how simple it can be to write code without one.


As someone who hated Java, used it for a few years, and now occasionally misses it...

I only miss DI. I miss being able to say "this system depends on these external things" and having a consistent, convenient way of sharing/swapping/testing those components and dependencies.

The solution in other languages? Unstructured globals, deep argument passing, or monkey patching with mocks?!

Yea, I can write simpler code without DI... By ignoring a bunch of stuff.


You can do DI without a framework.

If you write classes with final fields, with a constructor that takes the class's dependencies, and don't use static fields to hold mutable data, then you are doing DI. Just call `new` yourself, instead of having the framework do it for you.


When you have lots of things-that-create-things-that-create-things, this gets tedious really fast. DI frameworks exist because they result in a lot less code that does nothing but pass dependencies along.

This reminds me of the SQL/ORM debate. "Just use SQL!" Sure, until you get tired of typing the same SQL over and over and realize you can cut out most of that crap by adding an ORM.


The trick is to not encourage that many things-that-create-things-that-create-things. That's a uniquely Java problem.

https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...


If you take the single responsibility principle even as much as half-seriously, the problem domain more or less decides which things will create which things. If your software platform can't support that, you get spaghetti mess when programmers inevitably build workarounds.


You know, you hear Java developers repeat things like that a lot, while Go programs just tend to stay simple and readable. It's either the culture or the language causing the problem. shrug


I did a fair bit of work in Go at Pivotal. I found Go anything but readable - a comical amount of boilerplate (especially around error handling), incredibly wordy constructs for simple tasks like making http requests, and the language is almost overtly hostile to functional programming (no generics!).

I use Go as a "better C". Though I'm honestly disappointed with even that. At my current company, we built an image processing service in Go. It performed poorly and had poor stability (the imagemagick bindings appear to be half-baked). I rewrote it in Java and it is faster, more stable, and the code is much cleaner.

Honestly, the next time I need a "better C", I'll probably pick up Rust or D.

YMMV.


> I did a fair bit of work in Go at Pivotal. I found Go anything but readable - a comical amount of boilerplate (especially around error handling), incredibly wordy constructs for simple tasks like making http requests, and the language is almost overtly hostile to functional programming (no generics!).

Are you saying that Java is better about any of that?


Yes, absolutely. Java has had a competent implementation of generics since 2004 (Java 5) and really embraced functional programming in 2014 (Java 8). Any application of significance will require more LoC in Go than Java, hands down.

Just compare Java streams with Go container classes. Go's aren't typesafe (though that will hopefully change when generics are officially released) and almost every operation requires imperative code. And there's an endless `if err != nil { return err }` every time you want to call a function - which actually destroys useful stack information.

I won't apologize for the crap Java code out there - but you can write crap in any language. Modern Java is capable of producing pretty, svelte code.


Fair points. I haven't worked with Go in a few years, and I remember hating it when I did, but I feel like I remember hating Java more. It's possible that part of the Java hate is not from the language itself, but from the ecosystem.

Can you elaborate on Java streams vs Go's containers? I assume you mean things like List and Heap in Go? I'm not sure why you'd compare those to Java's stream API rather than Java's collections. In any case, I do agree that Java's standard library has WAY better collections than Go does, and Go doesn't have the excuse of wanting a minimal standard library.

However, I'll push back a bit on the complaint that working with Go's containers/collections/whatever requires imperative code for everything. Now, I'll remind myself that one of your original points was that Go was "actively hostile toward functional programming" and I retorted to imply that Java was just as bad at all of the things you mentioned. I'll concede that Java isn't actually quite as hostile toward functional programming as Go. But, I'll move the goalposts a bit and claim that supporting some few functional programming patterns isn't inherently good and doesn't automatically make a language better.

> And endless `if err != nil { return err }` every time you want to call a function - which actually destroys useful stack information.

I agree and disagree. I'm one of the few people who still thinks that checked exceptions are a good idea for a language. I have my complaints about how they're implemented in Java, but I think the concept is still a good one and I honestly think that even the Java implementation of checked exceptions is mostly fine. The issue, IMO, is with training and explaining when to use checked vs. unchecked exceptions and how to design good error type hierarchies.

Go's idiomatic error handling is mostly stupid because Go doesn't have sum types. But, I'd argue that if you are wanting stack information, it means that you shouldn't be returning error values at all- you should be panicking. Error values are for expected failures, a.k.a. domain errors. You can and should attach domain-relevant information to error values when possible, but generally, there shouldn't be a need for call-stack information. A bug should be a panic.


Here's a Java example that sums the populations of a list of Countries:

    int population = countries.stream().mapToInt(Country::getPopulation).sum();
The Go implementation:

    var population = 0
    for _, country := range countries {
        population += country.Population
    }
It gets more perverse if you need to flatMap, or transmute components of map types, etc. If you want even more power, take a look at https://github.com/amaembo/streamex. This sort of container manipulation is bread and butter for business processing. I use it every day, sometimes with a dozen operations. This (with liberal use of `final` values) makes for some pretty functional-looking code.
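For instance, a flatMap version of the population sum (with hypothetical `Country`/`City` records, since the thread's actual classes aren't shown) might look like this sketch:

```java
import java.util.List;

public class FlatMapExample {
    // Hypothetical records standing in for the domain classes in the thread
    record City(String name, int population) {}
    record Country(String name, List<City> cities) {}

    // Flatten every country's city list into one stream, then sum populations
    static int totalPopulation(List<Country> countries) {
        return countries.stream()
                .flatMap(country -> country.cities().stream())
                .mapToInt(City::population)
                .sum();
    }

    public static void main(String[] args) {
        List<Country> countries = List.of(
                new Country("A", List.of(new City("a1", 100), new City("a2", 200))),
                new Country("B", List.of(new City("b1", 300))));
        System.out.println(totalPopulation(countries)); // prints 600
    }
}
```

The Go equivalent needs a nested loop plus manual accumulation.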

I'll grant you the Kotlin or Scala version is slightly more compact. But not fundamentally different, like the Go version.

I (and pretty much every language designer in the post-Java era) disagree with you about checked exceptions, but that's a whole different thread...


The go version looks perfectly fine to me (saying this as someone who uses clojure every day) ;)

Something else to consider is performance: in most implementations the for loop is going to be more efficient.


That's exactly my complaint- most languages have eager, mutable, non-persistent collections because they were not designed with functional programming in mind.

Then FP became the hot new shit, so they all added some of the lowest hanging fruit so that people can say absolutely weird things like "I do FP in C#". The problem is that the majority of these implementations just eagerly iterate the collection and make full copies every time. So, you're much better off with a for-loop.

To be fair to GP, though, Java has legit engineering behind it, and the way they did it was to introduce the Stream API, which is lazy sequences, and they made the compiler smart enough to avoid actually allocating a new Stream object per method call (which is what the code nominally does, IIRC- each method wraps the original Stream in a new Stream object that holds on to the closure argument and applies on each iteration).


If you really want to go wild, take a look at https://www.vavr.io/ (formerly Javaslang). You can make programming in Java as functional as you want.


I have to admit, that looks pretty slick.

Have you used it? I'd be curious to hear how well it works in practice.

It seems like the only "big" things Scala has over this is its implicits (which so many people hate, but have been really improved in version 3) and its for-comprehension syntax.

It's so interesting to see a bunch of projects converge on really similar things. You look at Scala, at this Vavr stuff, and at Kotlin + Arrow.kt, and they're implementing all of the same stuff over Java.


Ah. You know what? I forgot that the Java implementation of these concepts isn't stupid like it is in some other languages (except what the heck is mapToInt? Some optimized version that makes a primitive array, I guess? Yucky- I wish the compiler could just figure that out).

So, I concede that Java's addition of the stream API is a legitimately good example of adding an aspect of functional programming to an otherwise very non-FP language.

But, let me go off on my tangent, anyway. ;)

It's not that you need to convince me that functional programming is great. It's just that I find that consistent and coherent designs tend to work well and that kitchen-sink or be-everything-to-everybody approaches tend to be good at nothing and mediocre-to-bad at everything.

MOST languages that have tacked on the low-hanging fruit of FP (map, filter, etc combinators on collections) have done it in a really sub-optimal way.

JavaScript, for example. JavaScript has eager, mutable, non-persistent arrays as the default collection data structure. When they added map, reduce, filter, etc. to Array, they added them in the most naive possible way, which means that doing something like your example above (map-then-sum) would create an entire extra array with the same number of elements as the original, and would end up iterating over both arrays once each. So we have ~2N memory usage and 2N iterations where we really should just have an extra 8 bytes to hold the sum and a single pass over the array (N iterations).

Same thing with other languages like Swift and Kotlin.

Kotlin maybe deserves an asterisk because it has Sequence, which mostly works like Java's streams. However, there are two issues: it still offers the combinators on eager iterables, instead of forcing us to use a sequence/stream to access them, and with suspend functions you have to be careful with Sequences. In your Java example, we're theoretically allocating a new Stream object with every combinator call, BUT we "know" that the compiler is smart enough to avoid those allocations, and the resulting code will be about as fast as a hand-written for-loop. With Kotlin's suspend functions, we can very easily thwart the compiler's ability to do that: if you use a Sequence chain inside a suspend function and call another suspend function as part of that chain, then that's a yield point and the compiler can no longer optimize away the allocation of the intermediate Sequence object(s).

So, my point is that designing a language with some initial philosophy and then trying to borrow from, frankly, incompatible other philosophies usually leads to sub-optimal implementations and/or APIs. Again, though, Java's streams are a good counter example to my claim.

> I (and pretty much every language designer in the post-Java era) disagree with you about checked exceptions, but that's a whole different thread...

Indeed it is! :) I'm willing to be the black sheep, and die on that hill, though (too many metaphors?). And, honestly, I don't think it's as unanimous as some people claim. I see returning monadic error values as isomorphic to checked exceptions, and several languages have gone that route since Java: Scala, Swift, and Rust, to name a few. Kotlin's lead dude, Roman, simultaneously claims that checked exceptions were a terrible mistake, but then also advocates for using sealed classes for return values when failure is expected or in the domain, which sounds a lot like what checked exceptions are supposed to be used for. TypeScript can't have monadic error handling because of its design philosophy of being a thin layer over JavaScript, but many in that community have embraced using union types for return values instead of throwing Errors.
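To make that isomorphism concrete, here's a sketch (assuming Java 21+ for pattern matching in switch; all type names are invented) of the sealed-class-as-expected-failure pattern. The exhaustive switch forces the caller to acknowledge every failure mode, which is exactly the guarantee a checked exception gives:

```java
import java.util.Map;

public class SealedResultExample {
    // Expected failure modes are part of the return type, not hidden in a throws clause
    sealed interface LookupResult permits Found, NotFound, AccessDenied {}
    record Found(String value) implements LookupResult {}
    record NotFound(String key) implements LookupResult {}
    record AccessDenied(String reason) implements LookupResult {}

    static LookupResult lookup(Map<String, String> db, String key) {
        String v = db.get(key);
        return v == null ? new NotFound(key) : new Found(v);
    }

    // The switch must be exhaustive over the sealed hierarchy, so forgetting
    // a failure case is a compile error
    static String describe(LookupResult result) {
        return switch (result) {
            case Found f -> "got " + f.value();
            case NotFound n -> "no entry for " + n.key();
            case AccessDenied d -> "denied: " + d.reason();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(lookup(Map.of("id", "42"), "missing"))); // prints no entry for missing
    }
}
```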

Cheers!


Yeah, mapToInt is annoying because the primitive/object dichotomy in Java is annoying. No question about it, it's a wart on the language. Though it does offer some optimization abilities, so the dichotomy is not completely meritless - it's easy to understand why the language designers did it this way. Maybe Project Valhalla will fix this someday, I don't know. In the meantime, it's not a fatal flaw.


I imagine that is because Go is not used for applications of the same breadth as Java.

Go is typically structured with many relatively small binaries. Each binary can be relatively self-contained.

The way I've seen Java used, it typically has fewer binaries, with each binary bundling many services. Many of these include clients for services owned by a different org at the same company, where that other org can just provide a Guice module that sets up the client to call their service, and anything that needs it can easily inject it.

I still hate Java but, damn, I see why it's used at B I G companies.


As mentioned, Go is not used for ENTERPRISE APPS™ — Java programs can really hold up under insane abstractions and complexity.

Also, Go has really poor abstracting capability, which may be good for small code bases where having abstractions is a detriment, but abstractions are the only way to handle complexity. If you have the logic spread out over many different parts (or God save us, copied code!), a new programmer will have much more trouble picking up what the hell is supposed to happen.

In the extreme case, compare reading assembly to a high-level language. Sure, each instruction is trivial in the former case, but you have no idea what the whole thing does.


So, COBOL? That argument works in this historical moment of Java legacy, until it doesn't. Monzo is an example of a bank writing everything in Go.


Good luck to them with that...

Java is in the unique position of having excellent performance (state-of-the-art GC, a very good JIT compiler) and observability with no-overhead real-time options. Since the language has multiple implementations of a standard and one of the top 3 biggest ecosystems, it is nothing like Cobol. You can say it is legacy for three decades to come, but it will not die. Hell, it improves at a never-before-seen speed.


Now you seem to have switched the conversation from "Java projects tend to be overly complex" to "Java is great". Common talking point, and a lot of people will agree with you, but it's pretty much unrelated to the topic.


My original point was that abstractions are not evil - hell, without them we would only have calculators, not computers.

Go's limited abstraction power can be an advantage (as per the creator, not my words: you can throw as many bad developers at a project as you want), but it is a disadvantage as well, because the logic ends up scattered across distributed places, copied verbatim, etc., hindering maintainability, understanding of the original intent, new-dev onboarding, everything.


These words are bandied around a lot by people outside the Go community, while the people who end up actually using Go a lot tend to say it's the most readable code they've worked with in their lives. shrug


Is that why Google has a DI code generation tool for Go (Wire)?


That thing that's largely not used? Sure, some person who signed Google's onerous employment contract wrote that. Look what other stuff those people are pushing https://cloud.google.com/open-cloud/ and see how it's all "enterprise solutions" while the community adoption is at 694 projects importing wire: https://pkg.go.dev/github.com/google/wire?tab=importedby


When I debug a well written C/C++ code usually the callstack is about 10 levels deep.

When I debug a well written Java code usually the callstack is about 50 levels deep.

It's not because of the Single Responsibility Principle.


Why? With a framework as well-known as Spring, any Spring developer instantly knows the conventions (which is not true of your in-house conventions, where I will have to hunt down where this class comes from - oh, and this ugly abstraction, which is buggy as well); less code means fewer opportunities to introduce bugs and fewer things to maintain. Annotations are basically just a declarative DSL for a significant chunk of your code base.

I really don’t see any cons other than a slight learning curve (and yeah, sure, “developers” who just bash keys will have trouble understanding what an annotation does, and blindly copy-pasting annotations can be dangerous, but they will also fk-up regular code as well..)


> Sure, until you get tired of typing the same SQL over and over and realize you can cut out most of that crap by adding an ORM.

And adding an ORM isn't either/or. You can still use native SQL when necessary.


Yep, but if you have to change 5 constructors to get a new dependency to where it needs to be, calling `new` yourself starts to suck.


> Just call `new` yourself, instead of having the framework do it for you.

But at that point, why would I want to?

There are reasons I wouldn't want to, but there is no inherent value, to me, in manually calling new.


By calling new yourself, you get a sane stack trace when something is misconfigured. That alone is worth the tiny additional amount of code in my book.


How is that worth anything? You pretty much only have to look at the topmost exception, or at worst the causing one. Whether it has 100 lines after it or 3 doesn’t matter, not the slightest.


But how do you handle configuration then? At some point you want a user-facing UI where the available features (which are generally classes) are listed and the user can choose the feature, say which log backend is enabled, without having to change code - that's the whole point of it. (And it's the most tedious code to write by hand - a complete waste of time.)


> But how do you handle configuration then?

In the main method, then you can pass the configured values wherever you need to when new-ing classes.

> At some point you want a user-facing UI where the available features (which are generally classes) are listed and the user can choose the feature, say which log backend is enabled, without having to change code - that's the whole point of it. (And the most tedious code to write by hand - a complete waste of time)

I consider DI a valuable pattern, but I've never experienced anything close to this need.


What happens with proxied classes? My ClassWithTransactions is actually a subclass of the written one auto-generated by Spring. I can’t inject a new instance of that manually.

And you may say that you don’t need aspect-oriented programming, but the usual way of handling transactions in many other languages, without some metaprogramming, is... to not handle transactions. Putting a single annotation over a method is, imo, a very elegant way to provide this needed functionality.
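For readers unfamiliar with the pattern, a minimal sketch of what the parent is describing (a fragment, not a runnable program: it assumes Spring Framework on the classpath, and `AccountRepository` is a hypothetical dependency invented for illustration):

```java
// Spring generates a proxy subclass of this class, so external calls to
// transfer() go through transaction handling before reaching the method body.
@Service
public class TransferService {
    private final AccountRepository accounts; // hypothetical repository

    public TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Spring begins a transaction before the method, commits when it returns
    // normally, and rolls back if a runtime exception escapes.
    @Transactional
    public void transfer(long fromId, long toId, long amountCents) {
        accounts.withdraw(fromId, amountCents);
        accounts.deposit(toId, amountCents);
    }
}
```

Note this is also why `new TransferService(...)` by hand loses the transaction behavior: the annotation only takes effect on the proxied instance the container hands out.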


This is all considerably more abstraction than I have wanted or needed when writing Java. When handling transactions, I’ve passed around the same connection before committing.


The point of the transaction in this context is that both the database(s) and the business logic/state stay in sync. I don’t think that naive attempts will be logically correct.


What do you consider naive? JDBC supports transactions, I’ve had success with that.
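A sketch of that manual JDBC approach, for comparison (the table name and SQL are made up, and this assumes a live `Connection` obtained elsewhere, so it is illustrative rather than runnable on its own):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ManualTransaction {
    // Hand-rolled transaction handling: both updates commit together or not at all.
    static void transfer(Connection conn, long fromId, long toId, long cents) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false); // begin the transaction
        try (PreparedStatement debit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            debit.setLong(1, cents);
            debit.setLong(2, fromId);
            debit.executeUpdate();
            credit.setLong(1, cents);
            credit.setLong(2, toId);
            credit.executeUpdate();
            conn.commit(); // both updates, or neither
        } catch (SQLException e) {
            conn.rollback(); // undo the partial work
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}
```

As the sibling comments note, this handles the database side fine; what it does not do is roll back any in-memory application state touched along the way.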


As I mentioned, you will have to roll back not only the database, but the relevant application state as well. This is really error-prone if repeated enough times, or if the flow of control passes through many different methods, etc.


By application state, do you mean something like an in memory cache? I would prefer having no such state in the first place, and have all meaningful state in the DB to be pulled out or mutated as needed.

I recognize that what you’re advocating for makes sense in some applications; I just wanted to point out that I haven’t felt the need for it in my eight years of software development.


> I consider DI a valuable pattern, but I've never experienced anything close to this need.

Literally every non-toy software I had to develop in my life required that lol


Do you seriously allow users to configure which logging framework is used through a UI? Perhaps I’m misunderstanding what you’re saying.


In my case it's more often which audio, gamepad, or graphics backend, but yes, I actually had the log-backend configuration request once! (They wanted to choose whether to log to text files or websockets depending on the case, for a GUI app; there was an explicit requirement that the entire software could be configured and used with only a mouse, no keyboard, so many configuration menus were needed.)


My company chose exactly this design for several of our microservices. It is now almost universally considered to have been a mistake.


Why? What happened?


It very quickly becomes an unmaintainable mess once the service grows past a certain size.


But, how, exactly?

The only bad things I can see are:

1. Constructors with many parameters

2. Needing to pass a dependency many levels deep

But, I would still think that those are not big deals (what do you have, 40 parameters or something?) and that the explicitness can be helpful. Isn't it good to know that the top-level service depends on your email-sender dependency just from looking at its code, instead of needing to analyze its code and every single object under it?


Which is the standard way to do DI in Spring as well? It will just be called by reflection instead.

But frankly, how will you call that new if it depends on a class which is a singleton, and another which has some more complicated scope, so it may or may not have to be reused? DI is not only about calling new...


Another useful feature of Spring is aspect-oriented programming (like when we manage transaction boundaries with @Transactional).

Spring takes care of that, but doing it manually (and without dynamic proxies) would add to the verbosity.


Is what you're thinking of equivalent to deep argument passing? I've seen it done where you pass around a global Factory object that can provide dependencies. It's basically rudimentary DIY DI.


It's really very simple, no you don't need to pass around a factory object.

You just have a class (or classes) that constructs and wires all of your singleton objects, passing the required dependencies into their respective constructors as necessary.

Here is a contrived example of what the wiring code might look like for a web app that uses a database.

    public static void main(String[] args) {
        MyConfig config = readConfigFile();
        DatabaseConnection dbConn = new DatabaseConnection(config.dbHost(), config.dbPort());
        UserDao userDao = new UserDao(dbConn);
        UserController userController = new UserController(userDao);
        List<Controller> controllers = List.of(userController);
        WebServer webServer = new WebServer(config.listenPort(), controllers);
        webServer.runAndBlock();
    }


How is it better to make a dev write out that plumbing and others reread it? I’m made of meat, so I want to automate everything we safely can.


Advantages:

- Don't need to depend on a DI library, makes code more modular and portable.

- Faster application initialization time.

- Easy to navigate and understand relationships between classes, good IDE support.

- Easier to break apart and test parts of the application.

- Easy to understand, don't need to learn the intricacies of a complex DI framework.

I'm not saying there is no place for DI frameworks, although I do think they are overused.
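On the testing point: with the plain-constructor style, swapping in a fake is just passing a different argument. A self-contained sketch (the `UserDao`/`UserController` here are simplified, hypothetical stand-ins for the classes in the wiring example above):

```java
import java.util.List;

public class ManualDiTest {
    // Hypothetical stand-ins: the controller depends on the DAO only
    // through its interface, injected via the constructor.
    interface UserDao { List<String> findAllNames(); }

    static class UserController {
        private final UserDao dao;
        UserController(UserDao dao) { this.dao = dao; }
        String listUsers() { return String.join(",", dao.findAllNames()); }
    }

    public static void main(String[] args) {
        // No framework needed to swap in a fake: just pass it to the constructor.
        UserDao fakeDao = () -> List.of("alice", "bob");
        UserController controller = new UserController(fakeDao);
        System.out.println(controller.listUsers()); // prints alice,bob
    }
}
```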


> don’t need to learn

I know it is a nitpick, but I see this way more often than I should as a main reason to prefer alternatives.

Finding out what gets injected is not particularly hard, especially when only the basic capabilities of Spring’s DI are used. In that case it will almost always be the single implementing class of the given type.


So if you are making any kind of reusable design, you cannot just annotate your classes with @Bean anymore. Instead you will make an @Configuration (like a Spring Boot auto-configuration) that may, at its discretion, pull in some more general (not @Configuration-annotated) reusable configuration. Since some classes will be considered implementation details, you won't want to expose those in Spring's dependency-injection container (since that is equivalent to making them public - people will inject them and depend on them!). So instead you will only create them inside your own @Configuration and pass them directly when producing a @Bean from a method.

Congratulations, your @Configuration is manual dependency injection. That is easy enough. Why did we need inversion of control over the dependency injection in the first place? It isn't immediately obvious to new engineers what aspect of the @Autowired is dependency injection and which aspect is inversion of control. Many of us don't see much of a benefit to the inversion of control if you are taking care of your application's hygiene in the first place.


A @Bean method is a signal that a class is so complicated that Spring can’t figure out how to create a valid instance via @Import or @ComponentScan alone. For limiting visibility, package-private types and methods are better than creating components yourself and reinventing pieces of Spring like @Profile, @Value, and @Scope.


You’re reading and writing the “plumbing” no matter what you do. So, does it matter if you are writing a config file or Java code?


Unless WebServer is the only class that needs dependencies, you're either going to have to pass those dependencies repeatedly from class to class, or you're going to have a global factory that provides the dependencies to everybody.


Yep. Once you get too many arguments, what you usually do is create some kind of Context class that bundles all of them and just pass that around everywhere.
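A minimal sketch of that Context pattern (all names invented):

```java
public class ContextExample {
    // Hypothetical shared services that many call sites need
    static class Clock { long now() { return 0L; } }
    static class Logger {
        final StringBuilder out = new StringBuilder();
        void log(String msg) { out.append(msg); }
    }

    // One bundle instead of N separate constructor/method parameters
    record AppContext(Clock clock, Logger logger) {}

    // Each handler takes the whole context rather than each dependency individually
    static void handleRequest(AppContext ctx, String path) {
        ctx.logger().log("handling " + path + " at t=" + ctx.clock().now());
    }

    public static void main(String[] args) {
        AppContext ctx = new AppContext(new Clock(), new Logger());
        handleRequest(ctx, "/users");
        System.out.println(ctx.logger().out); // prints handling /users at t=0
    }
}
```

The trade-off is that the context grows into a grab-bag over time, and a method's signature no longer tells you which of the bundled dependencies it actually uses.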


I'd say that the only one of your listed solutions-in-other-languages that is actually a valid solution is deep argument passing.

And I fail to see why it's a problem. If your FooService depends on a BarService, which depends on a BazService, and BazService needs a database connection, then that means your FooService really does also depend on a database connection. Hiding that information, to me, seems like a mistake. Can you articulate why one would prefer not to have FooService explicitly require that database connection, or am I inadvertently arguing against a straw man? If so, please correct me, because I'm asking sincerely.

Of all the time I spend thinking about my code and writing code, I truly can't say that adding a dependency and having the compiler complain until I fix a bunch of constructors has really caused me that much grief. And I'm not going to pretend that it has never been the case that I've had to fix 20 constructors.


Ultimately, I think this is going to come down to preference.

I would prefer not to have to fix 20 constructors.

It's tedious and time consuming. The intermediate classes that _do technically depend on FooService because BarService does_ - the intermediate classes don't care! It clutters the code everywhere else for minimal benefit.

Manually, you see all your dependencies just shy of main where the binary initializes them all and starts passing things down. In DI, you have a module file somewhere with them all.


Definitely a preference thing- no doubt.

But thank you for responding anyway.

(As a clarification, in case it's needed: I obviously didn't LOVE it when I had to update 20 ctors after changing a somewhat fundamental "service" to need a new dep. My point was that, even as painful as that was, it wasn't that bad and it's usually much less bad than that.)

I guess the (philosophical) difference comes to this statement:

> The intermediate classes that _do technically depend on FooService because BarService does_ - the intermediate classes don't care!

I can definitely understand what you're saying there, but it's interesting to me that I don't see it that way. I think I'm just less pragmatic and more... "academic" (?) about how I read and understand my own code. If X depends on Y and Y depends on Z, I'm comfortable with X explicitly depending on Z because I imagine "inlining" Y's functionality in X. Either that or you turn Y into an interface and then X only depends on IY. But, my brain just likes the explicit continuity I guess.

Cheers!


The solution in other languages is to use a DI framework written for them. Which language doesn't have any? In .NET, the basic DI interface (imports/exports, etc.) is even part of the standard library as System.Composition.


You say that, and I usually agree, I mean, constructor args are the simplest form of DI.

But then, working in a complex codebase, I introduce a new dependency that is instantiated early in the tree and used by two disparate classes rather deep in the tree, and suddenly I'm changing 10 different constructors just to get the new dependency where it needs to be.

The tree of constructors is where DI shines as an alternative.


That REALLY depends on the size of your codebase. When it’s small, no need for a DI framework. But when it grows large, it becomes quite a pain, and a DI framework is nice, eliminates a bunch of boilerplate with every code change.



