I have a full-on "The Problem With LangChain" blog post in the pipeline, and the reason I made a simple alternative (https://news.ycombinator.com/item?id=36393782) is that I spent a month working with LangChain and came to the conclusion that it's just easier to make my own Python package than it is to hack LangChain to fit my needs.
A few bullet points:
- LangChain encourages tool lock-in for little developer benefit, as noted in the OP. There is no inherent advantage to using its tools, and some have suboptimal implementations.
- The current implementations of the ReAct workflow and prompt engineering are based on InstructGPT (text-davinci-003), and are extremely out of date compared to what you can do with ChatGPT/GPT-4.
- Debugging a LangChain error is near impossible, even with verbose=True.
- If you need anything outside the workflows in the documentation, it's extremely difficult to hack, even with Custom Agents.
- The documentation is missing a lot of relevant detail (e.g. the difference between Agent types) that you have to go diving into the codebase for.
- The extreme popularity of LangChain is warping the entire AI ecosystem around its workflows, to the point of harming it. Recent releases by Hugging Face and OpenAI recontextualize themselves around LangChain's "it's just magic AI" framing, hurting development and code clarity.
Part of the reason I'm hesitant to release said blog post is because I don't want to be that asshole who criticizes open source software that's operating in good faith.
> Part of the reason I'm hesitant to release said blog post is because I don't want to be that asshole who criticizes open source software that's operating in good faith.
Beyond the "extreme popularity of LangChain is warping the entire AI ecosystem around the workflows to the point of harming it", hasn't it recently become an attractor for a substantial amount of investment money? I'm not saying you should be an ass about it, but the ecosystem will keep getting warped further if knowledgeable people won't speak up, and LangChain doesn't seem to be a random small open-source project anymore.
I'm not worried about LangChain itself taking criticism poorly; it's more the fanboys who have a vested interest in maintaining the status quo. I don't have the free time to deal with annoying "you're just nitpicking because you're jealous" and "it's open source, why don't you just make a PR to fix everything instead of whining?" messages.
This is our worry with building Auto-GPT as well. We have had a number of rather involved discussions about why we don't use it. I'd love it if you published the post so I can point to it rather than rehashing the arguments every few days.
> Debugging a LangChain error is near impossible, even with verbose=True.
(A while ago) I tried using LangChain and quickly gave up after not finding any way whatsoever to actually debug what's going on under the hood (e.g. seeing the actual prompts and LLM queries). It's pretty ridiculous that this isn't basic functionality, or at least that it isn't very discoverable.
I cannot imagine spending extended time with a framework without knowing what the internals are doing. I do realize this isn’t achievable on all levels with LLMs, but introducing more black boxes on top of existing ones isn’t solving any problems.
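For what it's worth, the kind of introspection being asked for here is just a callback hook: a handler that fires right before the framework sends a prompt to the model. Below is a minimal, self-contained sketch of that pattern. `BaseCallbackHandler` here is a stand-in class so the example runs without LangChain installed; the real class of that name lives in LangChain's callbacks module, and its `on_llm_start` hook similarly receives the raw prompt strings.

```python
class BaseCallbackHandler:
    """Stand-in for a framework callback base class (illustrative only)."""
    def on_llm_start(self, serialized, prompts, **kwargs):
        pass  # no-op by default; subclasses override the hooks they care about


class PromptLogger(BaseCallbackHandler):
    """Record and print every prompt the framework is about to send to the LLM."""
    def __init__(self):
        self.seen = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        for p in prompts:
            self.seen.append(p)
            print("--- prompt sent to LLM ---")
            print(p)


# Simulate what a chain would do internally just before calling the model:
logger = PromptLogger()
logger.on_llm_start({"name": "FakeLLM"}, ["Answer the question: What is 2+2?"])
```

In a real framework you would pass an instance like `PromptLogger()` in as a callback when constructing the chain or model object; the point is that surfacing prompts is cheap to implement, which makes its poor discoverability all the more frustrating.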
We’ve had a lot of similar concerns when working on Auto-GPT and have been repeatedly asked why we don’t use it. You’ve solidified a lot of the reasons it’s not fit for purpose for large complex projects.
We’ve received a lot of commentary on our unwillingness to use it, and I don’t blame you for being hesitant. I don’t want to be the open-source project that publicly calls another project bad just because it isn’t suitable for our use case.
Arr matey, ye might be taken aback, but this here post is singing the praises of LangChain, loud and clear. You only spot such dreadful slander taking wings when a project be making mincemeat of its rivals, and someone's tender feelings be getting a bruising.
For that poor soul, it's like a slap in the face from the mighty Poseidon himself. The thriving project dares to steer its course in a way that ruffles their precious sensibilities. The audacity! Since they be the compass of all that's right, the project must be heading for the rocks, not them. Why would any sane sailor hitch their fortunes to this monstrous beast, when it's not charted on their blessed map?
But in a world soaked to the bone with crowd follies and tribal loyalties, the voice of the multitude sometimes manages to ring out as one, and for good reason, mark me words. Cast your eyes on the likes of React, Kubernetes, and Tailwind.
These examples, like our beloved whipping boy LangChain, skilfully merge a motley of tactics from the teeming ecosystem, distilling them into a chart that's simpler and more intuitive, though a tad odd and confining.
As it sails the high seas, growing and evolving, brace yerself for the titanic task of keeping the code shipshape. But our chummy critic, bless their heart, can't spot the shining treasure that's clear as day to the rest of us simple seafarers. They'd sooner believe it's a devilish trick or that the developers finding riches in it are either lost at sea or plain daft.
This stirs a merry storm of cognitive dissonance in their noggin. It becomes their holy mission to persuade themselves that it's the rest of the crew caught in a dreadful mirage, and they're the sole beacon of sanity in a mad world!
They nimbly dodge Occam's Cutlass, baffled by how the whole crew could go stark raving mad in harmony. And then comes the climax: a breathtaking revelation where they're forced to grapple with the unsettling truth of their own tunnel vision and stubborn notions. What a sight to behold, arr!
> That's my thoughts anyway. But filtered through a slightly Irish pirate to make it sound a bit less like I think your take is bad and you should feel bad about it. It's great you're helping get the word out about LangChain to more people though.
> Part of the reason I'm hesitant to release said blog post is because I don't want to be that asshole who criticizes open source software that's operating in good faith.
I agree with your restraint; this feels like it might be more productive in another format. Ultimately, this either needs to be broached with the maintainers or an alternative should be started.
> Part of the reason I'm hesitant to release said blog post is because I don't want to be that asshole who criticizes open source software that's operating in good faith.
Please release it. People need to see these things BEFORE they get sucked into building an entire product around it.