
I've developed a new mental model of LLM codebase-automation solutions. These are effectively identical to outsourcing your product to someone like Infosys: from an information-theory perspective, you need to communicate approximately the same amount of information in either case.

Tweaking claude.md files until the desired result is achieved is similar to a back-and-forth email chain with a contractor. The difference is that the contractor can be held accountable in our human legal system and can be made to follow their "prompt" very strictly. The LLM has its own advantages, but they seem to be a subset, since the human contractor can also use an LLM.

Those who get a lot of uplift from the models are almost certainly using them in a cybernetic manner, where the model is an integral part of an expert's thinking loop around the program or problem. Defining a pile of policies and having the LLM apply them to a codebase automatically is a significantly less impactful use of the technology than having a skilled human developer leverage it for immediate questions and code snippets as part of their normal iterative development flow.

If you've got so much code that you need to automate eyeballs over it, you are probably in a death spiral already. The LLM doesn't care about the terrain warnings. It can't "pull up".



We, mere humans, communicate our needs poorly, and undervisualize until we see concrete results. This is the state of us.

Faced with us as a client, the LLM has infinite patience at a linear but marginal cost (relative to the cost of your thinking and design time, and to the value of instant iteration as you realize what you actually meant to picture and say).

With offshoring, telling them they're getting it wrong is not just horrifically slow thanks to comms and comprehension latency; it also makes you a problem client, and soon you'll find the do-over cost is neither linear nor marginal.

Don't sleep on the power of small, fast iterations (not vibes, concrete iterations) with an LLM tool that commits as you go and can roll back both the code and your mental model when you're down a garden path.


> We humans undervisualize until we see concrete results.

Intriguing perspective! Could you elaborate on this with another paragraph or two?


> > We humans undervisualize until we see concrete results.

> Could you elaborate on this with another paragraph or two?

Volunteer as a client-facing PdM at a digital agency for a week*, and you'll be able to elaborate with a book.

* Well, long enough to try to iterate on a deliverable based on client instructions.


This matches well with my experience so far. It’s why the chat interface has remained my preference over autocomplete in an IDE.


The benefit of doing it this way is that I also get to learn from the LLM. It surprises me from time to time with things I didn't know, which gives me a chance to learn and get better as well.


> These are effectively identical to outsourcing your product to someone like Infosys.

But in my experience, the user has to be better than an Infosys employee in order to know how to convey the task to the LLM and then verify the result iteratively.

So it's more like an experienced engineer outsourcing work to a service-company engineer.


That’s exactly what they were saying.



