Hacker News

There has always been a gap between the experience of solo or small-shop developers and that of developers who work on teams in large corporate environments. But thanks to open source, for the past twenty years or so we have mostly all been using the same tools.

But right now, the difference in developer experience is vast between a dev on a team at a business with corporate Copilot or Claude licenses and bosses encouraging them to maximize token usage, and a solo dev experimenting once every few months with a consumer-grade chat model.




Let’s take an extreme example.

Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn't Meta launching a lot more stuff? There would be no harm in launching at least one new thing.

What are all those people doing with the so-called productivity enhancements?

What I'm calling into question is how much generating more code matters if the bottleneck is creativity and imagination for projects.

The only thing I've seen is a really crummy Meta AI thing implemented within WhatsApp.


It's allowed a sludge of internal tools to spin up, and more bloat. The ability to sandbag and overbuild these tools has gotten 2-10x worse.

The only solution I can think of is to drastically cut headcount so that productivity returns to prior levels and profitability rises. Big Tech is mostly market-constrained, with not much room to grow beyond the market itself growing.

As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves.


I'm convinced the most scarce skill on the planet is the ability to (a) envision something that needs to exist in the world and (b) explain how that thing creates value from a financial perspective.

Most people tend to think they know what they are talking about (e.g. a surface-level understanding of how to think economically) and end up making basket-case decisions, only realising it months later. By that point they will fail to admit defeat and keep going.

"As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves."

You mean like openclaw? lol


In short, the bottlenecks moved.

What I see in my backyard: coding now takes significantly less time, but it's just coding. Before anyone gets to building, there are squabbles between business and product people. Testing takes just as long as it used to. And since nice-to-haves are easy to add and product people begin to take them for granted, the product cycles don't get shorter.

Give it time. Right now it's just coding, but procedural AI will come for product development, then architecture, and then whatever is left of management.


Absolute delusion.

The best people can not only envision products but also possess great judgement without needing data. For AI to even come close, it would need an insane amount of nuanced, subtle data, and by the time the AI has obtained all the necessary data and made sense of it, the human is long gone, working on something else.


But these people will age out and juniors do not get hired. “Good judgment comes from experience, and experience comes from bad judgment.” and all that.

Are LLMs going to invent their own languages that no average programmer will understand? As in, "I don't need your C++, human. I will rewrite your fart app in ClaudASM and you will like it." These are naive questions, but I can't visualize how all of this will unfold.


Forgive my ignorance, but what exactly is the vast difference? Who's doing more of what, or whatever you're implying? And how do you quantify this?

The people who use AI to the maximum learn more.

A neutral hobbyist on a $20 budget will build something and immediately bump into quotas. It's not going to be an enjoyable experience.

A negatively predisposed pro who only dabbles in AI gets to the first disappointment, smiles, thinks "yeah, about what I expected," and quits.

To learn these new tools, one needs to not be stingy. Invest as much as needed in tokens and subscriptions, and, maybe most importantly, invest the time. Spend time building various things. Try out various models, not just for coding but as part of the apps being built. For bonus points, meaningfully experiment with local models. I try to avoid discussions with sceptics who have not put at least a few months of effort into learning these tools. It's like discussing driving with my mother-in-law, who has spent maybe 20 hours behind the wheel in her whole life (and is very, very opinionated!).
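To make "try out various models, including local ones" concrete, here is a minimal sketch of comparing several models through an OpenAI-compatible chat endpoint, such as a local Ollama server. The endpoint URL and model names are assumptions for illustration; substitute whatever you actually run.

```python
# Sketch: send the same prompt to several models behind an
# OpenAI-compatible chat endpoint. The URL below assumes a local
# Ollama server; model names are examples, not guarantees.
import json
import urllib.request

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # hypothetical local server


def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the given model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def ask(model: str, prompt: str) -> str:
    """POST the prompt to the endpoint and return the reply text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage would be a loop like `for m in ("llama3.2", "qwen2.5-coder"): print(m, ask(m, prompt))`, which is the cheapest way to build an intuition for how models differ before wiring one into an app.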


And it's not a waste of money because?

You'd have learned something new. Useful or not, a thorough understanding of a new thing is rewarding.

Also, it's not primarily about money; the real investment here is time.


In my opinion it's a complete waste of time and money to learn something that is gated by a company that might disappear tomorrow.

It's akin to company courses that teach something specific to that company. Of course you do them on the job; there is no point in doing them if you don't work there.

Similarly, what's the point of trying 300 different models if any employer will decide for you which ones are approved, and you are liable to be fired and asked for damages if you let anything else access company intellectual property?


The difference is (if you'll forgive me recruiting a couple of straw men for the purpose of illustrating the spectrum we are talking about here):

Hobbyist solo dev, counting tokens, hitting quotas, trying things on little projects, giving up and not seeing what the fuss is about.

vs

Corporate developer, increasingly held accountable by their boss for hitting metrics for token usage; being handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams all of whom have access to those same tools.


Okay, so just to be clear, you're not commenting on productivity? Or what does "changes that impact" mean?

I might be missing a lot of self-evident assumptions here but I feel like I'm still missing so much context and have no idea what this difference is actually describing.


If you have some objective measure of productivity in mind, feel free to share it, but no that's not what I'm commenting on.

I'm talking more about why threads like this seem to be full of people saying "this has completely changed how corporate development works" and other people saying "I tried it a few times and I don't get the hype."





