
I can definitely see the value in letting AI generate low-stakes code. I'm a daily Copilot user and, while I don't let it generate implementations, the suggestions it gives for boilerplate-y things are top notch. Love it as a tool.

My major issue with your position is that, at least in my experience, good software is the sum of even the seemingly low-risk parts. When I think of real-world software that people rely on (the only type I care about in this context), it's hard to point a finger at some part of it and go "eh, this part doesn't matter". It all matters.

The alternative, I fear, is 90% of the software we use exhibiting subtle goofy behavior and just being overall unpleasant to use.

I guess an analogy for my concern is what it would look like if 60% of every film were AI-generated using the models we have today. Some might argue that 60% of any film is low-stakes scenes with simple exposition or whatever, and that the remaining 40% is the climax or other important moments. But many people believe that 100% of the film matters - even the opening credits.

And even if none of that were an issue: in my experience it's very difficult to assess which parts of an application will or won't be low or high stakes. Imagine being a tech startup that needs to pivot its focus toward the low-stakes part of the application that the LLM wrote.



I think your concept of ‘what the AI wrote’ is too large. There is zero chance my one-line Copilot or three-line Cursor tab completions are going to have an effect on the overall quality of my codebase.

What it is useful for is doing exactly the things I already know need to happen but don’t want to spend the effort to write out (at the very least, not having to do that myself is great).

Since my brain and focus aren’t killed by writing CRUD, I get to spend them on more useful stuff. If it doesn’t make me more effective, it at least makes my job more enjoyable.
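
For concreteness, here's a minimal sketch of the kind of mechanical boilerplate I mean (the User class, the field names, and the to_dict/from_dict helpers are just hypothetical examples, not from anyone's actual codebase): code whose shape is already fully determined by what's on screen, so a completion only saves the typing.

    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str
        email: str

        # The sort of line-by-line boilerplate a tab completion fills in:
        def to_dict(self) -> dict:
            return {"id": self.id, "name": self.name, "email": self.email}

        @classmethod
        def from_dict(cls, data: dict) -> "User":
            return cls(id=data["id"], name=data["name"], email=data["email"])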


I'm with you. I use Copilot every day in the way you're describing and I love it. The person I was responding to is claiming to code "hands off" and let the AI write the majority of the software.


> The alternative, I fear, is 90% of the software we use exhibiting subtle goofy behavior and just being overall unpleasant to use.

This sounds like most software, honestly.


And that's what LLMs are trained on.

Hahaha



