I like coding. I've been doing it for a couple of decades. I disagree with the "managing scrum" metaphor. Sure, you can use LLMs that way. And there is some truth to the idea that it can feel more like writing detailed Jira tickets than actually programming if you are trying to have an LLM make huge changes... BUT coding with LLMs is really just a higher-level abstraction. And the good news is that LLMs are more deterministic than they seem, and that determinism is a lot of what people fear losing by giving LLMs "the reins".
One nice thing about programming is that the computer is a dumb funnel from your human brain to encoded actions. If the program doesn't work, you did something wrong: missed a semicolon, etc. This still applies to LLMs. If the LLM gave you shit output, it's your fault: you gave it shit input. If the model you used was the wrong one, you can still get good results; you just have to put in more work, or break the problem down further first. But it's still programming. If you treat LLMs that way, and start applying your normal programming approaches to your prompts, you can find a workflow that satisfies your demands.
But even if you only use LLMs to rubber duck a problem, literally typing "The program is doing X, Y, Z but it's only supposed to do Z. I already looked at A, B, C, and I have a theory that G is causing the issue, but I'm not seeing anything wrong there." and pasting that into the chat might surprise you with what it turns up. And that's a fine use case!
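That rubber-duck prompt has a reusable shape: what you observed, what you expected, what you've ruled out, and your current theory. A minimal sketch of that template as a helper function (the function name and fields are my own illustration, not anything from a real library):

```python
def rubber_duck_prompt(observed: str, expected: str, checked: str, theory: str) -> str:
    """Format a debugging question the way you'd explain it to a colleague.

    Forcing yourself to fill in every field is half the value: you often
    spot the bug while writing the prompt, before the LLM ever sees it.
    """
    return (
        f"The program is doing {observed}, but it's only supposed to do {expected}. "
        f"I already looked at {checked}. "
        f"I have a theory that {theory}, but I'm not seeing anything wrong there. "
        "What else could explain this behavior?"
    )

# Fill in the blanks from the example above and paste the result into a chat.
prompt = rubber_duck_prompt(
    observed="X, Y, and Z",
    expected="Z",
    checked="A, B, and C",
    theory="G is causing the issue",
)
print(prompt)
```

The point isn't the function; it's that a structured, complete problem statement is the same discipline you'd apply to any bug report.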
LLMs are broadly useful, and there are certainly elements of programming that are the "shit-shoveling" parts for you. From debugging to writing tests to planning, or even re-writing Jira tickets, LLMs can help at different levels and in different ways. I think prescriptive calls to "code this way with LLMs" are shortsighted. If you are determined to write each line yourself, go for it. But like refusing to learn IDE shortcuts or pick up a new tool or language, you are simply cutting yourself off from technological progress for short-term comfort.
The best part of programming to me is that it is always changing and growing and you are always learning. Forget the "AI eating the world" nonsense and treat LLMs as just another tool in your toolkit.