Important point. AKA the halo effect, and it can have a significant influence. In general, I feel that this is a more widespread problem with stories and experiences such as these - there are simply too many "hidden" variables to take them at face value. Environment, genetics, circumstances, upbringing, cognitive biases and instinctual/biological human nature all work together to create a cocktail of unique experiences, leading to unique conclusions.
I wonder whether it could be related to some kind of over-fitting, i.e. a prompting style that tends to work better with the older models, but performs worse with the newer ones.
By using AI, you learn how to use AI, not necessarily how to build architecturally sound and maintainable software. Being able to do much more in a limited amount of time will not necessarily make you a more knowledgeable programmer; at best, the knowledge gained will most likely be surface-level pattern recognition. It still needs to be combined with hands-on building of your own thing to truly understand the nuts and bolts of such projects.
If you end up with a working project where you understand all the moving parts, I think AI is great for learning, and the ultimate proof of whether the learning was successful is whether you can actually build (and ship) things.
So human teachers are good to have as well, but I remember they were of limited use for me when I was learning programming without AI. They tried to teach me so many concepts without having understood them themselves first. AI would likely have helped me get better answers than "because that is how you do it" when I asked why something should be done a certain way.
So obviously I would have preferred competent teachers all the time, and even now I would prefer competent teachers with unlimited time over faulty AIs for the students, but in reality human time is limited and humans are flawed as well. So I don't see the doomsday expectations for the new generation of programmers. The ultimate goal, building something that works to the spec, did not change, and horrible unmaintainable code was also shipped 20 years ago.
I don't agree. To me, switching from hand-coded source code to AI-coded source code is like going from a hand saw to an electric saw for your woodworking projects. In the end you still have to know woodworking, but you experiment much more, so you learn more.
Or maybe it's more like going from analog photography to digital photography. Whatever it is, you get more programming done.
Just like when you go from assembly to C to a memory-managed language like Java. I did some 6502 and 68000 assembly over 35 years ago; now nobody knows assembly.
Key words there. To you, it's an electric saw because you already know how to program, and that's the other person's point: it doesn't necessarily empower people to build software. You? Yes. Generally, though, when you hand the public an electric saw and say "have at it, build stuff", you end up with a lot of lost appendages.
Sadly, in this case the "lost appendages" are going to be man-decades of time spent undoing all the landmines vibecoders are going to plant around the digital commons. Which means AI even fails as a metaphorical "electric saw", because a good electric saw should strike fear into the user by promising mortal damage through misuse. AI has no such misuse deterrent, so people will freely misuse it until consequences swing back wildly, and the blast radius is community-scale.
> more like going from analog photography to digital photography. Whatever it is, you get more programming done.
By volume, the primary outcome of digital photography has been a deluge of pointless photographs to the extent we've had to invent new words to categorize them. "selfies". "sexts". "foodstagramming". Sure, AI will increase the actual programming being done, the same way digital photography gave us more photography art. But much more than that, AI will bring the equivalent of "foodstagramming" but for programs. Kind of like how the Apple App Store brought us some good apps, but at the same time 9 bajillion travel guides and flashlight apps. When you lower the bar you also open the flood gates.
Being able to do it more quickly and cheaply will often mean more people learn the basics. Electric tools open up woodworking to more people, and the same goes for digital photography: more people take the effort to learn the basics. There will also be many more people making rubbish, but is that really a problem?
With AI it's cheap and fast for a professional to ask: what does this rubbish software do, and can you create a more robust version following these guidelines?
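As a rough illustration of what that can look like in practice, here is a minimal sketch assuming the OpenAI Python client; the model name, project directory, and guidelines are placeholders, and any LLM API would work the same way:

```python
# Minimal sketch: feed a small codebase to an LLM and ask for an explanation
# plus a more robust rewrite. Assumes the OpenAI Python client (v1+); the
# model name, project directory, and guidelines are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Concatenate the source files of the "rubbish" project. This only works for
# small projects; larger ones need chunking or a retrieval step.
source = "\n\n".join(
    f"# {path}\n{path.read_text()}"
    for path in Path("legacy_project").rglob("*.py")
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever model you actually use
    messages=[
        {"role": "system", "content": "You are a senior software engineer."},
        {
            "role": "user",
            "content": (
                "Here is a codebase:\n\n" + source + "\n\n"
                "1. Explain what this software does.\n"
                "2. Rewrite it as a more robust version following these "
                "guidelines: validate inputs, handle errors explicitly, "
                "and keep functions small."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

For anything beyond a small project you would need to chunk the source or add a retrieval step, which is exactly where this gets harder.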
> With AI it's cheap and fast for a professional to ask: what does this rubbish software do, and can you create a more robust version following these guidelines?
This falls apart today with sufficiently complex software and also seems to require source availability (or perfect specifications).
One of the things I keep an eye out for in terms of "have LLMs actually cracked large-product complexity yet" (vs human-overseen patches or greenfield demos) is exactly that sort of re-implementation-and-improvement you talk about. Like a greenfield Photoshop substitute.
Your last point is also something that happened when the big game engines such as Unity became free to use. All of a sudden, Steam Greenlight was getting flooded with gems such as "potato peeling simulator" et al. I suppose it is just a natural side effect of making things more accessible.
> Sadly, in this case the "lost appendages" are going to be man-decades of time spent undoing all the landmines vibecoders are going to plant around the digital commons.
Aren't you being overly optimistic that these would even get traction?
Pessimistic, but yeah. It's just that my whole life has been a string of the absolute worst ideas being implemented at scale, so I don't see why this would buck the trend.
> By using AI, you learn how to use AI, not necessarily how to build architecturally sound and maintainable software
> will not necessarily make you a more knowledgeable programmer
I think we'd better start separating "building software" from programming, because the act of programming is going to continue to get less and less valuable.
I would argue that programming has been very overvalued for a while, even before AI, and the industry believes its own hype with a healthy dose of elitism mixed in.
But now AI is removing the facade and showing that the idea and the architecture are actually the important part, not the coding of it.
Ok. But most developers aren't building AI tech. Instead, they're coding a SPA or CRUD app or something else that's been done 10000 times before, but just doing it slightly differently. That's exactly why LLMs are so good at this kind of (programming) work.
I would say most people are dealing with tickets and meetings about the tickets more than they are actually spending time in their editor. The work may be similar, but that 1 percent difference needs to be nailed down right, as that's where the business lifeline lies.
I think one difference between a hammer and an LLM is that hammers have existed forever, so common sense about what they are for can be assumed. For LLMs, though, people are still discovering on a daily basis to what extent they can usefully apply them, so it's much easier to take such promises made by companies out of context if you are not knowledgeable/educated about LLMs and their limitations.
Wondering whether it subconsciously helps to have some sort of (automatic) expiration timer associated with things that end up in your "learning inbox".
I know that I tend to bookmark and save many interesting talks/videos and articles for later, but I often never end up revisiting them; information hoarding, in some sense.
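If you keep such an inbox in a file, the expiration idea is easy to try. A minimal sketch, where the file name and the 30-day retention period are arbitrary choices rather than a recommendation:

```python
# Minimal sketch of a "learning inbox" with automatic expiry: anything older
# than MAX_AGE_DAYS is silently dropped every time the inbox is loaded.
import json
import time
from pathlib import Path

INBOX = Path("learning_inbox.json")  # hypothetical storage location
MAX_AGE_DAYS = 30                    # arbitrary retention period

def load_inbox() -> list[dict]:
    if not INBOX.exists():
        return []
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    items = [i for i in json.loads(INBOX.read_text()) if i["added"] >= cutoff]
    INBOX.write_text(json.dumps(items))  # persist the pruned list
    return items

def add_item(url: str, note: str = "") -> None:
    items = load_inbox()
    items.append({"url": url, "note": note, "added": time.time()})
    INBOX.write_text(json.dumps(items))
```

Anything you never get back to within the window simply disappears, which makes letting go the default instead of a decision.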
I also don't believe in this claim, or at least in it being a generally applicable thing.
There's a point at which slowing down gives diminishing returns, in my experience. I'm a music composer and I had to learn that the hard way. Sometimes my first ideas end up being good enough already, or even better than ideas I iterated on a lot. I've also found that working too long on a piece can eventually make it worse than earlier iterations.
I find the same with photography. I notice when editing that the first photo of a scene / subject is the best photo much more frequently than chance.
Though it's also true that the last photo ends up being the keeper more often than chance would suggest. If I took 20 photos of something, I'd say I use either the first or the last photo probably 1/3 of the time.
What I liked most about .mod files is that you could actually open them directly in the tracker editors and see how they were made; it was quite fun to do that with old games that still used this format (Death Rally, DX-Ball 2, Unreal (Tournament)).
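The format is simple enough that you can peek inside one yourself. A minimal sketch of reading the title and sample names from a classic 31-sample ProTracker-style MOD (the file name is a placeholder):

```python
# Minimal sketch: print the song title and sample names of a classic
# 31-sample MOD. Layout: 20-byte title, then 31 sample headers of 30 bytes
# each (22-byte name, 2-byte big-endian length in words, finetune, volume,
# 2-byte loop start, 2-byte loop length), with the "M.K."-style signature
# at offset 1080.
import struct

def read_mod_info(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read(1084)  # title + sample table + order list + signature
    title = data[:20].rstrip(b"\x00").decode("ascii", errors="replace")
    magic = data[1080:1084].decode("ascii", errors="replace")
    print(f"title: {title!r}  signature: {magic!r}")
    for i in range(31):
        off = 20 + i * 30
        name = data[off:off + 22].rstrip(b"\x00").decode("ascii", errors="replace")
        (length_words,) = struct.unpack(">H", data[off + 22:off + 24])
        if name or length_words:
            print(f"  sample {i + 1:2d}: {name!r} ({length_words * 2} bytes)")

read_mod_info("song.mod")  # placeholder path
```

Authors often used the sample name fields to hide greetings and comments, which is part of the fun of opening old modules.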