> We used to have a training ground for junior engineers, but now AI is increasingly automating away that work. Both studies I referenced above cited the same thing - AI is getting good at automating junior work while only augmenting senior work. So the evidence doesn’t show that AI is going to replace everyone; it’s just removing the apprenticeship ladder.
Was having a discussion the other day with someone, and we came to the same conclusion. You used to be able to make yourself useful by doing the easy / annoying tasks that had to be done, but more senior people didn't want to waste time dealing with. In exchange you got on-the-job experience, until you were able to handle more complex tasks and grow your skill set. AI means that those 'easy' tasks can be automated away, so there's less immediate value in hiring a new grad.
I feel the effects of this are going to take a while to be felt (5 years?); mid-level -> senior-level transitions will leave a hole behind that can't be filled internally. It's almost like the aftermath of a war killing off 18-30 year olds leaving a demographic hole, or the effect of covid on education for certain age ranges.
Adding to this: it's not just that the apprenticeship ladder is gone—it's that nobody wants to deal with juniors who spit out AI code they don't really understand.
In the past, a junior would write bad code and you'd work with them to make it better. Now I just assume they're taking my feedback and feeding it right back to the LLM. Ends up taking more of my time than if I'd done it myself. The whole mentorship thing breaks down when you're basically collaborating with a model through a proxy.
I think highly motivated juniors who actually want to learn are still valuable. But it's hard to get past "why bother mentoring when I could just use AI directly?"
I don't have answers here. Just thinking maybe we're not seeing the end of software engineering for those of us already in it—but the door might be closing for anyone trying to come up behind us.
> Now I just assume they're taking my feedback and feeding it right back to the LLM.
This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."
Part of the challenge (and I don't have an answer either) is there are some juniors who use AI to assist... and some who use it to delegate all of their work to.
It is especially frustrating that the second group doesn't become much more than a proxy for an LLM.
New juniors can progress in software engineering - but they have to take the road of disciplined use of AI and make sure that they're learning the material rather than delegating all their work to it... and that delegating work is very tempting... especially if that's what they did in college.
I must ask once again why we are having these 5+ round interview cycles and we aren't able to filter for qualities that the work requires of its talent. What are all those rounds for if we're getting engineers who aren't as valued for the team's needs at the end of the pipeline?
> I must ask once again why we are having these 5+ round interview cycles and we aren't able to filter for qualities that the work requires of its talent.
Hiring well is hard, specially if compensation isn't competitive enough to attract talented individuals who have a choice. It's also hard to change institutional hiring practices. People don't get fired by buying IBM, and they also don't get fired if they follow the same hiring practices in place in 2016.
> What are all those rounds for if we're getting engineers who aren't as valued for the team's needs at the end of the pipeline?
Software development is a multidiscinary field. It involves multiple non-overlapping skill sets, bot hard skills and soft skills. Also, you need multiple people vetting a candidate to eliminate corruption and help weed out candidates who outright clash with company culture. You need to understand that hiring someone is a disruptive activity, that impacts not only what skill sets are available in your organization but also how the current team dynamics. If you read around, you'll stumble upon stories of people who switch roles in reaction to new arrivals. It's important to get this sort of stuff right.
Well I'm still waiting. Your second paragraph seems to contradict the first. Which perfectly encapsulates the issue with hiring. Too afraid to try new things, so instead add beuracracy to leases accountability.
> Well I'm still waiting. Your second paragraph seems to contradict the first. Which perfectly encapsulates the issue with hiring. Too afraid to try new things, so instead add beuracracy to leases accountability.
I think you haven't spend much time thinking about the issue. Changing hiring practices does not mean they are improve. It only means they changed. You are still faced with the task of hiring adequate talent, but if you change processes them now you don't have baselines and past experiences to guide you. You keep those baselines if you keep your hiring practices then you stick with something that is proven to work albeit with debatable optimality, and mitigate risks because your experience with the process helps you be aware of some red flags. The worst case scenario is that you repeat old errors, but those will be systematic errors which are downplayed by the fact that your whole organization is proof that your hiring practices are effective.
>Changing hiring practices does not mean they are improve.
No, but I'd like to at least see conversation on how to improve the process. We aren't even at that point. We're just barely past acknowledging that it's even an issue.
>but if you change processes them now you don't have baselines and past experiences to guide you.
I argue we're already at this point. The reason we got past the above point of "acknowledging problem" (a decade too late, arguably) is that the baselines are failing to new technology, which is increasing false positives.
You have a point, but why does tech pick this point to finally decide not to "move fast and break things"? Not when it comes to law and ethics, but for aquiring new talent (which meanwhile is already disrupting heir teams with this AI slop?)
>those will be systematic errors which are downplayed by the fact that your whole organization is proof that your hiring practices are effective.
okay, so back to step zero then. Do we have a hiring problem? The thesis of this article says yes.
"it worked before" seems to be the antipattern the tech industry tried to fight back against for decades.
> No, but I'd like to at least see conversation on how to improve the process. We aren't even at that point. We're just barely past acknowledging that it's even an issue.
The current hiring practices are a result of acknowledging what they did before didn't work. The current ones work well enough that people don't wanna change it, the only ones who wanna change it are engineers not the companies.
Nit (not directed at you) : I don't appreciate being flagged for pointing out the exact issue of the article and someone just dismissing it as "well companies are making money, clearly it's not a crisis"
This goes beyond destructive thinking. Again, I hope the companies reap what they sow.
There's no fix for this problem in hiring upfront. Anyone can cram and fake if they expect a gravy train on the other end. If you want people to work after they're hired, you have to be able to give direct negative feedback, and if that doesn't work, fire quickly and easily.
>Anyone can cram and fake if they expect a gravy train on the other end.
If you're still asking trvia, yes. Maybe it's time to shift from the old filter and update the process?
If you can see in the job that a 30 minute PR is the problem, then maybe replace that 3rd leetcode round with 30 minutes of pair programming. Hard to chatGPT in real time without sounding suspicion.
That approach to interviewing will cause a lot of false negatives. Many developers, especially juniors, get anxious when thrown into a pair programming task with someone they don't know and will perform badly regardless of their actual skills.
I understand that and had some hard anxiety myself back then. Even these days I may be a bit shakey when love coding in an interview setting?
But is the false negative for a nervous pair programmer worse than a false positive for a leetcode question? Ideally a good interviewer would be able to separate the anxiety from the actual thinking and see that this person can actually think, but that's another undervalued skill among industry.
I don’t know why people are so hesitant to just fire bad people. It’s pretty obvious when someone starts actually working if they’re going to a net positive. On the order of weeks, not months.
Given how much these orgs pay, both direct to head hunters and indirect in interview time, might as well probationally hire the whoever passes the initial sniff test.
That also lets you evaluate longer term habits like punctuality, irritability, and overall not-being-a-jerkness.
Not so fast. I "saved" guys from being fired by asking to be more patient with them. The last one was not in my team as I moved out to lead another team. Turned out the guy did not please an influencial team member, who then complained about him.
What I saw instead was a young silent guy, given boring work and was longing for more interesting work. A tad later he took ownership of a neglected project, completed it and made a name of himself.
It takes considerably more effort and skill to treat colleagues as humans rather than "outputs" or ticket processing nodes.
Most (middle) management is an exercise in ass-covering, rather than creating healthy teams. They get easily scared when "Jira isn't green", and look someone else to blame for not doing the managing part correctly
Sunk cost. You've spent... 20 to 100 hours on interviews. Maybe more. Doing it again is another expense.
Onboarding. Even with good employees, it can take a few months to get the flow of the organization, understanding the code base, and understanding the domain. Maybe a bit of technology shift too. Firing a person who doesn't appear to be preforming in the first week or two or three would be churning through that too fast.
Provisional hiring with "maybe we'll hire you after you move here and work for us for a month" is a non-starter for many candidates.
At my current job and the job previous it took two or three weeks to get things fully set up. Be it equipment, provisioning permissions, accounts, training (the retail company I worked at from '10 to '14 - they sent every new hire out to a retail store to learn about how the store runs (to get a better idea of how to build things for them and support their processes).
... and not every company pays Big Tech compensation. Sometimes it's "this is the only person who didn't say «I've got an offer with someone else that pays 50% more»". Sometimes a warm body that you can delegate QA testing and pager duty to (rather than software development tasks) is still a warm body.
It's really not obvious to calculate the output of any employee even with years of data, way harder for a software engineer or any other job with that many facets. If you've found a proven and reliable way evaluate someone in the first 2 weeks you just solved one of the biggest HR problems ever.
What if, and hear me out, we asked the people a new employee has been onboarding with? I know, trusting people to make a fair judgment lacks the ass-covering desired by most legal departments but actually listening to the people who have to work with a new hire is an idea so crazy it might just work.
> I don’t know why people are so hesitant to just fire bad people.
"Bad" is vague, subjective moralist judgement. It's also easily manipulated and distorted to justify firing competent people who did no wrong.
> It’s pretty obvious when someone starts actually working if they’re going to a net positive. On the order of weeks, not months.
I feel your opinion is rather simplistic and ungrounded. Only the most egregious cases are rendered apparent in a few weeks worth of work. In software engineering positions, you don't have the chance to let your talents shine through in the span of a few weeks. The cases where incompetence is rendered obvious in the span of a few weeks actually spells gross failures in the whole hiring process, which failed to verify that the candidate failed to even meet the hiring bar.
> (...) might as well probationally hire the whoever passes the initial sniff test.
This is a colossal mistake, and one which disrupts a company's operations and the candidates' lives. Moreover, it has a chilling effect on the whole workforce because no one wants to work for a company ran by sociopaths that toy with people's lives and livelihood as if it was nothing.
The bar for “junior” has quietly turned into “mid-level with 3 years of production experience, a couple of open-source contributions, and perfect LeetCode” while still paying junior money. Companies list “0-2 years” but then grill candidates on system design, distributed tracing, and k8s internals like they’re hiring for staff roles. No wonder the pipeline looks broken.
I’ve interviewed dozens of actual juniors in the last six months. Most can ship features, write clean code, and learn fast, but they get rejected for not knowing the exact failure modes of Raft or how to tune JVM garbage collection on day one. The same companies then complain they “can’t find talent” and keep raising the bar instead of actually training people.
Real junior hiring used to mean taking someone raw, pairing them heavily for six months, and turning them into a solid mid. Now the default is “we’ll only hire someone who needs zero ramp-up” and then wonder why the market feels empty.
It's the cargo cult kayfabe of it all. People do it because Google used to do it, now it's just spread like a folk religion. But nobody wants guilds or licensure, so we have to make everyone do a week-long take-home and then FizzBuzz in front of a very awkward committee. Might as well just read chicken bones, at least that would be less humiliating.
And who would write the guild membership or licensure criteria? How much should those focus on ReactJS versus validation criteria for cruise missile flight control software?
You’re asking these rhetorical questions as if we haven’t had centuries of precedent here, both bad and good. How does the AMA balance between neurosurgeons and optometrists? Bar associations between corporate litigators and family estate lawyers? Professional engineering associations between civil engineers and chemical engineers?
> Professional engineering associations between civil engineers and chemical engineers?
One takes the FE exam ( https://ncees.org/exams/fe-exam/ ). You will note at the bottom of the page "FE Chemical" and "FE Civil" which are two different exams.
Then you have an apprenticeship for four years as an Engineer in Training (EIT).
Following, that, you take the PE exam. https://ncees.org/exams/pe-exam/ You will note that the PE exams are even more specialized to the field.
Depending on the state you are licensed in (states tend to have reciprocal licensing - but not necessarily and not necessarily for all fields). For example, if you were licensed in Washington, you would need to pass another exam specific to California to work for a California firm.
You have to take 30 hours of certified study in your field across every two years. This isn't a lot, but people tend to fuss about "why do CS people keep being expected to learn on our own?" ... Well, if we were Professional Engineers it wouldn't just be an expectation - it would be a requirement to maintain the license. You will again note the domain of the professional development is different - so civil and mechanical engineers aren't necessarily taking the same types of classes.
These requirements are set by the state licensure and part of legislative processes.
So what you’re saying is that it’s a solved problem. If we can figure out how to safely certify both bridge builders and chemical engineers working with explosives, we can figure out a way to certify both React developers and those working on cruise missile flight control software.
I'm saying the idea that you can do one test for software engineering and never have to study again or be tested on a different domain in the future isn't something that professional engineering licensure solves.
Furthermore, licensure requires state level legislation and makes it harder for employees (especially the EIT) to change jobs or move to other states for work there.
Licensure, the way that people often point to it as a way to solve the credentials problem vs interviews, isn't going to solve the problems that people think it would.
Furthermore, it is only something if there is a reason to do it. If there isn't a reason to have a licensed engineer signing off on designs and code there isn't a reason for a company to hire such.
Why should a company pay more for someone with a license to design their website when they could hire someone more cheaply who doesn't have a license? What penalties would a company have for having a secretary do some vbscripting in excel or a manager use Access rather than hiring a licensed developer?
Guilds and licensure perform gatekeeping, by definition, and the more useful they are at providing a good hiring signal, the more people get filtered out by the gatekeeping. So there's no support for it because everyone is afraid that effective guilds or licensing would leave them out in the cold.
Yeah, I'd be more than fine with licensing if I didn't have to keep going through 5 rounds of trivia only to be ghosted. Let me do that once and show I can code my way out of a paper bag.
I can understand such process for freshman, but for industry veteran with 10+ years of experience, with with recommendation from multiple senior managers?
I’ve started using ChatGPT for their take home projects, with only minor edits or refactors myself. If they’re upset I saved a couple hours of tedium, they’re the wrong employer for me.
And I’m being an accelerationist hoping the whole thing collapses under its own ridiculousness.
> there are some juniors who use AI to assist... and some who use it to delegate all of their work to.
Hmmm. Is there any way to distinguish between these two categories? Because I agree, if someone is delegating all their work to an LLM or similar tool, cut out the middleman. Same as if someone just copy/pasted from Stackoverflow 5 years ago.
I think it is also important to think about incentives. What incentive does the newer developer have to understand the LLM output? There's the long term incentive, but is there a short term one?
Dealing with an intern at work who I suspect is doing exactly this, I discussed this with a colleague. One way seems to be to organize a face to face meeting where you test their problem solving skills without AI use, the other may be to question them about their thought process as you review a PR.
Unfortunately, the use of LLMs has brought about a lot of mistrust in the workplace. Earlier you’d simply assume that a junior making mistakes is simply part of being a junior and can be coached; whereas nowadays said junior may not be willing to take your advice as they see it as sermonizing when an “easy” process to get “acceptable” results exists.
The intern is not producing code that is up to the standard you expect, and will not change it?
I saw a situation like this many years ago. The newly hired midlevel engineer thought he was smarter than the supervisor. Kept on arguing about code style, system design etc. He was fired after 6 months.
But I was friendly with him, so we kept in touch. He ended up working at MSFT for 3 times the salary.
> Earlier you’d simply assume that a junior making mistakes is simply part of being a junior and can be coached; whereas nowadays said junior may not be willing to take your advice
Hot take: This reads like an old person looking down upon young people. Can you explain why it isn't? Else, this reads like: "When I was young, we worked hard and listened to our elders. These days, young people ignore our advice." Every time I see inter-generational commentary like this (which is inevitably from personal experience), I am immediately suspicious. I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way. Why would this be any different in the current generation? In my experience, it isn't.
On a positive note: I can remember mentoring some young people and watching them comb through blogs to learn about programming. I am so old that my shelf is/was full of O'Reilly books. By the time I was mentoring them, few people under 25 were reading O'Reilly books. It opened my eyes that how people changes more than what people learn. Example: Someone is trying to learning about access control modifiers for classes/methods in a programming language. Old days: Get the O'Reilly book for that programming language. Lookup access modifiers in the index. 10 year ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.
> Old days: Get the O'Reilly book for that programming language. Lookup access modifiers in the index. 10 year ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT.
The answer to this (throughout the ages) should be the same: read the authoritative source of information. The official API docs, the official language specification, the man page, the textbook, the published paper, and so on.
Maybe I am showing my age, but one of the more frustrating parts of being a senior mentoring a junior is when they come with a question or problem, and when I ask: “what does the official documentation say?” I get a blank stare. We have moved from consulting the primary source of information to using secondary sources (like O’Reilly, blogs and tutorials), now to tertiary sources like LLMs.
[Disclaimer: I'm a Gen Xer. Insert meme of Grandpa Simpson shouting at clouds.]
I think this is undoubtedly true from my observations. Recently, I got together over drinks with a group of young devs (most around half my age) from another country I was visiting.
One of the things I said, very casually, was, "Hey, don't sleep on good programming books. O'Reilly. Wiley. Addison-Wesley. MIT Press. No Starch Press. Stuff like that."
Well, you should've seen the looks on their faces. It was obvious that advice went over very poorly. "Ha, read books? That's hard. We'd rather just watch a YouTube video about how to make a JS dropdown menu."
So yeah, I get that "showing my age" remark. Used to be the discipline in this industry is that you shouldn't ask a question of a senior before you'd read the documentation. If you had read the documentation, man pages, googled, etc., and still couldn't come up with an answer, then you could legitimately ask for a senior mentor's time. Otherwise, the answer from the greybeards would have been "Get out of my face, kid. Go RTFM."
That system that used to exist is totally broken now. When reading and understanding technical documentation is viewed as "old school", then you know we have a big problem.
I like your sentiment about "first principles" of documents -- go to the root source. But for most young technologists (myself included, long long ago), the official docs (man pages for POSIX, MSDN for Win32 etc.) are way too complex. For years, when I was in university, I tried to grasp GUI programming by writing C and using the Win32 API. It was insane, and I did little more than type in code from a "big book of Win32 programming". Only when I finally tried Qt with C++ did the door of understanding finally open. Why? It was the number of simple examples that Qt docs provided they really helped me understand GUI (event-driven) programming. Another 10 years went by when I knew enough about Win32 that I was able to write small, but useful GUIs in pure C using the Win32 API. The very reason that StackOverflow was so popular: People read the official docs and still don't understand... so they ask a question. The best questions include a snip of code and ask about it.
To this day, I normally search on Google first, then try an LLM... the last place that I look is the official docs if my question is about POSIX or Win32. They are just too complex and require too much base knowledge about the ecosystem. As an interesting aside, when I first learned Python, Java, and C#, I thought their docs were as approachable as Qt. It was very easy to get started with "console" programming and later expand to GUI programming.
Despite my pro-documentation comment above, I think there is a legit criticism that a lot of official documentation is a mess. Take man pages, for instance. I don't think it's a good look for greybeards to say "just go read the man page, kid." Many of those man pages are so out of date. You can't legitimately adopt a position of smug superiority by pointing juniors to outdated docs.
If I have a problem with a USB datastream, the last place I'm going to look is the official USB spec. I'll be buried for weeks. The information may be there, but it will take me so long to find it that it might as well not.
The first place to look is a high quality source that has digested the official spec and regurgitated it into something more comprehensible.
[shudder] the amount of life that I've wasted discussing the meaning of some random phrase in IEC-62304 is time I will never get back!
> I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way.
Hot take: This reads like a person who was difficult to work with.
Senior people have responsibility, therefore in a business situation they have authority. Junior people who think they know it all don't like this. If there's a disagreement between a senior person and a junior person about something, they should, of course, listen to each other respectfully. If that's not happening, then one of them is not being a good employee. But if they are, then the supervisor makes the final call.
> Old days: Get the O'Reilly book for that programming language. Lookup access modifiers in the index. 10 year ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.
The tangent to that is it is also changing with the how much one internalizes about the problem domain and is able to apply that knowledge later. Hard fought knowledge from the old days is something that shapes how I design systems today.
However, the tendency of people who reach for ChatGPT today to solve a problem results in them making the same mistakes again the next time since the information is so easy to access. It also results in things that are larger are more difficult... the "how do you architect this larger system" is something you learn by building the smaller systems and learning about them so that their advantages and disadvantages and how and such becomes an inherent part of how you conceive of the system as a whole. ... Being able to have ChatGPT do it means people often don't think about the larger problem or how it fits together.
I believe that is harder for a junior who is using ChatGPT to advance to being a mid level or senior developer than it is for a junior from the old days because of the lack of retention of the knowledge of the problems and solutions.
Yeah Ive got to agree with this hot take. Put yourself in the junior's shoes: if s/he wasn't there you'd be pulling it out of Claude Code yourself, until your satisfied with what comes out enough to start adding your "senior" touches. The fact is the way code is written has changed fundamentally, especially for kids straight out of college, and the answer is to embrace that everyone is using it, not all this shaming. If you're so senior, why not show the kid how to use the LLM right, so the work product is right from the start? It seems part of the problem is dinosaurs are suspicious of the tech, and so dont know how to mentor for it.
That being said, Im a machine learning engineer not a developer, and these LLMs have been a godsend. Assuming I do it correctly, there's just no way I could write a whole 10,000 line pipeline in under a week without it. While coding from outputs and error-driven is the wrong way for software Juniors, its fine by me for my AI work. It comes down to knowing when there's a silent error, if you haven't been through everything line by line. I've been caught before, Im not immune, its embarrassing, but every since GPT was in preview I have made it my business to master it.
I have a friend who is a dev, a very senior one at that, who spins up 4 Claudes at once and does the whole enterprises work. Hes a "Senior AI Director" with nobody beneath him, not a single direct report, and NO knowledge of AI or ML, to my chagrin.
This isn’t a question of the senior teaching the junior how to use the LLM correctly.
Once you’re a senior you can exercise judgement on when/how to use LLMs.
When you’re a junior you haven’t developed that judgement yet. That judgement comes from consulting documentation, actually writing code by hand, seeing how you can write a small program just fine, but noticing that some things need to change when the code gets a lot bigger.
A junior without judgement isn’t very valuable unless he/she is working hard to develop that judgement. Passing assignments through to the LLM does not build judgement, so it’s not a winning strategy.
There are some definite signs of over reliance on AI. From emojis in comments, to updates completely unrelated to the task at hand, if you ask "why did you make this change?", you'll typically get no answer.
I don't mind if AI is used as a tool, but the output needs to be vetted.
What is wrong with emojis in comments? I see no issue with it. Do I do it myself? No. Would I pushback if a young person added emojis to comments? No. I am looking at "the content, not the colour".
I think GP may be thinking that emojis in PR comments (plus the other red flags they mentioned) are the result of copy/paste from LLM output, which might imply that the person who does mindless copy/pasting is not adding anything and could be replaced by LLM automation.
Just like anything, anyone who did the work themself should be able to speak intelligently about the work and the decisions behind its idiosyncrasies.
For software, I can imagine a process where junior developers create a PR and then run through it with another engineer side by side. The short-term incentive would be that they can do it, else they'd get exposed.
Is/was copy/pasting from Stackoverflow considered harmful? You have a problem, you do a web search and you find someone who asked the same question on SO, and there's often a solution.
You might be specifically talking about people who copy/paste without understanding, but I think it's still OK-ish to do that, since you can't make an entire [whatever you're coding up] by copy/pasting snippets from SO like you're cutting words out of a magazine for a ransom note. There's still thought involved, so it's more like training wheels that you eventually outgrow as you get more understanding.
Pair programming! Get hands-on with your junior engineers and their development process. Push them to think through things and not just ask the LLM everything.
I've seen some overly excessive pair programming initiatives out there, but it does baffle me why less people who struggle with this do it. Take even just 30 minutes to pair program on a problem and see their process and you can reveal so much.
But I suppose my question is rhetorical. We're laying off hundreds of thousands of engineers and maming existing ones do the work of 3-4 engineers. Not much time to help the juniors.
having dealt with a few people who just copy/pasted Stackoverflow I really feel that using an LLM is an improvement.
That is at least for the people who don't understand what they're doing, the LLM tends to come out with something I can at least turn into something useful.
It might be reversed though for people who know what they're doing. IF they know what they're doing they might theoretically be able to put together some stackoverflow results that make sense, and build something up from that better than what gets generated from LLM (I am not asserting this would happen, and thinking it might be the case)
However I don't know as I've never known anyone who knew what they were doing who also just copy/pasted some stackoverflow or delegated to LLM significantly.
> This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."
I've learnt that saying this exact phrase does wonders when it comes to advancing your career. I used to argue against stupid ideas but not only did I achieve nothing, but I was also labelled uncooperative and technically incompetent. Then I became a "yes-man" and all problems went away.
I was attempting to mock Claude's "You are absolutely right" style of response when corrected.
I have seen responses to PRs that appear to be a copy and paste of my feedback into it and a copy and paste of the response and fixes into the PR.
It may be the that the developer is incorporating the mannerisms of Claude into their own speech... that would be something to delve into (that was intentional). However, more often than not in today's world of software development such responses are more likely to indicate a copy and paste of LLM generated content.
> However, more often than not in today's world of software development such responses are more likely to indicate a copy and paste of LLM generated content.
This is nothing new. People rarely have independent thoughts, usually they just parrot whatever they've been told to parrot. LLMs created common world-wide standard on this parroting, which makes the phenomenon more evident, but it doesn't change the fact that it existed before LLMs.
Have you ever had a conversation with an intelligent person and thought "wow that's refreshing"? Yeah. There's a reason why it feels so good.
This. May you have great success! My PR comments that I get are so dumb. I can put the most obvious bugs in my code, but people are focused in the colour of the bike shed. I am happy to repaint the bike shed whatever colour they need it to be!
> Part of the challenge (and I don't have an answer either) is there are some juniors who use AI to assist... and some who use it to delegate all of their work to.
This is not limited to junior devs. I had the displeasure of working with a guy who was hired as a senior dev who heavily delegated any work they did. He failed to even do the faintest review of what the coding agent and of course did zero testing. At one time these stunts resulted in a major incident where one of these glorious PRs pushed code that completely inverted a key business rule and resulted in paying customers being denied access to a paid product.
Sometimes people are slackers with little to no ownership or pride in their craftsmanship, and just stumbled upon a career path they are not very good at. They start at juniors but they can idle long enough to waddle their way to senior positions. This is not a LLM problem, or caused by it.
I get that. I think that getting to know juniors outside of work, at a recurring meetup or event, in a setting where you can suss out their motivation level and teachability level, is _a_ way of going about it. That way, if your team is hiring juniors, you have people you have already vetted at the ready.
IMO teachability/curiosity is ultimately orthogonal to the more base question of money-motivation.
In a previous role I was a principal IC trying to mentor someone who had somehow been promoted up to senior but was still regularly turning in code for review that I wouldn't have expected from an intern— it was an exhausting, mind-numbing process trying to develop some sense of engineering taste in this person, and all of this was before LLMs. This person was definitely not just there for the money; they really looked up to the top-level engineers at our org and aspired to be be there, but everything just came across as extremely shallow, like engineering cosplay: every design review or bit of feedback was soundbites from a how-to-code TED talk or something. Lots of regurgitated phrases about writing code to be "maintainable" or "elegant" but no in-the-bones feeling about what any of that actually meant.
Anyway, I think a person like this is probably maximally susceptible to the fawning ego-strokes that an AI companion delivers alongside its suggestions; I think I ultimately fear that combination more than I fear a straight up mercenary for whom it's a clear transaction of money -> code.
I had one fairly-junior teammate at Google (had been promoted once) who was a competent engineer but just refused to make any choices about what to work on. I was his TL and I gave him a choice of 3 different parts of the system to work on, and I was planning to be building the other two. He got his work done adequately, but his lack of interest / curiosity meant that he never really got to know how the rest of the system operated, and got frustrated when he didn't advance further in his career.
Very odd. It was like he only had ever worked on school projects assigned to him, and had no actual interest in exploring the problems we were working on.
In my experience, curiosity is the #1 predictor of the kind of passionate, high-level engineer that I'm most interested in working with. And it's generally not that hard to evaluate this in a free-form interview context where you listen to how a person talks about their past projects, how they learn a new system or advocated/onboarded a tool at their company.
But it can be tricky to evaluate this in the kind of structured, disciplined way that big-company HR departments like to see, where all interviewees get a consistent set of questions and are "scored" on their responses according to a fixed rubric.
That does not even sounds like a problem? Like when people are that picky about what exact personality the junior musr have that good work is not enough ... then there is something wrong with us.
When presenting the three projects, I gave pros and cons about each one, like "you'll get to learn this new piece of technology" or "a lot of people will be happy if we can get this working". Absolutely no reaction, just "I don't care, pick one".
This guy claimed to want to get promoted to Senior, but didn't do anything Senior-shaped. If you're going to own a component of a system, I should be able to ask you intelligent questions about how you might evolve it, and you should be able to tell me why someone cares about it.
I am honestly totally fine with person like that. Sounds like someone easy to work with. I dunno, not having preference between working on three parts of the system is not abnormal. Most people choose randomly anyway.
>not having preference between working on three parts of the system is not abnormal.
I suppose it depends on the team and industry. This would be unheard of behavior for games, for example. Why you taking a pay cut and likely working more hours to just say "I don't know, whatever works?". You'd ideally be working towards some sort of goal. Management, domain knowledge, just begin able to solve hard problems.
Yea a lot software developers I’ve worked with, across the full spectrum of skill levels, didn’t have a strong preference about what code they were writing. If there is a preference, it’s usually the parts they’ve already worked on, because they’re already ramped up. Strong desire to work on a specific piece of the code (or to not work on one) might even in some cases be a red flag.
What I'm talking about is like asking "do you want a turkey sandwich or a ham sandwich" and getting the response "I don't care" - about everything. Pick something! Make a choice! Take some ownership of the work you're doing!
I didn’t say anything about career direction. I’m talking about what project or part of the project. I have worked with developers who insist that they only want to work on this very narrow section of the code, and won’t consider branching out somewhere else, and that kind of attitude often comes from people who are difficult in other ways to work with.
>Strong desire to work on a specific piece of the code (or to not work on one) might even in some cases be a red flag.
I understand an engineer should compromise. But if you want to specialize in high performance computing and you're pigeonholed into 6 months of front end web, I can understand the frustration. They need to consider their career too. It's too easy for the manager to ignore you of you don't stand up for yourself. Some even count on it and plan around the turnover.
Of course, if they want nothing other than kernel programming as a junior and you simply need some easy but important work done for a month, it can be unreasonable. There needs to be a balance as a team.
I don't think it's beyond the call of duty to expect someone to acquire context beyond their immediate assignments, especially if they have ambitions to advance. It's kind of a key prerequisite to the kind of bigger-picture thinking that says "hey I noticed my component is duplicating some functionality that's over there, maybe there's an opportunity to harmonize these, etc"
> Just thinking maybe we're not seeing the end of software engineering for those of us already in it—but the door might be closing for anyone trying to come up behind us.
It's worth considering how aggressively open the door has been for the last decade. Each new generation of engineers increasingly disappointed me with how much more motivated they were by a big pay check than they were for anything remotely related to engineering. There's nothing wrong with choosing a career for money, but there's also nothing wrong about missing a time when most people chose it because they were interested in it.
However I have noticed a shift: while half the juniors I work with are just churning out AI slop, the other half are really interested in the craft of software engineering and understanding computer science better.
We'll need new senior engineers in a few years, and I suspect they will come from a smaller pool of truly engaged juniors today.
This is what I see. Less of door slamming completely shut, more like, the door was enormous and maybe a little too open. We forget, the 6 month coding bootcamp to 6 figure salary pipeline was a real thing for a while at the ZIRP apex.
There are still junior engineers out there who have experiments on their githubs, who build weird little things because they can. Those people were the best engineers anyway. The last decade of "money falls from the sky and anyone can learn to code" brought in a bunch of people who were interested in it for the money, and those people were hard to work with anyway. I'd lump the sidehustle "ship 30 projects in 30 days" crowd in here too. I think AI will effectively eliminate junior engineers in the second camp, but absolutely will not those in the first camp. It will certainly make it harder for those junior engineers at the margins between those two extremes.
There's nothing more discouraging than trying to guide a junior engineer who is just typing what you say into cursor. Like clearly you don't want to absorb this, and I can also type stuff into an AI, so why are you here?
The best engineers I've worked with build things because they are truly interested in them, not because they're trying to get rich. This is true of literally all creative pursuits.
I love building software because it's extremely gratifying to a) solve puzzles and b) see things actually working when I've built them from literally nothing. I've never been great at coming up with projects to work on, but I love working on solving problems that other people are passionate about.
If software were "just" a job without any of the gratifying aspects, I wouldn't do nearly as good a job.
heh. i am making software for 40 years more-or-less.
Last re-engineering project was mostly done when they fired me as the probational period was almost over, and seems they did not want me further - too expensive? - and anyone can finish it right? Well...
So i am finishing it for them, one more month, without a contract, for my own sake. Maybe they pay, maybe they don't - this is reality. But I want to see this thing working live.. i have been through maybe 20-30 projects/products of such size and bigger, and only 3-4 had flown. The rest did not - and never for technical reasons.
Then/now i'll be back to the job-search. Ah. Long lists of crypto-or-adtech-or-ai-dreams, mostly..
Mentoring, juniors? i have not seen anything even faintly smelling of that, for decade..
> I think highly motivated juniors who actually want to learn are still valuable.
But it's hard to know if a candidate is one of those when hiring, which also means that if you are one of those juniors it is hard for you to prove it to a prospective employer.
> Adding to this: it's not just that the apprenticeship ladder is gone—it's that nobody wants to deal with juniors who spit out AI code they don't really understand.
I keep hearing this and find it utterly perplexing.
As a junior, desperate to prove that I could hang in this world, I'd comb over my PRs obsessively. I viewed each one as a showcase of my abilities. If a senior had ever pointed at a line of code and asked "what does this do?" If I'd ever answered "I don't know," I would've been mortified.
I don't want to shake my fist at a cloud, but I have to ask genuinely (not rhetorically): do these kids not have any shame at all? Are they not the slightest bit embarrassed to check in a pile of slop? I just want to understand.
> If I'd ever answered "I don't know," I would've been mortified.
I'm approaching 30 years of professional work and still feel this way. I've found some people are like this, and others aren't. Those who aren't tend to not progress as far.
It seems so obvious now, but it does make me thankful that my training drilled into my head to constantly ask "what is the problem I am trying to solve?". Communication in a team on what's going on (both in your head and the overall problem space) is just as important as the mechanical process of coding it.
I feel that's the bare minimum a junior should be asking. the "this is useful" or "this is slop" will come with experience, but you need to at least be able to explain what's going on.
the transition to mid and senior goes when you can start to quantify other aspects of the code. Like performance, how widespread a change affects the codebase at large, the input/outputs expected, and the overall correctness based on the language. Balancing those parameters and using it to accurately estimate a project scope is when you're really thinking like a senior.
I do think that there's a meaningful difference between writing code that was bad (which I definitely did and do) and writing code where I didn't know what each line did.
early on when I was doing iOS development I learned that "m34" was the magic trick to make flipping a view around have a nice perspective effect, and I didn't know what "m34" actually meant but I definitely knew what the effect of the line of code that mutated it was...
Googling on it now seems like a common experience for early iOS developers :)
Senior level. Still can't sometimes. Just the other day I looked over some code I wrote and realized what a pile of slop it was. I kept wondering "What was I thinking when I wrote this? And why couldn't I see how bad it is till now?" My impostor syndrome is triggered hard now.
>Now I just assume they're taking my feedback and feeding it right back to the LLM.
seems like something a work policy can fix quickly. If not something filtered in the interview pipeline. I wouldn't just let juniors go around and try to copy-pasting non-compilable Stackoverflow code, why would I do it here?
New students are presented with agentic coding now, so it's possible that CS will become a more abstract spec refine + verify. Although I can't make it work in my head, that's what I took from speaking with a young college student.
I'm staff and that is probably the main thing I use AI for. It's maybe a bit ironic that AI is a lot better at sounding like an empathetic human being than I am, but I'm still better at writing code.
I don't know what world you're living in but software development has always been a cut throat business. I've never seen true mentoring. Maybe a code review where some a-hole of a "senior" developer would come in having just read "clean code" and use some stupid stylistic preferences as a cudgel and go to town on the juniors. I'm cynical enough to believe that this, "AI is going to take your programming job!" is just a ploy to thin out the applicant pool.
Wow, you must have worked in some REALLY toxic places. I had one toxic senior teammate when I first started out - he mocked me when I was having trouble with some of the dev environment he had created - but he got fired shortly thereafter for being bad at his job.
Everybody else through my 21-year career has almost universally either been helpful or neutral (mostly just busy). If you think code reviews are just for bikeshedding about style minutia, then you're really missing out. I personally have found it extremely rewarding to invest in junior SWEs and see them progress in their careers.
Sure have. Finance, research labs, government contracting. Can't wait for people to chime in with their horror stories. I've seen some of the most dysfunctional crap you can imagine.
Toxicity is spread out and touching most of the industry. Is it fully toxic? Absolutely not. But I found some level of toxicity everywhere I worked for the past 20+ years in this industry.
Seriously. I guess I wouldn’t describe it as a “cut throat” thing, but absolutely nobody in 20 years of working has ever given a shit. The idea of being “mentored” is ridiculous. It doesn’t happen.
My hottest take on this is that it might be healthy for the business. During the recent boom everyone and their grandmother's dog got a job as software engineers, and some aren't really fit for it.
AI provides a bar. You need to be at least better than AI at coding to become a professional. It'll take genuine interest in the technology to surpass AI and clear that bar. The next generation of software professionals will be smaller, but unencumbered by incompetents. Their smaller number will be compensated by AI that can take care of the mundane tasks, and with any luck it's capabilities will only increase.
Surely I'm not the only one who's had colleagues with 10+years experience who can't manage to check out a new branch in git? We've been hiring people we shouldn't have hired.
The only problem is that people need to earn a living while they’re trying to get better than that bar.
Is there bar is set at a competent mid level engineer, people entering the industry need a path from algorithms 101 to above that bar which involves getting paid.
It's not helping that in the last 10 years a culture of job-hopping has taken over the tech industry. Average tenure at tech companies is often ~2 years and after that people job hop to increase compensation.
It's clear why people do it (more pay) but it sets up bad incentives for the companies. Why would a company invest money in growing the technical skill set of an employee, just to have them leave as soon as they can get a better offer?
When using this phrase in this context, is your sentiment positive or negative? In my experience, each time I have a job offer for more money, I go and talk to my current line manager. I explain the new job offer, and ask if they would like to counteroffer. 100% (<-- imagine 48 point bold font!) of the time, my line manager has been simultaneously emotionally hurt ("oh, he's disloyal for leaving") and unsupportive of matching compensation. In almost all cases, an external recruiter found me online, reached out, and had a great new opportunity that paid well. Who am I to look away? I'm nothing special as a technologist, but please don't fault me for accepting great opportunities with higher pay.
> Why would a company invest money in growing the technical skill set of an employee
What exactly is meant by "invest" here? In my career, my employers haven't done shit for me about training. Yet, 100% of them expect me to be up-to-date all the time on whatever technology they fancy this week. Is tech training really a thing in 2025 with so many great online resources? In my career, I am 100% self-trained, usually through blogs, technical papers, mailing lists, and discussions with peers.
At Taos, there was a monthly training session / tech talk on some subject.
At Network Appliance ('98-'09), there was a moderate push to go to trainings and they paid for the devs on the team I was on to go to the perl conference (when it was just down the road one year everyone - even the tech writers - went).
At a retail company that I worked at ('10-'14), they'd occasionally bring in trainers on some thing that... about half a dozen of the more senior developers (who would then be able to spread the knowledge out ... part of that was a formal "do a presentation on the material from the past two weeks for the rest of your team.")
However, as time went on and as juniors would leave sooner the appetite for a company to spend money on training sessions has dissipated. It could be "Here is $1000 training budget if you ask your manager" becoming $500 now. It could be that there aren't any more conferences that the company is willing to spend $20k to send a team to.
If half of the junior devs are going to jump to the next tier of company and the other half aren't going to become much better... why do that training opportunity at all?
Training absolutely used to be a thing that was much more common... but so too were tenures of half a decade or longer.
Then it sounds like you need to train them and also pay them better. Most people just want to stay at one company and not do the grind, but the lack of raises, poor treatment, and much better pay other places is blaming juniors for your companies problems.
When I'm hiring an engineer, HR will easily let me bump up the offer by $10-20K if the candidate counters. It is nearly impossible to get that same $10-20K bump for an existing engineer that is performing extremely well. Companies themselves set up this perverse incentive structure.
This! Each time I join a new job, about 1-3 months in the door, there is a sit-down with the new line manager to check-in and give some feedback. I always talk about future compensation expectations at the time. I tell them: The market pays approximately 4-5% increase in total comp per year. That means, up 20% every 4 years. That is my expectation. If they current company is not paying that rate, I will look elsewhere for work. In almost all cases, they nod their heads in agreement. Ironically, when I come to them 3-5 years later with a new job offer in hand with a nice pay raise, 100% of them do not support matching the compensation, and view me as an un-loyal "job hopper". You just can't win with middle managers.
This is why I never do internal job transfers. The total comp doesn't change. If I do an external job change, I will get a pay rise. I say it to my peers in private: "Loyalty is for suckers; you get paid less."
Yeah, companies broke the career structure decades ago. There's no seniority rewards nor pensions to look forward to, and meanwhile companies put more budget in hiring than in promoting. They look at the high turnover rates and executives shrug. Money is being made, no changes.
It's no surprise the market adapts to the new terms and conditions. But companies simply don't care enough to focus on retention.
This has been a thing for a long time and I've thought about it quite a bit, but I still have no solutions.
I'm pretty sure it just comes down to bean-counting: "we have a new fulltime permanent asset for $100k" vs "we have a new fulltime permanent asset for $120k" is effectively the same thing, and there's a clear "spend money, acquire person" transaction going on. Meanwhile, "we spent $20k on an asset we already have" is.. a hard sell. What are you buying with that $20k exactly? 20% more hours? 20% more output? No? Then why are we spending the money?
It's certainly possible to dance around it talking about reducing risk ("there's a risk this person leaves, which will cause...") but it's bogged down in hypotheticals and kinda a hard sell. Sometimes I wonder if it wouldn't be easier to just fire staff for a week then re-hire them at a new salary.
You keep a good thing going, you buy oil for the machinery, you keep your part of the bargain and do the maintenance. You pay the correct price for the stuff you are lucky enough to have been getting on the cheap.
I like the directness of the question: "Why should I pay more when it won't burn down right this instand if I don't?" This is a question asked all over, and it is dangerous, keeping anything going requires maintenance and knowledge in how to maintain it. That goes for cars and it goes for people.
This is not business, it is miserly behaviour, it is being cheap.
The miser will find himself in a harsh, transactional, brutal world. Because that is the only way for people to protect themselves against him.
This incentive is entirely backwards. It should be "what are we losing with not spending that 20k?". You lose out on someone used to the company workflow, you waste any training you invested in them, you create a hole that strains your other 3-4 100k engineers, and you add a time strain to your managers to spend time interviewing a new member.
if you really believe you can buy all that back for 120k as if you ran short on milkk, you're missing the forest for the tree.
>Sometimes I wonder if it wouldn't be easier to just fire staff for a week then re-hire them at a new salary.
if society conditions a workforce to understand the issue, sure. But psychologically. you'd create an even lower morale workplace. Even for a week, people don't want to be dropped like a hot potato, even if you pick it up later as it cools. People want some form of stability, especially in an assumed full time role.
In my view, I have observed many good, underpaid engineers because they choose stability over higher pay. Most people are happy with slow and stead pay rises while working at the same company. Companies know this and pay accordingly. Only your top 1-10% of employees need more careful "TLC" to give higher raises and regular off-cycle feedback: "You're doing great. We are giving you a special raise for your efforts." You can mostly afford to lose the rest.
I guess that's how we got here to begin with. We take a workforce and treat is as expendable instead of as a proper team.
I suppose it will vary per industry but I can't imagine an other kind of engineering being comfortable just letting go of people mid-project because "we can afford to lose them".
One would assume the solution is to simply offer a good package and retain employees with that. I returned to an old company after a few years of floating around because I realized they had the perfect mix of culture and benefits for me, even if the pay isn't massive.
You're falling for the exact same fallacy experienced by failed salesmen. "Why would I bother investing time in this customer when they're just going to take my offer to another dealership for a better deal?"
Answer: you offer a good deal and work with people honestly, because if you don't, you'll never get a customer.
They could do that: hire juniors, lose money while you train them, and give them aggressive raises. Or they could just do what they are doing: skip the juniors and just hire the people who've got experience.
Everyone's kicking the can down the road and we're very soon going to hit points of "no one has experience (or are already working)". Someone needs to do the training. It doesn't seem like school and bootcamps is enough for what companies need these days.
The game theory here says that such a company will be outcompeted and killed by a company which doesn't spend money+time on retention and training but instead invests that money in poaching.
What you say only works if everyone is doing it. But if you're spending resources on juniors and raises, you can easily be outcompeted and outpoached by companies using that saved money to poach your best employees.
give a big enough raise and they won't want to be poached. You won't retain everyone, but your goal probably isn't to compete with Google to begin with. So why worry of the scenario of boosting a good junior from 100k to 150k but losing them to a 250k job?
In some ways you will also need to read the room. I don't like the mentality of "I won't hire this person, they are only here for money", but to some extent you need to gauge how much of them is mission-focused and how much would leave the minute they get a 10k counter-offer. adjust your investments accordingly and focus on making something that makes money off that.
Its the tragedy of the commons. These companies will think they are very smart for doing this, but theyll just foster a culture where there are no competent employees once the current seniors retire
Compete for talent. You can never compete with big tech salaries, and often you can't compete with their benefits either. But you can still compete in creative ways. The most obvious way that no one does is to promote people into lower hours worked; instead of a pay raise you give them every Friday off, for example. There are a lot of types of people out there motivated by a lot of different things than money.
> It's not helping that in the last 10 years a culture of job-hopping has taken over the tech industry. Average tenure at tech companies is often ~2 years and after that people job hop to increase compensation.
I've started viewing developers that have never maintained an existing piece of software for over 3 years with skepticism. Obviously, with allowances for people who have very good reasons to be in that situation (just entered the market, bad luck with employers, etc).
There's a subculture of adulation for developers that "get things done fast" which, more often than not, has meant that they wrote stuff that wasn't well thought out, threw it over the wall, and moved on to their next gig. They always had a knack of moving on before management could connect the dots that all the operational problems were related to the person who originally wrote it and not the very-competent people fixing the thing. Your average manager doesn't seem to have the capability to really understand tech debt and how it impacts ability to deliver over time; and in many cases they'll talk about the "rock star" developer that got away with a glimmer in their eye.
Saw a post of someone on Hacker News the other day talking about how they were creating things faster than n-person teams, and then letting the "normies" (their words not mine) maintain it while moving on to the next thing. Thats exactly the kind of person I'd like to weed out.
Funny, I was at my previous company almost exactly two years. They never even gave me a cost of living increase, much less a "raise." So I was effectively earning less each year. Change needs to happen from both sides if extended tenure is the goal.
You have cause and effect reversed. Companies stopped training workers and giving them significant raises for experience, so we started job hopping.
Some genius MBA determined that people feel more rewarded by recognition and autonomy than pay, which is actually true. But it means that all the recognition and autonomy in the world won't make you stay if you can make 50% more somewhere else.
When I worked at a very small company we were extremely concerned about this, and so we paid people well enough that they didn't want to leave. All I can figure is that the bean counters just don't understand that churn has a cost.
some places like Amazon operate around the churn. Keep everyone anxious and they won't try to collectively bargain nor ask for raises. They won't be around long enough anyways.
Generally I understand the missing factor to be a control thing.
Th power structure that makes up a typical owners-vs-employees company demands that every employee be replacable. Denying raises & paying the cost of churn are vital to maintaining this rule. Ignoring this rule often results in e.g. one longer-tenured engineer becoming irreplacable enough to be able to act insubordinately with impunity.
A bit bleak but that's capitalism for you. Unionization, working at a smaller companies, or at employee-owned cooperatives are all alternatives to this dynamic.
Good to minimize bus factor, bad when you want to innovate and expand your business. So I guess it's ideal for this slowing economy focused on "maintenance".
I don't think it's good or bad per se. Depends o ntje company needs and the individual desite.
But as someone who originally wanted to be a specialist (or at the very leastT-shaped), I see a lot more problem in fostering specialists than generalists under this model. Sometimes you do just need that one guru who breathes C++ to come in and dig deep into your stack. Not always, but the value is irreplaceable.
Yeah, definitely some drawbacks as well. I think you can develop some specialization despite hopping around relatively often, though it’s not the path I’ve chosen (average tenure of 6-7 years per employer).
Sure but there needs to be a balance with momentum. You cant keep losing institutional knowledge like that. I think we are heavily disbalanced towards too much churn
They have this exact problem with scientific glassblowing, and it's been decades in the making. Manufacturing improvements now mean that you can buy almost everything from a factory, and only need experienced glassblowers for fancy, one-off stuff.
But that means there's no need for entry-level glassblowers, and everyone in the field with any significant experience is super old. The pipeline has been dead for a while now.
This will naturally select for the people who are self driven learners. In a sense this is nothing new, just a continued progression of the raising of the bar of who is still able to contribute economic value to the market
> AI means that those 'easy' tasks can be automated away, so there's less immediate value in hiring a new grad.
Not disagreeing that this is happening in the industry but it still feels like a missed opportunity to not hire juniors. Not only do you have the upcoming skill gap as you mention, but someone needs to instruct AI to do these menial/easy tasks. Perhaps it's only my opinion but I think it would be prudent to instead see this as just having junior engineers who can get more menial tasks done, instead of expecting to add it to the senior dev workflow at zero cost to output.
High risk bets like that cause bubbles. If that bet doesnt pay off then there will be a talent crisis that the american tech industry may not recover from
I think the current grads are going to be shafted either way. In 5 years, there might be more opening for "fresh" young grads and the companies will prefer them over the young people who're just graduating.
“automate it away” ironically still requires a human in the chain to determine what to automate, how, and to maintain that automation. Whether it be derived from an ai or a systemd script or an Antikythera mechanism. Now if you leave that to seniors you just ate a big chunk of their day playing shephard to a dozen plus “automated” pipelines while they still have stuff to do outside the weeds. Now you need more seniors and pretty soon they want triple what you could pay a junior and I don’t think they are 3x more prolific if the junior is managed efficiently quite frankly.
The process of setting up and maintaining automation should be less labor intensive than just doing it manually (or else why would you automate it?) and almost always requires a more advanced skillset than doing the manual task.
I hope juniors will figure out how to use AI to do larger tasks that are still annoying for seniors to do, while seniors take on larger tasks still. I think it's just seniors are learning this stuff faster at the moment and adapting it faster to current work, but as all that changes I would guess juniors reclaim some value back.
That said, you hit on something I've been feeling, the thing these models are best at by far is stuff that wasn't worth doing before.
I've been making use of copilot in VSCode to make changes in a codebase that's new to me, in a language that I can read if not necessarily write unaided - it's a dialect of SQL, so I can certainly understand what's happening, but generating new queries is very time-consuming (half of which is just stupid formatting stuff). Copilot seems to understand the style of the code in my project and so I don't have to do much work to make it conform, compared to my hand-written versions.
I've also written a lot of python 2 in my career, and writing python 3 still isn't quite native-level for me - and the AI tools let me make up for my lack of knowledge of modern Python.
Some juniors do figure it out, but my experience has been that the bar for such juniors is a lot higher than pre-AI junior positions, so there is less opportunity for junior engineers overall.
We had code school grads asking for $110-$130. Meanwhile, I can hire an actual senior engineer for $200 and he/she will be easily 4x as productive and useful, while also not taking a ton of mentorship time.
Since even that $110 costs $140, it's tough to understand how companies aren't taking a bath on $700/day.
If you're hiring in SF or NY, then the problem explains itself. Even a single young new grad needs that much to so live.
you can't have rent at 3.5k a month and not expect 6 figures when requiring in-office work. old wisdom of "30% of salary goes to rent" suggest that that kind of housing should only be rented if you're making 140k. Anyone complaining about junior costs in these areas needs to join in bringing housing prices down.
Yep, the value isn't there. I'm on a very lopsided team, about 5 juniors to 1 senior. Almost all of the senior time is being consumed in "mentorship", mostly slogging through AI slop laden code reviews. There have been improvements, but it's taking a long time.
That's fair. I'm sorry for being snippy. It just feels weird how my junior years always felt like I was on the edge of a needle for being fired because I didn't work "fast enough". Then I hear stories of this vibe coded slop and everyone seeks to be shrugging in confusion.
Its even more frustrating knowing those people went through a overly long gauntlet and prevailed over hundreds of other qualified would-be engineers. Its so weird just seeing an entire pipeline built around minimizing this situation utterly fail.
I entered the job market in late 2000. There was no reason to hire a junior engineer when every hiring manager and senior engineer knew 10 friends who recently lost their jobs. I found work on less desirable projects and yes it affected my career trajectory and it sucked. Starting out has always sucked for most people.
It's happening again now with robotics, self-driving vehicles and RL. Factory workers, truck drivers, construction work, order fulfillment, machinists, farm work, medical technicians and more are all very much at risk (same thing as OP: mostly junior roles getting automated). Some info at https://arxiv.org/pdf/2510.25137
For me the most annoying would be a technically correct solution that completely ignores the “higher-level style” of the surrounding code, at the same time defending the chosen solution by referencing some “best practices” that are not really applicable there for some higher-level reasons, or by insignificant performance concerns. Incidentally, LLMs often produce similar problems, only one doesn’t need to politely argue with them.
Writing unit tests, manual validation work, manual testing. Automating Deployments of infrastructure, DNS work, tracking down annoying one off bugs, fixing and validating dependency issues.
Basically this type of maintenance work for any sufficiently complex codebase. (Over 20k LOC)
When I was an QA intern / Software Dev Intern. I did all of that junk.
Writing automated tests is not the same thing as testing it. Senior engineers write a lot of code, write some tests, do some manual validation. But its usually far from complete.
Other roles in a software team can handle making a complete test suite for a project, and making sure that it covers all of the required end user functionality. In some shops they will push all of these together into a "fullstack" kind of role, in other software shops they will have dedicated team members who's job it is to handle some of this burden. Interns historically filled in the glue for this.
I think I know what you mean in that the most experienced person often writes proof of concept code that others take over. But I can’t imagine a situation where someone is expected to specialize in everything but the PoC.
Also in my part of the Ruby community most folks love tests and are that their code gets better when they write tests. I’d be sad the day I’m writing code and someone else writes the tests.
I don't know if that's it. Speaking from outside the tech space: most of my office jobs since 2012 have been "doing the easy/annoying tasks that had to be done, but more senior people didn't want to 'waste time' dealing with."
So, there are two parts to this:
The first is that a lot of those tasks are non-trivial for someone who isn't a digital native (and occasionally trivial for people who are). That is to say that I often found myself doing tasks that my bosses couldn't do in a reasonable time span; they were tasks which they had ALWAYS delegated, which is another way of saying that they were tasks in which proficiency was not necessary at their level.
This leads into the second part, which is that performing these tasks did not help me advance in relevant experience at all. They were not related to higher-level duties, nor did they endear me to the people who could have introduced me to such duties. My seniors had no interest in our growth as workers; anyone who wanted to see that growth had to take it into their own hands, at which point "junior-level" jobs are only worth the paycheck.
I don't know if it's a senior problem generally, or something specific to this cohort of Boomer/Gen-X seniors. Gun-to-my-head, I would wager the latter. They give enough examples in other arenas of public life to lend credence to the notion that that they simply don't care what happens to their juniors, or to their companies after they leave, particularly if there is added hassle in caring. This is an accusation often lobbed at my own generation, to which I say, it's one of the few things our forebears actually did teach us.
I grew up in the 70s. The hand wringing then was calculators. No one was going to be able to do math anymore! And then wrist watches with calculators came out. Everyone is going to cheat on exams, oh no!
Everything turned out fine. Turns out you don't really need to be able to perform long division by hand. Sure, you should still understand the algorithm at some level, esp. if you work in STEM, but otherwise, not so much.
There were losses. I recall my AP physics professors was one of the old school types (retired from industry to teach). He could find the answer to essentially any problem to about 1-2 digits of precision in his head nearly instantly. Sometimes he'd have to reach for his slide rule for harder things or to get a few more digits. Ain't no one that can do that now (for reasonable values of "no one"). And, it is a loss, in that he could catch errors nearly instantly. Good skill to have. A better skill is to be able to set up a problem for finite element analysis, write kernels for operations, find an analytic solution using Mathematica (we don't need to do integrals by hand anymore for the mot part), unleash R to validate your statistics, and so on. The latter are more valuable than the former, and so we willingly pay the cost. Our ability to crank out integrals isn't what it was, but our ability to crank out better jet engines, efficient cars, computer vision models has exploded. Worth the trade off.
Recently watched an Alan Guth interview, and he made a throwaway comment, paraphrased: "I proved X in this book, well, Mathematica proved...". The point being that the proof was multiple pages per step, and while he could keep track of all the sub/superscripts and perform the Einstein sums on all the tensors correctly, why??? I'd rather he use his brain to think up new solutions to problems, not manipulate GR equations by hand.
I'm ignoring AGI/singularity type events, just opining about the current tooling.
Yah, the transition will be bumpy. But we will learn the skills we need for the new tools, and the old skills just won't matter as much. When they do, yah, it'll be a bit more painful, but so what, we gained so much efficiency we can afford the losses.
> I feel the effects of this are going to take a while to be felt (5 years?);
Who knows if we'll even need senior devs in 5 years. We'll see what happens. I think the role of software development will change so much those years of technical experience as a senior won't be so relevant but that's just my 5 cents.
The way I'm using claude code for personal projects, I feel like most devs will become moreso architects and testers of the output, and reviewers of the output. Which is good, plenty of us have said for ages, devs dont read code enough. Well now you get to read it. ;)
While the work seems to take similar amounts of time, I spend drastically less time fixing bugs, bugs that take me days or God forbid weeks, solved in minutes usually, sometimes maybe an hour if its obscure enough. You just have to feed the model enough context, full stack trace, every time.
Man, I wish this was true. I've given the same feedback on a colleague's clearly LLM-generated PRs. Initially I put effort into explaining why I was flagging the issues, now I just tag them with a sadface and my colleague replies "oh, cursor forgot." Clearly he isn't reading the PRs before they make it to me; so long as it's past lint and our test suite he just sends the PR.
I'd worry less if the LLMs weren't prone to modifying the preconditions of the test whenever they fail such that the tests get neutered, rather than correctly resolving the logic issues.
We need to develop new etiquette around submitting AI-generated code for review. Using AI for code generation is one thing, but asking other people review something that you neither wrote nor read is inconsiderate of their time.
I'm getting AI generated product requirements that they haven't read themselves. It is so frustrating. Random requirements like "this service must have a response time of 5s or less" - "A retry mechanism must be present". We have a specific SLA already for response time and the designs don't have a retry mechanism built.
The bad product managers have become 10x worse because they just generate AI garbage to spray at the engineering team. We are now writing AI review process for our user stories to counter the AI generation of the product team. I'd much rather spend my time building things than having AI wars between teams.
Oof. My general principle is "sending AI-authored prose to another human without at least editing it is rude". Getting an AI-generated message from someone at all feels rude to me, kind of like an extreme version of "dictated but not read" being in a letter in the old days.
At least they're running the test suite? I'm working with guys who don't even do that! I've also heard "I've fixed the tests" only to discover, yes, the tests pass now, but the behavior is no longer correct...
> I feel like most devs will become moreso architects and testers of the output
which means either devs will take over architectural roles (which already exist and are filled) or architects will take over dev roles. same goes for testing/QA - these are already positions within the industry in addition to being hats that we sometimes put on out of necessity or personal interest.
I've seen Product Manager / Technical Program Manager types leaning into using AI to research what's involved in a solution, or even fix small bugs themselves. Many of these people have significant software experience already.
This is mostly a good thing provided you have a clear separation between solution exploration and actually shipping software - as the extra work put into productionizing a solution may not be obvious or familiar to someone who can use AI to identify a bugfix candidate, but might not know how we go about doing pre-release verification.
> I feel like most devs will become moreso architects and testers of the output
Which stands to reason you'll need less of them. I'm really hoping this somehow leads to an explosion of new companies being built and hiring workers , otherwise - not good for us.
> Which stands to reason you'll need less of them.
Depends on how much demand there would be for somewhat-cheaper software. Human hours taken could well remain the same.
Also depends on whether this approach leads to a whole lot of badly-fucked projects that companies can’t do without and have to hire human teams to fix…
This is what I'm doing, Opus 4.5 for personal projects and to learn the flow and what's needed. Only thing I'll disagree with is how the work takes similar amount of time because I'm finding it unbelievably faster. It's crazy how with smart planning and documentation that we can do with the agents, getting markdown files etc, they can write the code better and faster than I can as a senior dev. No question.
I've found Opus 4.5 as a big upgrade compared to any of the other models. Big step up and the minor issues that were annoying and I needed to watch out for with Sonnet and GPT5.1.
It's to the point where I'm on the side of, if the models are offline or I run out of tokens for the 5 hour window or the week (with what I'm paying now), there's kind of no use of doing work. I can use other models to do planning or some review, but then wait until I'm back with Opus 4.5 to do the code.
It still absolutely requires review from me and planning before writing the code, and this is why there can be some slop that goes by, but it's the same as if you have a junior and they put in weak PRs. Difference is much quicker planning which the models help with, better implementation with basic conventions compared to juniors, and much easier to tell a model to make changes compared to a human.
> This is what I'm doing, Opus 4.5 for personal projects and to learn the flow and what's needed. Only thing I'll disagree with is how the work takes similar amount of time because I'm finding it unbelievably faster.
I guess it depends on the project type, in some cases like you're saying way faster. I definitely recognize I've shaved weeks off a project, and I get really nuanced and Claude just updates and adjusts.
Was having a discussion the other day with someone, and we came to the same conclusion. You used to be able to make yourself useful by doing the easy / annoying tasks that had to be done, but more senior people didn't want to waste time dealing with. In exchange you got on-the-job experience, until you were able to handle more complex tasks and grow your skill set. AI means that those 'easy' tasks can be automated away, so there's less immediate value in hiring a new grad.
I feel the effects of this are going to take a while to be felt (5 years?); mid-level -> senior-level transitions will leave a hole behind that can't be filled internally. It's almost like the aftermath of a war killing off 18-30 year olds leaving a demographic hole, or the effect of covid on education for certain age ranges.