
The number of use cases for which I use AI is actually rapidly decreasing. I don't use it anymore for coding, I don't use it anymore for writing, I don't use it anymore for talking about philosophy, etc. And I use zero agents, even though I am (was) the author of multiple MCP servers. It's just all too brittle and too annoying. I feel exhausted when talking too much to those "things"... I am also so bored of all those crap papers being published about LLMs. Sometimes there are some gems, but it's all so low-effort. LLM papers bore the hell out of me...

Anyway, by cutting out AI for most of my stuff, I really improved my well-being. I found the joy back in manual programming, because I will soon be one of the few who actually understand stuff :-). I found the joy in writing with a fountain pen in a notebook, and since then I retain so much more information. Also a great opportunity for the future, when the majority will be dumbed down even more. As for philosophical interaction, I joined an online university and just read the actual books of the great thinkers and discuss them with people and knowledgeable teachers.

What I still use AI for is correcting my sentences (sometimes) :-).

It's kinda the same as when I cut out all(!) social media a while ago. It was such a great feeling to finally get rid of all those mind-screwing algorithms.

I don't blame anyone if they use AI. Do what you like.



> Typewriters and printing presses take away some, but your robot would deprive us of all. Your robot takes over the galleys. Soon it, or other robots, would take over the original writing, the searching of the sources, the checking and crosschecking of passages, perhaps even the deduction of conclusions. What would that leave the scholar? One thing only, the barren decisions concerning what orders to give the robot next!

From Isaac Asimov. Something I have been contemplating a lot lately.


I technically use it for programming, though really for two broad things:

* Sorting. I have never been able to get my head around sorting arrays, especially in Swift syntax. Having it generate those for me is awesome.

* Extensions/Categories in Swift/Objective C. "Write me an extension to the String class that will accept an array of Int8s as an argument, and include safety checks." Beautiful.
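For what it's worth, here is a rough sketch of the kind of Swift those two prompts tend to produce (the names, like int8Bytes, are illustrative, not the exact generated code):

    import Foundation

    // Sorting with a trailing closure - the syntax the model fills in for you.
    let scores = [42, 7, 19, 3]
    let ascending = scores.sorted { $0 < $1 }                            // [3, 7, 19, 42]
    let byLength = ["swift", "c", "objc"].sorted { $0.count < $1.count } // ["c", "objc", "swift"]

    // A String extension taking an array of Int8, with a safety check:
    // invalid UTF-8 yields nil instead of crashing.
    extension String {
        init?(int8Bytes bytes: [Int8]) {
            let unsigned = bytes.map { UInt8(bitPattern: $0) } // reinterpret signed bytes
            guard let decoded = String(bytes: unsigned, encoding: .utf8) else { return nil }
            self = decoded
        }
    }

Calling String(int8Bytes: [72, 105]) gives "Hi", while a byte sequence that isn't valid UTF-8 comes back as nil.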

That said, I don't know why you'd use it for anything more. Sometimes I'll have it generate, like, the skeleton of something I'm working on - a view controller with X number of outlets of Y type, with such-and-such functions stubbed in - but even that is tapering off, because as I build I realize my initial idea can be improved.


I've been using LLMs as calculators for words: they can summarize, spot, and correct things, but they are often wrong about it - especially when I have to touch a language I haven't used in a while (Python, PowerShell, Rust as recent examples), or a sub-system (SuperPrefetch on Windows, or why audio is dropping on coworkers' machines when they run some of the tools... don't ask me why), and all kinds of obscure subjects (where I'm sure experts exist, but when you need them they are not easy (as in "nearby") to reach, and even then might not help).

But now my grain of salt has grown - it's still helpful, but much like a real calculator, there is a limit to its precision and to what it can do.

For one it still can't make good jokes :) (my litmus test)


This is also my experience with (so-called) AI. Coding with AI feels like working with a dumb colleague that constantly forgets. It feels so much better to manually write code.


> I don't use it anymore for coding

I'm curious, can you expand on this? Why did you start using coding agents, and why did you stop?


I started to code with them when Cursor came out. I built multiple projects with Claude and thought that this was the freaking future. Until all joy disappeared and I began to hate the whole process. I felt like I didn't do anything meaningful anymore, just telling a stupid machine what I want and letting it produce very ugly output. So a few months ago, I just stopped. I even went back to Vim...

I am a pretty idealistic coder, who always thought of coding as an art in itself. And using LLMs robbed me of the artistic aspect of actually creating something. The process of creating is what I love and what gives me the inspiration and energy to actually do it. When a machine robs me of that, why would I continue to do it? Money then being the only answer... A dreadful existence.

I am not a Marxist, probably because I don't really understand him, but I think LLMs are "detachment of work" applied to coders, IMHO. Someone should really do a phenomenological study on the "Dasein" of a coder with an LLM.

Funnily, I don't see any difference in productivity at all. I have my own company and I still manage to get everything done on deadline.


I'll need to read more about this ("Dasein"), as I was not aware of it. Yesterday our "adoptive" family had a very nice Thanksgiving, and we were considered the youngsters (close to our 50s) among our hosts & guests, and this came up multiple times when we were discussing AI among many other things - "the joy of work", the "human touch", etc. I usually don't fall for these feel-good talks, but now that you mention it, it hit me: what would I do if something like AI completely replaced me (if it ever does)?

Thank you, and sorry my thoughts are all over...


I hear you, sometimes it is easier to just do it myself.

You can tell the AI to change the "ugly code" to be how you like. Works for me most of the time.

Even better, tell the AI to not write it that way in the first place. It writes a plan, you skim the plan, and tell it to change it.

These tools are not going away, so we need to learn how to use them effectively.


> let it produce very ugly output.

Did you try changing your prompts?


Skill declines over time, without practice.

If you speak fluent Japanese and you don't practice, you will remember being fluent but no longer actually be able to speak fluently.

It's true for many things; writing code is not like riding a bike.

You can't not write code for a year and then come back at the same skill level.

Using an agent is not writing code; but using an agent effectively requires that you have the skill of writing code.

So, after using a tool that automatically writes code for you, that you probably give some superficial review to, you will find, over time, that you are worse at coding.

You can sigh and shake your head and stamp your feet and disagree, but it's flat-out a fact of life:

If you don't practice, you lose skill.

I personally found this happening, so I now do 50/50 time: one week with AI, one week with strictly no AI.

If the no AI week “feels hard” then I extend it for another week, to make sure I retain the skills I feel I should have.

Anecdotally, here at $corp, I see people struggling because they are offloading the “make an initial plan to do x that I can review” step too much, and losing the ability to plan software effectively.

Don't be that guy.

If you offload all your responsibilities to an agent and sit playing with your phone, you are making yourself entirely replaceable.


I cannot talk for OP, but I have been researching ways to make ML models learn faster, which obviously is a path that will be full of funny failures. I'm not able to use ChatGPT or Gemini to edit my code, because they will just replace my formulas with SimCLR and call it done.


That's it - these machines don't have an original thought in them. They have a lot of data, so they seem like they know stuff, and they clearly know stuff you don't. But go off the beaten path and they gently but annoyingly try to steer you back.

And that's fine for some things. Horrible if you want to do non-conventional things.


I liken it to a drug that feels good over the near term but has longer-term impacts... sometimes you have to get things out of your system. It's fun while it lasts, and then the novelty wears off. (And just as some people have the tolerance to do drugs for much longer periods of time than others, I think the same is the case for AI.)


It sounds like you went in deep for a while, and then rebounded. Good for you (no sarcasm, I mean it).

We should all find little joys in our life and avoid things that deaden us. If AI is that for you, I'd say you made a good decision.


I commend you for your choices. This is the way in the 2020s.


I use it for a lot of stuff, but ultimately redo almost all of it - which I think is right.

The LLM is the mush of everyone's stuff, the way the juice at the bottom of the bin is a mix of all the restaurants' food.

The writing that comes out the other end of the LLM is bland.

What it IS useful for is seeing a wrong thing and then going and making my own.

I still use it for various little scripts and menial tasks.

The push for this stuff to replace creativity is disgusting.

Sticking LLMs in every place is just crap, I've had enough.


This is the best take


No one uses agents. They're a myth that Marc Benioff willed into existence. No one who regularly uses LLMs would ever trust one to do unattended work.


You managed to move the goalposts in two sentences; if you realized that your first claim was wrong, you probably should have rewritten it rather than trying to save it at the end.


Agent = agentic LLM

LLM = co-pilot, Gemini, Claude, Mistral chat

No one who uses an LLM would trust an agentic LLM


You seem to be using a different definition of "agent(ic)" than me and perhaps most people, because your comment makes no sense.

I will repeat that no organization has adopted LLMs to work independently as agents, doing work in the background without human supervision.

Writing code doesn't count, because the code is reviewed and easily reverted. Sending emails, writing and sending legal contracts, etc. would count.


The economics of the force multiplier are too strong to ignore, and I'm guessing SWEs who don't learn how to use it consistently and effectively will be out of the job market in 5 or so years.


Back in the early 2000s the sentiment was that IDEs were a force multiplier that was too high to ignore, and that anyone not using something akin to Visual Studio or Eclipse would be out of a job in 5 or so years. Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.


But the vast majority are still using an IDE - and I say this as someone who has adamantly used Vim with plugins for decades.

Something similar will happen with agentic workflows - those who aren't already productive with the status quo will have to eventually adopt productivity enhancing tooling.

That said, it isn't too surprising if the rate of AI adoption starts slowing down around now - agentic tooling has been around for a couple years now, so it makes sense that some amount of vendor/tool rationalization is kicking in.


It remains to be seen whether these tools are actually a net enhancement to productivity, especially accounting for longer-term / bigger-picture effects -- maintainability, quality assurance, user support, liability concerns, etc.

If they do indeed provide a boost, it is clearly not very massive so far. Otherwise we'd see a huge increase in the software output of the industry: big tech would be churning out new products at a record rate, tons of startups would be reaching maturity at an insane clip in every imaginable industry, new FOSS projects would be appearing faster than ever, ditto with forks of existing projects.

Instead we're getting an overall erosion of software quality, and the vast majority of new startups appear to just be uninspired wrappers around LLMs.


I'm not necessarily talking about AI code agents or AI code review (workflows where I think it's difficult for agentic tooling to show a tangible PoV against humans, though I've seen some of my portfolio companies building promising capabilities that will come out of stealth soon), but various other enhancements such as better code and documentation search, documentation generation, automating low-sev ticket triage, low-sev customer support, etc.

In those workflows and cases, where margins and the dollar value provided are low, I've seen significant uptake of AI tooling where possible.

Even reaching this point was unimaginable 5 years ago, and is enough to show workflow and dollar value for teams.

To use another analogy, using StackOverflow or Googling was viewed derisively by neckbeards who constantly spammed RTFD back in the day, but now no developer can succeed without being able to be a proficient searcher. And a major value that IDEs provided in comparison to traditional editors was that kind of recommendation capability along with code quality/linting tooling.

Concentrating on abstract tasks where the ability to benchmark between human and artificial intelligence is difficult means concentrating on the trees while missing the forest.

I don't foresee codegen tools replacing experienced developers but I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.


> I've seen significant uptake of AI tooling where possible.

Uptake is orthogonal to productivity gain. Especially when LLM uptake is literally being forced upon employees in many companies.

> I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.

That may be true! But my point is they also create new overhead in the process, and the net outcome to overall productivity isn't clear.

Unpacking some of your examples a bit --

Better code and documentation search: this is indeed beneficial to productivity, but how is it an agentic workflow that requires individual developers to adopt and become productive with it, relative to the previous status quo?

Documentation generation: between the awful writing style and the lack of trustworthiness, personally I think these easily reduce overall productivity, when accounting for humans consuming the docs. Or in the case of AI consuming docs written by other AI, you end up with an ever-worsening cycle of slop.

Automating low sev ticket triage: Potentially beneficial, but we're not talking about a revolutionary leap in overall team/org/company productivity here.

Low sev customer support: Sounds like a good way to infuriate customers and harm the business.


Hard agree on documentation. In my view, generated documentation is utterly worthless, if not counterproductive. The point of documentation is to convey information that isn't already obvious from the code. If the documentation is just a padded, wordy text extrapolated from the code, reading it is a complete waste of time.


Another thing here is that LLMs don't have to be a productivity boost if it lets you be lazier. Sometimes I'll have an LLM do something and it doesn't save time compared to me doing it but I can fuck off while it's working and grab a drink or something. I can spend my mental energy on hard problems rather than looking through docs to find all of the right functions and plumb things in the code.


OK, but LLMs are being valued as if they are one of the most important technologies ever created. How much will companies pay for a product that doesn't boost productivity but allows employees to be lazier?


I think no one can predict what will happen. We need to wait until we can empirically observe who will be more productive on certain tasks.

That's why I started with AI coding. I wanted to hedge against the possibility that this takes off and I become useless. But it made me sad as hell, and so I just said: screw it. If this is the future, I will NOT participate.


The good thing is that the selling point of LLM tools is that they're dead easy to use, so even if you find yourself having to use them in the future, it won't be an issue. I know the AI faithful love talking about how non-believers will be "left behind" and stylize prompt engineering as some kind of deeply involved, complex new science, but it really isn't. As more down-to-earth AI fanatics have confirmed to me, it'll probably take you an afternoon of reading some articles on best practices and you'll be back amongst the best of them. This isn't like learning a new language or framework.


That’s fine, but you don’t want to be blindsided by changes in the industry. If it’s not for you, have a plan B career lined up so you can still put food on the table. Also, if you are good at old-fashioned SE and AI, you’ll be OK either way.


As someone who uses vim full time, all that happened is that people started porting all the best features of IDEs over to vim/emacs as plugins. So those people were right; it's just that the features flowed.

Pretty sure you can count the number of professional programmers using vanilla vim/neovim on one hand.


People also started using vi edit mode inside IDEs. I've personally encountered that much more often.


It depends where you work. In gaming, the best programmers I know might not even touch the command line / Linux, and their "life" depends on Visual Studio... Why? Because the ecosystem around Visual Studio / Windows and the way game console devkits work are pretty much tied together - while PlayStation is some kind of BSD, and maybe Nintendo too, all their proper SDKs are just for Windows and built around Visual Studio (there are some studios that are exceptions, but they are rare).

I'm sure other industries have their similar examples. And then the best folks on my direct team (infra), which is much smaller, are the command-line, Linux/docker/etc. guys who mostly use VSCode.


> Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.

The best programmers I know are game programmers using Visual Studio. Real Visual Studio, not Visual Studio Code.

(vim is definitely a big thing but I'm not sure how many people I know who even use emacs anymore...)


I’m sceptical

The models (Claude Opus 4.5 included) still seem to not get things right, miss edge cases, and write code in a way that's not very structured.

I use them daily, but I often have to rewrite a lot to reshape the codebase to a point where it makes sense to use the model again.

I’m sure they’ll continue to get better, but out of a job better in 5 years? I’m not betting on it.


Ya, you have to shape your code base - and not just that, but get your AI to document your code base and come up with some sort of pipeline to have different AIs check things.

It’s fine to be skeptical, and I definitely hope I’m wrong, but it really is looking bad for SWEs who don’t start adopting at this point. It’s a bad bet in my opinion; at least have your F-U money built up in 5 years if you aren’t going all in on it.


Why would you go all in? There doesn't seem to be a learning curve. What is there to learn about using AI to code?


The learning curve is actually huge. If you just vibe code with AI, the results are going to suck. You basically have to reify all of your software engineering artifacts and get the AI to iterate on them and your code as if it were an actual software engineer (one who forgets everything whenever you reboot it, which is why you have to make sure it can re-read the artifacts to get its context back up to speed again). So a lot more planning, design, and test documentation than you would do in a normal project. The nice thing is that the AI will maintain all of it as long as you set up the right structure.

We are also still in the early days; I guess everyone has their own way of doing this ATM.


By this point you've burnt up any potential efficiency gains. So you spend a lot of hours learning a new tool, which you then have to spend a lot of additional hours babysitting and correcting, so much that you'll be very far from those claimed productivity gains. Plus the skills you need to verify and fix it will atrophy. So that learning curve earns you nothing except the ability to put "AI" somewhere on your CV, which I expect will lose a lot of its lustre in 1-2 years' time, when everybody has had enough experiences with vibe coders who don't, or no longer can, ensure the quality of their super-efficient output.


This is all bullshit btw.

Speaking as someone with a ton of experience here.

None of the things they do can go without immense efforts in validation and verification by a human who knows what they're doing.

All of the extra engineering effort could have been spent just making your own infrastructure and procedures far more resilient and valuable to far more people in your team and yourself going forward.

You will burn more and more and more hours over time because of relying on LLMs for ANYTHING non-trivial. It becomes a technical debt factory.

That's the reality.

Please stop listening to these grifters. Listen to someone who actually knows what they're talking about, like Carl Brown.


Care to share some links?

Not this one, presumably: https://en.wikipedia.org/wiki/Carl_Robert_Brown


He's the youtuber "The Internet of Bugs"


That’s interesting, but how much of this - if written down, documented, and made into video tutorials - could be learnt by just about any good engineer in 1-2 weeks?


I don’t see much yet; maybe everyone is just winging it until someone influential gives it a name. The vibe coding crowd have set us back a lot, and really so did the whole leetcode interview fad that we are only just throwing off. It’s kind of obvious though: just tell the AI to do what a normal junior SWE does (like write tests), but write a lot more documentation, because they forget things all the time (a junior engineer who makes more mistakes, so they need to test more, and remembers nothing).


The trick is being a good engineer in the first place.


The concepts in the LLM's latent space are close to each other, and you find them by asking in the right way; so if you ask like an expert, you find better stuff.

For it to work best you should be an expert in the subject matter, or something equivalent.

You need to know enough about what you're making not just to specify it, but to see where the LLM is deviating (perhaps because you needed to ask more specifically).

Garbage in garbage out is as important as ever.


I hope you are joking and/or being sarcastic with this comment…


I don't think they really are.

There is, effectively, a "learning curve" required to make them useful right now, and a lot of churn in technique, because the tools remain profoundly immature and their results are delicate and inconsistent. To get anything out of them and trust what you get, you need to figure out how to hold them right for your task.

But presuming there's something real here, and there does seem to be something, eventually all that will smooth out, and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast. The whole vision of these tools is to make the work easier, more accessible, and more productive, after all. Having a big learning curve doesn't align with that vision.

Unless they happen to make you significantly more productive today on the tasks you want to pursue, which only seems to be true for select people, there's no particular reason to be an early adopter.


fantastic comment! I disagree on two fronts:

- we are far removed from “early adopter” stages at this point

- “eventually all that will smooth out…” is assuming that this is eventually going to be some magic that just works - if this actually happens both early and late adopters will be unemployed.

it is not magic, and it is unlikely to ever be magic. but from my personal perspective and that of many others I read - if you spend the time (I am now just over 1,200 hours spent; I bill it, so I track it :) ) it will pay dividends (and also will feel like magic occasionally)


If you had spent those 1,200 hours not using it, you would have matured in your craft 3x more and figured out far better ways of doing things.


I've been hacking for 3 decades, so far north of 1,200 hours... In my career, the one trait that always seems to differentiate great SWEs from decent/mediocre/awful ones is laziness.

The best SWEs will automate anything they have to do manually more than once. I have seen this over and over and over again. LLMs have taken automation to another level, and learning everything they can help with, to automate as much of my work as possible, will be worth 12,000+ hours in the long run.


What is this fantasy about people being unemployed? The layoffs we’ve seen don’t seem to be discriminating against or in favor of AI - they appear to be moves to shift capital from human workers to capex for new datacenters.

It doesn’t appear that anything of this sort is happening, and the idea that a good employer with a solid technical team would start firing people for not “knowing AI” instead of giving them a 2-week intro course seems unrealistic to me.

The real nuts and bolts are still software engineering. Or is that going to change too?


I don't think there will be massive unemployment based on actual "AI has removed the need for SWEs of this level..." kind of talk, but I was specifically commenting on "eventually all that will smooth out and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast." If this actually did happen (it won't), then we'd all have to worry about being unemployed.


They'll be more employable, not less, since they're the only ones who will be able to fix the huge mess left behind by the people relying on them.


Never in the history of tech did luddites have an advantage in employment.


I mean, yeah they did, in this sense, literally all the time. The people who generated crap by copy-pasting from Stack Overflow, or generated scaffolding with tools they didn't understand, were literally the kind of programmers you tried to weed out.

This is equivalent of that.


Crappy engineers are going to be crappy engineers, so what?

> This is equivalent of that.

In the hands of a crappy engineer from above, you are correct.


It’s the opposite. The more you know how to do without them, the more employable you are. AI has no learning curve, not at the current level of complexity anyway. So anyone can pick it up in 5 years, and if you’ve used it less, your brain is better.


With all due respect, claiming “AI has no learning curve” can be an effective litmus test for seeing who has actually dug into agentic AI enough to give it a real evaluation. Once you start to peel back the layers of how to get good output, you understand just how much skill is involved. It's very similar to being a “good googler”. Yeah, on its face it seems like it shouldn't be a thing, but there absolutely are levels to it, and it's a skill that must be learned.


There is nothing to learn, the entry barrier is zero. Any SWE can just start using it when they really need to.


Some of us will need time to learn to give less of a shit about quality.


Or you could learn how to do it the right way with quality intact. But it’s definitely your choice.


Good. The smartest and best should be cutting out middlemen and selling something of their own instead of continuing to shovel all the money up the company pyramids. I think the pyramids' trash will become easier and easier to spot and avoid.


> ... an SWEs who don’t learn how to use it consistently ...

An SWE does not necessarily need to "learn" Claude Code any more than someone who does not know programming at all does in order to use the tool effectively. What actually matters is that they know how things should be done without coding assistants, they understand what the tools may be doing, and then they give directions, correct mistakes, and review code.

In fact, I'd argue tools should be simple and intuitive for any engineer to quickly pick up. If an engineer with a solid background in programming but no prior experience with the tools cannot be productive with such a tool after an hour, it is the tool that has failed us.

You don't see people talk about "prompt engineering" as much these days, because that simply isn't so important any more. Any good tool should understand your request like another human does.


People don't talk about prompt engineering because it has become "context engineering". Agentic AI is the real-deal future.


Don't think so.



