The key issue is that the current generation of AI has no real concept of understanding. Without understanding, anything is possible, and bad outcomes are almost guaranteed outside of the trivial. Throw a non-trivial codebase at any AI tool and watch as it utterly destroys it, introduces lots of new bugs, adds massive amounts of bloat and, in general, makes it incomprehensible and impossible to support.

I ran a three-month experiment with two of our projects, one in Django and the other in embedded C and ARM assembler, using both ChatGPT and Cursor. You start with "oh wow, that's cool!" and not too long after that you end up in hell.

The only way to use LLMs effectively was to carefully select small chunks of code to work on, have the LLM write the code, and then manually integrate it into the codebase after carefully checking it and making sure it didn't want to destroy ten other files. In other words, keep it on a very tight leash.

I'm about to start a six-month LLM experiment now, this time with Verilog FPGA code (starting from an existing project). We'll see how that goes.

My conclusion, at this point in time, is that LLMs are useful if you are knowledgeable and capable in the domain they are being applied to. If you are not, the shit-show potential is high.
