
I usually feel like I can express a change I want in code faster and more precisely than I can explain what I want an AI to do in English. If I have a good prompt, these tools work okay, but getting to that prompt is often almost as hard as just writing the code itself. Do others feel the same struggle?


You learn how to prompt it effectively, in a way where the AI can fill in the blanks. Sometimes the prompt is just a couple of words, because I know the AI will fill in the rest from the context (getting the context right is crucial). But even when I need to be verbose and spend a few minutes on a prompt, it gives me code that would have taken half a day to write manually. You also need to learn how and when to split work into chunks. It's much less intuitive than you'd think, and completely different from how you would split it for a human. You get to "know" the AI and what it needs in order to succeed at a task.

LLMs have other problems though. The biggest problem for me is that it feels like I lose control of the codebase. I don't have the same mental mapping of the code.


I'm sure my experience pales in comparison to yours, since I don't actually code anything beyond the occasional single-use script, but YES! I hate trying to explain the exact SQL result I'm looking for, or some text modification I need, in order to throw together a CTE (I have read-only access and can't even build a temp table).


GPT-4o usually does well with SQL. It got a query right on the first try using CTEs, array functions, a table created on the fly in the query, etc.

SQL syntax is fiddly so it’s nice to have a robot do it.
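To make the CTE-instead-of-temp-table idea concrete, here is a minimal sketch using Python's stdlib sqlite3 with a hypothetical `orders` table. The point is that the derived rows live only inside the one query, so it works even on a read-only connection:

```python
import sqlite3

# Set up a throwaway table purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.0), (3, 40.0)])

query = """
WITH big_orders AS (          -- CTE standing in for a temp table
    SELECT id, amount
    FROM orders
    WHERE amount > 20
)
SELECT COUNT(*), SUM(amount) FROM big_orders
"""
count, total = conn.execute(query).fetchone()
print(count, total)  # 2 65.0
```

The table name and threshold are made up; the pattern is what an LLM tends to produce when you describe the intermediate result you want in plain words.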



