
Ok, maybe it helps somewhat. My experience is that when the agent fails or produces crappy code, it's usually not the non-deterministic output of the LLM but rather that the task just isn't suitable or the system prompt didn't provide enough information.


Not always. Sometimes just a different internal "seed" can produce a different, working solution.
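A toy sketch of the point (not an actual LLM sampler, just an illustration): for the same next-token probability distribution, a sampler seeded differently can pick different tokens, so two runs of the same prompt can diverge into different solutions. The distribution and token names below are made up.

```python
import random

def sample_next_token(probs, seed):
    # Same distribution, different seed -> possibly a different token.
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token distribution from a model
probs = {"for": 0.40, "while": 0.35, "recursion": 0.25}

print(sample_next_token(probs, seed=1))
print(sample_next_token(probs, seed=2))
```

The same seed always yields the same token (sampling is deterministic given the seed), but changing the seed can change the choice, and early divergences compound over a long generation.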



