
It definitely isn’t voodoo; it’s more like forecasting the weather. Some forecasts are easier to make than others (that it will be cold in winter versus the exact location and wind speed of a tornado, to take an extreme example). The difference is that you can vary the prompt to maximize the likelihood of getting what you want, and there are feasibility thresholds for use cases: getting a good answer 95% of the time is qualitatively different from 55%.
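To make the threshold point concrete, here is a rough sketch (plain Python, hypothetical numbers, not anything measured in this thread) of why per-step reliability compounds: chaining several prompt calls at 95% each still leaves a usable end-to-end success rate, while 55% collapses almost immediately.

    # Probability that a chain of k independent steps all succeed,
    # given a per-step success rate p. Illustrative numbers only.
    def chain_success(p: float, k: int) -> float:
        return p ** k

    for p in (0.55, 0.95, 0.995):
        for k in (1, 3, 10):
            print(f"per-step {p:.3f}, {k:2d} steps -> {chain_success(p, k):.3f}")

    # selected output:
    # per-step 0.550, 10 steps -> 0.003
    # per-step 0.950, 10 steps -> 0.599
    # per-step 0.995, 10 steps -> 0.951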




No, it's not. Nowadays we know how to predict the weather with considerable confidence, whereas prompting may give you different results each time. Moreover, LLM output depends on the surrounding context of your prompts (the conversation history), so a single prompt in isolation may be close to useless, and two different people can get vastly different results from the same one.

> we know how to predict the weather with great confidence

Some weather, sometimes. We're still not good at predicting the exact path of a tornado.

> so a single prompt may be close to useless and two different people can get vastly different results

Of course, but it can be wrong 50% of the time, 5% of the time, or 0.5% of the time, and each of those thresholds unlocks different possibilities (see the sketch below).
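One way to see why those error rates unlock different use cases, again with hypothetical numbers: if a single attempt succeeds with probability p and you can afford n independent retries with some way to check the answer, the chance that at least one attempt works is 1 - (1 - p)^n. A 50%-right model needs many retries to look dependable; a 99.5%-right one barely needs any.

    # Chance that at least one of n independent retries succeeds,
    # assuming a per-attempt success rate p and a reliable way to
    # verify an answer. Illustrative numbers only.
    def at_least_one(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for p in (0.5, 0.95, 0.995):
        print(f"p={p:.3f}: "
              + ", ".join(f"n={n}: {at_least_one(p, n):.4f}" for n in (1, 2, 5)))

    # p=0.500: n=1: 0.5000, n=2: 0.7500, n=5: 0.9688
    # p=0.950: n=1: 0.9500, n=2: 0.9975, n=5: 1.0000
    # p=0.995: n=1: 0.9950, n=2: 1.0000, n=5: 1.0000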




