Hacker News

The whole point of a cited source is that you read the source to verify the claim. Amazing how many people in this thread seem to not let this little detail get in the way of their AI hate.


> The whole point of a cited source is that you read the source to verify the claim. Amazing how many people in this thread seem to not let this little detail get in the way of their AI hate.

I like that you read all the citations in your concrete example of how good chat gpt is at citations and chose not to mention that one of them was made up.

Like, either you saw it and consciously chose not to disclose that information, or you asked a bot a question, got a response that seemed right, and then trusted that the sources were correct and posted it. But there's no chance of the latter, because you specifically just stated that that's not how you use language models.

On an unrelated note, what are your thoughts on people using plausible-sounding, LLM-generated garbage text backed by fake citations to lend credibility to their existing opinions? It seems like an existential threat to the concept of truth or authoritativeness on the internet.


I use LLMs all the time and have since they first became available, so I don't hate them. But I do know they are just tools with limitations. I am happy that ChatGPT produces better citations these days, but I still do not trust it with anything important without double-checking several places. Besides, the citation itself can be some AI-generated blog post with completely wrong information.

These tools have limitations. The sooner we accept that, the sooner we learn to use them better.




