
Can't you say the same of the human brain, given a different algorithm? Granted, we don't know the algorithm, but nothing in the laws of physics implies we couldn't simulate it on a computer. Aren't we all programs taking analog inputs and spitting out actions? I don't think what you presented is a good argument for LLMs not "knowing", in some sense of the word.




What meaning of "knowing" attributes understanding to a sequence of boolean operations?
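(For concreteness, "a sequence of boolean operations" is not hyperbole: NAND is functionally complete, so any arithmetic in a network's forward pass could in principle be rewritten as NAND gates alone. A minimal illustrative sketch, not from either comment, building integer addition out of nothing but NAND:)

    # Illustrative only: NAND is functionally complete, so even the
    # additions inside a neural network reduce to boolean operations.
    def nand(a: int, b: int) -> int:
        return 1 - (a & b)

    def xor(a, b):
        n = nand(a, b)
        return nand(nand(a, n), nand(b, n))

    def and_(a, b):
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):
        return nand(nand(a, a), nand(b, b))

    def full_adder(a, b, carry):
        # One-bit addition expressed purely via NAND-derived gates.
        s = xor(xor(a, b), carry)
        c = or_(and_(a, b), and_(carry, xor(a, b)))
        return s, c

    def add(x: int, y: int, bits: int = 8) -> int:
        # Ripple-carry addition: every step is a boolean operation.
        result, carry = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    assert add(19, 23) == 42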


