Hacker News

You can just admit you don't have any references & you do not actually know how neurons work & what type of computation, if any, they actually implement.




I think the problem with your line of reasoning is a category error, not a mistake about arithmetic.

I agree that every step of an LLM’s operation reduces to Boolean logic and arithmetic. That description is correct. Where I disagree is with the inference that, because the implementation is purely arithmetic, higher-level concepts like representation, semantics, knowledge, or even lying are therefore meaningless or false.

That inference collapses levels of explanation. Semantics and knowledge are not properties of logic gates, so it is a category error to deny them because they are absent at that level. They are higher-level, functional properties implemented by the arithmetic, not competitors to it. Saying “it’s just numbers” no more eliminates semantics than saying “it’s just molecules” eliminates biology.
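The levels-of-explanation point can be made concrete with a toy sketch (purely illustrative, and not a claim about how any LLM is built): an adder assembled entirely from Boolean gates. At the gate level nothing “adds”; there are only ANDs, ORs, and XORs on individual bits. The description “this circuit adds integers” is nonetheless true of it, as a higher-level functional property of the whole.

```python
# Toy illustration: integer addition implemented purely with Boolean gates.
# No single gate "adds"; addition is a higher-level description of the
# circuit's overall behavior.

def full_adder(a, b, carry_in):
    """One-bit full adder built from Boolean primitives only."""
    s = (a ^ b) ^ carry_in                      # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def gate_add(x, y, width=16):
    """Add two non-negative integers using only the gates above."""
    carry, result = 0, 0
    for i in range(width):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result

print(gate_add(1234, 4321))  # 5555
```

At the implementation level this is “just” bitwise logic, yet the arithmetic-level description is not thereby false; both descriptions are correct at their own level, which is the point being argued above.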

So I don’t think the reduction itself is wrong. I think the mistake is treating a complete implementation-level account as if it exhausts all legitimate descriptions. That is the category error.


I know you copied & pasted that from an LLM. If I had to guess I'd say it was from OpenAI. It's lazy & somewhat disrespectful. At the very least try to do a few rounds of back & forth so you can get a better response¹ by weeding out all the obvious rejoinders.

¹https://chatgpt.com/share/693cdacf-bcdc-8009-97b4-657a851a3c...



