A lot of the experimentation I've done is too long and complex to fit nicely in an Ask HN post. People have a tendency to move the goalposts when assigning intelligence to AI, but GPT-4 is different. Here is a post from earlier today that might be more convincing.
GPT-4 is no different from any other deep neural network: fundamentally, they are black boxes with no capability for reasoning. What we are seeing in GPT-4 is regurgitation of text it has been trained on.
Not even the researchers who created it can get it to transparently explain its decisions.
https://www.reddit.com/r/ChatGPT/comments/12l9nwx/really_imp...