Nobody knows what the most useful level of approximation is.
The first step to achieving a "useful level of approximation" is to understand what you're attempting to approximate.
We're not there yet. For the most part, we're just flying blind and hoping for a fantastical result.
In other words, this could be a modern case of alchemy: the desired result may not be achievable with the processes being employed. But we don't even know enough yet to discern whether this is the case or not.
We're doing a bit more than flying blind — that's why we've got tools that can at least approximate the right answers, rather than looking like a cat walking across a keyboard or mashing auto-complete suggestions.
That said, I wouldn't be surprised if the state of the art in AI is to our minds as a hot air balloon is to flying, with FSD and Optimus being the AI equivalent of E.P. Frost's steam-powered ornithopters: wowing tech demos but not actually solving real problems.
Claude hasn't been trained with skin in the game. That is one of the reasons it confabulates so readily. The weights and biases are shaped by an external classification signal. There isn't really a way to train consequences into the model the way natural selection has trained them into us.
I would say that consequences are exactly the modification of weights and biases when models make a mistake.
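As a toy illustration of that point (a minimal gradient-descent sketch with made-up numbers, not anything resembling how Claude or any production LLM is actually trained): the "mistake" is just a loss value, and the "consequence" is the weight update that follows from it.

```python
import numpy as np

# Toy model: prediction = w * x + b.
# A minimal gradient-descent sketch to illustrate "consequences as
# weight updates"; not how Claude or any real LLM is trained.
rng = np.random.default_rng(0)
w, b = rng.normal(), rng.normal()
lr = 0.05  # learning rate (hypothetical value)

x, target = 2.0, 7.0  # a made-up training example

for step in range(20):
    pred = w * x + b
    error = pred - target  # the model's "mistake"
    loss = error ** 2
    # The "consequence": gradients of the loss nudge w and b so the
    # same mistake is smaller the next time around.
    w -= lr * (2 * error * x)
    b -= lr * (2 * error)

print(f"final prediction: {w * x + b:.3f} (target {target})")
```

Whether that kind of after-the-fact parameter nudge counts as "skin in the game" is exactly what's in dispute above.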
How many of us take a Machiavellian approach, calculating the combined chance of getting caught and the punishment if we are, instead of just going with gut feelings based on an internalised model built from a lifetime of experience? Some, but not most of us.
What we get from natural selection is instinct, which I think includes knowing what smiles look like, but that's just a fast way to get feedback.
But from our limited understanding, that feedback mechanism is definitely not strictly digital and statistical in nature.