
Your brain is a lot of things, many of which are not well understood.

But from our limited understanding, it is definitely not strictly digital and statistical in nature.



At different levels of approximation it can be many things, including digital and statistical.

Nobody knows what the most useful level of approximation is.


> Nobody knows what the most useful level of approximation is.

The first step to achieving a "useful level of approximation" is to understand what you're attempting to approximate.

We're not there yet. For the most part, we're just flying blind and hoping for a fantastical result.

In other words, this could be a modern case of alchemy --- the desired result may not be achievable with the processes being employed. But we don't even know enough yet to discern if this is the case or not.


We're doing a bit more than flying blind — that's why we've got tools that can at least approximate the right answers, rather than looking like a cat walking across a keyboard or mashing auto-complete suggestions.

That said, I wouldn't be surprised if the state of the art in AI is to our minds as a hot air balloon is to flying, with FSD and Optimus being the AI equivalent of E.P. Frost's steam-powered ornithopters: wowing in tech demos but not actually solving real problems.


Your brain isn't a truth machine. It can't be: it has to create an inner map that relates to the outer world. You have never seen the real world.

You are calculating the angular distance between signals just like Claude is. It's more a question of degree than of category.
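
To make "angular distance" concrete: in embedding space it is literally the angle between vectors, usually computed via cosine similarity. A minimal Python sketch with made-up 3-d vectors (real embeddings are learned and have hundreds or thousands of dimensions):

    import math

    def cosine_similarity(u, v):
        # cos(theta) = (u . v) / (|u| * |v|)
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    def angular_distance(u, v):
        # Angle between the vectors, in radians (clamped for float error).
        return math.acos(max(-1.0, min(1.0, cosine_similarity(u, v))))

    # Made-up toy "embeddings":
    cat = [0.9, 0.1, 0.3]
    dog = [0.8, 0.2, 0.4]
    car = [0.1, 0.9, 0.2]

    print(angular_distance(cat, dog))  # ~0.18 rad: nearby concepts
    print(angular_distance(cat, car))  # ~1.30 rad: distant concepts

Whether brains do something analogous is exactly the open question upthread, but this is the operation being gestured at.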


Claude hasn't been trained with skin in the game. That is one of the reasons it confabulates so readily. Its weights and biases are shaped by an external classification signal. There isn't really a way to train consequences into the model the way natural selection has trained them into us.


I would say that consequences are exactly the modification of weights and biases when models make a mistake.
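
Taken literally, that "consequence" is a gradient step. A toy sketch (a single hypothetical linear neuron, squared-error loss, made-up numbers) of how a mistake becomes a weight update:

    # One neuron, squared-error loss, plain gradient descent.
    w, b = 0.5, 0.0        # current weight and bias
    x, target = 2.0, 3.0   # input, and the answer the model should have given
    lr = 0.1               # learning rate

    pred = w * x + b       # the model's guess: 1.0
    error = pred - target  # the mistake: -2.0

    # Gradients of the loss 0.5 * error**2 with respect to w and b:
    grad_w = error * x     # -4.0
    grad_b = error         # -2.0

    # The "consequence": parameters shift so this mistake is less likely next time.
    w -= lr * grad_w       # 0.5 -> 0.9
    b -= lr * grad_b       # 0.0 -> 0.2
    print(w * x + b)       # new guess: 2.0, closer to 3.0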

How many of us take a Machiavellian approach, calculating the combined probability of getting caught and the punishment if we are, rather than just going with gut feelings based on an internalised model built from a lifetime of experience? Some, but not most of us.

What we get from natural selection is instinct, which I think includes knowing what smiles look like, but that's just a fast way to get feedback.


Is this statement of yours just a calculated angular distance between signals, or does it have some relation to the real world?


It is formed inside a simulation. That simulation is based on information gathered by my sensors.



