The problem of induction is fatal. But we overcome it: we do witness causation.
When I act on the world, with my body, I take "Body -> Action" as a given. We witness causation in our every action.
> This is an odd claim
The tokens can be given any meaning. The statistical distribution of token frequencies in our language has an infinite number of causal semantics consistent with it.
We can find arbitrary patterns such that
P(A) < P(A|B) < P(A|B & C) < P(A|B & C & ...)
Only those patterns we give a semantics to ("Rain" = Rain), and only those we already know to be causal, do we count. This is the trick of humans reading the output of LLMs -- this is what makes it possible. It's essentially one big Eliza effect.
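To make that concrete, here is a minimal sketch (plain Python; the hidden variable Z and all the probabilities are toy assumptions of mine, not anything from the text above). Three token types A, B, C are each mere symptoms of Z, with no causal links among themselves, and the inequality chain still holds:

```python
import random

random.seed(0)

# Hidden common cause Z. Tokens A, B, C are all mere symptoms of Z;
# none of them causes any other.
N = 100_000
samples = []
for _ in range(N):
    z = random.random() < 0.5
    a = random.random() < (0.8 if z else 0.1)
    b = random.random() < (0.8 if z else 0.1)
    c = random.random() < (0.8 if z else 0.1)
    samples.append((a, b, c))

def p(pred):
    """Marginal probability of pred over the sample."""
    return sum(1 for s in samples if pred(s)) / N

def p_cond(pred, given):
    """Conditional probability of pred given the filter `given`."""
    matching = [s for s in samples if given(s)]
    return sum(1 for s in matching if pred(s)) / len(matching)

print("P(A)       =", round(p(lambda s: s[0]), 3))
print("P(A|B)     =", round(p_cond(lambda s: s[0], lambda s: s[1]), 3))
print("P(A|B & C) =", round(p_cond(lambda s: s[0],
                                   lambda s: s[1] and s[2]), 3))
```

On a run of this, P(A) comes out near 0.45, P(A|B) near 0.72, and P(A|B & C) near 0.79: the pattern is there, and no token causes any other. Whatever causal story we read into it is supplied by us.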
No, the structure of language isn't the structure of the world.
This pattern in tokens,
P(A) < P(A|B) < P(A|B & C) < P(A|B & C & ...)
is an associative statistical model of conditional aggregate salience between token terms.
Phrase any such conditional probability however you wish; it will never select for causal patterns.
This is why we experiment. It's why we act on the world to change it.
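A sketch of what experiment buys you, in the same toy style (the do() notation is Pearl's intervention operator; the model and numbers are again illustrative assumptions): conditioning on B in passive data makes A look far more likely, but forcing B by acting on the world leaves A at its base rate, because B never caused A.

```python
import random

random.seed(1)

def world(do_b=None):
    """One draw from a world where hidden Z causes both A and B.
    B has no causal influence on A. do_b forces B (an intervention)."""
    z = random.random() < 0.5
    b = (random.random() < (0.9 if z else 0.1)) if do_b is None else do_b
    a = random.random() < (0.9 if z else 0.1)
    return a, b

N = 100_000

# Passive observation: condition on B having happened.
obs = [world() for _ in range(N)]
p_a = sum(a for a, _ in obs) / N
with_b = [a for a, b in obs if b]
p_a_given_b = sum(with_b) / len(with_b)

# Experiment: act on the world and *make* B happen.
exp = [world(do_b=True) for _ in range(N)]
p_a_do_b = sum(a for a, _ in exp) / N

print(f"P(A)       = {p_a:.2f}")         # baseline
print(f"P(A|B)     = {p_a_given_b:.2f}")  # jumps: mere association
print(f"P(A|do(B)) = {p_a_do_b:.2f}")     # stays at baseline: no causation
```

The gap between P(A|B) and P(A|do(B)) is exactly what no rephrasing of a conditional probability can detect from the passive stream alone.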
When the child burns their hand on the fireplace, they do so once. Why? Because the child immediately infers that the fire caused the burn. How? Via abduction, roughly: asking "how?", then "how?" again, until explanation runs out. In other words, we bottom out our reasoning in a direct, bodily encounter with the world.
Absent this, absent being in the world with a body, you cannot determine causes.
The problem of induction phrased in modern language is this: statistics isn't informative. Or, conditional probabilities are no route to knowledge. Or, AI is dumb.