One of the premises of Hume's sceptical metaphysics was that
P(A|B) is just P(A | B -> A)
The argument for this was that `A` and `B` are only "Ideas in the head" and don't refer to a world; and secondly, by assertion, that Ideas are "thin" pictorial phenomena that can only be sequenced.
Hume here is just wrong. Our terms refer: `A` succeeds in referring to, e.g., Rain. And our experiences aren't "thin", they're "thick" -- this was Kant's point. Our experiences play a rich role in inference that cannot be played by "pictures".
To have a mental representation R of the world is to have a richly structured interpretation which does, in fact, contain and express causation.
i.e., R can quite easily be a mental representation of "B -> A". This, after all, is what we are thinking when we think about the rain hitting our shoes. We do not imagine P(A|B), we imagine P(A|B->A) -- if we didn't, we couldn't reason about the future.
The question is only how we obtain such representations, and the answer is: the body with its intrinsic known causal structure.
Whenever we need to go beyond the body, we invent tools to do so -- and connect the causal properties of those tools to our body.
Hume here is wrong in every respect. And it's his extreme scepticism which undergirds all those who would say modern AI is a model of intelligence -- or is capable of modelling the world.
The world isn't a "constant conjunction of text tokens" -- even Hume wouldn't be this insane. Nevertheless, it is this lobotomised Hume we're dealing with.
There is a science now of how the mind comes to represent the world -- we do not need 18th C. crazy ideas. Insofar as they are presented as science, they're pseudoscience.
Thank you for sharing your opinion on Hume, but I don't see how e.g. Polyominoes, to take a random mathematical (ish) concept I was thinking about today, are connected to our body. I can think of many more examples. Geometry, trigonometry, algebra, calculus, the first order predicate calculus, etc. None of those seem to be connected to my body in any way.
Anyway, this is all why I'm happy I'm not a philosopher. Philosophers deal in logic, but they don't have a machine that can calculate in logic and keep them on the straight and narrow with its limited resources. A philosopher can say anything and imagine anything. A computer scientist -- well, she can too, but good luck making that happen on a computer.
Well, Kant (and Chomsky, et al.) are probably right that we must have innate concepts -- esp. causation, geometry, linguistic primitives, etc. -- in order to be able to perceive at all.
So in this sense a minimal set of a-priori concepts is required to be built in, or else we couldn't learn anything at all.
You might say that this means we can separate the sensory-motor genesis of concepts from their content -- but I think this only applies to a-priori ones.
What I'm talking about is conceptualisations of our environment that provide its causal structure. One important aspect of that is how desires (via goals) change the world; another is how the world itself works.
Both of these do require a body, or at least a solution to the problem of induction (i.e., that P(A|B) is consistent with P(A|B->A), P(A|~(B->A)), P(A|B->Z, C->Z, Z->A), etc.)
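To make that underdetermination concrete, here's a toy simulation (my own construction; the 0.8/0.2 probabilities are arbitrary). Two generative worlds -- one where B causes A, one where a hidden Z causes both and B does nothing -- produce the same observational P(A|B); only intervening on B tells them apart:

```python
import random

random.seed(0)
N = 100_000

def chain():
    """World 1: B -> A (rain causes wet shoes)."""
    B = random.random() < 0.5
    A = random.random() < (0.8 if B else 0.2)
    return A, B

def confounded():
    """World 2: hidden Z causes both A and B; B itself does nothing."""
    Z = random.random() < 0.5
    B = Z  # B is a perfect symptom of Z, not a cause of A
    A = random.random() < (0.8 if Z else 0.2)
    return A, B

def do_chain():
    """Intervene do(B=True) in world 1: A still tracks B."""
    A = random.random() < 0.8
    return A, True

def do_confounded():
    """Intervene do(B=True) in world 2: Z is untouched, so A
    falls back to its base rate 0.5*0.8 + 0.5*0.2 = 0.5."""
    Z = random.random() < 0.5
    A = random.random() < (0.8 if Z else 0.2)
    return A, True

def p_a_given_b(sampler):
    """Estimate P(A | B=True) by Monte Carlo."""
    hits = trials = 0
    for _ in range(N):
        A, B = sampler()
        if B:
            trials += 1
            hits += A
    return hits / trials

print(f"P(A|B), chain:          {p_a_given_b(chain):.2f}")          # ~0.80
print(f"P(A|B), confounded:     {p_a_given_b(confounded):.2f}")     # ~0.80
print(f"P(A|do B), chain:       {p_a_given_b(do_chain):.2f}")       # ~0.80
print(f"P(A|do B), confounded:  {p_a_given_b(do_confounded):.2f}")  # ~0.50
```

The conditional probability alone can't distinguish the two structures; an agent that can act (a body, or any interface for intervention) can.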
>> So in this sense a minimal set of a-priori concepts is required to be built in, or else we couldn't learn anything at all.
I don't disagree with that at all. I'm pretty convinced that, as humans, we can learn and invent all those things we have because we have strong inductive biases that guide us towards certain hypotheses and away from others.
Where those inductive biases come from is a big open question, and I'd be curious to know the answer. We can wave our hands at evolution, but that doesn't explain, say, why we have the specific inductive biases we have, and not others. Why do we speak human languages, for example? Why is our innate language ability the way it is? Intuitively, there must be some advantage in terms of efficiency that makes some inductive biases more likely than others to be acquired, but I get tired waving my hands like that.
I'm not convinced that all that absolutely requires a body, either. I think it's reasonable to assume it requires some kind of environment that can be interacted with, and some way to interact with the environment, but why can't a virtual environment, like a computer simulation of reality, provide that? And it doesn't have to be the real reality, either. A "blocks world" or a "grid world" will do, if it's got rules that can be learned by playing around in it.