> The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language
Or, put another way, the problem space reality presents is unimaginably large. So large that it took 4 billion years for evolution to produce human-level intelligence. That said, humans have been working on the AI problem for a pretty short period of time, and we have made some pretty good strides at reproducing intelligence.
Evolution wasn't trying very hard to create intelligence; it's really not that useful for survival relative to how expensive it is biologically, to the extent that some people think it's more an artifact of human mate selection and signaling (like a peacock's tail) than a proper adaptive trait.
Without a strong survival gradient pushing a population towards higher intelligence, it's actually surprising that evolution stumbled on it at all, which speaks to how easy it must be to find in solution space, not how hard.
I would be very surprised if a human-guided search can't crack the problem relatively soon - we're only just now starting to hit the threshold of computing power necessary to do it, so it's not too shocking that we haven't fully succeeded yet.
Yes, and thank you, I couldn't remember the name of the book to reference it!
I think it makes a somewhat strong case; to me, the big problem to explain regarding the evolution of intelligence is the fact that it only helps survival a little bit, not a lot. And it evolved in very little time, which usually requires huge selection pressure. Sexual selection offers a potential explanation there.
You know, it's kind of funny. We have no idea if the current path is trending toward a local optimum, or if we're heading toward the global optimum.
We are completely aware of the problem of getting stuck in local optima in our AI (especially ML) algorithms, but little thought has been given to whether our AI research itself has been trending toward one. We're pouring vast research resources into a few pathways and hoping that our hill climbing leads to a global maximum (or at least a local one high enough to count as 'human-level intelligence'). A toy sketch of the worry is below.
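To make that concrete, here's a minimal sketch (the landscape and all the numbers are invented for illustration, not anything from the thread): greedy hill climbing stops on whichever peak is nearest to where it started, and the search itself gives no signal about whether a taller peak exists elsewhere.

```python
import math

def fitness(x):
    # A bumpy 1-D landscape: a tall peak near x = 8 (the "global optimum")
    # and a shorter one near x = 2 (a "local optimum").
    return 3 * math.exp(-(x - 8) ** 2) + math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    # Greedy local search: move to a neighbor only if it's strictly better.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break  # no neighbor improves: the climber stops, wherever it is
        x = best
    return x

# The same algorithm lands on different peaks depending on where it starts,
# and nothing in the search tells you which kind of peak you found.
for start in (2.5, 6.0):
    peak = hill_climb(start)
    print(f"start={start} -> peak at x={peak:.2f}, fitness={fitness(peak):.2f}")
```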
It seems quite the opposite: the vast research resources of AI, now heavily funded and deployed by industry, are applied to pathways on hills that credibly seem high enough. Current deep learning approaches are sufficient to build systems that work well enough to make profitable products, and that's it; there's no claim, expectation, or necessity for that to be even a local optimum, much less a global one.
> but little thought has been given to whether our AI research itself has been trending toward one
Wait, you think people aren't thinking about that all the time? They absolutely are. And there are very few obvious ways to determine whether we're at a local or global optimum. Resources are limited, and everyone wants to follow the current best results, because that's necessary for business success.
One would expect a competitive advantage, then, in funding research to try something else. Exclusively following the current best results just means equivalency, and provides no competitive edge.
I think the authors are saying that the number of mental models (possible worlds) that language can suggest is unmanageably infinite, unless the listening brain can rapidly eliminate all the infeasible or implausible worlds, form a viable model of what the speaker is thinking, and only then formulate a response. To do that, the listener needs to bring to bear a wealth of world knowledge and common sense, all of which is lacking in the current models of AI created by deep learning.
As such, unless AI can somehow build useful models of the world and combine them with basic horse sense, it will always be just a one-trick pony.
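As a toy illustration of that pruning (the sentence, the candidate readings, and the plausibility rule below are my own invention, just a sketch of the idea): even three small ambiguities multiply into many candidate readings, and it's world knowledge, not anything in the text itself, that kills all but one.

```python
from itertools import product

# "The trophy didn't fit in the suitcase because it was too big."
# Each ambiguity multiplies the space of candidate interpretations.
referent_of_it = ["the trophy", "the suitcase"]
sense_of_fit = ["fit inside", "be suitable for"]
sense_of_big = ["physically large", "important"]

candidates = list(product(referent_of_it, sense_of_fit, sense_of_big))
print(f"{len(candidates)} surface readings from just three ambiguities")

def plausible(reading):
    it, fit, big = reading
    # Stand-in for world knowledge: containment fails when the *contents*
    # are physically large, and "fit" with a suitcase means containment.
    return (it == "the trophy"
            and fit == "fit inside"
            and big == "physically large")

survivors = [r for r in candidates if plausible(r)]
print(f"{len(survivors)} reading(s) survive the common-sense filter: {survivors}")
```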
> the problem space reality presents is unimaginably large
For our purposes as a heavily time-limited species, it's effectively infinite. Across the globe we'll probably create hundreds of thousands of narrow AI programs over the coming decades, and that won't touch a meaningful fraction of what could be done. AI writing AI is the only way to jump much further. I'm skeptical of general AI over the next 20-30 years or so; however, I do expect we'll commonly see AI writing narrow AI programs in the next 10-15 years.