My apologies, but there is an ocean of misconceptions and a galaxy of misinformation in that comment.
>> The complexity of language looks finite.
I have to ask what your definition of "language" is, because by formal language theory the only languages that are finite are, well, finite languages- which are simpler than regular languages, which in turn can be described by a regular expression.
And I'm pretty sure that nobody has ever managed to write a regular expression that can describe all of human language.
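To make that point concrete (my own sketch, not part of the original comment): regular expressions cannot count matched pairs, which is exactly what nested natural-language structures like center embedding ("the rat the cat chased ate") require. The classic abstraction is the language a^n b^n, which is context-free but not regular:

```python
import re

# The closest a regular expression can get to a^n b^n is "some a's
# then some b's" - it has no memory to enforce equal counts.
pattern = re.compile(r'a+b+')

print(bool(pattern.fullmatch('aaabbb')))  # True - balanced string matches...
print(bool(pattern.fullmatch('aaab')))    # True - ...but so does an unbalanced one

# Recognizing a^n b^n correctly needs counting, i.e. more than a
# finite automaton (equivalently, more than a regular expression):
def is_anbn(s: str) -> bool:
    n = len(s) // 2
    return len(s) % 2 == 0 and s == 'a' * n + 'b' * n

print(is_anbn('aaabbb'))  # True
print(is_anbn('aaab'))    # False
```

Since human languages exhibit at least this kind of nesting, no single regular expression can describe them, which is the point being made above.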
>> Translation between all the European languages works fairly well. Asian languages, not so much yet.
In fact, translation between "all the European languages" does not work at all well. For limited domains, and for language pairs with lots of example translations, it works alright. Stray from those assumptions and the results quickly descend into incomprehensible gibberish.
>> We're way ahead of where things were in the "AI winter", 1985-2005. This time the startups make money and do useful things.
Making money is a measure of progress of the industry- not of the science.
>> AI used to be tiny - about 20-50 people at MIT, CMU, and Stanford.
That was true back in the '50s, when the field began. In the '80s and '90s, at the height of GOFAI and expert systems, there were several thousand researchers working on AI.
For example, the 5th Generation Computer project was a massive effort by the Japanese Ministry of International Trade and Industry, which encompassed pretty much all of that country's computer industry and came with huge loads of funding- not to mention the reaction in the West, which panicked thinking that the Japanese were about to do to its computer industry what they had done to its car industry.
You might have heard of the failure of the project- but it took thousands of people a long time to fail. So, no, AI was not tiny, in any sense of the word, for any time after the first years of its birth.
>> (Me: MSCS, Stanford, 1985. I met most of the greats of classical logic-based AI. Trying to hammer the real world into predicate calculus just doesn't work. The expert systems guys were in denial big-time about this.)
Last week I met a guy with a background in theoretical physics, who works in high-performance computing. Am I now qualified to contribute an opinion about theoretical physics and high-performance computing?