I think the idea is that it's like the difference between searching for new cancer treatments and bringing a promising treatment to market: if it takes the same kind of expertise to do both, then spending more time on one means less time on the other.
Still no idea if that's true of AI, but I found a couple of interesting cites:
Patrick Winston, director of MIT's Artificial Intelligence Laboratory from 1972 to 1997, echoed Minsky. "Many people would protest the view that there's been no progress, but I don't think anyone would protest that there could have been more progress in the past 20 years. What went wrong went wrong in the '80s."
Winston blamed the stagnation in part on the decline in funding after the end of the Cold War and on early attempts to commercialize AI. But the biggest culprit, he said, was the "mechanistic balkanization" of the field, with research focusing on ever-narrower specialties such as neural networks or genetic algorithms. "When you dedicate your conferences to mechanisms, there's a tendency to not work on fundamental problems, but rather [just] those problems that the mechanisms can deal with," said Winston.
http://www.technologyreview.com/computing/37525/
The article doesn't go into detail about how the early attempts at commercialization contributed to the problem. This site does, but seems less trustworthy:
In the early 1980s, dark clouds also settled over the MIT Artificial Intelligence Lab as it split into factions by initial attempts to commercialize Artificial Intelligence (AI). In fact, some of MIT's best White Hats left the AI Lab for high-paying jobs at start-up companies.
http://computer.yourdictionary.com/golden-age-era
So it sounds like some smart people in academia in the '80s thought that some stones were left unturned, or turned too slowly, and that part of the problem was a shift toward making money from existing discoveries. According to that AI winter link, the tech mostly wasn't ready for prime time yet, which presumably made it even harder to raise funds for new research.
Winston is still teaching and doing research at MIT. In fact I took two of his classes a couple years ago, and he's exactly who I had in mind when I mentioned researchers with experience and knowledge from decades past continuing to work :) Even if some previous research wasn't fully fleshed out, we can be confident that it hasn't been forgotten.