Dot-com proved you can be right about the revolution and still lose everything on misallocated bets. Add in the risk that algorithmic breakthroughs make current infrastructure obsolete, and suddenly those $400B data centers look a lot like pets.com.
LLMs are a nice trick, but they're a one-trick pony, and the sheer quantity of data centers just does not add up. The costs are excessive, and I can't see people paying enough for these services to justify all of this. Whatever goes into these centers shrinks as the technology improves, and it depreciates quickly. And for most of the LLM stuff there are no error bars, so the results are always suspect.
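To make the error-bar point concrete, here's a rough sketch of the kind of thing you currently have to bolt on yourself: sample the model repeatedly and treat the agreement rate as a crude confidence measure. This is my own toy example, not any vendor's API; `query_model` is a hypothetical stand-in for a real LLM call.

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; here it just
    simulates a model that answers inconsistently."""
    return random.choice(["42", "42", "42", "41", "43"])

def answer_with_error_bar(prompt: str, n_samples: int = 20):
    """Sample the model repeatedly and return the majority answer
    plus the fraction of samples that agree with it -- a crude
    stand-in for a confidence interval."""
    answers = [query_model(prompt) for _ in range(n_samples)]
    best, hits = Counter(answers).most_common(1)[0]
    return best, hits / n_samples

if __name__ == "__main__":
    answer, agreement = answer_with_error_bar("What is 6 * 7?")
    print(f"answer={answer}, agreement={agreement:.0%}")
```

Even this only measures self-consistency, not correctness, which is sort of the point: the uncertainty quantification isn't built in.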
Hinton suggests that the real power and danger of AI systems will arise when they can talk to each other. Are we seeing that yet?
Don't get me wrong: I know this is not a simple question. But data in some "vectorized" form may be swappable between systems, even ones with NDA requirements around the underlying data. Invoke Murphy's law.
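As a toy illustration of what "swappable vectorized data" could look like (purely my own sketch; `embed` here is a placeholder, not a real model): one side shares only embedding vectors, the other searches against them without ever seeing the raw text. Whether that actually keeps you on the right side of an NDA is exactly where Murphy's law comes in, since embeddings can sometimes be partially inverted back toward the original text.

```python
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Toy deterministic 'embedding' -- a placeholder for whatever
    model each system actually runs."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# System A: embeds its private documents and exports only the vectors.
private_docs = ["contract draft v3", "pricing memo", "internal roadmap"]
shared_vectors = np.stack([embed(d) for d in private_docs])

# System B: queries with its own embedding; it never sees A's text,
# only cosine similarities against the shared vectors.
query_vec = embed("roadmap for next quarter")
scores = shared_vectors @ query_vec
print("best match index:", int(np.argmax(scores)), "score:", float(scores.max()))
```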