Fiber optic cable actually got better over time because multiplexing kept improving. You could take dark fiber laid in the late 90s, put new transceivers at the ends, and get far more bandwidth than anyone expected when the cable was laid.
Internet traffic kept growing throughout the dotcom bubble. That valuations got ahead of themselves didn't mean that there wasn't something real driving the hype.
Fundamentally, there is no ceiling on the utility of "compute". There will always be some use for it; the current crop of AIs is just the first practical, scalable workload for this flavour of large-scale GPU cluster.
If chatbots turn out to be pointless for anything other than translation, copy-editing, and developer assistance, we can still use the GPUs for robotics. E.g.: farmers could have robots in the fields with cameras and arms for picking fruit, but outsource the machine-vision and motion-control tasks to a data centre. That makes a lot more sense than lugging batteries and GPUs around literally "in the field", with the mud, heat, humidity, and vibration!
PS: As far back as the early 1990s I remember reading articles about "nobody needs 'X' upgrade", which was invalidated immediately, every time. I now have more computer power in my pocket than my first six or seven computers combined. I "found a use" for that "absurd" computer power. Not to mention that 5G to my own personal phone has bandwidth exceeding the country's entire international telecommunications bandwidth in 1990!
OpenAI's ChatGPT alone hit 500 million weekly active users in March; apparently they're closer to 800 million now. I guess they're still working out the monetization strategy, but in the worst case, just think of how Google makes its revenue off search.
The first one does; then prompt caching kicks in. It turns out many people ask similar questions. People who frequently ask complicated questions might have to pay extra; we can already see this playing out.
Also, most ChatGPT users have a "personalization" prefix in the system prompt (which contains things like the date/time), which would break caching of the actual user query.
The prompt has to be precisely the same for that to work (and of course now you need an embedding hashmap, which is its own somewhat advanced problem). I doubt they do that, especially given the things I've heard from API users.
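To make the distinction concrete, here's a minimal sketch (purely illustrative; not anyone's actual implementation) of an exact-match response cache keyed on a hash of the full prompt, which is exactly why a per-user prefix in the system prompt defeats it:

    import hashlib

    class ResponseCache:
        """Hypothetical exact-match cache: key = hash of the full prompt."""

        def __init__(self):
            self._store = {}

        def _key(self, system_prompt, user_query):
            # Any change to the system prompt (e.g. a per-user personalization
            # prefix or the current date/time) changes the hash -> cache miss.
            blob = (system_prompt + "\x00" + user_query).encode("utf-8")
            return hashlib.sha256(blob).hexdigest()

        def get(self, system_prompt, user_query):
            return self._store.get(self._key(system_prompt, user_query))

        def put(self, system_prompt, user_query, response):
            self._store[self._key(system_prompt, user_query)] = response

    cache = ResponseCache()
    cache.put("You are helpful.", "What is dark fiber?", "Unused fiber optic cable...")
    print(cache.get("You are helpful.", "What is dark fiber?"))         # hit
    print(cache.get("You are helpful. Date: 2025-11-10. User: Alice.",
                    "What is dark fiber?"))                             # None: prefix broke it

A semantic cache - the "embedding hashmap" - would instead embed the query and look up near-duplicates, which is the harder problem alluded to above.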
In the recent Sam Altman interview, he said the plan should be to keep burning fossil fuels to power the data centers running AI, because that's the path to fusion. Just like LLMs can help devs code 100x faster, they can do the same for nuclear engineers.
Fusion seems short-sighted though. Antimatter is 100% efficient. I personally think Sam Altman should be looking into something like an Infinite Improbability Drive, as it would be a better fit here.
The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade, so they should get a pass on the "haha they're saying that because they want to pander to Trump" accusations.
> The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade
I'm about as anti-AI as it gets - it has its uses, but it is still fundamentally built on outright sharting on all kinds of ethics, and that's just the training phase. The actual usage is filled with even more snake-oil salesmen and fraudsters, and that's not to speak of all the jobs that are going to be irreversibly replaced by AI.
But I think the AGI people are actually correct in their assumption - sometime in the next 10-20 years, the AGI milestone will be hit. Most probably not on an LLM basis, but it will be hit. And societies are absolutely not prepared to deal with the fallout - quite the contrary; the current US administration in particular is throwing us all in front of the multibillionaire wolves.
> sometime in the next 10-20 years, the AGI milestone will be hit
You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.
Second, if AGI means that ChatGPT doesn't hallucinate and has a practically infinite context window, that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention. We'll adapt just like we adapted to using LLMs.
> You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.
Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination beyond looking at how exponential curves work.
> that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention.
A decent-enough AI, especially an AGI, will displace a lot of white-collar workers - creatives are already getting hit hard, and that is with AI still not being able to paint realistic fingers - and the typical "paper pusher" jobs will also be replaced by AI. In "meatspace", i.e. robots doing tasks that are _for now_ not achievable by robots (say, because the haptic feedback is lacking), there has been pretty impressive research over the last few years. So a lot of blue-collar/trades jobs are going to go away as well, once the mechanical bodies are linked up to an AI control system.
> We'll adapt just like we adapted to using LLMs.
Yeah, we just gave the finger to those affected. That's not adaptation, that's leaving people to be eaten by the wolves.
We're fast heading toward a select few megacorporations holding all the power when it comes to AI, and everyone else will be serfs or outright slaves to them, instead of the old sci-fi dreams where humans would be able to chill out and relax all day.
> Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination beyond looking at how exponential curves work.
Only assuming there is something to be found apart from the imagination itself. We can imagine AGI easily, but that doesn't mean it exists - and even if it does, that we will discover it. By that logic - we want something, and we spent a lot of compute resources on it - the success of a project like SETI would be guaranteed based on funding alone.
In other words, there is a huge gap between something we are sure can be done but that requires a lot of resources, like a round trip to Mars - where we can even speculate it will happen within 10-20 years (and still be wrong by a couple of decades) - and something we merely hope to discover based on the number of GPUs available, without the slightest indication of success other than funding and our desire for it to happen.
The thing is, for economic devastation you don't (necessarily) need an actually "general" intelligence that's able to do creative tasks - and the ethical question remains whether "creative humans" aren't just a meat-based PRNG.
A huge amount of public-service and corporate clerkwork would be served well enough by an AI capable of understanding paperwork and applying a well-known set of rules to it. Take a building permit application: an AI replacing a public servant has to be able to actually read a construction plan, cross-reference it with building codes and zoning, and check the math (e.g. statics). We're not quite there yet, with an emphasis on the yet - in particular, at the moment even AI compositions with agents calling specialized models can't reliably detect when they don't have enough input or knowledge, and just hallucinate.
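As a purely hypothetical sketch of that requirement (every name here is invented for illustration): the point is that an automatable permit checker needs an explicit escalation path for missing input, rather than a guess:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PermitApplication:
        construction_plan: Optional[str]
        zoning_class: Optional[str]
        statics_report: Optional[str]

    def plan_meets_code(plan, zoning):
        return True  # stub: a specialized model / rule engine would go here

    def statics_check_out(report):
        return True  # stub: re-run the structural math

    def check_permit(app):
        missing = [f for f in ("construction_plan", "zoning_class", "statics_report")
                   if getattr(app, f) is None]
        if missing:
            # The hard, unsolved part is reliably taking THIS branch
            # instead of hallucinating an answer from incomplete input.
            return f"escalate to human: missing {missing}"
        ok = (plan_meets_code(app.construction_plan, app.zoning_class)
              and statics_check_out(app.statics_report))
        return "approve" if ok else "reject"

    print(check_permit(PermitApplication("plan.pdf", None, "statics.pdf")))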
But once this fundamental issue is solved, it's game over for clerkwork. Even assuming the Pareto principle holds (i.e., the first 80% is easy and only the remaining 20% is tough), that still cuts 80% of employees and, with them, the managerial layers above. In the US alone, about 20 million people work in public service. Take 50% of that (to account for jobs that need a physical human, such as security guards, police and whatnot), which gives 10 million clerkwork jobs; take 80% of that and you get 8 million unemployed people in government alone. There's no way any social safety net can absorb that much of an impact - and as said, that's government alone. The private sector employs about 140 million people; do the same calculation for that number and you get 56 million people out of a job.
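Spelling out that back-of-envelope math, using the comment's own assumptions:

    public_sector  = 20_000_000    # US public-service employees (comment's figure)
    private_sector = 140_000_000   # US private-sector employees (comment's figure)
    clerk_share = 0.5              # assume half are desk/clerk jobs
    automatable = 0.8              # assume the "easy 80%" gets automated

    print(int(public_sector  * clerk_share * automatable))   # 8,000,000
    print(int(private_sector * clerk_share * automatable))   # 56,000,000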
That is what scares me, because other than the "AI doomers", no one on the Democratic side seems to have that issue even on their radar, and the Republicans want to axe all regulation of AI.
> without the slightest indication of success other than funding and our desire for it to happen
The problem is, money is able to brute-force progress. And there is a lot of money floating around in AI these days, enough to actually make progress.
Ah, I see your point, and I agree. We've seen how it plays out in places where greedy entrepreneurs brought in waves of immigrants to do sub-minimum-wage work, and what effects that had on society, so I agree about the consequences.
However, at least for LLMs, progress has slowed down considerably, so we're now at the point where they are a useful extension of a toolkit rather than a replacement. Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt. (With a huge disclaimer: if history has taught me anything, it is that all predictions are about as useful as a coin toss.)
> Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt.
Yeah, but for that, politicians need to prepare as well, and they don't. All many of today's politicians care about is getting reelected, or at the very least lining their pockets. In Germany, we call this "nach uns die Sintflut" [1], roughly translated as "after us, the flood".
Here in Germany, we at least have set up programs to phase out coal over decades, but that was for a few hundred thousand workers - not even close to the scale that's looming over us with AI.
And who will buy the products and services of these "employers" when nobody has a job?
See, you can keep adding middle layers, but eventually you'll find there's no one with any money at the bottom of this pyramid to prop the whole thing up.
When the consumer driven economy has no critical mass of consumers, the whole model kinda goes belly up, no?
I really don't mean this in any negative way, but I find it fascinating how wide the range of opinions and attitudes on this matter is. I find it _so_ hard to imagine holding this view myself.
The amount of things I've learned by asking very specific, technical questions of ChatGPT (mostly with web search turned on, though sometimes it's not even necessary) - things I can immediately verify and/or use, such as small bash commands/scripts, visualizations, diagrams - the value of that alone is certainly in the hundreds of dollars per month. These are things I would never have learned because they are buried somewhere among 30 answers/comments, sometimes pointing to 20 more terribly-hard-to-read pages or manuals riddled with (for my question) irrelevant content, somewhere in the first page of web search results. Maybe it's an attention-span question? I certainly won't spend more than 10 minutes reading anything unless it's interesting or required for my job in the most extreme sense of the word (books on quantum mechanics, general relativity, and topology all fall into the former category - bash and pandas documentation fall into neither).
I'm convinced I've saved _at least_ low thousands per week by using coding assistants (mostly Claude Code in my case, but that's personal and likely to change at some point), as evidenced by the amount of work I'm able to finish, get paid for, and maintain. I'm not vibe coding, mind you - most of the time, I have an almost complete mental model of what I want after a couple of hours of thinking, and the only thing left to do is type the code, at which point I'd previously feel bored, since the fun part (the thinking) was over.
Edit: I have 20 years of experience with code, 15 in the industry as a SWE (been coding since I was 13)
I have had no trouble finding solutions to coding problems by normal searches very quickly. The solution is typically right there in API documentation or one of the first few results.
But I suppose that answers how they may make LLMs profitable. They could cripple or even eliminate normal search until paying for LLMs is the only option.
I find it's either people being flippantly dismissive because it's something they've decided to make part of their personality - refusing to consume any information that might challenge their opinion - or people who massively lack imagination and creativity, both about what it can do and about what it could do in 1, 5, or 20 years.
AI: I'll get right on it! But before I do, have you had dinner yet? KFC's new finger-licking MEGA feast will bust your hunger for only $19.95. Click here to order.
Me too! Except I wondered why my non-tech wife had stopped complaining about ChatGPT limits, and it turns out she has quietly been subscribing to the Pro plan. It's happening.
I have used both free and paid Google Gemini. I'm as cheap as can be. I'm back on the free tier because it's good enough, and presumably getting better.
If they took away the free, I'd pay $20 and be thankful they kept it at $20.
I love doing things myself... I mow my lawn, change my oil, change my water heater, and try to never use frameworks or libraries. But not using LLMs seems insane. If they weren't free, you wouldn't use them?
The majority of the time I’ve used an LLM, it’s failed to do the task properly. The times that it worked are the times that a bit of Googling would have solved. I’m not OP, but I’m not at the point where I would spend money on an LLM either.
I have almost never had an LLM fail to do the task I wanted. Either I'm asking it very easy things (not really - you definitely couldn't Google the entirety of what I wanted, although obviously you could Google each tiny subsection), or it's just important to scope what you're asking for well.
But as a general statement, you can't just Google a comprehensive summary about beta-glucans from chanterelle mushrooms - dosages, cooking methods, immune benefits and processes - and get a 10-minute read about exactly what you asked for. But with Gemini deep research you can.
I’ve started with tasks that I understand at an expert level myself. The LLM has invariably gotten surface-level work correct and subtle details wrong. Given those errors, I’m not willing to use them for things I don’t understand.
LLMs are good at generating text that sounds authoritative. They’re great for creative writing to share for a laugh with friends. I’m not at the point where I’m willing to use them for important work, let alone pay for them.
(I’ve yet to try them as a coding assistant, though. Maybe that’s the missing link.)
I've done this too, and sure, it's not perfect. But it's better than the average person in my industry. So, 1, that means a decent chunk of my industry could use it and be about as good as they are now.
2, unless I magically have a way of talking to an expert in HVAC repair, and not just an idiot in HVAC, I can diagnose my HVAC unit with AI just fine. And I did. And no, it wasn't as simple as "well duh, every post online says it's the large capacitor".
Net net: Morgan Stanley is all excited about its role in financing the buildout, but dismissive of the idea that these efforts might not generate enough revenue to pay for the centers, and it sidesteps the current lack of power for them.
If they arrange financing but don't hold the loans themselves, they get paid without assuming any risk, yes?
> If they arrange financing but don't hold the loans themselves, they get paid without assuming any risk, yes?
Yes, that's correct. It's typically how investment banks operate; their main business is facilitating transactions. They'll turn around and sell the loans to hedge funds and private investors.
This obviously means they have an incentive to encourage buildouts and to downplay the risks of the loans.
I was surprised that, among the energy solutions to power the data centers, the FT journalist proposes only nuclear and gas, but not the obvious photovoltaics + storage.
A lot of people are viewing this through the lens of LLM/AI technology being used to do consumer facing things, and are wondering where the cost/benefit is. But this technology race feels like the Manhattan project, and should be viewed through the lens of a technology that will be used to fight the next war. That's why limitless resources are being thrown at it. He who controls the spice etc.
Assuming 10 billion humans all using the tech, 10-year amortization, and zero operating cost, that requires $30 per user. Not sure if that number accounts for interest either.
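Working those assumptions backwards (my reading, since the comment doesn't state the underlying capex figure: "$30 per user" as an annual cost, given the 10-year amortization), the implied total spend is about $3 trillion:

    users = 10_000_000_000     # 10 billion people
    years = 10                 # amortization period
    per_user_per_year = 30     # dollars (reading "$30 per user" as annual)

    implied_capex = users * years * per_user_per_year
    print(f"${implied_capex:,}")   # $3,000,000,000,000 -- before any interest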
Yeah, the author could have used "when" instead of "if".
The most realistic success scenario would be rampant inflation eating the debt.
Given that datacenter capex spending is carrying the US economy at the moment, a better question is: what will replace data center spending when we stop building?
unpopular take here, no doubt:
arguably, 'need' might not be the best verb for what motivates the extremely short-sighted 'make number go up' race (try instead a noun that rhymes)
The printing press was a case of a technology genuinely different from everything that came before it, completely ushering in a new era inconceivable to people of the previous one.
Ah. I see your confusion. I was referencing the phrase in terms of financial markets and irrational exuberance, not technology phase shifts.
We had books both before and after the adoption of the printing press; we could just produce more of them, with lower-quality materials.
The areas where blockchain-cum-AI grifters think they will succeed are art - which is deliciously ironic - and thoughtful work. The areas where current AI actually delivers value are the margins few enjoy: filler emails, blockbuster movies that few go to theaters to see, compliance checklists, boilerplate code, and repetitive low-value workflows.
Still, it will be interesting to see where else we can build margins. AlphaFold is a great example of where GenAI can do well.
If you look at all the hyperscalers' earnings reports, they all report their data centers as capacity-constrained, especially for GPUs. Couple this with survey-based reports [1, 2] showing that so far only a minority of people use AI regularly for work, and even then for only a fraction of their workdays. This indicates huge demand with even bigger potential for growth.
This is why there is this huge gold rush for infrastructure, why these players have such sky-high valuations, and why investors are scrambling to pour in even more money. The focus on AGI and ASI is a distraction and only relevant to the frontier model labs (more on them later). Even if AGI/ASI is never achieved and all model development were frozen today, we'd have decades of growth ahead of us.
The only risk is that all these productivity gains are a mirage (cue that METR paper) and at some point people will realize it and the whole scheme will come tumbling down. However, studies like [1] contradict that premise and are already finding productivity gains that match various other RCT-based studies:
> This implies that each hour spent using genAI increases the worker's productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell'Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023). Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
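One way to reconcile the per-hour and aggregate figures (my inference, not spelled out in the quote): the aggregate gain is roughly the per-hour gain weighted by the share of work hours in which genAI is actually used:

    per_hour_gain  = 0.33    # boost during the hours genAI is actually used
    aggregate_gain = 0.011   # economy-wide estimate from the quoted study

    # Share of all work hours that would have to involve genAI
    # for the two numbers to be consistent:
    print(f"{aggregate_gain / per_hour_gain:.1%}")   # ~3.3%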
(Not to mention my own anecdotal experiences and the rising frequency of people posting about their successes with tools like Claude Code on HN and other social media.)
To me there is virtually no risk that this data center capacity will go unused. There probably is a bubble, but only as far as the frontier labs are concerned: given that models costing millions to train still get commoditized rapidly, it is not clear whether they can capture enough of the value their models produce to sustain their valuations.
But those models require infra to run, and that's exactly what the hyperscalers are stockpiling. The frontier labs will need to get in on that game to survive long term.