I am not saying this to be sarcastic - the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.
It's not good enough to just say "Oreo CEOs say we need more Oreos."
There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.
Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between given the kinds of bets that have been made.
AI companies don't have 20 years; they have at most 5 years to turn a profit.
They don't have time to wait for all the companies to adopt AI tooling at their own pace.
So they lie and try to manufacture demand. Well, the demand is there, but they have to manufacture FOMO so that it materializes now and not in 10 or 20 years.
This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.
The massive investment in power grids and data centers provides a permanent physical backbone that outlives any specific silicon generation. This infrastructure serves as a durable shell for the model design knowledge and chip architectural IP gained through each iteration. Capital is effectively funding a structural moat built on energy access and engineering mastery.
Seems like there's a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we'll have the buildings and the modest power grid updates (which are largely paid for by taxpayers anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.
The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.
And you imagine these incredibly expensive-to-operate, environmentally damaging, highly specialized, years-outdated GPUs will trigger some sort of technological revolution that won’t be infinitely better served by the shiny new GPUs of the day that will not only be dramatically more powerful, but offer a ton more compute for the amount of electricity used?
The AI use of GPUs didn’t stem from a glut of outdated, discarded units with nearly no market value. All of those old discarded GPUs were, and still are, worthless digital refuse.
The closest analog I can think of to what you're referring to is cluster computing with old commodity PCs, which got companies like Google and Hotmail off the ground… for a few years, until they could afford big-boy servers. Now all of those, and most current PCs on the verge of obsolescence, are also worthless digital refuse.
The big difference is that Google et al. chose those PC clusters because they were cheap commodity parts right off the bat, not because they were narrowly scoped specialty hardware that collectively cost hundreds of billions of dollars.
Your supposition fails to account for our history with hardware in any reasonable way.
Focusing exclusively on the physical decay and replacement cycle of hardware is a classic case of tunnel vision. It ignores the fact that the semiconductor industry’s true value lies in the evolution of manufacturing processes and architectural design rather than the lifespan of a specific unit. While individual chips eventually become obsolete, the compounding breakthroughs in logic and efficiency are what actually drive the technological revolution you are discounting.
Tunnel vision is ignoring the astonishing amount of money and environmental resources our society is dumping into these very physical, very temporarily useful chips and their housing because… of what we learn by doing that. If we had put 1/100th of that money into research instead, we'd be further along.
This isn't a normal tech expenditure - the scale of this threatens the economy in a serious way if they get it wrong. That's 401ks, IRAs, pension plans, houses foreclosed on, jobs lost, surgeries skipped… If we took a tiny fraction of this race to hypeland and put it toward childhood food insecurity, we could be living in a fundamentally different-looking society. The big takeaway from this whole ordeal has nothing to do with semiconductors - it is that rich guys playing with other people's money, singularly focused on becoming king of the hill, are still terrible stewards of our financial system.
Dismissing massive capital expenditure as "hypeland" ignores the historical reality that speculative bubbles often build the physical foundation for the next century. The Panic of 1873 saw a catastrophic evaporation of debt-driven capital, yet the "worthless" railroads built during that frenzy remained in the ground. That redundant, overbuilt infrastructure became the literal backbone of American industrialization, providing the logistics required for a global economic shift that far outlasted the initial financial ruin.
Divorcing research from "learning by doing" is a recipe for a bureaucratic ivory tower. If you only funnel money into pure research without the messy, expensive, and often "wasteful" reality of large-scale deployment, you end up with an economy of academic metrics rather than industrial power.
The most damning evidence against the "research-only" model is the birth of the Transformer architecture. It did not emerge from an ivory tower funded by bureaucratic grants or academic peer-review cycles; it was forged in the fires of industrial practice.
History shows that a fixation on immediate social utility or "rational" cost analysis can be a strategic trap. During the same era, Qing Dynasty bureaucrats employed your exact logic, arguing that the astronomical costs of industrialization and rail were a waste of resources better spent elsewhere. By prioritizing short-term stability over "expensive" technological leaps, they missed the industrial window entirely. Two decades later, they faced an industrialized Japan in 1894 and suffered a total collapse. The "waste" of one generation is frequently the essential infrastructure of the next.
How much capital was wiped out for it to be cheap after the bust? Someone is going to eat the exuberance loss in the near term, even if there is long term value.
It's a "Motte and Bailey" system [0], where the extreme "AI will do everything for you" claim keeps getting thrown around to try to get investors to throw in cash, but then somehow it transmutes into "all technologies took time to mature stop being mean to me."
To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big-claims, and another responds to perceived criticism of their lesser-claim.
> the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years
An even bigger problem is that people keep listening to them even after they say rationally implausible things. When even Yann LeCun is throwing up his hands and saying "this approach won't work," it's pretty bad.
> the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or boris saying coding is solved and that 100% of his code is written by AI.
I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude. It's an entirely different world from GPT-4.0. This is serious intelligence.
Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
Researchers looked at GPT-3 in 2023 and saw “sparks of AGI”. The saying “feel the AGI” became widespread not long after, if I’m remembering right. We’ve been saying AGI is right around the corner for a while now. And of course, if you predict the end of the world every day, you’ll eventually be right. But for the moment, what we have is an exceptionally powerful coding assistant that can also speed up entry-level work in various other white collar industries. That is earth-shattering, paradigm-shifting. But given how competitive and expensive the AI game has become, that is not enough, so it needs to be “superintelligence” - and it’s just not.
Ah, that’s my mistake. Thank you. I saw 2023, I thought GPT-3. Even still, people talk about GPT-4 today like it was a quaint little demo. It was a magnificent achievement, it scared the pants off of a lot of people, and sparked a new round of “is AI conscious?” discourse.
What does that mean? By what metric do you measure "AGI", whatever that means? Industry definitions are incredibly vague, perhaps intentionally so, with no benchmarks to define how a model, harness, or other technology might achieve "AGI". They have no intelligence, and can't even reason that you need to take your car to the car wash to have it washed[0].
If somehow Claude became sentient that would be sci-fi. One day it’s wrangling CSS and Spring Boot Controllers and the next it’s telling you opinions it developed through its own experiences on programming languages. Not sure that’s on the near horizon, but it’s definitely impressive technology.
> Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?
Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.
Advances to round 2/7 to be able to do a powerpoint presentation so that companies will, at best, be forced to put some pointless label as a legal loophole, that consumers will promptly ignore because everyone will have it and it'll be meaningless.
Because that would mean AI isn't going to replace entire industries, which is the only way to justify the, not billions, but trillions in market value that AI leaders keep trying to justify.
Regardless of the value of Brexit, people tend to be biased against things that have happened or are around them when things are bad.
Like when people turn against a president if the economy isn't doing well, regardless of whether the alternative candidate would've been better.
This also isn't an issue that's being campaigned on. If there were another vote to join the EU, and people got flooded with anti-EU messaging specifically targeted at their demographic, I'd bet that number would drop.
The EU always has been a scapegoat for incompetent politicians. Now the EU is out of the picture, there’s no-one left to blame. And we can clearly see that the EU, for all its faults, is a very beneficial institution for all involved.
People may agree or disagree on Brexit. But my god, your sentence sums up what is happening in the UK. Without anyone left to blame, whether it is Russia, China, the US, or the EU, the UK has simply failed to strategically plan or execute on anything.
And there are plenty of people on HN who would say otherwise and insist the UK is fine.
> The EU always has been a scapegoat for incompetent politicians.
You also see this in countries still in the EU, and it will keep happening as long as we lack true integration: it is always easier to blame the EU for your own failings, since it is harder for the EU to fend for itself in national politics.
One recent example in my country is nitrogen deposition. Long ago, countries committed to keeping certain nature reserves in good health (or improving them when necessary). Then many subsequent governments chose the side of the farmers to the detriment of nature. Now many reserves' soil quality is in a terrible state, and the courts have told the government to stop and fix the problem. Then we got a bunch of right-wing populist governments that have wasted many more years by blaming the EU and questioning the scientific methods for measuring deposition, while it has been abundantly clear for a long time what actually needs to be done: buy out farmers.
At any rate, this constant undermining and blaming of the EU has the effect you'd expect: it destroys trust in the EU. Ironically, the saving grace now seems to be the aggressor and the lost ally. More people realize that we can't act in an increasingly hostile world as small and mid-sized countries.
The core issue with nitrogen deposition in the Netherlands is that the country chose to protect types of nature that are impossible to maintain in a densely populated country of nearly 20 million people. Instead, it could have focused on protecting and developing habitats that are better suited to higher nitrogen levels.
For example, converting heaths and meadows (which are unnatural for a delta anyway) into food forests might have increased carbon capture, reduced nitrogen-related pressures, and supported biodiversity in a way that aligns more realistically with achievable environmental goals.
Right now they're paying lip service to goals that are impossible to achieve; even if all economic activity were cancelled today, it wouldn't make a dent.
I've seen instances where this is used as an excuse for what is, ostensibly, a trick to dismiss people using something that sounds vaguely professional. Like when doctors say they don't want to do additional X-rays because of the risk of radiation exposure, never mind that if one comes out slightly blurry they'll ask to redo it, or that if you're cautious about it initially they'll tell you it's no big deal and there's more radiation in a cluster of bananas.
Biopsies, radiation from additional scans, surgery, treatment for cancers that would not have developed further.
A lot of potential harms are at a societal level as well—from a public health perspective, if everyone starts having regular MRIs that produce incidental findings which require followup, you’re suddenly tying up lots of resources that would otherwise be available to actually-sick people. A person with symptomatic problems whose treatment is delayed because they can’t get an appointment because the specialists are booked full with incidental findings, that person is indirectly harmed by this.
The radiation from CT scans is not especially concerning at an individual level when there is a compelling reason for it, but, if we’re suddenly doing tons more to investigate incidental MRI findings, there may well be a point where those scans are causing a significant amount of cancer overall—a recent study suggested, I believe, that CT scans may be responsible for 5% of cancers already.
Like when someone has an anaphylactic reaction to the contrast dye for their CT?
Contrast-induced nephropathy?
Gadolinium accumulation in the brain doesn't sound good for you...
Although I think this argument is usually talking about the risks of the resulting procedures (eg an injury or complication related to a biopsy done for a finding on imaging).
AI generates a solution that's functional, but at a 70% quality level. Then it's really hard to make changes, because it feels horrible to spend an hour or more making minor improvements to something that was generated in a minute.
It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.
And if it functionally works, then why bother?
Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.
Isn't that explicitly against the TOS? I feel like Anthropic brought out the ban hammer a few days ago for things like opencode, because they weren't using the APIs but the Max subscriptions, which are pretty much only allowed through things like Claude Code.
Yes, but there's a really large number of users who don't want to have to set up VS Code, Git, TeX Live, and LaTeX Workshop just to collaborate on a paper. You shouldn't have to become a full-stack software engineer to write a research paper in LaTeX.
Wait a minute, can we at least wait until this dethrones Next.js before making such claims?