Hacker News

Some researchers have proposed using, instead of the term "AI", the much more fitting "self-parametrising probabilistic model" or just "advanced auto-complete" - that would certainly take away the hype-inducing marketing PR.
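To make the "advanced auto-complete" framing concrete: stripped of scale, the mechanism is just predicting the most probable next token given context. A minimal sketch, using a made-up toy corpus and a bigram count rather than anything resembling a real LLM:

```python
# Toy illustration (not an LLM): "advanced auto-complete" reduced to its
# bare mechanism -- picking the most probable next token given context.
# The corpus and model here are invented for demonstration only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows each context word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent next word after `word` (greedy decoding)."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else ""

print(autocomplete("the"))  # "cat" follows "the" most often in this corpus
```

An LLM replaces the bigram table with a neural network over billions of parameters, but the decoding loop - sample a likely continuation, append, repeat - is the same shape.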




That’s like arguing that washing machines should be called rapid-rotation water agitators.

It’s the result that consumers are interested in, not the mechanics of how it’s achieved. Software engineers are often extraordinarily bad at seeing the difference because they’re so interested in the implementation details.


The problem is that intelligence isn't the result, or at the very least the ideas that word evokes in people don't match the actual capabilities of the machine.

Washing is a useful word to describe what that machine does. Our current setup is like if washing machines were called "badness removers," and there was a widespread belief that we were only a few years out from a new model of washing machine being able to cure diseases.


Arguably there isn't even a widely shared, coherent definition of intelligence: to some people, it might mean pure problem solving without in-task learning; others equate it with encyclopedic knowledge, etc.

Given that, I consider it quite possible that we'll reach a point where ever more people consider LLMs to have reached or surpassed AGI, while others still consider them only "sufficiently advanced autocomplete".


I'd believe this more if companies weren't continuing to use words like reason, understand, learn, and genius when talking about these systems.

I buy that there's disagreement on what intelligence means in the enthusiast space, but "thinks like people" is pretty clearly the general understanding of the word, and the one that tech companies are hoping to leverage.


The defining feature of true AGI, in my opinion, is that the software itself would decide what to do and do it without external prompts beyond environmental input.

Doubly so if the AGI writes software for itself to accomplish a task it decided to do.

Once someone has software like that, not a dog that is sicced on a task, but a bloodhound that seeks out novelty and accomplishment for its own personal curiosity or to test its capabilities, then you have a good chance of convincing me that AGI has been achieved.

Until then, we have fancy autocomplete.


What about letting customers actually try the products and figure out for themselves what it does and whether that's useful to them?

I don't understand this mindset that because someone stuck the label "AI" on it, consumers are suddenly unable to think for themselves. AI has been used as a marketing label for decades, yet only now is it taking off like crazy. The word hasn't changed - what it's actually capable of doing has.


> What about letting customers actually try the products and figure out for themselves what it does and whether that's useful to them?

Yikes. I’m guessing you’ve never lost anyone to “alternative” medical treatments.


Not to mention ChatGPT-induced suicide ideation.

Please define intelligence

I'd be mad if washing machines were marketed as a "robot maid"

"Washer" and "dryer" are accepted colloquial terms for these appliances.

I could even see the humour in "washer-bot" and "dryer-bot" if they did anything notably more complex. But we don't need/want appliances to become more complex than is necessary. We usually just call such things programmable.

I can accept calling our new, over-hyped, hallucinating overlords chatbots. But to be fair to the technology, it is we chatty humans doing all the hyping and hallucinating.

The market capitalisation for this sector is sickly feverish — all we have done is to have built a significantly better ELIZA [1]. Not a HIGGINS and certainly not AGI. If this results in the construction of new nuclear power facilities, maybe we can do the latter with significant improvement too. (I hope.)

My toaster and oven will never be bots to me. Although my current vehicle is better than earlier generations, it contains plenty of bad code and it spews telemetry. It should not be trusted with any important task.

[1] _ https://en.wikipedia.org/wiki/ELIZA


A woman from 1825 would probably happily accept that description though (notwithstanding that the word “robot” wasn’t invented yet).

A machine that magically replaces several hours of her manual work? As far as she’s concerned, it’s a specialized maid that doesn’t eat at her table and never gets sick.


19th-century washing machines were called washing or mangling machines.

They were not called maids, nor were they personified.


Machines do get "sick" though, and they eat electricity.

Negligible cost compared to a real maid in 1825. The washing machine also doesn’t get pregnant by your teenage son and doesn’t run away one night with your silver spoons — the upkeep risks and replacement costs are much lower.

They do and will randomly kill people

Mostly from dryers. I assumed it was mostly from failure to clean the lint, but the following link suggests that was the cause only 27% of the time.

https://www.nfpa.org/education-and-research/research/nfpa-re...

In the table from the PDF link, failure to clean was the only category that resulted in deaths.


Dawg what kind of washing machines are you using?

In 1825? Certainly not one that ran on electricity, much less something that had meaningful safety features.

I used to play with a Maytag washing machine motor. It had a single cylinder, ran on gasoline, and had a kick-start. It was from, IIRC, 1926.

The exhaust would have been plumbed to the outdoors, but other than that the expectation was that there would be a gas-fired engine running in the house while the washing was done.


Samsung?

In 1825, both electricity prices and replacement costs would have been unaffordable for anyone, though, because there was literally no price you could pay to get these things.

Shame we are in 2025 huh? Ask someone today if they accept washing machine as robot maid.

The point is that, as far as development of AI is concerned, 2025 consumers are in the same position as the 1825 housewife.

In both cases, automation of what was previously human labor is very early and they’ve seen almost nothing yet.

I agree that in the year 2225 people are not going to consider basic LLMs artificial intelligences, just like we don’t consider a washing machine a maid replacement anymore.


I get mad at semantic arguments that distract from creative output.

Aside from the obviously humorous content, the rest is useless allegory (I want a recipe, not a story, and I need code, not personal affection for software engineering) and no-true-Scotsman (no true adherent of my native language would call it a robotic maid!).

As social creatures humans are pretty repetitive.


Calling an LLM AI, or saying it will ever do AGI, is like arguing that a washboard and a tub can be called a washing machine. Salespeople are so extraordinarily bad at seeing the difference because they are only interested in making sales.

Not at all.

LLMs can appear intelligent until, as they often do, they say things no intelligent being would. Then they appear profoundly stupid.

Washing machines wash reliably. LLMs do not.

A machine will be intelligent when instead of producing false output it responds with “I don’t know” and can be trusted.


Businesses are interested in something that can work for them. And the way LLM-based agentic systems are going, they might actually deliver on "Automated Knowledge Workers". Probably not with full autonomy, but in teams led by a human. The human needs to tend the AKW, much like we do with washing machines and industrial automation machines.

Current "AI" is the manual washboard.

If it were the result people cared about, they'd be calling them AI. That's just an implementation detail.

The term "AI" didn't make sense from the beginning, but I guess it sounded cool and that's why everything is "AI" now. And I doubt it will change, regardless of its correctness.

John McCarthy coined the term "Artificial Intelligence" in the 1950s. I doubt he was trying to be cool. The whole field of research involved in getting computers to do intelligent things has been referred to as AI for many decades.

I used to tell laymen that machine learning was a lot more about artificial perception than artificial intelligence. Made sense before ChatGPT 3.5/4.

The proof of the Riemann hypothesis is [....autocomplete here...]

AI is intermittent wipers for words, and the two are completely tied, as the perfect test for AI will be to run intermittent wipers to everybody's satisfaction.

I prefer Tesla's approach to call their adaptive cruise control "FSD (supervised)".

AI (supervised).


I've used Claude for the last month - it's not auto-complete.

It's a nice name, fellow language-capable electrobiochemical autonomous agent.


