
Every disruptive technology creates and destroys jobs. That has been business as usual since the start of the Industrial Revolution, and even before. No need to panic, like the Luddites, yet.

Real AGI is the time to panic, because it forms an event horizon we can't see beyond. There may well be no place at all for humans in a post-AGI world.

But LLMs are still a long way from AGI. I don't expect to see AGI in my lifetime, despite the blistering pace of advancement.



Yep, you are a typical example. Your world (and mine) is safe, but let’s meet up and step into a Fortune 1000, non-FAANG office, hit someone up for a chat, and see if you still doubt that person can be replaced by GPT-4 right now. Try it.

I cannot see how, even after these stories, you don’t recognize this as a problem, or how it isn’t vastly different from the previous examples (horse drivers became car drivers, farmers became factory workers, etc.; there is no such switch now). We are limited only by the number of very clever people, since the rest can be replaced (we still need some robots for that, but those are coming).

When we reach AGI, then even the 130+ IQ people like you and me can start worrying. But that doesn’t mean there is no danger right now.


I don’t share your disdain for the average person, and I don’t see ChatGPT replacing people on a huge scale. It’s a helpful productivity tool, but I don’t see it replacing huge numbers of jobs. I worry more about autonomous driving when it comes to AI replacing humans. A LOT of people drive trucks, Ubers, taxis, buses, etc.

It’s perhaps more useful in my field than most (software) since the output is often highly structured text. But using it daily, it doesn’t even make me 2x more productive. A big deal, yes. Going to take my job, no. Going to lower salaries and reduce demand? Doubtful.

We’ve lived through massive disruptions already; computers and the internet are two examples. They weren’t cause for panic.


I don’t think the parent has disdain, but rather experience. I’ve had my eyes opened wide by getting to know some people at the bottom of the economy. Take a 50-year-old man who has basically no skills other than manual labor. There are people just above him who can push papers around, but don’t you dare change their workflow! The next level is people who can barely figure out instructions that to me are more than obvious (and that person works in tech; this happened today!). I know these people personally, and there are a LOT of them. I fear for their future.


Computers made people more productive, and the internet made trade far more efficient. AI makes my job far more efficient too (I work in software as well); most of my colleagues have been fired in the past months, because I can now do the work of 10+ people by using the ChatGPT API. That says something about my employer, but there are boatloads of similar companies. And my colleagues, some of whom are great humans, are just not very good at what they were hired for and, as such, are easy to replace by GPT-4 wielded by someone who does understand. Nothing to do with disdain; it’s just the group that will fall first. But then it’ll come for us. I like your optimism, I just don’t share it; this is far more of a change than many think.
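To give a concrete flavor of what "wielding the API" looks like, here is a minimal sketch using the OpenAI Python client; the task, prompt, and helper name are illustrative assumptions, not a description of my actual setup:

    # Minimal sketch: routing one unit of routine work through the API.
    # Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_reply(ticket_text: str) -> str:
        # Illustrative helper: triage a support ticket and draft a response.
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Triage this ticket and draft a polite reply."},
                {"role": "user", "content": ticket_text},
            ],
        )
        return response.choices[0].message.content

Wrap something like that in a loop over a queue and one person can supervise what used to be several people's routine workload.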

Sure, maybe not next year (although…), but driving your taxis, Ubers, and buses, and serving beers, might be the only jobs left for you and me. I am worried for my kids, not for me. But this is now within reach, while it seemed very far off last year. Deniers won’t save anything.


It’s not coming for me, because it can’t think. If anything, it makes the senior engineer more valuable, not less so. Writing actual code is usually the minority of my day. By Amdahl’s law, that bounds the maximum speedup to under 2x. In reality, less.
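To make the arithmetic concrete, here is a back-of-the-envelope sketch; the fractions are assumptions for illustration, not measurements:

    # Amdahl's law: if only a fraction p of the work is accelerated,
    # overall speedup is capped at 1 / (1 - p), no matter how fast the
    # accelerated part gets. The numbers below are assumptions.
    def amdahl(p: float, s: float) -> float:
        """Overall speedup when fraction p of the work is sped up by s."""
        return 1.0 / ((1.0 - p) + p / s)

    print(amdahl(0.4, 3.0))   # coding is 40% of the day, tool is 3x: ~1.36x
    print(1.0 / (1.0 - 0.4))  # cap with an infinitely fast tool: ~1.67x

Even an infinitely fast code generator can't push you past the cap set by everything else in the day.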

What on earth are you doing that you see a 10x speedup? That implies you spend more than 90% of your day writing code. That’s not possible even if you have no meetings and no teammates. I call bullshit. I’ll be generous and say you are exaggerating for effect rather than intending to lie about it. But either way, it’s beyond what’s possible.


Of course they can't be replaced. Can the AI's boss watch it working? Can the AI's boss see that it's physically in the office from 9 to 5?

Can the AI's boss get a promotion for growing a team of AI bots?

Can AI sit in meetings and give a sense of validation to the boss's decisions?

What kind of boss in an office job replaces his workers with an AI?


Replace the middle manager with AI. I bet it would be an improvement.


If one person is more competent and willing to work for less money than someone else, what is a business to do? Hire or retain the more expensive, less competent employee?

If a machine is more competent and costs less than a human worker, what is a business to do? Not use robots to assemble cars? Not use AI to do mechanizable routine work?


Yea, that’s what will happen. Not sure why you are responding to me? The GP seems to argue that this won’t be a problem, because in the past such shifts created more jobs, so they will this time too. I don’t believe that.


> I don’t believe that.

But why do you think this time is different?


I hope you stick to your point when your job is replaced by AI.


I don’t see it happening (software engineer).

I’ve been using AI through copilot and ChatGPT for a year now. It’s great, it’s helpful, it’s worth the money. It’s not replacing me. It doesn’t let a junior dev do my job. It doesn’t make me so productive that the company will cut back on their engineering team. We’re still hiring.


Bio says "middle-aged software developer," so they're likely just hoping it's far enough off that they'll be retired by the time it matters to them


That might still drag them down in retirement depending on where the profits of this revolution fall.


People in retirement live off investments, not wages. They’re exactly the class most likely to benefit.

Unless you don’t have retirement savings. But you’re not likely to be worse off; your pension doesn’t get automated away.


If the economic hit is large enough, money or investments or both might drop to zero. These things happen when there are huge societal changes.


We’re talking about different risks now, though, not about retirees being less likely to share in the profits from AI-based economic growth.


> No need to panic, like the Luddites, yet.

NED LUDD DID NOTHING WRONG

No, seriously, the Luddites weren't angry because tech would put them out of business. They were angry because they hadn't gotten to buy looms for themselves yet. The Luddites were the prototype of a union, and smashing machines was a tactic used to get business owners to the table for labor negotiations. One that was responded to with propaganda and the force of law.

Let's consider two possible worlds:

- The one in which artists have a fancy new tool to play with to produce better art

- The one in which publishers fire all artists so they can use the tool for themselves

So far we appear to be hurtling down the second path. I can point to artists who are using art generators as tools to improve their work, but almost all of the hype and discourse surrounding generative AI has been "finally we can fire all the artists and just have the art make itself." I think this is a wrong-headed move long-term[0], but so long as business people believe artists to be replaceable, they will be replaced.

Furthermore, this has wealth-concentrating effects. Directly, this is a transfer of wealth from regular artists to the few that get to stick around to bang on the machine when it breaks. Indirectly, this is a transfer of wealth away from both artists and publishers to the companies who are making the AI art generators. In the past few years, AI research has gone from open scientific collaboration to extremely closed-off data siphoning operations. OpenAI in particular reorganized itself into a "capped profit corporation" after Elon Musk stopped writing checks, and started closing things off in the name of safety[1].

The time to panic is right now, even if AGI is decades or centuries off, so that precedents are established as to who owns and benefits from that technology. Let me explain by analogy: did Richard Stallman know and understand in the 1980s that proprietary software would lead to a handful of tech companies owning everything and renting it back out to you on subscription? No. But he did understand very well and very early on that proprietary software was an abusive relationship. Likewise, I can see that the relationship we are already moving into with AI is similarly abusive, even if we don't have AGI yet. A world in which AGI displaces humans entirely is a terribly unjust, illiberal world that does not deserve to exist. We either ride into the Singularity along with AGI, or we do not build AGI at all.

[0] While AI art is startlingly good at drawing novel images in response to prompts, fine control and consistency of those images requires manual intervention and fine-tuning. Effective prompt writing also requires an intricate knowledge of artistic history and terminology. Furthermore, there's a whole capability of art generators called inpainting that is criminally underused because you need to have basic art knowledge in order to use it effectively.
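As a hedged illustration of how approachable inpainting already is in open tooling (this sketch uses the open-source diffusers library; the checkpoint, file names, and prompt are assumptions):

    # Minimal inpainting sketch with Hugging Face's diffusers library.
    # The checkpoint, file names, and prompt are illustrative.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = Image.open("portrait.png")  # the original artwork
    mask = Image.open("mask.png")       # white where the model may repaint
    result = pipe(
        prompt="a red scarf, painterly style",
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("portrait_inpainted.png")

The art knowledge is in deciding where to mask and what to ask for; the code itself is the easy part.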

[1] To be clear, AI does have safety risks that are playing themselves out right now. The problem is that those risks have been used to justify turning everything into the worst kind of abusive SaaS.


I disagree with nearly everything you said here. I think we’re looking at path one for the most part: augmentation, not replacement. It doesn’t matter one whit what business believes. It matters entirely what reality supports. Otherwise, the morons who fired their art team have to spend the time and money to build a new one once their competition rubs their face in their mistake.

Richard Stallman is a bit out there, to put it gently.

AGI will happen no matter what we do. I think it is unavoidable. I don’t think any amount of caution or regulation will prevent it; it will just happen in another country. But it’s still worth trying when it makes sense. It’s too early right now. There may well be no room for humans in such a world, nobody knows yet, but I also don’t think we can escape our fate. It may be that all biological intelligent life that builds a technological civilization also inevitably makes itself obsolete through that same innovation.

Focusing on AI as the thing to fix for people losing jobs is just stupid. The thing to fix is the unjust society with no safety net that you guys have created in the USA. Start there. Start today.

Because change is upon us and will not stop. We turn the wheel and the wheel turns us.


> Richard Stallman is a bit out there, to put it gently.

You're not wrong, but HN lionizes this guy so much that the few things he got right are worth leveraging for rhetorical effect.

> Focusing on AI as the thing to fix for people losing jobs is just stupid. The thing to fix is the unjust society with no safety net that you guys have created in the USA.

The USA absolutely does need a working social safety net, but other countries are not guaranteed to be better. The original article was talking about China, so I must mention that China's welfare program is arguably worse. For example, they don't have internal freedom of movement. In China, when you lose your job, you have to go back to the town you were born in.

But regardless of that, I think you missed why I was talking about transfers of wealth. The problem is that if we restrict AI to a technological priesthood of a few companies, the size or strength of the safety net won't matter. AI companies will be big enough to do to the world economy what Samsung did to South Korea. Tax the robots to give welfare to the structurally unemployed? Sure, that's fine, until OpenAI gets tired of paying confiscatory taxes on GPT-12 and starts overthrowing governments[0].

The underlying problem is economic centralization. Countries that get all their revenue from one thing (e.g. petrochemicals and fossil fuels) either turn into dictatorships or are overthrown by them. This is because economic enfranchisement - i.e. having a large labor force that is paid and educated well - is a backstop for democracy and against dictatorship. Currently, the ownership model that Google, Anthropic, and OpenAI are pursuing is extremely centralized, with everyone just calling into their servers and paying them in order to make the magic happen. The model weights are trade secret, and increasingly so is the training methodology and model architecture. These are not benevolent companies creating the future, these are dictators that haven't realized the extent of their power yet.

Sure, yes, artists won't be replaced long term. But they will have new bosses, worse than the old ones: the companies that own the AI they need to operate effectively. Sort of like how every artist needs to pay a troll toll to Adobe, or to Amazon, or to Apple today. Modern tech companies operate as quasi-governments, without any of the democratic accountability or constitutional protections that actual governments can provide. I see no reason why AI - general, superintelligent, or otherwise - will be any different. It'll just be worse. Unless we have distributed ownership of the underlying software and models to ensure that structural unemployment does not turn into economic disenfranchisement.

[0] https://en.wikipedia.org/wiki/Business_Plot




