Ah yes, and meanwhile here on HN, everyone keeps repeating AI is not ‘replacing anyone yet’ and no one has to worry. While people are getting axed and replaced by AI everywhere I look.
This is just the beginning and we don't know what will happen. Some people will say creative destruction is happening and people will find other jobs. They may equate this to horse drivers finding other jobs at the beginning of the 20th century.
The truth is we don't know. People who lose their jobs may find other work, or they may not. Maybe it's harder to reinvent oneself now. Maybe a lot of people will truly suffer because of AI.
I find it very hard to know with certainty what will happen, who will remain unharmed, and who will struggle but make it through, but change is coming.
At least in the early 20th century there were a ton of jobs one could pivot to that weren't too difficult to train up for.
That isn't the case anymore. The only jobs that will remain will be skilled blue-collar jobs, and how long those last is anyone's guess, until we start getting robots with limb dexterity and power systems to rival a human's.
This will speed up as well. My boss was looking for a robot arm a few years ago and got quotes; even the cheapest one was not in the budget. He has now bought two for far less that are really excellent and, indeed, again, replacing real people.
We should prepare for it though. I don't think the existential stuff about AIs killing humans is the biggest danger. No work and no money is far more urgent. Even if it doesn't happen, it can happen, so why not assume it will and change tactics later if things turn out sunnier than expected?
Well yes, that’s the best outcome; infinite energy, which means infinite everything and, with AGI, infinite intelligence. Done properly, there is no worry; we will live in paradise. No need for ads, wars or money. But it will depend on who owns it, no?
It’s hard to prepare when we don’t have a good idea of what will happen (and I agree with the GP that we don’t at this point). Also, as usual, politics will only react after the fact.
What I keep seeing is "This frees people up to do something else", which is the part I just don't agree with. I can't help but think there will be a very large net drop in jobs in the areas that are going to be hit (apparently game art is on the list) and no increase anywhere else, so we're going to be left with a LOT of people with nowhere to go.
A single factory worker replaced dozens of blacksmiths.
A single mechanic replaced dozens of farriers and groomsmen.
We went from 80% of people employed directly in agriculture to 1%. Only a very small portion of the 79% no longer working on farms are now building the giant tractors etc. that make the few remaining farmers so much more efficient.
> What I keep seeing is "This frees people up to do something else", which is the part that I just don't agree with.
A lot of true believers in the infinite abundance of free market capitalism are going to have a rude awakening when they too are left to die in the trenches like dogs.
> With a 70% drop in illustrator jobs in China over the past year – initially triggered by economic slowdown and regulatory pressures, then accelerated by the proliferation of AI tools
I think we would need a firmer accounting of what percent of the decline is due to economic slowdown vs AI
Source for this is the Rest of World article linked in the document. Their China journalist spoke with recruiters who work on video game illustrator roles and gave background on the trend and the decrease.
I'd like to think this is similar to how Silicon Valley recruiters (the good ones, at least) have a strong pulse on the market and can share trends well in advance of them making it into any mainstream publications like WSJ or NYTimes.
Given the slowdown in tech hiring, how would the slowdown in illustrator hiring compare to the slowdown in dev hiring? Rising interest rates put a damper on tech hiring in the States; could China be going through something similar?
You are right, but the more prominent members of this community believe there is no issue, at least as far as I have seen. Of course I don't read everything (I still have a job), but I do read quite a lot. I see the seniors/established saying no worries and the juniors worrying. But I also notice that the people who don't worry are in a reality distortion field where they never worked with bad programmers, for instance. They were smart, went to good schools, have friends with 130+ IQs, and think AI is far removed from that. Whether the AI really is far from 130 human IQ is not relevant; it is better, or at least good enough, and definitely faster at many, many things compared to most humans. But many people here never meet these ‘most humans’ because they have none around them.
There were 130-IQ people thinking the internet and crypto had little value too (we can argue about crypto's utility), but the point still stands: high IQ doesn't preclude someone from being wrong, it just reduces the likelihood.
And typically, when high-IQ people are wrong, rare as it is, they're really, really wrong.
Sure, but the thing is that people are in their own circles and if you only know and talk to smart people, you get a very skewed view of humanity. You start believing we as a whole are far better than we are.
> AI is not ‘replacing anyone yet’ and no one has to worry.
I have seen these comments and this excuse (among others) as well. The thing that amuses me, before that amusement turns into a kind of cosmic existential dread, is the lack of critical thinking and general reasoning skills on display.
The argument that AI is not replacing anyone yet is a strange one, because it implies that the outcome is inevitable (and at least unfortunate, presumably) but you shouldn't worry about it. You know, because it hasn't happened yet. But it will. So stop talking about it.
I don't know, I just find that strange. (And seeing this pattern all over the place, increasingly frequently, is deeply concerning, but that's real talk, so I'd better stop.)
And this is the 8-bit version of AI Art, relatively speaking. In 10 years it will be hard to find a human that can match AI's quality in "regular art", i.e., something that doesn't try to innovate artistically but otherwise looks great.
Every disruptive technology creates and destroys jobs. This is business as usual since the start of the Industrial Revolution and beyond. No need to panic, like the Luddites, yet.
Real AGI is the time to panic, because that forms an event horizon we can't see beyond. There may well be no place at all for humans in a post AGI world.
But LLMs are still a long way from AGI. I don't expect to see AGI in my lifetime, despite the blistering pace of advancement.
Yep, you are a typical example. Your world (and mine) is safe, but let’s meet up and step into a Fortune 1000, non-FAANG office, hit someone up for a chat, and see if you don’t believe this person can be replaced by GPT-4 right now. Try it.
Even after these stories, I cannot see how you don’t recognize this as a problem, or how vastly different this is from the previous examples (horse drivers became car drivers, farmers became factory workers, etc.; there is no such switch now). We are limited by the number of very clever people, as the rest can be replaced (we need some robots, but that’s coming).
When we reach AGI, then even the 130+ IQ peeps like you and me can start worrying. But that doesn’t mean there is no present danger.
I don’t share your disdain for the average person, and I don’t see ChatGPT replacing people on a huge scale. It’s a helpful productivity tool, but I don’t see it replacing huge numbers of jobs. I worry more about autonomous driving when it comes to AI replacing humans. A LOT of people drive trucks, Ubers, taxis, buses, etc.
It’s perhaps more useful in my field than most (software) since the output is often highly structured text. But using it daily, it doesn’t even make me 2x more productive. A big deal, yes. Going to take my job, no. Going to lower salaries and reduce demand? Doubtful.
We’ve lived through massive disruptions already, computers and the internet are some examples. It wasn’t cause for panic.
I don’t think the parent has disdain, but rather experience. I’ve had my eyes opened wide by getting to know some people at the bottom of the economy. A 50-year-old man who has basically no skills other than manual labor. There are people just above him who can push papers around, but don’t you dare change their flow! The next level is people who can barely figure out instructions that to me are more than obvious (and he works in tech; this happened today!). I know these people personally, and there are a LOT of them. I fear for their future.
Computers made people more productive, and the internet made trade far more efficient. AI makes my job far more efficient too (I work in software as well); most of my colleagues have been fired in the past months because I can now do the work of 10+ people by using the ChatGPT API. That says something about my employer, but there are boatloads of similar companies. And my colleagues, some of whom are great humans, are just not very good at what they were hired for and, as such, are easy to replace by GPT-4 wielded by someone who does understand. Nothing to do with disdain; it’s just the group that will fall first. But then it’ll come for us. I like your optimism, I just don’t share it; this is far more of a change than many think.
Sure, maybe not next year (although…), but driving taxis, Ubers and buses and serving beers might be the only jobs left for you and me. I am worried for my kids, not for me. But this is now within reach, while it seemed very far off last year. Denial won’t save anything.
It’s not coming for me because it can’t think. If anything, it makes the senior engineer more valuable, not less so. Writing actual code is the minority part of my day, usually. By Amdahl’s law, that bounds the maximum speedup to under 2x. In reality, less.
What on earth are you doing that you see a 10x speedup? That implies you spend more than 90% of your day writing code. That’s not even possible if you have no meetings and no teammates. I call bullshit. I’ll be generous and say you are exaggerating for effect rather than intending to lie. But either way, it’s beyond what’s possible.
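The Amdahl's-law bound invoked above can be sketched in a few lines. This is a hypothetical illustration, not anyone's measured data: assume AI accelerates only the fraction p of the day spent coding, by some factor s, and leaves the rest (meetings, design, review) unchanged.

```python
def overall_speedup(p, s):
    """Amdahl's law: overall speedup when a fraction p of the work
    is accelerated by a factor s and the remaining (1 - p) is not."""
    return 1.0 / ((1.0 - p) + p / s)

# If coding is a minority of the day (p < 0.5), then even an
# infinitely fast assistant caps the speedup at 1 / (1 - p) < 2x.
print(overall_speedup(0.4, 1e9))   # ~1.67x ceiling

# Conversely, a 10x overall speedup requires 1 / (1 - p) >= 10,
# i.e. the accelerated work must exceed 90% of the day.
print(overall_speedup(0.95, 1e9))  # ~20x ceiling
```

The values 0.4 and 0.95 are illustrative assumptions; the point is only that the ceiling 1/(1 - p) depends on the non-accelerated share of the work, not on how fast the tool is.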
If one person is more competent and willing to work for less money than someone else, what is a business to do? Hire/retain the more expensive less competent employee?
If a machine is more competent and costs less than a human worker, what is a business to do? Not use robots to assemble cars? Not use AI to do mechanizable routine work?
Yeah, that’s what will happen. Not sure why you are responding to me? The GP seems to argue that this won’t be a problem, since in the past this kind of change created more jobs, so it will now too. I don’t believe that.
I’ve been using AI through copilot and ChatGPT for a year now. It’s great, it’s helpful, it’s worth the money. It’s not replacing me. It doesn’t let a junior dev do my job. It doesn’t make me so productive that the company will cut back on their engineering team. We’re still hiring.
No, seriously, the Luddites weren't angry because tech would put them out of business. They were angry because they hadn't gotten to buy looms for themselves yet. The Luddites were the prototype of a union, and smashing machines was a tactic used to get business owners to the table for labor negotiations. One that was responded to with propaganda and the force of law.
Let's consider two possible worlds:
- The one in which artists have a fancy new tool to play with to produce better art
- The one in which publishers fire all artists so they can use the tool for themselves
So far we appear to be hurtling down the second path. I can point to artists that are using art generators as tools to improve their work, but almost all of the hype and discourse surrounding generative AI has been "finally we can fire all the artists and just have the art make itself." I think this is a wrong-headed move long-term[0], but so long as business people believe artists to be replaceable, they will be replaced.
Furthermore, this has wealth-concentrating effects. Directly, this is a transfer of wealth from regular artists to the few that get to stick around to bang on the machine when it breaks. Indirectly, this is a transfer of wealth away from both artists and publishers to the companies who are making the AI art generators. In the past few years, AI research has gone from open scientific collaboration to extremely closed-off data siphoning operations. OpenAI in particular reorganized itself into a "capped profit corporation" after Elon Musk stopped writing checks, and started closing things off in the name of safety[1].
The time to panic is right now, even if AGI is decades or centuries off, so that precedents are established as to who owns and benefits from that technology. Let me explain by analogy: did Richard Stallman know and understand in the 1980s that proprietary software would lead to a handful of tech companies owning everything and renting it back out to you on subscription? No. But he did understand very well and very early on that proprietary software was an abusive relationship. Likewise, I can see that the relationship we are already moving into with AI is similarly abusive, even if we don't have AGI yet. A world in which AGI displaces humans entirely is a terribly unjust, illiberal world that does not deserve to exist. We either ride into the Singularity along with AGI, or we do not build AGI at all.
[0] While AI art is startlingly good at drawing novel images in response to prompts, fine control and consistency of those images requires manual intervention and fine-tuning. Effective prompt writing also requires an intricate knowledge of artistic history and terminology. Furthermore, there's a whole capability of art generators called inpainting that is criminally underused because you need to have basic art knowledge in order to use it effectively.
[1] To be clear, AI does have safety risks that are playing themselves out right now. The problem is that those risks have been used to justify turning everything into the worst kind of abusive SaaS.
I don’t agree with nearly anything you said here. I think we’re looking at path one for the most part, augmentation, not replacement. It doesn’t matter what business believes one whit. It matters entirely what the reality supports. Otherwise the morons who fired their arts team have to spend the time and money to build a new one once their competition rubs their face in their mistake.
Richard Stallman is a bit out there, to put it gently.
AGI will happen no matter what we do. I think it is unavoidable. I don’t think any amount of caution or regulation will prevent it; it will just happen in another country. But it’s still worth trying when it makes sense; it’s just too early right now. There may well be no room for humans in such a world, nobody knows yet, but I also don’t think we can escape our fate. It may be that all biological intelligent life that builds technological civilization inevitably makes itself obsolete through that same innovation.
Focusing on AI as the thing to fix for people losing jobs is just stupid. The thing to fix is the unjust society with no safety net that you guys have created in the USA. Start there. Start today.
Because change is upon us and will not stop. We turn the wheel and the wheel turns us.
>Richard Stallman is a bit out there, to put it gently.
You're not wrong, but HN lionizes this guy so much that the few things he got right are worth leveraging for rhetorical effect.
>Focusing on AI as the thing to fix for people losing jobs is just stupid. The thing to fix is the unjust society with no safety net that you guys have created in the USA.
The USA absolutely does need a working social safety net, but other countries are not guaranteed to be better. The original article was talking about China, so I must mention that China's welfare program is arguably worse. For example, they don't have internal freedom of movement. In China, when you lose your job, you have to go back to the town you were born in.
But regardless of that, I think you missed why I was talking about transfers of wealth. The problem is that if we restrict AI to a technological priesthood of a few companies, the size or strength of the safety net won't matter. AI companies will be big enough to do to the world economy what Samsung did to South Korea. Tax the robots to give welfare to the structurally unemployed? Sure, that's fine, until OpenAI gets tired of paying confiscatory taxes on GPT-12 and starts overthrowing governments[0].
The underlying problem is economic centralization. Countries that get all their revenue from one thing (e.g. petrochemicals and fossil fuels) either turn into dictatorships or are overthrown by them. This is because economic enfranchisement - i.e. having a large labor force that is paid and educated well - is a backstop for democracy and against dictatorship. Currently, the ownership model that Google, Anthropic, and OpenAI are pursuing is extremely centralized, with everyone just calling into their servers and paying them in order to make the magic happen. The model weights are trade secret, and increasingly so is the training methodology and model architecture. These are not benevolent companies creating the future, these are dictators that haven't realized the extent of their power yet.
Sure, yes, artists won't be replaced long term. But they will have new bosses, worse than the old ones: the companies that own the AI they need to operate effectively. Sort of like how every artist needs to pay a troll toll to Adobe, or to Amazon, or to Apple today. Modern tech companies operate as quasi-governments, without any of the democratic accountability or constitutional protections that actual governments can provide. I see no reason why AI - general, superintelligent, or otherwise - will be any different. It'll just be worse. Unless we have distributed ownership of the underlying software and models to ensure that structural unemployment does not turn into economic disenfranchisement.
And it cannot be a coincidence that many companies, even in the US, have announced layoffs right alongside their new investments in LLMs and AI. Sometimes within days of each other! We are literally seeing people laid off in anticipation of making money from AI, rather than continuing to pay the employees.
Examples:
https://news.ycombinator.com/item?id=35326865
https://news.ycombinator.com/item?id=35194986