
I mean, for CO2, the U.S. and the EU (which were once the largest emitters) have not only flattened the curve but have actually reduced emissions over the past 20 years:

https://ourworldindata.org/grapher/annual-co-emissions-by-re...

China's emissions have grown enormously, though, and to a lesser extent so have those of other Asian countries.

I generally agree that international regulations controlling AI are unlikely to work, though, since it could be such a powerful and disruptive technology: if progress doesn't stall, it's effectively a single-shot Prisoner's Dilemma, and with 193 players, someone is going to defect.
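
To make the defection point concrete, here's a toy sketch in Python (the payoff numbers are made up purely for illustration, not taken from any real model of AI regulation): whatever the other 192 players do, "racing ahead" pays more than "holding back", so defection strictly dominates.

    # Toy N-player Prisoner's Dilemma: each "country" either cooperates
    # (pauses AI development) or defects (races ahead). The numbers below
    # are illustrative assumptions, not estimates of anything real.

    def payoff(defect: bool, num_other_defectors: int) -> float:
        shared_harm = -1.0 * num_other_defectors   # everyone suffers as more players race
        private_gain = 5.0 if defect else 0.0      # only the racer captures the upside
        return private_gain + shared_harm

    # Defection strictly dominates: for any number of other defectors,
    # racing pays more than restraint -- so with 193 players, expect defection.
    for others in (0, 50, 192):
        print(others, payoff(True, others) > payoff(False, others))  # True every time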

Personally, though, I think there are two possible outcomes:

1. Progress stalls, and it turns out getting from GPT-4-Turbo to better-than-human intelligence just doesn't pan out. LLMs are stuck as junior engineers for decades. If so, this is largely good for software engineers (and somewhat good for everyone, since it means we're more productive), but society doesn't change too much.

2. Progress doesn't stall, and we hit at least slightly-superhuman intelligence within the next decade. This would obviously be a tough shift for most knowledge workers, especially depending on how quickly it happens, but I think it would also bring incredible advances in medicine and in robotics, driving down the cost of physical labor. The price of goods would drop enormously, and the medical advances would significantly extend either our lifespans or at least the quality of our lives in old age, which seems quite positive. We'd need to figure out some sort of UBI system once labor costs fall far enough, but I think most people would be in favor of that, and most stuff will just be really cheap at that point: ultimately just the cost of electricity (even "raw materials" are priced largely on the labor needed to extract them, and that labor would become... the cost of electricity to run the robotics).

There are probably some in-between scenarios, but TBH it's hard to see anything other than "stall" vs. "takeoff" as likely: either you never get past human-level intelligence (stall), or you do break through the wall, and then intelligence self-improves faster than before, up to some information-theoretic limit that I suspect is far above where the average human operates (consider just the variation in intelligence between individual human beings!).

Takeoff could also end in some sort of doomsday scenario, but current LLMs haven't shown the problems the early doomers predicted, so I think the humanity-enslaving or humanity-destroying outcomes probably just aren't going to happen.


