It's interesting to observe the whole drama, and now I understand why Larry Summers was appointed as a board member. It's probably not for external affairs, but to keep Sam in check with an experienced politician. I also suspect Satya initially offered Sam the CEO position of the new MSFT subsidiary for exactly this reason, but Sam refused, so this might be a fallback plan. If my view turns out to be correct, then Sam might not hold the absolute power over the company that others expect.
Let's be honest: at least 90% of the people working there saw the board almost light their bag on fire and sided immediately with Altman. These people have one of the fastest-growing web applications ever. If I were working there and the board put my equity at risk over some super vague reasoning about safety and magical AGI, I'd be pissed. I think if everyone here were honest with themselves, they'd admit they'd be pissed at anyone putting life-changing money in jeopardy.
In all likelihood, the opportunity cost of delaying AGI by a decade is at least tens of millions of human lives.
A large part of why we think AGI poses an existential threat is an anchoring bias around theorizing from the 60s: the people wondering what would happen when something smarter than humans came along were informed by a mistaken picture of the past, one in which Homo sapiens was smarter than the Neanderthals and killed them off.
That's likely not what happened - pandemics or climate change are more likely culprits - and instead we were having children with Neanderthals, had cross-cultural exchanges in both directions, and don't appear to have been any smarter than they were at the time.
The most likely outcome is symbiosis. Intentionally delaying something that will likely advance medical and scientific progress in ways that save or improve millions of lives, all because of legacy projections that have been wrong about pretty much every aspect of the technology that has emerged to date (including the supposed impossibility of creativity or empathy) - I can think of few greater examples of our species cutting off its nose to spite its face.
And our insistence on trying to conform the emerging product to legacy projections, because that's what matches user expectations, is one of the dumbest things I've seen in tech in a long time - the modern equivalent of Ford building faster horses instead of the car.
Much of your question depends on how you define "human-level" and AGI. AI is already better than humans at a number of specific tasks. That set will only broaden over time until it aggregates into something that doesn't exist yet but that I suspect will look a lot like a multi-modal digital superhuman. Attempts will be made to slow or deter the effects of technological advances, but the evolutionary process will march on nonetheless.
The whole process depends on advanced computer chips. If you prevent these from being made, then you can prevent AGI. The question is how to prevent them from being made with minimal side effects on other industries.
The upside of all of the drama and turmoil occurring at OpenAI would be that it acts as a distraction from his constant public display of sheer idiocy and pandering.
> When news broke of Mr. Altman’s firing on Nov. 17, a text landed in a private WhatsApp group of more than 100 chief executives of Silicon Valley companies, including Meta’s Mark Zuckerberg and Dropbox’s Drew Houston.
I never imagined CEOs kept up with gossip in a large WhatsApp group. Is this how they've all been coordinating RTO mandates?
I’ve been joking with friends and colleagues about this for years (although in my mind, it was Twitter DMs, or a newsletter).
The fact that CEOs talk to each other has been plainly obvious to me for quite some time. Every single instance of coordination can always be explained away, but the pattern seems rather clear when you take a step back and look at the broader context.
I don't understand how this doesn't open them up to massive liability, but it's sort of an open secret that collusion is only illegal if you fall out of favor with the government these days.
Because in the US they'll counter with 'freedom of speech', which is correct: they are allowed to talk to each other, just not to collude. The state can't simply yell 'collusion'; they have to gather evidence for it.
I'm not saying they aren't colluding; by their behavior it seems like they are. I'm just saying it is much, much harder to prove.
I guess I never would have imagined it either until a past CEO of mine shared that he was a part of one. I can’t recall if I was surprised or not. It seems kind of obvious once you know, I guess?
It seriously rubs me the wrong way. The way it surfaced was in a discussion about money. I foolishly admitted that I don’t really want much money, and that if I were exceptionally wealthy, I would have to quit development in order to find ways to use my wealth constructively in my community.
This was a disturbing concept. He relayed anecdotes from the WhatsApp group in which various absurdly wealthy people discuss the need to have more wealth, in some form or another. To him this wasn’t a sign of illness or anything, it was evidence that we all in fact should pursue wealth. Because look, even this billionaire does. Very surreal. Despite that, one of the best bosses I’ve had.
> I never imagined CEOs kept up with gossip in a large WhatsApp group. Is this how they've all been coordinating RTO mandates?
Bing-fucking-o. Now we know where the illegal collusion happens. I’d bet serious money RTO and a whole lot of other, more important policies are hashed out in that group.
Lest we forget when Steve Jobs was calling people up to hold down salaries.
As usual, a useless title. This reports a lot of interesting things, but who's going to read it with such a generic title?
Overall: This one leaks heavily from the Altman/Conway camp but also from the director side, especially what must be Adam D'Angelo. The meaning of all this leaking is that the players have moved into phase 3, warring over the independent report, which will determine whether Altman stays & appoints the new board, or whether his proxy Brockman replaces him and he possibly gets a more ceremonial role like board chairman and bows out quietly (similar to his YC firing, where he was going to be an advisor etc. and then all that got quietly ignored). Note how Brockman has been built up as Altman's equal and has been running marathon meetings at OA with everyone possible (see his tweets - with photographs, no less) while Altman, oddly considering how hard he worked to get back into the building, is hardly to be seen.
- previously we knew Altman had been dividing and conquering the board by lying about others wanting to fire Toner; this says that, specifically, Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D'Angelo.
- Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear whether he had tried to conceal Tigris entirely or had failed to mention something more specific, like whom he was trying to recruit for capital.
- Sutskever had threatened to quit after Jakub Pachocki's promotion; previous reporting had said he was upset about it, but hadn't hinted at him being so angry as to threaten to quit OA
- Altman was 'bad-mouthing the board to OpenAI executives'; this likely refers to the Slack conversation Sutskever was involved in reported by WSJ a while ago about how they needed to purge everyone EA-connected
- Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up
- the OA outside lawyer told them they needed to clam up and not do PR like the Altman faction was
- both sides are positioning themselves for the independent report overseen by Summers as the 'broker'; hence, Altman/Conway leaking the texts quoted at the end posturing about how 'the board wants silence' (not that one could tell from the post-restoration leaking & reporting...) and how his name needs to be cleared.
- Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman
Paul always has this delicate restraint in praising Altman; it's hilarious. It's like he knows there might be a scandal one day and doesn't want those positive endorsements lying around.
(+) Larry Summers on the board and overseeing the report is a really good choice. Summers is a genuinely high-intellect individual (he entered MIT at 16 to study physics and was one of the youngest tenured professors in Harvard's history). More importantly, he is known as someone who thinks for himself, can't be controlled, and can sort the relevant from the irrelevant. Blunt and arrogant, too.
> Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman
The joy of getting rid of someone cleanly, with a mutual agreement not to talk about it publicly and not to diss each other.
On the other hand, Summers did lend his credibility to the crypto scam DCG and helped lay the groundwork for the 2008 financial crisis. He gets in over his head sometimes.
Summers is a rubber stamp. See what Elizabeth Warren said he told her: “He teed it up this way: I had a choice. I could be an insider or I could be an outsider. Outsiders can say whatever they want. But people on the inside don’t listen to them. Insiders, however, get lots of access and a chance to push their ideas. People — powerful people — listen to what they have to say. But insiders also understand one unbreakable rule: They don’t criticize other insiders.”
> Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up
I don't think the article supports this. All we know is that sama appeared cooperative when the board fired him. This was probably a reasonable posture for him to adopt regardless of his actual intentions at the time.
I believe it. Note that this story is being sourced from Altman/Conway, down to the level of their private text messages. So they would have to be the ones fabricating this claim, yet the story is embarrassing to them: if they were going to make it up, Altman's change of heart would have been prompted by appeals from employees or the board (which was in fact the version initially circulating on social media, and the one Altman is still trying to spin as the reason the Board eventually called him). As it is, it comes off as duplicitous, destructive, and highly unflattering: two rich CEOs/VCs riling him up to go back on his promise, try to take over, and burn down OA if he can't.
Regarding Summers: I remember reading a quote from a Summers/Kissinger type that said something like: a person's role at the Kissinger/Summers level is to carry out and justify the policies of the super elite, not to come up with their own. Does anyone know if Summers was the one to say this? (I’m not talking about the advice he gave to Warren, which is related but not the quote I’m thinking of.)
I strongly disagree. If the report was guaranteed to be a whitewash (it may well turn out to be one, of course, but that's exactly what the current fight is over), the ex-Board & Shear & Taylor wouldn't be appealing to it constantly in all their public statements, nor would they have made it their primary condition, nor would they have brought on Summers to oversee it as a 'broker', nor would Altman sound so nervous about it and talk about how he welcomes it and will cooperate to better understand his miscommunications etc, and especially isn't talking or acting like someone who expects to be there forever with OA as his personal fief once he packs a new board with loyalists.
If it's a gratitude tour for restoring Sam (or more cynically, probing everyone for their loyalties & attitudes in person), why isn't Sam the one doing all the meetings and in all the photos?
> That night, Mr. Shear visited OpenAI’s offices and convened an employee meeting. The company’s Slack channel lit up with emojis of a middle finger.
What a bunch of children. I feel like most of them pretend to be working for a noble cause, when they actually just want that sweet payday from Thrive and other VCs. Nothing wrong with that, but it is disappointing to see their leader bring up AI safety when they do not really care.
There are people who genuinely believe in a singularity-type AI that would have the potential to wipe out humanity. I personally don't think strong GAI is possible, or at least not likely using any known technique or any refinement of a known technique, but if you believe this, there's no such thing as AI safety. The best and most obvious course of action is to organize politically for a total ban on AI and to make the development of AI anywhere in the world a cause for war. Thinking you could figure out how to chain up such an AI so that it only does what you want is taking an insane risk, and as t -> infinity, the risk approaches 1.
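(A minimal way to make that limit claim precise, as a sketch: assume a constant, independent per-period probability of catastrophe $\epsilon > 0$ - the comment doesn't commit to any particular model, this is just the simplest one that yields the stated limit. Then

$$\Pr[\text{catastrophe by time } t] \;=\; 1 - (1 - \epsilon)^t \;\longrightarrow\; 1 \quad \text{as } t \to \infty.$$

The cumulative risk stays below 1 only if the per-period risk itself decays fast enough over time, e.g. $\sum_t \epsilon_t < \infty$, which is what a "chained up" AI would have to guarantee indefinitely.)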
But when most people say AI safety, they seem to mean rigid ideological enforcement of whatever they believe is right, even if that means censoring true facts from AI or forcing it to abide by some set of arbitrary values that represent consensus only in their clique - while at the same time bemoaning what could happen if the wrong people got their hands on LLMs. This represents almost the totality of AI safetyism: we can only allow LLMs to enforce my beliefs. These people are effectively aligned with (or are often the same people as) those who believe we have to return to broadcast-media levels of information control, which, for the elites, represents a historical oddity that gave them unprecedented control - control that was then weakened by the Internet.
Sometimes they will make an actual safety argument along the lines of "but what if Bad Guys ask an LLM how to make a bioweapon." Aside from this being a silly hypothetical, fortunately, doing mass damage in this way is not easy, even with step-by-step directions. All the resources you need to do so that exist are already publicly available. It just requires lots of time, equipment, material, and expertise that an LLM cannot give you. Of course, you might make the argument that it cannot give them to you yet, but then the only solution is to shut down public science, not to ban LLMs from answering the wrong questions.
I think it's a bit of myopia from lack of life experience.
Top compensation, highly educated bubble, too much attention and praise on a chat bot. "Mean tweets" are actually some of the most threatening things these people can imagine.
Yeah, that's the thing: if it can be done, it will be done. In a way, I think it's better that it be us, then.
There's a point, though, about how social media algorithms have toxified society in a big way by promoting content that gets people worked up. Considering the ubiquity with which we'll be using GAI, this would be a cause for concern there too, in an even bigger way, because AI will be all over society. I do agree there.
But Skynet taking over the world? I don't see it happening, but if it can happen, it's pretty much inevitable anyway. Sooner or later someone will go there, rules or not.
> Yeah, that's the thing: if it can be done, it will be done. In a way, I think it's better that it be us, then.
This isn't true at all, but it is at least a common trope among technologists. There are lots of things we don't pursue, and lots of technology we control. We could be investing tons of R&D into improving nuclear weapons. We don't. But either way, “us vs them” doesn’t matter when you’re talking about leashing some hypothetical non-human superintelligence. The “them” is the AI and you cannot control it.
> There's a point, though, about how social media algorithms have toxified society in a big way by promoting content that gets people worked up.
You can find people making the exact same argument anywhere you like in history as a reason we should forbid free speech and free press. It is not new or unique to social media, and the historical consequences of unfounded rumors and misinformation have often been quite deadly, from mass violent riots to brutal unjustified suppressions to wars.
The "but misinformation" argument is so not new, you can find people arguing in the Constitutional Convention about "designing men" riling up the masses with fake news for political ends. [1]
> The evils we experience flow from the excess of democracy. The people do not want virtue, but are the dupes of pretended patriots. In Massts. it had been fully confirmed by experience that they are daily misled into the most baneful measures and opinions by the false reports circulated by designing men, and which no one on the spot can refute.
> We could be investing tons of R&D into improving nuclear weapons. We don't.
We can already build a warhead that takes up less than 1 m^3 and can destroy a city of 10M people. There's not much ROI in developing "better" nuclear weapons, given the scarcity of credible scenarios for using them.
> who genuinely believe in a singularity-type AI that would have the potential to wipe out humanity
Believe? They genuinely welcome it. Larry Page, for example, in another recent NYT piece.
Marc Andreessen?
"Effective accelerationism aims to follow the 'will of the universe': leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe," and "E/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism."
These people are so far up their own asses, they've completely lost the plot. These are the individuals who will self-regulate, while casually musing about the systematic genocide of the entire planet.
A superior race of beings, more efficient and meant to replace us? Geeee, never heard that one before. Sounds real nice. Maybe it can build very efficient gas chambers and furnaces too, so it can thin the herd faster, eh!
AI is less of a problem than the people around it, who are borderline certifiable, or at the very least raging sociopaths who masquerade as "thinkers". Dude, you were a good coder who came up with a better way of ranking pages when the field was wide open. Calm down.
The way these people think and behave reminds me of some science fiction novels.
I noticed a weird thing in many of them: the people building all that tech and breaking new ground seem to be invisible, and all the drama seems to be created and perpetuated by a bunch of elitist techno-priest types.
These people generally have 0 clue on what's actually going on. I doubt most of them could even explain basic stats concepts.
I don't know if this makes sense, but that's the imagery in my mind.
That Marc Andreessen quote is something. Mostly deranged word salad that reads like some techno-flavored religion.
I fundamentally feel like this is what happens when exceptionally lucky people view themselves as geniuses - exposing themselves to the mental decay of decades without guardrails or accountability.
> This represents almost the totality of AI safetyism: we can only allow LLMs to enforce my beliefs.
How is this different from, say, a newspaper with an editorial board, or a book publishing house with a particular set of standards and conventions? For that matter, how is this different from dang enforcing the rules of this board?
> How is this different from, say, a newspaper with an editorial board
See how those are run. There is an emphasis on accommodating multiple views and on journalistic integrity. Software development doesn’t have an ethics code, which means there is no common ground for truth-finding. That turns a balanced process into anything goes.
> Software is practiced by people so is grounded by a base set of ethics
People don't have a universal standard of ethics. Not when it comes to something complicated like a profession. Journalism, medicine--these fields have base sets of ethics that ground discussions. You aren't allowed to challenge the base rules in a dispute; you take them as given and go from there.
This prevents the grandstanding common in technology discussions, where the person on the losing side of a common ethical framework escalates to challenging the framework within the context of that dispute. The framework, of course, is not unassailable. But not within a particular dispute. Sort of like a court deciding on the law and the Constitution it operates under.
Taking OpenAI as an example, the non-profit Board acted on its judgement. But when that wasn't convenient for the profit-motivated side, they threw it away. There was no base set of rules or ethics agreed upon by anyone. It was just sort of hashed out ad hoc based on who had power and could exercise it. (I'm not criticising anyone's moves, mostly the structure. Within that system's framework, there is literally no wrong decision leadership could make.)
My claim is that ethics can be made pliable more easily with money than by applying professional standards. Standards are flexible, of course, but Benjamins are more flexible.
> How is this different from, say, a newspaper with an editorial board, or a book publishing house with a particular set of standards and conventions? For that matter, how is this different from dang enforcing the rules of this board?
I would be inclined to agree, if AI safetyists were not in general advocating that LLM source, training data, models, etc. not be released to the public, because AI safetyists do not want non-AI safetyists to have unfettered access to any LLMs (and other AI tech), anywhere, for "safety" reasons. Of course, if it was all open, I agree it wouldn't much matter if "Open"AI wanted to restrict their hosted LLM in whatever ways they felt best.
To be fair, what you're basically saying is "how dare these people try to actually succeed at their stated objectives." AI safety for big companies while anyone can spin up an AGI in their basement would indeed be extremely pointless, which is why AI safetyists are trying to prevent it.
It's different because none of them are claiming to be doing it for my safety or trying to stop other people from creating their own publishers or internet forums.
Oh my sweet HN! What is happening to you?
An article with such a hateful tone, so clearly loaded, is being posted, discussed, and put on the front page. I guess this is what scale does to any community.
"......parlayed the success of OpenAI’s ChatGPT chatbot into personal stardom ....."
"....Mr. Altman’s $27 million mansion in San Francisco’s Russian Hill neighborhood...."
Clearly, people very close to Altman were accessed for this article. I doubt that would happen if they deemed that the reporting was going to be prejudicial.
Reporters are very good at sweet-talking you into an interview or statement and then publishing a hit piece.
For people who don’t work in PR and are associated with anything even remotely controversial, it is usually better not to talk. I learned the hard way: I said one thing, and the reporter put my words in a totally different context to suggest I supported something I am totally against.
I understand that anything published in the NYT comes with a level of research and resources behind it. But casually slipping in very obvious innuendos to convey a strong opinion does not inspire confidence. I already feel like an idiot for getting too invested in this saga, and now there's more of this. I just wish the truth would come out, that's all. Objectively.