This post came out just after the board posted the original notice, but before the reasons for the removal became public. So it seems like it has too much factual detail, unknown at the time to outsiders, to be entirely fake.
It would be very easy to identify them from what they wrote if what they're saying is true (they even say they were "in the room while this all went down").
I think it's real, but they're not as senior as they seem. Or, alternatively, they've taken a serious, stupid risk as senior leadership, which calls their judgement into question.
Suppose, for the sake of argument, that the majority of the board privately believes Sam is a snake, but there's no smoking gun. At this point you're saying they have 2 choices:
1. Let him leave with 60% of the staff. 40% stay at OpenAI
2. Let him win this fight: he's back at OpenAI with de facto 100% control, with no check on his power over the AGI that OpenAI is supposedly creating
I'd rather have a snake lead 60% of the staff than 100% of the staff.
On the other hand, if he leaves with 60%, maybe he decides he needs revenge at all costs, and his new AI org doesn't even pretend to be in it for humanity.
Tough choice. I guess we just have to hope that if the board's statement was right, the silent majority of the staff are in agreement.
Not necessarily a bad idea to have someone thinking about these topics; the role seems more nuanced than saving us from Skynet.
But where's the team protecting us from a future where we have no more shared experience or discussion of media anymore because everyone is watching their own personalized X-Men vs. White Walkers AI fanfic?
I don't think it's narcissistic, but it's not necessarily just about supporting them. It's a maneuver/test/demonstration of where he stands with various employees. It gives him information and sends a signal about his standing with OpenAI staff.
With so many key employees defecting, this Tweet could be interpreted as him supporting the people siding with him over the good of the company, which would be narcissistic if the board had a valid reason for their decision. But all that's speculative.
It may not feel close, but the rate of acceleration may mean that by the time it “feels” close it’s already here. It was barely a year ago that ChatGPT was released. Compare GPT-4 with the state of the art 2 years prior to its release, and the rate of progress is quite remarkable. I also think he has a better idea of what is coming down the pipeline than the average person on the outside of OpenAI does.
My money, based on my hobby knowledge and talking to a few people in the field, is on "no fucking way".
Maybe he believes his own hype or is like that guy who thought ChatGPT was alive.
Maybe he's legitimately worried and has good reason to know he's on the corporate Manhattan Project.
Honestly though... if they were even that close, I would find it super hard to believe that the DoD wouldn't be shutting EVERYTHING off from the public and taking it over from there. Like if someone had just stumbled onto nuclear fission, it wouldn't have just sat out in the open. It'd still be a top secret thing (at least certain details).
I think there is good reason for you to be skeptical, and I too am skeptical. But if there were a top five of engineers in the world with the ability to really gauge the state of the art in AI and how advanced it is behind closed doors, Ilya Sutskever would be in that top five.
One of the board members who was closely aligned with Ilya in this whole thing was Helen Toner, who's a NatSec person. Frankly, this action by the board could be the US government making its preference about something felt with a white glove, rather than causing global panic and an arms race by pulling a 1939 Germany and shutting down all public research + nationalising the companies and scientists involved. If they can achieve the control without the giant commotion, they would obviously try to do that.
We can't see inside, so we don't know. Their Chief Scientist, and probably the best living + active ML scientist, probably has better visibility into the answer to that question than we do, but, just like any scientist, could easily fall into the trap of believing too strongly in their own theories and work. That said... in a dispute between a Silicon Valley crypto/venture capitalist guy and the chief scientist about anything technical, I'm going to give a lot more weight to Ilya than Sam.
Well said. I work in AI as an engineer on LLMs and am very skeptical in general that we're anywhere close to AGI, but I would listen to what Ilya Sutskever had to say with eager ears.
How? Per the blog post: "OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner." That's 4 directors after the steps taken today. Sam Altman and Greg Brockman both left the board as a result of the action. That means there were 6 directors previously, so a majority required 4 votes. Assuming Sam & Greg voted against being pushed out, Ilya would have needed to vote with the other directors for the vote to succeed.
Edit: It occurs to me that possibly only the independent directors were permitted to vote on this. It's also possible Ilya recused himself, although the consequences of that would be obvious. Unfortunately I can't find the governing documents of OpenAI, Inc. anywhere to assess what is required.
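If it helps, here's that count as a quick sketch. This is just my own arithmetic, assuming a six-member board, a simple-majority threshold, all six directors voting, and Altman and Brockman voting against their own removal (none of which I can confirm from OpenAI's governing documents):

    # Hypothetical vote arithmetic -- not from any OpenAI governing document.
    board = ["Altman", "Brockman", "Sutskever", "D'Angelo", "McCauley", "Toner"]

    votes_needed = len(board) // 2 + 1        # simple majority of 6 -> 4 votes
    assumed_no_votes = {"Altman", "Brockman"}  # assume both opposed their own removal
    possible_yes = [m for m in board if m not in assumed_no_votes]

    print(votes_needed, len(possible_yes))     # 4 needed, exactly 4 possible yes votes
    # Under these assumptions, the motion only carries if every remaining
    # director, including Sutskever, voted yes.

If only the independent directors voted, or if Ilya recused himself, the math changes, which is why the governing documents matter.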
It makes no sense to suggest that three external directors would vote out a CEO and the Chairman against the chief scientist/founder/principal's wishes.
That the board is unhappy with the for-profit, moat-building path he has charted.
That this is about his sister.
That he pissed off microsoft.
That he did something illegal, financially.
That he has been lying about costs/profit.
That he lied about copyrighted training data.
I will add: maybe he's not aggressive enough in pursuit of profit.
Sam's sister is an OnlyFans model who is estranged from the rest of her family and has a somewhat dubious reputation online.
She went viral on Twitter a few months ago for saying that Sam molested her for years as the two of them were growing up. There's been no proof or corroboration offered that I'm aware of.
It's obviously a difficult situation that I think most people here generally have avoided commenting on since there's no meaningful input we could give.
Her allegations were not new information; they were made as far back as 2021. So it makes little sense for the board to suddenly react to them now. Plus, Greg now posting his announcement that he's quitting OpenAI makes it seem unlikely that this is about the sexual assault allegations.
I have seen something on Twitter in regards to a woman (apparently his sister) mentioning that he molested her. I have no idea if it is true or not, or if the Tweets are real, or if it is even his sister. These were apparently from years ago before he became as known as he is today.
I won't lie though, it's the first thing that popped into my mind when I heard the news.
I thought he was gay? I don’t know if I’ve heard of gay men sexually molesting little girls before. Not saying it’s never happened, just that it seems kind of rare and unexpected.
Molestation is about power and opportunity, the gender of the victim is often inconsequential. You'll find plenty of straight pedophiles who abused victims of either gender, and the same with gay offenders.
No problem, it's a common misconception. Annie also touched upon that contradiction, essentially saying something to him akin to "you're welcome for my help in figuring your sexuality out".
Why? "Owned by Vanguard" is the closest possible thing to "owned by the American public". What do you think Vanguard is and where do you think their money comes from?
Maybe just restrict index funds from interfering with whatever businesses they own? Index funds are ultimately a dumb momentum-based business model that just buys whatever companies are doing well at the moment and sells whichever aren't, but they bear no risk themselves outside of the economy shrinking. Not sure why they should have any say in how the companies they own operate; it seems like a complete competence mismatch (the dumb ones with little risk controlling the smart ones with skin in the game).
What's funny is nobody said India, and here you go talking about India.
I don't know why you deny Indian racial discrimination. There's a huge problem with Indians discriminating: not only discrimination against non-Indians, but also discrimination based on the caste/skin color of other Indians.
It's a huge problem. I've seen it run rampant in every single IT department or tech firm I've worked at that leverages offshore & H-1B employees.
I am Indian, and Indians often get attacked over 70% of H-1Bs going to Indians. I am tired of it. I do not represent the Indian government when I try to find a job in the USA.
Wait, if 70% of H-1Bs go to Indians, isn’t it even more suspicious that the team I’m talking about in America couldn’t find one Indian or American to hire?