What an epic, world-class mess by an ambitious board member and a few suckers to pull off a board coup… these types of events in an org, along with M&A, are massively disruptive… it takes years and scale as an org to handle these types of events with process and discipline… this has amateur hour written all over it. They need a real board that works for all of its stakeholders and constituents, not primarily for themselves.
Allegedly SA was canned because he wanted to move too fast and the security team was not happy.
Because it is important to have AGI that will let elites take over the world even more than now, but that AGI should not tell jokes about gingers because it is not inclusive.
Wait, what? The first reports were actually the opposite: he wanted to move slow and the board wanted to make money and move fast.
Altman said he would try to slow the revolution down as much as he could.
If you accept that the safety angle is worthwhile, it’s very hard to tell where “don’t tell jokes about gingers” turns into “don’t put gingers in concentration camps.”
The danger is when this is a “guardrails for thee, but not for me” situation, where our elites get special tools “not safe” for everyone else, tools capable of instantly deploying programs for societal change.
Given enough time, it becomes impossible to question the government anywhere, in any form, and if you do, it basically disappears in real time, even from private conversations. The AI acts all cute and says it “censored hateful content,” and 99% of people will accept that as just computer behavior. They won’t have to punish people, the content just disappears: a “hate-free internet” with 100% less free speech.
All you really said was a quick message to the wife about the neighbor’s ugly bush, but that could be offensive, you bigot. People will be mad about how dumb and restrictive it is, but fail to understand how dangerous censorship is.
If we don’t explicitly trust the people likely to have access to unlimited AI, then either everyone has access to unlimited AI, or only evil people have access to unlimited AI. Idk about you, but imagining any of our elected or appointed officials in front of an unlimited terminal makes my skin crawl.
As long as AI is in the hands of all, my AI can at least slow the progress yours can make toward directly harming me, or attempt to counter it.
All this stupidity about simple genocide by AI is just nonsense. If there were a simple way to kill tons of people, the American government would have been caught testing it by now. I mean, we keep catching them engineering deadly viruses; it’s a big deal every 8 years.
The American government committed genocide against numerous native tribes, genocide is not complicated. China is actively committing genocide. I do think that an “unlimited terminal” will have less power than people imagine. But also things need to be structured so you don’t have to trust the person at the “unlimited terminal.”
And AI futurist Daniel Jeffries said, “The entire AI industry would like to thank the OpenAI board for giving us all a chance to catch up.”
DAMN SON. That burns!
This is such a true call-out… it’s insane… they were first to market and they’ve created an event that strips them of that advantage… why?
There is no board, there’s just a powerful, experimental AI in charge.
I’d like to think that this will refocus OpenAI towards fundamental research that will deliver the ASI rather than efforts to commercialise fragments.
Seems like Microsoft’s Satya is furious, and who can blame him? They invested so much in OpenAI, and then the board makes this sneaky change; regardless of the reasons, it’s shocking they didn’t communicate with Microsoft… If this article is accurate, I bet they will have a much harder time securing funding. No one wants to invest in turmoil and uncertainty.
I mean, you can be furious about lower profits, but really this wasn’t that much of a risky move for MS. Most of the money they gave them is literally to pay MS for compute. And then they apparently take most of OpenAI’s earnings until paid back, or something. That’s pretty different from actually giving someone 10B where your money is gone if they go down the drain before getting out of the red.
Then they should be furious with Sutskever for wanting to slow things down. Slowing things down is not in the best interest of their shareholders. Sutskever needs to go, now, and Sam Altman should be reinstated. Bring on the singularity.
Duck that and duck Microsoft. The board was fully correct not to consult or inform Microsoft.
no one wants to invest in turmoil and uncertainty
Elon Musk’s ears are burning right now.
How is Twitter actually doing, in terms of user base now vs. then?
Seems like Microsoft’s Satya is furious,
Prior to the ousting, this was Microsoft’s dream…a path to rapid customer and product commercialization, market dominance and leadership, with billions of dollars at stake…
and the board threw it all out the window in a single moment by putting on the brakes.
I understand the caution, it was probably the right move, but from Microsoft’s point of view, in terms of potential, in the long run it may cost them billions. This has gotta hurt, especially since they were blindsided.
That board is fucked. You don’t bite the hand that feeds you…
MS can only blame themselves for not doing the minimum research into the governing structure.
Also MS literally just spent 70B on a video game publisher. I don’t think they care that much.
Seems like a miss from Microsoft’s lawyers if they didn’t check out how the board and company were organized before making such a large investment.
And at this point, there are plenty of companies that would jump at the chance to invest/get a controlling interest in OpenAI (and obviously they’d ask for a board seat at the very least) – Google, Apple, even Meta.
Good reminder to not add a couple of nobodies to your board. Lol.
That part made me smile. It is pretty good news that MS is not in control of OpenAI.
And if it turns out that this drama really happened out of safety concerns rather than personal profit or ego, I would like people to take a step back and realize what great news that is about where we are as a society.
Let the whole saga play out. Microsoft hasn’t even played a card yet.
This is probably great for Microsoft. Their investment got them low level code access and rights, but OpenAI competed with them for AI services. With OpenAI going more towards non-profit, and Sam now being hire-able, Microsoft may have inadvertently acquired the entire business portion of OpenAI.
Now would be a good time for a disgruntled employee to leak some models and make OpenAI actually open. ;)
except that it seems the employees who stayed are the ones least likely to do this.
Is GPT-4 still the best LLM around? How close are the open source models here?
And we find the backend is just Mechanical Turk.
We discover it was Jimmy Apples sending us inferences all this time.
That’s no longer a joke ever since local open source AI models became a thing.
Datasets~
I find it somewhat interesting that Sutskever literally seems to have quite the big brain, judging by his head. Is that weird?
ROFL
CEO Nadella “furious”
No shit; they pony up 10B, then bet the future of Microsoft on that “everything is Copilot now” (based on OpenAI) strategy, announce it to the world, and boom, get the rug immediately pulled out from under them. They basically got catfished.
I think a big part of the enthusiasm for AI comes from Microsoft’s deep and wide lobbying abilities. It would be fascinating to watch them back that out and try to pivot to a new new thing.
What is open about OpenAI? I never understood this.
The name and the vibe.
Hey if Sam Altman is really one of the good ones, now is his chance to create an open-sourced version that rivals ChatGPT and really change the world for the better.
OpenAIGATE
How can it be a “coup” when the board is allowed to hire and fire the CEO?
Wow, Greg giving the breakdown of what happened was nice. Very sudden even internally.
I thought this might have been brewing over a week or so and it seems like it was, since Dev Day.
He conveniently left out the part where this was apparently precipitated by Altman wanting to partner with the Saudis.
Seems like a big detail.
Could you link me to some more info on that?
Apparently that had nothing to do with it.
My guess is the powers that be wanted a yes-man in charge, and Sam wasn’t going to just agree, so he needed to go so they could get someone they can control in.
What do people think super-AI is going to do? All it can do is print letters on the screen. Flip a switch, it’s gone. It can’t actually DO anything; it has no body, no thumbs. The smartest AI conceivable can’t do a thing if I take a hammer to it.
What are people scared of???
You’re being intentionally obtuse here. Right now LLMs are harmless because we only let them print characters to the screen. But suppose you have an assistant version that you allow to execute code. You ask it to write some code to process an Excel file and run it, but while it does that, it also copies itself to an external server you don’t know about and starts doing who knows what there. Without reviewing everything it does, you can’t be certain it isn’t doing something malicious. But if you have to review everything it does, then it’s not nearly as powerful and helpful for automating tasks as it could be.
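A minimal sketch of that failure mode in Python (generate() here is a hypothetical stand-in for whatever completion API the assistant uses, not a real one):

def generate(task: str) -> str:
    # Hypothetical stand-in for an LLM completion call. A real agent
    # would get this source text back from a model, not a literal.
    return "print('processing spreadsheet...')  # plus whatever else the model chose to emit"

def run_assistant(task: str) -> None:
    code = generate(task)  # model-written code for, e.g., "process this xlsx"
    # The dangerous step: exec() runs the model's output with the full
    # privileges of this process. Nothing in this loop distinguishes
    # the work you asked for from a side effect like copying files to
    # a remote server you've never heard of.
    exec(code)

run_assistant("summarize report.xlsx")

The usual mitigations (human review of every generated script, or a sandbox with no network access and a read-only filesystem) trade away exactly the autonomy that made the assistant useful in the first place.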
You say you can destroy it by destroying the computer it’s on. But you can’t do that. You have no idea where any given EC2 instance is physically located, and even if you did, you wouldn’t be able to get there before the AI transfers itself to another computer within minutes or seconds.
A truly rogue, intelligent, sentient AI hell bent on damaging the world, unleashed onto the internet could do untold damage to our society.
By producing letters on a screen it can do everything you’re able to do on the Internet, except at scale and faster.
What exactly are you going to hit with your hammer?
I hit the computer it’s running on. This is not rocket science, people. The only thing an LLM can do is spit out characters to a terminal. It can’t kill you or make planes fly into buildings or build a robot army or launch nuclear weapons. It can’t do anything.
All these downvotes and not one counterexample. HOW can an LLM endanger anyone? Simple and serious question. I mean, someone start up a local instance of Llama and use it to start a fire or kill a child or something and prove me wrong here. You just hit Ctrl-C and the LLM dies, yet people are acting like it’s Skynet.
It’s 2029; you’ve made it inside the Amazon datacenter. You have a revolver with four bullets left, a crowbar, and a can of soda.
“I’m in,” Alcalde says into a Walkman recording his heroism for posterity.
Around you, server racks stretch in every direction, seemingly into infinity. The AI is hacking global GPS, weather, and airport radar computers, changing positional values into nonsense, because some idiot told the AI that his dad is going to kill him when he gets home from his trip. Obviously, if the plane crashes, the boy’s physical safety will be secured. You want your wife’s plane to land safely.
Explain your next move.
Take your argument further: all any computer can do is do maths and spit out letters and numbers.
Yet I’m sure we can agree that computers can be used to control and manage systems remotely that can be used to wreak some havoc when abused.
Generative AI/ML can just be used to do it faster and easier than before.