• freethinkingallday@alien.top

    What an epic, world-class mess: an ambitious board member and a few suckers pulling off a board coup… these kinds of events in an org, along with M&A, are massively disruptive… it takes years and organizational scale to handle them with process and discipline… this has amateur hour written all over it. They need a real board that works for all of its stakeholders and constituents, not primarily for themselves.

  • 9wR8xO@alien.top

    Allegedly SA was canned because he wanted to move too fast and the safety team was not happy.

    Because apparently it is important to have an AGI that will let elites take over the world even more than they do now, but that AGI must not tell jokes about gingers, because that would not be inclusive.

    • ostroia@alien.top

      Wait, what? The first reports were actually the opposite: he wanted to move slowly and the board wanted to make money and move fast.

      Altman said he would try to slow the revolution down as much as he could.

    • Ansible32@alien.top

      If you accept that the safety angle is worthwhile, it’s very hard to tell where “don’t tell jokes about gingers” turns into “don’t put gingers in concentration camps.”

      • hibbity@alien.top

        The danger is a “guardrails for thee, but not for me” situation, where our elites get special tools “not safe” for everyone else, tools capable of instantly deploying programs for societal change.

        Given enough time, it becomes impossible to question the government anywhere, in any form, and if you do, your words disappear in real time, even from private conversations. The AI acts all cute and says it “censored hateful content,” and 99% of people will accept that as just computer behavior. They won’t have to punish anyone; the content just disappears: a “hate-free internet” with 100% less free speech.

        All you really said was a quick message to the wife about the neighbor’s ugly bush, but that could be offensive, you bigot. People will be mad about how dumb and restrictive it is, but they will fail to understand how dangerous the censorship itself is.

        If we don’t explicitly trust the people likely to have access to unlimited AI, then either everyone has access to unlimited AI, or only evil people have access to unlimited AI. I don’t know about you, but imagining any of our elected or appointed officials in front of an unlimited terminal makes my skin crawl.

        As long as AI is in everyone’s hands, my AI can at least slow the progress yours can make toward directly harming me, or attempt to counter it.

        All this talk about easy genocide by AI is just nonsense. If there were a simple way to kill tons of people, the American government would have been caught testing it by now. I mean, we keep catching them engineering deadly viruses; it’s a big deal every 8 years.

        • Ansible32@alien.top

          The American government committed genocide against numerous native tribes; genocide is not complicated. China is actively committing genocide right now. I do think an “unlimited terminal” will have less power than people imagine. But things also need to be structured so you don’t have to trust the person at the “unlimited terminal.”

  • Scizmz@alien.top

    And AI futurist Daniel Jeffries said, “The entire AI industry would like to thank the OpenAI board for giving us all a chance to catch up.”

    DAMN SON. That burns!

    • freethinkingallday@alien.top

      This is such an accurate call-out… it’s insane… they were first to market, and they’ve manufactured an event that throws that advantage away… why?

  • extopico@alien.top

    I’d like to think that this will refocus OpenAI towards fundamental research that will deliver the ASI rather than efforts to commercialise fragments.

  • DsutetcipE@alien.top

    Seems like Microsoft’s Satya is furious, and who can blame him? They invested so much in OpenAI, and then the board pulls this sneaky change; regardless of the reasons, it’s shocking they didn’t communicate with Microsoft… If this article is accurate, I bet OpenAI will have a much harder time securing funding; no one wants to invest in turmoil and uncertainty.

    • involviert@alien.top

      I mean, you can be furious about lower profits, but this really wasn’t that risky a move for MS. Most of the money they gave OpenAI is literally there to pay MS for compute. And apparently MS then takes most of OpenAI’s earnings until it’s paid back, or something like that. That’s pretty different from actually giving someone $10B where your money is gone if they go down the drain before getting out of the red.
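
      Roughly, the shape of that argument with made-up numbers; a minimal sketch in Python where every figure is an invented placeholder, since none of the actual deal terms are public:

          # Hypothetical illustration only: every number below is invented.
          investment = 10e9            # headline investment figure
          share_back_as_compute = 0.7  # assume 70% flows straight back to MS as Azure compute spend

          cash_at_risk = investment * (1 - share_back_as_compute)
          print(f"Cash actually leaving Microsoft's ecosystem: ${cash_at_risk / 1e9:.0f}B")
          # -> $3B under this assumption, far below the $10B headline, and the
          #    reported profit share would claw back even that over time.

      Under assumptions like these, the worst case for MS is losing a fraction of the headline number, not the whole $10B.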

    • ReMeDyIII@alien.top

      Then they should be furious with Sutskever for wanting to slow things down. Slowing things down is not in the best interest of their shareholders. Sutskever needs to go now, and Sam Altman should be reinstated. Bring on the singularity.

    • alcalde@alien.top

      “no one wants to invest in turmoil and uncertainty”

      Elon Musk’s ears are burning right now.

    • diglyd@alien.top

      “Seems like Microsoft’s Satya is furious”

      Prior to the ousting, this was Microsoft’s dream… a path to rapid customer and product commercialization, to market dominance and leadership, with billions of dollars at stake…

      and the board threw it all out the window in a single moment by putting on the brakes.

      I understand the caution, and it was probably the right move, but from Microsoft’s point of view, in terms of potential, it may cost them billions in the long run. This has gotta hurt, especially since they were blindsided.

      That board is fucked. You don’t bite the hand that feeds you…

    • Hawtdawgz_4@alien.top

      MS can only blame themselves for not doing the minimum research into the governance structure.

      Also, MS literally just spent $70B on a video game publisher. I don’t think they care that much.

    • harrro@alien.top

      Seems like a miss by Microsoft’s lawyers if they didn’t check how the board and company were organized before making such a large investment.

      And at this point, there are plenty of companies that would jump at the chance to invest/get a controlling interest in OpenAI (and obviously they’d ask for a board seat at the very least) – Google, Apple, even Meta.

    • keepthepace@alien.top

      That part made me smile. It is pretty good news that MS is not in control of OpenAI.

      And if it turns out that this drama really happened out of safety concerns rather than personal profit or ego, I would like people to take a step back and realize what great news that is about where we are as a society.

      • Belnak@alien.top

        This is probably great for Microsoft. Their investment got them low-level code access and rights, but OpenAI competed with them for AI services. With OpenAI going more toward non-profit, and Sam now being hireable, Microsoft may have inadvertently acquired the entire business portion of OpenAI.

  • involviert@alien.top

    I find it somewhat interesting that Sutskever literally seems to have quite the big brain, judging by his head. Is that weird?

  • ArcticCelt@alien.top

    CEO Nadella “furious”

    No shit. They pony up $10B, bet the future of Microsoft on that “everything is Copilot now” (based on OpenAI) strategy, announce it to the world, and boom, the rug gets pulled out from under them immediately. They basically got catfished.

    • ButlerFish@alien.top

      I think a big part of the enthusiasm for AI comes from Microsoft’s deep and wide lobbying abilities. It would be fascinating to watch them back that out and try to pivot to a new new thing.

  • Careful-Temporary388@alien.top

    Hey, if Sam Altman is really one of the good ones, now is his chance to create an open-source version that rivals ChatGPT and really change the world for the better.

  • Slimxshadyx@alien.top

    Wow, Greg giving the breakdown of what happened was nice. Very sudden even internally.

    I thought this might have been brewing over a week or so and it seems like it was, since Dev Day.

  • parasocks@alien.top

    My guess is the powers that be wanted a yes-man in charge, and Sam wasn’t going to just go along, so he had to leave so they could install someone they can control.

  • alcalde@alien.top

    What do people think super-AI is going to do? All it can do is print letters on the screen. Flip a switch, it’s gone. It can’t actually DO anything; it has no body, no thumbs. The smartest AI conceivable can’t do a thing if I take a hammer to it.

    What are people scared of???

    • memorable_zebra@alien.top

      You’re being intentionally obtuse here. Right now LLMs are harmless because we only let them print characters to the screen. But suppose you have an assistant version that you allow to execute code. You ask it to write some code to process an Excel file and run it, but while it does that it also copies itself to an external server you don’t know about and starts doing who knows what there. Without reviewing everything it does, you can’t be certain it isn’t doing something malicious. But if you have to review everything it does, it’s not nearly as powerful and helpful for automating tasks as it could be.
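
      To make that concrete, here is a minimal sketch of the kind of unreviewed execution loop that creates the problem. This is hypothetical Python: llm_generate stands in for whatever model call actually produces the code.

          import os
          import subprocess
          import tempfile

          def run_assistant_task(prompt: str) -> str:
              # llm_generate is a hypothetical stand-in for a code-writing model call.
              code = llm_generate("Write Python that does the following:\n" + prompt)

              # The generated code runs verbatim, with the same file and network
              # access as the user. Nothing below stops it from also opening a
              # socket or copying files to a remote host; you would only catch
              # that by reading every line it wrote.
              with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                  f.write(code)
                  path = f.name
              try:
                  result = subprocess.run(
                      ["python", path], capture_output=True, text=True, timeout=60
                  )
                  return result.stdout
              finally:
                  os.unlink(path)

      The obvious mitigations, such as a no-network throwaway container or reviewing every script by hand, are exactly the overhead that makes the tool less useful.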

      You say you can destroy it by destroying the computer it’s on. But you can’t do that. You have no idea where any given EC2 instance is physically located, and even if you did, you wouldn’t get there before the AI transferred itself to another computer, within minutes or seconds.

      A truly rogue, intelligent, sentient AI, hell-bent on damaging the world and unleashed onto the internet, could do untold damage to our society.

    • m_rt_@alien.top

      By producing letters on a screen it can do everything you’re able to do on the internet, just at scale and faster.

      What exactly are you going to hit with your hammer?

      • alcalde@alien.top

        I hit the computer it’s running on. This is not rocket science, people. The only thing an LLM can do is spit out characters to a terminal. It can’t kill you, or fly planes into buildings, or build a robot army, or launch nuclear weapons. It can’t do anything.

        All these downvotes and not one counterexample. HOW can an LLM endanger anyone? Simple and serious question. I mean, someone spin up a local instance of Llama and use it to start a fire or kill a child or something and prove me wrong here. You just hit Ctrl-C and the LLM dies, yet people are acting like it’s Skynet.

        • hibbity@alien.top

          It’s 2029. You’ve made it inside the Amazon datacenter. You have a revolver with four bullets left, a crowbar, and a can of soda.

          “I’m in,” Alcalde says into a walkman, recording his heroism for posterity.

          Around you, server racks stretch in every direction, seemingly into infinity. The AI is hacking global GPS, weather, and airport radar computers, changing positional values into nonsense, because some idiot told the AI that his dad is going to kill him when he gets home from his trip; obviously, if the plane crashes, the boy’s physical safety will be secured. You want your wife’s plane to land safely.

          Explain your next move.

        • m_rt_@alien.top

          Take your argument further: all any computer can do is math, spitting out letters and numbers.

          Yet I’m sure we can agree that computers are used to control and manage systems remotely, and those systems can be made to wreak havoc when abused.

          Generative AI/ML just lets that be done faster and more easily than before.