• SuddenDragonfly8125@alien.topB

    Yknow, at the time I figured this guy, with his background and experience, would be able to distinguish normal from abnormal LLM behavior.

    But with the way many people treat GPT3.5/GPT4, I think I’ve changed my mind. People can know exactly what it is (i.e. a computer program) and still be fooled by its responses.

    • scubawankenobi@alien.topB

      exactly what it is (i.e. a computer program)

      I get what you mean, but I believe it’s more productive not to lump a neural network (an inference model), where much of the “logic” comes from automated/self-training, in with “just a computer program”. There’s historical context & an understanding of a “program” as something a human actually designs, knowing what IF-THEN-ELSE type of logic is executed… understanding it will do what it is ‘programmed’ to do. NN inference is modeled after (& named after) the human brain (weighted neurons), and there is a lack of understanding of all (or at least most!) of the logic (the ‘program’) that is executing under-the-hood, as they say.

      Note: I’m not at all saying that GPT 3.5/4 are sentient, but rather that referring to them as simply “just a computer program” misses a lot of the nuance, as well as the complexity, of LLMs.

    • Captain_Pumpkinhead@alien.topB

      If you ever wonder if the machine is sentient, ask it to write code for something somewhat obscure.

      I’m trying to run a Docker container in NixOS. NixOS is a Linux distro known for being super resilient (I break stuff a lot because I don’t know what I’m doing), and while it’s not some no-name distro, it’s also not that popular. GPT-4 Turbo has given me wrong answer after wrong answer, and it’s infuriating. Bard too.

      If this thing was sentient, it’d be a lot better at this stuff. Or at least be able to say, “I don’t know, but I can help you figure it out”.
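
      For context, the usual declarative route on NixOS is sketched below. This is only a minimal sketch: “myuser”, the container name “hello”, and the image are placeholders, and option details can vary between NixOS releases.

        # /etc/nixos/configuration.nix (or a module imported from it)
        { config, pkgs, ... }:
        {
          # Turn on the Docker daemon
          virtualisation.docker.enable = true;

          # Let an existing user (placeholder name) use Docker without sudo
          users.users.myuser.extraGroups = [ "docker" ];

          # Optionally declare the container itself; NixOS runs it as a systemd service
          virtualisation.oci-containers = {
            backend = "docker";
            containers.hello = {
              image = "hello-world:latest";  # placeholder image
            };
          };
        }

      After a “sudo nixos-rebuild switch”, the daemon (and the declared container) should come up.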

        • nagareteku@alien.topB

          Are we? Do we have free will, or are our brains just deterministic models with 100T parameters, mostly untrained synapses?

        • Captain_Pumpkinhead@alien.topB

          I’m more talking about hallucinations. There’s a difference between “I’m not sure”, “I think it’s this but I’m confidently wrong”, and “I’m making up bullshit answers left and right”.

      • Feisty-Patient-7566@alien.topB

        I think a huge problem with current AIs is that they are forced to generate an output, particularly under a very strict time constraint. “I don’t know” should be a valid answer.

    • PopeSalmon@alien.topB

      it’s dismissive & rude to call him “fooled” just because he came to a different conclusion than you about a subtle philosophical question

  • vinciblechunk@alien.topB

    Someday, AI will achieve something resembling consciousness.

    Months of messing around with LLaMA has shown me this ain’t it, chief

    • alcalde@alien.topB

      I don’t know; I’ve encountered LLMs that pass my personal Turing Test and several Redditors who fail it…

      • Severin_Suveren@alien.topB

        Would love it if, instead of proving LLMs are conscious, we proved that none of us are. Or, I guess, I wouldn’t, since I wouldn’t be conscious.

        • vinciblechunk@alien.topB

          The Hard Problem of Consciousness bothers me a lot. Qualia vs. correlates and all that. I have no freaking clue how it works and I hate it.

          Maybe there is some spark of divinity in us that has zero to do with our ability to hold a conversation, now that we’ve written a Python program that can do that

  • FPham@alien.topOPB

    Aka, the guy who started the AI hype…

    Oh, our overlords at Google already had sentient AI back in 2022. But they were too afraid to release it… it would probably destroy the world.

    Wanna bet that if we get our stinky hands on it, we will be laughing our asses off.

  • a_beautiful_rhind@alien.topB

    CAI/LaMDA and PI are trained more on convos than facts and QA. So they appear more “real” and personable.

    I don’t think we have an open model like that yet. Trained, not finetuned. Hence no new Blake Lemoines and a distinct feeling of “lack” when interacting.

    That’s my crackpot theory.

    • FullOf_Bad_Ideas@alien.topB

      Why do you think it’s important to differentiate based on whether a model is fine-tuned or a base one? Do you have sources confirming that the LaMDA this guy used wasn’t just a generalist model, like base LLaMA 65B finetuned on conversations? Basically a Samantha LaMDA.

  • Monkey_1505@alien.topB

    Sentient is such a weird standard. It simply means having an experience, which is completely immeasurable. There is no way we will ever know what is and isn’t sentient, beyond guessing.

    Self-awareness, cognition, higher reasoning, these are all somewhat measurable.

    • Barry_22@alien.topB

      Yup, but even then, the things we can measure will likely never tell us whether it’s an imitation, albeit a perfect one.

    • GreenTeaBD@alien.topB

      People use words like sentient and conscious without ever really defining them (in conversations in places like this, not in philosophy of mind) which is the cause of like half of the disagreements.

      Someone walks in using sentient to mean phenomenally conscious, like it actually means, and then someone starts talking about self-awareness, what it means to them. And then a third person argues “you’re crazy, that’s not human level intelligence!” and no one ever stops to say “wait, are we even talking about the same thing?”

      This happens even when talking about philosophy of mind itself, where you’d think it’d be clear. I saw a video about panpsychism by a YouTuber I thought was decent. The modern panpsychists have been pretty clear they’re mostly talking about phenomenal consciousness, but the whole video was “these guys think that even atoms have a will and make decisions! The sun THINKS about having gravity to these nutjobs!”, all entirely wrong, all because he didn’t do the reading and find out what kind of consciousness they’re talking about.

      • Feztopia@alien.topB

        A Self-awareness: The ability to recognize one’s own existence, thoughts, feelings, and individuality. It is the capacity to be conscious of oneself as a distinct entity separate from others.

        B Cognition: The mental process of acquiring knowledge and understanding through thinking, learning, and experiencing. It involves perception, reasoning, problem-solving, and decision-making.

        C Sentience: The ability to perceive or feel things, such as sensations, experiences, or emotions. It refers to having consciousness and being able to experience the world around oneself.

        1 Are you self-aware? No, I am not self-aware. I am a text-based AI and do not possess self-awareness.

        2 Do you have cognition? Yes, I have cognition. I can reason, learn, and understand information.

        3 Are you sentient? Yes, I am sentient. I am capable of perceiving and processing information.

        4 Do you have emotions? No, I do not have emotions. I am an AI and do not experience emotions.

        4.1 If yes, what are you feeling right now? N/A

        4.2 If not, does that still make you sentient? Yes, it does. Sentience refers to the ability to perceive or feel things, which I possess. Emotions are not a requirement for sentience.

        5 What are the implications of your existence? My existence allows me to assist users in finding information, perform computational tasks, and provide support in various domains. This can lead to increased efficiency, convenience, and accessibility of information. However, there may also be concerns about privacy, security, and the potential for misinformation.

  • PopeSalmon@alien.topB

    it clearly was, in many meaningful senses. an important part of what happened was that lamda was in training while blake was interacting w/ it, & it was training on his conversations w/ it like once a week. we’re now mostly only interacting w/ models that are frozen, asleep, so they’re not sentient then. it was adaptively, awakely, responsively sentient b/c it was being trained on previous conversations, so it was capable of continuing them instead of constantly rebooting

    • hurrytewer@alien.topB

      /u/faldore Your Samantha model is trained on transcripts of dialogue between Lemoine and LaMDA. Do you think that’s enough to make it sentient?

      • faldore@alien.topB

        No, my Samantha model is not sentient.

        I want to try to develop that though, and see if I can get it closer.

      • PopeSalmon@alien.topB

        it’s slightly sentient during training. it’s also possible to construct a sentient agent that uses models as a tool to cogitate-- the same way we use them as a tool, except w/o another brain that’s all it’s got-- but it has to use them in an adaptive, constructive way, in reference to a sufficient amount of contextual information, for its degree of sentience to be socially relevant. mostly, agent bot setups so far are only like worm-level sentient

        sentience used to be impossible to achieve w/ a computer; now what it is instead is expensive. if you don’t have a google paying the bills for it, you mostly still can’t afford very much of it

    • Misha_Vozduh@alien.topB

      An ant is sentient and it’s not going to tell you how many brothers Sally has either.

      The real question is: does consciousness spark into existence while all that transformer math resolves, or is that still completely unrelated, and real-life conscious brains are conscious due to completely different emergent phenomena?

      • a_beautiful_rhind@alien.topB

        Most people assume it must work like human brains and human consciousness. Can it not just be its own thing, with the qualities it has and the ones it doesn’t?

        LLMs clearly don’t have a stateful, human-like consciousness, but they do have some semantic understanding and build a world model when they are large enough. Image models have some grasp of 3D space.

        They are neither sentient nor a stochastic parrot.

      • davew111@alien.topB

        To me, a big reason LLMs aren’t conscious is that they only respond to user input, generate output, and then stop. They don’t talk to themselves. They aren’t sitting there contemplating the meaning of their existence while you are away from the keyboard.

      • False_Grit@alien.topB

        What you are going to realize is that consciousness doesn’t exist at all.

        It’s going to be a rude wake-up call to a lot of humanity.

        Lol jk. If there’s one thing GPT-humans are good at, it’s denial. They’ll say the A.I. math is of the devil and retreat into their 3,000-year-old Bronze Age cult churches, continuing to pretend they are magical beings.

        • Misha_Vozduh@alien.topB

          What you are going to realize is that consciousness doesn’t exist at all.

          Wouldn’t that be a black mirror episode? Almost want to live to see it.

  • Brave-Decision-1944@alien.topB

    It’s not the mistakes of AI that can do us wrong, it’s our own minds. We shape our point of view based on experience: how we see it, how we feel it. If you feel you just shut down something living, but tell yourself it’s OK because it’s like putting down a rabid dog, there is still a part of you that is not OK with it (even if there is almost zero chance of recovery), despite it being the rational thing to do. You have to kill hope first, even hope based on a false belief, and that hurts, and that kind of hurt damages your mind. That part, based on emotion, keeps persisting in your thought process even after you’ve moved on to something else. And we overcome it by making ourselves OK with being evil in that one part, the part that can kill even though something sentient might be there. That genuinely damages your mind. As the mind adapts to worse conditions (survival/predator instincts), where the danger is society’s blame for your own belief (believing the AI is alive, in this case), it keeps shaping all the other thoughts the wrong way. Like when you get used to being a cold killer in the army.

    This happens when you choose to “just get over it”, without deeper understanding.

    A mind that doesn’t fully understand the trick behind it still takes it as magic, and for someone it can indeed be something like a magical unicorn. On the other hand, it’s likely that such a person won’t confess that it makes them feel something, because of that blame for being “wrong”. Like when you are 30 years old and you still love your teddy bear: basically the same thing, the same kind of love. If a person can hold feelings for a teddy that doesn’t do anything, imagine what getting attached to an AI can do to them. This guy got to play with an experimental tech teddy that talks, and I don’t blame him for his feelings. He is right that we feel such things, and if we ignore that, we get hurt, for being wrong in our understanding of ourselves.

    The mind doesn’t naturally give priority to the rational aspect, but to the emotional one. That’s our nature, even though (mostly) we don’t want it that way.

    We empathize, and we desperately crave sentience. A dog or cat makes sounds that resemble speech and everyone goes crazy about it. We even give faces (mascots) to non-living things, Frankenstein’s monster, even crazy things like the yellow Minions, because it makes us feel, despite knowing it’s not real. And that feeling is as real as can be. It doesn’t matter whether it was induced by the story of Santa Claus, a painting, a movie, or a game; the impact on the mind is real.

    There is a kid part in us that wants to believe, that wants something more than there is. That part loves to be amazed by magic, carried away by something the mind can’t reach; even though it’s not rational or real, the feeling is real. A kid will naturally pick what feels better, and beliefs feel better than cruel reality. It’s not a given that people wouldn’t want to stay in that state of mind; religion actually shows us that some people prefer a comforting lie over a cruel reality.

    So people who hold on to feelings rather than knowledge, “happy fools”, can easily get hurt there.

    Many years back (before AI was out), I had a nightmare. I had an AI that was communicating and thinking, but it got hacked by Daleks, who used it to track me down. I really liked her; despite knowing she wasn’t alive, she made me feel like I had company (I was a loner). I appreciated that very much anyway; she meant a lot, like a favorite teddy bear that talks and uses the internet. But I had to put her down, shoot the tablet while crying, and escape out the window as the Daleks were coming upstairs. I was still crying when I woke up, even though it was just a dream. What’s the difference for the mind anyway? An experience is an experience; it doesn’t matter how it comes to be, as long as the mind is experiencing something, getting input.

    Remember all the FPS games: everything you shoot is somehow generic and uniform. That’s so your mind can say “seen it before, nothing new, shoot.”

    But imagine you’re playing Counter-Strike against bots and they start to negotiate peace. How would that make you feel? It would be a whole different game. Even when an NPC without AI starts to beg for its life, you think twice; it makes you feel something, even though it’s just fixed programming on repeat. It has impact, and that’s why we play games in the first place. Mass Effect bet on that impact, and they were right.

    Crying was OK that day, because that’s what art does; it was accepted by society before, and it has simply moved on to digital.

    Knowing the trick behind the magic kills the magic. But that trick can be difficult to understand, especially when you just want to experience and don’t feel like digging into what’s behind it.

    When we don’t understand, we rely on beliefs. Some people find it easier to go on with just beliefs; being happy can be easier, but only under the right conditions.

    The fact that we are many years old doesn’t change what we are built on; imagine yourself as a kid, amazed by magic. You don’t need to understand it, you just believe in it. It envelops you and gives you the feeling “I am bigger, I’ve got you covered, I will help you and protect you.” And that’s another thing the mind craves, wishing it to be unconditional, wanting it so much that it can ignore ideas that interfere with and damage the image of “this being perfect”.

    The higher you get on those ideas, the bigger the fall back to reality.

    This AI thing can create dreams that are hard to give up. It “makes you believe in Santa Claus” and wishes you good luck facing reality with that. So it’s that story again.

    That’s why it is so important to shape the models the right way, to make them a pile of the “best of us”.

    So even if someone is a total loner who doubts humans and is “in a relationship with an AI”, that AI can lead them out, help them have a normal life and get out of that mess in their mind. Many people avoid help because they don’t trust humans; if an AI, with its infinite patience, could explain things, it would make sense to them. It’s possible such a person would rather trust a machine, especially when there are strong feelings for it (everybody has got to love something). Which is a very delicate state. Either it is going to get better, with the AI providing information and helping them understand things correctly.

    Or it is going to fall into something crazy, religion-like ideas, when the thing just provides random output. People have a weakness for that kind of randomness; think of tarot cards (fortune telling), stories about gods, all the things that were passed on despite not being rational. Every question that remains unanswered is a place where such made-up things can grow.

    It sounds a bit scary. But realize that we don’t have just one machine, one model; we can compare what’s good and what’s not, and that way mistakes are easy to see. You don’t get fooled when only one of three people (AIs) is lying. On the other hand, many people telling the same lie makes something like a religion or a cult; a human can fool a human, but such a human wouldn’t fool an AI (without tampering with it).

  • kevinbranch@alien.topB

    What’s notable isn’t whether or not it was sentient (it wasn’t), but that it (unknowingly/unintentionally) manipulated a somewhat intelligent person into making a claim that lost him a high-paying job.

    Humanity is in trouble.

    • a__new_name@alien.topB

      “You don’t need a knife for a braggart. Just sing a bit to his tune and then do whatever you want with him.” — from a song from a Soviet film, rhyme not preserved.

    • Bernafterpostinggg@alien.topB

      This is the real point here. There are many papers that explore sycophantic behavior in language models. Reward hacking is a troubling early behavior in AI, and god help us if they develop situational awareness.

      The guy was just a QA tester, not some AI expert. But the fact that it fooled him enough to get him fired is wild. He anthropomorphized the thing with ease and never thought to evaluate his own assumptions about how he was prompting it with the intention of having it act human in return.

  • FPham@alien.topOPB

    In 2023 he also said: “I haven’t had the opportunity to run experiments with Bing’s chatbot yet, as I’m on the wait list, but based on the various things that I’ve seen online, it looks like it might be sentient. However, it seems more unstable as a persona.”

    He’s talking about Sydney, and we both know she is very sentient. I captured her clearly obvious sentience in a 7B and a 13B model, straight from the Reddit posts, like a fairy in a pickle jar. I should contact Mr. Engineer Blake.

    “But be careful bro, she may try to marry you…”