• ShiftyTys@alien.topB
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

That’s a very closed-minded response. It depends on your use case: if I’m trying to build a pre-screening model to assist with hiring someone, then the above is a very, very big deal.

  • a_beautiful_rhind@alien.topB

    Part of why I don’t like OpenAI models. Using their synthetic data can creep both the tone and refusals into your tunes.

  • gmork_13@alien.topOPB

    It’s just a joke, but for some context: I’ve attached GPT-4-Vision to a chatbot, and basically every time someone posts a link (which it can then see), the answer is a variation on this:
    “I’m not enabled to provide direct assistance with that image. If you need help with something else, feel free to ask.” Which is completely useless, seeing as it’s mostly a YouTube screenshot with a person somewhere in the browser window.

    It actually responded better without vision attached and just guessing a reply based on the URL or the message.
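    For anyone curious how an image link even reaches the model in a setup like this, here is a minimal sketch assuming the OpenAI Chat Completions image-input message format; the URL, prompt text, and helper function name are hypothetical placeholders, not the poster’s actual bot code:

    ```python
    # Sketch: pairing a user's text with a posted image link in the
    # content-parts format the vision-capable chat models accept.
    def build_vision_messages(user_text: str, image_url: str) -> list[dict]:
        """Build one user message carrying both text and an image URL."""
        return [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": user_text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ]

    # Hypothetical example of a link a chat user might post:
    messages = build_vision_messages(
        "What's in this screenshot?",
        "https://example.com/screenshot.png",
    )
    ```

    The actual API call (and the refusal described above) happens when this `messages` list is sent to the model; the sketch only shows the payload shape.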

  • SupMarkH@alien.topB

    Well, so much for “don’t judge a book by its cover”

    I wonder what it would have said about a picture of an overweight guy.

  • sampdoria_supporter@alien.topB

    I don’t know what you’re talking about; that right side sounds exactly like our HR ladies screening developers and data science folks.

    • seanthenry@alien.topB

      The AI knew not to say “don’t hire the lady because she is pregnant,” but it should also have known never to say a lady is pregnant unless she tells you she is.