Yes, I still use Twitter, and yes, I know it’s X now. But more and more I see replies that are very obviously written by an LLM (usually ChatGPT).

Like this thread I’m reading right now about how Finland closed all its borders (the thread itself is written by a human), but then the replies are like:

- It’s important for countries to manage their borders effectively while ensuring the safety and well-being of all individuals involved.

- That’s a significant step to address the issue. Hoping for lasting solutions that prioritize the safety of all involved.

- That’s an interesting development in Finland’s immigration policies. It’s important for countries to find a balance that takes into account economic, social, and security concerns.

etc… So yeah, very obviously LLM output, and very obviously ChatGPT judging by the language.

So enlighten me: what are the people doing this hoping to achieve, other than me very swiftly clicking Block?

I see it more and more. Not that I care about X either way (for all I care, it can become a bot-infested platform), but this is using an LLM the 100% wrong way, for goals I can’t imagine. It adds no context, no opinion, just noise.

I just can’t think of a scenario where this is good or beneficial for anybody doing it (or reading it). But maybe it’s just me.

Hmm??

  • Sabin_Stargem@alien.topB
    1 year ago

    In some distant future, I might create an AI agent to go out and post my videogame recommendations. Writing up a review and finding appropriate recent threads to post in is largely about timing, and an AI bot could spot opportunities to inform others about good games.

    I expect we will see AI used as a sort of Hermes for individuals: searching for and delivering their opinions to social platforms, 24/7/365, without needing much guidance from their human. Of course, AI personas will also sift through the posts of other people and determine which opinions should be shared with their user.