Yes, I do still use Twitter, and yes, I know it’s X. But more and more I see replies that are very obviously written by an LLM (and notably by ChatGPT).
Like this thread I’m reading right now about how Finland closed all its borders (the thread itself is written by a human), but then the replies are like:
- It’s important for countries to manage their borders effectively while ensuring the safety and well-being of all individuals involved.
- That’s a significant step to address the issue. Hoping for lasting solutions that prioritize the safety of all involved.
- That’s an interesting development in Finland’s immigration policies. It’s important for countries to find a balance that takes into account economic, social, and security concerns.
etc… so yeah, very obviously an LLM. Very obviously ChatGPT, judging by the language.
So enlighten me: what are the people doing this hoping to achieve, other than me very swiftly clicking Block?
I see it more and more. Not that I care about X either way (for what it’s worth, it can become a bot-infested platform), but this is using an LLM the 100% wrong way, for goals I can’t imagine. It adds no context, no opinion, just noise.
I just can’t find a scenario in which this is good or beneficial for anybody doing it (or reading it). But maybe it’s just me.
Hmm??
Young and bored teenagers would get a nice chuckle seeing people unknowingly having convos with their bots online.
Imagine you hate political candidate A but love political candidate B. Imagine setting your bot up to trash A and promote B.
Even more entertaining would be to set up your bot to debate and waste people’s time. Go to sleep and wake up to see your bot has been arguing with someone for 8 hours, wasting their time. That would be hilarious to a troll.
The ultimate end game is selling persuasion.
Sentiment is scraped from Twitter, and trading and policy decisions are ultimately derived from it.
Because a frightening amount of people still think Twitter matters.
Astroturfing just got orders of magnitude cheaper with the advent of LLMs. This, along with spam and advanced phishing, is one of the real and present dangers of this technology. It’s a battle between content platforms and any bloke with an axe to grind, and it’s probably a losing battle for the content platforms.
Genuine human to human interaction online is going to become rare and tedious. Can’t even imagine what kind of captchas they’ll have to come up with to fool the next generation of multimodal models.
Almost like we should just go back to meaningful face to face communication.
People will retreat further into bubbles with bots they agree with, bots they may often know to be bots, perhaps framed as something likable, like an anime girl.
Commu - what? Blasphemy…
over Skype, right? ;)
It’s a lot cheaper than paying humans to spread propaganda.
Consider that the audience isn’t you, it’s people who lack discernment. It’s like those scam emails. People with good judgement delete them.
The other audience is engagement algorithms.
Not only that, but the cost of additional fine-tuning is negligible for state actors. And current open-source LLM context lengths are just ideal for Twitter.
I also wouldn’t be shocked if there are bot campaigns both for and against the issue at hand, run by the same groups, to make it confusing for human onlookers and increase polarization.
Yes, state actors and companies were already able to do that. More important than the price going down for them is that this now lets far more groups and individuals engage in it.
Lazy promotion.
Unethical practices: one-man shops artificially pumping up an account’s value, aiming to sell it later on.
I’m not doing that but my guess is it’s fun, easy, and cheap to do (only $8/mo!) and potentially lucrative if you can cheese a following somehow.
Using GPT is really lazy though, when it’s so easy to train a custom LoRA on a 13B model that will actually interact like a human.
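For the curious, a minimal sketch of what that looks like with Hugging Face transformers and peft. The base model, the toy dataset, and the hyperparameters are all illustrative assumptions, not anything confirmed in this thread:

    # LoRA fine-tune sketch: adapt a 13B causal LM on scraped reply text.
    # Base model, data, and hyperparameters are placeholder assumptions.
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "meta-llama/Llama-2-13b-hf"  # any 13B base model would do
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers ship without one
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

    # Attach low-rank adapters to the attention projections; only these train.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))
    model.print_trainable_parameters()  # typically well under 1% of the weights

    # Stand-in for a real corpus of replies written in the persona you want.
    texts = ["Honestly, the ferry schedule is the real story here.",
             "lol ok but have you read the actual bill?"]
    ds = Dataset.from_dict({"text": texts}).map(
        lambda row: tokenizer(row["text"], truncation=True),
        remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out",
                               per_device_train_batch_size=2,
                               num_train_epochs=3, learning_rate=2e-4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
    model.save_pretrained("lora-out")  # adapter weights only, small enough to share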
Basically, it enhances your status in the algorithm, so it’s worth having some bots that will talk you up. Like creating AI friends to tell everyone how cool you are. But, since it’s largely algorithmic/AI determining who should see your content, it works.
I’m thinking that’s probably it.
Valentine and Peter have entered the chat
Elon Musk has made it profitable
It’s also about propaganda. Instead of hiring thousands of people to comb the internet and post a country’s propaganda on any topic related to its point of view, have ChatGPT do it for you.
In some distant future, I might create an AI agent to go out and post my videogame recommendations. Writing up a review and finding appropriate recent threads to post in is mostly about timing. An AI bot could spot opportunities to inform others about good games.
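The mechanics are already close to trivial. A minimal sketch, assuming the tweepy client for the X API and the OpenAI Python client, with the credentials, search query, and model name all placeholders:

    # Reply-bot sketch: find fresh threads on a topic, draft a reply, post it.
    # Credentials, the search query, and the model name are placeholder assumptions.
    import tweepy
    from openai import OpenAI

    x = tweepy.Client(bearer_token="...",
                      consumer_key="...", consumer_secret="...",
                      access_token="...", access_token_secret="...")
    llm = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Recent tweets where a game recommendation would be on-topic.
    found = x.search_recent_tweets(query='"game recommendations" -is:retweet',
                                   max_results=10)
    for tweet in found.data or []:
        draft = llm.chat.completions.create(
            model="gpt-4o-mini",  # assumed; any chat model works
            messages=[{"role": "system",
                       "content": "Reply in two sentences recommending one game that fits."},
                      {"role": "user", "content": tweet.text}],
        ).choices[0].message.content
        x.create_tweet(text=draft, in_reply_to_tweet_id=tweet.id)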
I expect that we will see AI used as a sort of Hermes for individual people - searching and delivering their opinions to social platforms, 24/7/365, without needing much guidance from their human. Of course, AI personas will also sift through the posts of other people, and determine which opinions should be shared with their user.
More engagement = more ad revenue. There’s also stuff like Operation Earnest Voice.
That’s an order-of-magnitude improvement over the average Twitter post in terms of grammar, composition, and the ability to hold a coherent thought for a few seconds, and most importantly it does not make your blood boil with rage. Why are you complaining?
I use twitter while drinking morning coffee - it makes it 2x stronger.
Probably to troll Elon