Can you spot the bots? New research says not
Until recently, making convincing fake social media profiles at scale was a challenge: profile images could be traced back to their source, and the text often didn't sound human.
Today, with rapid advances in artificial intelligence, telling the difference is becoming increasingly difficult. A team of Danish researchers conducted an experiment with 375 participants to test how hard it is to distinguish real social media profiles from fake ones.
They found that participants were unable to differentiate between artificially generated fake Twitter accounts and real ones, and in fact rated the fake accounts as less likely to be fake than the genuine ones.
The researchers created their own mock Twitter feed on the topic of the war in Ukraine. The feed included real and generated profiles, with tweets supporting both sides. The fake profiles used synthetic profile pictures created with StyleGAN and posts generated by GPT-3, the language model family behind ChatGPT.
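The paper does not detail its generation pipeline, but as a rough illustration, the sketch below shows how a tweet-length post could be produced with a GPT-3-style model through the OpenAI Python client. The prompt wording, model name and sampling parameters are assumptions for illustration only, not the researchers' settings, and the StyleGAN profile picture would be generated in a separate step not shown here.

```python
# Illustrative sketch only: generate a short, tweet-length post with an
# OpenAI completion-style model. The prompt, model and parameters are
# assumptions, not the settings used in the study.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Write a short, informal tweet (under 280 characters) from an ordinary "
    "person commenting on the war in Ukraine."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # stand-in for the GPT-3 model used in the study
    messages=[{"role": "user", "content": prompt}],
    max_tokens=80,
    temperature=0.9,         # higher temperature gives more varied, human-sounding text
)

print(response.choices[0].message.content.strip())
```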
What the researchers say: “Interestingly, the most divisive accounts on questions of accuracy and likelihood belonged to the genuine humans. One of the real profiles was mislabelled as fake by 41.5% of the participants who saw it. Meanwhile, one of the best-performing fake profiles was only labelled as a bot by 10%,” said the lead author. “Our findings suggest that the technology for creating generated fake profiles has advanced to such a point that it is difficult to distinguish them from real profiles.”
“Previously it was a lot of work to create realistic fake profiles. Five years ago, the average user did not have the technology to create fake profiles at this scale and with this ease. Today it is very accessible and available to the many, not just the few,” said the co-author.
From political manipulation to misinformation, cyberbullying and cybercrime, the proliferation of deep-learning-generated social media profiles has significant implications for society and democracy as a whole.
“Authoritarian governments are flooding social media with seemingly supportive people to manipulate information, so it’s essential to consider the potential consequences of these technologies carefully and work towards mitigating these negative impacts,” the researchers explained.
The researchers used a simplified setting in which participants saw a single tweet and the profile information of the account that posted it. The next research step will be to see whether bots can be correctly identified in a news feed discussion, where different fake and real profiles comment on the same news article in a single thread.
“We need new ways and new methods to deal with this, as putting the genie back in the lamp is now virtually impossible. If humans are unable to detect fake profiles and posts and report them, then it will have to be the role of automated detection, such as account removal, ID verification and other safeguards developed by the companies operating these social networking sites,” the researchers added.
“Right now, my advice would be to only trust people on social media that you know,” the lead author concluded.
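The study itself does not specify what that automated detection would look like. As a minimal, purely illustrative sketch, one common approach is to train a classifier on simple account-metadata features; the features, the toy data and the model below are assumptions, not anything taken from the paper.

```python
# Minimal sketch of feature-based bot screening. Features and data are
# hypothetical; a real system would be trained on large labelled datasets
# and combined with content and network signals.
from sklearn.ensemble import RandomForestClassifier

# One row per account:
# [account age in days, followers/following ratio, posts per day, fraction of posts with links]
X_train = [
    [2100, 1.80, 0.6, 0.10],   # long-lived, balanced account -> genuine
    [1500, 0.90, 1.2, 0.20],   # genuine
    [12,   0.05, 40.0, 0.90],  # new account, posting constantly, mostly links -> bot
    [30,   0.02, 25.0, 0.85],  # bot
]
y_train = [0, 0, 1, 1]         # 0 = genuine, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new, suspicious-looking account.
suspect = [[8, 0.01, 55.0, 0.95]]
print("Estimated probability of being a bot:", clf.predict_proba(suspect)[0][1])
```

The sketch is only meant to make “automated detection” concrete; real platforms combine many more signals, and the study’s findings suggest that profile content alone is no longer a reliable tell.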
So, what? This is yet another study that shows how AI is rapidly taking over our world. The question remains: who should, or can, regulate or constrain the development of AI? I agree with the researchers: the genie is really out of the lamp.