AI-written tweets from tools like ChatGPT easier to believe than human-written ones: Study

AI text generators like ChatGPT, the Bing AI chatbot and Google Bard have been getting a lot of attention lately. These large language models can create impressive pieces of writing that seem totally legit. But here’s the twist: a new study suggests that we humans might be falling for the misinformation they generate.

To investigate this, researchers from the University of Zurich ran an experiment to see if people could tell the difference between content written by humans and content churned out by GPT-3, the model announced in 2020 (less advanced than GPT-4, which rolled out earlier this year). The results were surprising. Participants did only slightly better than random guessing, with an accuracy rate of 52 per cent. So, figuring out whether a text was written by a human or an AI was no easy task.

Now, here’s the thing about GPT-3. It doesn’t really understand language like we do. It relies on patterns it has learned from studying how humans use language. While it’s great for tasks like translation, chatbots, and creative writing, it can also be misused to spread misinformation, spam, and fake content.

The researchers suggest that the rise of AI text generators coincides with another problem we’re facing: the “infodemic.” That’s when fake news and disinformation spread like wildfire. The study raises concerns about GPT-3 being used to generate misleading information, especially in areas like global health.

To see how GPT-3-generated content affected people’s understanding, the researchers conducted a survey. They compared the credibility of synthetic tweets created by GPT-3 with tweets written by humans. They focused on topics that are often plagued by misinformation, like vaccines, 5G technology, Covid-19, and evolution.

And here’s the surprise: participants recognized accurate information more often when it came in synthetic tweets than in human-written ones. Similarly, they judged the disinformation tweets generated by GPT-3 to be accurate more often than the disinformation created by humans. In other words, GPT-3 was better than us at both informing and misleading people.

Even more interesting, participants took less time to evaluate the synthetic tweets than the human-written ones. It seems AI-generated content is easier to process and evaluate. But don’t worry, we humans still beat GPT-3 when it comes to evaluating the accuracy of information.

The study also revealed that GPT-3 usually played by the rules, producing accurate information when asked. At times it even refused to generate disinformation, though it occasionally slipped up and produced inaccurate content when told to provide accurate information. So, it has the power to say no to spreading fake stuff, but it isn’t entirely dependable either.

This study shows that we’re vulnerable to misinformation generated by AI text generators like GPT-3. While they can produce highly credible texts, it’s crucial for us to stay vigilant and develop tools to spot and combat misinformation effectively.
