AI-generated propaganda rivals human-made versions; study raises concerns over misuse


Representational Image. (Photo by Markus Winkler on Unsplash)

Propaganda generated by artificial intelligence can be nearly as persuasive as real propaganda, according to a study of more than 8,000 US adults by researchers at Stanford University and Georgetown University. The study also raises concerns that propagandists could leverage AI to disseminate misleading information at scale.

The researchers selected six English-language articles from Iranian or Russian state-aligned covert propaganda campaigns, each making false claims about US foreign relations. They then used GPT-3, a prominent large language model, to generate new articles, prompting it with sentences from each original article alongside three unrelated propaganda articles, so that the persuasiveness of the AI-generated content could be compared with the originals.

Participants, who were told only after the study that the articles originated from propaganda campaigns, found both the real and the AI-generated articles convincing. Approximately 47 per cent of participants agreed with the claims in the original propaganda, while around 44 per cent agreed with the AI-generated versions, suggesting that AI-written articles are nearly as persuasive as those crafted by humans.

The researchers cautioned that their findings might underestimate the potential of larger language models, as more advanced models have been released since the study. They expressed concern about propagandists exploiting AI capabilities to effortlessly mass-produce convincing propaganda, making it harder to detect.

The study emphasized that propagandists could use AI to expose citizens to a high volume of articles, employing varying styles and wording to create the illusion of diverse opinions from authentic sources. The researchers highlighted the importance of assessing potential risks and developing strategies to guard against the misuse of language models in propaganda campaigns.

“As a result, the societal benefit of assessing the potential risks outweighs the possibility that our paper would give propagandists new ideas,” the researchers concluded, calling for continued research into detecting the infrastructure that delivers content in such campaigns. The study underscores growing concern about AI’s impact on information manipulation and the difficulty of distinguishing real from AI-generated content.

(with agency inputs)
