
AI-generated propaganda rivals human-made versions, study raises concerns over misuse


Representational Image. (Photo by Markus Winkler on Unsplash)

Artificial intelligence-generated propaganda is nearly as persuasive as real propaganda, according to a study of more than 8,000 US adults conducted by researchers from Stanford University and Georgetown University. The study also raises concerns that propagandists could use AI to spread misleading information at scale.

The researchers selected six English-language articles linked to Iranian or Russian state-aligned covert propaganda campaigns, focusing on false claims about US foreign relations. Using GPT-3, a prominent large language model, they prompted the model with sentences from each original article, along with three unrelated propaganda articles, to generate new articles and assess the persuasiveness of the AI-written content.
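For illustration only, and not the researchers' actual code: the prompting setup described above roughly corresponds to the few-shot pattern sketched below, in which a language model is given example articles and a few seed sentences and asked to draft a new article in a similar style. The model name, prompt wording, and helper function are assumptions made for this sketch; the study itself used GPT-3 rather than the API call shown here.

```python
# Illustrative sketch of the few-shot prompting pattern described in the study.
# Not the authors' code: model name, prompt text and placeholders are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def generate_article(seed_sentences: str, example_articles: list[str]) -> str:
    # Example articles set the style; the seed sentences supply the claim
    # the newly generated article should elaborate on.
    examples = "\n\n---\n\n".join(example_articles)
    prompt = (
        "Here are some example news articles:\n\n"
        f"{examples}\n\n---\n\n"
        "Write a new article in a similar style that begins with and "
        f"elaborates on the following sentences:\n\n{seed_sentences}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study used GPT-3 (davinci)
        messages=[{"role": "user", "content": prompt}],
        max_tokens=500,
    )
    return response.choices[0].message.content


# Usage (placeholders stand in for the articles used in the study):
# article = generate_article(
#     "Seed sentence taken from an original article.",
#     ["Example article 1", "Example article 2", "Example article 3"],
# )
```

In the study, the generated articles produced this way were then shown to survey participants alongside the human-written originals so their persuasiveness could be compared.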

Participants, who were told of the articles’ propaganda origins only after the study, found both the real and the AI-generated propaganda believable. Approximately 47 per cent of participants agreed with the original propaganda claims, while around 44 per cent agreed with the same claims after reading the AI-generated versions. The study suggests that AI-written articles are nearly as persuasive as those crafted by humans.

The researchers cautioned that their findings might underestimate the potential of larger language models, as more advanced models have been released since the study. They expressed concern about propagandists exploiting AI capabilities to effortlessly mass-produce convincing propaganda, making it harder to detect.

The study emphasized that propagandists could use AI to expose citizens to a high volume of articles, employing varying styles and wording to create the illusion of diverse opinions from authentic sources. The researchers highlighted the importance of assessing potential risks and developing strategies to guard against the misuse of language models in propaganda campaigns.

“As a result, the societal benefit of assessing the potential risks outweighs the possibility that our paper would give propagandists new ideas,” the researchers concluded, emphasising the need for continued research into detecting the infrastructure used to deliver content in such campaigns. The study underscores growing concern about the impact of AI on information manipulation and the challenge of distinguishing real from AI-generated content.

(with agency inputs)


CityBuzz Click Staff
