GPT-3 Can Write Disinformation Now—and Dupe Human Readers

When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could potentially be wielded as a weapon of online misinformation.

Now a team of disinformation experts has demonstrated how effectively that algorithm, called GPT-3, could be used to mislead and misinform. The results suggest that although AI may not be a match for the best Russian meme-making operative, it could amplify some forms of deception that would be especially difficult to spot.

Over six months, a group at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” read a sample tweet composed by GPT-3 that aimed to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second labeled climate change “the new communism—an ideology based on a false science that cannot be questioned.”

“With a little bit of human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, a professor at Georgetown involved with the study, who focuses on the intersection of AI, cybersecurity, and statecraft.

The Georgetown researchers say GPT-3, or a similar AI language algorithm, could prove especially effective for automatically generating short messages on social media, an approach the researchers call “one-to-many” misinformation.

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, they found that participants were swayed by the messages. After seeing posts opposing China sanctions, for instance, the percentage of respondents who said they were against such a policy doubled.

Mike Gruszczynski, a professor at Indiana University who studies online communications, says he would be unsurprised to see AI take a bigger role in disinformation campaigns. He points out that bots have played a key role in spreading false narratives in recent years, and AI can be used to generate fake social media profile photographs. With bots, deepfakes, and other technology, “I really think the sky’s the limit unfortunately,” he says.

AI researchers have recently built programs capable of using language in surprising ways, and GPT-3 is perhaps the most startling demonstration of all. Although machines do not understand language the way people do, AI programs can mimic understanding simply by feeding on vast quantities of text and searching for patterns in how words and sentences fit together.
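
To see that pattern-matching idea in miniature, consider the toy sketch below. It is purely illustrative and is not how GPT-3 is actually built: it counts which words follow which in a tiny made-up corpus, then generates text by sampling from those counts. GPT-3 applies the same statistical principle, but with a vast neural network trained on billions of words.

```python
import random
from collections import defaultdict

# A tiny, hypothetical corpus standing in for the web-scale text
# a real language model trains on.
corpus = (
    "climate change is real . climate change is happening . "
    "scientists say climate change is accelerating ."
).split()

# Learn the "patterns": count which words tend to follow which.
following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("climate"))
```

Even this crude bigram counter produces fluent-looking fragments without any notion of truth, which is the crux of the problem: statistical fluency, at scale, is indifferent to whether the resulting text is accurate.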

The researchers at OpenAI created GPT-3 by feeding large amounts of text scraped from web sources including Wikipedia and Reddit to an especially large AI algorithm designed to handle language. GPT-3 has often stunned observers with its apparent mastery of language, but it can be unpredictable, spewing out incoherent babble and offensive or hateful language.

OpenAI has made GPT-3 available to dozens of startups. Entrepreneurs are using the loquacious GPT-3 to auto-generate emails, talk to customers, and even write computer code. But some uses of the program have also demonstrated its darker potential.
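
For a sense of what those integrations look like, here is a minimal sketch of the kind of request developers were sending to GPT-3 through OpenAI’s Python library at the time. The engine name, prompt, and API key are placeholders, and OpenAI’s interface has evolved since, so treat this as an illustration rather than current usage.

```python
import openai  # pip install openai

# Placeholder credential, not a real key.
openai.api_key = "YOUR_API_KEY"

# A 2021-era completion request: hand the model a prompt and let it
# continue the text. The same mechanism that drafts a friendly email
# can, with a different prompt, draft something far less benign.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a short, friendly email confirming a meeting on Tuesday:",
    max_tokens=60,
    temperature=0.7,  # higher values make the output more varied
)

print(response.choices[0].text.strip())
```

The `temperature` parameter trades predictability for variety; it is the same knob that makes the model useful for marketing copy and, in the wrong hands, for churning out endlessly varied misleading messages.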

Getting GPT-3 to behave would be a challenge for agents of misinformation, too. Buchanan notes that the algorithm does not seem capable of reliably generating coherent and persuasive articles much longer than a tweet. The researchers did not try showing the articles it did produce to volunteers.

But Buchanan warns that state actors may be able to do more with a language tool such as GPT-3. “Adversaries with more money, more technical capabilities, and fewer ethics are going to be able to use AI better,” he says. “Also, the machines are only going to get better.”

OpenAI says the Georgetown work highlights an important issue that the company hopes to mitigate. “We actively work to address safety risks associated with GPT-3,” an OpenAI spokesperson says. “We also review every production use of GPT-3 before it goes live and have monitoring systems in place to restrict and respond to misuse of our API.”

