Online actors linked to the Chinese government are increasingly using artificial intelligence to target voters in the U.S., Taiwan and elsewhere with disinformation, according to new cybersecurity research and U.S. officials.
The China-linked campaigns laundered false information through fake accounts on social-media platforms, seeking to identify divisive domestic political issues and potentially influence elections. The tactics identified in a new cyber-threat report published Friday by Microsoft are among the first uncovered to directly tie the use of generative AI tools to a covert state-sponsored online influence operation against foreign voters. They also demonstrate more-advanced methods than previously seen.
Accounts on X—some of which were more than a decade old—began posting last year about topics including American drug use, immigration policies, and racial tensions, and in some cases asked followers to share opinions about presidential candidates, potentially to glean insights about U.S. voters’ political opinions. Some of these posts relied on relatively rudimentary generative AI for their imagery, Microsoft said.
U.S. officials see China’s rising clout in global influence operations as a concern because of the evolving tradecraft and ample state resources. Last fall, for example, the U.S. State Department accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation, using investments abroad and an array of tactics to promote Beijing’s geopolitical aims and stifle criticism of its policies.
In an interview, Tom Burt, Microsoft’s head of customer security and trust, said China’s disinformation operations have become much more active in the past six months, mirroring a rise in cyberattacks linked to Beijing.
“We’re seeing them experiment,” Burt said. “I’m worried about where it might go next.”
Separately, Microsoft said it detected a surge in the use of more-sophisticated AI tools around Taiwan’s presidential election in January, including an AI-created fake audio clip of a former presidential candidate endorsing one of the remaining candidates. That marked the first time the technology giant’s threat researchers had seen a nation-state actor using AI to attempt to influence a foreign election.
The posts have so far failed to achieve much traction, Microsoft said, but they offer a preview of state-backed election-influence operations to come. Western intelligence officials have said they have growing concerns about how AI tools could be used this year to flood elections, including the 2024 U.S. presidential contest, with misleading videos or other content. Security experts have said fake AI-generated audio clips pose an especially acute threat because they are relatively easy to manufacture and have been shown to dupe audiences easily.
Chinese government operators “have increased their capabilities to conduct covert influence operations and disseminate disinformation,” an annual worldwide threats report from the U.S. intelligence community released recently said. “Even if Beijing sets limits on these activities, individuals not under its direct supervision may attempt election influence activities they perceive are in line with Beijing’s goals.” The report also said China was “experimenting with generative AI” and intensifying efforts to mold U.S. discourse on issues including Hong Kong and Taiwan.
Beijing has repeatedly said that it opposes the production and spread of false information and that U.S. social media is inundated with disinformation about China.
The Microsoft report is the latest of several published research efforts that shed light on disinformation operations linked to Beijing. A new report from the Institute for Strategic Dialogue, a London-based research organization, identified a small number of accounts on X it said were linked to China that were impersonating supporters of former President Donald Trump and attempting to denigrate President Biden.