Study says artificial intelligence-driven ‘deep fakes’ next big tool of Chinese disinformation campaigns

China is likely to rely on artificial intelligence-generated disinformation content, such as deep fake and deep voice videos, as part of its psychological and public opinion warfare across the world, a new study by the United States-based think tank Atlantic Council says.

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has published a new study examining Chinese disinformation campaigns and recent trends, which suggests that despite considerable success with its domestic audience, the Chinese Communist Party (CCP) struggles to drive its message home on the foreign front.

The study notes that so far, Chinese disinformation operations on Western social media platforms have been ineffective because of flawed implementation, including the outsourcing of operations to third parties.

Platforms such as Twitter, Facebook and Google have been able to identify Chinese campaigns and take timely action in the past. But experts at DFRLab now assess that artificial intelligence will be used to run effective, large-scale disinformation campaigns and to covertly operate authentic-looking Western social media accounts.

The study comes amid a tense standoff between India and China along the Line of Actual Control. US Secretary of State Mike Pompeo recently addressed the “threat” posed by China in the region.

Discourse power and information warfare

The Atlantic Council study notes the ongoing shift in Beijing’s foreign policy away from its earlier “non-intervention” stance in the affairs of other countries. China now hopes to exercise “discourse power” – the idea that a country can gain geopolitical power by setting agendas internationally and influencing the political order and values, both at home and in other countries, to project its “peaceful” rise as a global superpower.

The CCP has been using the information space, both domestically and internationally, to project the “China Story”. The author of the study, Alicia Fawcett, describes this as projecting a positive image of China through storytelling across the media landscape, both at home and abroad.

Information perception tactics such as the removal, suppression and downplaying of negative information, as well as the gamification of certain hashtags, are tools with which China intends to convince foreign audiences that it is a “responsible world leader” and a leading power in reforming the international political system. Today’s internet-driven global information space offers Beijing an effective way to spread the “China Story” across the globe.

The Chinese government apparatus has been engaged in large-scale operations producing and amplifying false or misleading information with the intent to deceive. The content typically exploits cognitive biases, playing on ethnic, racial or cultural affiliations within its target audience, and aims to implant “paranoia and cognitive blind-spots”.

“China sees disinformation operations as an effective strategy for its government to achieve foreign policy objectives,” Fawcett says, pointing out that the People’s Liberation Army (PLA), the State Council, and the CCP’s Central Committee all take part in organized information operations on domestic and international platforms.

Deep Fakes: Weapon of mass division

Deep fakes are synthetic media in which a person’s voice or appearance is manipulated to attribute to them words they never said or acts they never performed. AI-driven deep fakes make such manipulated media convincing and easy to create, deceiving viewers into believing events that never took place.

Well-known examples of deep fake videos feature Mark Zuckerberg and Barack Obama, in which researchers and artists demonstrated the potential misuse of the technology.

Deep voice media, on the other hand, relies on cloning a person’s voice using machine learning, which can then be used to generate entirely new speech in that person’s voice without their consent or knowledge. Since the use of deep fakes, deep voice and artificial intelligence is not unprecedented on Chinese social media, these tools figure prominently in Beijing’s cyber warfare.

The study notes the extensive groundwork laid by popular Chinese apps and big tech firms such as TikTok, Baidu and Zao. It also mentions Baidu’s recent Deep Voice project, which can clone a voice in seconds. China could use these tools to create AI-driven deep fakes at scale, deployed as part of the CCP’s information operations.

Several governments, including a number of US states, have recognized the threat and enacted laws against the possible misuse of deep fakes. India, so far, has no specific laws governing deep fakes.

Beijing’s appetite for big data as a cyber and psychological warfare tool is no secret. The PLA’s own academic journal, “Military Correspondent”, had earlier published a commentary suggesting the PLA’s use of AI-driven bot networks.

The CCP’s interest in big data appears to be moving towards the analysis, detection, assessment and management of mass public sentiment. An article by China’s Strategic Support Force (SSF) Base 311, the unit responsible for the CCP’s psychological warfare, had earlier stressed the need for a “voice information synthesis technology” that could identify a user’s emotional state and then deliver subliminal messaging.
