CyberSecurity SEE

China’s influence operations in focus: Russia’s official denial, content moderation, and freedom of speech

Beijing’s deniable influence operations have drawn the attention of Australian and US organizations, each documenting a different aspect of ongoing Chinese influence campaigns. The Australian Strategic Policy Institute recently published a study that sheds light on the Chinese Communist Party’s cultivation of an “influence-for-hire” sector among criminal organizations in Southeast Asia. This activity involves running influence and disinformation campaigns through fake personas and inauthentic social media accounts linked to transnational criminal organizations.

The Chinese government creates and procures these inauthentic social media accounts, often complete with profiles featuring AI-generated images and associated content. Remarkably, this Party-sponsored influence activity resembles the operations that support illegal online gambling and associated fraud. The researchers believe China’s security services and criminal networks may be outsourcing work to the same providers, further amplifying the danger of scalable disinformation and influence operations.

Meanwhile, in the United States, the cybersecurity company Mandiant has observed Chinese attempts to shape American opinion through planted narratives. The work is organized by a Chinese public relations firm, Shanghai Haixun Technology Co., Ltd., whose campaign Mandiant has aptly dubbed “HaiEnergy.” Haixun employs traditional PR tactics such as purchasing billboard ads and placing stories through wire services. Mandiant has also identified additional dissemination channels, including two “press release” services, Times Newswire and World Newswire, that use legitimate US-based news outlets to spread Chinese influence. HaiEnergy also draws on the freelance content-production marketplace, using legal resources in what amounts to a grey-market operation. Although Mandiant believes HaiEnergy has gained little traction so far, its operators remain determined and may escalate their efforts, potentially to the point of organizing physical protests inside the United States.

In addition to these influence campaigns, Russian Security Council Secretary Nikolay Patrushev recently accused the United States of conducting an aggressive cyber campaign against Russia. Patrushev claimed that the Pentagon’s cyber command, the National Security Agency, and the NATO Cooperative Cyber Defence Centre of Excellence are planning and steering information attacks on Russia’s critical information infrastructure under the Ukrainian flag. These operations are said to target Russia’s financial infrastructure, transportation, energy, and telecom facilities, industrial enterprises, and government services websites. Patrushev also pointed to the West’s involvement in the conflict in Ukraine, accusing Western governments of militarizing the information space and refining their methods of computer attack.

However, it is worth noting that Patrushev has made some implausible claims in the past. In April, he accused the US of preparing a biological war against Russia, and in May, he urged the West not to cut itself off from scientific collaboration with Russia, claiming that Russian scientific expertise could prevent disasters like an eruption of the Yellowstone Caldera. While it is unclear how Russian geologists would be involved in preventing such an event, Patrushev’s claims suggest a pattern of presenting unlikely scenarios to support his arguments.

The challenges of content moderation have also been on display recently, particularly at Facebook. Daily Beast columnist Julia Davis, founder of the Russian Media Monitor, had one of her posts removed by Facebook’s moderators. The post quoted a Russian state television personality who called for the killing of Ukrainians, but it was plainly intended as a critique. Facebook’s moderators nonetheless deemed it a violation of the platform’s Community Standards on violence and incitement, and Davis’ account was temporarily suspended. Although Facebook later restored the post and acknowledged its mistake, the incident underscores how difficult moderation becomes at scale, where a vast volume of content leaves ample room for misinterpretation.

As platforms like Facebook strive for transparency, they have focused on exposing “coordinated inauthenticity”: networks of accounts that misrepresent who actually operates them. Taking down such accounts is an important step toward maintaining authenticity on social media. But as the incident with Julia Davis shows, content moderation is a complex task that requires careful consideration of context and intention, and while artificial intelligence (AI) may offer a partial solution at scale, AI still struggles to understand context and intent.

In conclusion, the ongoing influence campaigns conducted by Beijing, the accusations made by Russian Security Council Secretary Nikolay Patrushev, and the challenges of content moderation highlight the need for continued vigilance and critical evaluation of information online. As technology continues to evolve, it is crucial to have robust mechanisms in place to detect and combat disinformation and influence operations. Transparency and authenticity are key principles that should guide the actions of both individuals and platforms in the online world.
