In a developing counter-narrative of child protection, Russian authorities have defended the “abduction” of Ukrainian children as a necessary rescue from persecution. This framing comes as Russian leaders face formal accusations of war crimes, including the widespread abduction of Ukrainian children. According to the Russian Ministry of Defence, the Duma, Russia’s parliament, recently voted to create a parliamentary committee to investigate alleged crimes committed by the Ukrainian government against minors in the Donbas region since 2014.
The Duma’s move is widely seen as a response to international condemnation of Russia’s deportation of children from occupied Ukraine since the full-scale invasion. The committee amounts to a form of “lawfare”: using legislative machinery strategically to challenge and muddy the narrative surrounding Russia’s actions. Messaging around children’s rights is likely an important communications theme for the Kremlin, since the alleged child deportations formed the basis of the International Criminal Court’s arrest warrant against President Putin in March 2023.
Emerging techniques in fraud and social engineering are raising concerns about the use of generative artificial intelligence (AI) in disinformation campaigns. Sift, an organization focused on digital trust and safety, released its Q2 2023 Digital Trust and Safety Index, which highlighted the use of generative AI in social engineering schemes. Consumers have noticed an increase in spam and scams, likely driven by the surge in AI-generated content. This trajectory is expected to continue, as generative AI lowers the barrier to entry for fraud and social engineering. AI-generated language is often more persuasive than that produced by non-native speakers or less fluent native writers.
Generative AI can produce compelling, persuasive copy that is rapidly disseminated and amplified on social media and other channels. In the disinformation context, Russia has long used social media to push its narratives, though recent attempts, such as promoting the claim that Ukraine’s counteroffensive has failed, have gained little traction. Generative AI could enhance Russia’s disinformation efforts by supplying more sophisticated and convincing messaging. The abuse of generative AI for social engineering is a growing concern and is already eroding consumer trust in content authenticity and security.
The rise of AI has also prompted speculation about its effects on the human psyche in a “post-truth” world. Psychologists are still grappling with AI’s potential psychological impact on individuals. Some effects are already apparent, such as increased reliance on GPS for navigation; others, such as growing difficulty in assessing evidence and judging trustworthiness, underscore the challenge of navigating a world flooded with AI-generated content and the attendant erosion of truth.
On a Russian state television show, a panel of experts praised the combat performance of the Russian Army, claiming it is universally recognized as the best in the world. Observers familiar with Russia’s war effort will struggle to reconcile this portrayal with reality: the Russian Army, while committing atrocities, has struggled to execute fire-and-maneuver tactics against a motivated and better-equipped opponent. The panel’s calls to let the army “do its job” evoke the stab-in-the-back legend that arose in Germany after World War I. The disconnect between the glorification of the Russian Army and its actual performance raises questions about the intent behind such propaganda.
As disinformation tactics evolve and AI technology advances, individuals and societies must develop critical thinking skills and exercise discernment. The use of generative AI in disinformation campaigns demands greater vigilance in verifying information sources and stronger digital literacy. Only a well-informed and discerning population can mitigate the risks and consequences of disinformation in an increasingly complex and interconnected world.
