
Detecting Deepfakes: Preparing to Counter Targeted Disinformation, Censorship, and Narrative Control


A recent study has found that humans can detect deepfaked voices only about 73% of the time, leaving researchers concerned about the potential misuse of the technology. The study, conducted by Kimberly T. Mai, Sergi Bray, Toby Davies, and Lewis D. Griffin, presented genuine and deepfake audio clips to 529 participants and asked them to identify the deepfakes. The experiment was run in both English and Mandarin to test whether language affected detection performance, and the researchers found no difference in detectability between the two languages.
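To put the 73% figure in context, it helps to check how far it sits above chance, which is 50% on a two-way genuine-versus-fake call. The sketch below runs a one-sided exact binomial test; the per-listener trial count of 100 is a hypothetical illustration, not a figure from the study.

```python
from math import comb

def binom_p_value(successes: int, trials: int, p0: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of seeing at least
    `successes` correct calls in `trials` if the listener were merely
    guessing at rate p0."""
    return sum(comb(trials, k) * (p0 ** k) * ((1 - p0) ** (trials - k))
               for k in range(successes, trials + 1))

# Hypothetical example: 73 correct calls out of 100 clips.
p = binom_p_value(73, 100)
print(f"p-value vs. pure guessing: {p:.2e}")
```

The point of the test is that 73% is statistically far from guessing, yet still means roughly one deepfake in four slips past a human listener, which is little comfort in high-stakes settings.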

According to the study, increasing listener awareness by providing examples of speech deepfakes only slightly improved detection results. As speech synthesis algorithms continue to improve and become more realistic, it is expected that the detection task will become even more challenging. This highlights the need for defenses against speech deepfakes to prevent their potential misuse.

Eduardo Azanza, CEO of Veridas, believes that technical tools, such as AI, can play a role in recognizing voice deepfakes. These tools use sophisticated algorithms to evaluate the authenticity and “liveness” of a voice or face presented for access or authentication. While the effectiveness of these algorithms varies, they provide a means of cross-checking samples in ways that would be difficult for a human listener to match.

Azanza also emphasizes the importance of certifications and third-party evaluators, such as the National Institute of Standards and Technology (NIST) and iBeta, to validate the accuracy and reliability of biometric security solutions. Incorporating these practices can establish a routine of cross-checking information on the internet and enhance overall security measures.

However, it is important to note that these tools and certifications are still a work in progress and may require further advancements to effectively combat the threat of voice deepfakes.

In other news, a data breach at the UK’s Electoral Commission has raised concerns about Russian interference in British elections. The ransomware attack and data breach, which occurred in October 2022, were only recently disclosed publicly by the Electoral Commission. The breach exposed personally identifying information, and there is speculation that the data could be used for highly targeted disinformation campaigns.

The Telegraph reports that the incident may have been directed by Russian intelligence services, though cyberespionage and state-directed influence operations are often difficult to distinguish from conventional cybercrime. This highlights the ongoing challenge of attributing cyberattacks to specific actors and understanding their motives.

Another development in the media landscape involves the temporary removal and subsequent restoration of Meduza’s flagship podcast, “What Happened,” from the Apple Podcasts platform. Meduza is an independent Russian-language news service that operates from Latvia and focuses on news affecting Russia. The reason for the temporary removal remains unclear, but Meduza speculates that a complaint from Russia’s Internet governance authority, Roskomnadzor, may have prompted the suspension. However, the ban was short-lived, and the podcast is now available again on Apple Podcasts.

Moscow’s campaign against virtual private networks (VPNs) has also come under scrutiny. The Russian government has been increasing its efforts to disrupt Russian citizens’ access to VPNs, which allow users to maintain privacy, circumvent state-imposed censorship, and access objective international news sources. VPNs have remained popular in Russia despite restrictions in place since 2017. The Russian state has launched a public information campaign to discourage citizens from using VPNs by claiming that they put personal data at risk. This suggests that Moscow recognizes the limitations of technical measures alone and is employing a multi-faceted approach to controlling domestic information.

Lastly, a recent misstep by a senior Russian army officer has highlighted the challenges of narrative control. In a recorded address for Russia’s Airborne Forces (VDV) Day, the commander of the VDV, Colonel General Mikhail Teplinsky, disclosed that 8,500 paratroopers had been wounded in Ukraine but had either returned to duty or refused to leave the front lines. The video was quickly deleted, and it remains unclear how many troops were killed or too seriously wounded to return to duty. Extrapolating from Teplinsky’s figures suggests that at least 50% of the 30,000 paratroopers deployed to Ukraine in 2022 have been killed or wounded. This incident underscores the difficulty the Russian government faces in controlling the narrative surrounding its military operations.
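The arithmetic behind that extrapolation can be sketched roughly as follows. The key assumption (ours, not Teplinsky’s) is what share of total casualties the lightly wounded who return to duty typically represent; at somewhere around 50–60%, the 8,500 returned wounded would imply on the order of 15,000 total casualties, or roughly half of the 30,000 deployed.

```python
def total_casualties(returned_wounded: int, light_share: float) -> float:
    """Estimate total killed + wounded, assuming `light_share` of all
    casualties were lightly wounded troops able to return to duty."""
    return returned_wounded / light_share

RETURNED_WOUNDED = 8_500   # figure disclosed in Teplinsky's address
DEPLOYED = 30_000          # paratroopers reported deployed in 2022

# Sweep the assumed share; these share values are illustrative, not sourced.
for share in (0.50, 0.55, 0.60):
    total = total_casualties(RETURNED_WOUNDED, share)
    print(f"light-wounded share {share:.0%}: ~{total:,.0f} casualties "
          f"({total / DEPLOYED:.0%} of those deployed)")
```

The estimate is sensitive to the assumed share, which is exactly why the deleted figure was so revealing: even conservative assumptions put total losses near half the deployed force.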

Overall, these developments highlight the evolving landscape of disinformation, cybersecurity, and media control. As technologies like deepfakes continue to advance, it is crucial for researchers, policymakers, and businesses to stay vigilant and develop effective defenses against potential misuse. Additionally, understanding and addressing the motives behind cyberattacks and information campaigns will be critical in countering foreign interference in elections and maintaining an informed public.

