Recent advances in generative artificial intelligence (AI) have raised concerns about the proliferation of deepfake technology, blurring the line between reality and fiction. The rise of deepfakes, computer-generated clips designed to mimic real-life scenarios, has become a threat to individuals, society, and even geopolitics. The ability to create realistic but fabricated media poses significant challenges, from undermining trust to sparking social unrest.
While deepfakes have often targeted celebrities and politicians, ordinary people are also at risk of falling victim to malicious actors who use fake media to spread misinformation and carry out harassment. In countries like South Korea and Spain, cases of deepfake pornography have led to protests and legal action, highlighting the real-world consequences of these technologies. Furthermore, the spread of fake media can erode trust in institutions and digital platforms, leaving the public both skeptical and more vulnerable to disinformation campaigns.
The potential impact of deepfakes goes beyond individual harm, with experts warning that these technologies can be weaponized to spread fake news, manipulate public opinion, sway elections, and move stock prices. The World Economic Forum has identified fake media as a major global risk, noting its role in disrupting trust and exacerbating social divisions.
In response to this growing threat, stakeholders from diverse backgrounds have come together to address the misuse of generative AI and deepfakes through the development of technical standards and best practices. Organizations like WITNESS have joined forces with tech giants like Adobe and Microsoft, as well as research institutions and think tanks, to establish guidelines for detecting and mitigating the impact of fake media.
The need for standards in AI and multimedia authenticity was underscored at the global AI for Good Summit in Geneva, where experts emphasized the importance of safeguarding the digital space from the misuse of generative AI. By establishing norms and protocols for verifying the authenticity and ownership of digital content, stakeholders aim to rebuild trust and confidence in the media landscape.
One positive outcome of this collaborative effort is the creation of global standards for AI watermarking and deepfake detection technologies, such as the JPEG Trust initiative. This standard, developed jointly by experts from the IEC, ISO, and ITU, enables users to authenticate photos and videos, ensuring transparency and reliability in multimedia content.
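To make the idea of multimedia authenticity concrete, the sketch below shows, in simplified form, how a signed provenance record attached to a media file can be checked: a hash of the content is recorded in a manifest, and verification recomputes that hash and confirms the manifest has not been altered. This is a minimal illustration of the general concept only; the field names and the HMAC-based signature are assumptions for demonstration and do not reflect the actual JPEG Trust format, which relies on richer metadata and public-key signatures.

```python
import hashlib
import hmac
import json

# Illustrative shared key; real provenance systems use public-key signatures.
SHARED_KEY = b"demo-key-for-illustration-only"


def sign_manifest(media_bytes: bytes) -> dict:
    """Create a toy provenance manifest recording a hash of the media content."""
    manifest = {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "claim": "captured-by-camera",  # hypothetical provenance claim
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Return True if the manifest is intact and matches the media content."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected_sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, manifest.get("signature", "")):
        return False  # manifest itself was tampered with
    return hashlib.sha256(media_bytes).hexdigest() == claimed["content_hash"]


if __name__ == "__main__":
    original = b"...image bytes..."
    manifest = sign_manifest(original)
    print(verify_manifest(original, manifest))            # True: untouched media
    print(verify_manifest(original + b"edit", manifest))  # False: content changed
```

In practice, standards of this kind bind such records to the file itself and to the identity of the signer, so that any edit to the image or video, or to its declared history, becomes detectable.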
While the misuse of deepfakes remains a concern, experts argue that not all synthetic media is inherently harmful. The entertainment industry, for example, uses AI technologies to create special effects and realistic scenes, demonstrating the creative potential of generative AI. By focusing on detecting fake media rather than trying to stop it outright, stakeholders believe they can harness the benefits of AI while mitigating potential risks.
Looking ahead, the multistakeholder collaboration on AI and generative technologies aims to facilitate dialogue and knowledge-sharing among stakeholders, paving the way for global standards that support government policy measures and protect user rights in the digital age. By working together to establish norms and guidelines, stakeholders hope to create a more trustworthy and reliable media environment for all.
In conclusion, the rise of deepfakes presents a complex challenge that requires a coordinated and multifaceted response from stakeholders across sectors. By setting standards for AI and generative technologies, experts aim to mitigate the negative impacts of fake media while harnessing the creative potential of these technologies for the benefit of society. Through collaboration and innovation, stakeholders can build a more resilient and trustworthy media landscape in the face of evolving technological threats.
