Instagram is reportedly introducing a new feature that would label social media posts created with artificial intelligence tools such as ChatGPT as “AI-generated content.” Security researchers see the step as an important one for keeping the online environment safe.
The feature was discovered by app researcher Alessandro Paluzzi. It comes on the heels of a meeting at the White House where Instagram’s parent company, Meta, along with six other major technology companies, announced voluntary commitments to improve the safety of artificial intelligence. One of those commitments is to implement watermarking that flags synthetic, AI-generated content.
The prevalence of deepfakes and AI-authored media has raised concerns globally. One prominent trigger for the discussion has been the SAG-AFTRA actors’ strike in Hollywood. Additionally, the Biden Administration has been working to establish coherent national policies that promote the secure development and use of AI. The growing use of AI in both online and real-world crime has added urgency. The FBI recently issued an alert about sextortionists who used deepfaked images built from victims’ real social-media posts to extort children and adults. Cybercriminals have also attempted extortion with deepfaked pleas for help, as in the case of a criminal who demanded $1 million from an Arizona woman after using an AI-cloned copy of her daughter’s voice to stage a fake kidnapping.
While current security tools detect AI-generated content with a relatively high success rate, experts warn that cybercriminals are continuously refining their techniques to evade those protections. Helping the general public distinguish chatbot-generated content from human-generated content, and real media from fake, is therefore a crucial first step in mitigating the multifaceted threats posed by AI.
Industry experts view Instagram’s labeling effort as a positive move. Eduardo Azanza, CEO of Veridas, expressed support for the initiative, emphasizing the importance of defining standards and regulations that enforce accountability and responsibility. Azanza argues that large companies like Instagram should take the lead in this effort if AI is to be integrated successfully into daily life.
At the time of reporting, neither Meta nor Instagram had provided a comment on this matter.
Instagram’s move to label AI-generated content is a significant step toward a more transparent media landscape. As generative tools advance, distinguishing authentic media from artificially generated media becomes increasingly difficult, and clear labels give users a better understanding of the content they encounter. Even so, vigilance remains essential, as cybercriminals will continue to find ways to evade detection and exploit AI for malicious purposes.

