
Google urges tech companies to pledge against using AI in surveillance and cyber warfare.


Alphabet Inc., Google’s parent company, recently pledged to restrict the use of artificial intelligence (AI) in surveillance and cyber warfare. The company is urging other tech giants, including Meta, Twitter, and Amazon, to follow suit in order to prevent AI from being weaponized in ways that threaten global security and humanity.

This commitment was included in an update to Google’s “AI Principles,” which outline the company’s stance on advancing AI responsibly. Among its key promises, Google says it will refrain from developing or deploying AI-powered weapons or surveillance tools that violate internationally recognized ethical standards. The initiative was reinforced by Google senior executive James Manyika and Demis Hassabis, head of Google’s AI lab DeepMind, who emphasized the importance of government support in ensuring the responsible use of AI in national security.

However, despite these public declarations, there is skepticism about what tech companies actually do behind closed doors. A notable example is the NSO Group’s Pegasus spyware, which was designed for government surveillance but ended up in third-party hands, resulting in security breaches and scandals. The case of Jeff Bezos, whose personal data was allegedly compromised by Pegasus via WhatsApp, highlights the risks of unchecked surveillance technology. Another Israeli company, Paragon, has been implicated in a similar surveillance scandal, raising further concerns about the integrity of tech companies’ data practices.

Elon Musk, a prominent figure in the tech industry through his ownership of companies such as Twitter, Tesla, and Starlink, is now being called upon to address these issues. As someone with influence over tech policy, Musk is in a position to scrutinize the data practices and research and development activities of major tech giants, including his own companies.

The growing awareness of the risks associated with AI technology has prompted calls for greater transparency and accountability from tech companies. By taking proactive measures to ensure the ethical use of AI and preventing its misuse for malicious purposes, companies can help build trust with the public and contribute to a safer and more secure digital environment. Joining forces to uphold ethical standards in AI development and deployment is crucial for safeguarding global security and preserving the positive impact of AI on society.

