
Google urges tech companies to pledge against using AI in surveillance and cyber warfare


Alphabet Inc., the parent company of Google, recently pledged to restrict the use of artificial intelligence (AI) in surveillance and cyber warfare. The company is urging other tech giants such as Meta, Twitter, and Amazon to follow suit, to prevent AI from being weaponized in ways that threaten global security and humanity.

This commitment was included in an update to Google’s “AI Principles,” which outline the company’s stance on advancing AI technology responsibly. Among its key promises, Google says it will refrain from developing or deploying AI-powered weapons or surveillance tools that violate internationally recognized ethical standards. The initiative was reinforced by James Manyika, Google’s senior vice president for research, technology and society, and Demis Hassabis, head of Google’s AI lab DeepMind, who emphasized the importance of government support in ensuring the responsible use of AI for national security.

However, despite these public declarations, there is skepticism about what tech companies actually practice behind closed doors. A notable example is the NSO Group’s Pegasus spyware, initially designed for government surveillance but later sold to third parties, leading to security breaches and scandals. The case of Jeff Bezos, whose personal data was allegedly compromised by Pegasus delivered through WhatsApp, highlights the risks of unchecked surveillance technology. Another Israeli company, Paragon, was implicated in a similar surveillance scandal, raising further concerns about the integrity of tech companies’ data practices.

Elon Musk, a prominent figure in the tech industry through his ownership of companies such as Twitter, Tesla, and Starlink, is now being called upon to address these issues. As someone positioned to influence tech policy, Musk could scrutinize the data practices and research-and-development activities of the major tech giants, including his own companies.

The growing awareness of the risks posed by AI has prompted calls for greater transparency and accountability from tech companies. By taking proactive measures to ensure the ethical use of AI and to prevent its misuse for malicious purposes, companies can build public trust and contribute to a safer, more secure digital environment. Joining forces to uphold ethical standards in AI development and deployment is crucial to safeguarding global security and preserving AI’s positive impact on society.
