
Companies rushing to adopt AI skip crucial security hardening steps


A recent security analysis by researchers at Orca Security has revealed alarming gaps in how companies secure the assets they host on major cloud providers’ infrastructure. The rush to build and deploy AI applications has opened numerous security holes, putting sensitive data at risk.

The analysis, which ran from January to August and scanned billions of assets on AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud, uncovered a range of worrisome practices: default and potentially insecure settings for AI-related services, deployments of vulnerable AI packages, and a general lack of adherence to security hardening guidelines.

One of the key issues highlighted by the researchers is the prevalence of exposed API access keys, AI models, and training data, which leaves that data open to unauthorized access and misuse. The analysis also found overprivileged access roles and users, misconfigured services, and data left unencrypted both at rest and in transit.
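
To make those misconfiguration classes concrete, the snippet below is a minimal sketch, assuming an AWS environment with boto3 installed and credentials configured, that checks a single S3 bucket for two of them: missing default at-rest encryption and missing public-access blocks. The bucket name is hypothetical, and this is only an illustration; Orca’s agentless scanning covers far more ground than a check like this.

```python
# Minimal sketch: flag two misconfiguration classes described above on one
# S3 bucket -- missing default (at-rest) encryption and missing public-access
# blocks. Requires boto3 and AWS credentials; the bucket name is hypothetical.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-training-data"  # hypothetical bucket holding AI training data

# Check whether default encryption is configured on the bucket.
try:
    s3.get_bucket_encryption(Bucket=bucket)
    print(f"{bucket}: default encryption is configured")
except ClientError as err:
    if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print(f"{bucket}: NO default encryption configured")
    else:
        raise

# Check whether the bucket blocks public access, a common misconfiguration.
try:
    config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    if not all(config.values()):
        print(f"{bucket}: some public-access blocks are disabled: {config}")
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print(f"{bucket}: no public-access block configuration at all")
    else:
        raise
```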

In their 2024 State of AI Security report, Orca Security’s researchers noted that the rapid pace of AI development often prioritizes ease of use over security considerations. This focus on innovation can lead to oversight in properly configuring settings related to roles, buckets, users, and other assets, thereby introducing significant risks to the environment.

The findings of the analysis underscore the urgent need for companies to prioritize security measures when implementing AI applications on cloud infrastructure. Failure to do so not only exposes sensitive data to potential breaches but also poses a threat to the overall integrity and reliability of the systems in place.

Moving forward, it is crucial for organizations to ensure that security is integrated into every step of the AI development and deployment process. This includes performing regular security audits, implementing best practices for securing AI services, and staying informed about potential vulnerabilities and threats. By taking proactive measures to address security concerns, companies can safeguard their assets and mitigate the risks associated with AI development in the cloud.
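
As one concrete example of the kind of recurring audit step mentioned above, the sketch below scans a local project tree for strings that look like hard-coded AWS access key IDs before code ships to the cloud. The AKIA pattern matches only long-term AWS access key IDs, and the script is illustrative; purpose-built secret scanners detect many more credential formats.

```python
# Minimal sketch of one recurring audit step: scan a project tree for strings
# that look like hard-coded AWS access key IDs. Illustrative only -- real
# secret scanners cover many more credential formats and key types.
import re
from pathlib import Path

KEY_ID_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_access_keys(root: str) -> list[tuple[str, int]]:
    """Return (file path, line number) pairs where a likely AWS key ID appears."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if KEY_ID_PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits

if __name__ == "__main__":
    for file, lineno in scan_for_access_keys("."):
        print(f"possible hard-coded AWS key ID: {file}:{lineno}")
```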

