Opaque Systems launches advanced data security and privacy-preserving features for LLMs.

Opaque Systems, a company specializing in confidential computing, has introduced new features designed to keep organizational data confidential when it is used with large language models (LLMs). With privacy-preserving generative AI and zero-trust data clean rooms (DCRs) optimized for Microsoft Azure confidential computing, the company aims to enable secure analysis of combined confidential data without ever disclosing the raw data. Opaque has also expanded its support for confidential AI use cases, offering safeguards for machine learning and AI models that operate on encrypted data within trusted execution environments (TEEs) to prevent unauthorized access.

LLM use can expose businesses to significant security and privacy risks. The risks of sharing sensitive business information with generative AI services are well documented, and LLM applications themselves have been found to contain exploitable vulnerabilities. While models such as ChatGPT are trained on public data, the real value of LLMs emerges when they are trained on an organization’s confidential data. Granting LLM providers visibility into user queries, however, means they can see sensitive information, including proprietary code, which creates a serious security and privacy exposure if those systems are breached. To expand the use of LLMs in an enterprise setting, the confidentiality of sensitive data, such as personally identifiable information (PII) and internal data like sales figures, must be protected.

Jay Harel, VP of product at Opaque Systems, points to the challenges organizations face when fine-tuning models on company data. Today they have two options: grant the LLM provider access to their data, or let the provider deploy its proprietary model inside the customer’s environment. Either way, the training data is retained, however confidential or sensitive it may be, and if the host system is compromised, that data can be leaked or fall into the wrong hands.

To address these concerns, Opaque Systems built a confidential computing platform that layers multiple protections around sensitive data. By running LLMs within the platform, customers can keep their queries and data private, with no exposure to the model or service provider, so the data cannot be used in unauthorized ways and is accessible only to authorized parties. The platform relies on privacy-preserving technologies, using secure hardware enclaves and cryptographic protections to guard sensitive data against cyber-attacks and data breaches.
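To make the general pattern concrete, the sketch below shows one common way confidential-computing designs keep a prompt encrypted outside the trusted boundary: the client encrypts its query, and only code running inside the enclave holds the key needed to decrypt and process it. This is a minimal illustration of the concept, not Opaque Systems' actual API; the function names and the in-process key handling are assumptions for demonstration, and real deployments would add remote attestation and key exchange.

```python
# Illustrative sketch only: a generic "encrypt outside, decrypt inside the TEE" flow.
# run_inference_inside_enclave and the shared key setup are hypothetical placeholders,
# not Opaque Systems' or Azure's API.
from cryptography.fernet import Fernet

# In a real system this key would be provisioned only to attested enclave code.
enclave_key = Fernet.generate_key()
cipher = Fernet(enclave_key)

def client_submit(prompt: str) -> bytes:
    """Encrypt the prompt before it leaves the client; the host never sees plaintext."""
    return cipher.encrypt(prompt.encode("utf-8"))

def run_inference_inside_enclave(encrypted_prompt: bytes) -> str:
    """Hypothetical enclave-side handler: decrypt, run the model, return a result."""
    prompt = cipher.decrypt(encrypted_prompt).decode("utf-8")
    # ... model inference would run here, inside the trusted execution environment ...
    return f"[model response to: {prompt}]"

ciphertext = client_submit("Summarize Q3 sales figures for the EMEA region.")
print(run_inference_inside_enclave(ciphertext))
```

The point of the pattern is that the untrusted host only ever handles ciphertext; plaintext exists solely inside the enclave boundary.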

One example of how Opaque’s solution enhances security is its ability to run generative AI inference inside confidential virtual machines (CVMs), which lets organizations build secure chatbots that comply with regulatory requirements.
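A chatbot built this way would typically refuse to serve requests unless the runtime can prove it is executing inside a verified confidential VM. The sketch below shows that gating idea only; verify_cvm_attestation and generate_reply are hypothetical placeholders, and the real attestation check would validate a hardware-signed report rather than a prefix.

```python
# Conceptual sketch: gate chatbot inference on a (hypothetical) CVM attestation check.
def verify_cvm_attestation(report: bytes) -> bool:
    """Placeholder for validating a hardware attestation report from the CVM."""
    return report.startswith(b"TRUSTED")

def generate_reply(query: str) -> str:
    """Placeholder for the model call that would run inside the confidential VM."""
    return f"[chatbot reply to: {query}]"

def answer_query(query: str, attestation_report: bytes) -> str:
    if not verify_cvm_attestation(attestation_report):
        raise RuntimeError("Refusing to run inference outside a verified confidential VM")
    return generate_reply(query)
```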

Opaque Systems’ focus on confidentiality and data security in AI applications reflects the growing importance of these issues. With its privacy-preserving generative AI and zero-trust DCR features, Opaque lets organizations leverage LLMs without compromising sensitive data. By using its confidential computing platform, businesses can safeguard their data, maintain compliance with privacy regulations, and mitigate the risks of unauthorized access and exposure. As demand for confidential AI and analytics grows, these advancements position Opaque Systems at the forefront of the industry, offering comprehensive solutions for organizations that need to protect sensitive data while using large language models.
