ExtraHop, a cybersecurity company, has launched a new tool called “Reveal(x)”, which aims to give companies visibility into the devices and users on their networks that are connecting to OpenAI domains. OpenAI domains serve AI tools such as ChatGPT, which has gained widespread popularity thanks to its potential uses in nearly every organization. ChatGPT reached an estimated 100 million users in just two months, a milestone that took TikTok roughly nine months. A 2023 Gartner survey found that nine out of ten respondents expected their organizations to implement ChatGPT by 2025.
Despite ChatGPT’s potential to accelerate organizational progress, using AI-as-a-service (AIaaS) tools still carries intellectual property risks, and several data leaks have recently been linked to ChatGPT. Users who turn to ChatGPT for code reviews, research, or discovery work end up sharing proprietary information with the service, putting confidential data at risk. In addition, submitted data may be retained and used to inform responses to other users’ requests, meaning confidential information could resurface outside the organization.
Reveal(x) helps organizations manage this risk by providing visibility into which devices and users inside the network are connecting to OpenAI domains, allowing them to adopt AI language models and generative AI tools while retaining greater control over their data. It also shows how much data is being sent to OpenAI domains, which is crucial for assessing the risks of using AI services: security teams can judge whether that exposure falls within acceptable limits and reduce potential intellectual property loss.
Reveal(x) uses network packets as its primary data source for monitoring and analysis, which is what enables its deep visibility and real-time detection. It inspects content and payloads across OSI layers 2 through 7 (data link layer through application layer) for complete data visibility.
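To make the underlying idea concrete, the sketch below shows one minimal way a defender might passively spot OpenAI-bound activity on a network segment: watching DNS lookups and tallying them per internal host. This is an illustrative assumption, not ExtraHop’s implementation; Reveal(x) analyzes full packet payloads across layers 2-7, whereas this example only inspects DNS queries with the scapy library, and the domain list and interface name are placeholders.

```python
# Illustrative sketch only: passively flag DNS lookups for OpenAI domains
# and count which internal hosts are making them. Not Reveal(x) internals.
from collections import Counter
from scapy.all import DNSQR, IP, sniff

# Domains commonly associated with OpenAI services (assumed list).
OPENAI_DOMAINS = ("openai.com", "chatgpt.com", "oaistatic.com", "oaiusercontent.com")

# Running count of OpenAI-bound DNS lookups per internal source IP.
lookups_by_host = Counter()


def inspect(pkt):
    """Flag DNS queries whose name ends with a known OpenAI domain."""
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP):
        qname = pkt[DNSQR].qname.decode(errors="ignore").rstrip(".")
        if qname.endswith(OPENAI_DOMAINS):
            src = pkt[IP].src
            lookups_by_host[src] += 1
            print(f"{src} looked up {qname} "
                  f"({lookups_by_host[src]} OpenAI lookups so far)")


if __name__ == "__main__":
    # Requires packet-capture privileges; "eth0" is a placeholder interface.
    sniff(iface="eth0", filter="udp port 53", prn=inspect, store=False)
```

A full-payload approach such as Reveal(x)’s would go further, for example measuring the volume of data sent in each session rather than just noting that a lookup occurred.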
Though several rules, regulations, and policies already govern how AI services must store and use data, it is still essential for organizations to understand how to use these services. ExtraHop believes the productivity benefits of these tools outweigh the data exposure risks, provided organizations understand how the services will use their data and how long they will retain it. ExtraHop also recommends that organizations implement policies governing the use of these services and put a control like Reveal(x) in place to assess policy compliance and spot risks in real time.
It is still unclear how far AI capabilities will go or how large the data exposure risks will grow. As AIaaS tools like ChatGPT continue to gain popularity, organizations must prioritize data protection to avoid potential breaches and leaks. Tools like Reveal(x) give organizations much-needed visibility into their network devices and users, helping them assess policy compliance and identify risks in real time.