Training AI with Customer Data: The Enterprise Risk for Vendors

Zoom, the popular video conferencing company, recently faced criticism after announcing plans to use customer data to train its machine learning models. Some customers raised concerns about the privacy implications, but Zoom is neither the first company to pursue such practices, nor will it be the last.

Enterprises, especially those integrating AI tools for internal use, should treat such practices as an emerging risk that demands proactive mitigation. Processes, oversight, and technological controls need to be in place to address the privacy risks that arise when customer data is used to train AI models.

Earlier this year, Zoom modified its terms of service to grant itself the right to use certain customer content for AI training, but it abandoned the change after a backlash from customers. The episode is a reminder that, in the AI era, companies should actively evaluate how technology vendors and other third parties may use their data.

Claude Mandy, Chief Evangelist of Data Security at Symmetry Systems, highlights the importance of clearly distinguishing between data about the customer and data of the customer. Technology companies have long collected data about how customers use their services, but using customer-generated content to train AI models involves a far deeper level of access to, and use of, that content.

This distinction is becoming a focus of several lawsuits pitting consumers against major technology companies. Google, for example, is being sued by a class of millions of consumers who allege that the company scraped publicly available data, including personal and professional information, and used it to train its AI technology. Similar allegations have been made against OpenAI and Meta, with comedian Sarah Silverman and two authors accusing the companies of using their copyrighted books without consent for AI training.

These lawsuits are a reminder that organizations need to ensure technology companies do not misuse their data. Denis Mandich, co-founder of Qrypt and a former member of the US intelligence community, emphasizes that there is a significant difference between using customer data to improve the user experience and using it to train AI. The latter poses additional risks, because AI models trained on that data can potentially predict individual behavior, putting both individuals and companies in jeopardy.

To reduce the risk of sensitive data ending up in AI models, organizations should consider opting out of AI training and of any generative AI features that are not privately deployed. Transparency and informed consent about data usage should be prioritized, and vendors' terms of service should spell out clearly how company data will be used.
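One way to operationalize that review is to keep an internal register of each vendor's AI-training posture and flag any vendor whose opt-out has not been confirmed. The sketch below is a minimal illustration of that idea in Python; the vendor names and fields are hypothetical examples, not real product settings.

```python
# Minimal sketch of a vendor AI-data-use register. Vendor names and
# fields are hypothetical illustrations, not real product settings.
from dataclasses import dataclass


@dataclass
class VendorAIPosture:
    name: str
    trains_on_customer_content: bool  # per the vendor's terms of service
    opt_out_available: bool
    opt_out_confirmed: bool           # confirmed in writing or in the admin console
    private_deployment: bool          # model runs in a tenant the company controls


def flag_risky_vendors(vendors: list[VendorAIPosture]) -> list[str]:
    """Return vendors whose AI features should be reviewed or disabled."""
    risky = []
    for v in vendors:
        if v.private_deployment:
            continue  # customer content stays inside the company's own tenant
        if v.trains_on_customer_content and not v.opt_out_confirmed:
            risky.append(v.name)
    return risky


if __name__ == "__main__":
    register = [
        VendorAIPosture("video-conf-vendor", True, True, False, False),
        VendorAIPosture("crm-vendor", True, True, True, False),
        VendorAIPosture("self-hosted-llm", False, False, False, True),
    ]
    for name in flag_risky_vendors(register):
        print(f"Review required: {name} may train on customer content")
```

A register like this turns a one-time terms-of-service review into an ongoing control that can be re-checked whenever a vendor updates its terms.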

Furthermore, it is essential that AI tools store customer information securely, so that it is not exposed in the event of a cyberattack or data breach. Mandich suggests that companies demand that technology providers use end-to-end encryption to minimize the risk of unauthorized access. Ideally, encryption keys should be issued and managed by the company rather than by the provider.
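As a minimal sketch of that last point, the example below encrypts content client-side with a key the company generates and holds, so the vendor only ever stores ciphertext. It uses the AESGCM primitive from the Python cryptography package; the upload step is a hypothetical placeholder, not a real vendor API.

```python
# Client-side encryption with a company-managed key: the vendor receives
# only ciphertext, so a breach on the vendor's side exposes no plaintext.
# Requires: pip install cryptography
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def generate_company_key() -> bytes:
    # 256-bit key, generated and stored by the company (e.g., in its own
    # KMS or HSM) and never shared with the provider.
    return AESGCM.generate_key(bit_length=256)


def encrypt_for_vendor(key: bytes, plaintext: bytes, tenant_id: bytes) -> tuple[bytes, bytes]:
    # AES-GCM provides confidentiality and integrity; binding the tenant ID
    # as associated data prevents ciphertexts from being swapped between tenants.
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, tenant_id)
    return nonce, ciphertext


def decrypt_from_vendor(key: bytes, nonce: bytes, ciphertext: bytes, tenant_id: bytes) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, tenant_id)


if __name__ == "__main__":
    key = generate_company_key()
    nonce, blob = encrypt_for_vendor(key, b"meeting transcript...", b"acme-tenant-01")
    # upload_to_vendor(nonce, blob)  # hypothetical placeholder call
    assert decrypt_from_vendor(key, nonce, blob, b"acme-tenant-01") == b"meeting transcript..."
```

Because the key never leaves the company, the provider cannot decrypt the content for AI training even if it wanted to, which is exactly the property customer-managed keys are meant to guarantee.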

In conclusion, the Zoom episode highlights the need for organizations to address the risks that come with customer data being used to train AI models. By implementing processes, oversight, and technological controls, companies can protect their sensitive data while still reaping the benefits of AI. Transparency, informed consent, and data security should be prioritized to ensure the ethical and responsible use of customer data in the AI era.
