OpenAI, the developer behind ChatGPT, is facing a lawsuit for allegedly scraping internet data to train its artificial intelligence technology. The class-action suit alleges that OpenAI violated the privacy rights of internet users by using public content such as social media comments, blog posts, and Wikipedia articles without permission. Filed by the California law firm Clarkson, the lawsuit aims to represent individuals whose information was used without consent, and it seeks both guardrails on how such data may be used and compensation for internet users.
OpenAI’s ChatGPT was trained on billions of words of text so that it can predict the most plausible response to a user’s query. While some argue that using public internet data this way should be considered fair use, the lawsuit raises questions about the boundaries of data usage and the rights of internet users. Katherine Gardner, an intellectual-property lawyer, points out that when users post content on social media or other sites, they generally grant the site a broad license to use that content, which may make it difficult for ordinary users to claim compensation for the use of their data in AI training.
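To make the mechanism at issue concrete, here is a minimal sketch of next-word prediction from a scraped text corpus. This is a toy illustration, not OpenAI’s actual method: the corpus string is invented, and ChatGPT uses large neural networks rather than simple bigram counts, but the dependence on volumes of harvested text is the same.

```python
from collections import Counter, defaultdict

# Hypothetical miniature "corpus"; real models train on billions of
# words scraped from web pages, forums, and encyclopedias.
corpus = (
    "the model reads public text the model predicts the next word "
    "the next word depends on what the model has read"
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> "model"
print(predict_next("next"))  # -> "word"
```

The more text such a model ingests, the better its predictions become, which is why training data gets collected at the scale the lawsuit describes.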
This lawsuit is not the first legal challenge faced by OpenAI. In November of last year, a class-action lawsuit was filed against OpenAI and Microsoft over the use of computer code hosted on GitHub to train AI tools. OpenAI was also recently sued by a radio host who claimed that ChatGPT produced text falsely accusing him of fraud. The company’s growing popularity and the visibility of its chatbot have made it a prominent target for legal action.
In a separate study, researchers at Comparitech discovered that 25% of kids’ apps available in Apple’s App Store potentially violate the Children’s Online Privacy Protection Act (COPPA). The study analyzed the top 400 children’s apps and found that many failed to provide clear and comprehensive information on obtaining parental consent. Sixteen apps had broken links or other issues that prevented the researchers from reviewing their privacy policies, and twenty-six had no child privacy policy at all. Almost half of the apps collected data without parental consent, with persistent identifiers the most commonly collected type.
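For context on what collecting a persistent identifier without parental consent looks like in practice, here is a rough sketch of the consent gate COPPA expects. The Session type, its fields, and the function below are hypothetical illustrations, not any real app’s or Apple’s API; under COPPA, identifiers such as advertising IDs, device IDs, and tracking cookies count as personal information and generally require verifiable parental consent in child-directed apps.

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    """Hypothetical session state for a user of a child-directed app."""
    is_child_directed: bool
    has_verified_parental_consent: bool

def persistent_identifier(session: Session) -> Optional[str]:
    """Return a device-scoped identifier only when COPPA permits it."""
    if session.is_child_directed and not session.has_verified_parental_consent:
        # The check the study suggests many apps skip: without verified
        # parental consent, a child-directed app should not collect
        # identifiers for tracking or advertising.
        return None
    return str(uuid.uuid4())  # stand-in for a real device identifier

# Without consent, the app has no identifier to report to trackers.
print(persistent_identifier(Session(True, False)))  # None
print(persistent_identifier(Session(True, True)))   # e.g. "3f2a9c..."
```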
While Apple is technically liable under COPPA for distributing apps directed at children, legal gray areas surrounding apps and app stores have allowed violations to go unchecked. The researchers reached out to Apple about the study, but the company has not yet responded. The findings underscore the importance of privacy protections for children using mobile apps and the need for app stores to enforce compliance with regulations like COPPA.
In a concerning cyberattack, the University of Manchester (UoM) recently disclosed that it suffered a data breach on June 9. According to leaked information, the hackers behind the breach claim to have obtained 7 terabytes of data, including information on 1.1 million National Health Service (NHS) patients across 200 hospitals. The stolen data, collected by UoM for research purposes, includes records of patients treated for major trauma after terror attacks.
One troubling aspect of this breach is that patients may not know whether their data is included in the database, since their consent was not required for its collection. Although the dataset has since been secured, UoM has warned NHS officials that patient data could still surface in the public domain. The university is investigating the incident with the help of internal and external data experts.
These cases highlight the ongoing legal challenges surrounding data privacy and security, and the need for stricter regulations and safeguards to protect individuals’ information. As technology advances, it is crucial that data collection and use adhere to ethical standards and that individuals’ privacy rights are respected.
