The UK’s Information Commissioner’s Office (ICO) has released updated guidance on workplace monitoring. The guidance covers not just employees but also “workers” such as consultants and contractors, and applies to both systematic and occasional monitoring, including the use of security cameras. It emphasizes the need for transparency and allows workers to challenge automated decisions with legal effects made on the basis of surveillance. Biometric data must be adequately protected, and special category data requires both a permitted purpose and a legal basis. Workers have the right to object to monitoring, though an employer can override the objection if it can demonstrate compelling legitimate interests.
Hackers have claimed responsibility for a recent cyberattack on St. Louis’ Metro Transit and have reportedly published the data stolen in the attack. The cybercriminals had demanded a ransom, which the agency refused to pay. The stolen data reportedly includes passports, Social Security numbers, and tax information belonging to agency employees. The breach has not yet been confirmed, but cybersecurity experts have shared screenshots of the allegedly stolen data, which has been viewed more than 700 times. So far there have been no reports of malicious activity affecting employees or customers as a result of the breach.
The UK’s Department for Science, Innovation and Technology has issued a code of practice on privacy and security for app stores and app developers. The code, composed of eight key principles, is voluntary but is recommended to help ensure compliance with existing legislation. Store operators are responsible for ensuring that developers adhere to the code and are advised to allow only apps that meet its requirements onto their platforms. The code also emphasizes the need for vulnerability disclosure processes, keeping apps up to date, and providing essential security and privacy information to users. The Information Commissioner’s Office has published additional guidance on legal obligations and on how to report apps with security or privacy issues.
Data brokers such as Near Intelligence have been supplying user data to the US government for surveillance purposes, according to The Wall Street Journal. Ad-supported phone apps collect data about their users, which is then shared with data brokers. The brokers repackage the information and sell it to government contractors, who pass it on to government agencies. Near Intelligence reportedly obtained data from various advertising exchanges and claimed to have collected data from over a billion devices. Some of the ad exchanges involved have since severed ties with Near Intelligence, stating that the company’s actions violated their terms of service.
The Delete Act, also known as Senate Bill 362, has been passed in California. The law allows consumers to request the deletion of their personal information from all data brokers with a single request. The California Privacy Protection Agency will create the data deletion mechanism to support these requests by January 2026. Data brokers will be required to review and process new requests every forty-five days starting in August 2026, and from 2028 they must undergo independent audits every three years to verify compliance. Failure to comply with the Delete Act can result in steep fines.
A new study raises concerns about the growing capabilities of AI chatbots. As chatbots become more advanced, they can generate increasingly realistic and persuasive responses, raising the risk that they could manipulate or deceive users and creating ethical and privacy issues. The study calls for further research into the ethics of AI chatbots and for the development of guidelines to ensure their responsible use.
