OpenAI Establishes New Safety Committee Following Dissolution of Previous Team



OpenAI has announced the formation of a safety and security committee, led by company directors Bret Taylor, Adam D’Angelo, and Nicole Seligman, along with CEO Sam Altman. The committee will make recommendations to the full board on safety and security decisions for OpenAI projects and operations.

In its official announcement of the committee, OpenAI revealed that it has already started training the next iteration of the large language model that powers ChatGPT, and emphasized the importance of robust debate on AI safety at this critical juncture. The committee’s initial task is to assess and enhance the organization’s processes and safeguards over the next 90 days; it will then present its recommendations to the board for review before sharing them publicly.

The committee’s formation follows the resignation of Jan Leike, a safety executive at OpenAI, who cited inadequate investment in safety initiatives and conflicts with leadership. OpenAI also disbanded its “superalignment” safety oversight team, reassigning its members to other roles within the company.

Cybersecurity expert and entrepreneur Ilia Kolochenko expressed skepticism about the societal benefits of this organizational change. While he acknowledged the importance of making AI models safe to prevent misuse and dangerous outcomes, Kolochenko stressed that safety is only one of the risks AI vendors must address: to be truly effective and beneficial for society, AI solutions must also be accurate, reliable, fair, transparent, explainable, and non-discriminatory.

The decision to establish a safety and security committee at OpenAI signals a proactive approach to addressing concerns about the ethical and responsible use of AI technologies. By involving key stakeholders in the evaluation and enhancement of safety measures, OpenAI is taking steps to ensure that its projects and operations adhere to high standards of security and accountability. As the committee begins its work, stakeholders will be looking to see how its recommendations will shape the future direction of AI development at OpenAI and the broader tech industry.

