OpenAI Establishes New Safety Committee Following Dissolution of Previous Team

OpenAI has announced the formation of a safety and security committee, led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee's purpose is to make recommendations to the full board on safety and security decisions for OpenAI projects and operations.

In their official announcement of the committee, OpenAI revealed that they have already started training the next iteration of the large language model that powers ChatGPT. The company emphasized the importance of engaging in a robust debate on AI safety at this critical juncture. The committee’s initial task is to assess and enhance the organization’s processes and safeguards over the next 90 days. Subsequently, the committee will present its recommendations to the board for review before sharing them with the public.

The formation of this committee follows the resignation of Jan Leike, a former safety executive at OpenAI, due to concerns about inadequate investment in safety initiatives and conflicts with leadership. Additionally, OpenAI disbanded its “superalignment” safety oversight team, reassigning its members to other roles within the company.

Cybersecurity expert and entrepreneur Ilia Kolochenko expressed skepticism about the potential societal benefits of this organizational change at OpenAI. While he acknowledged the importance of making AI models safe to prevent misuse and dangerous outcomes, Kolochenko noted that safety is only one of the risks AI vendors must address. He emphasized that AI solutions must also exhibit accuracy, reliability, fairness, transparency, explainability, and non-discrimination to be truly effective and beneficial for society.

The decision to establish a safety and security committee at OpenAI signals a proactive approach to addressing concerns about the ethical and responsible use of AI technologies. By involving key stakeholders in the evaluation and enhancement of safety measures, OpenAI is taking steps to ensure that its projects and operations adhere to high standards of security and accountability. As the committee begins its work, stakeholders will be looking to see how its recommendations will shape the future direction of AI development at OpenAI and the broader tech industry.
