OpenAI Announces Safety And Security Committee As New Models Are Developed

OpenAI has announced the formation of a new Safety and Security Committee as it prepares to train its next advanced AI model, intended to succeed the GPT-4 system that powers its ChatGPT chatbot. The San Francisco-based startup disclosed the move in a blog post on Tuesday, explaining that the committee will advise the board on critical safety and security decisions across OpenAI's projects and operations.

The committee's formation comes amid heightened scrutiny of AI safety at OpenAI, following the resignation of researcher Jan Leike, who criticized the company for prioritizing product development over safety. Co-founder and chief scientist Ilya Sutskever also departed, and the "superalignment" team the two co-led, which focused on long-term AI risks, was disbanded. Despite these internal upheavals, OpenAI maintained that its models lead the industry in both capability and safety, and said it welcomes robust debate at this critical moment.

The Safety and Security Committee comprises CEO Sam Altman, board chair Bret Taylor, four technical and policy experts from within OpenAI, and board members Adam D'Angelo of Quora and Nicole Seligman, former general counsel of Sony. Its first task is to evaluate and further develop OpenAI's existing safety practices over the next 90 days, after which the company has committed to publicly sharing the recommendations it adopts.

By bringing a broader group of experts and stakeholders into its decision-making, OpenAI aims to address safety and security concerns while continuing to pursue new AI development.

Alongside the committee's establishment, OpenAI confirmed it has begun training a new "frontier model," an advanced AI system trained on extensive datasets and capable of generating text, images, video, and human-like conversation. Separately, the company recently introduced its latest flagship model, GPT-4o ("omni"), a multilingual, multimodal generative pre-trained transformer unveiled by CTO Mira Murati in a live-streamed demonstration on May 13. GPT-4o is available for free, with enhanced features for ChatGPT Plus subscribers, and supports a context window of up to 128,000 tokens, allowing it to maintain coherence across long conversations and documents and making it well suited to detailed analysis.

Together, the new committee and the training of the next frontier model signal OpenAI's attempt to pair rapid advancement with a renewed emphasis on safety and security as the AI landscape continues to evolve.
