OpenAI is training a new AI system to succeed GPT-4 as it works to rebuild its reputation. To ensure safety, the company has established a safety and security committee, led by CEO Sam Altman and three other board directors.
The company has been under pressure to demonstrate its commitment to safety after former safety researchers Jan Leike and Gretchen Krueger publicly criticized what they described as OpenAI's poor safety culture on X, formerly known as Twitter.
In a blog post published on Tuesday, OpenAI confirmed it is training the GPT-4 successor but did not disclose its release date or capabilities. Valued at $80 billion, OpenAI took a bold step by launching its popular AI chatbot, ChatGPT, at a time when other major players such as Google were hesitant to do so because of reputational risk.
Despite being at the forefront of the generative artificial intelligence race, competing with companies like Anthropic, Google, and Microsoft, OpenAI has not always met ethical standards. Leike, in his resignation announcement, revealed that safety culture and processes were often neglected in favor of flashy products. Krueger emphasized the need for improvement in decision-making processes, accountability, transparency, policy enforcement, and responsible use of technology.
Adding to the criticism, a European Union task force recently reported that ChatGPT, OpenAI’s flagship product, falls short of accuracy standards.
In another high-profile controversy, OpenAI faced accusations of using a voice resembling that of actress Scarlett Johansson without her consent in a recent model update.
To address these concerns, OpenAI is now making safety the selling point of its upcoming AI program, aiming to be recognized by regulators as a responsible AI developer. The newly established safety and security committee, led by directors Altman, Bret Taylor, Adam D'Angelo, and Nicole Seligman, will evaluate and enhance the company's processes and safeguards over the next three months, then report its recommendations to the rest of the board.
Earlier this month, OpenAI disbanded its long-term safety team following the departure of its leader, co-founder Ilya Sutskever.
Source: Cryptopolitan Reporting by Jeffrey Gogo