OpenAI Employees Accuse The Company Of Neglecting Safety & Security Protocols To Speed Through Innovations: Report
Three OpenAI employees have said the team was pressured to rush through a new testing protocol, designed to "prevent the AI system from causing catastrophic harm," in order to meet a May launch date set by OpenAI's leaders.
OpenAI has been at the forefront of artificial intelligence (AI) and advanced Large Language Models (LLMs) for quite some time. Amid all this hype and popularity, however, the company appears to have lost sight of safety concerns. There have been instances of OpenAI employees resigning from their positions, citing the company's neglect of safety and security protocols as the reason. Now, another report has surfaced alleging that OpenAI is rushing through and neglecting safety and security protocols while developing new models.
The report emphasised the alleged negligence that occurred prior to the launch of OpenAI's latest GPT-4 Omni (or GPT-4o) model.
Some OpenAI employees signed an open letter without disclosing their identities, expressing concerns about the lack of oversight around building AI systems. It is worth noting that OpenAI has also created a new Safety and Security Committee, comprising select board members and directors, to evaluate and develop new protocols.
Is OpenAI Neglecting Safety Protocols?
Three OpenAI employees anonymously told The Washington Post that the team had been under pressure to rush through a new testing protocol, one specifically designed to "prevent the AI system from causing catastrophic harm," in order to meet a May launch date set by OpenAI's leaders.
The report highlighted one such incident that occurred before the launch of GPT-4o, quoting an anonymous OpenAI employee as saying, "They planned the launch after-party prior to knowing if it was safe to launch. We basically failed at the process."
OpenAI employees have previously raised concerns about the company's apparent neglect of safety and security measures. Recently, numerous former and current staff members from both OpenAI and Google DeepMind signed an open letter highlighting their worries about insufficient oversight of the development of new AI systems that could pose significant risks.