
Ex-OpenAI Employees Issue Open Letter Demanding Oversight Of Safety, Label Whistleblower Protections 'Insufficient'

The open letter addressed the risks associated with AI systems, such as manipulation, misinformation and the loss of control of autonomous AI systems.

It is no mystery that numerous minds at OpenAI dedicated their intelligence and hard work to bringing ChatGPT to our laptops and PCs. Now, a number of former employees who were part of this journey have written an open letter arguing that AI companies resist oversight and criticism. So far, the letter has been signed by 13 former employees, six of whom remained anonymous.


What Does The Letter Say?

The letter stated that since there is currently no effective government oversight of what these AI companies do, they should at least be open to criticism from their current as well as former employees. It added that these companies should also be accountable to the public. It further said that these AI companies currently have 'strong financial incentives' to ignore safety, and that existing 'corporate governance' structures are not enough to keep them in check.

The letter also mentions that AI companies have not yet publicly shared the capabilities and limitations of these systems, or what risk levels they pose. It also cautioned about the different kinds of harm such systems can do. As per these former employees of OpenAI, normal "whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."

In the past few months, numerous AI companies, including the juggernaut OpenAI, have been criticised repeatedly over their oversight of safety. Last month, OpenAI's chief scientist, Ilya Sutskever, parted ways with the company, following which the head of the Superalignment team, Jan Leike, also resigned. Leike claimed that safety had "taken a backseat to shiny products."

According to reports, after these resignations OpenAI shut down the Superalignment team and formed a new Safety and Security Committee, led by CEO Sam Altman himself.



