Ex-OpenAI Employees Issue Open Letter Demanding Safety Oversight, Call Whistleblower Protections 'Insufficient'

The open letter addressed the risks that are associated with AI systems such as manipulation, misinformation and losing control of autonomous AI systems.

It is no mystery that numerous minds at OpenAI dedicated their intelligence and hard work to bringing ChatGPT to our laptops and PCs. Now, a number of former employees who were a part of that journey have written an open letter arguing that AI companies must be open to oversight and criticism. So far, 13 former employees have signed the letter, six of them anonymously.

What Does The Letter Say?

The letter stated that since there is currently no effective government oversight of what these AI companies do, they should at least be open to criticism from their current as well as former employees. It added that these companies should also be accountable to the public. It further said that these AI companies currently have 'strong financial incentives' to ignore safety, and that existing 'corporate governance' structures are not enough to keep them in check.

The letter also mentions that AI companies have not yet publicly shared the capabilities and limitations of these systems, or the risk levels they pose. It also cautioned about the different kinds of harm that such systems can do. As per these former employees of OpenAI, normal “whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

In the past few months, numerous AI companies, including the juggernaut OpenAI, have been repeatedly criticised over their oversight of safety. Last month, OpenAI's chief scientist, Ilya Sutskever, parted ways with the company, following which the head of its Superalignment team, Jan Leike, also resigned. Leike claimed that safety has “taken a backseat to shiny products.”

According to reports, after these resignations OpenAI shut down the Superalignment team and formed a new Safety and Security Committee, led by CEO Sam Altman himself.
