Ex-OpenAI Employees Issue Open Letter Calling For Oversight Of AI Safety, Label Whistleblower Protections ‘Insufficient’
The open letter addressed the risks associated with AI systems, such as manipulation, misinformation, and the loss of control of autonomous AI systems.
It is no secret that numerous minds at OpenAI dedicated their intelligence and hard work to bringing ChatGPT to our laptops and PCs. Now, a number of former employees who were part of that journey have written an open letter arguing that AI companies are evading meaningful oversight and suppressing criticism. So far, the letter has been signed by 13 former employees, six of whom have remained anonymous.
What Does The Letter Say?
The letter stated that since there is currently no effective government oversight of what these AI companies do, they should at least be open to criticism from their current as well as former employees. It added that these companies should also be accountable to the public. It further said that these AI companies currently have 'strong financial incentives' to ignore safety and that existing 'corporate governance' structures are not enough to keep them in check.
The letter also mentioned that AI companies have not yet publicly shared the capabilities and limitations of these systems, or what levels of risk they pose. It also cautioned about the different kinds of harm that such systems can cause. According to these former OpenAI employees, the usual “whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”
In the past few months, numerous AI companies, including the juggernaut OpenAI, have been criticised repeatedly over their oversight of safety. Last month, OpenAI's chief scientist, Ilya Sutskever, parted ways with the company, after which the head of the Superalignment team, Jan Leike, also resigned. Leike claimed that safety had “taken a backseat to shiny products.”
According to reports, after these resignations OpenAI disbanded the Superalignment team and formed a new Safety and Security Committee, led by CEO Sam Altman himself.