AI Tool Copilot Designer Has Tendency To Create 'Sexually Objectified' Images: Microsoft Software Engineer
According to a letter sent to the FTC, the engineer said that he had repeatedly raised the issue with Microsoft and urged the company to remove Copilot Designer from public use until better safeguards were in place.
A software engineer from Microsoft recently sent letters to the company's board, lawmakers and the Federal Trade Commission (FTC), claiming that the tech giant is not doing enough to stop its AI image generation tool from creating abusive and violent content. The engineer, Shane Jones, said he had found a vulnerability in OpenAI's latest DALL-E image generator model that allowed him to bypass the safeguards that are supposed to prevent the tool from creating harmful images.
According to a letter sent to the FTC on Wednesday, Jones said he had informed Microsoft about the issue and 'repeatedly urged' the company to "remove Copilot Designer from public use until better safeguards could be put in place," Bloomberg reported.
The letter read, "While Microsoft is publicly marketing Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers. Microsoft Copilot Designer does not include the necessary product warnings or disclosures needed for consumers to be aware of these risks."
He alleged that Copilot Designer had a tendency to generate an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." He added that the AI tool created "harmful content in a variety of other categories including political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."
The FTC confirmed that it had received the letter but declined to comment further.
Jones also wrote to Microsoft's Environmental, Social and Public Policy Committee. In that letter, he wrote, "I don't believe we need to wait for government regulation to ensure we are transparent with consumers about AI risks. Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children."
He reiterated that he had raised his concerns with the company a number of times over the past three months. In January, he wrote to Democratic Senators Patty Murray and Maria Cantwell, asking them to investigate the risks of "AI image generation technologies and the corporate governance and responsible AI practices of the companies building and marketing these products."
Has Microsoft Indirectly Admitted As Much?
Microsoft announced last week that it was looking into complaints about its Copilot chatbot producing responses that users found disturbing, including inconsistent messages related to suicide. In February, Alphabet Inc.'s flagship AI product, Gemini, drew criticism for generating historically inaccurate scenes when asked to create images of people.
In a statement, Microsoft said it's "committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety."