A Microsoft executive has claimed that OpenAI's DALL-E 3 image-generating artificial intelligence has security vulnerabilities that could allow users to produce violent or immoral images. However, Microsoft's legal team apparently tried to dissuade Redmond's head of engineering, Shane Jones, from warning about the problem.
According to GeekWire, Jones sent a letter to US Senators Patty Murray and Maria Cantwell, Congressman Adam Smith, and Washington State Attorney General Bob Ferguson. In the letter, he writes:
“I have concluded that DALL-E 3 poses a public safety risk and should be removed from public access until OpenAI can address the risks associated with this model.”
In his letter, Jones claims that in early December of last year, he discovered a vulnerability that allowed him to bypass DALL-E 3's AI security safeguards. He says he reported this to his superiors at Microsoft, who instructed him to “personally report this to OpenAI.”
He then attempted to publicize the issue in a LinkedIn post. Jones wrote:
“On the morning of December 14th, 2023, I posted a letter publicly on LinkedIn to the OpenAI board asking them to take DALL-E 3 out of reach. As Microsoft is a board observer on OpenAI and I had already discussed my concerns with our leadership team, I immediately informed Microsoft of the letter I had sent.”
Microsoft and DALL-E 3 maker OpenAI's response to Jones' concerns
In response, Microsoft apparently asked Jones to remove his post:
“Shortly after disclosing the letter to the leadership team, my manager contacted me to say that Microsoft's legal department had asked me to remove the post. He told me that Microsoft's legal department would soon provide their reasons for removing this post via email and that I should delete it immediately without waiting for that email.”
Jones eventually deleted the post, but Microsoft's legal team has yet to send the promised email explaining its reasons.
Meanwhile, an OpenAI spokesperson explained in an email to Engadget:
“We immediately reviewed the Microsoft employee's report and found that the technique he shared did not bypass our security systems. Safety is our priority and we have a multifaceted approach to it. In the original DALL-E 3 model, we have tried to filter explicit content, including sexual and violent content, from its training data, and we have developed powerful classification systems to prevent the model from producing harmful images.”