A Microsoft engineering manager, Shane Jones, has claimed that OpenAI's DALL-E 3 image-generating artificial intelligence has security vulnerabilities that could allow users to produce violent or explicit images. Microsoft's legal team, however, apparently tried to dissuade Jones from warning the public about the problem.
According to GeekWire, Jones sent a letter to US Senators Patty Murray and Maria Cantwell, Congressman Adam Smith, and Washington State Attorney General Bob Ferguson. In the letter, he writes:
“I have concluded that DALL-E 3 poses a public safety risk and should be removed from public access until OpenAI can address the risks associated with this model.”
In his letter, Jones claims that in early December of last year he discovered a vulnerability that allowed him to bypass DALL-E 3's AI safety guardrails. He says he reported this to his superiors at Microsoft, who instructed him to “personally report this to OpenAI.”
He then attempted to publicize the issue in a LinkedIn post. Jones wrote:
“On the morning of December 14, 2023, I publicly posted a letter on LinkedIn to OpenAI's board of directors asking them to suspend the availability of DALL-E 3. As Microsoft holds a board observer seat at OpenAI, and I had already discussed my concerns with our leadership team, I promptly informed Microsoft of the letter I had posted.”
Microsoft and DALL-E 3 maker OpenAI respond to Jones' concerns
In response, Microsoft apparently asked Jones to remove his post:
“Shortly after disclosing the letter to my leadership team, my manager contacted me to say that Microsoft's legal department had demanded I remove the post. He told me that Microsoft's legal department would soon send its specific justification for the takedown by email, and that I should delete the post immediately without waiting for that email.”
Jones eventually deleted the post, but Microsoft's legal team has yet to send the promised email explaining its reasoning.
Meanwhile, an OpenAI spokesperson said in an email to Engadget:
“We immediately investigated the Microsoft employee's report and found that the technique he shared does not bypass our safety systems. Safety is our priority, and we take a multi-pronged approach to it. For the underlying DALL-E 3 model, we worked to filter the most explicit content, including sexual and violent content, from its training data, and we developed robust classifiers that steer the model away from generating harmful images.”