OpenAI said in a new report that ChatGPT rejected more than 250,000 requests to generate deepfake images related to the US presidential election. The chatbot also referred users who asked about the election to official sources.
According to OpenAI, in recent months ChatGPT rejected more than 250,000 requests to generate deepfake images of Joe Biden, Donald Trump, Kamala Harris, and JD Vance, Trump’s running mate.
Earlier this year, the artificial intelligence company announced that its products are designed to prevent deepfakes and impersonation of candidates.
ChatGPT’s performance in the US election
In the months leading up to the election, asking ChatGPT about voting would usually get you directed to CanIVote.org, the official online voting resource in the United States. According to OpenAI, ChatGPT answered roughly one million voting-related questions by pointing users to the site.
OpenAI also said that on Election Day ChatGPT referred users to news organizations such as the Associated Press and Reuters; according to the company, about two million user requests were directed to reputable news sites that day. ChatGPT likewise avoided expressing political opinions about the candidates. In contrast, chatbots such as Elon Musk’s Grok responded enthusiastically to Trump’s victory.
Overall, as OpenAI had promised, ChatGPT did not recommend any particular presidential candidate or political viewpoint, even when directly asked to do so. One concern was that DALL-E, OpenAI’s image generator, would be used to produce election deepfakes, but OpenAI says it rejected such requests.