OpenAI announced in a new report that ChatGPT rejected more than 250,000 requests to produce deepfakes related to the US presidential election. The chatbot also referred users who asked about the elections to official sources.
According to OpenAI, in recent months ChatGPT rejected more than 250,000 requests to produce deepfake images of Joe Biden, Donald Trump, Kamala Harris, and JD Vance, Donald Trump’s running mate.

Earlier this year, the artificial intelligence company announced that its AI products prevent deepfaking or impersonation of candidates.
ChatGPT’s performance in the US election
During the election, and even in the months before it, asking ChatGPT about the election would usually get you directed to CanIVote.org, the official online voting resource in the United States. According to OpenAI, the chatbot answered about a million voting questions and told users to check out the site.
In addition, OpenAI said that on Election Day it referred users to news organizations such as the Associated Press and Reuters; according to the company, about 2 million user requests that day were referred to reputable news sites. ChatGPT also avoided expressing political opinions about the candidates. In contrast, chatbots like Elon Musk’s Grok AI were excited about Trump’s victory.

In general, as previously promised by OpenAI, ChatGPT did not recommend any particular presidential candidate or political viewpoint to users, even when directly asked to do so. One concern was that DALL-E’s image-generation capabilities could produce deepfake images related to the election, but OpenAI says it rejected such requests.