OpenAI first unveiled Sora, its text-to-video artificial intelligence model, last winter, and recently gave a limited group of users early access to test it. Now some of those testers have apparently leaked access to the model as an act of protest.
The group of Sora testers claimed to have opened up access to the model by publishing a project on the public Hugging Face platform that apparently connected to the Sora API. The project worked through authentication tokens, presumably issued to the testers as part of the early-access program, and let visitors generate videos with the model.
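For illustration only, here is a minimal sketch of how a frontend like that might relay a text prompt to a video-generation API using a tester's token. The endpoint URL, parameter names, and token handling below are assumptions; the actual API surface used by the leaked project has not been published.

```python
import os
import requests

# Hypothetical endpoint and token: placeholders for illustration only.
API_URL = "https://api.example.com/v1/video/generations"
ACCESS_TOKEN = os.environ["EARLY_ACCESS_TOKEN"]  # token issued to an early-access tester

def generate_video(prompt: str) -> bytes:
    """Send a text prompt to the (hypothetical) video API and return raw video bytes."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"prompt": prompt, "duration_seconds": 10},
        timeout=300,
    )
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    video = generate_video("A paper boat drifting down a rainy street at dusk")
    with open("output.mp4", "wb") as f:
        f.write(video)
```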
Users created videos with the Sora artificial intelligence
The project was taken down shortly after it went live, but some users on the social network X say they managed to use it and that its output carried the hallmarks of OpenAI's model. The group claims OpenAI cut off all of the artists' access to Sora about three hours later.
But why leak access to the model at all? The group says OpenAI pressured early Sora testers to craft a positive narrative around Sora while doing nothing to compensate them for their work. “Hundreds of artists have toiled through bug testing, feedback, and unpaid demo work for a program (Sora Early Access) owned by a $150 billion company,” the group’s note reads in part. “This early access program seems to have been more about public relations and advertising than about creative and critical expression.”
The whistleblowers also claim that OpenAI misled the public about Sora’s capabilities and restricted access to the model in order to conceal the truth. According to them, every output of the video model must be approved by OpenAI before it can be shared, and the company will only screen work from a small subset of the testers.
In response, OpenAI said in a statement that participation in the program was voluntary and that testers were under no obligation to provide feedback or use the tool.