YouTube has announced a "likeness detection" tool developed to combat deepfakes and other AI-manipulated videos.
According to Engadget, the Likeness Detection feature is currently available to select members of the YouTube Partner Program. The tool only covers cases where a person's face has been altered using artificial intelligence; in cases where a person's voice has been cloned by AI without their consent, the feature likely cannot detect the misuse.
YouTube fights deepfakes with Likeness Detection
To use the feature, YouTube creators must provide the platform with an ID and a short selfie video so the system can verify that they are who they claim to be, and can then use this information as a reference for identifying their likeness in uploaded videos.
Likeness Detection works much like YouTube's Content ID: it scans uploaded videos for unauthorized use of a person's face. The person can then review the flagged videos and, if their likeness was misused, request that the videos be removed from the platform.
As artificial intelligence tools have spread across the internet, concerns about deepfakes and fake videos of real people have grown. In the past month, AI video generation tools such as OpenAI's Sora have been released, and they are remarkably capable of producing realistic footage.
Videos produced by Sora look so professional that distinguishing them from real footage is very difficult; it is therefore natural that, with such tools available, both celebrities and ordinary people want more control over how their face and image are used.
Recently, abuses of the Sora tool prompted OpenAI to block the generation of videos featuring the likeness of Martin Luther King Jr.
RCO NEWS