YouTube is rolling out its AI likeness detection tool to more people. The feature, first made available to creators in the Partner Program, is now being tested with government officials, journalists, and political candidates. The goal is to help protect public figures from having their image misused in AI-generated videos.
The tool works much like Content ID: it scans uploads for matches to a participant’s face or likeness. If a video is flagged, the person can review it and request removal if it violates YouTube’s privacy rules.
Not everything will be taken down, though. Parody, satire, and content considered in the public interest will still be allowed.
To join, participants need to verify their identity. YouTube says the data collected is only used for verification and to power the tool. It will not be used to train Google’s AI models.
The company also supports stronger laws to protect people’s likenesses, saying individuals should have more control over how their image is used.
Detecting fake likenesses early can reduce the spread of misleading content and protect viewers from being tricked by AI-generated media.