When AI tools started spreading online, deepfakes became one of the first big worries. And with tech like OpenAI’s new Sora 2 becoming more capable and easier to use, that concern has only grown. It’s no longer just about what AI can do but how people might misuse it. To give people more control over how their image is used, YouTube is finally rolling out a likeness detection tool designed to spot and remove unwanted deepfakes.
The feature was teased last year and is now launching for members of the YouTube Partner Program. It focuses on identifying videos where someone’s face has been altered or generated using AI; it won’t yet cover cases where someone’s voice is cloned without their consent. To use it, creators need to verify their identity by submitting a government ID and a short video selfie. This lets YouTube confirm who they are and gives the system reference material to compare uploads against.
Once set up, the tool works a lot like YouTube’s Content ID system, which detects copyrighted audio and video. It scans new uploads for faces that may be AI-generated matches of the verified creator and notifies them. If their likeness appears to have been used without permission, the creator can request a review and have the video taken down.
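YouTube hasn’t published how the matching works under the hood, but a common pattern for this kind of system is to compare face embeddings from new uploads against reference embeddings built from the creator’s verified ID and selfie. The sketch below is purely illustrative: the threshold, function names, and toy vectors are all assumptions, not YouTube’s actual implementation.

```python
import numpy as np

# Illustrative sketch only: YouTube has not disclosed its method.
# Assumed pattern: a face-recognition model turns each detected face
# into an embedding vector; an upload is flagged when its embedding
# is close enough to any of the creator's verified reference embeddings.

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; real systems tune this carefully

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_likeness(upload_embedding: np.ndarray,
                           reference_embeddings: list[np.ndarray]) -> bool:
    """Return True if the upload's face embedding is close enough to any
    verified reference embedding to warrant notifying the creator."""
    return any(cosine_similarity(upload_embedding, ref) >= SIMILARITY_THRESHOLD
               for ref in reference_embeddings)

# Toy vectors standing in for embeddings a face-recognition model would produce.
rng = np.random.default_rng(seed=42)
reference = [rng.normal(size=128) for _ in range(3)]     # from ID + video selfie
lookalike = reference[0] + rng.normal(scale=0.05, size=128)  # near-identical face
unrelated = rng.normal(size=128)                             # different person

print(flag_possible_likeness(lookalike, reference))  # True  -> notify creator
print(flag_possible_likeness(unrelated, reference))  # False -> no action
```

The parallel to Content ID is the workflow rather than the technology: a flagged match triggers a notification and a human-reviewed takedown request, not an automatic removal.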
For now, the tool is limited in scope, but it’s a clear sign that YouTube is starting to tackle one of the messiest problems of the generative AI era. Personally, I think it’s a step in the right direction: deepfakes have become so realistic that we genuinely need tools like this to help us tell what’s real.