YouTube AI likeness detection tool helps creators remove deepfake videos


AI-generated videos are becoming so realistic that identifying a fake online version of someone is increasingly difficult. This raises concerns for creators: what if their face appears in videos they didn’t produce?

YouTube is addressing this issue by expanding its AI likeness detection system to a larger group of creators. Eligible users will now have new tools to monitor and report AI-generated videos that imitate them. Initially tested with a small pilot within the YouTube Partner Program, the system will soon be available to all eligible creators over 18.

Accessible through YouTube Studio, the tool helps creators detect when their likeness has been used in synthetic or altered videos. It scans for AI-generated content resembling a creator’s face, allowing users to review suspicious videos and request removal if they breach privacy policies.
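YouTube has not published how its matching works. Conceptually, likeness-detection systems of this kind compare face embeddings: fixed-length vectors produced by a face-recognition model, where similar faces map to nearby vectors. The toy sketch below illustrates only that matching step, with made-up four-dimensional embeddings and a hypothetical threshold; it is not YouTube's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_likeness_match(reference, candidate, threshold=0.9):
    """Flag a candidate whose embedding is close enough to the creator's
    reference embedding. The 0.9 threshold is an arbitrary illustration."""
    return cosine_similarity(reference, candidate) >= threshold

# Made-up embeddings for illustration only.
creator = [0.12, 0.80, 0.35, 0.44]
suspect = [0.11, 0.79, 0.36, 0.45]   # nearly identical vector
other   = [0.90, 0.10, 0.05, 0.60]   # clearly different vector

print(is_likeness_match(creator, suspect))  # True
print(is_likeness_match(creator, other))    # False
```

In a real pipeline, the hard work is in the model that produces the embeddings and in setting a threshold that balances missed matches against false flags; the comparison itself stays this simple.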

This matters because AI impersonations, including deepfake videos, can mimic a creator's expressions, voice, and speech patterns with high accuracy, risking reputational damage and the spread of misinformation.

YouTube’s goal is to offer creators better insight into how their images are used and to help viewers avoid confusion.

To enable the feature, creators can go to YouTube Studio, navigate to Content Detection > Likeness, and follow the steps for permission and verification.

Once activated, the system will automatically scan for AI-altered videos using their likeness. If matches are found, creators can review and request removal directly in YouTube Studio.

The platform also notes that flagged videos may not appear immediately. That isn't a sign of a problem, simply that no matching content has been found yet; the system keeps scanning in the background. The rollout underscores a broader shift across online platforms: AI tools are advancing faster than moderation systems can keep up.

Companies are under increasing pressure to develop safeguards against identity misuse, synthetic media, and deepfakes before issues escalate. For YouTube creators, this new detection system could be one of the most important safety tools in the AI age.