YouTube Warns of AI-Generated Video Impersonating CEO in Phishing Scam

Edited by Ben Jacklin


In the recent scheme, scammers created a deepfake video that appears to show Neal Mohan, YouTube’s CEO, announcing changes to the platform’s monetization policies. The video was shared privately with targeted creators via email. The ruse is designed to steal YouTube channel login credentials, and several creators have already been affected.

Deepfake technology primarily uses a specific type of neural network known as a Generative Adversarial Network (GAN). GANs operate by training two neural networks simultaneously: a “generator,” which creates new images or video frames, and a “discriminator,” which tries to tell the generated output apart from real examples. As the two networks compete, the generator’s output becomes progressively harder to distinguish from genuine footage.
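
For readers curious about what that generator-versus-discriminator loop looks like in practice, here is a minimal, illustrative PyTorch sketch. The layer sizes, data dimensions, and the random “real” batch are placeholder assumptions chosen for the example; real deepfake systems work on video frames and are vastly larger and more sophisticated.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce fake
# samples while a discriminator learns to tell them apart from real ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # placeholder sizes, e.g. a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator: label real data 1, generated data 0.
    fake_batch = generator(torch.randn(batch_size, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: try to make the discriminator output 1 for fakes.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch_size, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example: one training step on a random stand-in for a batch of real samples.
print(train_step(torch.randn(16, data_dim)))
```

Each call to train_step() first nudges the discriminator toward separating real from generated samples, then nudges the generator toward fooling it – the adversarial tug-of-war described above.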

According to YouTube and security researchers, the attackers made the scam video seem legitimate by using YouTube’s own features. 

In the video’s description, victims are prompted to click on a link to “confirm the updated YouTube Partner Program (YPP) terms” in order to continue monetizing their content. That link leads to a fake website – studio.youtube-plus.com – crafted to look like a YouTube login page. Once creators enter their username and password, the credentials are siphoned straight to the scammers.
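
Part of what makes the trick work is that “studio.youtube-plus.com” merely contains the word “youtube”; the registered domain is not youtube.com at all. As a purely illustrative sketch (the allowlist of trusted domains below is an assumption made for the example, not an official list), here is how a script might check the actual hostname behind a link before trusting it:

```python
# Illustrative check (not an official YouTube tool): confirm that a link's
# hostname really belongs to a trusted domain rather than merely containing
# the word "youtube" somewhere in it.
from urllib.parse import urlparse

# Assumed set of legitimate domains for this example; adjust as needed.
TRUSTED_DOMAINS = {"youtube.com", "youtu.be", "google.com"}

def is_trusted(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the domain itself or any true subdomain (e.g. studio.youtube.com),
    # but reject look-alikes such as youtube-plus.com.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted("https://studio.youtube.com/channel/settings"))  # True
print(is_trusted("https://studio.youtube-plus.com/login"))        # False
```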

Reports of this campaign began surfacing in late January, and YouTube’s security team started investigating the issue by mid-February.

In early March, the platform took the unusual step of posting a prominent warning on its official community support forum, stating that it is “aware that phishers have been sharing private videos to send false videos.”

YouTube says it’s working to shut down the fraudulent pages and videos involved. The company has also directed affected creators to resources for recovering hacked accounts.

The company also emphasized that it never contacts users or shares policy changes via unlisted private videos. If you receive such a video or email, report it to YouTube via the in-platform reporting functions or the Help Center, and consider blocking the sender.

That said, not all uses of AI-generated video are malicious – the same technology is being employed in positive and creative ways. For example, in a 2019 public health campaign, a deepfake was used to make soccer star David Beckham appear to speak nine different languages in a malaria awareness video.

Museums have even used deepfakes to animate historical figures; visitors to the Dalí Museum in St. Petersburg, Florida, can “meet” a lifelike digital Salvador Dalí who chats with them, thanks to this technology.

Digital security experts warn that internet users may need to adopt a mindset of healthy skepticism toward unexpected videos or voice messages – “trust, but verify” might become “verify first, then trust” in an era when seeing (or hearing) is no longer proof of believing.
