Published in AI

CEO deepfakes scam millions from gullible staff

21 August 2025


AI cons get dangerously slick

Crooks have discovered they don’t need to hack your network when they can just pretend to be your CEO and ask nicely.

Cybercriminals are using AI-generated voice and video to impersonate corporate bigwigs and trick staff into wiring away millions in company funds or giving up confidential information.

According to the Wall Street Journal, deepfake scams might not be new, but the surge in cheap and convincing AI tools has turned them into a bigger threat than ever, security experts say.

Adaptive Security chief executive and co-founder Brian Long said his OpenAI-backed outfit tracked more than 105,000 deepfake attacks in the US last year.

“A year ago, maybe one in 10 security executives I spoke to had seen one. Now it’s closer to five in 10,” Long said.

Firms already caught out include Ferrari, cloud security outfit Wiz and ad agency WPP. At engineering firm Arup, a UK staffer sent $25 million (€23.1 million) to fraudsters after a fake video call with AI-generated company execs.

The usual script involves a finance manager or senior engineer getting a message from a very convincing fake CEO, claiming there’s an urgent merger or deal needing instant action. This is followed by a virtual meeting with a deepfaked executive giving instructions to wire funds, send internal data or click dodgy links.

These digital puppets mimic familiar voices and faces in real time, throwing in accents, speech patterns and other mannerisms.

According to Darktrace director of security and AI strategy Margaret Cunningham, the technique works because “they simply target how humans operate.”

She said: “Familiarity, authority and urgency are powerful cognitive levers... once trust is established, even briefly, attackers can step into an insider role and request actions that feel legitimate.”

Guardio head of research Nati Tal reckons the real numbers are even higher, with many firms keeping their mouths shut to avoid reputational damage.

“Many companies and organisations will never disclose these attacks publicly,” Tal said.

YouTube warned creators in March about a phoney video of CEO Neal Mohan announcing changes to monetisation policies. The fake video was privately circulated and contained links leading to phishing sites designed to nick credentials or install malware.

“If creators have received these private videos from phishers, we encourage them to report the content,” a YouTube spokesperson told the Journal.

Optiv vice president of global cyber advisory James Turgal said AI deepfake fraud cost more than $200 million in just the first quarter of this year. OpenAI boss Sam Altman reckoned a “fraud crisis” is brewing thanks to AI’s power to impersonate people.

US Treasury’s Financial Crimes Enforcement Network has seen a rise in deepfake scams aimed at banks, mortgage firms and casino operators. In an alert late last year, it warned that crooks are now “impersonating an executive or other trusted employee and then instructing victims to transfer large sums.”

The American Bankers Association claims to be keeping an eye on things, working with federal agencies to alert members.

According to Group-IB chief executive Dmitry Volkov, top brass make perfect deepfake bait. With countless interviews, promo clips and public speaking gigs online, execs provide an endless stream of training data for scammers.

Guardio’s Tal added: “A few minutes of clean audio or video is extremely valuable... AI models learn to copy someone’s voice pattern, tone and facial movements.”

Detection is getting harder. Dave Tyson, partner at iCOUNTER, said deepfake scams are increasingly sophisticated, avoiding the usual telltale signs like dodgy links or glitchy behaviour.

“A greater and greater proportion of scams are leaving out these giveaways,” Tyson said.

A new wave of deepfake detection startups has popped up, with early funding rounds ranging from $5 million to more than $30 million (€27.7 million).

YouMail chief executive Alex Quilici said tech isn’t enough. He reckons the best fix is low-tech: verify requests using other channels. He also encouraged execs to make it clear to their teams that it’s OK to double-check identity when something smells off.
