One employee at a British engineering firm joined a routine video call with his CFO and 5 colleagues in 2024


He authorized $25.6 MILLION in transfers based on what was discussed. What he didn't know: every single person on that call besides him was an AI deepfake
The company was Arup. A British design and engineering firm with 18,500 employees across 34 offices
This entire attack started with a single email
A finance worker in Arup's Hong Kong office received a message claiming to be from the firm's UK-based CFO
The email referenced a "secret transaction" that needed to be processed urgently and confidentially
The worker thought it was phishing. He was right to be suspicious but then the attackers escalated
They invited him to a video conference call to verify the request was real. He joined expecting to confirm it was a scam
The CFO was on the call. So were several other senior colleagues he recognized. They greeted him by name
They discussed the transaction in detail using internal company terminology. They referenced ongoing projects only insiders would know about
Their faces moved naturally, their voices sounded exactly right, and they held real conversations with each other and with him
He stopped being suspicious. Following the instructions from the call, he made 15 separate wire transfers totaling HK$200 million ($25.6 MILLION) to five different Hong Kong bank accounts over the following days
The fraud was only discovered when he followed up with the actual UK head office about the project. The real CFO had never heard of the transaction. The other colleagues on the call had never been on the call
Every face on that screen was a deepfake
Hong Kong police investigated... The attackers had built sophisticated AI deepfakes of all the executives by harvesting publicly available video and audio from online conferences, virtual company meetings, and earnings calls
Arup had hours of public-facing video of its leadership
The deepfakes weren't pre-recorded; they were running in real time. Each fake participant could speak, react, respond, and engage in unscripted dialogue with the victim
Single phishing emails get caught all the time. Voice clone phone calls have a higher success rate but still fail when the victim asks for written confirmation
Multi person deepfake video calls solve both problems at once. You can't ask for verification when you're already verifying with your own eyes
In the same investigation, Hong Kong police uncovered a related operation. Eight stolen Hong Kong identity cards were used to make 90 loan applications and 54 bank account registrations
AI deepfakes successfully bypassed facial recognition systems on at least 20 occasions to verify those identities as legitimate
The technology stack is now mature enough to fool both human verification (video calls) and machine verification (KYC facial recognition)
Arup's chief information officer Rob Greig later confirmed that attacks on the firm have been "rising sharply," and he estimated the global cost of AI-enabled fraud will hit $40 BILLION by 2027
The Arup loss was $25.6M from one finance worker on one call. It's the most successful single deepfake fraud ever publicly recorded
The investigation remains open and no arrests have been made. The funds were dispersed across multiple accounts within hours of the transfers and most have already been laundered out of the Hong Kong banking system
To protect yourself, there are a few checks you can run:
Ask the person on the call to turn their head and show their full profile because the deepfake model usually breaks at extreme angles
Ask them to change the lighting in their room as deepfakes struggle with sudden lighting changes
Ask them to pick up a random object and show it on camera, because the model often glitches when integrating real objects into the synthetic face
These checks work today. They will not work for much longer
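Beyond the on-camera tricks, the durable defense is procedural: never treat the channel the request arrived on as its own verification. Here's a minimal sketch of that policy logic (hypothetical function and field names, not any real firm's controls):

```python
# Illustrative policy sketch: a transfer is approved only if it was
# confirmed over a DIFFERENT channel than the request came in on,
# and that confirmation was initiated by the recipient themselves
# (e.g. a callback to a number from the company directory, never a
# number or link supplied in the request).

def approve_transfer(amount_usd, request_channel, confirmations):
    """Return True only if some confirmation is both out-of-band
    (different channel) and self-initiated by the would-be victim."""
    for c in confirmations:
        if c["channel"] != request_channel and c["initiated_by_us"]:
            return True
    return False

# The Arup scenario: the video call "confirmed" the email request,
# but the attackers set up both channels. The call fails the
# self-initiated check, so the transfer would be blocked.
approved = approve_transfer(
    25_600_000,
    request_channel="email",
    confirmations=[{"channel": "video_call", "initiated_by_us": False}],
)
print(approved)  # False
```

The point of the sketch: the victim's mistake wasn't trusting his eyes, it was letting the attacker choose every verification channel. A callback he dialed himself to the real UK office would have failed immediately, which is in fact how the fraud was eventually discovered.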
The bigger story is what this attack proves about the next decade of corporate finance
Every executive at every public company has hours of recorded video on YouTube, LinkedIn, earnings calls, and conference panels. That footage is the training dataset for any attacker who wants to build a deepfake of them
Something to be concerned about