Deepfake-Driven Fraud Is Costing Companies Millions
Credit: Reality Defender/Dall-E 3

Last week, a Hong Kong finance worker was tricked into transferring $25 million to scammers. According to the Hong Kong police, the worker fell for an advanced social engineering attack that used deepfake technology to impersonate his company's CFO.

This is nowhere near the first high-profile case involving significant financial losses to voice-powered fraud attacks, nor will it be the last. Last year, in the U.S. alone, voice deepfakes and other “synthetic identity” attacks accounted for 33% of fraud events reported by businesses. Such schemes may cost taxpayers in the U.S. $1 trillion over the next twelve months, with billions in losses projected to hit enterprises facing similar schemes over the same timeframe. These incidents and projections prove that financial voice fraud is not a problem for some distant future or for someone else: it is ever-present today, with shocking and costly consequences for everyone.

The massive $25 million sum stolen shows that criminals, both amateur and organized, will pull out all the stops to maximize their fraud attempts with brazen, high-reward scams that were unimaginable just a few years ago. Deepfake technology has advanced to the point where even low-tech criminals can create fake but convincingly realistic media of specific people saying and doing things they never actually did. In the Hong Kong case, criminals fooled the worker into thinking he was on a legitimate video call with his well-known colleagues, their likenesses and voices expertly recreated using generative AI. With recent advancements in audio deepfakes, both text-to-speech and speech-to-speech, pairing a realistic-sounding voice with a convincing-enough video is enough to keep even the most astute worker from second-guessing whom they're talking to.

While much public discussion of deepfakes still focuses on potential future risks — particularly those used in the political sphere — the threat of deepfake-driven fraud is already here. As many companies either scramble or neglect to deal with this threat, we expect to see many more stories of professionals across different industries falling victim to convincing fake voice recordings and videos of colleagues asking for help in transferring funds or sharing sensitive data.

The stakes, of course, run the gamut of existential problems, from a depleted bottom line to a tarnished brand reputation and even, if left unchecked, the crippling of an entire company in one swoop. This is no longer mere speculation; with a sufficiently targeted and sophisticated deepfake-powered social engineering attack, such outcomes are wholly possible.

How to Stop Deepfake Fraud Today

As generative AI creation tools improve, so too does the detection end of the equation. At Reality Defender, we built industry-leading tools to solve this issue at the most critical touchpoints. We do this by empowering enterprises to analyze calls in real time and at scale, determining whether the party on the other end of the line is misrepresenting themselves with generative AI.

Reality Defender's audio detection requires just a six-second voice sample to check for signs of AI manipulation on any existing call center or telecommunications backend. Instead of comparing the caller's voice against a database of known voices, we employ a highly accurate and robust inference-based method, using multiple concurrent models to detect innumerable AI signatures as the call happens.
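To make the idea of inference-based ensemble detection concrete, here is a minimal, purely illustrative sketch in Python. It is not Reality Defender's actual API or models; the detector names, the 16 kHz sample rate, the placeholder scoring functions, and the averaging threshold are all assumptions invented for this example. The only detail taken from the text above is the six-second minimum window.

```python
# Hypothetical sketch: score one six-second audio window with several
# concurrent "models" and average their probabilities of synthesis.
from dataclasses import dataclass
from typing import Callable, List

SAMPLE_RATE = 16_000   # samples per second (assumed, not from the article)
WINDOW_SECONDS = 6     # minimum voice sample length mentioned in the text

@dataclass
class Detector:
    name: str
    score: Callable[[List[float]], float]  # returns P(synthetic) in [0, 1]

def ensemble_verdict(window: List[float], detectors: List[Detector],
                     threshold: float = 0.5) -> dict:
    """Run every detector on the same window and average their scores."""
    if len(window) < SAMPLE_RATE * WINDOW_SECONDS:
        raise ValueError("need at least six seconds of audio")
    scores = {d.name: d.score(window) for d in detectors}
    mean = sum(scores.values()) / len(scores)
    return {"scores": scores, "mean": mean, "flagged": mean >= threshold}

# Toy stand-ins for real models; each would inspect the raw samples.
detectors = [
    Detector("spectral", lambda w: 0.8),  # placeholder constant scores
    Detector("prosody",  lambda w: 0.6),
    Detector("artifact", lambda w: 0.7),
]

window = [0.0] * (SAMPLE_RATE * WINDOW_SECONDS)  # silent dummy audio
verdict = ensemble_verdict(window, detectors)
print(verdict["flagged"], round(verdict["mean"], 2))  # → True 0.7
```

The design point this sketch illustrates is why an inference-based approach needs no voiceprint database: each model scores the audio itself for synthesis artifacts, so an unknown caller can be flagged without ever having been enrolled.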

This kind of deepfake detection capability is no longer just an aspirational safeguard; it is an absolute necessity for any organization that relies on remote verbal communication or a high volume of calls to complete transactions. Companies across finance, healthcare, and other impacted sectors already use Reality Defender's solution to avoid becoming the next victim of a brazen AI-powered scam, with employees notified instantly when an ongoing call may involve deepfake manipulation.

Criminals are already using deepfakes to steal tens of millions of dollars at a time. As deepfakes improve and these incidents multiply, the losses of today will seem like chump change tomorrow. Making deepfake detection a standard part of a digital security strategy is the only way organizations can shield themselves from these attacks and avoid the catastrophic financial and reputational losses that unchecked voice fraud can inflict.

-Ben Colman, Co-Founder and CEO, Reality Defender


Facing an increase in sophisticated incidents utilizing deepfaked audio to deceive employees, a multinational tier-one bank contacted Reality Defender to analyze calls and determine real callers from those committing fraud.

Click below to download this free case study and learn more about Reality Defender's anti-fraud solutions.


Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.
