
UBS’ data breach is perfect fuel for deepfake frauds

This week, Swiss banking giant UBS confirmed a breach at a third-party supplier that exposed personal details of 130,000 employees, including the direct phone number of CEO Sergio Ermotti. This massive data leak is more than a privacy nightmare; it’s a fraudster’s dream toolkit. Why? Because armed with names, roles, and contact info of key personnel, today’s attackers can go beyond phishing emails; they can call employees impersonating executives using AI-generated voices. Welcome to the era of deepfake “CEO fraud,” where criminals clone voices to scam organizations. Here’s how these frauds work, why they’re escalating, and what you can do now to defend your institution.

From a data leak to deepfake fraud

Armed with widely available voice-cloning tools and insider info (like direct lines and org charts from breaches), an attacker calls an employee and sounds exactly like the CEO or CFO, matching tone and accent. The fake boss urgently instructs a wire transfer or a confidential action, often with a pretext (“I’m in a meeting, it’s time-sensitive, just do it”).

Employees often fall for these deepfake calls because the voice on the line truly sounds like someone they trust. This is the new face of impersonation fraud: not a sketchy email from a “Nigerian prince,” but a phone call that looks and sounds like it’s straight from the C-suite.

The result is a surge in incidents. Financial institutions are seeing a sharp uptick in voice impersonation fraud: deepfake voice fraud attempts rose over 1,300% in 2024 (Speechtechmag).

Mitigate the risks with appropriate response

Below are five key actions that can turn a data breach like the UBS/Chain IQ incident into a controlled, multi-layered response.

Raise awareness and skepticism

Make sure your employees, from the helpdesk to treasury, know that voice calls can be faked. Train staff to recognize the hallmarks of a deepfake call scam, such as unusual requests that violate policy or urgent pressure to bypass normal procedures. Encourage a culture where it’s okay to question a request supposedly from a VIP if something feels “off.” Scammers exploit trust and urgency, so healthy skepticism can thwart many attempts.

Verify identities through a second channel

Institute strict policies to verify any high-risk request through an independent channel. For example, if a staff member gets a call that sounds like the CEO asking for an emergency transfer, they should hang up and call back using the official number on file or confirm via a known personal contact. No legitimate leader should object to a quick double-check. Instill a “never trust, always verify” mindset for unexpected instructions. This simple step can break the scam’s spell, as the fraudster will typically vanish when challenged.

Use code phrases or multi-person approval 

Some banks are adding passphrases or verification questions for authorization calls (“What’s the project code for this quarter?”): something a deepfake caller wouldn’t know. Others require two executives to approve large transfers, making it much harder for a single spoofed call to do damage. Such process safeguards can neutralize the one-person impersonation tactic.
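The dual-approval rule can be expressed just as simply. The sketch below (again with hypothetical names and a made-up threshold) refuses to release a large transfer unless two distinct people have signed off, so one spoofed call is never sufficient on its own.

```python
# Hypothetical sketch of a dual-approval ("four-eyes") control for transfers.
# The threshold and role names are illustrative, not a real bank policy.

LARGE_TRANSFER_THRESHOLD = 50_000

def can_release(amount: float, approvers: set[str]) -> bool:
    """Large transfers need sign-off from at least two distinct people.

    Using a set means the same person approving twice still counts once.
    """
    if amount < LARGE_TRANSFER_THRESHOLD:
        return len(approvers) >= 1
    return len(approvers) >= 2

# One convincing (possibly deepfaked) voice is never enough by itself:
assert can_release(500_000, {"cfo"}) is False
assert can_release(500_000, {"cfo", "treasurer"}) is True
```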

Lock down executive data

Treat executive and key employee information as the crown jewels. The less attackers can learn about your org chart, direct lines, or personal details, the harder it is for them to impersonate you convincingly. Limit what you publish online about staff roles and movements. Harden your vendor and HR systems against breaches, since those are prime targets (as UBS learned). Regularly audit what PII (personally identifiable information) of execs is floating around, and get exposed phone numbers or emails changed if possible. Reducing the “digital footprint” makes life tougher for would-be impostors.

Deploy real-time deepfake detection technology

Finally, consider augmenting your defenses with advanced AI that can automatically detect deepfake voices. New solutions (like Aurigin.ai’s Aurigin Guard) can run in the background of calls and video meetings, analyzing audio patterns to flag synthetic voices in real time. These tools use machine learning to spot subtle anomalies inaudible to humans, giving you an instant warning if “the person speaking” might be an AI clone. With over 98% accuracy in catching deepfake audio, such technology acts as a safety net, especially in those critical moments when an employee is one click away from sending money. Integrating real-time deepfake detection into your communication channels can stop a fraudulent transfer in its tracks by alerting your team before the scammer gets what they want.

By combining staff vigilance, process checks, and cutting-edge detection tools, banks can stay a step ahead of deepfake fraudsters. It’s about restoring trust in voice communications without leaving an Achilles’ heel for attackers to exploit.

Deepfake CEO fraud is a real and rising threat in the financial industry, but it’s one you can counter with proactive measures. The UBS data leak should be a wake-up call for every bank: if criminals can imitate your leaders, you need to authenticate your callers. Don’t wait for a costly incident to take action.

To learn more about protecting your institution from deepfake voice scams or to see Aurigin.ai’s real-time deepfake detection in action, contact us or request a demo.