Deepfake Call from the CEO: CEO Fraud Targeting Mid-Sized Companies
In January 2024, an employee at the UK-based engineering firm Arup transferred $25.6 million to fraudsters. He believed he was participating in a video call with his CFO and several colleagues – all of whom were deepfakes. This was no isolated incident: voice phishing attacks surged by 442 percent in 2024 (H1 to H2), while deepfake fraud attempts against German companies rose by 53 percent. Mid-sized firms are especially exposed. Where flat hierarchies and rapid decision-making are strengths, they become vulnerabilities in CEO fraud schemes.
The Key Takeaways
- Voice phishing up 442 percent: AI-generated voices make calls from the supposed CEO nearly indistinguishable from the original (CrowdStrike, Global Threat Report 2025).
- Deepfake fraud in Germany plus 53 percent: Attempts rose significantly in 2025, particularly targeting mid-sized companies with fewer control instances (Sumsub Identity Fraud Report, 2025).
- $25.6 million loss at Arup: A single deepfake video call with a fake CFO sufficed for the largest documented single transaction to date (CNN, May 2024).
- BSI recommends callback verification: Codeword protocols and mandatory callbacks via stored numbers as a first protective measure against CEO fraud.
- Inconsistent insurance coverage: Traditional cyber insurance policies often cover social engineering losses only up to a low sublimit or exclude AI-driven attacks explicitly.
How a Deepfake Attack Works in Practice
Old-school CEO fraud was an email: the supposed managing director asks accounting to urgently initiate a transfer. The sender address was faked, the wording clumsy, the trick easy to spot after some training. The new generation works differently.
In the Arup case, attackers first collected publicly available video and audio recordings of the CFO. LinkedIn videos, conference appearances, and podcast interviews provided enough material to clone voice and appearance. Then they scheduled a video call in which not only the CFO but also several other colleagues appeared as deepfakes. The employee saw familiar faces, heard familiar voices, discussed an ongoing project. The instruction to transfer funds came in the context of a normal business conversation.
The technical hurdle for such attacks is dropping rapidly. Current voice cloning services need less than ten seconds of audio to convincingly copy a voice. Real-time video deepfakes are possible with freely available software. The quality has reached a point where even trained employees can no longer reliably tell the difference.
“The combination of AI-supported voice cloning and real-time video deepfakes represents a qualitatively new threat that many companies are not prepared for.”
– BSI, Situation Report on IT Security in Germany 2025
Why the Mid-Sized Sector is Especially Exposed
Large corporations usually have multi-level approval processes for transfers: four-eyes principle, signature regulations above certain thresholds, automated compliance checks. In the mid-sized sector, reality often looks different.
In companies with 50 to 500 employees, accounting knows the managing director personally. A call from the boss requesting an urgent transfer is nothing unusual. The short paths that make the mid-sized sector operationally strong simultaneously create attack surfaces: fewer control instances, more trust in oral instructions, and a higher barrier to questioning what the boss asks for.
Added to this is the public visibility of executives. Mid-sized managing directors are frequently active on LinkedIn, speak at industry events, give interviews. Every public appearance supplies material for deepfakes. The paradox: The more successfully an entrepreneur builds their personal brand, the easier they become a target.
The figures confirm the risk: according to industry analyses, 80 percent of companies have no established protocols or response plans for deepfake-based attacks. In the mid-sized sector, this share is likely even higher, because cybersecurity teams often consist of one person or do not exist at all.
The Damage Balance: What Deepfake Fraud Costs
The damage goes beyond the immediate transfer. In the Arup case, $25.6 million was gone before the fraud was noticed. Worldwide losses from deepfake fraud exceeded 200 million US dollars in the first quarter of 2025 alone, with North America most affected at 38 percent of incidents (Resemble AI Q1 2025 Report). The number of unreported cases is high: many companies do not report such incidents, out of shame or fear of reputational damage.
In addition to the direct financial damage, follow-up costs arise: forensic investigations, legal remediation, tightened internal controls, and potentially reporting obligations under the GDPR if personal data was affected. And the damage to trust within the company is hardly quantifiable: the employee who initiated the transfer did everything right based on what they could have known.
The insurance situation is inconsistent. Classic cyber insurance policies often cover social engineering losses only up to a low sublimit. Some policies explicitly exclude deepfake fraud. Companies should check their policy for coverage scope for AI-supported attacks and renegotiate if necessary.
The Attack Chain: From LinkedIn Profile to Fake CFO
A deepfake attack on a mid-sized company follows a systematic pattern. Preparation begins weeks before the actual fraud attempt and uses exclusively publicly accessible information.
Phase 1: Reconnaissance. Attackers analyze the company website, the legal notice (Impressum), and the LinkedIn profiles of management and the finance department. They identify decision structures, current projects (via press releases), and personal relationships in the leadership team. In many mid-sized companies, the “About Us” page is enough to map the complete decision chain.
Phase 2: Collecting Material. Less than ten seconds of audio is sufficient for the voice clone. For a real-time video deepfake, current models need 30 to 60 seconds of high-quality video material. LinkedIn videos, which many managing directors post regularly, deliver both simultaneously. Conference recordings on YouTube or company channels are also fruitful sources.
Phase 3: Timing. Attackers choose a time when the real managing director is hard to reach: trade fair weeks, vacation time, Friday afternoons. In the Arup case, the real CFO was on a business trip. The urgency of the transfer was justified with a supposedly time-critical acquisition.
Phase 4: Striking. The deepfake call takes place and the transfer is ordered. The amounts are immediately forwarded via several accounts in different countries. After 24 to 48 hours, the funds are practically untraceable.
This pattern shows: The vulnerability is not the company’s technology, but the trust between people. That is exactly why organizational countermeasures work better than technical ones.
Detection Tools: What the Market Offers and What Really Helps
Deepfake detection is a race between attackers and defenders, which defenders are structurally losing. Every improvement in detection software is overtaken by better generation models. Nevertheless, there are tools that reflect the current state of the art.
On the analysis side, products like Pindrop’s Deepfake Detection, Reality Defender and Sensity AI work with AI-based models that identify artifacts in image, sound and video: unnatural lip movements, inconsistencies in lighting, anomalies in the frequency spectrum of the voice. These tools work well with pre-produced videos, but reach limits with real-time deepfakes.
More relevant for enterprise use are solutions that secure the communication channel instead of detecting the deepfake itself. Platforms like Veridas or iProov rely on biometric verification: Before a sensitive transaction is triggered, the requester must authenticate via a separate, verified identity check. This is technically more complex, but more robust than trying to unmask the deepfake.
The honest assessment: No tool offers one hundred percent security. Technical detection is a building block, but no substitute for organizational measures. The BSI therefore recommends a combined approach of technology and process.
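The channel-securing approach described above can be illustrated with a minimal out-of-band challenge: a one-time code is delivered via a separate, pre-registered channel and must be echoed back before the transaction proceeds. This is a hedged sketch of the general principle, not how any named product works; real platforms such as iProov use biometric factors instead, and the code delivery step is deliberately left out.

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a six-digit one-time code to be delivered out of band,
    e.g. via SMS to a pre-registered number (delivery not shown here)."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_response(issued: str, received: str) -> bool:
    """Compare in constant time so response timing does not leak the code."""
    return hmac.compare_digest(issued, received)

# Illustrative flow: the code is sent over the separate channel,
# and only a matching echo releases the sensitive transaction.
code = issue_challenge()
print(verify_response(code, code))       # matching response passes
print(verify_response(code, "000000"))   # anything else is rejected
```

The point of the sketch is the separation of channels: the code travels over a path the attacker presumably does not control, so a convincing deepfake on the video call alone is not enough.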
Immediate Actions: Verification Protocol in 30 Minutes
The most effective defense against deepfake CEO fraud is organizational, not technical. A verification protocol can be introduced in half an hour and costs nothing except discipline.
1. Callback obligation for financial transactions. Every payment instruction above a defined threshold (for example 5,000 euros) must be confirmed by a callback to a known, internally stored telephone number. Not the number from the email, not the number from the video call, but the number from the internal directory. This rule applies without exception, even if the managing director calls personally and signals time pressure.
2. Codeword System. An agreed codeword that never appears in any digital communication and was exchanged only orally, in person. If there are doubts about identity, the codeword is requested. Simple, analog, and deepfake-resistant.
3. Two-Person Release. No single person may release transfers above the threshold alone. The four-eyes principle is already standard in many companies. The deepfake extension: both approvers must verify the instruction independently, not in the same call or chat.
4. Awareness Training. Employees in finance and assistant functions must know that deepfakes exist and how convincing they can be. The Arup case is the strongest training material: an experienced finance professional in a multinational corporation fell for it. No one is immune. Regular refreshers are crucial, because the technology evolves faster than awareness of it.
5. Define Emergency Process. What happens if a fraud attempt is detected? Who is informed? How is the transfer stopped? The faster the reaction, the higher the chance of getting funds back. Banks usually have a short time window for reversals.
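The core of the protocol, steps 1 to 3, can be sketched as a simple release check. All names, numbers, and the 5,000 euro threshold are illustrative assumptions taken from the example above; this is a minimal sketch of the logic, not a production payment system.

```python
from dataclasses import dataclass, field
from typing import Optional

# Internally stored callback numbers (illustrative), per step 1:
# never the number from the email or the video call.
DIRECTORY = {"ceo": "+49 89 1234-0"}
THRESHOLD_EUR = 5_000  # example threshold from step 1

@dataclass
class PaymentRequest:
    requester: str
    amount_eur: float
    callback_number_used: Optional[str] = None  # number actually dialed back
    approvers: set = field(default_factory=set)  # independent releasers

def release_allowed(req: PaymentRequest) -> bool:
    """Apply the verification protocol: small amounts pass; above the
    threshold, require a callback to the stored directory number (step 1)
    and at least two independent approvers (step 3)."""
    if req.amount_eur < THRESHOLD_EUR:
        return True
    callback_ok = req.callback_number_used == DIRECTORY.get(req.requester)
    two_person_ok = len(req.approvers) >= 2
    return callback_ok and two_person_ok

# A large transfer confirmed via the directory number and two approvers passes;
# the same request verified against a number taken from the call does not.
ok = PaymentRequest("ceo", 25_000, "+49 89 1234-0", {"anna", "ben"})
spoofed = PaymentRequest("ceo", 25_000, "+1 555 0100", {"anna", "ben"})
print(release_allowed(ok), release_allowed(spoofed))
```

Encoding the rule in a system rather than in people's memory is itself part of the defense: the check cannot be talked out of its callback requirement, no matter how much time pressure the caller signals.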
Legal Situation: What Criminal Law Says and What Companies Can Do
Deepfake-based CEO fraud falls under Section 263 StGB (Fraud) and possibly under Section 263a StGB (Computer Fraud). Prosecution is difficult, however: The perpetrators usually operate internationally, the money flows are obscured via crypto wallets or accounts in third countries.
For affected companies, the civil law side is more relevant: who is liable for the damage? Usually the employer, unless gross negligence can be proven on the employee's part. This is exactly where the verification protocol becomes a shield: those who can document that processes existed and were followed are in a better position in a dispute than a company without any precautions.
Internationally, law enforcement agencies are increasingly cooperating. Europol has identified deepfake-supported fraud as a growing threat and coordinates cross-border investigations. In Germany, the central offices for cybercrime of the public prosecutor’s offices are responsible. The success rate for recovering funds is low, however, once the amounts have left the European banking system.
For the mid-sized sector, the most important legal insight is: prevention is the better approach not only economically but also in terms of liability. Managing directors who demonstrably have not taken appropriate protective measures can, under certain circumstances, be held personally liable. Introducing a documented verification protocol is thus not only a security measure but also a way of limiting managing director liability.
The BSI explicitly pointed out the deepfake threat for companies in its 2025 situation report and recommends organizational protection concepts in addition to technical measures. There is no reporting obligation for deepfake incidents so far, but companies that fall under NIS2 or the KRITIS umbrella law must report significant security incidents anyway.
Consciously Control Digital Presence
Managing directors face a dilemma: Visibility on LinkedIn and in media is good for business, but simultaneously supplies material for deepfakes. The solution is not invisibility, but more conscious handling of public audio and video content.
Concrete measures: publish LinkedIn videos preferably with text overlays and music rather than continuous speech. Do not archive podcast interviews on the company channel; link to the host's platform instead. Do not post conference recordings publicly on YouTube; put them behind a login area instead. These measures do not eliminate the risk, but they significantly raise the effort for attackers.
Conclusion
Deepfake CEO fraud is not a future scenario, and it is not a problem only for corporations. The technology is available, the costs for attackers are falling, the quality is rising. The mid-sized sector is an attractive target because of its flat hierarchies and informal decision paths.
The good news: the most effective countermeasures are inexpensive and can be implemented immediately. A callback protocol, a codeword system, and a consistent four-eyes principle for financial transactions cost nothing and protect more reliably than any software. Those who additionally invest in awareness training and check their cyber insurance for deepfake coverage have addressed the bulk of the risk.
Frequently Asked Questions
How do I recognize a deepfake video call?
Watch for unnatural lip synchronization, inconsistent lighting in the face, artifacts at the edges of hair and ears, and unusual delays. However, current deepfakes achieve a quality that makes visual detection unreliable. Do not rely on your eyes, but on verification processes.
How much material do attackers need for a voice deepfake?
Current voice cloning services need less than ten seconds of audio material for a convincing voice copy. LinkedIn videos, podcast appearances or YouTube interviews usually provide more than enough material.
Does my cyber insurance cover deepfake fraud?
That depends on the policy. Many cyber insurance policies have low sublimits for social engineering losses or explicitly exclude AI-supported attacks. Check the coverage scope with your insurance broker and negotiate an extension if necessary.
What is a verification protocol?
A defined process that ensures payment instructions and other sensitive transactions are confirmed via a second, independent channel. Typical elements are callback to an internally stored number, codeword query and two-person release.
Do I have to report a deepfake attack?
There is no specific reporting obligation for deepfake incidents. However, companies that fall under NIS2 or the KRITIS umbrella law must report significant security incidents. In case of financial damage, a criminal complaint to the BKA or the responsible cybercrime office is recommended.
Are small companies really targets of deepfake attacks?
Yes. The falling costs of deepfake creation make attacks on smaller targets economically worthwhile. In addition, SMEs often have fewer controls than corporations. The individual fraud amounts are smaller, but the success rate is higher.
Header Image Source: Diva Plavalaguna / Pexels
