31.03.2026

AI-Driven Bank Fraud on the Rise


Key Points: Nearly half of all fraud attempts in the financial sector are already AI-driven. Banks and financial service providers must adapt their defense strategies to keep pace with the rapid development of AI-driven attack methods.


Nearly half of all fraud attempts in the financial sector are already AI-driven, according to a new study. Banks must therefore come up with new ideas to better protect their customers’ accounts.

 

Fraudsters are increasingly targeting the customers of banks and fintechs, as an analysis by the International Monetary Fund (IMF) reveals. According to this analysis, one-fifth of all reported cyber incidents over the past two decades have affected the financial sector. The global losses incurred by financial institutions since 2020 have amounted to 2.5 billion US dollars, as reported in the Global Financial Stability Report.

 

AI-driven attacks on bank accounts have risen sharply. According to figures from the identity-verification provider Signicat, reported by t3n, such attacks now account for 42.5 percent of all detected fraud attempts in the financial and payment sectors. Approximately 29 percent of these attempts have reportedly been successful. The number of such fraud schemes has surged by 80 percent over the past three years.

 

Deepfakes are all the rage today

While identity theft was once at the top of the list, it no longer ranks among the top three fraud schemes in the financial sector, as the Signicat report further reveals. Today, fraudsters are more likely to attempt to hijack existing accounts rather than create new ones, targeting both personal and corporate accounts. They increasingly use AI-driven methods such as deepfakes, synthetic identities, and phishing campaigns. Deepfakes and social engineering are replacing “simple” document forgeries, according to the report.

 

There are differences from country to country. In Germany, ID forgery remains a significant problem for financial institutions, while deepfakes are much more common in Norway. So far, only about one-fifth of financial companies (22 percent) have implemented their own AI protection measures. Three-quarters of respondents plan to increase their budgets to combat AI-driven fraud.

 

Financial Institutions Urged to Take On More Responsibility

While security professionals agree that artificial intelligence (AI) is a significant driver of identity fraud and that more people than ever are falling victim to it, only about one in three is aware that AI is already being used to forge identification documents, create deepfake identities, or generate voice deepfakes.

 

Meanwhile, those in charge often rely on stronger passwords, even though these do not provide reliable protection against AI-driven identity fraud. The belief that personal interviews or customer interactions offer protection against misuse is also widespread. However, these methods are very resource-intensive and are therefore not suitable as security mechanisms for banks and fintechs in mass-market operations. Many financial institutions now recognize that AI poses a threat. More than three-quarters of them already have specialized teams dedicated to addressing AI-driven identity fraud.

 

Additionally, they are upgrading their technology and expect larger cyber-defense budgets in the future. Nearly a quarter have already taken concrete measures. According to a survey by the security service provider BioCatch, three-quarters of financial institutions are already using AI themselves to counter attacks. Some 87 percent confirmed that AI tools have improved their ability to respond to potential threats.

 

However, the institutions only partially see themselves as responsible for compensating customers for losses, often blaming customers for mishandling their access credentials. This is also reflected in a report by the European Banking Authority (EBA), according to which consumers still bear 79 percent of the losses they suffer themselves. Consumer advocates are therefore calling for improvements at the European level.

STATISTIC
42.5 percent
of all detected fraud attempts in the financial and payment sector are AI-driven
STATISTIC
29 percent
of these attempts have already been successful
STATISTIC
80 percent
increase in such fraud schemes over the past three years; deepfakes are particularly on the rise


Frequently Asked Questions

How do fraudsters use AI?

Deepfake videos for CEO fraud, AI-generated phishing emails, automated account takeovers through credential stuffing, and synthetic identities for credit fraud. The quality of these attacks is increasing dramatically.

How can banks protect themselves?

By using their own AI systems for anomaly detection, behavioral-based authentication, real-time transaction monitoring, and close collaboration within the banking sector to share threat information.
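The statistical anomaly detection mentioned above can be illustrated with a toy example: flag a transaction whose amount deviates strongly from an account's history. This is a minimal, hypothetical sketch for illustration only, not any bank's actual fraud model.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Txn:
    account: str
    amount: float

def is_anomalous(history: list[Txn], txn: Txn, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations away from the account's historical mean.
    Real systems combine many more signals (device, location,
    typing behavior); this only shows the basic principle."""
    amounts = [t.amount for t in history if t.account == txn.account]
    if len(amounts) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return txn.amount != mu
    return abs(txn.amount - mu) / sigma > threshold
```

For an account that usually moves around 50 euros per transaction, a sudden 5,000-euro transfer would be flagged, while a 51-euro payment would pass.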

Are private customers at risk?

Yes, especially from deepfake calls and high-quality phishing emails that are nearly indistinguishable from genuine bank communications. Two-factor authentication and a healthy skepticism towards unexpected contact attempts are the best defenses.

 

Source of title image: Pexels / Pixabay.


A magazine by evernine media GmbH