Criminals’ increasing adoption of artificial intelligence to power fraud schemes is leaving merchants and payment providers little choice but to adopt the technology to fight back.
Artificial intelligence has made the fraudster’s job easier. Not only has AI improved the quality and sophistication of his attacks, it’s now easier than ever for criminals to access the technology through marketplaces on the dark Web.
Hence, any wannabe criminal, even one with limited resources, can find advanced tools to beat identity-verification and fraud-detection systems—at prices lower than they were just a few years ago, fraud-prevention experts say.
Criminals can use AI to deploy bots for rapid card testing and credential-stuffing attacks, launch sophisticated phishing campaigns that capture login credentials or personal and financial data, and create synthetic identities so realistic they can fool identity-verification systems.
Indeed, AI has handed fraudsters such an edge that 84% of fraud-prevention professionals at financial institutions say the technology can beat their current fraud defenses, according to a recent study by Datos Insights. It has also helped create a criminal enterprise that represents nearly a $10-trillion economy, says Mastercard Inc.
As a result, the threat of fraud from the use of AI is extremely high. On a scale of 1 to 10, with 10 being the most serious, “I would rate it an 8,” says Sunny Thakkar, senior director of product management, fraud, disputes, and authentication at Worldpay Inc.
Indeed, thanks to criminals’ adoption of artificial intelligence, losses from cyberfraud rose 14% in 2024, according to a report from Trustpair, a provider of fraud-prevention technology.
“Cybercriminals’ increasing sophistication and use of AI is a growing concern, as highly sophisticated AI is available to the general market and continues to get more and more advanced,” Thakkar says. “We are monitoring this threat with extreme caution.”
‘A Double-Edged Sword’
While AI has given fraudsters an edge, that doesn’t mean processors, networks, and card issuers are defenseless against fraud. Rather than develop a new technology to combat AI-based fraud, the payments industry is recognizing that AI can be used to turn the tables on criminals and strengthen its own fraud defenses.
What makes AI an effective fraud-fighting tool is that it uses predictive analytics to detect emerging patterns and adapt to them in real time. AI can also aggregate and interpret multiple data attributes, such as device data, email address, IP address, and physical address, to detect red flags in a given transaction.
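To make that concrete, the sketch below shows, in deliberately simplified Python, how a handful of such attributes might be aggregated into a single risk score. It is an illustration only: all field names, weights, and thresholds are hypothetical, and a production system would learn its weights from data rather than hand-code them.

```python
# Minimal sketch of signal aggregation for transaction risk scoring.
# All field names, weights, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    device_id: str
    email: str
    ip_country: str
    billing_country: str
    account_age_days: int

def risk_score(txn: Transaction, seen_devices: set[str]) -> float:
    """Combine independent red-flag signals into a 0-to-1 risk score."""
    score = 0.0
    if txn.device_id not in seen_devices:       # unfamiliar device
        score += 0.25
    if txn.ip_country != txn.billing_country:   # geo mismatch
        score += 0.30
    if txn.email.split("@")[-1] in {"example-disposable.com"}:  # throwaway domain
        score += 0.20
    if txn.account_age_days < 7 and txn.amount > 500:  # new account, big ticket
        score += 0.25
    return min(score, 1.0)

txn = Transaction(900.0, "dev-42", "buyer@example-disposable.com",
                  "RO", "US", account_age_days=2)
print(risk_score(txn, seen_devices={"dev-7"}))  # 1.0 -> review or decline
```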
Using AI, merchants can increase approval rates by declining fewer legitimate transactions that would have been blocked by less sophisticated fraud-detection technologies, all while catching more fraudulent ones.
“Advancements in AI, coupled with robust data, are revolutionizing fraud prevention, making [AI] a double-edged sword,” Thakkar says. “AI is both an asset and a threat, and is one of the areas I continue to watch closely.”
But before processors, networks, and payments providers can use AI to beat fraud, they need to understand how the technology is being used to commit fraud in the first place, industry experts say.
AI’s ability to rapidly analyze vast fields of data makes it an effective tool for creating synthetic identities. These IDs, which can be used to open a credit card account or take out a loan, for example, are forged using valid consumer data stolen in a data breach—such as Social Security numbers—paired with personally identifiable information that is often false.
Synthetic identities can be so realistic that many card issuers don’t realize they are falling prey to them. Indeed, it is not uncommon for issuers to incorrectly assume a delinquent account opened with a synthetic identity went bad for reasons other than fraud.
“Financial institutions don’t do a great job of getting at the root source of the fraud to combat it,” says David Mattei, a strategic advisor at Datos Insights. “FIs will write off an account that goes bad as a loss without necessarily drilling down to see whether the cardholder’s identity is real.”
‘Fraud-as-a-Service’
Another growing use of AI for fraud is fraud-as-a-service, a cybercrime business model where a criminal provides the necessary tools and services to other criminals seeking to perpetrate fraud. FaaS opens the door for any individual to commit fraud, regardless of his expertise.
The problem for banks and payments companies is that attacks mounted by individuals or small groups tend to be smaller in scope and therefore harder to detect than attacks orchestrated by large fraud rings.
“FaaS doesn’t require specific fraud skills [of the user], which gives individuals and small groups the ability to perpetrate fraud more effectively,” says Ofer Friedman, chief business development officer for Au10tix, an Israel-based identity-verification firm. “FaaS is like autopilot. The user tells the provider what they want” and it gets served up.
Making matters worse, many FaaS providers actively advertise their services on the dark Web.
“Fraud-as-a-service is leveraging AI to scale and automate fraud, making sophisticated AI-driven fraud patterns accessible to a wider pool of criminals,” says Laura Quevedo, executive vice president, fraud and decisioning solutions, at Mastercard.
“AI tools can create synthetic identities and bypass traditional know-your-customer [practices] and infiltrate payment systems,” she adds. “These techniques facilitate automated payment fraud and advance money laundering.”
‘A Huge Problem’
The use of AI to facilitate the opening of money-laundering accounts is a growing problem for financial institutions. It is not uncommon for criminals to open a so-called mule account and use it to funnel money scammed from consumers through phishing attacks or social engineering.
“While banks typically didn’t worry about mule accounts on the payments side of the house because they didn’t lose money on a transaction, concern is growing on the money-receiving side of the bank about them,” says Mattei.
AI can help banks identify mule accounts by analyzing in real time transaction and accountholder behavior patterns, as well as other anomalies, Mattei adds.
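In miniature, that kind of analysis can look like the Python sketch below, which flags accounts that forward most of their inbound funds shortly after receipt, a classic mule pattern. The field names, ratio, and time window are assumptions for illustration, not any bank’s actual rules.

```python
# Illustrative pass-through check for spotting possible mule accounts.
# Thresholds and field names are hypothetical, not any bank's actual rules.
from datetime import datetime, timedelta

def looks_like_mule(events: list[dict],
                    window: timedelta = timedelta(hours=24),
                    passthrough_ratio: float = 0.9) -> bool:
    """Flag an account that forwards most inbound funds within `window`."""
    inflow = sum(e["amount"] for e in events if e["dir"] == "in")
    if inflow == 0:
        return False
    first_in = min(e["ts"] for e in events if e["dir"] == "in")
    outflow = sum(e["amount"] for e in events
                  if e["dir"] == "out" and e["ts"] - first_in <= window)
    return outflow / inflow >= passthrough_ratio

events = [
    {"dir": "in",  "amount": 4_800.0, "ts": datetime(2025, 3, 1, 9, 0)},
    {"dir": "out", "amount": 4_500.0, "ts": datetime(2025, 3, 1, 11, 30)},
]
print(looks_like_mule(events))  # True -> queue the account for review
```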
But AI-based tools aren’t being used only to perpetrate consumer fraud. Criminals are using them in the business-to-business payments space, too, where transaction sizes are substantially larger. In 2024, AI was involved in 25% of the cases in which a business lost $10 million or more to fraud, according to Trustpair.
One attack vector in the B2B space is for criminals to take over a vendor’s email account using AI-based tools, then notify buyers in the vendor’s email address book that the account number to which payments are made has been changed. The notice, of course, adds that buyers should send future payments to the new account, which the criminal controls.
With AI, criminals can not only gain access to email accounts but also review all correspondence in those accounts to identify which ones to target, such as those with an invoice due soon.
“Generative AI can make illegitimate emails difficult to identify, especially for smaller companies that have limited technology resources,” says Baptiste Collot, chief executive at Trustpair. “A lot of businesses use manual processes for validation, and that is becoming a huge problem with the use of AI by criminals.”
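The kind of automated check that replaces those manual processes can be sketched simply: before releasing a payment, compare the account on the invoice against an independently verified record. The Python below is a toy version; the registry, vendor name, and IBANs are invented for illustration.

```python
# Toy guard against vendor bank-detail swaps in B2B payments.
# The verified registry, vendor name, and IBANs are hypothetical.
verified_accounts = {"ACME Supplies": "DE89370400440532013000"}

def safe_to_pay(vendor: str, iban_on_invoice: str) -> bool:
    """Pay only if the invoice's account matches the independently verified one."""
    return verified_accounts.get(vendor) == iban_on_invoice

print(safe_to_pay("ACME Supplies", "DE89370400440532013000"))  # True
print(safe_to_pay("ACME Supplies", "GB29NWBK60161331926819"))  # False -> hold payment, re-verify out of band
```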
Mastercard’s Approach…
Not surprisingly, larger players such as Mastercard and Visa Inc. are actively getting out in front of the fraud threat posed by AI.
Mastercard’s real-time decisioning solution, known as Decision Intelligence, uses generative AI to scan 1 trillion data points to predict whether a transaction is likely to be genuine. The operation takes less than 50 milliseconds.
The app, which scores and approves 159 billion transactions a year, can boost fraud-detection rates by an average of 20%, and as much as 300% in some instances, Mastercard says.
Other AI-based fraud-fighting solutions used by Mastercard include the network’s Safety Net technology, which protects against fraud and cyberattacks. The technology has prevented nearly $50 billion in potential customer losses from attempted fraud and cybercrime attacks across Mastercard’s network over the past three years.
Mastercard also employs identity solutions to spot synthetic identities being used to open accounts, and it shares fraud and disputes data to speed up the resolution process for merchants, issuers, and consumers.
“We have long recognized the importance of delivering technology to stay ahead of criminal activities to protect banks, businesses, and consumers worldwide,” Quevedo says. “This technology helps us know where the fraudsters are operating so we can act swiftly to help protect banks and consumers.”
Mastercard further bolstered its AI-based fraud-prevention capabilities through the acquisition last year of Recorded Future Inc., which uses AI to analyze broad data sets and deliver real-time insights that help mitigate risk.
“Mastercard’s acquisition of Recorded Future will further enhance our threat intelligence, enabling us to create smarter models to better protect organizations, consumers, and the ecosystem as a whole,” says Quevedo.
…And That of Others
Like Mastercard, Visa has deployed an array of AI-based fraud-detection tools. These include cardholder-authentication apps, support for token-provisioning requests, fraud detection for account-to-account payments for financial institutions, and transaction scoring to determine whether a payment is part of an enumeration attack.
An enumeration attack occurs when criminals try to gather information about a system by testing various inputs of consumer data to identify valid usernames, email addresses, or other data points within a database. Visa did not make executives available for comment on these tools.
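Enumeration traffic has a recognizable statistical fingerprint: many distinct card numbers attempted from a single source in quick succession, most of them declined. The Python sketch below captures that fingerprint in its simplest form; the thresholds and field names are assumptions, and network-level systems such as Visa’s weigh far more signals.

```python
# Toy velocity check for enumeration (card-testing) traffic.
# All thresholds and field names are hypothetical.
from collections import defaultdict

def flag_enumeration(attempts: list[dict],
                     min_distinct_pans: int = 10,
                     min_decline_rate: float = 0.8) -> set[str]:
    """Return source IPs probing many distinct card numbers, mostly declined."""
    pans = defaultdict(set)
    declines = defaultdict(list)
    for a in attempts:
        pans[a["ip"]].add(a["pan"])
        declines[a["ip"]].append(not a["approved"])
    return {
        ip for ip in pans
        if len(pans[ip]) >= min_distinct_pans
        and sum(declines[ip]) / len(declines[ip]) >= min_decline_rate
    }

# One IP cycling through 12 card numbers, all declined -> flagged.
attempts = [{"ip": "203.0.113.9", "pan": f"411111111111{i:04d}", "approved": False}
            for i in range(12)]
print(flag_enumeration(attempts))  # {'203.0.113.9'}
```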
Payments providers like Worldpay are also investing heavily in AI. Worldpay’s FraudSight is a multi-layered solution that combines data insights and technology with a team of fraud experts and data scientists to predict the types of transactions at high risk for fraud.
Acquisitions, too, are helping Worldpay build out its fraud-fighting technology stack. Earlier this year, the payments provider entered into an agreement to acquire Ravelin Technology Ltd., a provider of fraud-prevention solutions for e-commerce merchants. Terms were not disclosed.
Ravelin specializes in identifying payments fraud—account takeover, return and refund abuse, promotion and voucher abuse, and marketplace fraud—and in 3D Secure authentication. The acquisition will enable Worldpay merchants to increase authorizations, the company says.
“In e-commerce, the cost of customer acquisition is higher than ever, yet merchants face lower overall approval rates as transactions are susceptible to more fraud and higher issuer declines,” says Worldpay’s Thakkar. “Given the need for higher approval rates to maximize retention of revenue, merchants are applying continued pressure for payment providers to maximize authorization rates.”
E-commerce merchants’ need to effectively combat fraud has made AI a “survival” technology for payment providers, according to Thakkar.
“Nearly all the top payment providers are applying AI in their business, with AI-based fraud detection being a critical component in merchants’ achieving higher approval rates,” Thakkar adds.
Vetting Vendors
When choosing an AI-based fraud-detection provider, it is important to dig beneath the sales pitch and get a demonstration of the technology, as well as to talk to other users about the effectiveness of the vendor’s solution and whether it is meeting their needs and objectives, says Datos Insights’ Mattei.
Au10tix’s Friedman also recommends working with established AI vendors. “Go with known players in the space that can run a proof of concept,” he says.
Vetting vendors is just one step in the process of using AI to fight fraud. Because AI is a data-driven solution, the exchange of data, internally and industrywide, is viewed as critical to making sure AI-based fraud-prevention tools keep pace with cybercriminals’ changing use of the technology.
“The more data available, the higher the accuracy in the risk decision. That’s one reason collaboration within the payments industry and building a strong ecosystem are important best practices,” says Thakkar. “We’re starting to see a lot more partnerships across the ecosystem to create more robust frameworks for data sharing to improve outcomes for every payment.”
Working with issuers such as Capital One, Worldpay has developed an application programming interface that securely packages information so that when the authorization reaches the issuer, the issuer can see the data linked to a transaction. Such information helps improve the issuer’s confidence in approving the transaction, Thakkar adds.
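The article does not describe that interface’s actual format, but the general pattern is to attach merchant-side risk context to the authorization message. The sketch below shows a hypothetical payload shape only; the field names are invented and do not represent Worldpay’s or Capital One’s real API.

```python
# Hypothetical payload shape for sharing merchant-side risk context with an
# issuer alongside an authorization. Every field name here is invented for
# illustration; none describes Worldpay's or Capital One's actual interface.
import json

auth_context = {
    "transaction_id": "txn_0001",         # acquirer's reference
    "merchant_risk_score": 0.12,          # merchant/acquirer model output
    "device_match": True,                 # device seen on prior good orders
    "customer_tenure_days": 812,          # age of the customer relationship
    "shipping_matches_billing": True,
}
print(json.dumps(auth_context, indent=2))  # packaged with the authorization
```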
Lastly, payment providers need to set realistic goals with an eye for the long-term when developing an AI-based fraud-prevention strategy. “Start small, but think long-term and be prepared to grow with change,” says Trustpair’s Collot. “Once you get through the initial steps, you can start to think bigger. Too many think they will find a long-term answer [to fighting fraud], but no solution is good forever.”
To stay ahead of evolving threats, he adds, “You need to be constantly looking ahead.”