Fraud attacks initiated by automated bots using artificially generated images, also known as deepfakes, are becoming a major problem in the payments industry, according to a report Wednesday from AU10TIX, a provider of identity-verification and risk-management technology.
Overall, 32% of all Internet traffic during the second quarter was driven by bots, the report says. Of the bot-driven fraud attacks, 52% targeted the payments industry.
While the number of attacks against the payments industry in the second quarter decreased 17% from the prior quarter, the industry remains by far criminals’ most-favored target. The cryptocurrency industry sustained the second-highest share, accounting for 29% of all attacks during the quarter, followed by social-media platforms at 16%, up from 3% the prior quarter.
So-called impersonation bots are apps programmed to mimic real human identities and behaviors to create highly convincing fake profiles. One reason criminals are gravitating to the technology is its level of maturity, according to Ofer Friedman, chief business development officer for AU10TIX.
“The rise in usage of impersonation bots can only be accounted for by the maturity of technology producing and distributing it,” Friedman says by email. “[Since] at least a year back we’ve been seeing a rapid growth in industrialized AI impersonation. The burst-like growth can be accounted to fraudsters’ success in implementing randomization into GAI ID (generative artificial intelligence) documents and biometrics. This enabled the mass-production of potentially never-repeating documents and biometrics.”
One region of the world that has become particularly vulnerable to fraud attacks driven by impersonation bots is Asia-Pacific, where fraud rates increased 24% between 2022 and 2023. The region has the world’s highest fraud rate, with 3.27% of all transactions being fraudulent, the report says.
Helping drive fraud in the Asia-Pacific region is the emergence of fraud-as-a-service (FaaS), in which criminals sell their tools, services, and expertise on the dark Web to carry out fraud on behalf of paying clients.
“Fraud as a Service [has been] out there for a good couple of years now,” Friedman says in his email. “Now FaaS is much more developed and diversified, also offering [personal identifiable information] sourcing, phishing services, etc. In fact, FaaS is closely related to cybercrime activities, and it too features marketplaces, subscription services, even customer support.”
PII sourcing is the use of any personal information to distinguish or trace an individual’s identity. Such information can include name, Social Security Number, date and place of birth, mother’s maiden name, or biometric records.
In addition to the increasing use of automated bots, fraudsters are moving away from account-takeover attacks toward authorized fraud attacks, which trick consumers into authorizing a payment to a criminal, according to NICE Actimize’s 2024 Fraud Insights Report.
While attempted fraud increased 6% in value in 2023 compared to the previous year, the volume of attacks decreased 26%, according to the report, which also notes a movement away from peer-to-peer (P2P) payments.
“This change reflects the shift towards payment types and fraud typologies traditionally higher in volume and lower in value, as well as improvements in detection and prevention—especially in P2P,” the report says.
For other payment types, such as wire transfers, criminals were more likely to use authorized fraud tactics, such as investment fraud, to conduct scams. These attacks rely on duping victims into giving the OK for transfers.
“Scams now make up a larger share of fraud than unauthorized fraud, and it’s an increasing trend,” the report says. “While in many cases there are high-volume, low-value scams, such as purchase scams, there are also much higher value authorized frauds such as investment fraud or business email compromise that impact authorized fraud’s overall fraud mix.”