To beat AI-empowered fraudsters, credit-card providers may need to share data
Source: Business Times
Article Date: 29 Dec 2023
Author: Yong Jun Yuan
Fraudsters are getting more sophisticated in the way they commit fraud, and a lack of data remains one of the big challenges of fraud detection.
Artificial intelligence (AI) models are helping credit-card providers detect and block more fraudulent transactions than ever, but the fraudsters are using AI too. Industry players suggest some sharing of once-closely-guarded data may be necessary to stay ahead in fraud detection.
Gross card fraud per US$100 fell to US$0.066 in 2021, from US$0.072 in 2016, according to the Nilson Report, which publishes data on the global card and mobile payment industry.
Part of the reason is cleverer AI-driven models. At American Express, for example, decision science executive vice-president Chao Yuan said AI models are trained using hundreds of factors.
These include the devices used to make a transaction, the location of the transaction and the transaction amount.
“When the signal is strong enough, we will stop the charge,” he said.
The mathematical formulae used to build the AI are also better. Visa’s Asia-Pacific head of risk, Joe Cunningham, said many models may have their roots in mathematical breakthroughs made in the 1950s and 1960s.
“They may have been too slow or too process-intensive in order to work,” he said.
Today, the maths behind the models has improved. Processing power is also “effectively free”, he said, making AI models much more effective.
These two factors have allowed the company to bring petabytes of historical transaction data to bear.
“Not only the transaction history, but also every single historical fraudulent transaction is known to us; so we can identify patterns in that, as well as patterns in your transaction data,” he said.
These same factors can, however, be brought to bear by bad actors too, Cunningham noted.
Instead of hacking into the stores of credentials held by payment facilitators or e-commerce sites, they are using cheap processing power to guess at legitimate 16-digit numbers, expiry dates and CVV numbers.
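One reason cheap compute helps here: card numbers end in a public Luhn check digit, so an attacker can pre-filter random guesses locally before ever testing them against an issuer. The sketch below is illustrative only and is not drawn from the article; the Luhn checksum itself is the standard check digit used on payment card numbers.

```python
# Illustrative sketch, not from the article: card numbers end in a public
# Luhn check digit, so cheap compute lets an attacker pre-filter random
# 16-digit guesses locally before testing any against a payment system.
import random

def luhn_valid(pan: str) -> bool:
    """True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in pan]
    # Double every second digit from the right, subtracting 9 when the
    # doubled value exceeds 9, then check the total modulo 10.
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10 == 0

# Exactly one of the ten possible final digits satisfies the checksum,
# so Luhn pre-filtering shrinks the guessing space tenfold for free.
random.seed(0)
guesses = ["".join(random.choice("0123456789") for _ in range(16))
           for _ in range(10_000)]
hit_rate = sum(luhn_valid(g) for g in guesses) / len(guesses)
print(f"{hit_rate:.1%} of random guesses pass Luhn")  # roughly 10%
```

The checksum only filters malformed numbers; expiry dates and CVVs must still be guessed against live systems, which is why issuers rate-limit and monitor such probing.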
National University of Singapore (NUS) professor Hahn Jungpil noted that fraudsters are also getting more sophisticated in the way that they commit fraud.
His own credit card was compromised in November and seven unauthorised online transactions of S$90 were made.
“That’s probably the most common way to stay under the radar: make transactions (and) hope nobody finds out,” said Prof Hahn, who is also deputy director of AI governance at research institute AI Singapore.
Many do not go through their credit card statements rigorously every month, he added. This allows such fraud to go undetected, and affects the data used in the AI models.
As it is, the lack of data is one of the big challenges of fraud detection.
Cindy Deng, associate professor of finance (practice) at Nanyang Technological University, said it may be difficult to further improve the accuracy of models because the number of fraudulent transactions is dwarfed by the number of legitimate ones.
“If the (dataset) is too small, you wouldn’t be able to have a very accurate model,” she said.
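This imbalance problem can be shown with a toy example (the figures below are hypothetical, not from the article): when fraud is rare, raw accuracy says almost nothing about how well a model actually detects it.

```python
# Toy illustration with invented numbers: when fraud is rare, raw
# accuracy is a misleading measure of a fraud-detection model.
n_legit, n_fraud = 99_900, 100          # hypothetical 0.1% fraud rate
labels = [0] * n_legit + [1] * n_fraud  # 1 marks a fraudulent transaction

# A useless "model" that simply calls every transaction legitimate...
predictions = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and y == 1
             for p, y in zip(predictions, labels)) / n_fraud

print(f"accuracy = {accuracy:.1%}")     # 99.9% -- looks excellent
print(f"fraud caught = {recall:.1%}")   # 0.0% -- catches nothing
```

With so few fraud examples to learn from, a model needs far more positive cases, which is one reason pooled data would help.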
A better dataset might also require data beyond a single credit card’s transactions.
“You probably have several cards and several different payment methods, such as GrabPay, DBS PayLah… Only by integrating all (these) do you get a full picture of transactions,” said NUS’ Prof Hahn.
In a utopian world, Prof Hahn said, companies would get together to create a massive dataset to be used for training AI models on fraud.
Compliance and data protection issues make sharing of data across companies difficult, though. Companies may also consider large, accurate datasets a competitive advantage.
Despite these challenges, he said research is being done to preserve the statistical properties of encrypted data so that it can be shared safely.
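The article does not name a specific method, but additive secret sharing is one simple example of the general idea: each institution's values are split into random-looking shares that individually reveal nothing, yet aggregate statistics can still be recovered when the shares are combined. The sketch below is illustrative only, with hypothetical figures.

```python
# Illustrative only: the article does not specify a technique. Additive
# secret sharing is one simple way parties can pool statistics: each value
# is split into random shares that look like noise on their own but sum
# back to the true figure, so aggregates survive while records stay hidden.
import random

random.seed(1)
MOD = 1 << 32  # all arithmetic is done modulo a fixed modulus

def split(value: int) -> tuple[int, int]:
    """Split value into two shares that are individually uniform-random."""
    share_a = random.randrange(MOD)
    share_b = (value - share_a) % MOD
    return share_a, share_b

fraud_amounts = [90, 90, 90, 250, 40]   # hypothetical flagged charges
shares = [split(v) for v in fraud_amounts]

# Each party sums only its own shares; combining the two partial sums
# recovers the true total without either side seeing raw values.
total_a = sum(a for a, _ in shares) % MOD
total_b = sum(b for _, b in shares) % MOD
recovered = (total_a + total_b) % MOD
print(recovered)  # equals sum(fraud_amounts)
```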
BCG X partner and vice-president of data science Bharath Vasudevan said the sharing of data across institutions and borders will significantly strengthen fraud models. “This would include not just actual fraud events, but near misses that can serve as early warning signals,” he said.
“AI models used in fraud detection cannot be static because it is an ongoing cat-and-mouse game, with bad actors who are also using AI and the latest technologies to build their own capabilities.”
Source: Business Times © SPH Media Limited. Permission required for reproduction.