ChatGPT has brought artificial intelligence into the mainstream, leading to predictions of everything from the end of work to the end of the world.
While these predictions make for great headlines, the reality is that financial-services providers already use AI for things like chatbots and fraud detection. But as the technology advances, executives will be asked to make decisions about how to use AI throughout their businesses.
Since financial-services executives aren’t technologists, they need to be prepared to ask good questions as they consider how to use AI. Here are five areas of inquiry for internal teams to pursue as AI enters its next stage.
The first is, what is the technology being offered, and how does it work? ChatGPT is just one kind of artificial intelligence: a generative tool that uses large datasets to predict the most likely response to a prompt. Machine learning is another form of AI, one that uses algorithms to find patterns in data.
Understanding what kind of AI is being used, what data it needs, and what kinds of outputs it can deliver can help determine whether AI is a good fit for a particular company. The limitations of any prospective AI tool also need to be considered. Does the tool hallucinate, delivering false or even fabricated results?
A related question is whether AI is necessary. Does the company already have a statistical tool that does the same thing outside of a black box?
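To make that question concrete, here is a minimal, purely illustrative sketch in Python, with invented column names and random data, of the kind of conventional, interpretable statistical tool a company may already have: a logistic regression over transaction features whose coefficients can be read and explained one by one, unlike many black-box models.

```python
# Illustrative only: a conventional statistical model for flagging risky
# transactions. Column names, data, and labels are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
transactions = pd.DataFrame({
    "amount_usd": rng.gamma(2.0, 50.0, size=500),     # transaction size
    "hour_of_day": rng.integers(0, 24, size=500),      # when it occurred
    "is_cross_border": rng.integers(0, 2, size=500),   # 1 = cross-border
})
# Hypothetical labels: past transactions already marked as fraud or not.
labels = rng.integers(0, 2, size=500)

model = LogisticRegression().fit(transactions, labels)

# Unlike a black box, the fitted coefficients can be inspected and explained,
# feature by feature, to auditors, examiners, or customers.
for feature, coef in zip(transactions.columns, model.coef_[0]):
    print(f"{feature}: {coef:+.3f}")
```

If a tool like this already answers the business question, the added complexity of a generative or deep-learning system may not be worth it.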
The second area of inquiry is, what does an organization need to do to use a tool? Is its data ready? Financial-services providers are often awash in data, but having a lot of data doesn’t necessarily mean a company is ready to use AI. The data needs to be in a form that can be parsed by the tool.
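What “ready” means varies by tool, but as a rough, hypothetical illustration, raw records with inconsistent date formats, currency strings, and duplicate merchant spellings usually have to be normalized into consistent types before any model can consume them. The field names and values below are invented.

```python
# A minimal sketch of the kind of preparation "ready" data requires.
# Real pipelines are far larger; this only shows the idea.
import pandas as pd

raw = pd.DataFrame({
    "posted": ["2023-01-05", "01/06/2023", None],        # inconsistent date formats
    "amount": ["$120.00", "87.5", "N/A"],                 # currency symbols, missing values
    "merchant": ["ACME CORP", "acme corp", " Acme Corp "] # duplicate spellings
})

clean = pd.DataFrame({
    # Parse dates element by element so mixed formats are handled; failures become NaT.
    "posted": raw["posted"].apply(lambda d: pd.to_datetime(d, errors="coerce")),
    # Strip currency symbols and coerce to numbers; "N/A" becomes NaN.
    "amount": pd.to_numeric(raw["amount"].str.replace(r"[$,]", "", regex=True),
                            errors="coerce"),
    # Normalize merchant names so duplicates collapse to one spelling.
    "merchant": raw["merchant"].str.strip().str.title(),
})
print(clean.dtypes)  # consistent types a model or AI tool can actually consume
```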
Additionally, companies need to know how that data is managed, stored, and even used to create other products and services. The protection of data is a critical question going forward, which leads us to the third area of inquiry: what does a tool mean for privacy?
Because AI is trained on large data sets, researchers have found ways to pull data like names, phone numbers, and e-mail addresses out of generative AI tools. Once financial-services providers put customer data into AI training sets, they need to ensure that the data will not leak, either publicly or to competitors, especially if it is being fed to a third party that works with multiple banks.
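One common precaution, sketched below with hypothetical field names and deliberately simple regular expressions, is to scrub or pseudonymize obvious identifiers before records ever reach a training set or a vendor. A production program would go well beyond pattern matching like this.

```python
# Illustrative sketch: remove obvious PII before sharing customer records.
# Patterns and examples are hypothetical; real redaction needs much more.
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def pseudonymize(account_id: str) -> str:
    """Replace a real account number with a stable token.
    (A salted hash or a tokenization service would be stronger in practice.)"""
    return hashlib.sha256(account_id.encode()).hexdigest()[:12]

note = "Customer jane.doe@example.com called from 555-123-4567 about her card."
print(scrub(note))
print(pseudonymize("4455"))
```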
This leads to the fourth area: what does artificial intelligence mean for fraud vulnerabilities and prevention? AI tools have been used to spoof people’s voices to socially engineer fraud victims, and video is not far behind. In a presentation at the IPA’s annual conference, Adwait Joshi, the founder of DataSeers, a company that applies AI to banking, said he sees a coming arms race as criminals and companies work to develop AI tools to attack, and defend against, each other.
The fifth area, which will be informed by all the other areas, is regulatory compliance. Federal banking regulators said in May last year that companies cannot hide behind algorithms to justify credit decisions. They must provide “specific and accurate explanations.” This is why providers need to answer the preceding questions about what their AI tools do. We can expect more laws and regulations concerning privacy, fraud liability, and financial inclusion.
The next stage of AI in financial services has begun. The successful companies will not be the ones starting out with the most answers. They’ll be the ones that ask the best questions.
—Ben Jackson bjackson@ipa.org