Just before his early death in 1957, John von Neumann designed a self-reproducing automaton. One of the brightest mathematicians of the 20th century, von Neumann realized the implications of his design and attempted to hide it from the public. A self-reproducing entity can spawn “children” (copies) marked by random mutations that render some of them more advanced than others.
By allowing the more advanced automata (and not the others) to reproduce themselves (with more random mutations), and then repeating this process generation after generation, we create capabilities not even imagined otherwise. As Darwin taught us, that is how amoebas evolved into human beings. Von Neumann taught us how to unleash this sequence in cyberspace, and then watch with awe what comes forth.
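To make this select-and-mutate loop concrete, here is a minimal Python sketch of the cycle just described. The bit-string “genomes,” the toy fitness function, and the 5% mutation rate are my illustrative assumptions, not anything drawn from von Neumann’s actual design.

```python
import random

def fitness(genome):
    # Toy objective: an "automaton" is fitter the more 1s it carries.
    # Any real objective could stand in here; this one is illustrative only.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each bit flips with small probability -- the "random mutation."
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Start with a population of 50 random 32-bit "automata."
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(100):
    # Selection: only the fitter half gets to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Reproduction with mutation: each survivor spawns a mutated child.
    population = survivors + [mutate(parent) for parent in survivors]

print(max(fitness(g) for g in population))  # climbs toward 32 over the generations
```

Nothing in the loop “designs” a better automaton; selection plus random mutation does the work, which is exactly the point.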
Indeed, the fear of AI is well-founded. But it is too late to put this genie back into the bottle. As far as payments is concerned, AI is both an enemy and a friend. AI wreaks havoc on identity-management procedures, which are the backbone of online payments. Voices and physical likenesses are convincingly AI-generated and defeat even the most sophisticated payments security.
On the other hand, AI gives us more-advanced tools with which to fight credible AI-generated fakes. We are now in an AI-dominated battle. And as payments becomes more automatic, faster, global, and fine-tuned, security challenges are rising, too. A big boost to security will come from shifting to digital-coin payments, where the identities of the payor and the payee may remain unknown. But even these solutions will not prevent an AI-generated look-alike from claiming the benefits earned by a payment, defrauding the actual payor.
I have written in this column before that AI already scans our spending habits as reflected in our credit card accounts. The AI mechanism, which creates ever-smarter generations of artificial “thinkers,” is becoming ever more sophisticated in its ability to draw behavioral and character conclusions from the list of payments any of us makes over a few months. The way AI reaches its conclusions is beyond us humans. So, if AI decides that either payor or payee is “suspicious,” and the transaction is stopped, then humans cannot argue against it. Lots of injustice is in store.
The challenge of setting guardrails for AI has been much debated but barely resolved. Still, we can’t drop this challenge. We build nuclear plants with guardrails because we don’t want a nuclear eruption. It’s the same for AI: it can erupt!
In coming columns, I will share some guardrail ideas. But right here I will put forth what I consider the most critical principle: don’t deploy AI and then become so dependent on it that you cannot shut it down if it goes wild. We must ensure there is a plan B to stand in for an AI capability that has been disconnected because, for some reason we may not understand, it went too far. This is a principle that lends itself to regulation, and it is a solid, common-sense initiative.
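Here is one minimal Python sketch of what keeping a plan B behind a kill switch might look like. The flag name, the rule threshold, and the ai_model object are hypothetical placeholders of mine, not a reference to any real system.

```python
AI_ENABLED = True  # the "kill switch": in practice an operations-controlled flag

def rule_based_screen(transaction):
    # Plan B: a deliberately simple, human-auditable rule that can
    # carry the load if the AI component is switched off.
    return "review" if transaction["amount"] > 10_000 else "approve"

def screen_transaction(transaction, ai_model=None):
    # ai_model is a hypothetical stand-in for whatever AI service is
    # deployed; the design point is that the flow must survive without it.
    if AI_ENABLED and ai_model is not None:
        try:
            return ai_model.assess(transaction)
        except Exception:
            pass  # an AI failure should degrade service, not halt payments
    return rule_based_screen(transaction)

print(screen_transaction({"amount": 4_000}))  # with no AI attached: plan B says "approve"
```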
Remember that the hardware and software that constitute the AI operation can themselves be hacked, corrupting the AI-generated conclusions we so blindly rely on. This risk gives rise to the dual-AI principle: for critical situations, let two independent AI entities chew on the same data, then compare their conclusions.
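A minimal Python sketch of how such a dual-AI comparison might work follows. The Verdict structure, the ToyModel stand-ins, and the agreement tolerance are my illustrative assumptions; real deployments would use two genuinely independent models sharing no code, data, or hosting.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    suspicious: bool
    score: float  # 0.0 (benign) .. 1.0 (certain fraud)

class ToyModel:
    """Hypothetical stand-in for an independently built AI fraud model."""
    def __init__(self, threshold):
        self.threshold = threshold

    def assess(self, transaction) -> Verdict:
        # Placeholder scoring: a real model would weigh many signals.
        score = min(transaction["amount"] / 50_000, 1.0)
        return Verdict(suspicious=score > self.threshold, score=score)

def dual_ai_check(transaction, model_a, model_b, tolerance=0.2):
    """Act only when two independent models agree; escalate otherwise."""
    v1, v2 = model_a.assess(transaction), model_b.assess(transaction)
    if v1.suspicious == v2.suspicious and abs(v1.score - v2.score) <= tolerance:
        return "block" if v1.suspicious else "approve"
    # Disagreement is itself a signal: hand the case to a human reviewer
    # instead of blindly trusting either machine.
    return "escalate"

print(dual_ai_check({"amount": 1_200}, ToyModel(0.8), ToyModel(0.7)))  # "approve"
```

The design point is that disagreement routes the decision to a human rather than to either machine.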
Payments in the age of AI will be a trying challenge. Many people I talk to have a totally unrealistic sense of what AI is. One banker told me he saw a three-minute video about it on TikTok. That is why I opened this column with a very brief introduction to what this beast is. More to come.
—Gideon Samid gideon@bitmint.com