Saturday, November 16, 2024

Security Notes: AI, Mis-Profiling, And a Call to Action

Artificial intelligence will affect payments profoundly, probably far more than we can tell today. What we can already foresee is part helpful, part alarming. I will dedicate this column to a particular threat: mis-profiling.

Google’s AlphaZero taught itself to play chess by playing against itself for a few hours. It then beat every computer program that had been fed centuries of human chess wisdom, a stunning milestone. Unlike non-AI computing, AI inference is not comprehensible to humans. A human can mentally weigh only a handful of factors at any given moment to reach a balanced decision. By contrast, AI can weigh myriad details, each of no great significance on its own, and infer a decision that accounts for all of them.

People are impressed by this and learn to trust the machine. After all, AI sees patterns and order where people see chaos and randomness. If a privacy-violating central bank digital currency, for example, becomes the way payments are conducted, every one of us will become fodder for AI analytics. That process will reveal subconscious personal goals, passions, proclivities, and desires that neither those closest to us, nor we ourselves, are aware of.

By comparing each of us to all of us, AI profiles the entire payment-using population. We expose almost everything about ourselves through the accumulated record of our payment behavior, day and night, in person and remotely. And AI can, and will, unveil its conclusions to governments.

Even if these profiles are made known to their subjects, there will be no way to contest them the way we contest credit scores today. This lack of recourse reflects human ignorance of how AI draws its conclusions. And while AI follows probability calculus and is generally accurate, it is randomly wrong, sometimes very wrong. Countless innocent victims will be denied a loan, a job, or a school admission on account of what AI said about them.

Let’s say you buy a book on the history of political assassinations, and a month later you buy a long-range hunting rifle. You may then fit a pattern that sends the police knocking at your door. Aware of this risk, people will consciously avoid buying items that may lead to logical, but inaccurate, inferences. It is a nightmare out of 1984.

There have been numerous attempts to regulate AI at the inference level. These attempts are mostly articulated by politicians and activists without a sufficient technical foundation. A competent coder can plant stealth algorithmic components in the software that bypass top-level regulatory restrictions.

Instead, the best way to tame AI is to control the data it digests. There is a sophisticated way to do this through data contamination, but in this short column I will focus on another tactic, data denial. We have the technology to effect payments on a solid cryptographic foundation that will keep the identities of payors and payees secret from each other, from the bank, and from the government. This technology (one example is BitMint) keeps payment behavior unexposed while offering powerful tools to prevent money laundering.
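To make the idea of data denial concrete, here is a minimal, illustrative sketch in Python of one well-known technique in this family, a Chaum-style blind signature: the bank certifies a payment token without ever seeing its serial number, so the token cannot be linked back to the payer when it is later spent. This is a toy with deliberately tiny numbers, offered as an assumption-laden illustration of the general principle; it is not BitMint's protocol and not production cryptography.

```python
# Illustrative only: a toy Chaum-style blind-signature payment token.
# NOT BitMint's protocol and NOT production crypto (tiny key, no padding).
import hashlib
import secrets
from math import gcd

# Toy RSA key pair standing in for the "mint" (bank). Real keys are >= 2048 bits.
p, q = 61, 53            # demo primes
n, e = p * q, 17         # public modulus and verification exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private signing exponent

def token_digest(serial: bytes) -> int:
    """Map a token serial number to an integer mod n."""
    return int.from_bytes(hashlib.sha256(serial).digest(), "big") % n

# 1. Payer picks a secret serial and blinds it, so the bank never sees it.
serial = secrets.token_bytes(16)
m = token_digest(serial)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. Bank signs the blinded value; it learns nothing about `serial`.
blind_sig = pow(blinded, d, n)

# 3. Payer unblinds; the result is a valid signature on the hidden serial.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any payee (or the bank, at redemption) can verify the token
#    without linking it to the withdrawal that produced it.
assert pow(sig, e, n) == m
print("token verifies; the signer never saw the serial")
```

The point of the sketch is the division of knowledge: the issuer can attest that value is genuine while remaining blind to who holds it, which is the essence of keeping payment behavior unexposed.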

Alas, central banks lean toward digital money capable of full payment exposure, promoted with shaky assurances of privacy. Central banks have messaging power like no other, including in the United States. Once a digital-coin framework is put in place, it will be too late to uproot it.

This monthly column cannot do much. Yet, maybe these words will stir an enterprising reader to chart a course to preserve that staple of the American way of life—unmonitored payments.

—Gideon Samid gideon@bitmint.com
