The New Checkout Cashier That Doesn’t Care If You Starve
There is an effort to use a simple AI at the office where I work. Some slick salespeople sold the building two cutting-edge, top-of-the-line automated checkout machines. Each machine has a camera that stares at a designated checkout square. People simply select the items they wish to purchase and place them in the designated area. The camera recognizes the items, registers the purchases, and the person then swipes their card to complete the purchase. However, the camera sometimes does not recognize an item, and when that happens there is no other way to buy it. I leave my snack or drink by the incredibly expensive and completely useless machine. Betrayed by technology and by the salespeople who sold the devices to facilities management, I walk back to my desk in anger and disgust.
It’s a simple story, but an increasingly common one: we come to rely on technology, and when it fails, we simply hit a wall. It’s not clear to me what advantage the camera offers over a barcode scanner (which is used elsewhere in the same cafeteria for self-checkout). Stories like this will only become more common as more people rely on smart homes, smart fridges, smart dishwashers, smart alarm clocks, and so on. The “smartness” behind each of these is rudimentary AI: recognizing patterns and sometimes making simple predictions. The hope is that the technology will understand its role and take a more proactive approach to helping humans.
However, the technology doesn’t understand its role, and it really doesn’t care about helping humans. When AI encounters an error, it doesn’t go into “customer service mode” and try to help the human achieve their goal. It doesn’t try to resolve the problem or work around it. It just reports that there was an error. If a retail employee did this, it would be the equivalent of saying “I can’t ring up this item” and then walking off to the break room. Most people wouldn’t return to a store with that level of customer service. People born before 1965 would probably even complain to the manager or the local community newspaper.
These problems can be resolved, but the fixes are rarely designed into the technology at release. I’ve hit this problem with the checkout machines at work about seven times over seven months (I don’t even try to use them more than about once a week), and I am aware of no effort to improve the situation. Because the designers probably never use the machines themselves, there’s a good chance no one in a position to fix the problem even knows it exists.
More Dangerous Places to Put AI: Cars and Financial Markets
The fundamental problems with AI are annoying and disappointing when they deny us snacks or try to sell us shoes we already bought. But these problems are amplified from “annoying” to “tragic” and from “disappointing” to “catastrophic” when they manifest in vehicles and financial markets. If our AI checkout machine doesn’t care whether people can purchase food, what else are we failing to get AI to care about in other applications?
AI is the newest technology, which means it is subject to all the failure modes of earlier technology (power outages, code errors, hardware breakdowns) as well as new failure modes of its own (AI-specific problems that sometimes actively resist resolution).
None of this is anti-technology. On the contrary, I think AI is a fantastic development that should be used in many applications. But that doesn’t make it a great (or even acceptable) tool for every application. A warning that hammers should not be used to put screws through windows is not a diatribe against hammers, screws, or windows. It’s just a caution that those things may not mix in a way that yields optimal results.