Bots may send your liability risk soaring
Source: Evan Schuman


Artificial intelligence bots are all the rage these days, as companies try to figure out the best ways they can be used. But using them to interact directly with customers raises some interesting questions about legal liability.

What happens when a wrong answer causes financial harm to a customer? Does it make a difference if the answer was delivered by a human call center representative or an automated bot? In most cases, it absolutely will.

Consider a typical financial institution: a bank. It uses a bot to answer the most commonly asked retirement fund questions, but someone programmed a wrong answer into the system. Let’s assume the error causes a customer to miss a key deadline, costing that customer a substantial amount of money in lost opportunity. If the matter goes to litigation and a judge or jury is deciding an appropriate resolution, will they view it differently than if a human associate had given that wrong answer?
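
To make the failure mode concrete, here is a minimal sketch of how a wrong answer can get baked into a FAQ-style bot and then repeated verbatim to every customer who asks. The RETIREMENT_FAQ table, the 90-day figure and the lookup logic are hypothetical illustrations, not any real bank's code.

    # Hypothetical sketch: a keyword-matched FAQ bot whose answers are
    # hardcoded at build time. One wrong entry is repeated verbatim to
    # every customer -- unlike a human rep, who errs one call at a time.

    RETIREMENT_FAQ = {
        # Suppose the correct rollover window is 60 days, but this
        # entry was written, reviewed and approved saying 90. Every
        # customer who relies on it can miss the real deadline.
        "rollover deadline": "You have 90 days to complete an IRA rollover.",
        "contribution limit": "Check the current IRS limits for your plan type.",
    }

    def answer(question: str) -> str:
        """Return the canned answer whose key phrase appears in the question."""
        q = question.lower()
        for key_phrase, canned_answer in RETIREMENT_FAQ.items():
            if key_phrase in q:
                return canned_answer
        return "I don't know -- let me connect you with a representative."

    if __name__ == "__main__":
        print(answer("What is the rollover deadline for my IRA?"))

The point of the sketch is the uniformity: the human's mistake is a one-off, while the bot's mistake is systematic, delivered identically to everyone until someone notices.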

Let’s say that the human associate is a 22-year-old with just one week on the job. A jury might decide that her error was deserving of some leeway. The same jury might take a completely different view if the error resulted from code that was written, reviewed and approved at multiple levels — including two people in the Legal department — over several months.

There are parallels between this and laws dealing with host liability. If you host a big party, your ultimate liability should an intoxicated guest cause some harm could depend on whether you have an amateur serving drinks or a professional bartender. On the one hand, the bartender is likely far better at noticing the signs of intoxication and should understand his duty to cut the partygoer off. On the other hand, a jury is likely to hold a professional bartender to a much higher standard than, say, your Uncle Phil.

Just to be clear, your A.I. bot is the professional bartender, and the 22-year-old new hire is your Uncle Phil. With the bot, you have a far better shot at controlling exactly what answers are given to customers’ questions, but if that more easily controlled method does glitch somehow, your liability is likely to be far higher.

Michael Stelly recently retired after years as the lead mobile developer for Transamerica Retirement, which had $202 billion of insurance policies in force as of Dec. 31, 2015. While noting that the year of law school he suffered through left him knowing just “enough to be dangerous,” he argues that many companies using bots simply ignore the liability differences the bots can create.

Yes, a bot has to go through many layers of approval before uttering a single vocalization. “It has to be fully vetted by Legal before [becoming available through] Apple and Google. Financial institutions employ a legion of lawyers to eliminate any kind of fiduciary circumstance,” Stelly said. “There are any number of stopgaps where it can be shot down.”

Compare that with a human who undergoes a short training class, perhaps paying it less than undivided attention.

But a bot is software, and there’s no such thing as perfect software. “Every single program has bugs. There’s no way that you can do stress-testing on that program in a controlled environment,” Stelly said, adding that code must be released to a large number of customers for meaningful testing. “You’ve got to let it out into the wild.”
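
In practice, letting code “out into the wild” is usually done in stages rather than all at once. Below is a minimal sketch of a percentage-based (canary) rollout, assuming a deterministic hash over customer IDs; the function name in_rollout, the feature label and the 5% threshold are illustrative, not taken from any particular platform.

    import hashlib

    def in_rollout(user_id: str, feature: str, percent: float) -> bool:
        """Deterministically assign a user to a feature's rollout bucket.

        Hashing (feature, user_id) gives each user a stable bucket in
        [0, 100), so the same customer always sees the same behavior
        while the feature reaches only `percent` of the population.
        """
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
        return bucket < percent

    # Expose the bot's answer path to 5% of customers first, watch the
    # error reports, then ratchet the percentage up release by release.
    use_bot = in_rollout("cust-48213", "ai-bot-answers", percent=5.0)
    print("route to bot" if use_bot else "route to human call center")

Because the bucket comes from a hash rather than a random draw, each customer gets a consistent experience, and the blast radius of any bot bug stays capped at the rollout percentage.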

Companies can try to limit liability with a terms-and-conditions disclaimer, but it’s not clear how much legal weight those really carry, given how ubiquitous they have become. If everything is stamped with “we’re not responsible for anything we do that harms you,” the disclaimer pretty much loses its meaning.

Companies considering using A.I. bots should realize that some jurors or judges may think them careless in relying on a new and unproven technology instead of trained personnel who can theoretically think independently.

And of course, the question of liability may come down to whether a judge or a jury decides the matter. A judge, Stelly said, is more likely to opt for strict liability, seeing both the bot and the person as authorized representatives of the company and, therefore, subject to identical liability.

A jury may see the situation very differently. “You can’t get past the emotional aspect of a jury,” Stelly said. “That all comes down to perceptions.”

The issue ultimately comes down to control. If everything goes right — something that is far from certain with software — corporate has almost complete control of every bot utterance. Nonetheless, code can glitch, allowing for bot errors that few humans would make. On the other hand, emotional humans can make errors that no programmed bot would make.

There is little question that bots can save money on the front end. The question is whether they will end up costing more money on the back end.

